The Questionnaire for Experts “Malicious Use of Artificial Intelligence and Challenges to Psychological Security”
This questionnaire is a part of the research project “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia” funded by the Russian Foundation for Basic Research and the Vietnam Academy of Social Sciences, project number 21-514-92001.
The analysis of the survey results, which does not claim comprehensiveness and takes into account the diversity of experts’ points of view, focuses on some of the experts’ most important conclusions. For the reader’s consideration, the analysis presents both the general and prevailing conclusions and the assessments of a minority of experts on a particular issue, with brief comments and a conclusion by the author.
Preparation of a questionnaire for the survey of experts. A questionnaire with thirteen questions was prepared, the first six of which were designed to find out the opinion of experts on the global issues of MUAI and IPS. The seventh question was designed to find out whether any measures (political, legal, technical, or other) are being taken in the expert’s homeland to overcome the threats to PS caused by MUAI, and what these measures are. The remaining questions were devoted to clarifying the threats of MUAI against IPS in NEA.
Four questions in the questionnaire are closed-ended and nine are open-ended. Open-ended questions seem to be especially necessary when researching a new problem, where research approaches and terminology have not been established and it is natural to expect a significant difference in initial responses. In the case of the present survey, the open-ended questions were designed to find out which threats to IPS by means of MUAI are the most relevant for the modern world, which counteraction measures exist, and the relevance of these threats to the country represented by the expert, as well as to the NEA countries. The closed-ended questions were designed to find out how the threats of MUAI in the field of IPS are relevant today and how they may be relevant in 2030. These questions required a choice from several options.
Experts were asked to answer the maximum number of questions at their discretion. They were told that their answers to the open-ended questions (which assume detailed responses) should not exceed 800 words in total. This word limit allows the experts to highlight the main aspects of the problem under study, but it may reduce the justification given for an answer and the description of factors, causes, and consequences of a phenomenon that are less significant for the expert but objectively important for forming a holistic view of the various aspects of the survey topic. When analyzing the results, it is necessary to take into account whether each expert had enough time to answer the questions, as well as the experts’ differing levels of competence in the themes of particular questions. Thus, each expert was given the opportunity to determine how much to write for each question and, if they were unsure of their opinion, not to give an answer.
Formation of a knowledge base of experts on the issues of the survey and related fields of science and practice
- Preliminary selection. In organizing the preliminary selection of experts, the author of the study proceeded from the need to attract specialists from countries with different levels of socioeconomic and technological development in order to obtain a sufficiently broad cross-section of assessments of the problem. Experts from both the leading countries in the field of AI technology and countries that are mainly AI consumers were invited.
When choosing specific experts, the author considered it necessary to involve, first of all, researchers in the field of political sciences (including political psychology and political communication), as well as the fields of international security and law. This is due to the task of identifying the threats of MUAI to IPS and the resulting threats to political stability and the security of society as a whole. The author took into account the already-demonstrated research interest of experts in the role of AI in social development and, in particular, in the political aspects of MUAI and PS, as well as academic publications by experts in the subject area of the questionnaire and/or research in adjacent areas. As much as possible, the author took the experts’ experiences of practical consulting in the field of AI into account. Because there are plans to invite researchers from NEA countries to participate in a separate survey on the issues of MUAI, no emphasis was placed on attracting experts from the region for this survey.
- Experts who filled out the questionnaire. Of the nineteen experts who filled out the questionnaire, at least nine have peer-reviewed publications in the field of MUAI and PS. Most of the publications of the other experts are in related fields (the historical and political aspects of cybersecurity, AI regulation, the spread of AI in the modern world, etc.).
All experts except one hold PhDs; three hold the title of professor. At least nine experts have performed the role of consultant for legislative and executive authorities, reputable international organizations including relevant UN bodies, and/or startups. Four experts are specialists in technical science in the field of AI, and are interested in the sociopolitical aspects and consequences of MUAI. This makes it possible to correlate the analytical assessments of specialists in the fields of technical and social sciences. At least a third of the experts have publications in or adjacent to the subject area of this survey based on their research on NEA as a whole or specific countries in that region. Prior to the invitation to participate in the survey, the author was personally acquainted with and had cooperated academically with seven experts from the nineteen who filled out the questionnaire (and with 21 of the 58 experts who were invited to participate in the survey). Taking into account the still-narrow circle of specialists on the problem under consideration, it was possible to obtain a high level of expert agreement to fill out the questionnaire (19/58≈33%).
Sending questions to experts and the response received. Extramural forms of work with experts, as is known, make it possible to disregard geographical boundaries during an expert survey and reduce the risk of the experts influencing one another, but they also slow the work of expert groups. In this case, the survey was launched in June 2021 and completed in October 2021. The survey materials were published in November 2021. To obtain a certain representativeness of the assessments, the questionnaire was sent to 58 specialists from fourteen countries. Experts from Belarus (1), Belgium (1), Cuba (1), France (1), Poland (1), Romania (1), Russia (7), the United Kingdom (1 questionnaire, presented jointly by two researchers), the USA (1), and Vietnam (3) responded positively and completed the questionnaire.
The main results obtained
The answers to the first question (made up of two interrelated questions) — “What threats to psychological security caused by the malicious use of artificial intelligence do you consider the most relevant for the modern world? Why?” — revealed a wide panorama of capabilities possessed by MUAI to threaten the PS of individuals, groups, nations, and the whole of humankind. The experts also recognized (directly or indirectly) the presence of antisocial actors who are ready to have such an impact. None of the experts blamed MUAI on AI as such.
Unfortunately, in society, including in the leading countries in the development of AI, there are major yet completely unfounded concerns about AI. These concerns may give rise to a movement of new Luddites (for example, in connection with the further rapid growth of automation and of the quantitative and qualitative capabilities of AI). Remedying this requires not just proactive explanatory work on the part of the scientific community, but also adequate strategic communication (the synchronization of the deeds, words, and images of the state in important and long-term areas of social development, as well as their perception by various target audiences).
There is no fundamental difference between the approaches of experts in the social and technical sciences to defining the MUAI threat. The experts’ responses list both technologies of psychological influence utilized through MUAI and areas of antisocial activity where MUAI is or may be a tool directed against the PS of society (the activities of criminal organizations, including terrorists; targeted malicious influence on the results of elections and referendums; etc.). Although their lists of specific MUAI technologies did not fully overlap, most experts clearly indicated the generally manipulative and complex nature of such activities.
On this basis, it is possible to talk about the multi-variance of tasks, methods, fields of application, territorial coverage, and social conditions, as well as MUAI actors. The multi-variance of MUAI, however, is not limited to this list and requires an adequate systemic public response.
Answers to the closed-ended second question — “How much does the malicious use of artificial intelligence increase the level of threat to international psychological security today?” — showed that most experts (n = 10, or ≈53%) answered “noticeably,” five (≈26%) answered “strongly,” and four (≈21%) answered “only slightly.” Thus, most of the experts note a significant or strong influence of MUAI on the growth of IPS threats today. Notably, none of the experts denied such influence; the question concerns only its extent. Here, experts differed in their assessments, probably in part because of the limited statistical base on MUAI in the field of PS.
For the closed-ended third question about the situation in 2030 — “How much will the malicious use of artificial intelligence increase the level of threat to international psychological security by 2030?” — the assessments change, compared to the answers to the second question, leaning toward the deterioration of the situation: ten (≈53%) experts answered that MUAI will “strongly” raise threats to IPS and nine (≈47%) answered “noticeably.” No one pointed to an insignificant level of such a threat (“only slightly”), let alone its absence. This implies the need to take preventive measures against a negative scenario. But, as indicated in the introduction to this review of expert responses, the worst-case scenario follows not from the prospect of the further development of AI technologies (which open up wide opportunities for social progress for humanity), but from the high probability of deepening the crises of modern society and strengthening the role of antisocial actors. The latter naturally leads to increased MUAI risks, including in the sphere of IPS.
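As a quick arithmetic check (not part of the original survey analysis), the reported shares for the two closed-ended questions above can be recomputed from the raw counts. A minimal sketch in Python, assuming the counts given in the text:

```python
# Recomputing the percentage shares reported for the two closed-ended
# questions above (n = 19 experts; counts taken from the survey results).
def shares(counts: dict) -> dict:
    """Convert answer counts into whole-number percentage shares."""
    n = sum(counts.values())
    return {answer: round(100 * c / n) for answer, c in counts.items()}

q2 = {"noticeably": 10, "strongly": 5, "only slightly": 4}  # threat today
q3 = {"strongly": 10, "noticeably": 9}                      # threat by 2030

print(shares(q2))  # {'noticeably': 53, 'strongly': 26, 'only slightly': 21}
print(shares(q3))  # {'strongly': 53, 'noticeably': 47}
```

The recomputed values match the approximate percentages cited in the text, with all nineteen respondents accounted for in each question.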
The fourth question is “What measures (political, legal, technical or other) do you consider to be important to neutralize the threat to international psychological security caused by the malicious use of artificial intelligence?” The answers to this question generated a large number of specific (and often interrelated) proposals to prevent and neutralize MUAI and minimize its negative consequences. This seems to be an extremely important result of the survey because a successful response to a complex threat implies complex solutions. In several responses, experts placed justifiable emphasis on the need for people to be educated in order to successfully resist MUAI. Nevertheless, some experts drew attention to the fact that there are objective social and political disagreements that make it difficult to make coordinated decisions at the level of the state authorities of individual countries or within the framework of interstate cooperation.
A clear minority of experts drew attention to the extreme importance of AI itself in neutralizing MUAI. The fact is that the technical methods of countering MUAI are best known to a narrow circle of specialists with the relevant profiles (tellingly, only four technical specialists took part in this survey). Meanwhile, humanitarian experts naturally focus on the political and legal aspects of countering MUAI, in which they are competent. This once again confirms the need for an interdisciplinary approach to assessing MUAI threats. Conceivably, as the practice of effectively using AI to neutralize MUAI grows, it will become better known and understood, albeit in a simplified form, by humanitarian scholars. What remained outside the scope of discussion when answering this question were approaches based on the possibility of the successful, socially oriented development of AI and the prevention of MUAI as a result of the progressive transformation of the system of social relations, the fuller integration of AI with humans, and other revolutionary solutions that can, under certain conditions, become breakthroughs for human progress.
The fifth question is “How important is international co-operation in successfully countering the malicious use of artificial intelligence? On what international platforms (and why) is this cooperation the most effective? What are the existing obstacles to this co-operation?” The majority of experts support the idea of such cooperation, although they noted serious difficulties in its implementation and insufficient effectiveness; in other words, their responses reflected the structure “International cooperation is very important, but…”. The two points of view that differed from this come from more pessimistic assessments. The first is expressed by Matthew Crosston, who wrote, “International cooperation is almost irrelevant in countering MUAI, as it operates at a sub-level far below where international laws, sanctions, and countermeasures could successfully operate.” The second point of view is reflected in the very close positions of the two experts Marius Vacarelu and Pierre-Emmanuel Thomann: “…International cooperation will exist only between countries that do not compete for the same territories, resources, or geo-political positions” (Vacarelu) and “Ad hoc coalitions might be more successful than large international organizations” (Thomann).
The difference between the approaches is sometimes conditional, which was quite clearly demonstrated by the approach of an expert from Belgium: “While political cooperation on the matter is unavoidable in order to take effective countermeasures, … unfortunately, such initiatives are blocked by geopolitical tensions and political interests.”
Because MUAI today develops mainly in the arena of the Internet, in the absence of effective international cooperation its further quantitative and qualitative growth could be suspended only by leaving the single global virtual space. A country choosing to retreat from the Internet would broadly protect its population from external psychological attacks, but it would greatly limit its residents in terms of free communication and the use of the achievements of modern civilization. It would be even worse if, due to the increase in global conflicts, the Internet as a unified information space ceased to exist. This would not be the best basis for the development of mutual understanding and interaction between peoples. The losses for each nation would be enormous. Understanding this is what can stop states from taking extreme measures. However, if the threats from MUAI in the psychological field continue to grow and not only connect with traditional Internet propaganda (which is already happening), but, worse, synchronize with larger and more dangerous cyber-attacks against critical infrastructure, the position of the public and nation-states regarding the Internet may undergo radical changes.
The sixth question is “Which of the threats to international psychological security caused by the malicious use of artificial intelligence do you consider the most relevant for your country?” On this topic, experts are united in their view that psychological manipulation is the main threat. They highlighted different spheres and forms of this manipulation using AI. Some experts focused on specific MUAI technologies that allow (or will allow) for the successful manipulation of people. Another focus was on the qualitative characteristics of the public environment that make (or can make) MUAI effective and successful. A third focus was on specific areas where the threat is greatest (for example, national security or the use of AI to disrupt administrative management) or on possible MUAI actors (for example, political parties). All of the above points—and many more—are important, indicating that the issue undoubtedly requires an interdisciplinary approach and international cooperation, even if it is limited.
The concentration of society on socially oriented tasks of sustainable development seems to be a means of limiting (an elusive but realistic goal), if not of excluding (a utopian task for the foreseeable future), the activity and very existence of antisocial actors in general and in the field of AI in particular. It is hardly possible to systematically and effectively restrict the activities of antisocial actors exclusively in the field of AI usage without limiting them in the basic economic, social, and political spheres. Antisocial actors increasingly recognize the growing capabilities of AI in financial, political, and military terms. Hopes that they will voluntarily give up an important tool for strengthening their positions in society are largely unfounded. Similar illusions that the Internet and, later, social networks would become exceptionally positive phenomena of technical and social progress existed at the beginning of the 21st century, but this did not turn out to be the case. The positive role of AI in the transformation of society is an important aspect of socially oriented strategic communication. Conversely, negative scenarios of social and geopolitical development can lead antisocial actors to trigger potentially disastrous MUAI-led outcomes.
The seventh question is “Are any measures (political, legal, technical or other) being taken in your country to overcome threats to psychological security caused by the malicious use of artificial intelligence? What are these measures?” The approaches mentioned by the experts vary from a denial of the adoption of such measures to an enumeration of specific political, legal, social, and technical decisions made in a particular country. This indicates great differences both in the states’ approaches to the adoption of such measures and in the experts’ assessments of their breadth and effectiveness. It is crucial to consider the following three circumstances: the insufficient theoretical elaboration on the MUAI and IPS problem, the different degrees of application of various AI technologies for malicious purposes, and the objective limitations of publicly available data. The large variation in estimates may be due, among other factors, to the greater or lesser restrictions on the dissemination of such information in different countries.
The eighth question is “Which of the threats to international psychological security caused by the malicious use of artificial intelligence do you consider the most relevant for Northeast Asia?” There is also a large variation in estimates for this question. Some experts based their responses on the fact that threats to IPS are, by and large, universal in nature, whereas others paid attention to the severity of internal political conflicts, interstate disagreements in the region, and the clash of geopolitical interests in NEA, a factor that stimulates and facilitates MUAI. These two approaches do not contradict but complement each other: today, the region uses essentially the same MUAI technologies as the modern world as a whole. However, the intensity of MUAI may increase, both due to interstate conflicts and due to the greater prevalence of certain technologies in the region. For example, the abuse of video games, in which AI is actively used, leads to an increase in unhealthy attachment to them, and, according to some sources, half of porn deepfakes originate from South Korea. Moreover, if AI continues to develop rapidly, and if conflicts increase, China and, to a lesser extent, Japan and South Korea, which aim to become world leaders in AI by 2030, could become testing grounds for local “innovative developments” of MUAI, which would then penetrate other countries.
Answers to the closed-ended ninth question — “How much does the malicious use of artificial intelligence increase the level of threat to international psychological security in Northeast Asia today?” — showed that most experts (n = 12, or ≈63%) answered “noticeably,” two (≈10%) answered “strongly,” three (≈16%) answered “only slightly,” and two (≈10%) did not give an answer. None of the experts denied such influence; the question concerns only its extent.
Regarding the closed-ended tenth question about the situation in 2030 — “How much will the malicious use of artificial intelligence increase the level of threat to international psychological security in Northeast Asia by 2030?” — the estimates change in comparison with the assessments of the current situation in the answers to the ninth question, tending towards the deterioration of the situation: six (≈32%) experts believe that MUAI will “strongly” increase threats to IPS, nine (≈47%) believe it will do so “noticeably,” and two (≈10%) noted that “the answer … will depend on how governments react to the trials. If they decide to go ahead with the widespread adoption of this technology, the answer would be ‘significantly.’” None of the experts pointed to an insignificant level of such a threat, let alone its absence. Two experts (≈10%) did not answer.
The answers to the eleventh question (made up of two interrelated questions)—“In which countries of Northeast Asia (no more than three) have the threats to international psychological security caused by the malicious use of artificial intelligence reached the highest level? Why?”—produced the following results. China is the clear leader: six experts placed it first in terms of threat level, and two experts placed it second. Japan was ranked first by one expert, second by four experts, and third by three experts. South Korea was ranked first by one expert, second by four, and third by two. Matthew Crosston went beyond the proposed classification by giving a detailed answer of interest: “China (domestically), North Korea (domestically), Japan/South Korea (internationally).” When ranking NEA countries by the level of threat to IPS, the experts took into account, among other factors, the level of economic development, the development and implementation of AI technologies, the severity of the MUAI problem under different political systems, and related internal and external conflicts.
The twelfth question is “How well is the public in Northeast Asia aware of the threats to international psychological security caused by the malicious use of artificial intelligence?” The most common answer (expressed by seven experts) was that the public in the region is not well aware (“not aware enough of MUAI threats,” “not very well aware,” “the public … is not well aware,” etc.). Four responses provide a positive assessment of public awareness, although the perceived degree of such awareness sometimes differs by country.
The thirteenth question is “How do you assess the degree of readiness of state bodies of the countries of Northeast Asia to counter threats to international psychological security caused by the malicious use of artificial intelligence?” The experts’ votes were divided approximately equally. A number of experts determined the readiness of government agencies to counter the threats of IPS through MUAI as being of very different degrees, depending on the level of technological development, the degree of openness/closeness of the political system, and other factors.
It was only possible to attract experts from Cuba, European countries, Russia, the USA, and Vietnam to the study, which does not allow for an accurate idea of MUAI and PS in other regions and, to a certain extent, reduces the accuracy of the assessment of the problem in the context of the modern world.
The circle of experts on the integrated assessment of the MUAI and IPS situation is still being formed, as this problem has become acute only in recent years. This happened both because of the rapid development of several AI technologies and MUAI practices, and because of the global crisis covering all major spheres of public life, in particular the sphere of international relations, where acute psychological warfare is taking place as AI usage grows.
The questions allow some of the most important aspects of the threat of MUAI to IPS to be revealed, but they are limited in number, which precludes a comprehensive analysis of the problem. Such an analysis would require not only more detailed questionnaires and a wider geographical range of the experts involved, but also, above all, the combination of expert surveys with other methods of academic research.
There are many opportunities for the development of the methodology of this study, including methods based on the use of AI in expert surveys.
This project demonstrates limited but real opportunities for international cooperation in a new, very important, and yet extremely problematic area of interdisciplinary research that is taking its first steps: MUAI and IPS. The high degree of readiness of specialists to take part in the survey and, in general, the comprehensiveness and professional competence of the answers received are highly encouraging. During the survey, experts expressed coinciding, significantly different, and even mutually exclusive points of view, which is understandable given the novelty and particularity of the issues being discussed.
The political science aspect of the problem is especially important in the context of the aggravation of the psychological warfare between state and non-state actors against the backdrop of acute economic and sociopolitical conflicts in the modern world.
The consideration of the regional aspect of the MUAI and IPS problem using the example of NEA provided a valuable cross-section of data on the nature and dynamics of the formation of a new type of threat. The leading NEA countries, due to their high level of development and application of AI, face serious problems in some areas of MUAI (malicious use of deepfakes, emerging negative aspects of computer gaming with the rising use of AI, etc.). Their analysis can be useful and applicable to developing countermeasures not only in the countries in the region, but also far beyond its borders.
Overcoming the global crisis requires a clear understanding of its causes, driving forces, and consequences, as well as a broad, public discussion about ways to overcome it. We need to act strategically, and, for this, we need clarity of strategic thought.
People can be disoriented by a combination of skillful propaganda in conditions of information hunger, the prohibition of alternative information channels, and open violence. Fascism and many other overt forms of dictatorship were based on these components. However, under an open dictatorship, people feel encouraged to search for a social alternative and to fight for it despite the threat of repression and death. The historical doom of such dictatorships has been proven in practice, but this does not prevent dangerous relapses into the dark pages of history in a new environment. An implicit dictatorship, hidden from public consciousness, is more dangerous: under it, the skillful manipulation of a multitude of half-truths places people in a kingdom of crooked mirrors where they are, in fact, deprived of choice. Yet, although it is not easy, there is a chance to find a way out of the labyrinth of lies of traditional forms of propaganda.
The growing MUAI by antisocial actors poses a serious threat at the international level, further narrowing people’s ability to understand the current situation at a moment when such understanding is extremely necessary for them and for all future generations. Today, we are closer than ever to the end of human history. Further progress in AI technologies and MUAI, and its large-scale use as a psychological weapon, may lead, at the least, to an even greater delay in an adequate public response to the dangerous behaviors of antisocial actors. On a global scale, this will facilitate the formation of conditions for various kinds of man-made and social disasters, including a Third World War. With a qualitatively perfected MUAI, matrices of thinking and behavior hostile to people may, in the near future, become practically insurmountable at the level of public consciousness and political practice. It may become an important element in the formation of techno-fascism, with the subsequent, almost conflict-free liquidation of the population through continued automation, the robotization of production processes, and the widespread incorporation of AI in the interests not of society but of a narrow oligarchic elite.
MUAI threats to IPS should be considered at three levels.
At the first level, MUAI threats are associated with a deliberately distorted interpretation of the circumstances and consequences of AI development for the benefit of antisocial groups, and the spread of the false negative image of AI can slow its incorporation and cause sociopolitical tensions. At the same time, the deliberately inflated expectations for the use of AI that are transmitted to society through various channels are no less dangerous: they, for example, can be effectively used to disorient the general public, interested commercial and non-profit structures, and public authorities, and, ultimately, can also turn into disappointments, wrong decisions, and social and political conflicts.
Where MUAI is aimed primarily not at managing target audiences in the psychological sphere but at committing other malicious actions (for example, destroying critical infrastructure), we can talk about the second level of the effect of MUAI on IPS. Such attacks can have a great psychological effect due to the damage caused.
MUAI designed primarily to cause psychological damage belongs to the third and highest level of threat to IPS. The use of AI in psychological warfare already makes covert perception management campaigns more dangerous. Examples include AI phishing and the use of deepfakes and smart bots in information campaigns for various purposes, such as marring the reputation of an opponent, be it a person, an organization, or even a country. At some point, this can allow aggressive actors to control the public consciousness and eventually lead to the destabilization of the international situation.
PS threats posed by MUAI can exist in both pure forms (for example, the misinformation of citizens about the nature of AI without its malicious use) and combined forms. For example, overestimating the effectiveness of current AI technologies, forming expectations of certain highly favorable results of their implementation or any products based on them would be a first-level attack with a communicative effect (for example, a speculative boom in the stock market). However, if the perpetrators were to accompany their actions with physical attacks on critical infrastructure or people and a widespread, malicious psychological campaign using different AI tools, the threat would become a combined attack.
Manipulation of broad segments of the population in targeted perception management campaigns is particularly dangerous, as many experts pointed out in their answers with different nuances in their wording.
Given the extreme tensions of today’s world, it seems that attention should be paid to the first level of threat to IPS through MUAI in combination with subsequent levels because this is where very disturbing phenomena and trends are observed. This may become the subject of a future survey and the subject of a broad discussion. In the present publication, the author pays attention to only some of these phenomena and trends.
The largest high-technology companies actively use AI in line with their narrow corporate interests, which rather often run counter to the interests of society. It is clear that companies with access to the large amounts of data needed to power AI models are leading AI development. Key players include GAFAM—Google (Alphabet), Apple, Facebook (Meta), Amazon, and Microsoft—also known as the Big Five, the five largest, most dominant, and most prestigious companies in the United States information technology industry; BAT—Baidu, Alibaba, and Tencent—the leading internet and software companies in China; early mover IBM; and hardware giants Intel and NVIDIA (Lee, 2021).
It is hardly accidental that among the individuals with the ten largest fortunes in the world, six represent Amazon (1), Microsoft (2), Google (2), and Facebook (1) (Forbes, 2021). $7.5 trillion: that was the combined market capitalization of GAFAM at the end of 2020, according to an analysis by the Wall Street Journal. At the end of 2019, these firms’ combined market capitalization was $4.9 trillion, which means they increased in value by 52% in a single year. As of November 12, 2021, the capitalization of these companies had grown by another $2.5 trillion, reaching approximately $10 trillion (Statista, 2021a). That is nearly a quarter of the combined $41.8 trillion market capitalization of all companies in the S&P 500 (La Monica, 2021). It is appropriate to recall that the United States’ nominal GDP in 2020 was around $21 trillion; Japan, the world’s third-largest economy, had a GDP of about $5 trillion, and Russia’s was only about $1.5 trillion.
However, the consolidation of these individuals at the top of the wealth rankings has coincided with a degradation of the reputational capital of most of their associated companies.
The 2021 Edelman Trust Barometer, an annual survey conducted by the global public relations firm Edelman for more than two decades, shows this clearly. Although technology has long been the most trusted industry sector, trust has plummeted more than in any other sector over the past ten years. In 2012, 77% of survey respondents expressed trust in tech companies to “do what is right.” The 2021 research shows that that percentage has dropped to 68%. This percentage decline is three times that of any other industry in the study (Shabir, 2021). Three of the Big Five companies—Google, Amazon, and Microsoft—have dropped in rank year after year in the Global RepTrak 100 rankings. A fourth, Facebook, did not appear in the rankings in 2020–2021. The fifth—Apple—managed a decent improvement. Apple’s gain was overshadowed, however, by Amazon plummeting by 50 places, from 42nd in 2020 to 92nd in 2021 (Abdulla, 2021).
Four Big Five CEOs testified before the U.S. House Antitrust, Commercial, and Administrative Law Subcommittee in an antitrust hearing on July 29, 2020. Amazon founder and CEO Jeff Bezos, Facebook founder and CEO Mark Zuckerberg, Apple CEO Tim Cook, and Alphabet and Google CEO Sundar Pichai defended their companies against accusations of anticompetitive practices (Rev, 2021). Former Facebook product manager Frances Haugen testified before the US Senate on October 5, 2021, that the company’s social media platforms “harm children, stoke division and weaken our democracy” (Menczer, 2021), and that Facebook did not use AI technologies ethically. “Right now, Facebook is closing the door on us being able to act. We have a slight window of time to regain people control over AI” (Browne & Shead, 2021). In November 2021, a new bipartisan Senate bill aimed at restricting tech companies’ “anticompetitive” acquisitions was introduced by Senators Amy Klobuchar (D-MN) and Tom Cotton (R-AR); it would greatly limit the ability of Big Five companies to acquire other tech companies.
US tech giants GAFAM have been accused in the EU of not paying enough taxes, stifling competition, stealing media content, and threatening democracy by spreading fake news. In November 2021, an EU court rejected a Google appeal against a 2.4-billion-euro (2.8-billion-dollar) antitrust fine. Amazon was fined 746 million euros in July 2021 by Luxembourg authorities for flouting the EU’s data protection rules. France has also fined Google and Amazon a total of 135 million euros for breaking rules on computer cookies. The European Parliament and member states agreed to force platforms to remove terrorist content, and to do so within one hour. EU rules now also forbid the use of algorithms to spread false information and hate speech, which some major platforms are suspected of doing to, among other things, increase advertising revenue (AFP, 2021).
The Chinese government strengthened control over the country’s technology companies in 2021. More than one trillion dollars was wiped off the collective market capitalization of some of China’s largest Internet groups, such as Tencent, a gaming and social media giant, and Alibaba, China’s e-commerce powerhouse (He, 2021). The Cyberspace Administration of China (CAC) reported on March 18, 2021, that representatives of the CAC and the Ministry of State Security had met with employees of Alibaba Group, Tencent, ByteDance, and other information technology companies to discuss potential problems with deepfakes. China’s state control authorities have instructed local CAC offices and state security agencies to strengthen the security assessment of voice software and deepfakes, taking into account the “Law on Network Security,” the “Regulations on the Assessment of the Security of Information Services on the Internet” (中共中央网络安全和信息化委员会办公室 (Cyberspace Administration of China), 2021), and other laws and regulations. In August 2021, the CAC announced draft regulations for Internet recommendation algorithms, seeking to halt algorithms that encourage users to spend large amounts of money or to spend money in ways that “may disrupt public order” (Frater, 2021). In September 2021, regulators told gaming companies, including Tencent, that they should stop focusing on profits and instead concentrate on reducing adolescents’ gaming addiction. The short-video industry, dominated by companies such as ByteDance, Kuaishou, and Bilibili, may receive similar treatment (中共中央网络安全和信息化委员会办公室 (Cyberspace Administration of China), 2021).
In November 2021, Russia’s state communications regulator Roskomnadzor demanded that thirteen foreign (mostly U.S.-based) technology companies be officially represented on Russian soil by the end of 2021 or face possible restrictions or outright bans. In 2021, Russia fined the foreign social media giants Google, Facebook, Twitter, and TikTok and the messaging app Telegram for failing to delete content it deems illegal. Apple, which Russia has targeted for alleged abuse of its dominant position in the mobile applications market, was also on the list. Roskomnadzor said firms that violate the legislation could face advertising, data collection, and money transfer restrictions or outright bans (Marrow & Stolyarov, 2021).
The Russian authorities also want to force major foreign streaming services (such as YouTube) to pay local operators for the use of their traffic. This proposal was put forward in 2021 by the Ministry of Digital Development, Communications and Mass Media of the Russian Federation. If accepted, it would serve as a new barrier against foreign Internet giants. The Ministry explained its position by stating that operators are put in a difficult position by the overloading of Russian networks; the authorities could thus force foreign companies to finance, to some extent, the development of communications infrastructure in Russia (Tsargrad, 2021). The Russian authorities have been trying to regulate foreign Internet services as one of their instruments of influence for at least the past six years, but they took decisive action only at the end of 2020, when a number of laws were passed that increased companies’ liability. These include, for example, amendments to the law “On measures to influence persons involved in violations of fundamental human rights and freedoms, the rights and freedoms of citizens of the Russian Federation,” under which Roskomnadzor received the right (by decision of the Prosecutor General’s Office) to slow down the traffic of services not only because of security threats but also if they restrict access to “socially important information.” In addition, a law came into force on February 1, 2021, obliging social networks to identify and block illegal content; so far, it does not provide for sanctions. The authorities first used traffic-slowing technology under the law “on the sovereign Internet” in March 2021, designating Twitter a security threat for providing access to information prohibited in Russia (Shestopyorov & Lebedeva, 2021).
Thus, restrictive measures are being taken in many countries, and, in the context of PS and MUAI, these measures relate to the use of AI technologies, which indicates an unintentional or intentional disregard for the interests of the public on the part of the biggest high-tech companies. Are these measures in different countries sufficient and balanced enough to prevent the negative antisocial phenomena associated with the new technical (primarily AI-based) and financial capabilities of high-tech companies? Any answer to this question would be premature.
Elon Musk’s electric car giant Tesla can rightfully be included in the ranks of Big Tech. It recently passed the $1 trillion mark in market capitalization and has since surged to about $1.25 trillion. The fortune of Musk, the richest man on the planet, reached a record high of $305 billion on November 22, 2021 (Forbes, 2021). (At the end of 2019, it was less than $30 billion.) Musk’s total net worth is now greater than the market value of Exxon Mobil Corporation or Nike Inc. The basis of Tesla’s success is the widespread development and application of AI. As stated on Tesla’s official site, “We develop and deploy autonomy at scale in vehicles, robots and more. We believe that an approach based on advanced AI for vision and planning, supported by efficient use of inference hardware, is the only way to achieve a general solution for full self-driving and beyond” (Tesla, 2021). However, to what extent will AI technologies in Musk’s projects (Tesla, Neuralink, etc.), as well as in numerous Big Tech projects in general, primarily serve society rather than financial elites? To what extent will AI technologies (for example, in medicine) be publicly available? Perhaps the boom of Big Tech based on AI and other technologies will be successful and sustainable, but will it unite rather than destroy humanity if decisions continue to be made within the framework of a modern socioeconomic model without a clear vision of the goals and means of our movement toward a more progressive, dynamic, and socially oriented model? So far, examination of the effect of the coronavirus pandemic on mankind suggests that there is no alternative to the transition to new technologies. That transition, however, is extremely socially unbalanced: it gives a completely disproportionate share of material dividends to an insignificant minority, while the overwhelming majority is mostly limited to promises of a better life in the future.
Furthermore, to what extent is the rapid growth of Big Tech as a whole inflating a huge financial bubble built on exaggerated expectations of highly promising technologies that are extremely important for humanity? A crushing financial and economic crisis could follow in the near future, further enriching the few and ruining hundreds of millions, if not billions, of people around the world. Big tech companies amassed property holdings during the Covid-19 pandemic; they “…are sitting on record piles of cash. They are getting paid next to nothing for holding it, and they are running out of ways to spend it” (Bangkok Post, 2021). For example, Alphabet Inc., Google’s parent company, held $135.9 billion in cash, cash equivalents, and short-term investments as of the second quarter of 2021—more than any other publicly traded company, not counting financial and real-estate firms, according to S&P Global. Alphabet is now one of the biggest real-estate owners in New York City and the U.S.: it held $49.7 billion worth of land and buildings as of 2020, up from $5.2 billion in 2011. Amazon, which owns many warehouses, held $57.3 billion worth of land and buildings—more than any other U.S. public company except Walmart (Bangkok Post, 2021). The European Central Bank warned in November 2021 of bubbles in property and financial markets (The Liberty Beacon, 2021). According to Willem H. Buiter, an adjunct professor of international and public affairs at Columbia University, “the next financial crisis is fast approaching” (Buiter, 2021).
Unfortunately, if this negative crisis scenario comes true, would the rapid deterioration of the international situation not be a natural development, fraught with, if not a world war, then a very large military provocation? Might such a provocation not become the trigger for the collapse of the markets? The “culprit” will, of course, be found wherever necessary, since the world’s main information resources are, dangerously, controlled by and subordinated to the interests of global corporate structures. Further, MUAI already exists on a global scale as a game based on inflated expectations of benefits from the incorporation of AI, a game played through versatile psychological impact on target audiences who are particularly susceptible and vulnerable to perception management in a crisis situation. In whose hands are the most advanced tools of global psychological influence, and whose financial interests are at stake? There is not merely enough objective data to answer this question; there is an abundance. Therefore, the possible scenarios of combined, targeted impact on the public consciousness—not only with the help of specific AI technologies but also through the very perception of AI—for the purpose of speculative enrichment and the destabilization of public order require the most serious attention and comprehensive study by specialists from different countries and with different scientific specializations.
Of course, one cannot blame only high-technology information companies for the antisocial use of AI: one of the main reasons for increased MUAI is the increasingly uncontrolled behavior of large businesses in general, which has only become more obvious during the crisis. Just 1,275 wealthy families paid $9.3 billion in estate tax to the U.S. Treasury in 2020; as recently as 2018, the IRS collected more than $20 billion from nearly 5,500 families. The dramatic decline—to the point where the tax is paid by 0.04% of dying Americans—is largely the result of the tax overhaul enacted by Republicans in 2017, which doubled the amount the wealthy can pass to heirs without triggering the levy (Bloomberg News, 2021). Between 2010 and 2020, the U.S. and its allies accounted for only 5% of worldwide increases in democracy, but a staggering 36% of all backsliding occurred in U.S.-aligned countries. On average, allied countries saw the quality of their democracies decline at nearly double the rate of non-allies, according to V-Dem’s figures (Fisher, 2021). According to a national survey organized in the U.S. by the nonprofit Public Religion Research Institute, nearly one in five (18%) of respondents agreed with the statement: “Because things have gotten so far off track, true American patriots may have to resort to violence in order to save our country” (Dickson, 2021).
In a wide-ranging interview with UN News in September 2021, UN Secretary-General António Guterres called on world leaders to “wake up,” make an immediate course correction at home and abroad, and unite. “The institutions we have, have no teeth. And sometimes, even when they have teeth, like in the case of the Security Council, they have not much appetite to bite,” the UN chief said (UN Affairs, 2021). The same month, the UN secretary general warned that the world is “on the edge of an abyss and moving in the wrong direction” in an urgent and sometimes angry address to the world’s leaders at the UN general assembly. “We are seeing an explosion in seizures of powers by force. Military coups are back,” he said. When democracies fail to deliver on the basic needs of their people, Guterres added, “it provides oxygen for easy fixes, silver solutions and conspiracy theories” (Borger, 2021).
Total global military expenditure rose to $1,981 billion in 2020, an increase of 2.6% in real terms from 2019, according to data published by the Stockholm International Peace Research Institute (SIPRI). The five biggest spenders in 2020, which together accounted for 62% of global military expenditure, were the United States, China, India, Russia, and the United Kingdom. The 2.6% increase in world military spending came in a year when global GDP shrank by 3.3% (Statista, 2021b).
Under the conditions of continuing acute economic problems, the impoverishment of hundreds of millions of people and the rapid concentration of world wealth in the hands of very few, an arms race, and acute geopolitical contradictions, MUAI against IPS, conducted by a variety of antisocial actors, can play an extremely negative and dangerous role. This is why countries, especially those with leading scientific and technical potential, can and should cooperate in order to prevent antisocial actors’ use of information technologies based increasingly on new AI capabilities. On November 3, 2021, without calling for a vote, the United Nations General Assembly First Committee adopted a draft Russian–U.S. resolution on the rules of behavior in cyberspace. The document will be considered by the General Assembly in December (Suciu, 2021). This is a good example of the possibility of such cooperation.
Meanwhile, resisting increasingly successful MUAI through separate, unrelated decisions of a political, legal, and technical kind is clearly insufficient and ineffective in a society where the influence of antisocial actors is growing. Under these conditions, such countermeasures are nothing more than a palliative: at best allowing one to gain time, at worst a cover for the systemic deterioration of the situation. It is important to take into account the following factors:
First, MUAI is qualitatively more dangerous for a sick social organism than for a healthy one. We need a socially oriented transformation of society, part of which will be a complex of systemic and effective political, legal, technical, and organizational solutions to prevent MUAI and minimize its negative impact on the public consciousness. This is not about copying models of the past, but forming a progressive model that meets the realities, risks, and opportunities of the 21st century.
Second, increasing investment in science and education in order to develop the capabilities of the main productive force of modern society—people—is an important response to the threats of MUAI in the broad context of forming a comprehensively developed, responsible citizen of a democratic society rather than a one-dimensional consumer who is convenient to manipulate for selfish purposes. A multidimensional, harmonious, and socially responsible person can protect themselves, their loved ones, and their society more successfully than a one-dimensional consumer. This rule holds for a developed civil society; in a different kind of society the situation changes. In an unhealthy society, a clearly expressed civic position often means increased risks for its bearer, including the threat of physical destruction. We know this not only from history but also from the modern reality of many countries.
Third, well-known estimates indicate that society is becoming more complex and that the volume of incoming information is many times greater than the ability of existing personal, group, and public consciousness to assimilate it and use it adequately in decision-making. This situation increasingly fails to meet the needs of further dynamic, sustainable development. One of the new, specific mechanisms for solving the problem may be augmented intelligence (also referred to as intelligence amplification, cognitive augmentation, or enhanced intelligence): a design pattern for a human-centered partnership model of people and AI working together to enhance cognitive performance, including learning, decision-making, and new experiences (Gartner, 2021). Taking into account the growing possibilities of cyborgization (Pro Robots, 2020), an ever closer (and, in the future, symbiotic) connection between human and machine will increase our capabilities to obtain, process, and verify data, and therefore to resist MUAI.
The author would like to believe that this survey is just a prologue to future joint international research in the field of MUAI and IPS. Such research will not only be designed to solve important scientific problems; its principal practical task will be to help ensure the PS of society. People must have a clear, systemic understanding of the surrounding reality to make conscious choices in their lives. AI is a means to take away this choice in the interests of antisocial actors, but, to a greater extent, it is also a tool for the protection and self-development of the individual and society as a whole. We still have a choice.
The whole text of the study – presented here in three parts – is available at: https://www.academia.edu/62873193/Experts_on_the_Malicious_Use_of_Artificial_Intelligence_and_Challenges_to_International_Psychological_Security
Abdulla, N. (2021). Only One of Big Tech’s Big Five Comes Out Unscathed in RepTrak’s 2021 Global Reputation Rankings. Retrieved 28 November 2021, from https://www.trustsignals.com/blog/big-tech-plummets-in-reptrak-100
AFP. (2021). Europe’s battle to curb Big Tech. Retrieved 28 November 2021, from https://sg.finance.yahoo.com/news/europes-battle-curb-big-tech-040606211.html
Bangkok Post. (2021). Big Tech Companies Amass Property Holdings During Covid-19 Pandemic. Retrieved 28 November 2021, from https://www.bangkokpost.com/business/2189935/big-tech-companies-amass-property-holdings-during-covid-19-pandemic
Bloomberg News. (2021). Ultra-rich skip estate tax, sparking 50% drop in IRS revenue. Retrieved 28 November 2021, from https://www.investmentnews.com/ultra-rich-skip-estate-tax-sparking-50-drop-in-irs-revenue-214350
Borger, J. (2021). António Guterres ‘sounds the alarm’ over global inequalities in UN speech. Retrieved 28 November 2021, from https://www.theguardian.com/world/2021/sep/21/antonio-guterres-united-nations-unga-speech
Browne, R., & Shead, S. (2021). ‘Facebook is closing the door on us being able to act,’ whistleblower says in UK hearing. Retrieved 28 November 2021, from https://www.cnbc.com/2021/10/25/facebook-whistleblower-frances-haugen-testifies-in-uk-parliament.html
Buiter, W. (2021). The next financial crisis is fast approaching. Retrieved 28 November 2021, from https://www.marketwatch.com/story/the-next-financial-crisis-is-fast-approaching-11633447555
Dickson, C. (2021). ‘Alarming finding’: 30 percent of Republicans say violence may be needed to save U.S., poll shows. Retrieved 28 November 2021, from https://news.yahoo.com/prri-poll-republicans-violence-040144322.html?fr=sycsrp_catchall
Fisher, M. (2021). U.S. Allies Drive Much of World’s Democratic Decline, Data Shows. Retrieved 28 November 2021, from https://www.yahoo.com/news/u-allies-drive-much-worlds-194121292.html
Forbes. (2021). The World’s Real-Time Billionaires. Retrieved 28 November 2021, from https://www.forbes.com/real-time-billionaires/#1d7a52b83d78
Frater, P. (2021). Celebrities Disappear From Internet As China Moves Against Fan Culture. Retrieved 28 November 2021, from https://variety.com/2021/digital/asia/china-celebrities-disappear-internet-fan-culture-crackdown-1235050381/
Gartner. (2021). Definition of Augmented Intelligence – Gartner Information Technology Glossary. Retrieved 28 November 2021, from https://www.gartner.com/en/information-technology/glossary/augmented-intelligence
He, L. (2021). China’s ‘unprecedented’ crackdown stunned private enterprise. One year on, it may have to cut business some slack. Retrieved 28 November 2021, from https://edition.cnn.com/2021/11/02/tech/china-economy-crackdown-private-companies-intl-hnk/index.html
La Monica, P. (2021). The race to $3 trillion: Big Tech keeps getting bigger. Retrieved 28 November 2021, from https://edition.cnn.com/2021/11/07/investing/stocks-week-ahead/index.html
Lee, G. (2021). Big Tech leads the AI race – but watch out for these six challengers. Retrieved 28 November 2021, from https://www.airport-technology.com/features/big-tech-leads-the-ai-race-but-watch-out-for-these-six-challenger-companies/
Marrow, A., & Stolyarov, G. (2021). Moscow tells 13 mostly U.S. tech firms they must set up in Russia by 2022. Retrieved 28 November 2021, from https://finance.yahoo.com/news/moscow-says-13-foreign-tech-122138251.html
Menczer, F. (2021). Facebook whistleblower Frances Haugen testified that the company’s algorithms are dangerous – here’s how they can manipulate you. Retrieved 28 November 2021, from https://news.yahoo.com/facebook-whistleblower-frances-haugen-testified-122343232.html?fr=sycsrp_catchall
Pro Robots. (2020). Cyborg Revolution: Latest Technologies and TOP of Real Cyborgs. Retrieved 28 November 2021, from https://www.youtube.com/watch?v=TyWohWpozp0
Rev. (2021). Big Tech Antitrust Hearing Full Transcript July 29. Retrieved 28 November 2021, from https://www.rev.com/blog/transcripts/big-tech-antitrust-hearing-full-transcript-july-29
Shabir, S. (2021). Four Steps To Winning Over An Increasingly Skeptical Public. Retrieved 28 November 2021, from https://www.technologytimes.pk/2021/02/02/four-steps-to-winning-over-an-increasingly-skeptical-public/
Shestopyorov, D., & Lebedeva, V. (2021). Mimo zamedlennogo dejstviya. Vlasti ishchut novye rychagi davleniya na zarubezhnyj IT-biznes (The authorities are looking for new levers of pressure on foreign IT business). Retrieved 28 November 2021, from https://www.kommersant.ru/doc/4783593
Statista. (2021a). S&P 500: largest companies by market cap 2021. Retrieved 28 November 2021, from https://www.statista.com/statistics/1181188/sandp500-largest-companies-market-cap/
Statista. (2021b). Growth of the global gross domestic product (GDP) from 2016 to 2026. Retrieved 28 November 2021, from https://www.statista.com/statistics/273951/growth-of-the-global-gross-domestic-product-gdp/
Suciu, P. (2021). Is a U.S-Russian Cyber Alliance in the Works?. Retrieved 28 November 2021, from https://nationalinterest.org/blog/buzz/us-russian-cyber-alliance-works-196194
Tesla. (2021). Artificial Intelligence & Autopilot. Retrieved 28 November 2021, from https://www.tesla.com/AI
The Liberty Beacon. (2021). Whistleblowers Torpedo Big Tech And Big Pharma: Who’s Next?. Retrieved 28 November 2021, from https://www.thelibertybeacon.com/whistleblowers-torpedo-big-tech-and-big-pharma-whos-next/
Tsargrad. (2021). V Rossii obsuzhdayut platu za YouTube: Novyj zaslon dlya internet-gigantov (YouTube Payments Discussed in Russia: New Barrier for Internet Giants). Retrieved 28 November 2021, from https://tsargrad.tv/news/v-rossii-obsuzhdajut-platu-za-youtube-novyj-zaslon-dlja-internet-gigantov_451654
UN Affairs. (2021). UN chief’s message to world leaders: ‘Wake up, change course, unite’. Retrieved 28 November 2021, from https://news.un.org/en/story/2021/09/1100152
中共中央网络安全和信息化委员会办公室 (Cyberspace Administration of China). (2021). 国家互联网信息办公室、公安部加强对语音社交软件和涉深度伪造技术的互联网新技术新应用安全评估 (The Cyberspace Administration of China and the Ministry of Public Security strengthen the security assessment of new Internet technologies and applications involving voice social software and deepfake technology). Retrieved 28 November 2021, from http://www.cac.gov.cn/2021-03/18/c_1617648089558637.htm
 Open-ended questions require a detailed answer and any explanations, whereas closed-ended questions require only “yes” or “no” answers or a choice between several options.
 Fifteen experts from Belarus, Cuba, France, Poland, Romania, Russia, the United Kingdom, the USA, and Vietnam have agreed to have their answers published. Four experts (one from Belgium, one from Russia, and two from Vietnam) did not give such consent. The answers of these four experts are used in the analytical part of this publication, with their answers to closed questions taken into account. These experts are referred to as the “expert from Belgium,” “expert from Russia,” “expert from Vietnam #2,” and “expert from Vietnam #3”. During the survey, two completed questionnaires were received from unknown sources, which are not taken into account in this analysis.