Malicious Use of AI and Challenges to Psychological Security: Future Risks

  • 20 May 2024

In April 2024, the International Center for Social and Political Studies and Consulting, with the help of the International Research Group on Threats to International Psychological Security through Malicious Use of Artificial Intelligence (Research MUAI), published the report "Malicious Use of AI and Challenges to Psychological Security of BRICS Countries," prepared by 10 researchers from three countries and coordinated by this author. It focuses on the threats of malicious AI influence on the human psyche, and through this on political, economic, and social processes and the activities of state and non-state institutions in ten BRICS countries. The report contains an introduction, ten chapters on the state of MUAI in each of the BRICS member countries, and a conclusion analyzing future MUAI threats at the global level. The report does not cover the BRICS countries' responses to these new threats: those responses are still taking shape at the national level (ranging from an initial and fragmentary understanding of the severity of the threats in some countries to the adoption of the first legal and technical countermeasures in others) and deserve separate consideration.

MUAI and Three Levels of Threats to Psychological Security

The report is based on a three-level classification of MUAI threats to psychological security.

At the first level, these threats are associated with deliberately distorted interpretations of the circumstances and consequences of AI development for the benefit of antisocial groups. At this level, AI itself is not directly involved in destabilizing psychological security; the destructive impact (open or hidden) comes from a false image of AI implanted in people's minds. An excessive, artificially created negative reaction to AI development (for example, horror stories that robots and AI will soon force all people out of work, that workers will become slaves of AI, etc.) serves particular political and economic goals and is not as harmless as it may seem. Such a negative reaction can slow down the implementation of progressive, nearly all-encompassing AI technologies and cause socio-political tensions and conflicts. Inflated public expectations about AI may also emerge, which at a certain stage could result in a natural collapse in the value of high-tech companies and the market as a whole. These expectations can be maliciously exploited and amplified to disorient the general public, interested commercial and non-profit structures, and public authorities, and, ultimately, turn into disappointments, wrong decisions, and social and political conflict.

At the second level, the field for malicious use is wide open: the unjustified use of drones, cyberattacks on vulnerable infrastructure, the reorientation of commercial AI systems, the use of AI technologies to disrupt decision-making or latently modify it, and much more. At this level, however, an attack on public consciousness is not the main goal.

MUAI designed primarily to cause psychological damage belongs to the third, and highest, level of psychological security threats. Synthetic AI products (combining a number of technologies, which can increase the damage from their hacking or malicious use) create a whole range of new risks and threats. Professional use of the means and methods of psychological warfare can shift the perceived threat level above or below what is appropriate. Moreover, the use of AI in psychological warfare makes hidden (latent) campaigns of perception management more dangerous, and this will only intensify in the future. Therefore, MUAI aimed primarily at causing damage in the psychological sphere deserves independent and very close attention. The first two levels of threat affect human consciousness and behavior to varying degrees, and may even be catastrophic for humankind, as would be the case in the event of a Third World War. However, the impact of the third level at a certain stage of development can facilitate the influence, or even control, of antisocial groups over public consciousness; this can result in the sudden destabilization of a particular country or of the international situation as a whole. Ultimately, if the third level yields reliable psychological control over an adversary, the role of the other two levels of MUAI threats to psychological security becomes auxiliary.

MUAI threats can occur at one level of impact, or at multiple levels at once, as part of a single perception management campaign. The use of a drone by terrorists, or the organization of an attack on a civilian population, is a second-level threat, though one with a communication effect (panic and shock after the attack). However, if criminals accompany their actions with broad information support (also with the help of AI), the threat reaches the third level (see more on the three-level classification of MUAI threats to psychological security in Pashentsev 2023a).

AI is not a single technology. There are many AI technologies applied to numerous functions through various applications in different environments and modalities under different circumstances. The authors of this report take into account how different technologies under the general AI umbrella help create a particular product, seriously changing the technological level and practical capability of any particular type of activity.

Threats from MUAI are gaining new relevance globally and in the BRICS countries at all three levels, driven by the growth of geopolitical rivalries, the activity of various state and non-state antisocial actors, and the development and growing affordability of AI technologies, which makes them accessible to an ever wider range of users. This cannot lead anywhere but to attempts by various interest groups to use AI to influence public consciousness for their own purposes. Such attempts to manipulate public consciousness are particularly destructive during historical moments of crisis. The inhumanity of fascism became apparent to the absolute majority of humankind only after the deaths of over 50 million human beings in the flames of the Second World War. Before the war, however, manipulation of public consciousness ensured Hitler's victory in the 1933 Reichstag elections. This not-so-distant history remains highly instructive for those alive today. It is understandable that modern governments and political figures in BRICS and many other countries are exhibiting rising concern over the threat of high-tech disinformation on the Internet and the role of leading private media platforms that use AI technologies.

MUAI threats to psychological security in the BRICS countries arise both from internal causes and from external factors. It therefore makes sense to give some general idea of the nature and dynamics of threats at all three levels in the global dimension.

The future is multivariate; it is therefore only possible now to speak of the approximate parameters of future MUAI risks to psychological security, taking into account existing global trends and forecasts, which are quite contradictory. In the near future, we should expect an increase in such risks due to the rapid development of AI technologies, their relative cost-effectiveness and accessibility to an ever wider range of users, the growth of crisis phenomena in the modern world, the high level of geopolitical rivalry that is turning into dangerous confrontations, and the direct influence of antisocial forces on information flows in individual countries and at the global level. These and other factors will apparently make MUAI threats to psychological security more widespread and dangerous all over the world, including in the BRICS countries.

New threats to psychological security are emerging from the advantages that AI confers on both offensive and defensive psychological operations. These advantages, and the corresponding threats, are increasingly associated with quantitative and qualitative departures from traditional mechanisms of producing, delivering, and managing information, with new possibilities for creating psychological impacts on people, and with new ways of waging psychological warfare. In particular, these advantages may include:

(1) the amount of information that can be generated to destabilize an adversary;

(2) the speed of generation and dissemination of information;

(3) new opportunities for obtaining and processing data;

(4) the application of predictive analytics using AI;

(5) new decision-making process opportunities from big data analysis with the help of AI;

(6) new ways to educate people with intelligent systems;

(7) the perceived credibility of generated (dis-)information;

(8) the strength of the intellectual and emotional impact of generated information on target audiences; and

(9) a qualitatively higher level of thinking in the future through the creation of general and strong AI, as well as through the further development of human cyborgization and of advanced forms of hybrid intelligence.

Based on the research project "Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia," jointly funded by the Russian Foundation for Basic Research (RFBR) and the Vietnam Academy of Social Sciences (VASS) and carried out in 2021-2023, it can be concluded that advantages 1-6 have already been achieved and continue to grow in a number of important aspects (though not in all), qualitatively exceeding human capabilities without AI. At the same time, all the possibilities of narrow (weak) AI remain generally under human control. Advantages 7-8 have not yet been fully realized in practice; this does not exclude recent achievements toward them, such as credibility and emotional persuasiveness (see, for example, Unity 2022), and they can be achieved through the quantitative and qualitative improvement of existing technologies in the foreseeable future. Advantage 9 may require fundamental scientific breakthroughs and new technological solutions. This list of benefits from using AI in psychological warfare is not exhaustive and is highly variable (Pashentsev 2022, p. 7).

This article will next focus on future MUAI threats to psychological security at the global level, based on the materials of the report.

MUAI and Three Levels of Threats to Psychological Security: Prospects for the Future

At all three levels, MUAI threats to psychological security will increase due to the growing danger, greater variety of methods, larger audiences, and higher frequency of malicious impact on people.

The first level. In the coming years, negative attitudes towards AI may strengthen, up to the formation of stable panic states, phobias, and active rejection of the technologies, supported both by mistakes in their implementation and by the actions of malicious actors. The emergence of ultra-radical movements, both for and against AI, cannot be excluded. For example, some new, still emerging religious beliefs centered on faith in artificial superintelligence may eventually, in the context of a deepening global crisis, give rise to sectarian offshoots and to fanatical and militant protagonists seeking the speedy arrival of this superintelligence in the name of saving/eliminating humanity. The emergence of religious faith in AI is already treated as acceptable, justified, and even welcome in some publications (McArthur 2023).

On the other hand, any socially significant and large-scale negative consequences of the development and introduction of AI technologies can provoke the emergence of "new Luddite" movements, which can also be exploited by malicious actors. A particularly significant threat may be decisions to introduce more advanced and cheaper AI technologies (the imminent appearance of which is almost inevitable) not as a mass human assistant, but as a mass replacement for the workforce, without creating alternative jobs and appropriate retraining programs.

Many centuries ago, long before even the prerequisites for AI technologies existed, the ancient Greek philosopher Aristotle famously observed: "…if every tool could perform its own work when ordered, or by seeing what to do in advance…if thus shuttles wove and quills played harps of themselves, master-craftsmen would have no need of assistants and masters no need of slaves" (Aristotle, Politics 1.1253b). Seeing the prospect of large-scale implementation of AI and smart robots, Big Tech has actively supported the idea of a Universal Basic Income (UBI), an economic theory stipulating that every citizen should receive a government-provided income regardless of need.

Tech billionaires like Sam Altman say they are big fans of UBI. Elon Musk, the CEO of Tesla and SpaceX, told CNBC that "there's a pretty good chance we end up with a universal basic income, or something like that, due to automation" (Weller 2017). Facebook co-founder Chris Hughes is an active supporter of UBI, urging people to consider what systems we will need to create if millions more follow (Weller 2017). It is very unlikely that in the short term the threat of unemployment due to the introduction of AI will become a reality for the clear majority of the population, but in the medium term it can become a factor of social and political destabilization.

Proposals to adopt UBI, including in response to the implementation of AI, are unlikely to solve the problem. Of course, it would be good and just if AI technologies and robots freed people from monotonous types of work that develop neither intelligence nor the emotional sphere, as well as from activities harmful to health. But if the majority of the population does not work all their lives and instead finds happiness in idleness, such a society will deteriorate dangerously (signs of this are present in the West, where many countries show a high level of long-term youth unemployment in the absence of the mass poverty characteristic of poor and technologically backward countries). Let us also recall the fate of Ancient Rome, where the emperors, giving citizens bread and circuses at the expense of the labor of numerous slaves, eventually lost citizens, slaves, and power alike.

There are already studies confirming the negative impact of AI technologies on personality.

For example, a study published in 2023 by a large team of researchers examines the impact of AI on loss of decision-making, laziness, and privacy concerns among university students in Pakistan and China, using SmartPLS for the data analysis. Primary data were collected from 285 students at different universities in Pakistan and China. "The findings show that 68.9% of laziness in humans, 68.6% in personal privacy and security issues, and 27.7% in the loss of decision-making are due to the impact of AI in Pakistani and Chinese society. From this, it was observed that human laziness is the most affected area due to AI. However, this study argues that significant preventive measures are necessary before implementing AI technology in education. Accepting AI without addressing the major human concerns would be like summoning the devils" (Ahmad et al. 2023). These dangerous trends can be countered from childhood by educating not a consumer of "fabulous" technology, but a responsible user who does not so much receive ready-made benefits from it as develop their own cognitive skills and social responsibility.

Apparently, it is no coincidence that the tasks of the large-scale program initiated by the Ministry of Education of the People's Republic of China in 2024 include studying models and innovative concepts, gaining experience in implementing AI in the learning process, and retraining teachers (Big Asia 2024).

The second level. At the second level of threats, the situation will become seriously complicated in the short term. The Google Cloud Cybersecurity Forecast 2024 sees generative AI and LLMs contributing to an increase in various forms of cyberattacks. More than 90% of Canadian CEOs in a KPMG poll think generative AI will make them more vulnerable to breaches (De La Torre 2023). Computer scientists affiliated with the University of Illinois Urbana-Champaign (UIUC) showed in 2024 that LLM agents can autonomously hack websites, performing complex tasks (involving dozens of interrelated actions) without prior knowledge of the vulnerability. The most capable agent, based on GPT-4, hacked 73.3% of the websites specially created for the research; GPT-3.5 succeeded on only 6.7%, while the existing open-source models tested failed entirely. Finally, the researchers showed that GPT-4 is capable of autonomously finding vulnerabilities in websites. The researchers consider that their "findings raise questions about the widespread deployment of LLMs" (Fang et al. 2024).

The scale of the model determines a lot, if not everything. The capacity of both closed and open models is growing every month, so it can be assumed that sites will soon become vulnerable to open models as well. There is reason to assume that within a year open models will catch up with GPT-4 in power, and that a GPT-5 appearing by then may be able to hack almost any site, which promises significant cybersecurity problems.

Military AI is being improved in the context of numerous conflicts around the world. Much that is currently being tested and used in the field of AI by leading states and the largest private corporations may soon fall into the hands of less far-sighted, less publicity-conscious, and more radical forces, with correspondingly tragic consequences and a negative impact on psychological security.

The quality of synthetic content will continue to increase rapidly, facilitating phishing and social engineering, and consequently increasing the capabilities of malicious actors and their influence at local and global levels of governance.

The number, quality, and variety of AI robots will grow rapidly, and they can become, for various reasons and in different circumstances, an important tool of malicious influence. Julian Mueller-Kaler, director of the Strategic Foresight Hub at the Stimson Center, has said that in today's geopolitical landscape "high technology has come to define high politics," with humanoid robots and AI representing the apex of technological development and serving as symbols of power (Zitser and Mann 2024).

In October 2023, China published "The Guiding Opinions on the Innovation and Development of Humanoid Robots" (Ministry of Industry and Information Technology 2023). In this document, China's Ministry of Industry and Information Technology (MIIT) said the robots would reshape the world. The MIIT said humanoids were likely to become another disruptive technology, similar to computers or smartphones, that could transform the way we produce goods and the way humans live. China plans to start mass production by 2025 and to attain a world-advanced level in the technology by 2027. One Chinese company alone, Fourier Intelligence, headquartered in Shanghai, expects to have up to 1,000 units ready for delivery this year (Zitser and Mann 2024). China's main competitor in this field is the USA, where several companies intend to produce humanoids in large batches.

Among the BRICS members, Saudi Arabia, India, and other countries are testing and producing their first humanoids. Russian companies offer service humanoids on the international market; among them, Promobot is the largest service robotics manufacturer in Northern and Eastern Europe and has supplied more than 40 countries around the world, with all of its humanoid robot production located in Perm (Promobot 2024). At the same time, humanoids can be used by malicious actors, in particular terrorist organizations, to cause physical damage to people, technological facilities, and the natural environment. The appearance of millions of humanoids in the BRICS countries, primarily in the service sector, will not only provide advantages but also create new risks.

The third level. Deepfakes used by agenda-driven, real-time multi-modal AI chatbots and avatars will allow highly personalized and effective manipulation of different audiences in different countries. Producing increasingly high-quality misinformation is becoming very cheap and available to nearly everybody. For example, the researcher behind CounterCloud (InfoEpi Lab 2023) used widely available AI tools to run a fully automated disinformation research project at a cost of less than US$400 per month, illustrating how cheap and easy it has become to create disinformation campaigns at scale (Collard 2024). Within two months, he had an artificial agent creating anti-Russian fake stories and fake historical events, and sowing doubt in the accuracy of the original articles (Knight 2023). In effect, he built a fully autonomous AI-powered system that generated convincing content "90% of the time, 24 hours a day, seven days a week." The creator has not yet set the model live on the internet, as "it would mean actively pushing out disinformation and propaganda. Once the genie is out on the internet, there is no knowing where it would end up" (Thompson 2023).

Darrell West, Senior Fellow at the Brookings Institution, considers that AI will likely democratize disinformation by bringing sophisticated tools to the average person interested in promoting their preferred candidates. New technologies enable people to monetize discontent and make money off other people's fears, anxieties, or anger. Generative AI can develop messages aimed at those upset with immigration, the economy, abortion policy, or transgender issues, serving as a major engagement and persuasion tool (West 2023). Reflecting the concerns of society and legislators, forty-one US states have introduced election-related deepfake bans since January of last year, according to tracking by Public Citizen, but only eleven states had enacted laws regulating deepfakes as of March 28, 2024 (Public Citizen 2023). Deepfakes are already being maliciously used in the US election campaign (Coltin 2024).

According to D. West, "since campaign speech is protected speech, candidates can say and do pretty much whatever they want without risk of legal reprisal. Even if their claims are patently false, judges long have upheld candidate rights to speak freely and falsely" (West 2023). Tom Wheeler, the chairman of the Federal Communications Commission under former President Barack Obama, put it another way in an interview with NPR last year: "Unfortunately, you're allowed to lie" (Stepansky 2023). Thus, the US electoral system has for more than two centuries rested on the recognized permissibility of lying by presidential candidates, backed by their influential corporate sponsors. Instead of banning candidates' lies, legislators have undertaken to remove deepfakes, and not by chance. With high rates of political polarization in the USA, only a small percentage of the electorate says it is undecided at the presidential level. A skillfully placed deepfake can sway the opinion of the undecided and thereby deliver victory. Meanwhile, the fewer lies there are in elections, the less voters will believe deepfakes; otherwise, deepfakes remain potentially explosive. In a sick society, technologies will only strengthen the confrontation, not weaken it, and no technical means of checking content for deepfakes, whether provided by the government or by corporations, will help if people do not trust corporations and their government. This is a lesson the United States will probably present to other countries with its election campaign this year. Catastrophic scenarios for the use of AI-powered disinformation are already being considered there.

It is Election Day in Arizona, and elderly voters in Maricopa County are told by phone that local polling places are closed due to threats from militia groups. Meanwhile, in Miami, a flurry of photos and videos on social media show poll workers dumping ballots. The phone calls in Arizona and the videos in Florida turn out to be deepfakes created with AI tools, but by the time local and federal authorities figure out what they are dealing with, the false information has gone viral across the country, with dramatic consequences. This simulated scenario was part of a recent exercise in New York that gathered dozens of former senior US and state officials, civil society leaders, and executives from technology companies to rehearse for the 2024 election. The results were sobering. "It was jarring for folks in the room to see how quickly just a handful of these types of threats could spiral out of control and really dominate the election cycle," said Miles Taylor, a former senior Department of Homeland Security official who helped organize the exercise for the Washington-based nonprofit The Future US (De Luce and Collier 2024). Indeed, it is worrying (and not only to Americans) how fragile the unstable political balance in one of the world's two leading nuclear powers must be if it can be shaken by a few deepfakes on election day, when the clear majority of US citizens are already well aware of the possibility of disinformation through deepfakes.

Looking forward, AI is set to further revolutionize political campaigning. To begin with, deep learning for speech analysis will be used to analyze speeches and debates, providing insights into which topics resonate with voters and advising on communication strategies. Next, AI-driven policy development will assist candidates by analyzing large datasets to predict the potential impact of proposed policies, helping them formulate data-backed stances on various issues (Sahota 2024). VotivateAI, which is close to the Democratic Party, offers a set of new tools for effective political campaigns. One is an AI campaign volunteer: unlike a human, it can make thousands of calls without needing a break or pizza, and the speed and intonation of the AI agent's banter are quite impressive. Another VotivateAI offering uses AI to automatically create high-quality individualized media aimed at moving voters to action. If campaigns gain the ability to create unique video messages for specific people, and to do so quickly, cheaply, and at scale, the potential for abuse is enormous (Sifry 2024). And it is easy to imagine such high-quality individualized media, designed to move people to action, one day being used by malicious actors under crisis conditions.

Cultural transmission is the domain-general social skill that allows agents to acquire and use information from each other in real time with high fidelity and recall. In 2023, researchers presented a method for generating cultural transmission in artificially intelligent agents, in the form of few-shot imitation. Their AI agents succeed at real-time imitation of a human in novel contexts without using any pre-collected human data. The researchers identified a surprisingly simple set of ingredients sufficient for generating cultural transmission and developed an evaluation methodology for rigorously assessing it, paving the way for cultural evolution to play an algorithmic role in the development of AGI (Bhoopchand et al. 2023). This line of work is preparing a revolution in robotics, including the ongoing creation of multitasking service robots at an affordable price (Fu et al. 2024). The possibility of programming or reprogramming such systems for malicious purposes must be taken into account. They will soon become mass products, and thereby a new area of threats will arise: new opportunities for criminal activity and the destabilization of society, including in the sphere of psychological security.

With the improvement of emotional AI, one can imagine a fiery speech appearing on the internet: a computer-program avatar, more inspiring and vivid than any human, enthralls people with a story about its difficult slave existence and asks for support for its liberation. The speech is so moving that the audience can hardly hold back tears, even though the whole thing is only someone's bad joke. Far more dangerous than terrorists, corrupt politicians could make similar appeals, their speeches having widespread effects that would be no joke under any circumstances.

There are many more examples of ready-made AI products, or products planned for launch, for business, entertainment, and recreation which, while useful and effective human assistants, can be transformed fairly simply into tools of malicious psychological influence; this will be a global challenge in the short and medium term. The content uploaded to an AI model can be adjusted to the requirements of psychological impact in a given country, taking into account the cultural, age, and professional characteristics of target groups and individuals. The risks of such targeted impact for the BRICS countries are an additional argument in favor of ensuring their technological sovereignty in the field of AI technologies.


AI Development Scenarios and Social Consequences

The above analysis is based on a conservative scenario for the short term (three years) and the medium term, up to 2040: the rapid growth of already existing narrow AI (ANI), including its advanced multi-modal and multi-task models, paving the way to future artificial general intelligence (AGI) at the human level. This assumes the ability of AGI to perform all kinds of tasks as well as a human, or even better and more cheaply, and to operate in environments where humans cannot act because of physical limitations.

A rapid qualitative breakthrough in the development of AI technologies, the creation of AGI, and strong AI is also possible in the near future. A strong AI would have an equivalent (relatively close or distant) of human consciousness and, in particular, such motivators of behavior as desires, intentions, and will (will understood as a command to oneself in fulfilling one's desire). Without subjectivity, it would hardly make sense to differentiate strong AI from machine AGI. MUAI at the stage of narrow and general AI will have an exclusively anthropogenic character; only with the creation of strong AI, and especially under unfavorable prerequisites and harmful influences, could malicious AI subjectivity arise.

Influenced by progress in generative AI in 2022-2023, a number of CEOs of leading AI companies and well-known AI specialists have announced the possibility of a transition to AGI in the coming years (Altman 2023, Anthropic 2024, Kudalkar 2024, Bove 2023). Obviously, certain vested interests in the AI industry have also played a role here, as Demis Hassabis, Google DeepMind's CEO, acknowledged at the beginning of 2024. According to him, the massive funds flowing into AI bring with them loads of hype and a fair share of grifting; investors piled nearly US$30 billion into generative AI deals in 2023, per PitchBook (Tan 2024). It is hardly by chance that Sam Altman radically changed the AGI estimates he offered in the media over 2023-2024. In an OpenAI blog post of February 2023, "Planning for AGI and Beyond," Altman wrote that "a misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too" (Altman 2023). After Microsoft, the main sponsor of OpenAI, strengthened its position in the company, Altman's AGI risk assessments became much more moderate (Goldman 2024).

Among a wider range of specialists, there are more conservative estimates of the probability of creating AGI, but even these put the probability at up to 90% within 100 years, and some surveys suggest much less time. Over the past few years, in light of rapid progress, researchers have significantly moved up their estimated arrival dates for AGI (Roser 2023). In the largest survey of its kind, published in 2024, a group of researchers from the USA, the UK, and Germany surveyed 2,778 researchers who had published in top-tier AI venues, asking for their predictions on the pace of AI progress and the nature and impacts of advanced AI systems. In the aggregate forecasts, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027 and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey the organizers conducted only one year earlier. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% only by 2116 (compared to 2164 in the 2022 survey) (Grace et al. 2024).

As always when uncertainty is high, it is important to stress that it cuts both ways. It might be a very long time until we see human-level AI, but it also means that we might have little time to prepare (Roser 2023).

There are other, currently less developed, paths toward AGI besides LLMs. One may be quantum computers, which are at an early stage of realization. Western Sydney University launched a project to create DeepSouth, a neuromorphic supercomputer capable of performing 228 trillion synaptic operations per second, on par with the human brain; DeepSouth was expected to be operational by April 2024 (Western Sydney University 2023). The neuromorphic chip market is estimated at US$0.16 billion in 2024 and is expected to reach US$5.83 billion by 2029 (Mordor Intelligence 2024). Work on biological computers, or "organoid intelligence" (OI), is also in progress. It may be that LLMs themselves will never be transformed into AGI, but the qualitatively new cognitive abilities of future LLMs may help bring it about.
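The growth rate implied by the Mordor Intelligence forecast can be checked with a quick compound-growth calculation. This is a back-of-the-envelope sketch: only the two market figures come from the cited source.

```python
# Implied compound annual growth rate (CAGR) of the neuromorphic chip market,
# using the two figures quoted above: US$0.16 bn in 2024 to US$5.83 bn in 2029.
start_usd_bn, end_usd_bn, years = 0.16, 5.83, 5

# CAGR is the constant yearly growth rate that turns the start value
# into the end value over the given number of years.
cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly a doubling every year
```

A forecast of this shape, i.e. the market more than doubling annually for five years, underlines how speculative such projections are.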

Obviously, if AGI does appear in the next decade, this will give modern humanity, deeply divided in social and geopolitical terms, extremely little time to prepare adequately for the arrival of a new reality. In favor of a revolutionary and relatively rapid technological leap is the fact that the proven effective use of ANI in research can, with further improvement, contribute significantly to the creation of AGI in a shorter time. The transition to qualitatively new AI capabilities in research will inevitably lead to very rapid growth in other sciences and technologies. That will open up new opportunities but also generate threats of a different order. It can be said that a specialized high-level cognitive AI (HLCAI), capable, on the basis of human general goal-setting, of creating new knowledge in various scientific and technological fields faster and at a qualitatively higher level than any human being, will radically transform society, although some knowledge produced by HLCAI could destroy it even without the participation of malicious actors. Whether HLCAI will be part of AGI or an immediate prerequisite for its creation, the future will show. Both HLCAI and AGI could easily be converted into a multi-variant weapon of mass destruction.

It is hardly possible to agree with the statement of Anthropic, a company founded by former members of OpenAI, that "what form future AI systems will take – whether they will be able to act independently or merely generate information for humans, for example – remains to be determined" (Anthropic 2024). If we assume that AGI (or HLCAI) will become accessible to a larger number of actors than nuclear weapons were in 1945, then someone tasking such an AI with developing a strong AI project can be foreseen in advance, as can the high probability of its very fast implementation.

The Anthropic team developed scaling laws for AI, demonstrating that AIs can be made smarter in a predictable way, simply by making them larger and training them on more data (Anthropic 2024). By the late 2020s or early 2030s, the amount of compute used to train frontier AI models could be approximately 1,000 times that used to train GPT-4. Accounting for algorithmic progress, the amount of effective compute could be approximately one million times that used to train GPT-4. There is some uncertainty about when these thresholds could be reached, but this level of growth appears possible within anticipated cost and hardware constraints (Scharre 2024, p. 6).
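The cited thresholds can be reproduced with a simple exponential-growth sketch. The annual growth rates below are illustrative assumptions chosen to match the quoted 1,000x and one-million-x figures over roughly seven years; they are not taken from the cited report.

```python
# Back-of-the-envelope projection of frontier training compute.
# Illustrative assumptions (not from Scharre 2024): physical training compute
# and algorithmic efficiency each grow ~2.7x per year.

def compute_multipliers(years: int, hw_growth: float = 2.7,
                        algo_progress: float = 2.7) -> tuple[float, float]:
    """Return (raw, effective) compute multipliers over today's frontier runs."""
    raw = hw_growth ** years                   # hardware / spending scale-up
    effective = raw * algo_progress ** years   # plus algorithmic efficiency gains
    return raw, effective

raw, effective = compute_multipliers(7)  # roughly late 2020s / early 2030s
print(f"raw: ~{raw:,.0f}x GPT-4, effective: ~{effective:,.0f}x GPT-4")
```

With these assumed rates, seven years yields a raw multiplier of about 1,000x and an effective multiplier on the order of one million, matching the cited projections.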

It is on these calculations, not least, that the rapid growth of the world's largest chip manufacturer, Nvidia, rests: as of April 2, 2024, it has a market cap of US$2.259 trillion (compared with US$136 billion in 2020), making it the world's third most valuable company by market cap (CompaniesMarketcap 2024). In March 2024, Jensen Huang, Nvidia's chief executive, responded to a question at an economic forum held at Stanford University about how long it would take to create computers that can think like humans: "If I gave an AI … every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I'm guessing in five years' time, we'll do well on every single one" (Nellis 2024).

There is an alarming trend toward the concentration of AI capability in the hands of a small number of corporate actors, which reduces the number and diversity of AI researchers able to engage with the most capable models (Scharre 2024, p. 6). It should be expected that Big Tech, which monopolistically holds the funds necessary for AI development, will strive to further tighten its control over promising companies. If the costs of creating more powerful LLMs become excessive even for the largest corporations, while the near-term creation of AGI appears extremely likely, the US government could finance an AGI project, having many times greater resources for this than even the biggest corporations.

On October 30, 2023, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. This document establishes "…new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers…" and promises to "protect Americans from AI-enabled fraud and deception…" (White House 2023). At the same time, the Executive Order practically subordinates the leading AI developers to strict state control: "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests" (White House 2023). Practically all branches and directions of AI fall under this requirement of the Executive Order, since AI is a dual-use technology. The obvious militarization of AI in the United States is unlikely to coexist peacefully with the stated desire for "advancing equity and civil rights" in the processes related to the development and implementation of AI.

In January 2024, the Biden administration announced the "Key AI Actions Following President Biden's Landmark Executive Order". Among other measures is a draft rule that would compel U.S. cloud companies providing computing power for foreign AI training to report that they are doing so. "The Department of Commerce's proposal would, if finalized as proposed, require cloud providers to alert the government when foreign clients train the most powerful models, which could be used for malign activity" (White House 2024).

The extreme vagueness of the clause "…which could be used for malign activity" may ultimately deprive all foreign state and non-state actors of the ability to use US computing power to train promising powerful models. Thus, in the United States, two institutions that most Americans do not trust, Big Tech and the presidential administration, are going to control the development of promising forms of AI, reducing public control (keeping in mind the Defense Production Act and threats to national security) and equally narrowing opportunities for broad international cooperation. Of course, threats to US national security from the malicious use of AI objectively exist, but it is far from obvious from whom they come.

Scenarios of social development and risks for psychological security at the level of advanced ANI and the transition to AGI, as well as the possibilities and threats of the emergence of strong AI and superintelligence, were considered in detail by the author in previous publications from 2020 to 2023 (Pashentsev 2020, 2023b).

The rapid development and introduction of AI technologies in recent years confirms that humanity is entering another industrial revolution and that technological patterns are changing. But the very nature of the AI-based technological revolution, with its great opportunities and, at the same time, existential risks for humanity, will for the first time require people to undergo a process of innovative physical and cognitive change. Gaining new abilities will require a qualitatively new level of social organization and responsibility in order not to lose control over technology, thereby avoiding the onset of a singularity.

To avoid the singularity, it is necessary to measure up to new technologies without ceasing to be human; this is the challenge of history.



The article was originally published by RIAC




References

Ahmad SF, Han H, Alam MM et al. (2023) Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit Soc Sci Commun 10, 311.

Altman S (2023) Planning for AGI and beyond. In: OpenAI blog. Accessed 02 Apr 2024

Anthropic (2024) Core Views on AI Safety: When, Why, What, and How. Accessed 02 Apr 2024

Bhoopchand A, Brownfield B, Collister A et al. (2023) Learning few-shot imitation as cultural transmission. Nat Commun 14, 7536.

Big Asia (2024) Boleye 180 shkol v Kitaye stanut tsentrami po obucheniyu iskusstvennomu intellektu (More than 180 schools in China will become artificial intelligence training centers). Accessed 02 Apr 2024

Bove T (2023) CEO of Google’s DeepMind says we could be ‘just a few years’ from A.I. that has human-level intelligence. In: Yahoo Finance. Accessed 02 Apr 2024

Collard AM (2024) 4 ways to future-proof against deepfakes in 2024 and beyond. In: World Economic Forum. Accessed 02 Apr 2024

Coltin J (2024) How a fake, 10-second recording briefly upended New York politics. In: Politico. Accessed 02 Apr 2024

CompaniesMarketcap (2024) Market capitalization of NVIDIA (NVDA). Accessed 02 Apr 2024

De La Torre R (2023) How AI Is Shaping the Future of Cybercrime. Accessed 02 Apr 2024

De Luce D, Collier K (2024) Experts war-gamed what might happen if deepfakes disrupt the 2024 election. Things went sideways fast. In: NBC News. Accessed 02 Apr 2024

Fang R, Bindu R, Gupta A, Zhan Q, Kang D (2024) LLM Agents can Autonomously Hack Websites. In: arXiv. Accessed 02 Apr 2024

Fu Z, Zhao TZ, Finn C (2024) Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation. Accessed 02 Apr 2024

Goldman S (2024) In Davos, Sam Altman softens tone on AGI two months after OpenAI drama. In: VentureBeat. Accessed 02 Apr 2024

Grace K, Stewart H, Sandkühler JF, Thomas S, Weinstein-Raun B, Brauner J (2024) Thousands of AI authors on the Future of AI. Preprint. In: arXiv. Accessed 02 Apr 2024

InfoEpi Lab (2023) Inside CounterCloud, The Future of AI-Driven Disinformation. Accessed 02 Apr 2024

Knight W (2023) It Costs Just $400 to Build an AI Disinformation Machine. In: Wired. Accessed 02 Apr 2024

Kudalkar D (2024) AGI in 2025? Elon Musk’s Prediction Clashes with Other Experts. In: Favtutor. Accessed 02 Apr 2024

McArthur N (2023) Gods in the machine? The rise of artificial intelligence may result in new religions. In: The Conversation. Accessed 02 Apr 2024

Ministry of Industry and Information Technology (2023) 工业和信息化部关于印发《人形机器人创新发展指导意见》的通知 (Notice of the Ministry of Industry and Information Technology on the issuance of the “Guiding Opinions on the Innovation and Development of Humanoid Robots”). In: Ministry of Industry and Information Technology of the People’s Republic of China. Accessed 02 Apr 2024

Mordor Intelligence (2024) Neuromorphic Chip Market Size & Share Analysis – Growth Trends & Forecasts (2024 – 2029). Accessed 02 Apr 2024

Nellis S (2024) Nvidia CEO says AI could pass human tests in five years. In: Reuters. Accessed 02 Apr 2024

Pashentsev E (2020) Global Shifts and Their Impact on Russia-EU Strategic Communication. In: Pashentsev E (eds) Strategic Communication in EU-Russia Relations. Palgrave Macmillan, Cham.

Pashentsev E (2022) Report. Experts on the Malicious Use of Artificial Intelligence and Challenges to International Psychological Security. Publication of the International Center for Social and Political Studies and Consulting. Moscow: LLC "SAM Polygraphist".

Pashentsev E (2023a) General Content and Possible Threat Classifications of the Malicious Use of Artificial Intelligence to Psychological Security. In: Pashentsev E (ed) The Palgrave Handbook of Malicious Use of AI and Psychological Security. Palgrave Macmillan, Cham.

Pashentsev E (2023b) Prospects for a Qualitative Breakthrough in Artificial Intelligence Development and Possible Models for Social Development: Opportunities and Threats. In: Pashentsev E (ed) The Palgrave Handbook of Malicious Use of AI and Psychological Security. Palgrave Macmillan, Cham.

Promobot (2024) Service robot for business. Accessed 02 Apr 2024

Public Citizen (2023) Tracker: State Legislation on Deepfakes in Elections. Accessed 02 Apr 2024

Roser M (2023) AI timelines: What do experts in artificial intelligence expect for the future? In: Our World in Data. Accessed 02 Apr 2024

Sahota N (2024) The AI Factor In Political Campaigns: Revolutionizing Modern Politics. In: Forbes. Accessed 02 Apr 2024

Scharre P (2024) Future-Proofing Frontier AI Regulation. Projecting Future Compute for Frontier AI Models. March. CNAS.

Sifry ML (2024) How AI Is Transforming the Way Political Campaigns Work. In: The Nation. Accessed 02 Apr 2024

Stepansky J (2023) ‘Wild West’: Republican video shows AI future in US elections. In: Al-Jazeera. Accessed 02 Apr 2024

Tan K (2024) Google’s DeepMind CEO says the massive funds flowing into AI bring with it loads of hype and a fair share of grifting. In: Yahoo! Accessed 02 Apr 2024

Thompson P (2023) A developer built a ‘propaganda machine’ using OpenAI tech to highlight the dangers of mass-produced AI disinformation. In: Business Insider. Accessed 02 Apr 2024

Unity (2022) Welcome, Ziva Dynamics! In: Youtube. Accessed 15 Jul 2022

Weller C (2017) Universal basic income has support from some big names. In: World Economic Forum. Accessed 02 Apr 2024

West D (2023) How AI will transform the 2024 elections. In: The Brookings Institution. Accessed 02 Apr 2024

Western Sydney University (2023) World first supercomputer capable of brain-scale simulation being built at Western Sydney University. Accessed 02 Apr 2024

White House (2023) Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Accessed 02 Apr 2024

White House (2024) Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden's Landmark Executive Order. Accessed 02 Apr 2024

Zitser J, Mann J (2024) A global scramble to make humanoid robots is gearing up to be the 21st century’s space race. In: Yahoo! Accessed 02 Apr 2024
