The expanding use of artificial intelligence (AI) has long caused deep concern in expert circles. However, it is AI escaping human control that tends to be regarded, and securitized, as the main threat. Such a scenario, nonetheless, does not take into account the fact that the capabilities of AI can be deliberately used for criminal purposes.
Researchers from different countries are vigorously studying the threats that the malicious use of artificial intelligence (MUAI) creates for society as a whole and for particular spheres of human activity such as politics, economics, military affairs, etc. Nevertheless, threats targeting international information and psychological security (IPS) have not yet been identified as an independent category for analysis. Meanwhile, the use of AI to destabilize international relations through deliberate informational and psychological influence on public opinion and on the psychology of individuals by means of high technology clearly presents a great danger.
The international seminar of young researchers «The Political Situation in the Northeast Asia and Threats of Malicious Use of AI to Destabilize the International Psychological Security in the Context of Russia’s National Security», dedicated to this aspect of the use of AI, was held on November 25 at the Institute of Contemporary International Studies of the Diplomatic Academy of the Russian Ministry of Foreign Affairs. The event was held online as part of the implementation of the international grant project «Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia», scheduled for 2021–2022. The project is being implemented by research groups from Russia and Vietnam on the basis of grants from the Russian Foundation for Basic Research and the Vietnamese Academy of Social Sciences, respectively. The seminar was attended by researchers from Russia, Belarus, Vietnam and India.
During the seminar, the participants examined the malicious use of artificial intelligence, the threats and risks to society resulting from MUAI, and the negative consequences of specific high-tech information and psychological operations of various kinds. It is important to note that the speakers not only touched on the main aspects and types of MUAI in the field of IPS, but also analyzed various regions where these issues are relevant.
The seminar was moderated by Evgeny Nikolaevich Pashentsev, a leading researcher at the Center for Digital Studies of the Institute of Contemporary International Studies of the Diplomatic Academy of the Ministry of Foreign Affairs of Russia and coordinator of the International Group of Experts on the Study of Threats to International Information and Psychological Security as a Result of Malicious Use of Artificial Intelligence (Research MUAI). In his welcoming speech, E. N. Pashentsev stressed the relevance of the topic under discussion.
The main part of the seminar was opened by the reports of Yuri Kolotaev and Darya Matyashova on the problem of MUAI in Northeast Asia. The speakers deliberately focused the other participants’ attention on this region, which is characterized both by a high level of AI technology development and by acute internal and geopolitical conflicts.
Thus, Yuri Kolotaev, a graduate student at St. Petersburg State University, in his report «Practices of using AI for conducting perception management and propaganda campaigns in the Northeast Asia», described the NEA countries as one of the main centers of AI development. Developing the ideas of the report, he focused on the arsenal of technologies that are already available and applied in the countries of this region, also highlighting the political and economic factors behind the growing scale of MUAI. Among Yuri Kolotaev’s conclusions, the following should be noted: first, the penetration of AI into the practice of information confrontation can already be gradually traced in some countries; second, NEA countries demonstrate the spread of information influence within the region, and the use of AI technologies is becoming one of the factors of that spread.
Darya Matyashova, a master’s student at St. Petersburg State University, in her report «MUAI threats in Taiwan’s cybersecurity policy: problems, vulnerabilities, prospects (socio-psychological aspect)», identified Taiwan as one of the regional epicenters in the field of countering MUAI. The importance of Taiwan in this context is determined both by the island’s position in the architecture of regional security and by the potential for information and psychological manipulation already demonstrated in the course of internal political confrontation. The speaker concluded that Taiwan’s vulnerability to MUAI poses a threat both to its reputation among potential allies and to its own identity as a democratic society. These risks can be minimized if the authorities of Taiwan clearly define the factors shaping the information and psychological field: in particular, if they incorporate a socio-psychological aspect into local cybersecurity strategies.
Another important area of MUAI is the malicious use of deepfakes. Ekaterina Mikhalevich, a graduate student at St. Petersburg State University, addressed this issue in her report «The position of the People’s Republic of China on countering the risks of malicious use of AI: a view from Russia». Using the example of deepfakes in China, she clearly illustrated the threats and the damage done to the reputation of business structures and directly to the leadership of China. Drawing on a number of cases, E. Mikhalevich demonstrated that the untimely suppression of MUAI adversely affects not only the reputation of the Chinese IT giants themselves, but also the confidence of Western investors, resulting in an outflow of foreign investment from China, a reduction in the number of users of Chinese programs, and, consequently, a reduction of China’s influence on the digital economy of the countries that consume its services. This conclusion is borne out by the case of a viral video in which Donald Trump allegedly said that he had used nuclear weapons against North Korea. Audio tracks “sliced” from the public speeches of the former US president and superimposed on a video clip of one of his speeches caused short-term panic among residents of Northeast Asian countries, until this fake news fragment was officially debunked by news agencies.
Lyubov Shmatkova, a lecturer at the Department of Regional Problems of World Politics, Faculty of World Politics, Lomonosov Moscow State University, continuing the topic covered by Ekaterina Mikhalevich, presented the report «Deepfakes as a Threat to the Psychological Security of Individuals and Society: the View of the European Union». Relying on sources from the European Union and its member states, L. Shmatkova familiarized the audience with the specifics of the threats that emerge from the exploitation of deepfakes and that are recognized as the highest priority in the EU, from online crimes against children to shaping the results of investigations. Among such features, Lyubov Shmatkova highlighted the high level of reach, the increasing availability and quality of tools for creating deepfakes, the risk they pose at different social levels (individual, organizational, national), and the harm of a different nature they cause (psychological, financial, social). One of the conclusions reached by the speaker is that the impact of deepfakes, especially those used for disinformation purposes, is not long-term but short-term (the so-called «window of effectiveness»). This quality makes deepfakes especially effective at provoking an immediate emotional reaction.
A speaker from India, Kallakuri Radhakrishna, a master’s student at the International Laboratory of Applied Network Research of the National Research University Higher School of Economics, examined the problem of MUAI in the field of IPS in his country. In his analysis he relied on real examples of political disinformation campaigns conducted in India. The speaker also noted how AI technologies directly affect the psychological component of citizens’ perception of information content. In his conclusions, K. Radhakrishna stressed the urgency of combating hybrid threats, which are becoming more dangerous as AI develops, especially in times of crisis for the country.
Vitaly Romanovsky, a researcher from Belarus who cooperates with the Diplomatic Academy of the Russian Foreign Ministry, in his report «Malicious Use of Artificial Intelligence for Terrorist Purposes: Recommendations for Policy-Makers and Counter-Terrorism Entities», analyzed three areas of AI exploitation by terrorists: cyber, physical and political, illustrating each of them with vivid examples. In cyberspace, terrorists can use DDoS attacks, malware attacks and ransomware attacks. In the physical sphere, it is increasingly likely that terrorists will use AI to attack strategic systems: for example, public transport, power plants, sewage systems, public health infrastructure or military facilities. Finally, in the political sphere, terrorists can potentially resort to AI to set the agenda in a particular state or region through propaganda and disinformation and thereby increase public distrust of the authorities. Based on the above, the speaker gave practical recommendations that can be implemented at the legislative level to counter MUAI for terrorist purposes. They include, first, the involvement of national governments and groups of partner states in international cooperation for the timely tracking of terrorist use of MUAI; second, the expansion of the data processing capabilities of national counter-terrorism organizations; and third, the establishment of relevant international norms and standards for the application of new technologies such as artificial intelligence.
When conflicts escalate in the international arena, AI can be used as a powerful weapon. Oleg Filatov, a graduate student of the Russian Presidential Academy of National Economy and Public Administration (RANEPA), dedicated his report «Main directions of NATO activities in confronting the malicious use of artificial intelligence» to this issue. The speaker drew attention to the fact that MUAI is not only a military threat to a particular country, but also a destabilizing informational and psychological factor in society on the eve of, during and after a military conflict.
Nguyen Minh Phuong, Candidate of Technical Sciences at the Department of Communication Systems and Telecommunications of the Russian Technological University, in his report «Malicious use of AI in Vietnam in the psychological aspect», focused on the psychological aspects of AI in Vietnam, illustrating them with the example of the General Video Game AI framework, which gives players access to an unlimited range of AI-driven games. In particular, the framework addresses the problem of developing an algorithm capable of playing any game, even one unknown a priori. This example vividly illustrates the impact of AI on young people’s dependence on computer games, which, as a rule, leads to a decrease in mental activity and in the critical perception of reality.
The presented reports covered various types and aspects of MUAI in the field of IPS and the experience of countering it. The speakers’ presentations aroused great interest among the audience, who asked numerous questions. The leitmotif of all the speeches was the recognition that the problem considered during the seminar is relevant for all regions of the world. All countries, some sooner, others later, will experience the challenges of the malicious use of AI. Therefore, the sooner various states realize this, the easier it will be for them to prevent the threat from MUAI.
The event ended with a speech by moderator Evgeny Nikolaevich Pashentsev, who summed up the results of the International Seminar of Young Researchers «The Political Situation in the Northeast Asia and Threats of Malicious Use of AI to Destabilize the International Psychological Security in the Context of Russia’s National Security», emphasizing the importance of the issue under discussion. E. N. Pashentsev noted that this seminar was an important stage in the work of young researchers and gave impetus to further study of the issues discussed, as well as to the development of a set of measures to counteract MUAI in the information and psychological sphere.
Following the results of this event, it is important to emphasize the value of international cooperation in this area. Holding such seminars can serve as an impetus for systematic, targeted research in this direction, since all over the world, including NEA, there is currently a delay in assessing the threats that MUAI poses to information and psychological security.
Young researchers from Russia, Belarus, India and Vietnam have highlighted the complexity of the MUAI problem, which requires comprehensive approaches based on interdisciplinary research.
Russia, relying on its national practices in the field of countering MUAI, should take into account the relevant international experience in countering high-tech threats in the field of information and psychological security. Scientific cooperation in this area can become a prologue to practical cooperation.
We also thank Darya Matyashova, a researcher at St. Petersburg State University, for her contribution to the Artificial Intelligence Study Group. (MARIUS VACARELU)