The Malicious Use of Artificial Intelligence Was Discussed at the XIII International IT-Forum

15 June 2022

From 7 to 9 June 2022, the 13th International IT-Forum was held in Khanty-Mansiysk. The Forum attracted more than 5,000 participants from around the world, particularly from BRICS and SCO states.

A range of international conferences were held within the framework of the Forum, including the Fourth International Conference “Tangible and Intangible Impact of Information and Communication in the Digital Age”. It was organized by the Government of the Khanty-Mansi Autonomous Area – Ugra, the Ministry of Digital Development, Telecommunications and Mass Media of the Russian Federation, the Commission of the Russian Federation for UNESCO, the UNESCO Information for All Programme, the Russian Committee of the UNESCO Information for All Programme, and the Interregional Library Cooperation Centre. The conference was opened by the Governor of Ugra, Natalia KOMAROVA, ambassadors of a number of countries, and representatives of international organizations.

One of the special conference sections was dedicated to the threats of malicious use of artificial intelligence (MUAI) and the challenges of psychological security. The section was presided over by Professor Evgeny PASHENTSEV, DSc, a leading researcher at the Diplomatic Academy of the Ministry of Foreign Affairs of Russia and coordinator of the International Research Group on Threats for International Psychological Security by Malicious Use of Artificial Intelligence (Research MUAI). In his opening speech, he stressed that, unfortunately, almost all of the AI ethics codes adopted in different countries share a common feature: they say nothing clear, or speak only very briefly, about the malicious use of AI, despite the obvious quantitative and qualitative growth of this massive threat in its manifestations and consequences. According to Evgeny Pashentsev, it is of crucial importance to think about the malicious use of AI and take appropriate measures now, since AI now penetrates our everyday lives and the threat posed by its malicious use may soon become the dominant one, while it remains far from adequately recognized by the majority of society. The malicious use of AI against psychological security is, in turn, in the focus of experts' attention because, at a certain stage of its qualitative development, and fairly soon, it can become an effective means of total control, and not just one more propaganda channel alongside radio, television and the Internet.

Arvind GUPTA

Arvind GUPTA (India), Head and Co-founder of the Digital India Foundation, examined in his paper “Are Algorithms Deciding What We See, Read and Consume? And if Yes, under What Ethical Frameworks?” the monopoly of big technology companies in critical areas such as operating systems, payments architecture, e-commerce, social media interactions and global advertisement revenues. Due to this monopoly, these companies bear little accountability for misinformation on social networks. The speaker also provided evidence of how the willful design of algorithms leads to the spread of fake news. Personalized recommendations create echo chambers and confirm users' biased judgments, and this bias is amplified through the use of bots, making users, whatever their political differences, vulnerable to news curated by bots. In addition, the outsourcing of the manufacturing of critical inputs to a few countries creates supply-chain security challenges for nation-states. All of this is unscrupulous corporate behavior that undermines a fundamental principle of an interconnected world: content neutrality.

To overcome these problems, A. Gupta recommended a number of practical measures: open access to algorithms for researchers, with proper incentives for AI studies; harmonization of policies regulating algorithms across the world; legislation defining the various types of information (personal, confidential, etc.); a prohibition on targeted advertising; local manufacturing of strategic industries' components; and the development of digital public goods under a privacy-by-design framework.
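The echo-chamber dynamic described above can be illustrated with a toy simulation: a feed that always serves the item closest to a user's current belief keeps that belief pinned to its starting slant, while a non-personalized feed lets it regress toward the average of the available content. Everything in this sketch (the belief update rule, the uniform content distribution, the parameters) is an illustrative assumption, not a model from Gupta's paper.

```python
import random

def personalized(belief, items, k=5):
    """Serve the candidate item closest to the user's current belief,
    a crude stand-in for engagement-maximizing personalization."""
    return min(random.sample(items, k), key=lambda x: abs(x - belief))

def diverse(belief, items, k=5):
    """Serve a random item regardless of the user's belief."""
    return random.choice(items)

def simulate(feed, b0, steps=200, lr=0.1, seed=1):
    """Track a belief in [-1, 1] as the user consumes the served items."""
    random.seed(seed)
    items = [random.uniform(-1, 1) for _ in range(1000)]  # content "slant"
    belief = b0
    for _ in range(steps):
        belief += lr * (feed(belief, items) - belief)  # drift toward what is shown
    return belief

if __name__ == "__main__":
    for b0 in (-0.6, 0.6):
        print(f"start {b0:+.1f}: personalized feed -> {simulate(personalized, b0):+.2f}, "
              f"diverse feed -> {simulate(diverse, b0):+.2f}")
```

Under the random feed the belief regresses toward the population mean, while the personalized feed keeps it near its starting position: a minimal version of the confirmation effect Gupta describes.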

Evgeny PASHENTSEV

Evgeny PASHENTSEV stressed in his paper “Malicious Use of AI in Agenda-Setting: Trends and Prospects in the Context of Rising Global Crisis” that numerous positive aspects of the use of AI in society in general, and in public communications in particular, are undeniable. However, due to the growing socio-political and economic contradictions in modern society, the development of geopolitical rivalries and international tensions, it can be assumed that the large-scale malicious use of AI through agenda setting is already taking place at national and global levels in the form of disinformation campaigns. At the same time, no government or transnational corporation will take responsibility for this. As in traditional forms of propaganda, these entities blame only their opponents, and do not publicly admit that they actively resort to propaganda themselves. New threats to agenda-setting and political stability are arising from the advantages of offensive and defensive psychological operations using AI. These advantages are increasingly associated with quantitative and qualitative departures from the traditional mechanisms of producing, delivering, and managing information; new possibilities for having psychological impacts on people; and the waging of psychological warfare. In particular, these advantages may include: (1) the volume of information that can be generated, (2) the speed at which information can be generated and distributed, (3) the believability of information, (4) the strength of the intellectual and emotional impacts that can be created, (5) the analytical data-processing capabilities that are available, (6) the use of predictive analytics resources based on AI, (7) the methods of persuasion that can be used, and (8) new capabilities for integration in the decision-making process. Based on a qualitative and rather approximate assessment of the data available from primary and secondary open access sources, Evgeny Pashentsev draws the preliminary conclusion that advantages 1 and 2 have already been achieved, whereas advantages 3–8 are in the developmental stage at the operational level.

The use of AI in radio, film, television and advertising is growing rapidly and manifests itself in a variety of forms. For example, researchers at Lancaster University and the University of California found that the average trustworthiness rating for synthetic faces was 7.7% higher than the average rating for real faces (Suleiman, 2022) [1], a statistically significant difference. Due to the crisis in the world economy, the degradation of democratic institutions in many countries, and increasingly acute geopolitical rivalries, MUAI through agenda-setting at the national and global levels is growing. The redistribution of material resources in favour of the super-rich over the years of the coronavirus pandemic not only increases socio-political tensions in society, but also creates additional opportunities and incentives for the growth of MUAI. The growth of billionaires' combined fortunes from $8 trillion to $13 trillion in the crisis year of 2020 (Dolan, Wang & Peterson-Withorn, 2021) [2], against the background of a record economic decline in recent decades, hundreds of millions of newly unemployed, and the growth, according to the UN, of the number of hungry people in the world from 690 million in 2019 (Kretchmer, 2020) [3] to 811 million in 2020 (World Health Organization, 2021) [4], does not contribute to solving these and other acute problems of our time. Of the ten largest personal fortunes in the world, six derive from Amazon (1), Microsoft (2), Google (2), and Facebook (1) (Forbes, 2021) [5]. At the end of 2019, the five Big Tech companies (Alphabet, Amazon, Apple, Microsoft, and Facebook, all of which are built on advances in AI technologies) had a combined market cap of $4.9 trillion; they gained 52% in value in a single crisis year, reaching $7.5 trillion by the end of 2020. Together with the rising technological giant Tesla Inc., these six companies were collectively worth almost $11 trillion in 2021 (Statista, 2021) [6].
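As a quick arithmetic check (using only the rounded figures quoted above, so small discrepancies are expected), the implied growth rates are:

```latex
% Consistency check of the rounded figures quoted in the text.
\[
\frac{13 - 8}{8} = 62.5\,\% \qquad \text{growth of billionaires' combined wealth in 2020}
\]
\[
\frac{7.5 - 4.9}{4.9} \approx 53\,\% \qquad \text{Big Tech market-cap gain, matching the cited 52\% up to rounding}
\]
```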

In the near future, antidemocratic regimes will focus the entire set of AI technologies associated with agenda-setting on keeping their populations under control. Such regimes in countries with large military and economic potential can then focus more on psychological aggression against other nations, thereby turning agenda-setting into an important element of hybrid warfare. It should be borne in mind that the relative cheapness and ease of transferring AI software, as well as the involvement of AI specialists in criminal activities, allow psychological operations through AI to be carried out by relatively small groups of people, which can destabilize the situation in a country or even at the global level. However, this only underlines the importance of the skillful use of AI technologies by socially oriented forces, not only at the level of public administration but also through various structures of civil society, in order to neutralize threats to the psychological security of society. Evgeny Pashentsev's paper was prepared with the financial support of the RFBR and the VASS, project No. 21-514-92001 “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia”.

Darya BAZARKINA

Darya BAZARKINA, a leading researcher at the Institute of Europe of the Russian Academy of Sciences and a member of Research MUAI, presented the paper “Artificial Intelligence in Terrorists' Hands: Ways of Influencing Public Consciousness”, in which she outlined current and future threats posed by the use of AI by terrorist organizations and individual terrorists. She noted that communication remains one of the main aspects of terrorist activity. Terrorists' propaganda, recruitment and fundraising take place not just in the digital arena but also involve a wide range of sophisticated technologies: new encryption tools, cryptocurrencies, operations on the darknet, etc. At the same time, more and more crimes are committed with the help of social engineering tools (psychological manipulation aimed at inducing a person to perform certain actions or share confidential information). Given the importance for terrorists of influencing public consciousness, as well as the convergence of terrorism and cybercrime, terrorist organizations and lone-wolf terrorists can actively use the mechanisms of social engineering in their psychological operations. This threat to the psychological security of society (and in some cases its physical security) is already a reality. It may become even more pressing with the development and spread of AI technologies, which, if used maliciously, can facilitate the tasks of social engineering even for criminals without special technical knowledge.

Since 2015, the so-called Islamic State (IS) has used bots to exchange instructions and coordinate terrorist attacks. IS and Al-Qaeda use Telegram bots to provide access to content archives. Terrorists use bots not only for agenda-setting but also to coordinate active and potential fighters and, as a result, to expand their audience among active users of existing AI products. Open advertisements show that, for the purposes of social engineering, terrorists would like to attract not only users but also AI developers to their ranks. Terrorist propaganda in EU countries is currently aimed at encouraging individuals to commit terrorist attacks in their places of residence, with drones among the suggested methods. As their combat power declines, terrorists move from direct armed clashes to attacks in which the perpetrator is removed from the target. Theoretically, even the use of modern robotics by terrorists, primarily unmanned aerial vehicles, carries an element of social engineering. Some research suggests that people who use autonomous technology may experience a decline in their capacity for decisions involving moral choice, self-control or empathy. In the case of a terrorist organization, this may amount to a deliberate removal of personal responsibility from the person who commits the terrorist act. At the same time, according to Darya Bazarkina, social engineering in the narrow sense (psychological manipulation in order to obtain passwords and other confidential data) is also used by terrorist groups.

Peter MANTELLO, a researcher from Italy, noted in his paper “Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization” that mass campaigns of political indoctrination were once the province of the state. Today, however, low-cost or even free, easy-to-create bots have lowered the entry bar for violent extremist organizations, allowing them to build affective bonds with social media users in order to mobilize them to act radically and violently. According to Peter Mantello, one of the most rapidly growing areas of AI development is interactive software deployed on social networks to replace or enhance human “efforts” for various purposes. These computational agents, known as “web robots”, “social bots” or “chatbots”, perform a variety of functions, such as automatically generating messages, advocating ideas, and acting as followers of users or as surrogate agents. Conversational software applications are now augmenting and replacing human efforts across an ever-expanding array of fields: advertising, finance, mental health counselling, dating, and wellness. AI can be trained on social network interactions, news articles and other text content, allowing it to “understand” the context of a situation, its history and ethics, and thus to interact in a more human-like manner.
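As a purely generic illustration of the mechanism Mantello describes (conversational agents that draw on prior interactions to respond in context), the sketch below implements a minimal retrieval-based reply bot: it indexes a small corpus of past message-reply pairs and answers a new message with the reply attached to the most lexically similar one. The corpus and the similarity measure are assumptions for illustration only; real social bots rely on far more capable language models.

```python
from collections import Counter
import math

# Toy corpus of (message, reply) pairs the bot has "seen" (illustrative only).
CORPUS = [
    ("what is the weather like", "It looks sunny where I am!"),
    ("tell me about the news today", "Here is a headline I found interesting."),
    ("do you agree with this article", "Absolutely, it makes a strong point."),
]

def vectorize(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(message):
    """Answer with the reply paired to the most similar past message."""
    vec = vectorize(message)
    best, _ = max(((r, cosine(vec, vectorize(m))) for m, r in CORPUS), key=lambda p: p[1])
    return best

print(reply("any news today"))  # -> "Here is a headline I found interesting."
```

The point of the sketch is only that a bot needs no understanding to appear responsive: simple statistical matching over past interactions already yields contextually plausible replies, which is what makes such agents cheap to deploy at scale.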

Peter MANTELLO

Mantello noted that such achievements allow artificial agents to read the emotional state of users and react in an appropriately “emotional” way, increasing the anthropomorphic attractiveness of AI. Like other AI applications, social media bots were promising tools for achieving the public good, but their malicious misuse by state and non-state actors is already a reality. Hostile states and non-state actors use social bots to increase the speed and scale of the spread of disinformation on the Internet, create fake accounts on social networks, collect the personal data of unsuspecting users, impersonate people's friends and associates, and manipulate political communications. In the hands of militant extremist organizations, AI-based bots are rapidly replacing human propagandists, recruiters and troll armies. Currently, such bots are also used to recruit neo-Nazis in Germany and white supremacists in the United States. Organizations like Al-Qaeda and IS have the most extensive experience and the widest presence on social networks and thus represent the most dangerous and disturbing example of this trend on a global scale. The researcher also emphasized the growing dependence of modern warfare on intelligent machines and their special value for parties with fewer resources or traditional weapons in an asymmetric conflict.

Anna BYCHKOVA

Anna BYCHKOVA, head of the Scientific Research Department of the Irkutsk Institute (branch) of the All-Russian State University of Justice, in her paper “International Legal Aspects of Countering Threats to Psychological Security Evoked by the Malicious Use of Artificial Intelligence”, argued for the importance of the transition from moral regulation of AI to legal regulation: the time for protecting psychological security by appealing to the norms of ethics has passed, and the time of legal norms is coming. Legal norms, however, inherit from ethical ones. Awareness of the risks of using AI has led to the development of ethical standards at the level of individual countries. Thus, the Russian “Code of Ethics in the Field of AI”, an act of a recommendatory nature that contains ethical norms and establishes general principles as well as standards of behavior for actors in the field of AI, operates with phrases such as “AI actors should treat responsibly …”, “AI actors should not allow …”, “AI actors are recommended …”, etc. The Chinese approach is fundamentally different: ethics in the field of AI is not an independent subject of consideration but forms a triad together with the legislation and regulation of AI. It defines the main goal as the creation of an AI that people can trust (Trustworthy AI). Such an AI has five characteristics: (a) it is reliable and manageable; (b) its decisions are transparent and understandable; (c) its data is protected; (d) its responsibility is clearly regulated; (e) its actions are fair and tolerant towards all communities. The planning, development and implementation of practices ensuring each of the five characteristics is carried out at the corporate level, while a system of standards and controls for all practices is being developed at the industry level (a joint responsibility of industry associations and the state).

In connection with the last point, Anna Bychkova pointed to the formation of quasi-legal norms by Big Tech platforms, which have become, in fact, “states within a state”. Given the scale of Big Tech's influence on society as a whole, there is a real need for an agreement between representatives of these digital platforms and the state, which is obliged to defend its interests and protect the rights of its citizens. At the same time, while some countries, including Russia, only timidly urge those involved in AI to behave ethically, China is forming a comprehensive system of ethical, regulatory and legislative practices that make it possible to trust AI systems.

Sergey SEBEKIN

Sergey SEBEKIN, senior lecturer at Irkutsk State University, pointed out in his paper “Malicious Use of Artificial Intelligence to Undermine Psychological Security as an Act of Aggression: Issues of Classification in International Law” that existing “traditional” international law is not yet adapted to classifying the malicious use of AI as an act of aggression, since it was formed in the “pre-information” industrial era, when conventional weapons were the decisive factor in achieving military-political objectives and influencing the strategic situation. At the same time, the need for such a legal classification has already matured.

The phenomenon of malicious psychological impact through AI is dual: on the one hand, there is the nature of the impact, which is psychological, affecting people's thoughts and consciousness, and which has been exploited for several hundred years; on the other hand, there is the instrument of influence, i.e. AI, the full application of which can be expected in the future. Neither component of this complex impact falls fully within the field regulated by international law from the point of view of qualification as an act of aggression. According to Sergey Sebekin, the main criterion by which the malicious use of AI to undermine psychological security could be qualified as an act of aggression is the effects produced and the consequences of such influence. Thus, resolving the question of how to qualify the use of AI for the purposes of psychological destabilization requires the search for effects equivalent to an act of aggression, which in this case can be expressed through socio-political, economic and physical consequences.

Vitali ROMANOVSKI, Chief Adviser of the Belarusian Institute of Strategic Research (Belarus), presented the paper “Malicious Use of Artificial Intelligence for Disinformation and Agenda-Setting: The Case of African States”. He pointed out that digital disinformation and agenda formation are becoming an increasingly common feature of the domestic political landscape of Africa. For example, in their report “Industrialized Disinformation: 2020 Global Inventory of Organised Social Media Manipulation”, Oxford University researchers found evidence that social media were used to spread computational propaganda and disinformation about politics in 81 countries worldwide, among them Tunisia, Libya, Egypt, Sudan, Ghana, Nigeria, Ethiopia, Kenya, Angola, Zimbabwe and South Africa.

Moreover, according to National Democratic Institute data, from 1 January 2020 to 31 July 2021 African states held 32 different elections, and various intergovernmental and non-governmental organizations used the term “fake news” in their reports on the respective election campaigns. Among these reports are the Final Report 2020 of the European Union Election Observation Mission to Burkina Faso; the Final Report 2020 of the European Union Election Observation Mission to Ghana; the United Nations Central African Republic Report of the Secretary-General S/2021/571; and “Digital Voter Manipulation: A situational analysis of how online spaces were used as a manipulative tool during Uganda's 2021 General Election” by the African Institute for Investigative Journalism and the Konrad Adenauer Stiftung, among others.

Vitali ROMANOVSKI

According to Vitali Romanovski, in view of the growing evidence of politically motivated media manipulation in several African states, it is reasonable to assume that AI-based deepfake technologies will be used more and more often to shape the agenda. This assumption is supported by the Europol 2022 report “Law Enforcement Agencies and the Challenge to Deepfakes”, which states that the technology can contribute to various types of criminal activity, including spreading disinformation, manipulating public opinion and supporting the narratives of extremist or terrorist groups. National governments should develop more consistent policies to counter disinformation; for example, they could consider creating a specialized interdepartmental state structure for this purpose, whose most important task would be to promptly inform internal and external audiences about registered cases of disinformation.

Pierre-Emmanuel THOMANN

The topic of the malicious use of AI was also considered by speakers in other sections. Pierre-Emmanuel THOMANN, president of the “Eurocontinent” international association, Professor at Jean Moulin University Lyon III (France) and a Research MUAI member, presented the paper “Artificial Intelligence and Europe: Between Ethics and Great Power Geopolitics”. He argued that the systemic nature and effect of the strategic malicious use of AI would be made possible by an increase in actors' room for manoeuvre in space and time. Great powers that implement AI-enhanced strategies, achieving supremacy in spatial dimensions such as the land, maritime, air, cyber, space and cognitive domains, and gaining time through the anticipation capacity afforded by predictive analytics, could overthrow the international order or make its stabilization impossible. Studies devoted to the geopolitical implications of the strategic malicious use of AI and its implications for the EU and international psychological security are currently lacking. Analysis of the risks of the malicious use of AI in international relations tends to focus on threats to democracy and on the use that non-democratic regimes can make of it. The link between AI and international relations, and the possible consequences the former might have in systemic terms (i.e., the mutation of the geopolitical configuration), is awaiting investigation.

The strategic malicious use of AI is likely to have a decisive effect on the evolution of the geopolitical configuration, leading to a reinforcement of hierarchies and inequalities and possibly to a new America-China bipolarity. Pierre-Emmanuel Thomann also analyzed the European Union's perspective on these processes. The EU recognizes that the United States and China will dominate AI and digitalization in the international geopolitical arena in the coming years. Until 2020, the EU's main focus regarding AI and digitalization was on their ethical, normative and economic aspects in the context of regulating the EU common market, and this is reflected in its main communication strategy. This is in line with the EU's promotion of ‘multilateralism' as an international doctrine in the Global Strategy for the Foreign and Security Policy of the EU (the EU Global Strategy, EUGS), which is intended to foster international cooperation at the European and global levels. However, the EU has not changed its doctrinal position on multilateralism and refuses to accept a multipolar model of the world order. The Union promotes strategic autonomy but considers itself an adjunct to NATO and the United States, its main strategic partner. Thus, EU policy de facto corresponds to the US policy of unipolarity.

Marius VACARELU

Marius VACARELU, Professor at the National School of Political and Administrative Studies (Romania) and a member of Research MUAI, proceeded in his paper “Global vs. Regional Political Competitions under Artificial Intelligence Development” from an understanding of political competition as a natural situation and a source of technological progress and economic development. At the same time, certain standards of political competition exist for leaders as well as for economies, armies and education systems. Marius Vacarelu raised the question of whether AI developments today can offset the costs of political competition (leading to a reduction in military and economic expenditure) and whether AI will become the most important tool of such competition in the future. The speaker pointed out that the superiority of a particular region in the field of AI can lead to new alliances pursuing global goals and, possibly, to wars, for which it is necessary to set a limit on the permissibility of the use of force.

The papers presented indicate the need for further interdisciplinary study of the threats to psychological security posed by the malicious use of AI. Comprehensive solutions combining legal, political, economic, technological, psychological and educational measures are already needed to combat these threats. The problem of the malicious use of AI aroused keen interest among representatives of the academic community and political circles, not only in Russia but also abroad. All participants in the discussion offered practical recommendations that can be used in preparing the final document of the conference.

[1] Suleiman, E. (2022). Deepfake: A New Study Found That People Trust “AI” Fake Faces More Than Real Ones. Retrieved 12 June 2022, from https://reclaimthefacts.com/en/2022/03/25/deepfake-a-new-study-found-that-people-trust-ai-fake-faces-more-than-real-ones/

[2] Dolan, K., Wang, J., & Peterson-Withorn, C. (2021). The Forbes World’s Billionaires List. Retrieved 5 November 2021, from https://www.forbes.com/billionaires/

[3] Kretchmer, H. (2020). Global hunger fell for decades, but it’s rising again. Retrieved 5 November 2021, from https://www.weforum.org/agenda/2020/07/global-hunger-rising-food-agriculture-organization-report/

[4] World Health Organization. (2021). UN report: Pandemic year marked by spike in world hunger. Retrieved 5 November 2021, from https://www.who.int/news/item/12-07-2021-un-report-pandemic-year-marked-by-spike-in-world-hunger

[5] Forbes. (2021). The World’s Real-Time Billionaires. Retrieved 28 November 2021, from https://www.forbes.com/real-time-billionaires/#1d7a52b83d78

[6] Statista. (2021). S&P 500: largest companies by market cap 2021. Retrieved 28 November 2021, from https://www.statista.com/statistics/1181188/sandp500-largest-companies-market-cap/
