On February 5, 2019, the international press centre of MIA “Russia Today” in Moscow hosted a round table on “International Safer Internet Day. Global Trends and Common Sense”. Safer Internet Day was started in 2004 by Insafe, a European network of awareness centres promoting safer and better use of the Internet. It quickly spread beyond Europe, however, and became a worldwide initiative that is now celebrated in over 140 countries. Safer Internet Day (SID) celebrations aim to raise awareness of a safer and better Internet, where everyone is empowered to use technology not just safely but also responsibly, respectfully, critically and creatively.
Photo: Vladimir Trefilov, RIA Novosti
The event at MIA “Russia Today” was organized by the Center of Internet Technologies (ROCIT) and the international media group MIA “Russia Today” (Rossiya Segodnya), with the support of the Russian Association of Electronic Communications (RAEC) and the Coordination Center for the .RU/.РФ domains.
Experts and participants of the round table were:
Igor ASHMANOV, General Director of Ashmanov and Partners;
Maxim BUYAKEVICH, Deputy Director of the Information and Press Department of the Ministry of Foreign Affairs of the Russian Federation;
Sergey PLUGOTARENKO, Director of RAEC;
Urvan PARFENTIEV, ROCIT;
Alexander MALKEVICH, Public Chamber of the Russian Federation;
Evgeny PASHENTSEV, Professor, leading researcher at the Institute of Contemporary International Studies at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation; Coordinator of GlobalStratCom
Anna SEREBRYANNIKOVA, Association of Participants of the Market of Big Data;
Artem SOKOLOV, Association of Internet Trade Companies;
Andrei VOROBYOV, Coordination Center for the .RU/.РФ domains;
Victor LEVANOV, Institute of Internet Development.
The discussion was moderated by Peter Lidov-Petrovsky (Director of Communications and Public Relations, MIA “Russia Today”) and Sergey Grebennikov (Director of ROCIT). Within two hours the speakers discussed the hot issues of cyber-security, from fake news to legislative initiatives and their assessment by the expert community. It was these issues that provoked the liveliest (and at times toughest) discussion at the round table. A detailed video recording of the round table in Russian and accompanying photographs provide a clear picture of the event.
Photo: Vladimir Trefilov, RIA Novosti
Professor Evgeny Pashentsev’s speech on the topic “Artificial Intelligence and Challenges to International Psychological Security on the Internet” was met with great interest. Below we present the full text of his speech at the round table.
The Internet undoubtedly provides great opportunities for the development of human civilization, but it also contains many threats. I would like to speak about some of them in the context of the implementation of artificial intelligence capabilities in the Internet environment and the new challenges to international psychological security today. Why psychological security? Because the adequate behaviour of state and non-state actors in the international arena is a guarantee, if not of peace, then at least of a balanced approach by the parties to genuinely serious international issues. In the current situation, the psychological destabilization of actors may lead rather easily to a world war.
Among the possible threats of malicious use of AI (MUAI) through the Internet which may be dangerous to international stability, I can name the following:
The growth of complex comprehensive systems with active or leading AI participation increases the risk of malicious interception of control over such systems. Numerous infrastructure objects, for example robotic and self-learning transport systems with a centralized AI-controlled core, can become a convenient target for high-tech terrorist attacks through the Internet. Thus, interception of a centralized AI traffic control system in a large city could lead to numerous victims. According to Marc Ph. Stoecklin, principal research staff member and manager at Cognitive Cybersecurity Intelligence (CCSI), a class of malware like DeepLocker “has not been seen in the wild to date; these AI tools are publicly available, as are the malware techniques being employed – so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals. In fact, we would not be surprised if this type of attack were already being deployed”.
Terrorists are repurposing commercial AI systems. Commercial systems can be used in harmful and unintended ways, such as deploying drones or autonomous vehicles to deliver explosives and cause crashes.
Researchers are in a pitched battle against deepfakes, artificial intelligence algorithms that create convincing fake images, audio and video, but it could take years before they invent a system that can sniff out most or all of them. A fake video of a world leader making an incendiary threat could, if widely believed, set off a trade war, or even a conventional one. Just as dangerous is the possibility that deepfake technology spreads to the point that people are unwilling to trust any video or audio evidence. For example, a fake video of Prime Minister Benjamin Netanyahu or other government officials talking about impending plans to take over Jerusalem’s Temple Mount and Al-Aqsa Mosque could spread like wildfire in the Middle East.
Amplification and agenda setting
Studies indicate that bots made up over 50 percent of all online traffic in 2016. Entities that artificially promote content can manipulate the “agenda setting” principle, which holds that the more often people see certain content, the more important they think it is. Reputation damage through bot activity during political campaigns, for example, could be exploited by terrorist groups to attract new supporters or to organize the killing of politicians.
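As a toy illustration of this amplification effect (all topic names and post counts are invented for illustration), a small simulation shows how a block of bot posts pushing a single topic can dominate a trending list ranked by raw frequency:

```python
import random
from collections import Counter

random.seed(0)

# 100 human posts spread over varied topics vs. 150 bot posts pushing one
# topic; a "trending" list ranked by raw post counts then surfaces the
# bot-pushed topic first, exploiting the agenda-setting principle.
human_posts = [random.choice(["economy", "sports", "weather", "culture"])
               for _ in range(100)]
bot_posts = ["fringe-agenda"] * 150

counts = Counter(human_posts + bot_posts)
trending = [topic for topic, _ in counts.most_common()]
print(trending[0])  # the artificially promoted topic tops the feed
```

Since no human topic can exceed 100 posts, the bot-pushed topic is guaranteed to rank first; real ranking algorithms weigh more signals, but sheer volume remains a lever.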
Sentiment analysis provides a very accurate analysis of the overall emotion of text content drawn from sources like blogs, articles, forums and surveys. It may be a very useful tool for terrorists too.
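As a minimal sketch of how lexicon-based sentiment analysis works (the word lists and test sentences below are invented; production systems use learned models over far larger vocabularies):

```python
# Toy lexicon-based sentiment scorer: illustrative word lists only.
POSITIVE = {"good", "great", "safe", "secure", "trust"}
NEGATIVE = {"bad", "unsafe", "attack", "threat", "fear"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: negative = hostile tone, positive = favorable."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The network is safe and we trust it"))   # 1.0
print(sentiment_score("A new attack spreads fear and threat"))  # -1.0
```

Scaled up over millions of posts, the same idea yields the aggregate emotional picture of a population that the text describes.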
AI, machine learning (ML) and sentiment analysis are said to “predict the future through analyzing the past” – the Holy Grail of the finance sector, but potentially of terrorists too. One of the existing products in the field of “anticipatory intelligence” has been in operation for more than three years: the EMBERS program, launched by IARPA back in 2012. Its full name is Early Model Based Event Recognition using Surrogates. The program uses big data to predict significant events such as social unrest, disease outbreaks and election results in South America: the clashes on the streets of Venezuela in February 2014, or the situation in Brazil ahead of the ‘farce’ and ‘coup’ against Dilma Rousseff. Now suppose such a program, in bad hands, for example those of terrorists, predicts that coming unrest will produce several victims and a protest demonstration of about 10,000 people that will not lead to the overthrow of the government. Certain terrorist structures, having received this information a month before the event, may try to further aggravate the situation: increasing the number of victims of the “rotten liberal regime” or “bloody dictatorship” (according to the situation), adding more significant figures to their number, then checking the consequences again through the program and correcting the results.
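The general idea of such “anticipatory intelligence” systems, fusing weak surrogate signals into a probability of a significant event, can be sketched in a few lines; the signal names, weights and logistic form below are invented for illustration and are not EMBERS’s actual model:

```python
import math

# Hypothetical surrogate signals for one city-week, each scaled to [0, 1].
# Real systems fuse many more sources (social media, prices, news volume)
# with weights learned from historical event data.
signals = {"protest_tweets": 0.8, "food_price_spike": 0.4, "news_mentions": 0.6}
weights = {"protest_tweets": 2.0, "food_price_spike": 1.5, "news_mentions": 1.0}
bias = -2.5  # baseline log-odds of unrest in a quiet week

# Logistic fusion: weighted sum of signals squashed into a probability.
z = bias + sum(weights[k] * v for k, v in signals.items())
prob_unrest = 1.0 / (1.0 + math.exp(-z))
print(f"Forecast probability of unrest: {prob_unrest:.2f}")
```

The forecast is only as good as the signals feeding it, which is precisely why an actor who can manipulate those signals, or act on the forecast, gains leverage.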
One can imagine that, based on a combination of techniques of psychological impact, complex AI systems and Big Data, in the coming years there will appear synthetic information products similar in nature to modular malicious software. However, they will act not on inanimate objects, social media resources, etc., but on humans (individuals and masses) as psychological and biophysical beings. Such a synthetic information product will contain software modules that drive masses of people into depression; after the depression comes the latent period of the suggestive programs. Appealing to habits, stereotypes and even psychophysiology, these will encourage people to perform strictly defined actions (Larina and Ovchinskiy 2018, 126–127).
We have highlighted only some of the possibilities of MUAI through the Internet, which can pose a great danger in the hands of state and non-state asocial groups.
And finally, I want to stress that all the forms of MUAI mentioned here are connected not with the “bad intentions” of Narrow (Weak) AI, but with the egoistic interests and ill will of asocial, reactionary groups, which are a real threat to human civilization.
It is curious to see rapid and drastic changes in the awareness of the potential threats of AI among the public authorities and security community of the USA. A 2016 White House document of the outgoing administration of Barack Obama cited experts’ assessments that General AI will not be achieved for at least decades. Two years later, there is a clear reassessment of the possible threat from General AI in the US national security bodies. The GAO 2018 report focuses on long-range emerging threats, those that may occur in approximately five or more years, as identified by various respondents at the Department of Defense (DOD), Department of State (State), Department of Homeland Security (DHS), and the Office of the Director of National Intelligence (ODNI). Among dual-use technologies, AI is first on the GAO report’s list. Moreover, the only two examples of AI threats given are deeply interrelated: 1) Nation State and Non-state Development of AI; 2) Intelligent Systems with General AI. It is no coincidence that all these changes in US approaches to the possibility of General AI appeared in the last two years.
The survey “When Will AI Exceed Human Performance? Evidence from AI Experts”, prepared in 2018 by researchers from Oxford and Yale Universities and AI Impacts, is based on evidence from AI experts. The survey population was all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning); a total of 352 researchers responded to the survey invitation (21% of the 1,634 authors contacted). The survey used the following definition: “High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.
Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years. The survey displays the probabilistic predictions for a random subset of individuals, as well as the mean predictions. There is large inter-subject variation: the figures of the survey show that Asian respondents expect HLMI in 30 years, whereas North Americans expect it in 74 years. A similar gap separates the two countries with the most respondents in the survey: China (median 28 years) and the USA (median 76 years).
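The survey’s aggregation method, taking the mean of individual probability curves and reading off fixed probability levels, can be illustrated with a short Python sketch; the three expert curves and their logistic shape are invented for illustration and are not the survey’s actual data:

```python
import math

def expert_cdf(median_year: float):
    """Assume each expert's arrival-time CDF is logistic around their median."""
    return lambda t: 1.0 / (1.0 + math.exp(-(t - median_year) / 10.0))

# Three hypothetical experts with personal medians of 28, 45 and 76 years.
experts = [expert_cdf(m) for m in (28.0, 45.0, 76.0)]

def aggregate(t: float) -> float:
    # Mean of the individual CDFs, evaluated at year t from now.
    return sum(f(t) for f in experts) / len(experts)

# Find the first year at which the aggregate probability reaches 50%.
year_50 = next(t for t in range(0, 200) if aggregate(t) >= 0.5)
print(f"Aggregate 50% forecast: ~{year_50} years")
```

Note that the aggregate 50% point need not equal the median of the individual medians: the long-tailed optimists and pessimists both pull on the averaged curve.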
This seems to be a key to understanding the concerns of the security community in the USA. The majority of researchers in different countries now believe that General AI is a realistic goal, achievable not in centuries but in decades, or, for a minority, even years.
Of course, this is a good reason for serious concern among security professionals, who possess both open and classified information on the matter. However, the huge discrepancy between the Chinese and US experts can hardly be explained by a complete break from reality on the part of AI specialists in China, a country which is quickly catching up with the US in AI and in some areas is already clearly ahead. This mass judgment of the best Chinese experts in favour of a very early emergence of General AI is apparently based on something real, which explains the serious concerns of security experts in the United States.
Alas, false conclusions are drawn from this growing lag. The main problems of the US lie not in the “aggressiveness” of China and Russia, but in the rising corruption and inefficiency of parts of the country’s elites. China’s breakthrough in AI and other areas of research is not the result of commercial secrets stolen from the USA: if they were stolen, why is the USA itself not able to move ahead as rapidly on its own technologies as China does? Marcellus’s famous words come to mind: “Something is rotten in the state of Denmark”; something is rotten in the state of … And this is extremely bad, because the creative potential of such a great country as the USA is far from fully realized, which is bad for US citizens and for the entire world. It is also a lesson for Russia, which as a result of the “reforms” begun in the 1990s can now compete with the USA chiefly in two areas: the military sector and the question of who is the most powerful energy producer.
International collaboration in understanding the challenges connected with General AI seems to require the establishment of an expert group at the UN. If we received a signal from an extraterrestrial civilization that its representatives might be on Earth in 10 years, we would start preparing for it. So what about the case of General AI?
In the context of today’s topic, it is important to state clearly that the Internet is the main means of transmitting scientific, popular and tabloid information on AI. It is an important tool for accelerating the creation of AI, because to some extent the Internet integrates the capabilities of humankind in this area. The collapse of the Internet and the emergence of hard firewalls would not stop the creation of General AI, but could slow it down, as well as its subsequent distribution. Even the problems of Weak AI increasingly affect humanity, including the financial and psychological aspects of robotics. A much more powerful effect on humankind will come from progress towards General AI itself.
It is very likely that the creation of General AI within a few years will lead to its self-improvement, the emergence of Super AI, and entry into a period of singularity. This is not a given, not a guarantee, but a real possibility. Quite a few experts see no chance here for humanity.
As it happens, back in the early 1990s I wrote that AI and robotics are not the end of humanity, but one of the conditions for its further progress.
1. Unlike hypothetical aliens, in the case of General AI we will be dealing with an intelligence that arises from the historical, scientific, philosophical and cultural sense of modern human civilization; an intelligence that will advance faster and further than any past human generation. But this intelligence will inherit the heritage of the human race. We do not consider our ancestors who lived two thousand years ago animals, though under many circumstances they would consider us gods. It is another matter that this intelligence may not want to put up with some unsympathetic manifestations of contemporary mankind that are very close to the cruel traditions of the past.
2. Much depends on us: what will this intelligence be? Will it carry our legacy or not? We could destroy ourselves just before a new intelligence appears on Earth; that is a grim possibility.
3. It is also important that General AI will not be a product of humanity in general, but of specific people. Different options are on the table, up to the appearance of General AI in a laboratory controlled by anti-social, reactionary, militaristic or other such circles. If the environment often deforms people (of different intelligence), why would this not apply to General AI?
4. We can integrate ourselves into the process of entering the singularity through cyborgization and genetic restructuring that increase our intellectual capabilities.
5. We can consider the nature of General AI as the possible emergence of an integrated intelligence with its own will and feelings (albeit quite different from human ones), but its birth and initial development will take place in the human environment, on the basis of human information and knowledge, and nothing else. It is another matter if we get an integrated, powerful intellectual potential capable of solving problems only on human target designation. Then we are dealing simply with a more powerful machine, and the advantages of its use will depend on the people who direct it. Perhaps the second will precede the first. Let us see.
These are only some of the obvious points that do not oblige us, in mystical horror, to bow our heads onto the chopping block of a ruthless singularity. A lot depends on us, human beings.
During the round table, “Safer Runet Week 2019” was officially launched; its culmination will be the international Cyber Security Forum 2019 (14 February 2019). The program of the Forum covers topical issues of cyber-security and related topics: personal data (including the introduction and operation of the General Data Protection Regulation, GDPR); financial security (including in the crypto industry); security of mobile devices and applications; countering mass cyber threats (associated with the emergence of new technologies for hacking accounts, including on social networks); and maintaining a positive and safe content environment for users.
 Kirat, D., Jang, J., and Stoecklin, M. Ph. (2018). DeepLocker – Concealing Targeted Attacks with AI Locksmithing. [online] Black Hat. Available at: https://www.blackhat.com/us-18/briefings/schedule/#deeplocker—concealing-targeted-attacks-with-ai-locksmithing-11549 [Accessed 31 January 2019].
 Brundage, M., Avin, Sh., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, Th., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó HÉigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., Lyle, C., Crootof, R., Evans, O., Page, M., Bryson, J., Yampolskiy, R., and Amodei, D. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Oxford, UK: Future of Humanity Institute, University of Oxford, p. 27.
 Waddel, K. (2018), The Impending War over Deepfakes. [online] Axios. Available at: https://www.axios.com/the-impending-war-over-deepfakes-b3427757-2ed7-4fbc-9edb-45e461eb87ba.html , [Accessed 31 January 2019].
 The Times of Israel (2018). ‘I Never Said That!’ The High-Tech Deception of ‘Deepfake’ Videos. [online] The Times of Israel. Available at: https://www.timesofisrael.com/i-never-said-that-the-high-tech-deception-of-deepfake-videos/ [Accessed 31 January 2019].
 Horowitz, M. C., Allen, G. C., Saravalle, E., Cho, A., Frederick, K., and Scharre, P. (2018). Artificial Intelligence and International Security. Washington: Center for a New American Security (CNAS), pp. 5–6.
 See: Doyle, A., Katz, G., Summers, K., Ackermann, Chr., Zavorin, I., Lim, Z., Muthiah, S., Butler, P., Self, N., Zhao, L., Lu, Ch.-T., Khandpur, R. P., Fayed, Y., and Ramakrishnan, N. (2014). Forecasting Significant Societal Events Using the EMBERS Streaming Predictive Analytics System. Big Data, 4, 185–195.
 Larina, E., and Ovchinskiy, V. (2018). Iskusstvennyj intellekt. Bol’shie dannye. Prestupnost’ [Artificial Intelligence. Big Data. Crime]. Moscow: Knizhnyj mir, pp. 126–127.
 Executive Office of the President, National Science and Technology Council, Committee on Technology (2016). Preparing for the Future of Artificial Intelligence. Washington, DC, pp. 7–8.
 U. S. Government Accountability Office (GAO). (2018). Report to Congressional Committees National Security. Long-Range Emerging Threats Facing the United States as Identified by Federal Agencies. GAO-19-204SP. Washington, DC: GAO, p. 8.
 Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. (2018). When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research, 62, 729–754.
 Idem, p. 5.