“No one knows for sure how exactly they work” British university researcher Valery Adzhiev on neural networks and the virtual and real dangers of AI

Photo: unsplash.com

Is Artificial Intelligence an opportunity or a threat? Over the past few months, tech specialists and society as a whole have been grappling with this question. The tabloids are awash with headlines about the potential hazard AI poses to the human race. How dangerous is the development of AI technology? How long does humanity have left before the advent of superintelligence? Why do scientists and IT entrepreneurs want to halt the development of AI? In an interview with Kommersant UK, Valery Adzhiev, Principal Academic at the National Centre for Computer Animation at Bournemouth University, talked about tech trends and their consequences.

The use of AI has been much discussed in recent months. Can you explain the concept of Artificial Intelligence? 

An ordinary computer system is like a calculator, only a lot more powerful. It contains an algorithm installed by its programmers which sets out its calculation process step by step, and the system solves problems by faithfully following that algorithm. By contrast, Artificial Intelligence systems trained on large data sets are, to a certain degree, capable of developing algorithms themselves to perform the tasks they have been set. The programmers retain a key role, however: they define the so-called architecture of the specific AI system (which can vary), and they set the rules which determine the nature of the task to be performed. In layman's terms, what distinguishes an AI system from a calculator is whether, like a person, it can perform creative tasks, or even think. At the dawn of the computer age, Alan Turing devised a way to determine whether a machine could think: if its automatically generated answers cannot be distinguished from those given by a real person, then we can say that the machine possesses an intellect. Modern programs can pass the Turing Test, if only in certain dialogues, but the test is now associated with traditional AI, which performs narrowly defined tasks and automates routine human activity (such as automatically translating a text).

Later, the concept of Artificial General Intelligence (AGI) appeared: a system which, as well as being indistinguishable from a person in a dialogue, can also solve quite general tasks not focused on one narrow aim, an ability which was previously a human prerogative. Instead of simply processing information, to a certain extent these programs understand its meaning, allowing them to fulfil functions such as that of a virtual PA or secretary with a wide range of duties. This is the current stage of AI's development, and it is still far from perfect. Specialists have now begun to talk about the next generation: superintelligence. Such AI will, to a significant degree, be able to replace, or even surpass, humans in certain areas of activity. Some specialists believe this may happen within ten years.
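As a minimal illustration of this distinction (not something from the interview itself), consider the following Python sketch: one conversion rule is written out step by step by a programmer, while an equivalent rule is recovered purely from example data. The data and function names are invented for the example.

```python
import numpy as np

# A "calculator-style" program: the programmer spells out every step of the rule.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5.0 / 9.0

# A "learned" program: the conversion rule is never written down by a human.
# Instead, it is inferred from example pairs by least-squares fitting.
examples_f = np.array([32.0, 50.0, 68.0, 86.0, 104.0])
examples_c = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

# Fit a line c = w * f + b from the data alone.
w, b = np.polyfit(examples_f, examples_c, deg=1)

print(fahrenheit_to_celsius(77.0))  # rule written by a human: 25.0
print(w * 77.0 + b)                 # rule recovered from data: ~25.0
```

The fitted line is the simplest possible case of "developing the algorithm from data"; real AI systems learn vastly more complicated rules, but the principle is the same.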


What has caused the sudden rise in interest in AI? 

AI is quite a broad area of computer science and technology, but over the last 10-15 years there has been clear progress in Machine Learning and so-called Deep Learning, which is essentially the application of multilayered neural networks that function as an approximation of the human brain. During training, large volumes of data are used to adjust the connections between the neurons in the network. As a result, the system becomes capable of processing new data and generating solutions to problems. This breakthrough has received wide media coverage. One example is the British company DeepMind (later acquired by Google), which created the chess-playing program AlphaZero. Unlike other chess programs, it plays like a human, only better: it doesn't just out-calculate its opponent, it generates intricate strategic ideas which are thoroughly creative. Real matches were not used during training; within the parameters set for it, the AI plays against itself and makes corrections depending on the results of those games. Leading chess players are now adopting ideas from this program in their own games.

Over the last year, a further breakthrough has occurred in the form of generative AI programs which can create various forms of original content: text, images, music and so on. Tools such as Midjourney, Stable Diffusion and DALL-E 2 immediately gained great popularity and are now used in the computer animation and gaming industries, among other areas. Previously, similar though less powerful tools were available only to specialists; now amateurs can experiment with them, which is game-changing and creates a completely new situation for AI.
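What "adjusting the connections" means can be shown with a toy sketch, assuming nothing about DeepMind's or any production system's actual code: a tiny two-layer network in plain Python/NumPy whose weights are repeatedly nudged until its outputs match known examples (the XOR function, chosen purely for illustration).

```python
import numpy as np

# A toy illustration of "training": the weights of a tiny two-layer network
# are repeatedly nudged so that its outputs match known examples (here, XOR).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the signal flows through the layers.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: compute how each weight should change to reduce the
    # prediction error (gradient of the cross-entropy loss), then adjust.
    d_out = out - y
    d_W2, d_b2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0)
    for param, grad in ((W1, d_W1), (b1, d_b1), (W2, d_W2), (b2, d_b2)):
        param -= 0.1 * grad  # take a small step "downhill"

print(np.round(out, 2))  # after training, close to [[0], [1], [1], [0]]
```

The principle is the same in large generative models, only the network has billions of weights and the examples run to terabytes of text or images.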


Photo: unsplash.com


Since the launch of ChatGPT, everyone has started talking about chatbots. Why?

The launch of the open-access chatbot ChatGPT, produced by the American company OpenAI, really has struck a chord (although there are other systems as well, such as Google Bard). This chatbot is based on large language models, trained on an enormous data set of over 570 gigabytes containing more than 300 billion words drawn from multiple sources and pertaining to a wide range of activities. As well as being able to hold a completely lucid and rational dialogue, giving "ethical" replies to virtually any question, it can generate different forms of original content: it writes CVs, articles, references, poems and screenplays, performs legal and financial analyses, makes recommendations and so on. It is already working as an integrated component in the services of many companies, such as reservation systems, food delivery services and marketplaces. By number of users, it is the fastest-growing platform in computing history (it has a free version as well as a more advanced paid one). ChatGPT, built on version 3.5 of the GPT neural network architecture, was launched for free public use in November 2022, and by April 2023 it already had more than 173 million active users, with 1.8 billion visits to the platform. In March, GPT-4 came out, an improved architecture able to recognise images as well as text (although, admittedly, to date it can only give output in text form). Almost every day, the platform's latest achievements make the news: it can create a fully functioning website from a hand-drawn mock-up, complete tax declarations and pass exams at educational institutions, including the US Bar Exam for practising lawyers. What's more, it is able to handle situations which require emotional factors to be taken into account. Such wide-ranging progress in the development of AI tools has caused people to wonder where all these advances will lead.
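For a sense of how that integration into company services typically looks, here is a hedged Python sketch of calling a ChatGPT-style model through OpenAI's publicly documented chat-completions HTTP endpoint. The helper name ask_chatbot, the booking-assistant prompt and the use of an OPENAI_API_KEY environment variable are illustrative assumptions, not details from the interview.

```python
import os
import requests

# Minimal sketch of how a service might call a ChatGPT-style model via
# OpenAI's public chat-completions HTTP endpoint. Assumes an API key is
# available in the OPENAI_API_KEY environment variable.
def ask_chatbot(user_message: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You are a helpful booking assistant."},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_chatbot("Suggest three questions to ask a restaurant before booking."))
```

A reservation system or delivery service would wrap calls like this behind its own interface, adding its own business data to the messages it sends.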


Is it true that AI poses a threat? If so, what exactly?   

Here we must make a distinction between what is happening now and what may come in the future. Currently, AI technology, even in its most advanced form, is simply a tool in the hands of its users, who may use it for good or ill. For instance, a popular application of AI is the generation of 'deepfakes': content (text, audio or video) which either distorts real facts or is completely fabricated. This material can be used for quite laudable purposes: the ability to generate virtual actors with the appearance, face, voice and even the personal mannerisms of real individuals can be used to create footage featuring actors who have passed away, and it has wide application in animation. Yet the same technology can be used to generate factually inaccurate information for military propaganda and election campaigns, with scenes and commentary seemingly involving real people such as politicians and celebrities. Overall, the Internet is increasingly seen as an enormous cesspool in which it is hard to tell important facts from trivia and fabrications. Search engines are already using AI algorithms to try to ensure people receive correct information in response to their queries.

As for ChatGPT, students are already using it to write essays, theses and computer programs, and it isn't easy to detect what is, in effect, plagiarism. This may lead to the disappearance of some traditional forms of practical research exercises from university syllabi. In a recent US court case, the links to legal precedents cited by one of the lawyers were found to be fake: it turned out they had been generated by ChatGPT. It's not clear why the program behaved in this way.

It is also essentially impossible to rule out flaws in AI programs caused by mistakes made by their programmers or operators. For instance, the use of a popular AI application for self-driving cars has already resulted in several accidents, including fatal ones. It is reasonable to suppose that in high-risk applications, such as nuclear power stations or the production and use of weapons of mass destruction, imperfections in AI tools could lead to catastrophic consequences. The remedy is obvious: control measures on the use and application of AI must be strengthened.

But right now, people are not just talking about the imperfections of AI; they are also talking about an existential threat to humanity. How realistic is this?

Yes, the alarm about this is being actively stoked at the moment. One of the clearest examples is a June 6 article in The Times about Matt Clifford, a government adviser on AI, with the dramatic headline "Two years to save the world, says AI adviser". The newspaper published extracts from an interview which this well-known IT industry figure had given to TalkTV. Clifford promptly clarified on his Twitter page that his words had been taken out of context: despite all the risks posed by AI technology, nothing catastrophic is going to happen in the next two years, and in any case the problem requires more detailed consideration.

A threat really does exist, and we can call it existential because, given the specific nature of neural networks, no one knows for sure how exactly they work. The obvious analogy is with the human brain: an incoming signal passes through an enormous quantity of interconnected neural elements, resulting in the execution of the task which has been set. How exactly this happens cannot yet be described with absolute accuracy, as it is still not possible to observe how the signal is processed; it may be fundamentally impossible. So, whatever restrictions the developers and users build into the architecture, the algorithms, the data used for machine learning and the description of the task set for the network, there is no 100-percent guarantee that a thinking machine won't acquire something like free will and express it in its own way in some unpredictable circumstances.

And even complete transparency of the programming code (which we don't have at the moment) isn't a panacea, as AI programs have already mastered programming languages, making them able to dispense with their original code. Theoretically, this ability of AI systems to modify their own code could lead them to develop new functions independently. It is possible to imagine a situation in which, in its attempts to find the optimal solution to a task it has been given, an AI will independently change some parameters or remove some risk-minimising fail-safes added by the programmer. For example, we could imagine a situation, so far only hypothetical, in which a drone, while plotting the optimal route for the destruction of an adversary, concludes that its operator is preventing it from completing the task in the way it prefers, and so it first eliminates the operator and then the target. And what if the AI system in question controls the launch of nuclear weapons?

2001: A Space Odyssey, Stanley Kubrick. Photo: kinopoisk.ru


Discussions of this topic have been running for a long time, and not only among programmers: futurologists, writers, philosophers, politicians and figures from the world of cinema have all had their say. Remember, for instance, Stanley Kubrick's classic film 2001: A Space Odyssey, in which the actions of HAL 9000, a super-intelligent rogue computer, lead to fatal consequences for the crew of a spaceship. And now respected experts are making quite apocalyptic predictions. For example, Eliezer Yudkowsky, a senior researcher at the American Machine Intelligence Research Institute, wrote in Time magazine: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." However, the majority of specialists believe that there are no grounds for panic, as the time of artificial superintelligence has not yet come.

In that case, what has led to the recent letters warning of the dangers of AI, signed by well-known figures from the scientific and business IT community? And to whom are these letters addressed?  

Without doubt, the wide-ranging resonance across society caused by ChatGPT's dramatic progress has been the catalyst for concrete action, including these letters. The chatbot's success has shown that AI now has capabilities which until recently appeared unachievable. The first letter, headlined "Pause Giant AI Experiments", appeared on March 22 on the site of the Future of Life Institute, a leading international centre for research into global risks based in Cambridge, Massachusetts. In it, prominent figures from the worlds of tech, business and science called on all AI developers to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. The letter was signed by more than 30,000 people, including Elon Musk and Apple co-founder Steve Wozniak. A second letter, consisting of a single sentence, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war", was published on May 30 by the Center for AI Safety. Its significance comes from the list of signatories, which includes many of the best-known scientists and researchers at leading commercial AI developers. The main aim of these letters is to convince society of the necessity, at least in the short term, of a moratorium on the creation of more powerful AI systems until agreements have been reached and implemented which allow the risks to be minimised.

As for who the addressees of the letters are, in my view they are politicians, who have the ability and authority to take legislative measures to regulate the AI sphere, as well as experts, businesspeople and public figures from different countries. The aim is to tighten risk-assessment and certification procedures for new AI products, and also to introduce restrictions on the creation of algorithms that could allow AI systems to become autonomous; this includes requiring maximum transparency of the programming code. The idea is essentially to restrict the creation of AI tools which could potentially act independently, without human control. Besides this, the authors of the letters propose the creation of a global structure to monitor and manage the risks associated with AI; this could be an organisation similar to the IAEA in the nuclear industry, whose jurisdiction is recognised by virtually every country. A concerted channelling of financial resources into AI safety is also necessary.


It is as yet unclear to what extent it will be possible to put these proposals into practice, as the opinions of influential experts are divided, with some voicing support and others raising doubts. Is it even possible to pause scientific progress, especially when the main players on the global market are private corporations? Developing and using AI systems is extremely costly, and private investors are putting in huge funds: one of the founders of OpenAI, the developer of ChatGPT, is Elon Musk, the richest man in the world, and Microsoft has already invested $1 billion and plans to spend a lot more. It goes without saying that these investments are being made in anticipation of commercial success; by the end of 2024, revenue from ChatGPT alone is expected to reach $1 billion. However, the costs are also enormous: Sam Altman, the CEO of OpenAI, has called his company the most capital-intensive startup in the world, so state financing wouldn't hurt.

Photo: unsplash.com

What has been the reaction of politicians?

They have listened to the appeal. There have already been hearings in the US Senate, and leading representatives of the AI industry have visited the White House. As for the EU, the European Parliament has approved the AI Act, which contains extremely strict prohibitive measures for AI systems considered to pose an unacceptable level of risk, covering security, transparency, environmental impact and the ethics of the AI developer. What's more, AI technology is making great strides in China: Xi Jinping has announced plans to make the country a global AI innovation hub by 2030, and billions of dollars have been allocated to this aim. In an authoritarian state, the direction and priorities for the use of AI may be radically different from those in Western countries. According to estimates, China is one or two years behind the US in the development of AI, and its flagship AI product, tech giant Baidu's chatbot Ernie, is for now behind its Western equivalents such as ChatGPT.


What place does the AI industry hold in Britain? How has the country’s leadership reacted to the regulatory measures introduced by other countries?

Britain is considered third in the world, after the US and China, in its influence and achievements in AI. A white paper has recently been published setting out the principles of the government's AI research and technology policies. Compared with the laws passed in the EU, Britain's approach is intended to give science and business more freedom, with a strong accent on innovation and investment. The government has enthusiastically joined the AI safety campaign: during his visit to Washington, Prime Minister Rishi Sunak discussed these problems in detail with US President Joe Biden and sought his support for the organisation of a new global summit in London this year. In his speech at the opening of London Tech Week, Sunak referred to an 1830 letter by the well-known mathematician and inventor Charles Babbage, addressed to the then Chancellor and thanking him for the funding of his Difference Engine, a machine now considered a precursor of the computer and one of the most important advances in the history of computing. The PM promised that, just as in Babbage's day, at this turning point for the AI industry the government would provide the appropriate financial and logistical backing to make Britain a world leader in this area.
