As well as bringing more pink into our everyday lives, the Barbie boom inspired by the eponymous film gave Artificial Intelligence a new opportunity to demonstrate its abilities. Social media was inundated with AI-generated photos of Barbie and Ken, which were met with a wave of enthusiastic comments. Yet it was only quite recently, around the time ChatGPT was launched, that the new capabilities of AI provoked a squall of criticism, with some going as far as to call for research in this area to be brought to a halt. How well-founded are these concerns? Could Artificial Intelligence pose a genuine threat? Keep reading for answers to these questions from Kommersant UK’s express survey.
Roman Koposov, Deputy Director and Head of Strategic Planning at ARB Pro Group Training Institute:
The first potential threat is the application of AI for military aims. The now familiar drones, preprogrammed to attack an opponent’s armed forces, use machine vision systems to recognise obstacles and identify objects. In essence, a drone becomes a kamikaze fighter intent on destroying an enemy. While the algorithm performs this task, mistakes, such as identifying a civilian installation as military hardware, can’t be ruled out, as the sketch below illustrates. What’s more, AI is now widely used in the pharmaceutical industry, as the analysis of big data by its algorithms allows experiments to be conducted far more cheaply, quickly and effectively. Beyond medicine, the same technology could be put to other uses, up to and including the design of chemical or biological weapons.
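A deliberately simplified, entirely hypothetical Python sketch shows where such mistakes come from: an image classifier outputs probabilities, not certainties, and a confidence threshold only shifts the error rate rather than eliminating it. The class names, scores and threshold below are invented for illustration; real military systems are not public and certainly differ.

```python
# Entirely hypothetical sketch: why a vision-based targeting system can
# misidentify a civilian object. All classes, scores and thresholds here
# are invented for illustration.
import numpy as np

CLASSES = ["tank", "truck", "tractor", "building"]

def classify(logits: np.ndarray) -> tuple[str, float]:
    """Turn raw model outputs (logits) into a label and a confidence."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = int(probs.argmax())
    return CLASSES[idx], float(probs[idx])

# Suppose a farm tractor seen at night produces scores close to "tank":
logits = np.array([2.5, 0.3, 1.2, 0.2])  # invented numbers
label, confidence = classify(logits)

# A confidence threshold reduces, but cannot eliminate, false positives:
ENGAGE_THRESHOLD = 0.5
if label == "tank" and confidence >= ENGAGE_THRESHOLD:
    print(f"flagged as '{label}' with p={confidence:.2f} -- a false positive")
```

The point is structural: the model never says "this is a tank", only "tank is the most probable label", so any engagement rule built on top of it inherits that uncertainty.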
The second danger is the manipulation of public perception with the help of fake videos. There are already many instances of deepfakes mimicking famous people, from celebrities and politicians to scientists and businessmen, either to create hype around a particular individual or to lure people into a financial scam. For example, fraudsters made two videos using a digital doppelganger of the founder of a marketing platform, in which he promoted a money-making scheme based on AI. Unfortunately, there are and will continue to be many similar cases, but as in the previous example (the use of AI for military purposes), AI is simply a tool; it is people who deliberately set these tasks for the technology.
The third scenario pertains to the hacking of accounts, email services, crypto-wallets and so on. Various incidents are possible, from the leaking of celebrities’ private messages and the theft of funds from accounts to cyberattacks that target critical industrial or nuclear infrastructure to disrupt its operations.
The fourth issue is the question of choice. If a car on autopilot injures someone, the question is ultimately about who gave the machine the right to choose that course of action. Who wrote the algorithm that issued the command that caused harm to an individual? The machine was preprogrammed to follow a certain model of behaviour, whether that was to save the life of the driver at all costs or to weigh up the other people who might get hurt; the lives of a mother and child in the car could, for example, be prioritised over those of an elderly couple. As the sketch below shows, that ‘choice’ is really a rule written by a person in advance.
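To make this concrete, here is a deliberately simplified, entirely hypothetical Python sketch: the ‘model of behaviour’ reduces to a weighting that a human engineer chose before the car ever left the factory. The names, numbers and scenarios are invented for illustration only.

```python
# Entirely hypothetical sketch: the "choice" an autopilot makes is a rule
# a human engineer wrote in advance. All names and weights are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupants_at_risk: int
    bystanders_at_risk: int

# Whoever sets this weight has answered the ethical question in code:
# a value above 1.0 prioritises occupants over bystanders.
OCCUPANT_WEIGHT = 2.0

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the manoeuvre with the lowest weighted expected harm."""
    def harm(o: Outcome) -> float:
        return OCCUPANT_WEIGHT * o.occupants_at_risk + o.bystanders_at_risk
    return min(outcomes, key=harm)

decision = choose([
    Outcome("swerve into barrier", occupants_at_risk=2, bystanders_at_risk=0),
    Outcome("brake in lane", occupants_at_risk=0, bystanders_at_risk=2),
])
print(decision.description)  # the result reflects the programmer's weights
```

With `OCCUPANT_WEIGHT = 2.0` the sketch chooses to brake in lane, putting bystanders at risk; set it below 1.0 and the same code sacrifices the occupants. The machine executes the rule; a person chose it.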
The fifth point is the potential disappearance of some professions, whether in the short or long term. For instance, most of the work of a nutritionist is already done by AI: reviewing test results, conducting patient surveys and interpreting data. What’s more, unlike a doctor, who may not have enough information at their disposal, a neural network has access to a vast dataset. From this data, AI can draw conclusions, such as whether to adjust a patient’s diet, sleep or lifestyle, or to prescribe a course of vitamins or other dietary supplements. The combination of a specialist and a neural network is a powerful tandem, but I’m afraid the doctor may end up becoming the redundant component as neural networks become more efficient. A similar transformation will take place in other professions.
These eventualities may all be detrimental to people’s well-being, but take note: in each case, a real person was behind the programming. So, in my view, concerns about the threat posed by AI in its popular manifestations, such as ChatGPT, Midjourney or any other neural network that has appeared in the past six months, are unfounded. These systems help people to perform simple everyday tasks. We shouldn’t try to bring their development to a halt; on the contrary, we need to harness their beneficial effects and identify the bad actors who intend to abuse AI.
Vladimir Kliuev, CEO of ArticMedia Web Development Studio:
We can already see the harm AI can cause when it is used in the education of the young. If it is used to solve problems or produce study materials for them, the underlying skills lose their value. People who lack experience of independent problem-solving may struggle to analyse information, draw their own conclusions or make decisions in a reasonable time.
Aleksei Krol, writer:
To date, the history of the human race has shown that the greatest dangers threatening humanity are our own greed, propensity for conflict and destructive behaviour. Consequently, it is comical to worry that at some stage AI, without human involvement, will assume responsibility for this or that decision. De facto, 99% of processes in industry, technology and communications have long been automated, run by algorithms and operating without human intervention; people only get involved when something goes wrong. So this is already a fait accompli. What’s more, there are always errors in the algorithms, and on top of that, hackers interfere. This is the current state of affairs. AI is just another algorithm. The workforces of corporations also act according to algorithms, in the form of job instructions, internal company policy and so on. Problems arise either when someone makes a mistake or when the instructions are written incorrectly, as often happens. So, for the time being, I haven’t seen any significant changes.
In my view, underlying these concerns is the fear that AI could somehow take over completely: we may initially hand over control voluntarily, and it may subsequently decide that humankind is evil and, as in The Terminator or The Matrix, start to wipe us out deliberately. Of course, this possibility theoretically exists, because, as Albert Einstein said, ‘Two things are infinite: the universe and human stupidity.’ However, this sort of problem is always nipped in the bud; contingency scenarios are being developed and the risks are being hedged. In brief, I don’t believe that the integration of AI into management processes (which will inevitably occur) will significantly increase the risks that already exist. The main source of risk for humanity is not AI, nature or meteorites (all these catastrophes have undergone in-depth analysis); it is human stupidity itself: the unwillingness to find compromise and resolve conflicts by peaceful means.

I think that as AI evolves, its growing role in decision-making and global development will, overall, be beneficial. Will there be problems? Inevitably; life is a problem, and that is natural. An easy life is only possible in the graveyard. This is why I think that, firstly, talk of the supposed dangers of AI is driven by these fears and, secondly, at least for the so-called scientific community, such talk is a way of fomenting public panic, a sort of last refuge of a scoundrel. It’s easier to get attention by conjuring up some kind of threat. In essence, if you want to extract money from fools, or from the taxpayer, you have to put the fear of God into them, whether with the threat of a meteorite, an evil AI robot that will take everyone’s jobs, climate change (which may be anthropogenic, although that’s also open to discussion) or a pandemic. What exactly the danger is doesn’t matter, because any threat you convince people of requires action or, in plain Russian, money. That’s how it works. Consequently, I see no fundamental difference between scare stories about AI and similar talk about other threats.
Roman Kores, IT developer and founder of Horum IT:
Right now, we can assess the capabilities of AI by looking at the services and modules that are publicly available. They display a form of intelligence and can perform many tasks, from generating voice, text, photo and video content to analysing and carrying out assignments such as writing and executing code or running searches across websites and social media. Yet in all public modules, a key component is absent: initiative. It must be supplied directly by the user, for instance via prompts; the machine cannot set tasks for itself independently. So any destructive effects of this technology can be directly ascribed to the human factor. The situation is far from rosy; there is no shortage of people with malicious intentions, and AI has already been put to many nefarious uses, such as phishing, online scamming and generating fake content. What’s more, this deceptive material can look very realistic, as AI draws on an enormous body of data, effectively everything uploaded to the internet over the past 30 years. The impossibility of distinguishing fact from fiction may bring chaos to society. It could also affect the next generation of AI modules, as they may incorporate inaccurate information into their logic and architecture.
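The point about initiative can be made concrete with a minimal Python sketch, here assuming the OpenAI Python SDK as one example of a publicly available module; the model name and prompt are placeholders chosen for illustration.

```python
# Minimal sketch of the "no initiative" point, assuming the OpenAI Python
# SDK (pip install openai) and an API key in the OPENAI_API_KEY
# environment variable; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model sits idle until a human supplies a prompt: it has no task
# queue, no goals and no initiative of its own.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarise the risks of deepfakes."}],
)
print(response.choices[0].message.content)
# Remove the call above and nothing happens: every action starts with a
# user-authored prompt.
```

Whatever the answer contains, the interaction only exists because a person initiated it, which is exactly why any destructive use traces back to the human factor.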
As for the modules being tested by the governments and defence ministries of various countries, it’s easy to imagine that their algorithms might be an order of magnitude more powerful than publicly available systems. We can only guess what they can do and what harm they could potentially cause to humanity. In the future, AI may be the key to the creation of decentralised economic models operating in Web3 with cryptocurrencies and smart contracts. This poses the next big question: will states and big business be ready for the changes to the global economy this entails? And what restrictions will they come up with in response to prevent it from happening?