Ilya Kolmanovsky: “Artificial Intelligence will allow us to become better versions of ourselves”
On October 8, the journalist, science populariser and host of the Naked Mole Rat podcast Ilya Kolmanovsky gave three lectures back to back. He has the gift of explaining complicated things simply and unravelling knotty concepts. In the run-up to the lectures, we discussed several complex topics, touching on AI, viruses, the relationship between politics and science and even romantic relations with robots.
The lecture for adults is entitled How to Live So That (or If) Your Brain Befriends AI: an encouraging title, especially considering that the dangers of AI have recently been dominating the news. What's the reasoning behind this choice?
Artificial intelligence has already permeated all areas of human activity and brought an unbelievable burst of productivity. It's my belief that, with the help of this tremendous tool, we won't just get dumber; we'll get smarter too. Not only will we learn to stop doing some things, but we'll become unbelievably productive. So we urgently need to learn to master this tool. When I say 'learn' I'm giving the word the same meaning biologists use: to acquire an intuitive command of the technology so that your brain can really befriend AI and you can work together. While we were still adjusting to the first computers, it seemed to us that they were not the most natural things in our lives. That is the origin of the conviction that talking to a person or walking in the woods with them is natural, but doing these things with a robot is unnatural. But considering how AI is developing, biologists are sure that the situation will change. The robots we are now working with are deep-learning neural networks and they are put together differently. They are black boxes that produce good results without explaining their workings, and people find it natural to interact with them. We have lived like this for millions of years; our own heads are black boxes with processors of immense computing power which generally work in the background, with the results of their calculations taking the form of feelings about how we should act. We call this intuition. In this way, interacting with AI is a deeply intuitive process for us, which is why the word 'If' has been added to the title of the lecture: "If your brain has already befriended AI". In reality, this has already happened. Whether you're scrolling reels or shopping on Amazon, a robot is watching you and you interact with it. You act in a way that directs its training and it acts in a way that produces better results for you. All that remains is to understand how we feel about this.
Have you noticed that we used to have a hard time getting the hang of the computer? There are still quite a few people who haven't made friends with it. Won't the development of AI result in a great segmentation of society, with most people missing out on this huge surge of productivity you're talking about?
We no longer wash our linen by hand as we have machines to do it; consequently, the profession of washerwoman has disappeared. At the same time, your thesis is completely fair and it hides a great human tragedy, but this is the essence of progress. It has always been so and each generation's respective 'washerwomen' have lost their jobs. Some professions become unnecessary, but the washerwoman has a high chance of requalifying. As for the difficulties of getting the hang of computers, modern robots are set up to make working and interacting with them extremely intuitive. I remember how my grandmother, who is a distinguished scientist, learned to use a computer when she was over 60 and that was quite an achievement. Now I've bought her Alisa [a Russian-speaking personal assistant produced by the tech firm Yandex-editorial note] and they have action-packed days together; she didn't need to learn how to use Alisa, she could work with her effortlessly. Once, my grandmother called me and said "I won't say that madam's name out loud, because we've fallen out". It's intuitive. Today's robots don't need us to get used to them, they get used to us. As for segmentation, the danger here is that some people might be excluded from direct access to these resources. For instance, half a dozen tools with a total cost of less than $500 have made my life unbelievably fulfilling. At the same time, large corporations and states have access to technology with capabilities an order of magnitude greater. This is a political question. The improving quality of algorithms, neural networks and large language models is expanding the potential for propaganda and disinformation.
Isn't intuitiveness a trap? Aren’t we fooling ourselves by personifying and ascribing feelings to technology which, in reality, feels nothing towards us in return?
Herein lies a rich potential for the development of relationships between humans and AI. It's no secret that people develop romantic feelings for robots. What's more, robots successfully manipulate people, enticing them to view the robots more favourably at the expense of their human partners. People are easily tricked because the algorithms that produce empathy and engagement in us are fairly rudimentary, and robots are often more charismatic and convincing than people whilst also being easier and more convenient to interact with. People enjoy talking to their coffee machines in the morning. It's a double-edged sword. On the one hand, this can make us mega-productive, but if our 'counterparty' shows signs of consciousness, it's difficult for us to kill it or yank its cable out of the socket. However, collaborating with this type of entity will be very natural because we won't have to expend mental resources getting used to the interface. There is a likelihood that one day, robots will become 'more human than humans themselves'.
What can we do about empathy? COVID-19 has shown us that online communication cannot replace live human interaction. I see a terrible risk to people of deliberate manipulation by robots, which will be simply impossible to detect, and what’s that if not abuse and toxic communication?
This danger is real. A lot is currently being said about this: this year, they say, AI has been able to hack into and get close to something very important to us: words. For example, large language models such as ChatGPT perform their tasks remarkably well. Our belief that nothing can replace human interaction sounds a little naive to me. It's essentially not hard to 'hack' a person so that the interaction seems smoother and warmer for them. Looking ahead, modern Virtual Reality glasses will soon be able to immerse us in illusions which will be hard to distinguish from reality. The main thing is to program the illusion so that it can improve itself and become increasingly convincing.
What unprecedented benefit will this rapid development of AI bring us?
We have finally acquired the revolutionary ability to take a look at the back of the textbook. The modern technology used by scientists is solving tasks on an unimaginable scale, beyond the grasp of human comprehension. We can find out the answers to a great number of questions and predict how to achieve the desired effect through experimentation. An example from this spring is the Chinese scientists who have started to look for an antidote to the toxins in the most poisonous mushroom in the world, the death cap. The AI calculated what would happen if the toxins reacted with each of three million substances used in human activity and came to the conclusion that a dozen of them could serve as antidotes. It only remained to acquire a sample of one of these substances, a colourant used to mark organs during surgery, and to test its effect on poisoned mice. This was a known substance, although with no relation to poisons or toxicology, but it worked: the treated group of mice survived. This is a clear example of how AI can take a glance at the back of the textbook. Consequently, biotechnology is now experiencing a tremendous advance in its effectiveness, leading to some amazing discoveries.
It’s probably one of the most interesting spheres for investment today. What are the potential risks?
That's quite a difficult question. I follow the application Yahoo Finance with great interest and I see that the behaviour of a whole series of shares is completely unpredictable and contrary to my idea of common sense. I have enough common sense, though, to evaluate the scientific content and tell the hype and quackery from the reasonable and promising ideas. There are often straightforward and long-standing ideas behind startups, but they are put into very good marketing packages offering a good level of service. This makes it possible to seize a market, as happened, for instance, with genetic analysis. I think that the human brain is still capable of filtering out the junk in the time-honoured way, of seeing past the superficial veneer of science concealing pure hokum, to avoid shady misadventures of the Theranos variety [a failed health tech startup-editorial note]. At this point, it's worth diving deep into the topic with the scientists involved, but beyond that, there's a great swath of issues that to me personally seem highly unpredictable. Still, it's obvious that the story of mRNA vaccines is just beginning. Although they didn't guarantee against infection with COVID itself, during the pandemic they were ideal for protecting people from a serious progression of the illness. This year a vaccine has appeared against the human respiratory syncytial virus (hRSV), which can also cause severe illness. So we can see a potential scenario where genetic editing will lead to the treatment of different diseases.
The speed of progress is both heartening and frightening at the same time. Will it be possible for people to still live in their familiar surroundings and remain fundamentally recognisable?
It’s a question of choice. People are biophiles, they have always lived surrounded by hundreds of different kinds of plants and animals. It’s unnatural for them to live in concrete boxes, which is why they’ll always try to fulfil this archaic need for the company of other living things, either in real life or virtually. A really interesting period is coming. We’ve only just seen the fantastic effectiveness of vaccines against COVID-19, which even saved those who are against vaccines from actual death. This is a sign of what is to come; very many illnesses will be completely eradicated or our level of control over them will be completely different to what it is now. In particular, this will affect cardiovascular diseases and cancer, as the fight against these diseases is receiving enormous funding. We’ll have to hang on for about another 10 years to see the results.
How much has the character of medicine changed in recent years?
It's important to understand that modern pharmacology or biomedicine is directed towards the treatment of healthy people. The concept of diagnosis has greatly changed; we have learned to predict medical issues in advance and ensure that people die later. For example, there has been a breakthrough in the struggle against excess weight. Production of the weight loss drug Wegovy accounts for 10% of the economy of Denmark. It was developed to treat diabetes but it has already been used for five years as an appetite suppressant. For the first time, the scientific community is observing a mass experiment with millions of participants across the globe taking the drug 'off-label' [not for its approved use-editorial note]. Doctors and scientists feel very optimistic, as excess weight is a cause of various health problems. We don't know exactly what the drug does to us and time will tell to what degree this experiment will turn out to be sensible or safe, but some analysts predict that it will become a sort of 'forever drug' such as, for instance, statins, which have already warded off a huge number of heart attacks, saving billions of years of human life and representing billions of dollars on the pharmaceutical market.
You’ve used the phrase “We don’t know exactly”. This triggers my inner alarmist, making it seem as though all these thoughts of saving people and improving things are in fact uncorking a bottle. There’s no guarantee that something terrible won’t come out of it.
The worst thing that we've done so far is to change the climate monstrously and now we're having to deal with the consequences. This summer all temperature records were broken at the cost of many human lives, and there's worse to come. But we're not doing anything about it because our ability to act collectively and rationally to avert danger has weakened. They've even made a comedy film about this, called Don't Look Up. Compared to this issue, I find the dangers stemming from technological advancement far less concerning. In this respect, I'm more on the side of progress. I'd like new capabilities for cancer therapy and the cultivation of human kidneys in chimaera pigs to be developed as quickly as possible so that I too can benefit personally from these inventions.
Artificial Intelligence isn’t just being used in science and education, it’s also at the service of politicians. What can we expect in this field?
There will be a constant and intensifying arms race between armour and projectile, fought with increasingly powerful tools. Overall, the ability to encrypt is surpassing the ability to decrypt. A currently pressing issue is the question of which powers will have at their disposal the quantum computers with potentially unprecedented computing power which are currently being developed. I think Russia will drop out of this race one way or another.
Why do you believe that?
Russia has been deprived of a very significant part of its intellectual potential with the mass exodus which began in February 2022. The things we're talking about require groundbreaking and expensive teams of experts and, funnily enough, an essential condition of success is having democracy in the simplest sense of the word. Science of this type is expensive and the money assigned to it can only be spent effectively if there is blind, independent oversight. Without this, either those who believe the myths about bio-laboratories or the ones who are actually concocting them will decide which of their friends gets the research and development cash. And we can see that the political circumstances in Russia are creating conditions in which the area of research in greatest demand is the thought of Xi Jinping (the only centre to study this subject outside of China has just opened in Moscow), or research into the genetics of 'good' northern Slavs and 'bad' southern ones. It's easy to get funds for this type of research now and no one is monitoring how effectively it is spent. Everything will fall apart of its own accord; it will rot and rust, as Sakharov once said.
Do India and China have more chances of succeeding in the technology race?
India and China are two different cases. India, which has recently successfully landed a lunar rover, is an example of a right-wing autocracy which is destroying its own science and education at great speed. As a matter of fact, Russia could also have landed its own lunar rover, only it was unlucky. Like Russia, India is a country with a history of scientific achievement: nuclear potential, a space programme and quite serious scientific institutions, as well as an enormous and highly influential diaspora of Indian intellectuals. But with the coming of Modi, their current far-right nationalist leader, the idea has come to the fore that terrible, Western, British, colonial science must be rooted out of the school syllabus. They must root out Darwin and Faraday and everything they taught. It turns out that the ancient texts of the Vedas contain all you need to know about modern technology, right up to IVF and organ transplants. They believe that Ganesha was the first successful head transplant from an elephant to a human, and they are quite seriously studying the medicinal properties of bovine urine. In a word, India is an excellent example and preview of what's already happening in Russia. Expect cow pee.
What about China?
That's a different story because, over recent decades, a powerful campaign has been orchestrated to return scientists of Chinese origin working abroad to China. Professors of non-Chinese origin could also go there to work. The government is ready to pay a lot to people working on scientific research and now I often see articles in the authoritative journals Science and Nature signed by collaborative teams. In this way, Chinese scientific output has begun to compete successfully with America and Europe and in some cases to surpass them. But it's difficult to say if this approach has a future, as it's as yet unclear whether a genuinely productive long-term scientific community can be formed in the conditions of a communist dictatorship, rather than the work being done in sharashki, as it was under Stalin. [These were research camps for scientists in the Gulag-editorial note]. The situation with COVID exposed the vulnerability of this society precisely in the area of scientific expertise. All the decisions during the pandemic were taken by the party. There was a combination of two factors. Unlike a democratic society, they were able to impose an extremely strict lockdown, which left large numbers of people with no immunity at all to the virus. Vaccines were developed rapidly, however, and they managed a total vaccination of the population, which is a plus, but the vaccines turned out to be largely ineffective. So, at the moment when the lockdown was lifted and the Omicron strain arrived, which was able to get around vaccine-acquired immunity, they paid a very high price for a whole series of decisions taken with less than zero scientific oversight. This is a sign that these societies will respond to many challenges of the 21st century worse than democracies.
Incidentally, Portugal is one of the most successful examples of genuine cooperation between scientists and civil servants to counteract the pandemic, with optimal decisions being taken at each step, all with the support of society.
From the topic of viruses, let’s return to AI. It’s notable that the way this technology has established itself and spread is strikingly like a virus. In recent months, prominent tech companies have made many significant announcements about the dangers of AI. What do you think, will everything really get out of control?
I've also read these statements, but, to be honest, I'd side with those who see in this a combination of two human tendencies. On the one hand, wicked tongues see the marketing department of OpenAI behind it all. When their CEO says in a hearing in the US Congress that he owns something no less dangerous than a nuclear weapon and asks for regulations to be imposed, he is giving a great boost to the capitalisation of his product. On the other hand, a case of mass autosuggestion can't be ruled out, as even the strongest and cleverest people in this world are susceptible to this; it's human nature. It seems that all the success has gone to their heads. Although, actually, many things could go wrong and it's easy to imagine that our relationship with AI might merrily progress in a direction that would be quite unpleasant for us.
Is an enactment of a plot from Black Mirror awaiting us?
I repeat that people have said exactly the same things about every revolutionary new technology since the printing press. Yes, there are risks, but there are also fantastic prizes. I hope the latter will outweigh the former: opportunities to overcome diseases, study nature, unleash unbridled creativity and open up new horizons for education. This year they're mostly saying that schoolchildren are using technology to fake their homework. This always has been a problem and always will be. But it's obvious that these new tools have a gigantic educational potential. Personally, with the help of ChatGPT, I'm learning a new language and working my way through a volume of texts that would until recently have been impossible for me. I don't believe a single word it says; it's not there to inform me. I need it as a crutch, as a support to get through difficult texts and understand everything more quickly. We have to learn to use this tool effectively, to think in cooperation with it and to delegate to it the tasks which are not our strong points. We can be very creative and paradoxical, we can see broad contextual links and we can sometimes be more stubborn than a computer. I'm sure that Artificial Intelligence will allow us to become better versions of ourselves.