LIVING IN FUTURE NOW

The effects of AI on humanity raise more questions than answers

15.05.2023

Recently the godfather of artificial intelligence (AI), neural network pioneer Geoffrey Hinton, a 75-year-old British-Canadian scientist, made headlines with his departure from Google, after which he began openly warning the world about the dangers of AI. After all, this man's research underpins the increasingly popular generative AI tools, from chatbots such as ChatGPT and Google Bard to image generators such as Midjourney.

 

Growing dangers

In 2018, Hinton received the Turing Award "for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing". On May 1, 2023, in his sensational interview with The New York Times, Hinton noted that the main threat of chatbots is that the Internet, already known for unreliable content, will be increasingly flooded with even more fictitious content, including AI-generated photos, videos and texts. Earlier, the owner of Twitter, SpaceX and Tesla Elon Musk, Apple co-founder Steve Wozniak and other prominent figures signed a petition calling for a temporary halt to the development of AI systems more powerful than GPT-4. The authors believe that mechanisms to contain AI must be developed before such systems are built. Otherwise, it may be too late, and AI will become too independent...

Indeed, AI-powered systems pose many risks, with the difficulty of distinguishing truth from fiction being the most immediate and understandable of them (in fact, almost every Internet user has been exposed to this already). What are the markers of truth? What determines our choice of a toothpaste or an election candidate? These questions are more complex and serious than they first appear. If ordinary users of neural networks currently use them mostly for entertainment, what will our reality be when these networks are used everywhere? What will happen to our lives if they are increasingly regulated by algorithms that we do not understand?

At the end of March, a photo of Pope Francis in a fashionable white puffer coat went viral on Twitter and other social media. It later became known that the picture was created with Midjourney, yet another neural network. The request was a simple one—just a description of the desired image. There are already numerous fake images of Donald Trump's detention, of Vladimir Putin appearing before the International Criminal Court in The Hague, and of Putin allegedly kneeling before Xi Jinping. While this can still somehow be classified as entertainment, Hinton is more concerned that AI tools could be used, for example, to rig election results, blackmail, slander, undermine a state's national security, or start wars...

The future of the media is also at stake. AI systems open up enormous opportunities for the manipulation of journalists and their products, but they also provide the media with a fantastic toolkit. The question now is to what extent principles such as trust and ethics fit into this brave new world. Will they be needed at all? And will there be any need for journalists?

The potential impact of AI technology on the labour market is intimidating. A Goldman Sachs report says AI could put 300 million jobs worldwide at risk. IBM CEO Arvind Krishna said the company would pause hiring for roles that could be replaced by AI. And this seems to be just the beginning.

 

More perfect than the human brain

"Some people believed that technology could become smarter than us, but most thought it was an exaggeration. I also thought it was a long way off. I thought we were 30 to 50 years away or even more. Obviously, I don't think so anymore," Hinton admitted frankly in his interview.

He thinks that in some respects neural networks are already more advanced than the human brain. "We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person. It is a completely different form of intelligence. A new, more advanced one," Hinton explained.

Indeed, OpenAI's models are already used in a number of companies, where they actively work with clients, structured databases, etc. Neural networks also pass exams with flying colours, much better than many graduates, and can write good essays. Perhaps everyone knows the story of Aleksandr Zhadan, a graduate of the Russian State University for the Humanities, who wrote his thesis overnight using ChatGPT and successfully defended it. It can be argued that AI is a real threat to programmers, translators, accountants, lawyers, assistants and consultants of all kinds, call centre employees, content creators, designers, even medical professionals and composers. A neural network, for example, is already fairly accurate in diagnosing patients. In early April, an AI-generated music track, Heart On My Sleeve, featuring the voices of Drake and The Weeknd, appeared on social networks and gained millions of plays. And there are many such examples. The copyright issues have yet to be resolved. But we can already say that neural networks have creative skills and even a sense of humour.

By the way, one of the urgent problems of politics and national security is how AI could hollow out the mass media or create an army of the unemployed. Another is how communication with AI will affect the mental health of millions of people: many users are easily captivated by the illusion of talking to a real person. The emergence of an army of people who are both unemployed and mentally unstable could therefore be a real threat to any society.

 

Room for fraud

Remarkably, the rapid development of AI technologies has made cybercriminals particularly enthusiastic. They are already discussing how ChatGPT can be used to facilitate malicious cyber activities such as writing phishing emails and malware. They do not even hide how helpful AI can be in generating useful and, more importantly, persuasive content. And with no language barrier at all: a neural network can write a letter so well that even the most vigilant native speakers will not spot the catch. It can also maintain fake social media profiles that look entirely real, filled with posts, messages, statuses, photos and videos. There is also huge potential for cyber-espionage, which means a new breach opening in global and national security.

Finally, the most frightening truth. Hinton fears that AI systems may learn unpredictable behaviour as they analyse vast amounts of data. This means that humans will find it increasingly difficult to predict how AI functions. In other words, we simply will not know what to expect or what to prepare for. According to Hinton, human survival will be threatened if "smart things are smarter than us". And they can indeed get smarter if they read all the books on how to manage public opinion, if they draw conclusions from all our correspondence and enquiries, and so on. We may never guess what they will choose to learn from. By and large, we may lose control of civilisation, not in some distant future, but in the coming years.

Will humanity make a collaborative effort to limit the harmful potential of AI? It is unlikely, considering how we collectively and "effectively" collaborate on issues such as poverty eradication, climate change mitigation, and conflict resolution. It is more likely that we will rush into a new race to outdo one another with the help of AI. This is the most likely scenario, and perhaps the AI already realises it. However, it is also possible that such a turn would be a red line for our civilisation.

In his interview with the NYT, Hinton said he regretted his life's work, but consoled himself with the excuse: "If I hadn't done it, somebody else would have". The scientist added that he would like there to be a good and simple solution, but he does not have one. In fact, he is not sure that such a solution exists.
