Could AI carry out coups next unless stopped now?
Steve Wozniak is no fan of Elon Musk. In February, the Apple co-founder described the Tesla, SpaceX and Twitter owner as a “cult leader” and called him dishonest.
Yet, in late March, the tech titans came together, joining dozens of high-profile academics, researchers and entrepreneurs in calling for a six-month pause in training artificial intelligence systems more powerful than GPT-4, the latest version of ChatGPT, the chatbot that has taken the world by storm.

Their letter, published by the United States-based Future of Life Institute, said the current rate of AI progress had become a “dangerous race to ever-larger unpredictable black-box models” with “emergent capabilities” that no one can reliably predict. AI development, the letter said, should be “refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal”.
ChatGPT, released in November 2022 by research lab OpenAI, employs what is known as generative artificial intelligence – which allows it to mimic humans in producing and analysing text, answering questions, taking tests, and performing other language and speech-related tasks. Terms previously limited to the world of computer programming, such as “large language model” and “natural language processing”, have become commonplace in a growing debate around ChatGPT and its many AI-based rivals, like Google’s Bard or Stability AI’s image generator, Stable Diffusion.
But so have stories of unhinged answers from chatbots and AI-generated “deepfake” images and videos – the Pope sporting a large white puffer jacket, Ukrainian President Volodymyr Zelenskyy appearing to surrender to Russian forces and, last week, an apparent explosion at the Pentagon.
Meanwhile, AI is already shaking up industries from the arts to human resources. Goldman Sachs warned in March that generative AI could affect as many as 300 million jobs worldwide. Other research suggests that teachers could be among the most affected.

Then there are more nightmarish scenarios of a world where humans lose control of AI – a common theme in science fiction that suddenly seems not quite so implausible. Regulators and legislators the world over are watching closely. And sceptics received an endorsement when computer scientist and psychologist Geoffrey Hinton, considered one of the pioneers of AI, left his job at Google to speak publicly about the “dangers of AI” and how it could become capable of outsmarting humans.
So, should we pause or stop AI before it’s too late?
The short answer: An industry-wide pause is unlikely. In May, OpenAI CEO Sam Altman told a US Senate hearing that the company would not train a new version of ChatGPT for six months, and he has used his newfound mainstream fame to call for more regulation. But competitors could still push ahead with their own work. For now, experts say, the biggest fear involves the misuse of AI to destabilise economies and governments. The risks extend beyond generative AI, and experts say it is governments – not the market – that must set guardrails before the technology goes rogue.
Science fiction no more
At the moment, much of the technology on offer revolves around what the European Parliamentary Research Service calls “artificial narrow intelligence” or “weak AI”. This includes “image and speech recognition systems … trained on well-labelled datasets to perform specific tasks and operate within a predefined environment”.