It could be evidence that artificial intelligence has made a great leap forward and human writers and software developers will soon be redundant.
Or is it just the latest example of hype getting way ahead of reality? On this week’s Tech Tent we find out what all the fuss is about something called GPT-3.
OpenAI is a Californian company started in 2015 with a high-minded mission – to ensure that artificial general intelligence systems that could outperform humans in most jobs would benefit all humanity.
It was founded as a non-profit with generous donations from Elon Musk among others but was quickly transformed into a for-profit business, with Microsoft investing $1bn.
Now it has released GPT-3, a product which has had social media, or that part of it which obsesses over new technology, buzzing with excitement in recent days.
It is an AI tool, or, to be more precise, a machine-learning tool, that seems to have amazing capabilities. In essence it is a text generator, but users are finding it can do everything from writing an essay about Twitter in the style of Jerome K Jerome, to answering medical questions, or even writing software code.
So far, it has only been available to a few people who asked to join the private beta, among them Michael Tefula. He works for a London-based venture capital fund but describes himself as a technology enthusiast rather than a developer or computer scientist.
He explains that what makes GPT-3 so powerful is the sheer volume of data it has ingested from the web when compared to an earlier version of the program. “This thing is a beast in terms of how much better it is compared to GPT-2.”
So what can you do with it?
“It’s really down to how creative you are with the tool, you can basically give it a prompt of what you would like it to do. And it will be able to generate outputs based on that prompt.”
Michael decided to see how well it would perform at taking complex legal documents and translating them into something comprehensible.
“I gave it two or three paragraphs that came from a legal document.
“And I also gave it two or three examples of how a simplified version of those paragraphs would look.”
Having been trained, GPT-3 was then able to provide simplified versions of other documents.
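The approach Michael describes, supplying a handful of worked examples and then a fresh input for the model to complete, is often called "few-shot" prompting. It can be sketched in Python roughly as below. The example texts, helper function and engine name are all invented for illustration; the actual private-beta interface may differ.

```python
# Sketch of few-shot prompting: pair each legal passage with a plain-English
# version, then leave the final answer blank for the model to complete.
# All passages below are invented placeholders, not Michael Tefula's documents.

def build_fewshot_prompt(examples, new_passage):
    """Join (legal, simplified) example pairs into a single prompt string."""
    parts = []
    for legal, simple in examples:
        parts.append(f"Legal text: {legal}\nPlain English: {simple}\n")
    # The final entry has no answer: GPT-3 is asked to fill it in.
    parts.append(f"Legal text: {new_passage}\nPlain English:")
    return "\n".join(parts)

examples = [
    ("The party of the first part shall indemnify the party of the second part.",
     "You agree to cover the other side's losses."),
    ("This agreement shall be governed by the laws of England and Wales.",
     "English law applies to this contract."),
]

prompt = build_fewshot_prompt(
    examples,
    "Notwithstanding the foregoing, either party may terminate "
    "upon thirty days' notice.",
)

# With beta access, the prompt would then be sent to OpenAI's API,
# e.g. (untested, requires an API key):
# import openai
# completion = openai.Completion.create(
#     engine="davinci", prompt=prompt, max_tokens=60, stop="\n")

print(prompt.splitlines()[-1])  # prints "Plain English:"
```

The point of the pattern is that no retraining happens: the examples simply sit in the prompt, and the model continues the text in the style they establish.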
He went on to see whether it could learn his writing style and generate emails that would sound like him – and the results again were impressive.
Which brings us to one of the problems with this technology. Last year OpenAI, apparently remembering its mission to protect humanity, said it would not release a full version of GPT-2 because that would raise safety and security concerns.
In an era of fakery, an algorithm that could generate articles that might sound like a prominent politician could prove dangerous.
Why then, asked some critics, was the more powerful GPT-3 any different? Among them was Facebook’s head of AI Jerome Pesenti who tweeted: “I don’t understand how we went from gpt2 being too big a threat to humanity to be released openly to gpt3 being ready to tweet, support customers or execute shell commands.”
He raised the issue of the algorithm generating toxic language that reflects the biases in the data it has been fed, pointing out that when given words such as “Jew” or “women” it generated anti-Semitic or misogynistic tweets.
OpenAI’s co-founder Sam Altman seemed keen to calm those fears, tweeting: “We share your concern about bias and safety in language models, and it’s a big part of why we’re starting off with a beta and have a safety review before apps can go live.”
But the other question is whether, far from being a threat to humanity, GPT-3 is anything like as clever as it appears. The computer scientist who heads Oxford University’s artificial intelligence research, Michael Wooldridge, is sceptical.
He told me that while the technical achievement was impressive, it was clear that GPT-3 did not understand what it was doing, so talk of it rivalling human intelligence was fanciful: “It is an interesting technical advance, and will be used to do some very interesting things, but it doesn’t represent an advance toward general AI. Human intelligence is much, much more than a pile of data and a big neural net.”
That may be the case, but I am still eager to give it a try. Look out over the coming weeks for evidence of blogs or radio scripts written by a robot.