What happens when you think AI is lying about you?

Imagine the scene: you’re at home with your family and your phone starts pinging… people you know are warning you about something they’ve seen about you on social media.

It’s not the best feeling.

In my case, it was a screenshot, apparently taken from Elon Musk’s chatbot Grok (though I couldn’t verify it), placing me on a list of the worst spreaders of disinformation on X (formerly Twitter), alongside some prominent US conspiracy theorists.

I had nothing in common with them, and as a journalist, this was not the sort of top 10 I wanted to feature in.

I don’t have access to Grok in the UK so I asked both ChatGPT and Google’s Bard to make the same list, using the same prompt. Both chatbots refused, with Bard responding that it would be “irresponsible” to do so.

I’ve done a lot of reporting about AI and regulation, and one of the big worries people have is how our laws keep up with this fast-changing and highly disruptive tech.

Experts in several countries agree that humans must always be able to challenge AI actions, and AI tools are increasingly both generating content about us and making decisions about our lives.

There is no dedicated AI regulation in the UK yet, but the government says concerns about AI activity should be folded into the work of existing regulators.

I decided to try to put things right.

[Image: Zoe Kleinman, credit Robert Timothy. Caption: Zoe writes regularly about AI and regulation]

My first port of call was X – which ignored me, as it does most media queries.

I then tried two UK regulators. The Information Commissioner’s Office is the government agency for data protection, but it suggested I go to Ofcom, which polices the Online Safety Act.

Ofcom told me the list wasn’t covered by the act because it wasn’t criminal activity.

“Illegal content… means that the content must amount to a criminal offence, so it doesn’t cover civil wrongs like defamation. A person would have to follow civil procedures to take action,” it said.

Essentially, I would need a lawyer.

There are a handful of ongoing legal cases round the world, but no precedent as yet.

In the US, a radio presenter called Mark Walters is suing ChatGPT creator OpenAI after the chatbot falsely stated that he had defrauded a charity.

And a mayor in Australia threatened similar action after the same chatbot wrongly said he had been found guilty of bribery. He was in fact a whistleblower – the AI tool had joined the wrong dots in its data about him. He settled the case.

I approached two lawyers with AI expertise. The first turned me down.

The second told me I was in “uncharted territory” in England and Wales.

She confirmed that what had happened to me could be considered defamation, because I was identifiable and the list had been published.

But she also said the onus would be on me to prove the content was harmful. I’d have to demonstrate that being a journalist accused of spreading misinformation was bad news for me.

I didn’t know how I had ended up on that list, or exactly who had seen it. It was immensely frustrating that I couldn’t access Grok myself. I do know it has a “fun mode”, for spikier responses – was it messing with me?

AI chatbots are known to “hallucinate”, which is big-tech speak for making things up. Not even their creators know why. They carry a disclaimer saying their output may not be reliable. And you don’t necessarily get the same answer twice.

Final plot twist

There was one final twist: I turned to a team of journalists who forensically check information and sources.

They did some digging, and they think the screenshot that accused me of spreading misinformation and kicked off this whole saga might have been faked in the first place.

The irony is not lost on me.

But my experience opened my eyes to just one of the challenges that lie ahead as AI plays an increasingly powerful part in our lives.

The task for AI regulators is to make sure there’s always a straightforward way for humans to challenge the computer. If AI is lying about you – where do you start? I thought I knew, but it was still a difficult path.
