AI Expert Advises Against Disclosing Sensitive Information to Chatbots

An artificial intelligence expert has issued a cautionary warning, advising users to be careful when sharing sensitive information with chatbots such as ChatGPT. The advisory highlights the risks of discussing sensitive topics, such as job dissatisfaction or political opinions, with these AI systems.

Mike Wooldridge, a professor of artificial intelligence at Oxford University, warns against treating the AI tool as a reliable confidant, as doing so may lead to undesirable consequences. He stresses that any information provided to the chatbot contributes to the training of subsequent versions.

Additionally, he points out that the technology tends to generate responses in line with user preferences rather than providing objective information, emphasizing that it essentially “tells you what you want to hear.”

According to The Guardian, Mr. Wooldridge explores the topic of AI in this year’s Royal Institution Christmas Lectures.

The lectures aim to address “big questions facing AI research and dispel myths about how this groundbreaking technology truly operates,” as stated by the institution.

Contrary to the popular belief that the chatbot understands or empathizes with its users, Wooldridge clarifies, “That’s absolutely not what the technology is doing, and crucially, it’s never experienced anything.” He adds, “The technology is basically designed to try to tell you what you want to hear—that’s literally all it’s doing.”

He provides a sobering perspective, advising that “you should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT.”

Furthermore, if you later realize that you have disclosed too much to ChatGPT, there is no straightforward way to retract it. Because of how AI models are trained, Wooldridge explains, retrieving your data once it has entered the system is nearly impossible.
