Elon Musk wants to use AI to run US gov’t, but experts say ‘very bad’ idea

Is Elon Musk planning to use artificial intelligence to run the US government? That seems to be his plan, but experts say it is a “very bad idea”.

Musk has fired tens of thousands of federal government employees through his Department of Government Efficiency (DOGE), and he reportedly requires the remaining workers to send the department a weekly email featuring five bullet points describing what they accomplished that week.

Since that will no doubt flood DOGE with hundreds of thousands of these emails, Musk is relying on artificial intelligence to process the responses and help determine who should remain employed. Part of that plan, reportedly, is also to replace many government workers with AI systems.

It’s not yet clear what any of these AI systems look like or how they work – something Democrats in the United States Congress are demanding answers about – but experts warn that using AI in the federal government without robust testing and verification of these tools could have disastrous consequences.

“To use AI tools responsibly, they need to be designed with a particular purpose in mind. They need to be tested and validated. It’s not clear whether any of that is being done here,” says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.

Coglianese says that if AI is being used to make decisions about who should be terminated from their job, he’d be “very sceptical” of that approach. There is, he says, a very real potential for mistakes to be made, for the AI to be biased, and for other problems to arise.

“It’s a very bad idea. We don’t know anything about how an AI would make such decisions [including how it was trained and the underlying algorithms], the data on which such decisions would be based, or why we should believe it is trustworthy,” says Shobita Parthasarathy, a professor of public policy at the University of Michigan.

Those concerns don’t seem to be holding back the current government, especially with Musk – a billionaire businessman and close adviser to US President Donald Trump – leading the charge on these efforts.

The US Department of State, for instance, is planning on using AI to scan the social media accounts of foreign nationals to identify anyone who may be a Hamas supporter, in an effort to revoke their visas. The US government has so far not been transparent about how these kinds of systems might work.

Undetected harms
“The Trump administration is really interested in pursuing AI at all costs, and I would like to see a fair, just and equitable use of AI,” says Hilke Schellmann, a professor of journalism at New York University and an expert on artificial intelligence. “There could be a lot of harms that go undetected.”

AI experts say there are many ways in which government use of AI can go wrong, which is why it needs to be adopted carefully and conscientiously. Coglianese says governments around the world, including the Netherlands and the United Kingdom, have had problems with poorly executed AI that made mistakes or showed bias, and that as a result wrongfully denied residents welfare benefits they needed.

In the US, the state of Michigan ran into problems with an AI system used to detect fraud in its unemployment system when it incorrectly flagged thousands of cases of alleged fraud. Many of those denied benefits were dealt with harshly, including being hit with multiple penalties and accused of fraud. Some were arrested and even filed for bankruptcy. After five years, the state admitted the system was faulty, and a year later it refunded $21m to residents wrongly accused of fraud.
