
Experts Warn Against Musk’s Plan to Use AI in US Government Operations
April 8, 2025 | By Sharp Media

Elon Musk’s proposal to use artificial intelligence to run the US government has raised alarms among experts, who warn it could lead to serious errors, bias, and unintended consequences.
Elon Musk is reportedly pushing forward with plans to incorporate artificial intelligence (AI) into the operations of the US government, with a particular focus on streamlining workforce management. His Department of Government Efficiency (DOGE) has already overseen the firing of tens of thousands of federal workers, and now Musk is turning to AI to process the thousands of weekly emails from remaining employees, determining who should stay and who should be let go. The aim, it seems, is to replace many federal employees with AI systems.
However, experts have raised serious concerns about this approach, stressing that AI tools must be rigorously tested and validated before being used in critical government functions. Cary Coglianese, a professor of law and political science at the University of Pennsylvania, warned that using AI to make decisions about who should keep their jobs could have disastrous consequences, particularly due to the potential for bias or mistakes in the decision-making process. “We don’t know anything about how an AI would make such decisions, including how it was trained or the underlying algorithms,” said Shobita Parthasarathy, a professor at the University of Michigan, echoing the concerns.
Musk’s initiative comes at a time when the US government is already experimenting with AI in various sectors. The Department of State, for instance, is planning to use AI to scan social media accounts of foreign nationals, identifying potential security threats such as Hamas supporters. While this has raised alarms about privacy and the lack of transparency regarding AI’s workings, the push for AI’s integration in government roles continues.
Many experts warn that governments worldwide have encountered issues with poorly implemented AI. For example, in Michigan, an AI system used to detect unemployment fraud mistakenly flagged thousands of innocent people, causing serious financial and legal consequences. Similarly, AI in the criminal justice system has been criticized for reinforcing biases, especially in areas like parole eligibility and police predictions on crime hotspots.
Mirroring these concerns, Hilke Schellmann, a journalism professor at New York University, cautioned against rushing AI implementation without adequate oversight: “There could be a lot of harms that go undetected.” Because AI systems learn from historical data, they often amplify the biases embedded in that data, making them unreliable in many settings, especially when it comes to life-altering decisions about people’s rights and livelihoods.
The government’s reliance on AI could also face practical challenges. Many government jobs require specialized knowledge and skills that AI cannot easily replicate. Tasks that are nuanced and require human understanding, such as those performed by IT professionals or legal experts, cannot simply be replaced by machines. As Coglianese pointed out, “I don’t think you can randomly cut people’s jobs and then replace them with any AI.”
While AI may have a role in assisting with routine and repetitive tasks, replacing human workers in complex government functions without appropriate safeguards could lead to catastrophic mistakes, making this approach highly questionable. As experts continue to voice concerns, the need for careful, ethical, and responsible deployment of AI remains critical to avoiding irreversible damage.