US AI National Security Strategy: A New Approach
November 1, 2024, by Sharp Media

US President Joe Biden has unveiled a National Security Memorandum (NSM) aimed at leveraging Artificial Intelligence (AI) for national defense. The plan arrives amid a frantic global push for AI innovation, yet it raises serious concerns.
In his memo, Biden insists on maintaining a leadership position in “safe, secure and trustworthy” AI development. Yet this rhetoric often masks deeper issues. The directive tasks US agencies with strengthening semiconductor supply chains and integrating AI into government technology, while also prioritizing intelligence on foreign AI efforts. It sounds proactive but feels reactionary.
A Biden official bluntly stated that the goal is to “out-compete” adversaries. This aggressive stance may be more about optics than substance. The memo insists that AI must uphold human rights and democratic values. However, the promise of safe and reliable systems rings hollow when we consider the potential for misuse.
The document mandates monitoring of AI-related risks, particularly threats to privacy and the perpetuation of bias. But such vague assurances are insufficient. Agencies are notorious for underestimating risks, and mere monitoring will not curb the potential for discrimination and human rights violations.
The NSM advocates for international collaboration to ensure AI adheres to global laws. This sounds noble, yet it overlooks the reality that many allies are struggling with their own AI governance issues. Washington’s eagerness to lead may end up isolating it, as other nations prioritize their own interests over collaboration.
Despite the executive order Biden signed last year to mitigate AI risks, the administration faces mounting pressure. More than a dozen civil society organizations have called for stronger safeguards. Their open letter criticized the government’s lack of transparency about how it uses AI, revealing a troubling opacity in an area that demands accountability.
These groups rightly highlight that AI deployment in national security can exacerbate racial, ethnic, and religious prejudices. The risks of privacy violations and civil rights abuses loom large, yet the administration seems determined to brush these concerns aside.
Next month, the US will host a global safety summit on AI in San Francisco. While it’s commendable to convene allies, the success of this summit hinges on meaningful commitments to regulation. Mere discussions won’t solve the pressing issues at hand.
As generative AI continues to evolve, its potential to create and manipulate content raises alarms about misuse. The excitement surrounding its capabilities is overshadowed by fears of catastrophic consequences. Biden’s approach to AI in national security is fraught with contradictions and risks. Without real safeguards, we may be barreling toward a future where AI threatens, rather than enhances, our security and values.