OpenAI's recent deal with the US military has sparked controversy, drawing backlash from users and the public. The initial agreement, criticized as "opportunistic and sloppy," has since been revised to address concerns over the use of AI in war and the power dynamics between government and private entities. OpenAI CEO Sam Altman acknowledged that the initial release was rushed and emphasized the complexity of the issues involved.
The revised agreement includes stricter guidelines, such as prohibiting the system from being used for domestic surveillance of US citizens and requiring a follow-on contract modification before intelligence agencies such as the National Security Agency can use it. The changes come amid a surge in ChatGPT uninstalls: Sensor Tower reported a 200% increase in daily uninstall rates following the announcement.
The controversy extends to AI's broader role in military operations. Palantir's data analytics tools are used by the US, Ukraine, and NATO for intelligence gathering, surveillance, and counterterrorism, and the integration of Maven, Palantir's AI-powered defense platform, into NATO has raised questions about the ethical implications of AI in warfare. Proponents such as Louis Mosley counter that AI can make military decision-making faster and more efficient.
However, the potential for AI to make mistakes or "hallucinate" remains a concern. Lieutenant Colonel Amanda Gustave insists that human oversight is maintained and that AI does not make decisions independently. The absence from the Pentagon of Anthropic, a company that refused to create fully autonomous weapons, has also sparked debate: Professor Mariarosaria Taddeo warns that it may lead to less safety-conscious practices in AI development and use.