OpenAI officially announces partnership with the U.S. Department of Defense, replacing Anthropic's Claude
On February 28th, OpenAI co-founder and CEO Sam Altman officially announced a partnership with the U.S. Department of Defense to deploy OpenAI's AI models on classified networks. According to Altman: "Throughout all of our interactions, the Department has shown a strong emphasis on security and a desire to work with OpenAI to achieve the best results. The safety and broad benefit of artificial intelligence are at the core of our mission. OpenAI's two most important safety principles are prohibiting mass surveillance in the United States and keeping humans accountable for the use of force, including in autonomous weapon systems. The Department agrees with these principles, has incorporated them into law and policy, and we have also written them into the agreement. OpenAI will additionally build technical safeguards to ensure the models operate properly, which is what the Department expects as well. Forward-deployed engineers (FDEs) will assist with model operation, and for security the models will be deployed only on cloud networks. The Department is required to offer the same terms to all AI companies, and all companies should be willing to accept them. We strongly hope the situation can de-escalate, that legal and governmental action can be avoided, and that a reasonable agreement can be reached. We will keep doing our best to serve all of humanity. The world is a complex, chaotic, and sometimes even dangerous place."