
US, UK, and Allies Forge Pact to Safeguard AI Security

The pact underscores the crucial need for companies to prioritize the security of AI systems, advocating for the adoption of a "secure by design" approach.

By Inc.Arabia Staff

The US, the UK, and a coalition comprising more than a dozen nations have unveiled a groundbreaking agreement to protect artificial intelligence (AI) from potential misuse, as reported by Reuters.[1]


Though non-binding, the agreement, set out in a 20-page document, offers key recommendations to steer companies toward the responsible development and deployment of AI.

The 18 participating countries agreed that AI systems should be designed and used in ways that keep customers and the wider public safe, with a particular focus on guarding against misuse.

Jen Easterly, the director of the US Cybersecurity and Infrastructure Security Agency, remarked on the groundbreaking nature of the affirmation, stating, "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs."

Although the agreement's recommendations are largely general, covering monitoring AI systems for abuse, protecting data against tampering, and vetting software suppliers, it marks a significant step toward a global consensus on the central role of security in AI development.

Among the 18 endorsing countries are Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.

The framework also addresses concerns about keeping AI technology out of the wrong hands, recommending that models undergo rigorous security testing before release.

The agreement is the latest in a series of initiatives by governments around the world to shape the development of AI. 
