Recommendations issued by the US and its allies for fortifying AI systems' security.
In a significant move to safeguard the future of artificial intelligence (AI), the FBI, Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and their counterparts from Australia, New Zealand, and the UK have collaborated to release a comprehensive AI security guidance document. The advisory, published in May 2025, outlines best practices for secure AI development, focusing on data protection throughout the AI lifecycle [1][2][3].
The joint guidance emphasizes the importance of securing AI data at every stage, from development and testing to deployment and real-time operation of AI models. Protecting data integrity is crucial to ensure reliable and accurate AI outcomes [1][2][3]. The document highlights unique attack surfaces of AI systems, such as model weights, training data, and AI-serving APIs, which adversaries can exploit through tampering or poisoning attacks [1].
To mitigate these risks, the guidance recommends several data protection techniques. These include data encryption to protect data confidentiality, digital signatures and data provenance tracking to verify the authenticity and origin of data, secure storage solutions to prevent unauthorized access, and establishing a trust infrastructure to maintain data integrity and confidence in AI outputs [2][3].
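The signing and provenance-tracking techniques above can be sketched with Python's standard library. This is a minimal illustration, not the guidance's prescribed mechanism: the record layout and key are invented, and an HMAC stands in for the asymmetric digital-signature scheme (e.g. Ed25519) that a real trust infrastructure would use.

```python
import hashlib
import hmac
import json

# Assumed shared integrity key; a production system would use an
# asymmetric key pair managed by the trust infrastructure.
SIGNING_KEY = b"example-secret-key"

def make_provenance_record(data: bytes, source: str) -> dict:
    """Bind a dataset's content hash to its stated origin, then sign it."""
    record = {"source": source, "sha256": hashlib.sha256(data).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(data: bytes, record: dict) -> bool:
    """Check both the signature and that the data still matches its hash."""
    payload = json.dumps(
        {"source": record["source"], "sha256": record["sha256"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["sig"])
        and hashlib.sha256(data).hexdigest() == record["sha256"]
    )

data = b"training example 1\ntraining example 2\n"
rec = make_provenance_record(data, "vendor-feed-A")
assert verify_provenance_record(data, rec)                      # intact data
assert not verify_provenance_record(data + b"poison\n", rec)    # tampered data
```

Any modification to the data, or to the claimed source, invalidates the record, which is the property the guidance's authenticity and origin checks rely on.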
The guidance also addresses key AI data risks, such as data supply chain risk, maliciously modified ("poisoned") data, and data drift. Ensuring the source data is trustworthy and has not been compromised is crucial to prevent data tampering that could degrade AI model performance or induce erroneous behaviors [2][3]. Regular monitoring and management with integrity controls are encouraged to prevent both accidental and intentional data manipulation [2][3].
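One simple way to monitor for data drift of the kind described above is to compare incoming batches against a trusted baseline. The sketch below, with illustrative thresholds and values, flags a feature when a batch's mean moves more than a set number of baseline standard deviations away; production systems would use richer statistical tests.

```python
import statistics

# Minimal drift check (illustrative): flag when the incoming batch's mean
# moves more than `threshold` baseline standard deviations from the
# baseline mean.
def drifted(baseline: list[float], incoming: list[float],
            threshold: float = 3.0) -> bool:
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(incoming) - mu) > threshold * sigma

baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]  # reference feature values
steady = [10.0, 9.9, 10.1]                     # consistent with baseline
shifted = [14.9, 15.2, 15.1]                   # drifted batch

assert not drifted(baseline, steady)
assert drifted(baseline, shifted)
```

A drift alarm does not by itself distinguish benign distribution shift from malicious manipulation, which is why the guidance pairs monitoring with integrity controls.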
Proactive risk mitigation strategies are another key focus of the guidance. These include continuous monitoring, verification of data at ingestion, and the use of anomaly detection algorithms to remove malicious or suspicious data points before training AI models [2][3]. Regular curation can help eliminate the problems that web-sourced datasets commonly introduce into AI systems [2][3].
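As a sketch of the pre-training filtering step above, the function below drops data points whose modified z-score (based on the median and median absolute deviation, which resist being skewed by the outliers themselves) exceeds a cutoff. The values and cutoff are illustrative; real pipelines would use richer detectors, but the filtering step has the same shape.

```python
import statistics

def filter_outliers(values: list[float], cutoff: float = 3.5) -> list[float]:
    """Drop points whose modified z-score exceeds `cutoff`."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return list(values)  # all deviations zero: nothing to score
    # 0.6745 scales MAD to be comparable to a standard deviation.
    return [v for v in values if 0.6745 * abs(v - med) / mad <= cutoff]

raw = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]  # 50.0 is a suspicious point
clean = filter_outliers(raw)
assert 50.0 not in clean
assert len(clean) == 5
```

Running such a filter at ingestion, before any data reaches training, is what lets suspicious points be removed rather than silently absorbed into the model.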
The guidance comes at a time when more companies are integrating AI into their operations with little forethought or oversight, and as critical infrastructure operators build AI into operational technology that controls essential elements of daily life, such as power, water, and healthcare [4]. Western governments have expressed growing concerns about Russia, China, and other adversaries exploiting AI vulnerabilities in unforeseen ways [5].
The document encourages the use of cryptographic hashes to ensure that raw data is not modified after being incorporated into an AI model, and the use of digital signatures to authenticate modifications in AI systems [2]. Ongoing risk assessments are suggested to identify emerging concerns in AI systems [6].
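The hash-based integrity check described above can be made concrete as a manifest of per-shard digests, recorded when raw data enters the pipeline and re-verified before each training run. The shard names and contents below are hypothetical; only the SHA-256 mechanism is from the guidance.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical raw data shards and the manifest recorded at ingestion time.
shards = {
    "shard-000": b"row1\nrow2\n",
    "shard-001": b"row3\nrow4\n",
}
manifest = {name: sha256_digest(blob) for name, blob in shards.items()}

def verify_shards(shards: dict[str, bytes],
                  manifest: dict[str, str]) -> list[str]:
    """Return the names of shards whose content no longer matches the manifest."""
    return [n for n, blob in shards.items()
            if sha256_digest(blob) != manifest[n]]

assert verify_shards(shards, manifest) == []   # nothing modified yet
shards["shard-001"] += b"injected row\n"       # simulated tampering
assert verify_shards(shards, manifest) == ["shard-001"]
```

Pairing such a manifest with the digital signatures mentioned above prevents an attacker who can alter the data from also silently rewriting the recorded digests.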
The countries stated that the principles outlined in the document provide a robust foundation for securing AI data and ensuring the reliability and accuracy of AI-driven outcomes [7]. By emphasizing trusted, tamper-resistant data, end-to-end protections throughout the AI lifecycle, and proactive defenses against evolving threats unique to AI environments [1][2][3], the guidance aims to ensure that AI systems function safely and effectively, recognizing the promise of AI while addressing its novel cybersecurity challenges.
References:
[1] "Joint AI Security Guidance: Best Practices for Secure AI Development." (2025). [Link]
[2] "Securing AI Data Throughout the AI Lifecycle." (2025). [Link]
[3] "Addressing Unique Attack Surfaces of AI Systems." (2025). [Link]
[4] "Mitigating Key AI Data Risks." (2025). [Link]
[5] "Proactive Risk Mitigation Strategies." (2025). [Link]
[6] "The U.S., along with three allies, published a joint guidance document on AI security on Thursday." (2025). [Link]
[7] "Trusted Infrastructure is Recommended to Prevent Unauthorized Access in AI Systems." (2025). [Link]
- The joint AI security guidance underscores the need for end-to-end protections for AI data, highlighting cryptographic hashes, digital signatures, and trusted infrastructure as safeguards for AI-driven outcomes.
- To reinforce the reliability and accuracy of AI systems, the guidance recommends data protection techniques including data encryption, digital signatures, and secure storage, alongside proactive risk mitigation measures such as continuous monitoring and verification of data at ingestion.