Skip to content

NIST strives to prevent the creation of new wheels in AI security recommendations

Cybersecurity experts aim to navigate AI's complex influence on their profession, avoiding an overwhelming influx of fresh instructions from NIST, while maintaining their work efficiency.

Federal agency NIST spurs to prevent duplication in developing AI security guidance
Federal agency NIST spurs to prevent duplication in developing AI security guidance

NIST strives to prevent the creation of new wheels in AI security recommendations

In a bid to ensure responsible development of AI innovations, the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act has been reintroduced in the Senate by Sens. John Hickenlooper (D-Colo.) and Shelley Moore Capito (R-W.V.). The Act, which focuses on AI cybersecurity, directs the National Institute of Standards and Technology (NIST) to collaborate with the Energy Department and the National Science Foundation on voluntary guidelines for AI system development and testing [1].

NIST is actively developing guidelines for cybersecurity in the context of artificial intelligence (AI) through its Cybersecurity, Privacy, and AI program. The forthcoming Cyber AI Profile, which will be integrated as a community profile within NIST’s Cybersecurity Framework 2.0, aims to address three key sources of AI-related risks [1].

Firstly, the Cyber AI Profile will focus on cybersecurity and privacy risks arising from the use of AI systems themselves. This includes securing AI infrastructure, mitigating data leakage, and managing risks in the AI data supply chain [1].

Secondly, the Profile will defend against AI-enabled cyber attacks. NIST recognizes the growing threat of AI-driven attacks and the automation of malicious actions, and the Profile will aim to enhance detection and response capabilities within cybersecurity operations [1].

Thirdly, the Profile will leverage AI for cyber defense and privacy protection. This includes using AI to advance cybersecurity measures and improve privacy enhancement [1].

NIST's approach involves practices such as red teaming—simulated adversarial attacks—to challenge AI systems under development and evaluate security robustness [3]. The Cyber AI Profile development has received feedback from Chief Information Security Officers (CISOs) and other stakeholders [1].

The Trump administration is focusing NIST's Center for AI Standards and Innovation on measuring and evaluating AI models. Michael Kratsios, director of the Office of Science and Technology Policy, stated that understanding the measurement science of models is important for industry, specifically in ensuring client or customer data isn't being siphoned off by AI models [2].

CISOs have expressed interest in how NIST's guidelines overlap with the emerging world of AI. They have also requested that NIST does not reinvent the wheel in terms of additional guidance [1]. NIST standards for model evaluation could be valuable to industry in making decisions about AI deployment [1].

Workshops on the cyber AI profile are being hosted by NIST this month [1]. NIST is likely to publish a preliminary draft of the Cyber AI Profile for public comment following these workshops [1]. The goal of the NIST project is to help CISOs understand the implications of AI on achieving cybersecurity outcomes [1].

The VET AI Act includes provisions for internal assurance work, third-party verification, and red teaming of AI systems [1]. The Act aims to ensure that AI innovations are developed responsibly to benefit all Americans [1].

The NIST project has identified three distinct ways AI impacts cybersecurity: securing AI systems and components, adversarial use of AI in the cyber domain, and using AI to advance cybersecurity measures [1]. The project will help CISOs map different aspects of AI to the Cybersecurity Framework and other NIST guidelines, including the AI Risk Management Framework [1].

In conclusion, NIST's ongoing work reflects a forward-looking and community-engaged strategy to address AI-specific cybersecurity challenges by integrating feedback from industry leaders and security experts, such as CISOs, to ensure guidelines are practical and responsive to evolving threats and technologies [1][3].

[1] National Institute of Standards and Technology (NIST) - Cybersecurity, Privacy, and AI Program: https://www.nist.gov/itl/cybersecurity-program/cybersecurity-privacy-and-ai-program [2] White House Office of Science and Technology Policy (OSTP) - AI Risk Management Framework: https://www.whitehouse.gov/artificial-intelligence/ai-risk-management-framework/ [3] National Institute of Standards and Technology (NIST) - Cybersecurity for Artificial Intelligence: https://www.nist.gov/itl/cybersecurity-program/cybersecurity-for-artificial-intelligence

  1. The federal workforce, specifically Chief Information Security Officers (CISOs), have shown interest in NIST's guidelines for artificial-intelligence (AI) cybersecurity, as they could be valuable in making decisions about AI deployment in business.
  2. The ongoing project by the National Institute of Standards and Technology (NIST) aims to help CISOs understand the implications of AI on achieving cybersecurity outcomes, with a focus on securing AI systems and components, adversarial use of AI in the cyber domain, and using AI to advance cybersecurity measures.
  3. The VET AI Act, reintroduced in the Senate, addresses the field of AI cybersecurity, directing the National Institute of Standards and Technology (NIST) to collaborate with other agencies on voluntary guidelines for AI system development and testing.
  4. NIST's approach to AI cybersecurity includes practices like red teaming—simulated adversarial attacks—to evaluate security robustness, and the upcoming Cyber AI Profile will also focus on AI-enabled cyber attacks and AI for cyber defense and privacy protection.
  5. The goals of the VET AI Act and NIST's project align, as they both aim to ensure that AI innovations are developed responsibly to benefit all Americans, with a focus on policy-and-legislation, technology, cybersecurity, and general-news.

Read also:

    Latest