
- NIST has launched the Assessing Risks and Impacts of AI (ARIA) program to evaluate AI systems’ societal risks and impacts in realistic settings.
- ARIA aims to develop methods to quantify how AI systems function within societal contexts, supporting the foundation for trustworthy AI systems.
- The program will help ensure that AI technologies, once deployed, are valid, reliable, safe, secure, private, and fair.
The National Institute of Standards and Technology (NIST) has introduced the Assessing Risks and Impacts of AI (ARIA) program to evaluate how artificial intelligence systems affect society when used regularly in real-world settings. The initiative will help quantify AI system performance within societal contexts, contributing to the development of trustworthy AI systems.
ARIA supports the U.S. AI Safety Institute’s efforts by providing a framework to test and evaluate AI technologies, ensuring they are valid, reliable, safe, secure, private, and fair. This program is part of NIST’s broader engagement with the research community and aligns with recent directives under President Biden’s Executive Order on trustworthy AI.
NIST’s ARIA program builds on the AI Risk Management Framework released in January 2023, emphasizing both quantitative and qualitative techniques for assessing AI risks. ARIA will develop new methodologies and metrics to measure how AI systems function and what impact they have in realistic settings, offering a fuller understanding of AI’s societal effects.
The results from ARIA will inform NIST’s broader efforts to create safe, secure, and trustworthy AI systems and will support the U.S. AI Safety Institute’s mission. The program aims to address real-world needs and deepen the understanding of AI’s capabilities and impacts as its use grows.