The National Institute of Standards and Technology (NIST) has introduced the Assessing Risks and Impacts of AI (ARIA) program to evaluate how artificial intelligence systems affect society when used regularly in real-world scenarios. This initiative will help quantify AI system performance within societal contexts, contributing to the development of trustworthy AI systems. ARIA … [Read more...] about NIST Launches ARIA, a New Program to Advance Sociotechnical Testing and Evaluation for AI
A Comprehensive Guide to Understanding the Role of ISO/IEC 42001 (AI Management Standard)
Artificial intelligence (AI) is transforming industries with applications like hyper-personalization, automation, and predictive analytics. However, this rapid advancement necessitates responsible development and ethical practices. The ISO/IEC 42001 standard, published in 2023, addresses these needs by providing guidelines for implementing, maintaining, and improving an AI … [Read more...] about A Comprehensive Guide to Understanding the Role of ISO/IEC 42001 (AI Management Standard)
Securing Generative AI with Non-Human Identity Management and Governance
The rapid innovation in generative AI technologies brings unique risks and security needs. As businesses seek value from AI-driven applications, ensuring their safe usage and implementation is crucial. Non-human identity (NHI) governance helps protect data privacy and integrity in applications built on the Retrieval-Augmented Generation (RAG) … [Read more...] about Securing Generative AI with Non-Human Identity Management and Governance
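The idea of NHI governance in a RAG application can be sketched as a permission check on the service identity before any retrieval happens. This is a minimal, illustrative sketch: the `ServiceIdentity` class, scope names, and `retrieve` function are hypothetical and not drawn from any specific framework.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceIdentity:
    """A non-human identity (e.g. a chatbot's service account) with granted scopes."""
    name: str
    scopes: set = field(default_factory=set)

# Toy document store standing in for a vector database's collections.
DOC_STORE = {
    "public":  ["Product FAQ", "Release notes"],
    "finance": ["Q3 revenue forecast"],
}

def retrieve(identity: ServiceIdentity, collection: str) -> list:
    """Return documents only if the NHI holds a read scope for the collection."""
    if f"read:{collection}" not in identity.scopes:
        raise PermissionError(f"{identity.name} lacks read:{collection}")
    return DOC_STORE.get(collection, [])

bot = ServiceIdentity("rag-chatbot", {"read:public"})
print(retrieve(bot, "public"))   # allowed: ['Product FAQ', 'Release notes']
# retrieve(bot, "finance") would raise PermissionError
```

The design point is that the retrieval layer, not the LLM, enforces the identity's entitlements, so a compromised prompt cannot widen what the application can read.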
Organizations unready for AI pose increasing security risks
Implementing generative AI technologies in business presents significant cybersecurity risks often overshadowed by the enthusiasm to stay competitive. While generative AI offers benefits like enhanced productivity and improved performance in various sectors, the rapid adoption without adequate safety measures can expose companies to data breaches and malicious attacks. The hype … [Read more...] about Organizations unready for AI pose increasing security risks
4 use cases for AI in cyber security
Artificial intelligence (AI) is increasingly integrated into various facets of life, including cybersecurity. AI's ability to simulate human intelligence through pattern recognition, learning, and problem-solving makes it a powerful tool for enhancing product security. In cybersecurity, AI is employed to automate, analyze, and improve processes such as log analysis, threat … [Read more...] about 4 use cases for AI in cyber security
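The log-analysis use case above can be illustrated with a tiny anomaly-flagging sketch. A real deployment would use a trained model; this stdlib-only example, with an assumed `flag_anomalies` helper and threshold, shows only the automation pattern of surfacing event types whose frequency deviates sharply from the rest.

```python
from collections import Counter

def flag_anomalies(events: list[str], threshold: float = 2.0) -> list[str]:
    """Flag event types occurring more than `threshold` times the mean rate."""
    counts = Counter(events)
    mean = sum(counts.values()) / len(counts)
    return [event for event, count in counts.items() if count > threshold * mean]

# A burst of failed logins stands out against routine events.
logs = ["login_ok"] * 5 + ["login_fail"] * 40 + ["logout"] * 5
print(flag_anomalies(logs))  # ['login_fail']
```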
ISO 42001: A New AI Management System for the Trustworthy Use of AI
With the rapid advancement and integration of Artificial Intelligence (AI) into organizational operations, concerns around AI's security, privacy, fairness, and transparency have become more pronounced. Recognizing these concerns, ISO published ISO 42001 in December 2023, a standard establishing safeguards and best practices for an AI Management System (AIMS). This new … [Read more...] about ISO 42001: A New AI Management System for the Trustworthy Use of AI
Is your cloud security strategy ready for LLMs?
The rapid adoption of large language models (LLMs) in enterprise environments in 2024 has brought new cybersecurity challenges to the forefront, particularly concerning data leakage and the complexities introduced by integrating LLMs with cloud services. With enterprises increasingly hosting multiple iterations of LLMs across their cloud environments, the risk landscape … [Read more...] about Is your cloud security strategy ready for LLMs?
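One common mitigation for the data-leakage risk described above is redacting sensitive strings from prompts before they leave the enterprise boundary for a cloud-hosted LLM. The sketch below is illustrative only: the two regex patterns (emails, AWS-style access key IDs) are assumptions standing in for a real data-loss-prevention policy.

```python
import re

# Illustrative redaction patterns; a production DLP filter would cover far more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def redact(prompt: str) -> str:
    """Replace each matched sensitive string with a bracketed placeholder label."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact alice@example.com, key AKIAABCDEFGHIJKLMNOP"))
# Contact [EMAIL], key [AWS_KEY]
```

Running the filter in a gateway in front of every hosted-LLM call keeps the policy in one place, regardless of how many model iterations the enterprise deploys.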