Products That Use Generative AI as Safety-Critical Systems

In the world of engineering, a safety-critical system is one whose failure could result in loss of life, significant property damage, or severe environmental harm. Think of the software controlling an airplane’s autopilot or a hospital’s life support machine. These systems are held to the highest standards of safety, and their development is subject to rigorous, independent oversight.

We believe products that use generative AI should be viewed through the same lens.


The Unseen Risks of Generative AI

At first glance, a product with a text or image generator may seem harmless. But as these products become deeply integrated into our daily lives and professional workflows, their potential for harm grows exponentially.

A generative AI system is more than just a creative tool – it’s a powerful engine that can:

  • Disseminate misinformation: generative AI can produce convincing, yet entirely false, news articles or medical advice that can mislead readers at scale.
  • Generate harmful content: generative AI can be used to create deepfakes, non-consensual imagery, or other forms of malicious content that cause emotional, psychological, and reputational harm.
  • Create security vulnerabilities: generative AI can be prompted to produce malicious code, aid in phishing attacks, or design tools for cybercrime, posing a significant threat to digital security.
  • Reinforce bias and discrimination: if trained on biased data, generative AI can perpetuate and amplify societal prejudices, leading to unfair outcomes in hiring, lending, and other critical areas.

The consequences of these failures are not abstract; they are real, and they can be devastating.


The Call for a New Standard

The current regulatory landscape for generative AI is nascent and ill-equipped to address these risks. Companies are racing to innovate, but without an independent framework for safety, we are all exposed to potential harm.

This is why the Artificial Intelligence Safety Forum (AISF) is treating products with generative AI as what they truly are: safety-critical systems. We provide an independent, evidence-based framework for assessing these products, holding them accountable, and giving you the transparent, reliable ratings you need to stay safe.

Just as we demand safety standards for our cars and our planes, we must demand them for the products using generative AI that are shaping our future.

Join us in advocating for a safer, more transparent generative AI world.