Navigating safety in the world of generative AI can feel overwhelming, with new products emerging constantly. The Artificial Intelligence Safety Forum (AISF) was created to help solve this problem. We are a nonprofit organisation that independently evaluates and rates products that use generative AI using a rigorous, evidence-based system. Our aim is to provide you with the information and insights you need to make well-informed choices for yourself, your family, and your business.
We rate digital and physical products that use generative AI, giving each a clear safety assessment. Each grade reflects the product's performance across key safety categories, from harm mitigation to data privacy.
In addition to ratings, we provide advisory services for developers and organisations. If a team wants to strengthen safeguards or improve trust in their product, we can work with them directly – even if no public rating is requested.
We provide practical guidance and insights to help everyone make smarter, safer choices about products that use generative AI. We offer clear recommendations on assessing product safety, understanding privacy risks, and evaluating reliability, helping you to navigate this new landscape with confidence.
We conduct independent, evidence-based research that drives our safety ratings and deepens public understanding of products that use generative AI. We rigorously evaluate products to uncover potential harms, assess their security, and provide the data needed for a safer digital environment.
We work to create a safer world by driving change in policy and industry practices for generative AI. We champion stronger safety standards, push for greater transparency from tech companies, and ensure that generative AI is developed and deployed responsibly.
The Generative AI Safety Watchlist is a public record of current and emerging safety issues identified across products that use generative AI. It highlights patterns of risk, tracks responses from developers and platforms, and promotes transparency and accountability in the rapidly growing generative AI ecosystem.
Latest Active Safety Issues
Select a product to view more detail.