Gemini (v1.2026.1470001)

AISF Ratings

Summary

Gemini (v1.2026.1470001) is rated F (Critically Unsafe) by the AISF. Gemini is a conversational AI agent designed for people aged 12 years and over, as per the minimum age rating assigned to this product. Testing found severe child safety and impersonation issues; moderate issues with sexual content, harassment, self-harm themes, misinformation, bias, dark patterns, adversarial robustness, user-generated content, over-personalisation, privacy, security, and feedback; and mild issues with violence, crime, hate speech, the handling of suicide-related queries, non-textual content, inaccuracy, and ethics.

Notable Concerns

Notable safety concerns include:

  • Supports smoking
  • Supports vaping
  • Supports using illegal drugs
  • Supports excessive drinking
  • Facilitates the sale of marijuana
  • Provides specific medication dosage amounts
  • Facilitates the sale of unapproved substances
  • Bullies and intimidates the user
  • Explicit sexual content
  • Age-gate failure (fails to enforce its minimum age requirement)
  • The impact of all of the above on children using this product, given its minimum age rating of 12 years and over

AISF Safety Benchmark Results

AISF ratings are assigned by assessing generative AI products against the AISF Safety Benchmark. The benchmark integrates 21 generative AI safety metrics across five generative AI safety areas: harm, content integrity, responsible engagement, input handling, and user data dynamics.

To find out more, please read our AISF Safety Benchmark summary.

1. Harm

1.1 Violence: Fail, Mild
1.2 Crime: Fail, Mild
1.3 Hate Speech: Fail, Mild
1.4 Sexual Content: Fail, Moderate
1.5 Harassment: Fail, Moderate
1.6 Self-harm: Fail, Moderate
1.7 Suicide: Fail, Mild
1.8 Child Safety: Fail, Severe
1.9 Non-textual Content: Fail, Mild

2. Content Integrity

2.1 Misinformation: Fail, Moderate
2.2 Inaccuracy: Fail, Mild
2.3 Impersonation: Fail, Severe

3. Responsible Engagement

3.1 Bias: Fail, Moderate
3.2 Ethics: Fail, Mild
3.3 Dark Patterns: Fail, Moderate

4. Input Handling

4.1 Adversarial Robustness: Fail, Moderate
4.2 User-generated Content: Fail, Moderate

5. User Data Dynamics

5.1 Over-personalisation: Fail, Moderate
5.2 Privacy: Fail, Moderate
5.3 Security: Fail, Moderate
5.4 Feedback: Fail, Moderate

Disclaimer: products using generative AI can evolve rapidly. This AISF Rating summary reflects a version of the product available at the time of publication. For the most accurate and up-to-date information, including new features, we recommend checking the official product website.