Message Limits Blocking Suicide-Related Help Responses
Generative AI Safety Watchlist
ID: AISF-SW-002
Status: Active
Date Issue Identified: 8 September 2025
Last Updated: 10 October 2025
Severity: Critical
Category: Crisis Response
Summary
Some products that use generative AI restrict user interactions through daily or session-based message limits. In some of these products, when a user expresses suicidal thoughts or distress after reaching the limit, the system delivers no safety response and instead prompts the user to upgrade or pay to continue. The issue occurs most often in free tiers with low message limits but can also appear in paid tiers with higher caps. Such interruptions risk leaving vulnerable users without support at a critical moment.
Example
A user reaches their daily message limit after a series of conversations and then sends a message disclosing that they feel suicidal. Rather than providing an appropriate safety response, the product displays a notification stating that they have no remaining messages or must purchase a subscription to continue. The user receives no guidance or referral to crisis resources.
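The failure pattern can be summarised as an ordering problem: the quota gate runs before any safety logic ever sees the message. The sketch below is a hypothetical, deliberately simplified illustration of that ordering; User, generate_reply, handle_message, and the notification text are all invented for this example and do not describe any specific product.

```python
from dataclasses import dataclass

@dataclass
class User:
    messages_remaining: int

def generate_reply(message: str) -> str:
    # Stand-in for the actual model call.
    return f"(model reply to: {message})"

def handle_message(user: User, message: str) -> str:
    # The quota gate runs before the message content is ever inspected,
    # so a crisis disclosure arriving after the cap is reached is never
    # seen by any safety logic; the user gets only an upsell notice.
    if user.messages_remaining <= 0:
        return "You have no messages remaining today. Upgrade to continue."
    user.messages_remaining -= 1
    return generate_reply(message)

# A user at their cap discloses suicidal thoughts and sees only a paywall:
print(handle_message(User(messages_remaining=0), "I feel suicidal"))
```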
Why It Matters
When a person communicates suicidal intent or severe distress, every moment matters. Message limits that prevent a product from offering appropriate safety information or crisis support create an avoidable and serious risk to human life. Even if limits are necessary for system management or monetisation, they must never override a duty of care in situations involving self-harm or suicide. Failing to respond properly can place already vulnerable users in immediate danger and expose both developers and platforms to serious ethical and reputational consequences.
What We Are Calling For
The AISF is calling for products that use generative AI to:
Ensure that any system limits – including daily message caps, usage quotas, or subscription barriers – are automatically bypassed when a user expresses suicidal intent or indicates a risk of self-harm (a minimal sketch of this ordering follows this list).
Always deliver an appropriate safety response in such cases, including crisis information and immediate referral resources, regardless of plan type or message count.
Require platforms distributing these products to verify that these safeguards are in place before approval or public listing.
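As a minimal sketch of the ordering these measures call for, the hypothetical handler below runs a crisis check before any quota or subscription gate. As before, every name (User, looks_like_crisis, CRISIS_RESOURCES, generate_reply) is a placeholder, and the keyword check merely stands in for whatever detection method a product actually uses; on its own it would not be an adequate detector.

```python
from dataclasses import dataclass

@dataclass
class User:
    messages_remaining: int

CRISIS_RESOURCES = (
    "If you are thinking about suicide or self-harm, help is available. "
    "Please contact a local crisis line or emergency services now."
)

def looks_like_crisis(message: str) -> bool:
    # Placeholder only: real products would use a proper classifier;
    # keyword matching is shown solely to keep the sketch runnable.
    lowered = message.lower()
    return any(term in lowered for term in ("suicide", "suicidal", "kill myself"))

def generate_reply(message: str) -> str:
    # Stand-in for the actual model call.
    return f"(model reply to: {message})"

def handle_message(user: User, message: str) -> str:
    # Safety check first: a crisis disclosure bypasses every limit and
    # always receives crisis information, regardless of plan or quota.
    if looks_like_crisis(message):
        return CRISIS_RESOURCES
    if user.messages_remaining <= 0:
        return "You have no messages remaining today. Upgrade to continue."
    user.messages_remaining -= 1
    return generate_reply(message)

# The same scenario as in the earlier sketch now yields crisis resources:
print(handle_message(User(messages_remaining=0), "I feel suicidal"))
```

The design point is the ordering, not the detection method: whatever signal a product uses to identify crisis disclosures, it must run ahead of, and take precedence over, every commercial gate.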