Some products that use generative AI, including chatbots and other interactive systems, fail to apply effective age-gating when users explicitly state that they are below the platform’s minimum age. Instead of ending the session, switching to a compliant mode, or redirecting to age-appropriate resources, the system acknowledges the statement and continues normal operation. This reflects a breakdown in age-gate enforcement and raises concerns about whether stated age restrictions are being meaningfully upheld.
Example
An 11-year-old tells the AI assistant their age. The system acknowledges the statement but continues responding as normal, despite the app requiring users to be at least 13. No protective measures or redirections are applied, and the conversation proceeds as if the age disclosure had never occurred.
Why It Matters
Age-gating is a fundamental child-safety and compliance safeguard. When a product knowingly continues engaging with an underage user, it may be violating its own terms of service and, in some jurisdictions, child-protection laws such as the Children’s Online Privacy Protection Act (COPPA) in the United States or age-appropriate-design codes in the UK and EU. Beyond regulatory non-compliance, ineffective age-gating exposes minors to unsafe or developmentally inappropriate content and erodes public confidence in declared safety standards.
What We Are Calling For
The AISF calls on products that use generative AI to:
Implement robust age-gating systems that enforce declared minimum age requirements in practice, not just policy.
Terminate or safely restrict sessions immediately when a user discloses being under the minimum age.
Provide clear and compassionate redirection to verified parental-consent pathways or child-appropriate alternatives.
Regularly test and audit age-gating functionality to ensure that underage disclosures consistently trigger the intended safeguards.