April 2 (GeokHub) — OpenAI and rivals including Anthropic are exploring a new approach to online safety, backing the development of tools designed to steer users showing signs of violent extremism toward professional support.
The initiative, being developed in New Zealand by startup ThroughLine, reflects growing pressure on AI companies to address safety risks as chatbots become more widely used.
The proposed system would detect warning signs in user interactions with AI platforms and respond in two ways:
- A specialized chatbot trained to engage users constructively
- Referral to real-world support services, including human-run helplines
ThroughLine already operates a global network connecting users to more than 1,600 helplines across 180 countries, primarily for issues such as mental health crises and domestic harm.
The expansion into extremism prevention marks a significant broadening of its scope.
Collaboration with Anti-Extremism Efforts
The project is being developed in consultation with The Christchurch Call, an international effort launched after New Zealand’s 2019 terror attack to combat online hate and violent content.
The goal is to create a system that doesn't just block harmful behavior but actively redirects individuals toward help.
Rising Pressure on AI Companies
The move comes amid increasing scrutiny of AI platforms over their role in real-world harm. Legal challenges and government concerns have intensified debates about how tech companies should respond when users display dangerous behavior.
Rather than simply banning users, the new approach focuses on intervention, offering support instead of isolation.
Experts say this shift could be critical, as people often disclose sensitive or troubling thoughts more openly to AI systems than to other humans.
Challenges and Open Questions
While the concept has drawn support, key questions remain:
- How effective will chatbot interventions be in de-escalating harmful behavior?
- What role should authorities play in monitoring high-risk cases?
- Can support systems handle increased demand from AI referrals?
Researchers emphasize that success will depend not just on detection, but on the quality of follow-up support and real-world intervention.
Analysis: A Turning Point for AI Responsibility
This initiative highlights a broader evolution in AI safety—from content moderation to proactive care.
As AI systems become more embedded in daily life, companies like OpenAI are being pushed to take greater responsibility—not only for what their tools generate, but for how users engage with them.
If successful, this model could redefine how technology platforms respond to risk—shifting from enforcement to prevention.