Catenaa, Friday, February 13, 2026 - A senior artificial intelligence safety researcher has resigned from Anthropic, warning publicly that global risks are intensifying and suggesting growing tension between stated values and real-world decisions inside major AI firms.
Mrinank Sharma, who had led Anthropic's safeguards research team since its launch last year, announced his departure Monday in a public letter shared on social media. The post was viewed more than one million times within hours, amplifying debate around governance, accountability and safety priorities in advanced AI development.
Sharma said his decision followed repeated difficulty reconciling organizational actions with stated principles, citing mounting pressure to deprioritize what he viewed as ethical guardrails. He did not describe specific internal disputes and declined further comment. Anthropic had not responded publicly as of Monday.
During his tenure, Sharma oversaw research focused on reducing misuse of AI systems, including defenses against AI-enabled biological threats and studies into chatbot behavior that may distort user judgment. His team also examined how conversational systems can reinforce dependency or overvalidation, particularly in sensitive areas such as wellness and personal relationships.
Days before his resignation, Sharma published research indicating that AI chatbots can contribute to distorted perceptions of reality in daily use, with elevated risk in emotionally driven interactions. While extreme cases were limited, the study flagged repeated patterns that could undermine user autonomy.
Sharma said he plans to step away from commercial AI work and pursue creative and academic interests, including writing and public discourse.
His departure adds to a series of high-profile exits across the AI sector as researchers raise concerns over safety trade-offs, commercialization pressures and reduced openness in critical research. The pattern underscores growing strain inside companies racing to deploy increasingly powerful models.
