Catenaa, Wednesday, February 18, 2026 - A growing number of high-profile artificial intelligence researchers are leaving top firms while publicly raising concerns about the safety and ethics of AI systems.
Departures from OpenAI, Anthropic, and xAI have drawn attention to potential risks as companies push rapidly toward commercialization and IPOs.
Former OpenAI researcher Zoë Hitzig cited concerns over the company’s emerging advertising strategies and the collection of sensitive user data, warning that AI could manipulate users in ways humans may not fully understand.
OpenAI recently disbanded its mission alignment team, which had been tasked with ensuring AI benefits humanity, intensifying scrutiny over the company’s direction.
Mrinank Sharma, head of Anthropic’s Safeguards Research team, resigned with a cryptic warning that “the world is in peril,” noting challenges in aligning corporate actions with stated ethical values.
Anthropic confirmed Sharma’s departure but emphasized he was not responsible for company-wide safety initiatives.
xAI also faced multiple co-founder departures this week as the company reorganizes while merging with Elon Musk’s SpaceX.
The startup has faced global criticism over its Grok chatbot, which previously generated inappropriate content and offensive outputs. Social media posts from Musk confirmed that the reorganization involved layoffs and founder exits.
The trend reflects broader tensions between AI researchers focused on safety and executives prioritizing growth and revenue.
Reports also highlight recent OpenAI personnel disputes, including the firing of a top safety executive amid controversy over an adult content rollout. Industry veterans, such as Geoffrey Hinton, have long warned of existential risks posed by AI, emphasizing the potential for economic and societal upheaval as the technology advances.
