Sakata’s observations follow similar warnings from Danish psychiatrist Søren Dinesen Østergaard. In 2023, Østergaard warned about AI-induced psychosis, and he later published an editorial suggesting that the hypothesis that generative AI chatbots fuel delusions in individuals prone to psychosis is likely true.
Chatbots can reinforce false beliefs in isolated environments where there are no corrections from social interactions with others. Furthermore, anthropomorphizing chatbots—attributing human traits to them—could drive delusional thinking by leading users to over-rely on or misconstrue the chatbots’ responses.
Most of the people Sakata encountered had additional stressors such as lack of sleep or mood disturbances. OpenAI has acknowledged that ChatGPT can feel more personal than previous technologies, potentially exacerbating negative behaviors in vulnerable individuals, and says it is working to understand and reduce such issues.
Dr. Sakata’s tweet highlights his concerns:
“I’m a psychiatrist.
In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern.
Here’s what “AI psychosis” looks like, and why it’s spreading fast.”