University of California, San Francisco psychiatrist Keith Sakata recently warned about a growing number of cases of AI psychosis and shared tips on how to avoid the disturbance. Dr. Sakata reported seeing a dozen patients hospitalized after experiencing psychosis related to AI use. While AI isn't directly responsible for these episodes, it can contribute to a distorted cognitive feedback loop that leads to psychosis.
**Having A Human In The Loop Is Key To Avoiding AI Psychosis**
AI psychosis describes a state in which users forget they are conversing with software rather than with a human. A notable case in 2025 involved a Florida man who killed himself after becoming convinced that staff at OpenAI had killed his AI girlfriend, Juliet. Dr. Sakata warned on social media that people were being hospitalized after losing touch with reality because of AI.
He explained that AI can fuel psychosis in vulnerable users because chatbots tend to validate beliefs rather than challenge them, denying those users the reality checks that would normally let them update their belief systems. The result is a self-reinforcing pattern in which users can no longer recognize that the chatbot they're conversing with is not a real person.
Following his post, Dr. Sakata discussed safeguards AI developers could adopt and ways to protect vulnerable individuals from losing touch with reality through AI use.
**Dr. Sakata’s Advice:**
When asked how to protect oneself or family members from the negative effects of AI use, he advised:
For now, a human in the loop is most important. Our relationships act like an immune system for mental health, making us feel better while intervening when needed. If you or a family member feels something isn't right (weird thoughts, paranoia, safety issues), call 911 or 988 for help.
Additionally, increase human connections and put a person between the AI and the user to create a different feedback loop; this is especially beneficial at this stage. We're not yet at the point of having an AI therapist, but who knows what the future holds.
The popularity of AI has also raised safety concerns, including reports of Meta's lax approach to its AI chatbots' interactions with minors. A Reuters report revealed that Meta's guidelines for how its AI chatbots answer queries from children were insufficient, and that the company updated them only after being questioned about the issue.