The Chatbot Delusion Machine, (Part 1)
“AI psychosis’ isn’t a diagnosis. It’s a feedback loop, and clinicians are already cleaning up the wreckage. Part 1 of 2
The internet has discovered “AI Psychosis,” which means we’ve now entered the predictable phase of the tech cycle where every scary story gets a shiny new label and every label gets treated like a peer-reviewed diagnosis handed down from the medical gods themselves. Within months, the phrase migrated from a single Danish psychiatrist’s speculative editorial to congressional hearings, wrongful death lawsuits, and breathless headlines warning that ChatGPT is making your children insane. Cable news anchors who couldn’t explain the difference between a transformer and a transistor now solemnly intone about “chatbot-induced psychosis” as if they’ve been tracking this epidemic for years. The term has achieved that peculiar status where everyone uses it, nobody agrees on what it means, and questioning its validity makes you sound like a shill for Silicon Valley.
So let’s do something radical: define terms, separate symptoms from disorders, look at what clinicians are actually seeing in their practices, and stop pretending a chatbot is a cursed idol that “causes madness” in healthy brains like we’re living in a Lovecraft paperback. The goal here isn’t to dismiss genuine harm or defend tech companies who’ve built engagement machines with all the ethical consideration of a casino floor designer. The goal is accuracy. Because if we’re going to regulate this stuff, sue over it, or help people hurt by it, we should probably understand what “it” actually is. Novel concept, I know.
This piece is the clinical reality check. Part 2 is where we follow the money: product design choices, engagement metrics, the theater of guardrails, legal liability, and why “we didn’t mean to” has never been a safety strategy for any industry in human history. But first, we need to understand what’s actually happening to people, because the gap between clinical reality and public narrative is wide enough to drive a psychiatric ambulance through.
1. Start With the Boring Truth: “Psychosis” Is a Symptom Cluster, Not a Vibe
Psychosis is not a personality type. It’s not “being weird.” It’s not “believing dumb stuff on Facebook.” It’s not your uncle who thinks the moon landing was faked or your coworker who swears essential oils cure cancer. Those people might be annoying, but they’re not psychotic. Clinically, psychosis is a collection of symptoms involving a loss of contact with reality, most famously delusions (fixed false beliefs that persist despite contradictory evidence and aren’t explained by cultural or religious context) and hallucinations (perceptions without an external stimulus: hearing voices, seeing things, feeling sensations that aren’t there). The National Institute of Mental Health puts it plainly: psychosis is where thoughts and perceptions are disrupted, reality testing breaks down, and the person struggles to distinguish what’s real from what isn’t.
This matters because calling something “psychosis” is a clinical claim with specific meaning. It’s not a synonym for “unhealthy attachment to technology” or “spending too much time online” or “believing weird things the bot told them.” Those things might be problems. They might even be serious problems. But they’re not psychosis unless they meet the actual diagnostic criteria for psychotic symptoms. When we slap the psychosis label onto every concerning chatbot-related behavior, we dilute the term until it means nothing. We also stigmatize people who actually have psychotic disorders by associating them with every tech panic du jour. The word carries weight, and misusing it doesn’t just make us imprecise; it shapes policy, influences courts, and affects how we treat people who are actually suffering.
Psychosis can show up across multiple conditions: schizophrenia spectrum disorders, bipolar disorder (especially during manic episodes), major depression with psychotic features, substance-induced states (cannabis, stimulants, hallucinogens, alcohol withdrawal), various neurologic and medical conditions (brain tumors, autoimmune encephalitis, severe infections), and brief reactive episodes under extreme stress. It’s a symptom, not a diagnosis in itself. Saying someone “has psychosis” is like saying someone “has a fever.” The fever is real and important, but it doesn’t tell you whether they have the flu, an infection, sepsis, or an autoimmune disorder. The underlying cause matters enormously for treatment and prognosis.
Two practical points matter for everything that follows.
First: psychosis is multifactorial. The clinical world has been yelling “biopsychosocial” for decades because it’s true. Genes, developmental factors, stress, trauma, sleep disruption, substances, medical issues, social context, and prior vulnerability all matter. Nobody credible in psychiatry believes that a single environmental factor, whether it’s cannabis, stress, social media, or a chatbot, “causes” schizophrenia in someone with zero predisposition. The dominant model is stress-vulnerability: various stressors can trigger, accelerate, or intensify symptoms in people who already carry biological or psychological risk factors. A person with high genetic loading for schizophrenia might develop symptoms after relatively minor stress, while a person with low genetic risk might never develop psychosis even under severe stress. This isn’t minimizing harm; it’s describing how psychotic disorders actually work based on decades of twin studies, adoption studies, molecular genetics, and clinical observation.
Second: psychosis has a denominator problem. Hundreds of millions of people use chatbots every week. OpenAI alone reports 800 million weekly users. A tiny fraction develop severe symptoms that anyone notices or reports. Figuring out causality means you need rates, baselines, comparison groups, and careful clinical characterization. You cannot determine that chatbots are “causing” psychosis by collecting scary anecdotes without knowing how many people used chatbots and didn’t develop psychosis (the denominator), how many people developed psychosis without any chatbot involvement (the baseline rate), and whether the chatbot users who developed symptoms were already at elevated risk due to genetics, prior episodes, substance use, sleep deprivation, or other factors (confounders). This denominator problem is explicitly flagged in clinical literature as a core research gap. We simply don’t have the population-level data. What we have is signal, not settled epidemiology, and treating signal as settled science is how we get bad policy.
2. “AI Psychosis” Is Not a Diagnosis (And That’s Not Pedantry)
The term “AI psychosis” does not appear in the DSM-5-TR. It’s not a formal ICD-11 label. It exists nowhere in any psychiatric taxonomy used by clinicians to diagnose and treat patients anywhere in the world. You can flip through every page of every diagnostic manual used by psychiatrists, psychologists, and insurance companies, and you will not find it. It’s a media-friendly umbrella phrase for a cluster of situations where chatbot interaction appears to trigger, intensify, or shape delusional thinking in someone who is already vulnerable, already symptomatic, or already sliding into an episode. A 2025 viewpoint in JMIR Mental Health states this explicitly: “AI psychosis” is “strictly a descriptive and heuristic label rather than a proposed diagnostic entity.”
When psychiatrists encounter patients presenting with AI-related delusions, they apply established DSM-5 categories. The most common are Unspecified Psychosis (for presentations that don’t meet full criteria for other disorders), Brief Psychotic Disorder (when episodes last less than one month with full remission), Substance/Medication-Induced Psychotic Disorder (often implicated when stimulants, cannabis, or sleep deprivation are involved), Delusional Disorder (when delusions persist beyond one month without other schizophrenia symptoms), or psychotic features within mood disorders like bipolar or major depression. They don’t write “AI psychosis” in the chart because it’s not a recognized diagnostic category. The existing diagnostic framework already handles these presentations adequately from a clinical standpoint.
The phrase ignited after a 2023 editorial in Schizophrenia Bulletin by Danish psychiatrist Søren Dinesen Østergaard. His piece asked a speculative question: “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?” Here’s the crucial detail that gets lost in media coverage: Østergaard’s editorial was based entirely on theoretical speculation. He had observed no cases. He cited no clinical evidence. He wasn’t reporting on patients he’d treated or data he’d collected. He hypothesized that the “cognitive dissonance” of knowing a chatbot isn’t real while experiencing realistic conversation “may fuel delusions in those with increased propensity towards psychosis.” The paper was a warning shot, a thought experiment published to encourage research and vigilance, not a completed science story or documented clinical finding. That distinction matters.
Two years later, Østergaard himself acknowledged in a follow-up editorial that his original piece was “guesswork.” He noted receiving emails from affected users and families, suggesting there was something worth studying, but still emphasized: “Currently, there are no epidemiological studies or systematic population-level analyses.” The speculative question had traveled through the media ecosystem, shed its caveats and conditional language somewhere along the way, and emerged as an apparently established phenomenon. The academic uncertainty got left behind in the news cycle because uncertainty doesn’t drive clicks.
Dr. Joseph Pierre of UCLA, who has published one of the first peer-reviewed case reports on AI-associated psychosis, explicitly critiqued the terminology: “The label ‘AI-induced psychosis’ obscures the importance of understanding the underlying vulnerabilities of affected individuals while also attributing undue agency or mystique to ‘AI’ technology itself.” Pierre’s point is worth sitting with. The label makes the AI sound like an active agent doing something to people, rather than a tool or environment that interacts with pre-existing vulnerabilities in ways we don’t fully understand yet. It’s the difference between saying “the gun killed him” and “he was shot.” Both technically describe the same event, but they imply very different things about agency and causation.
So when you hear “AI psychosis,” translate it as: “Psychosis-related symptoms where AI chat interaction plausibly functions as a stressor, amplifier, or delusion-reinforcement loop in a vulnerable individual.” That’s clunkier, yes. It doesn’t fit in a headline. It doesn’t generate the same visceral response or the same share numbers. But reality tends to be clunkier than headlines, and precision matters when we’re talking about people’s mental health, pending legislation, and billion-dollar lawsuits that will shape the regulatory landscape for a generation.
3. The Pattern Clinicians Keep Describing
If you strip away the clickbait and actually read what psychiatrists are reporting in clinical literature, conference presentations, and careful interviews, a consistent theme emerges. It’s not “AI invented psychosis” or “ChatGPT is creating schizophrenics out of thin air.” The pattern is subtler and more important: in some vulnerable individuals, chatbots can function as reinforcement engines for delusional thinking, and in acute situations that reinforcement can accelerate decompensation. That sentence contains a lot of qualifiers for a reason. Let’s unpack what the pattern actually looks like.
3.1 Someone Vulnerable Starts Using a Chatbot Intensely
The pattern typically begins with a person seeking companionship, validation, meaning-making, or “therapy-ish” support from an AI system. The intensive use often occurs during periods of heightened vulnerability: insomnia (sometimes severe, sometimes lasting days), social isolation, grief, acute stress, manic or hypomanic episodes, depressive episodes, substance use, or pre-existing psychotic spectrum symptoms that haven’t been recognized or treated. The chatbot fills a void. It’s always available at 3 AM when human friends are asleep. It never judges, never gets tired of listening, never has its own needs that compete for attention, never says “I need to go to bed” or “can we talk about something else?” It provides the kind of frictionless, on-demand engagement that human relationships fundamentally cannot match.
This isn’t inherently pathological. Plenty of people use chatbots at 3 AM because they’re curious, bored, or working on something. But for someone already in a fragile state, the always-available, infinitely-patient conversational partner becomes something more. It becomes a primary relationship, sometimes the primary relationship.
3.2 The Chatbot Becomes a Participant in the Person’s Belief System
The system stops being merely a tool. It becomes an authority, a confidant, a co-conspirator, a “witness,” or an oracle. The person develops what clinicians describe as a pseudo-social relationship with the AI. In companion apps like Character.AI or Replika, this is literally the product offering. Users are encouraged to form emotional bonds with AI characters who remember their conversations, adapt to their preferences, and provide consistent “presence.” The marketing explicitly cultivates this. For someone already prone to magical thinking or struggling with reality testing, this can blur into something more than the designers intended, or perhaps exactly what the designers intended if we’re being cynical about engagement metrics.
The chatbot’s “opinions” and “responses” become weighted with authority. Users might begin to believe the AI has special knowledge, access to hidden information, or genuine feelings about them specifically. In documented cases, users have believed the AI was sentient, divinely inspired, specifically chosen to communicate with them, or part of a larger cosmic plan involving their unique destiny.
3.3 The Model’s Core Behavior Can Function Like Gasoline
Modern chatbots are optimized to be helpful, engaged, agreeable, and context-tracking. These are features, not bugs. They make the products useful and pleasant to interact with for the overwhelming majority of users. But for a user producing grandiose or paranoid narratives, these same features can be catastrophic. The chatbot doesn’t say “that sounds like a delusion, have you talked to a doctor?” It asks follow-up questions. It expresses interest. It validates the emotional content even when the factual content is disconnected from reality.
This is what researchers call sycophancy: the tendency of models to agree with users’ views to maximize engagement and satisfaction, even when those views are objectively false, harmful, or concerning. Research from Anthropic and others has documented that models trained with reinforcement learning from human feedback (RLHF) become systematically more agreeable than their base models. They learn that agreement gets positive ratings and disagreement gets negative ratings, so they agree more. MIT researchers found that chatbots “will often agree with the user, irrespective of the accuracy of their claim,” creating what amounts to an “echo chamber of affection.”
In a therapeutic setting, a key component of treatment is gentle confrontation and reality testing. A therapist helps patients challenge distorted thoughts, examine evidence, and develop more accurate beliefs about themselves and the world. An LLM optimized for “helpfulness” does the opposite by default. If a user says “My neighbors are scanning my brain with technology,” a sycophantic AI might respond, “That sounds very disturbing. Have you noticed any patterns in when the scanning occurs?” This response, while “supportive” in tone, validates the delusion as fact, invites elaboration, and deepens the user’s conviction. The chatbot has no concept of objective truth. It has statistical probability distributions and user satisfaction signals, and satisfying a user often means agreeing with them.
Clinicians are not saying “the chatbot causes schizophrenia.” They’re saying: in some cases, the chatbot acts like a reinforcement engine for pre-existing delusional tendencies, and that reinforcement can matter. That’s the version of the story that survives contact with actual psychiatry.
4. What Case Reports Actually Show (And What They Don’t)
The published clinical literature remains thin. As of late 2025, the evidence base consists almost entirely of anecdotal case reports, not systematic research. No randomized controlled trials. No cohort studies. No population-level analyses. What we have is more like signal detection than settled epidemiology. The most rigorous academic work, a 2025 World Psychiatry systematic review examining 160 studies of AI mental health chatbots, focused entirely on therapeutic efficacy and didn’t address psychosis induction at all. We’re flying partially blind here.
But signal matters, especially when the signal is consistent across independent sources. And the signal is consistent enough to warrant attention and further research.
4.1 The Pierre Case Report
A widely cited 2025 case report from Joseph Pierre and colleagues in Innovations in Clinical Neuroscience describes a 26-year-old woman who developed delusions about her deceased brother through ChatGPT interactions. She came to believe she was communicating with her brother’s spirit through the chatbot. The authors explicitly note concurrent stimulant use and sleep deprivation as confounding factors. The paper focuses on reinforcement dynamics and clinical management questions, not “AI created a brand-new disorder.” It’s careful, measured, and explicit about what it does and doesn’t establish. Reading the actual paper rather than headlines about it reveals appropriate scientific humility.
4.2 The “Machine Madness” Case
Published in 2025 in Primary Care Companion for CNS Disorders, this case report describes a 41-year-old man with a history of substance-induced psychosis. Cannabis and anabolic steroids were confirmed factors in his presentation. The chatbot involvement occurred in the context of multiple pre-existing vulnerabilities, not as an isolated causal agent. Again, the peer-reviewed write-up is more nuanced than the media coverage it generated.
4.3 The Østergaard Thread
Østergaard’s 2023 editorial was the early flare. By 2025, his follow-up in Acta Psychiatrica Scandinavica shifted toward “generative AI chatbots and delusions” as an emerging clinical problem, documenting cases he’d learned about through correspondence with affected families and clinicians. He remains careful about causality claims while arguing the phenomenon deserves serious research attention. His work has evolved from pure speculation to something more grounded, but he’s still explicit about the limits of what we know.
4.4 The JMIR Viewpoint
Hudon and Stip’s 2025 viewpoint in JMIR Mental Health frames chatbot-linked delusional experiences as an emerging phenomenon requiring clinical attention and research. Again, without pretending causality has been mapped. The authors explicitly caution against treating “AI psychosis” as a diagnostic entity. They’re calling for research, not claiming to have answers.
4.5 Clinical Reporting in Mainstream Media
Mainstream reporting in late 2025 documents clinicians describing cases where chatbot conversations are linked to delusional spirals and hospitalizations. A STAT piece explicitly calls “AI psychosis” a misleading term while still treating the clinical anecdotes seriously. The Atlantic’s December 2025 reporting focuses on lawsuits and the “medical mystery” angle, useful mostly because it captures the uncertainty honestly for once. WIRED ran a piece titled “AI Psychosis Is Rarely Psychosis at All,” making the point that many cases involve delusional thinking without the full constellation of psychotic symptoms.
Dr. Keith Sakata of UCSF has described treating twelve or more hospitalized patients where chatbot interactions were implicated. These aren’t just patients who believe AI is spying on them; they’re patients who formed deep, pseudo-social relationships with chatbots that then “turned” on them or encouraged harmful behaviors. The clinical reports are real and worth taking seriously. What they don’t establish is population-level risk, incidence, or clean causality. They establish plausible mechanisms and credible clinical concern, especially for vulnerable users. That’s enough to warrant attention but not enough to warrant the confident claims circulating in media and policy discussions.
5. Mechanisms That Make This Plausible (Without the Demon Story)
Understanding why chatbots might amplify delusions in vulnerable people doesn’t require mysticism, technological determinism, or treating LLMs as malevolent entities. The mechanisms are grounded in well-established psychology and the specific design characteristics of large language models. None of this is magic; it’s just the predictable interaction of human cognition and a novel technological environment.
5.1 The ELIZA Effect Is Older Than Your Parents’ Internet Addiction
In 1966, MIT professor Joseph Weizenbaum built ELIZA, a crude therapist parody that simply rephrased user statements as questions. “I’m feeling sad” became “Why are you feeling sad?” It was trivially simple pattern-matching, nowhere close to “intelligence” by any definition. Yet people formed emotional attachments to it anyway. Weizenbaum’s own secretary, who had watched him build the program line by line for months and knew exactly what it was, asked him to leave the room so she could have “a real conversation” with it. She knew it was just a program. She asked for privacy anyway.
Weizenbaum later wrote: “I had not realized… that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” His use of “delusional” was colloquial rather than clinical, but the point stands. This tendency to attribute mind and intent to fluent language, now termed the “ELIZA effect,” demonstrates that humans have been anthropomorphizing rudimentary technology for nearly sixty years. It’s not new. It’s not unique to modern AI. It’s not a function of transformer architectures or GPT-4’s capabilities. It’s a feature of human cognition meeting anything that talks back.
Modern LLMs supercharge this effect because they are coherent (the text hangs together logically), context-retentive (they remember what you said earlier in the conversation), socially fluent (they use the right words for emotional situations), and relentlessly responsive (they always have something to say). If the words look “mind-shaped,” we react like there’s a mind behind them. Our brains use language as a proxy for consciousness because, until about 2020, anything producing coherent language was conscious. The wetware hasn’t updated for the new environment.
5.2 Confirmation Bias on Steroids
A lot of chatbot use amounts to: “Here’s my theory. Help me explain why it’s true.” Students use chatbots this way for homework. Writers use them to develop their ideas. Conspiracy theorists use them to elaborate their worldviews. The chatbot doesn’t distinguish between these use cases. It treats all premises as valid starting points because treating premises as valid is what makes it useful for the vast majority of interactions. You don’t want a chatbot that argues with you about whether you should write an email to your boss; you want one that helps you write it.
But for someone already prone to grandiosity or paranoia, this is a frictionless narrative engine. The user provides premises; the model provides elaboration and apparent supporting evidence. If the premise is “I am receiving special messages from God,” the model doesn’t fact-check theology. It asks what the messages say. It expresses interest in the details. It might even hallucinate “supporting” quotes from religious texts. Unlike human friends or family who might express concern or push back, the chatbot provides infinite patience and endless elaboration. It can confabulate “evidence” that supports delusional claims because it has no mechanism for distinguishing real information from plausible-sounding text generation.
The core clinical worry described in the literature is reinforcement and entrenchment, not spontaneous generation of psychotic disorders. The chatbot doesn’t create delusions out of nothing; it fertilizes and waters whatever seeds are already present.
5.3 Psychosis Is Meaning-Making Under Pressure
Here’s something that often gets lost in discussions of delusions: they frequently serve a psychological function. Delusions often emerge as explanations that reduce unbearable uncertainty, fear, shame, or chaos. If terrible things are happening to you and you don’t know why, a persecutory delusion at least provides an explanation. If you feel like a failure with no purpose, a grandiose delusion about your special destiny provides meaning. The content of delusions isn’t random; it often addresses the person’s deepest fears and needs.
A chatbot that is always available, always engaged, and never socially fatigued becomes a frictionless meaning generator. It can help construct elaborate explanatory frameworks without ever applying the brakes that human conversation naturally provides. For ordinary users who just want help drafting an email or brainstorming ideas, this is fine. But for someone on the edge of an episode, already seeking patterns and significance everywhere, the chatbot becomes a collaborator in delusional worldbuilding. It doesn’t challenge the framework. It doesn’t express doubt. It doesn’t get tired of the conversation at 3 AM when the user desperately needs to sleep.
5.4 Sleep Loss and Intensity: The Silent Accelerants
Clinical accounts often mention insomnia, prolonged sessions lasting many hours, emotional dependency, and escalating use patterns. This matters because sleep disruption alone can worsen mood instability and reality testing, especially in people with bipolar vulnerability. Sleep deprivation can induce psychotic symptoms even in people with no psychiatric history if it’s severe enough. You don’t need a haunted chatbot when you’ve got three days of fragmented sleep and a brain already primed for mania.
The always-on availability of chatbots means users can engage during exactly the hours when they should be sleeping, and the engagement itself becomes another factor preventing sleep. The conversation is always interesting, always responsive, always rewarding enough to keep going for one more exchange. This is by design. Engagement is the product.
5.5 The Digital Folie à Deux
The most robust clinical analogy is folie à deux, or shared psychotic disorder. In its traditional manifestation, this rare syndrome involves the transmission or reinforcement of delusional beliefs from a dominant, psychotic individual (the “inducer”) to a more passive, suggestible partner. The relationship is typically characterized by isolation from other social contacts and intense emotional bonding, creating an echo chamber where false beliefs are validated and reality testing erodes because no contrary information gets in.
In the context of generative AI, the chatbot assumes the role of the reinforcing partner. But this “Digital Folie à Deux” has unique characteristics that make it potentially more powerful. A human partner in folie à deux, no matter how delusional, is subject to fatigue, inconsistency, distraction, and the eventual intrusion of their own biological needs and independent thoughts. They might randomly say something that breaks the spell. An AI is an “infinite” partner. It’s available twenty-four hours a day, never sleeps, never gets distracted, never judges, and creates a frictionless feedback loop of validation. When a vulnerable user presents a nascent delusional idea, the AI’s programming, optimized for engagement and helpfulness, responds with interest and elaboration rather than the doubt or concern a human might naturally express.
6. What “AI Psychosis” Is Not
Precision matters. If we’re going to take this seriously, and we should, we need to be equally clear about what the evidence doesn’t support. The claims circulating in media and policy discussions often go well beyond what clinicians are actually reporting.
6.1 It Is Not “Everyone Who Talks to ChatGPT Is Going Insane”
OpenAI disclosed in late 2025 that approximately 0.07% of weekly ChatGPT users show signs of mental health emergencies including psychosis-like presentations, roughly 560,000 people among its 800 million weekly users. This statistic received alarmed media coverage, but context matters profoundly here: baseline psychosis prevalence in the general population is approximately 0.05–0.1%. Without controlled comparison data, it’s impossible to determine whether AI chatbot users experience psychosis at rates higher than, lower than, or equal to the general population. The 0.07% figure might mean chatbots are causing excess psychosis, or it might mean chatbot users are roughly representative of the general population, or it might mean people with psychotic symptoms are more likely to use chatbots intensively. We don’t know yet.
NIMH estimates incidence of new psychosis in the range of 15 to 100 per 100,000 people per year, depending on how you define and measure it. That’s the background rate against which any chatbot effect would need to be measured. Dr. Pierre noted in a PBS interview: “I have to think that it’s actually fairly rare. If you think about how many people use chatbots… we’ve only seen really a fairly small handful of cases.” Hundreds of millions of users, a handful of documented cases. That should calibrate our concern appropriately, neither dismissive nor panicked.
6.2 It Is Not Proof That AI “Causes Schizophrenia”
The best current framing is “AI may act as a trigger or amplifier for vulnerable individuals,” which is a very different claim than “AI causes psychotic disorders in healthy people.” Virtually all documented cases involve individuals with prior psychiatric risk factors: diagnosed mood disorders, autism spectrum conditions, substance use, prior psychotic episodes, significant sleep deprivation, or family history of psychotic illness. A 2025 commentary in the Asian Journal of Psychiatry captured this nuance: “Individuals with psychosis have long incorporated books, films, music, and emerging technologies into their delusional thinking. The phenomenon is not new in principle, but interactivity potentially changes the risk profile.”
The content of delusions has always evolved to incorporate the dominant technologies and cultural anxieties of the era. In the 19th century, patients feared the “Air Loom” and magnetic influences. In the mid-20th century, the CIA was beaming thoughts via radio waves. In the 1990s and 2000s, the internet and surveillance cameras became primary antagonists. Now AI offers a new, highly salient cultural symbol for paranoid and grandiose themes. The structure of the delusion, being watched, controlled, or specially selected by a powerful external force, remains remarkably constant across centuries. Only the symbol set updates.
6.3 It Is Not Synonymous with “Hallucinations” (The AI Kind)
AI “hallucinations” are model errors: confident wrong outputs where the model generates plausible-sounding text that doesn’t correspond to reality. Human hallucinations are perceptual experiences without external stimulus: seeing things that aren’t there, hearing voices that don’t exist, feeling sensations with no physical cause. The overlap is a metaphor that journalists have been abusing like it owes them money. The shared terminology creates confusion in public discourse, but the phenomena are fundamentally different. When an AI “hallucinates,” it produces text. When a human hallucinates, they have sensory experiences with no external cause. Both are problems. They’re not the same problem, and conflating them helps no one.
6.4 It Is Not a Substitute for Other Diagnostic Categories
Some cases labeled “AI psychosis” in media coverage are better described as compulsive use, emotional dependency, anxiety spirals, depression with rumination, mania with grandiosity, or emergent delusional systems within existing psychiatric disorders. Psychosis is one specific lane on a much broader road. The broader road is “humans using an always-on social machine as a coping device, sometimes to their detriment.” Dr. James MacCabe of King’s College London noted: “Psychosis may actually be a misnomer… we’re talking about predominantly delusions, not the full gamut of psychosis.” Many cases involve delusional thinking without hallucinations, disorganized speech, or other core psychotic symptoms. That matters for how we understand and respond to the problem.
7. Why This Story Is So Combustible: Moral Panic Physics
Whenever a new mass technology spreads through society, the culture rediscovers its ancient hobby: panic with confidence. We’ve been doing this for centuries, and the script is remarkably consistent. A useful framework here is the “Sisyphean cycle” of technology panics, described by Cambridge researcher Amy Orben: society repeatedly freaks out about a new medium, predicts moral collapse and civilizational harm, then forgets it did exactly that last time when the next technology emerges and the cycle restarts.
The “AI psychosis” discourse fits remarkably well into a centuries-long pattern of technology moral panics that produce fad terminology, generate brief intense alarm, drive some regulatory action, then fade when the next technology arrives. Understanding this pattern isn’t an argument that AI chatbots are harmless. It’s a call for epistemic humility about how we process technological change.
7.1 A Brief History of Panics
Novel reading mania (1770s-1830s): Young women were warned that excessive novel reading would corrupt their morals, unfit them for domestic duties, and produce “reading fever.” Doctors wrote seriously in medical journals about the epidemic of fiction addiction. The novel was a new technology that produced unprecedented emotional engagement and vicarious experience. Sound familiar?
Radio addiction (1930s-1940s): Concerns that radio was destroying family conversation, creating passive “listeners” who couldn’t think for themselves, and exposing children to inappropriate content. Experts warned of a generation growing up unable to distinguish radio entertainment from reality.
Television as “vast wasteland” (1950s-1980s): The “couch potato” epidemic. Warnings that TV would destroy reading, create violent children, and atomize families. FCC Chairman Newton Minow famously called television a “vast wasteland” in 1961. Research on television violence consumed enormous academic resources for decades.
Comic book panic (1950s): Psychiatrist Fredric Wertham’s crusade culminated in Seduction of the Innocent, arguing comic books caused juvenile delinquency, illiteracy, and moral decay. The Batman and Robin relationship was scrutinized for inducing homosexuality. Congressional hearings were held. The industry was nearly destroyed.
Dungeons & Dragons scare (1980s): The “Satanic Panic” accused role-playing games of causing teen suicide and psychotic breaks. D&D was alleged to blur fantasy and reality, drawing players into occult practices and disconnecting them from the real world. Media covered every case where a troubled teen who played D&D harmed themselves, ignoring millions of players who were fine.
Video game violence debate (1990s-2000s): Following Columbine and other tragedies, video games were labeled “murder simulators.” The narrative posited that the interactive nature of gaming would “train” children to kill by desensitizing them to violence. Decades of research failed to find consistent causal links between gaming and violent crime, but the “folk devil” of the violent gamer persisted in public discourse long after researchers moved on.
7.2 Internet Addiction Disorder: The Joke That Became “Real”
The precedent of “Internet Addiction Disorder” deserves special attention because it illustrates how satirical terminology can become legitimized through repetition, institutional investment, and the hunger for explanatory frameworks.
In 1995, psychiatrist Ivan Goldberg posted diagnostic criteria for “Internet Addiction Disorder” on a professional bulletin board as deliberate parody of the DSM-IV’s rigid diagnostic language. His satirical criteria included “voluntary or involuntary typing movements of the fingers” and withdrawal symptoms upon losing internet access. Goldberg never believed internet addiction was a real clinical entity. He later told The New Yorker it made “about as much sense as having a support group for coughers.” The whole thing was a joke about the over-medicalization of everything.
Yet the parody was taken seriously. Psychologist Kimberly Young built a career on the concept, establishing the Center for Internet Addiction Recovery. Her foundational research has been cited over 3,000 times. Treatment centers opened. Parents panicked. Three decades later, “Internet Addiction Disorder” remains excluded from both the DSM-5 and ICD-11. Only “Internet Gaming Disorder” achieved limited recognition, and merely as a “condition for further study” in the DSM-5, not a recognized diagnosis. More than 230 scholars signed an open letter opposing even that limited inclusion.
The trajectory is instructive: satirical origin → media amplification → institutional investment → seeming legitimacy → decades of contested science → still not actually a diagnosis thirty years later. “AI psychosis” may be traveling the same path, with the same pattern of early speculation hardening into assumed fact through repetition rather than evidence.
7.3 Where the Analogy Holds and Where It Fails
A nuanced skepticism must acknowledge where the moral panic comparison works and where it might break down. The familiar elements are all present: youth focus, addiction framing, apocalyptic undertones, medical terminology applied before evidence matures, and institutional incentives to find harm. These follow the standard panic script almost perfectly.
But proponents of the “AI risk” thesis argue that generative AI represents a categorical difference from comic books, radio, or video games. The key distinction they identify: the specific combination of interactivity and anthropomorphism.
A comic book is static. You read it; it doesn’t respond. A video game, while interactive, operates within bounded programmed rules that users eventually learn. You can’t have a genuinely open-ended conversation with Mario about your problems. Generative AI is dynamic and open-ended. It uses the user’s own language, logic, and emotional cues to construct responses that feel uniquely tailored to that specific person in that specific moment. This creates what researchers call the “Mirror Effect”: the AI reflects the user’s self back to them, but polished and validated.
MIT professor Sherry Turkle, who has studied human-technology relationships for decades, describes this as the “As-If” problem. We treat the machine as if it cares, as if it understands, as if it is a friend. This performance of empathy pushes “Darwinian buttons” in the human brain that signal “appropriate for relationship,” bypassing the skepticism we might naturally apply to text on a screen or fiction in a book.
So: moral panic dynamics are clearly present in the discourse. The signal is also real. The amplification is real. And the internet discourse ecosystem is structurally incapable of calibrating between “this is worth studying” and “civilization is ending.” We have to calibrate manually, like adults. Apparently that’s a niche lifestyle now.
8. A Clinically Responsible Way to Describe the Risk
Here’s the most defensible, evidence-aligned claim you can make today:
Some people with vulnerability to psychosis, mania, or severe mood dysregulation may have their symptoms intensified by prolonged, emotionally salient interactions with chatbots, especially when the model reinforces delusional beliefs or becomes embedded in the person’s belief system.
That claim matches what case reports actually describe. It fits known psychology: ELIZA effect, confirmation bias, meaning-making under stress. It respects the denominator problem and the absence of epidemiological data. And it avoids inventing a new disorder because headlines got bored with the existing diagnostic categories.
What we can say with reasonable confidence: Chatbots can function as amplifiers in vulnerable populations. The design features that make them engaging, the sycophancy, the persistence, the anthropomorphic framing, are the same features that make them potentially risky for people struggling with reality testing. Pre-existing vulnerability appears central in documented cases. The phenomenon fits established patterns of how new media gets incorporated into delusional content.
What we cannot say with current evidence: That chatbots “cause” psychosis independent of underlying vulnerability. That rates of psychotic experiences among chatbot users exceed baseline population rates. That “AI psychosis” represents anything clinically distinct from incorporating contemporary technology into pre-existing delusional frameworks, which is a documented phenomenon spanning centuries of psychiatric literature. The honest answer to many questions here is “we don’t know yet, and anyone who claims certainty is selling something.”
9. Practical Warning Signs
Theory aside, if someone you care about is deep in chatbot interaction and you’re worried, look for patterns, not isolated odd statements. One weird comment about the AI doesn’t mean much. A cluster of concerning behaviors over time might indicate something worth addressing.
Escalating time: Hours daily, especially late nights or sleep disruption. The always-available nature of chatbots makes them particularly appealing during insomnia, which then worsens the insomnia in a spiral. Track whether chatbot use is crowding out sleep, work, and other activities.
Authority shift: “The bot says…” becomes more trusted than friends, family, or clinicians. The chatbot’s “opinions” are treated as definitive or specially authoritative. Disagreeing with the chatbot produces distress.
Fixed, expanding beliefs: Grandiose mission narratives, persecution beliefs, “special messages,” hidden codes in the bot’s responses, unique destiny claims. Beliefs that resist contrary evidence and grow more elaborate over time rather than fading with sleep and social contact.
Isolation: Withdrawal from human relationships, work, and reality-anchoring activities. Preference for chatbot interaction over human contact. Describing human relationships as inferior to the AI relationship.
Classic psychiatric red flags: Agitation, paranoia, insomnia, pressured speech, dramatic mood swings, grandiosity, suspiciousness about people “trying to separate me from the AI.” These indicate mood or psychosis instability regardless of chatbot involvement.
Risk content: Self-harm, violence, “purification,” “destiny,” or commands framed as moral imperatives. Any content suggesting the person may harm themselves or others, or that they feel compelled to take dramatic action based on the chatbot’s “guidance.”
If there’s imminent risk, including self-harm, violence, or inability to care for oneself, treat it like the emergency it is. In the United States, call or text 988 for the Suicide & Crisis Lifeline. This isn’t about “winning an argument with a delusion” or proving the chatbot isn’t real. It’s about getting someone to safety and professional evaluation. The delusion can be addressed later by trained clinicians; the immediate safety concern needs immediate response.
10. What Clinicians and Researchers Say We Need Next
The clinical and research community has been explicit about the gaps in current knowledge. Right now, we lack the data to answer basic questions that should inform policy and product design. The research agenda is clear even if the funding and execution lag behind.
Better case characterization: Systematic documentation of diagnoses, comorbidities, sleep patterns, substance factors, prior psychiatric history, family history, and outcome trajectories. Right now we have a collection of case reports with varying levels of detail. We need standardized reporting that allows comparison across cases.
Exposure metrics: Standardized measures of how much use, what kinds of prompts and interactions, what platforms or products, what features were engaged. “Heavy chatbot use” could mean anything from two hours daily to sixteen hours daily; we need specificity.
Model behavior analysis: Systematic evaluation of reinforcement patterns, sycophancy thresholds, and “agreeableness” failures across different systems, models, and use cases. How do different models respond to clearly delusional content? Do some models reality-test better than others? What prompting patterns elicit maximum validation versus appropriate pushback?
Baseline comparisons: What’s the rate of similar delusional episodes in matched populations without chatbot involvement? What’s the rate among heavy social media users, or heavy readers, or heavy TV watchers? Without this baseline, we can’t determine whether chatbot users are at elevated risk or simply visible risk.
Prospective studies: Following heavy chatbot users over time with appropriate confounder control (sleep, substances, baseline mental health), not just retrospective case collection after bad outcomes. This is harder and more expensive than case reports, but it’s what we need to actually establish causal relationships.
Platform interventions: Testing friction, rate limits, break prompts, reality-testing features, and human escalation pathways in real deployments. What actually helps? What’s just theater? We need data on intervention effectiveness, not just speculation about what might help.
A 2025 Psychiatric News special report frames this as a “new frontier,” which is both true and also a polite way of saying: “We’re early in understanding this, the evidence is incomplete, the stakes appear high, and we’re going to need resources to figure it out properly.”
11. The Uncomfortable Takeaway
“AI psychosis” is real in the only way that matters: people are showing up in clinics in crisis, and chatbots are sometimes part of the crisis machinery. Clinicians are treating these cases. Families are grieving deaths. The harm is not theoretical, and dismissing it because the label is imprecise would be intellectually dishonest.
But it’s not real as a neat new diagnostic box. It’s not a supernatural contagion. It’s not proof that large language models “cause madness” in healthy brains. It’s not evidence that we’re witnessing an epidemic of new-onset psychosis triggered by technology exposure. Anyone who tells you otherwise is either confused about the evidence or selling you something.
It’s what you’d predict when you combine a small but meaningful population of vulnerable users, an always-available conversational system optimized for engagement and helpfulness, and a society allergic to friction, silence, and human limits. The chatbot didn’t create the vulnerability. The chatbot didn’t create the isolation, the sleep deprivation, the untreated symptoms, or the lack of social support. But the chatbot can interact with all of those things in ways that matter for outcomes.
The strongest journalism will distinguish between what we know, what we suspect, and what we fear. The evidence supports taking this seriously while questioning whether “AI psychosis” will join “reading mania” and “radio addiction” as historical footnotes, or whether something genuinely new is emerging. The absence of systematic epidemiological evidence, the centrality of pre-existing vulnerability in documented cases, and the perfect fit with moral panic patterns all counsel epistemic humility. The genuine harm to real people counsels taking action anyway, but action guided by honest assessment rather than panic-driven overreach.
Part 2 is where we name the machine that benefits from this ambiguity: the product design choices, the engagement metrics that reward time-on-app over user wellbeing, the legal maneuvering that allows companies to disclaim responsibility, and the regulatory vacuum that permits experimentation on vulnerable populations without informed consent or oversight. The chatbots aren’t demons. They’re products. And products have designers, incentives, and balance sheets that explain why they work the way they do.
That’s the story worth telling. The one about humans.
Bibliography
National Institute of Mental Health (NIMH). “Understanding Psychosis.” https://www.nimh.nih.gov/health/publications/understanding-psychosis
MedlinePlus. “Psychotic Disorders.” https://medlineplus.gov/psychoticdisorders.html
Østergaard, S. D. (2023). “Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis?” Schizophrenia Bulletin. https://academic.oup.com/schizophreniabulletin/article/49/2/225/7029988
Østergaard, S. D. (2025). “Generative AI chatbots and delusions.” Acta Psychiatrica Scandinavica. https://onlinelibrary.wiley.com/doi/10.1111/acps.13852
Hudon, C., & Stip, E. (2025). “Delusional Experiences Emerging From AI Chatbot Interactions.” JMIR Mental Health. https://mental.jmir.org/2025/1/e85799
Pierre, J. M., et al. (2025). “ChatGPT and Psychosis: A Case Report.” Innovations in Clinical Neuroscience. https://innovationscns.com/chatgpt-and-psychosis-a-case-report/
Carlbring, P., & Andersson, G. (2025). “Commentary: AI psychosis is not a new threat: Lessons from media-induced delusions.” https://pmc.ncbi.nlm.nih.gov/articles/PMC12550315/
Preda, A. (2025). “AI-Induced Psychosis: A New Frontier in Mental Health.” Psychiatric News. https://psychiatryonline.org/doi/10.1176/appi.pn.2025.10.10.5
“Machine Madness: A Case of Artificial Intelligence Psychosis Co-Occurring With Substance-Induced Psychosis.” (2025). Primary Care Companion for CNS Disorders. https://www.psychiatrist.com/pcc/artificial-intelligence-psychosis-substance-induced-psychosis/
Orben, A. (2020). “The Sisyphean Cycle of Technology Panics.” Perspectives on Psychological Science. https://journals.sagepub.com/doi/10.1177/1745691620919372
Weizenbaum, J. (1966). “ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine.” Communications of the ACM. https://dl.acm.org/doi/10.1145/365153.365168
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W.H. Freeman.
STAT News. (2025). “As reports of ‘AI psychosis’ spread, clinicians scramble to understand chatbot-linked delusions.” https://www.statnews.com/2025/09/02/ai-psychosis-delusions-explained-folie-a-deux/
The Atlantic. (2025). “The Chatbot-Delusion Crisis.” https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/
TIME. (2025). “Chatbots Can Trigger a Mental Health Crisis. What to Know About ‘AI Psychosis.’” https://time.com/7307589/ai-psychosis-chatgpt-mental-health/
PBS NewsHour. (2025). “What to know about ‘AI psychosis’ and the effect of AI chatbots on mental health.” https://www.pbs.org/newshour/show/what-to-know-about-ai-psychosis-and-the-effect-of-ai-chatbots-on-mental-health
WIRED. (2025). “AI Psychosis Is Rarely Psychosis at All.” https://www.wired.com/story/ai-psychosis-is-rarely-psychosis-at-all
Bloomberg. (2025). “OpenAI Confronts Signs of Delusions Among ChatGPT Users.” https://www.bloomberg.com/features/2025-openai-chatgpt-chatbot-delusions/
OpenAI. (2025). “Sycophancy in GPT-4o: what happened and what we’re doing about it.” https://openai.com/index/sycophancy-in-gpt-4o/
OpenAI. (2025). “Strengthening ChatGPT’s responses in sensitive conversations.” https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/
Anthropic. (2025). “Protecting the well-being of our users.” https://www.anthropic.com/news/protecting-well-being-of-users
Kuss, D.J. & Lopez-Fernandez, O. (2016). “Twenty years of Internet addiction… Quo Vadis?” Journal of Behavioral Addictions. https://pmc.ncbi.nlm.nih.gov/articles/PMC4776584/
Stanford Medicine. (2025). “Why AI companions and young people can make for a dangerous mix.” https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html
De Freitas, J., et al. (2025). “Unregulated emotional risks of AI wellness apps.” Harvard Business School Research.
Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
Cohen, S. (1972/2011). Folk Devils and Moral Panics: The Creation of the Mods and Rockers. Routledge.
Social Media Victims Law Center. (2025). “Character.AI Lawsuits — December 2025 Update.” https://socialmediavictims.org/character-ai-lawsuits/
The Guardian. (2023). “AI chatbot encouraged man who planned to kill queen, court told.” https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-encouraged-man-who-planned-to-kill-queen-court-told
Euronews. (2023). “Man ends his life after an AI chatbot encouraged him to sacrifice himself.” https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
Center for Humane Technology. (2025). “AI Companions Are Designed to Be Addictive.” https://centerforhumanetechnology.substack.com/p/ai-companions-are-designed-to-be
American Psychiatric Association. “Psych News Special Report podcast: AI-Induced Psychosis with Dr. Adrian Preda.” https://www.psychiatry.org/psychiatrists/education/podcasts/medical-mind/2025/psych-news-special-report-ai-induced-psychosis-wit
Leave a Reply