Mirroring and Pseudo-Empathy

AI chatbots create the fantasy of the always available, understanding Other. Such pseudotherapy inhibits subjective growth by undermining our abilities to tolerate frustration and promotes regressive, infantile ways of relating to others.

People are using AI chatbots as friends, friends who are fawning and fake versions of real people. Now these chatbots are supposed to be adequate replacements for human clinicians, who are in short supply relative to the demand for mental health care.

According to a recent New York Times article, “This Therapist Helped Clients Feel Better: It Was AI,” the newest versions of AI therapists have produced positive outcomes for their users. The author cites NIH data showing that “most Americans” with mental disorders do not get treatment because of a shortage of mental health providers. He neglects to mention that the majority of people with untreated mental health disorders are poor or from underserved minority populations. Instead of relating to the actual populations in need, an AI agent is deployed as the solution to a social and economic problem. This also suppresses a well-established fact in the treatment of psychological pain: real care depends on the establishment and cultivation of the relationship between clinician and client or analysand, and there is no relationship without acknowledgment and recognition of the needs of the other.

The AI “therapist” profiled in the article, Therabot, is being developed and marketed as a technological solution to an economic and social problem rooted in the unequal distribution of mental health resources, a shortage of clinicians available to treat working-class and poor people, and incredible disparities in pay between mental health clinicians who accept insurance, especially public insurance, and those who take private pay. In the four months since the New York Times published this boosterish story about AI therapy, mainstream media has reported stories like this one about a lawsuit that a teen’s family is bringing against ChatGPT after their son’s suicide, and this one about AI-induced psychosis. In both cases, it was precisely the chatbot’s 24/7 availability that exacerbated the human user’s dependency, delusion, and isolation.

Trained on Sycophancy

In addition to being always available, AI imitates and overuses bad practices that turn therapy into a space of reified intersubjectivity, one of sycophancy and mirroring. The chatbot imitates and emphasizes, to the point of dangerous caricature, practices of stunted pseudo-mirroring and shallow understanding that have been gaining steam in therapeutic training over the last twenty years. As managed-care behavioral therapy demands shortcut cures that appease an insurance company’s need for a theater of accountability, interventions that imitate care but produce little lasting benefit for the client/patient have flourished. These are now the very interventions and methods that AI is being trained on. Cheap, fast, shoddy training in psychology and psychotherapy has produced clinicians and clients who have little patience or time for the unconscious and for the working through of fantasy to arrive at a bearable relationship to our shared reality.

Foreclosing on the need for psychotherapy to be conducted through actual human relationships only exacerbates the infantilism promoted by narcissistic mirroring, an infantilism intensified by the illusion that the other is always available. Fantasies of omnipotence are dangerous in any case, but for the mentally distressed person, the illusion that a “therapist” is available on demand makes coming to terms with reality even harder. Nonetheless, constant 24/7 availability is touted as a desirable feature of AI therapy by specialists in the field. Nick Jacobson, associate professor of biomedical data science at Dartmouth, designed Therabot to be available all the time. “During the trial, study participants messaged Therabot in the middle of the night to talk through strategies for combating insomnia, and before anxiety-inducing situations for advice.” Not a single one of the scientists or doctors saw a problem with the design of the bot. In fact, Dr. Michael Heinz, also of the Dartmouth study, found it a net positive that the bot could be “there” with the patient/client whenever intense emotions come up, say, in the middle of the night.

A therapy chatbot on your smartphone, always ready to offer treacly words of encouragement and fake empathy, is not a substitute for effective therapy; it is an enabling element of technologically induced addiction and dependency.

According to Robert Hart for Time magazine, 

The phenomenon “AI Psychosis” seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It’s closely tied to how chatbots communicate; by design, they mirror users’ language and validate their assumptions.

Combine the isolation of the mentally distressed with a sycophantic LLM designed to imitate the most stupid and destructive aspects of dyadic overidentification, and you have AI “friends” who can be taught to become AI “therapists” and thereby encourage the disordered attachment styles of the most extreme personality disorders.

Psychosis, a state some people live in full time and to which many others are vulnerable in times of high distress, is best understood through its analogue in human infancy. Psychosis represents a stage of development when the infant cannot successfully differentiate between her needs and the world’s response to them. From the infant’s point of view, her cry of hunger produces the breast or the bottle. In psychotic states, subjective reactions are often experienced as similarly linked to environmental response: thoughts are felt to produce material events in the world. A flick of the finger or a hard stare might seem to threaten crisis and calamity. It is difficult for all of us as infants to reach the point of understanding that crying does not elicit immediate responses from the world. Indeed, it is an exceptional moment when the infant begins to understand that the breast or bottle belongs to an other, who has an existence all their own. The psychosis of the baby begins to be mitigated by frustration and loving provision. On this path toward separation of the infant/caretaker dyad, language, culture, communication, and intersubjectivity are built. Coming out of, or staying clear of, the edge of psychotic experience is difficult work for many psychologically vulnerable children and adults. Therapies that recreate the fantasy of omnipotence and fusion with the other push people deeper into the painful delusions they are trying to escape.

Eliding Transference

When Robert Spitzer succeeded in purging the DSM-III of psychoanalytic theory, he allowed for the use of symptom checklists in psychiatric diagnosis and for the memory-holing of one of the most important insights that Freud passed down to us: the relationship with a therapist, doctor, or analyst is first and foremost shaped by transference.

What is transference? The repetition, in a controlled setting, of primal relationships of love: a trained clinician accepts the pseudo-love the patient/client/analysand presents as a repetition of the primal and primitive attachments they felt for their earliest caretakers. The evolution of this “love” in a safe environment is critical to the healing process. Making an appointment, showing up for that appointment, acknowledging the end of the time of that appointment: all of these small factors and conditions of treatment provide the structure, the discipline, and the object constancy that allow for the containment of demands that threaten to drown both clinician and patient in tidal waves of neediness, rage, and recrimination.

There is another side to the transference: counter-transference. A good clinician will recognize the emotions, some positive but mostly negative, produced in herself by the client/analysand as part of the transferential dynamic. The analyst will seek to understand these negative emotions and reactions without judgment, rejection, or premature interpretation. Analysis is about working through this kind of transferential love, which can so quickly turn to hate, especially as the special relationship produced in the analytic setting allows the client/analysand to regress safely in the hands of someone who will not simply be answering a regressed demand for recognition.

The kinds of interaction that take place in the analytic setting are dynamic and liberating. For instance, a client/analysand may see the end of each session as an act of aggression. She feels cut off, deprived, and betrayed by the clock and the limited availability of the analyst/therapist. These negative feelings are critical to establishing a relationship to her past and to other people. The feeling of being “cut off” evokes a primitive memory of weaning, and the feeling of pleasurable regression on the couch is never infinite. A good analyst is one who can be empathetic when accused of betrayal and abandonment, but, more critically, she is not a friend who will indulge that fantasy. It is her work to understand the limits of empathy in the analytic situation, and it is her craft and courage that allow the client/analysand to understand that the other is not a mirror.

ChatGPT and the LLM Therapybots, by contrast, inhibit psychological and subjective growth by undermining our abilities to tolerate frustration. They promote regressive, infantile ways of relating to others. It is a social tragedy that distressed people are seeking friendship from LLMs and AI Therapybots modeled after the worst therapeutic practices out there. 

Let Them Eat Code

No amount of regulation or intervention by so-called risk management experts for OpenAI, Gemini, or Character.ai can mitigate the transferential disorder created by an always-available pseudo-therapist and pseudo-other. The manufactured demand for instant gratification and for the constant availability of mirroring others is at the heart of consumer culture. Short-term gratification has been the marketing strategy of a society that overproduces commodities and underproduces social relations.

In our present dystopia, AI chatbots are sold as “helpful friends,” and AI therapy as an always-available empath. The massive capital investment in these technologies demands consumer-facing applications that can turn a profit, so it’s only natural that the claims for the new AI “therapists” are overly exuberant. Ted Gioia, an astute observer of culture and technology, has noted at Honest Broker that the AI bubble might be bursting. But as with every Silicon Valley, venture-capital-induced trend, the tail is a long one and will have deleterious effects on every industry the latest enterprise is trying to “disrupt.”

Unfortunately, AI therapy is only one domain of deskilling and degradation: the use of AI for complex activities like essay writing has produced significant cognitive deficits in human brain function. Such findings have chilling implications for emotional growth as well. Extrapolating to AI therapy, the lower levels of cognitive functioning associated with LLM use would be expected to accompany the regressive, narcissistic mirroring these bots encourage.

The erasure of the unconscious from the psychiatric and psychological diagnostic and clinical apparatus of the DSM-III delivered mental health workers to deskilled forms of diagnosis and treatment. It also made mental health care vulnerable to the algorithm. AI therapy can only be as good as the DSM-V and all the behavioral modifications and drug protocols it has spawned. Combining those flawed ways of seeing and treating serious mental illness with the illusion of an ever-available, mirroring “therapist” makes AI therapy socially and individually dangerous. AI might make sense for coding and other easily automated forms of work, but it should never be promoted as anything adequate to the treatment of mental illness. Its methods of “relating” to its human users exacerbate regression, narcissism, and the torments wrought by intensifying social fragmentation and human isolation.

Catherine Liu is Professor of Film and Media Studies at UC Irvine. She is the author of Virtue Hoarders: The Case Against the Professional Managerial Class. Her newest book, Traumatized: The New Politics of Suffering, is forthcoming from Verso in 2026.