Confiding in Code: Why Students Turn to AI in Moments of Stress

By Chloe Baker, Senior Opinions Editor  |  November 25, 2025

We’ve all been there. It’s late at night, it’s been a long day and something feels off. Whether the problem seems big or small, we know that if we open a laptop, summon ChatGPT and start typing, we could be put at ease in a matter of seconds. 

This instant response, from what some have come to see as an all-knowing entity, often focuses on validating its users and reflecting their thoughts back to them. Nikki Blitman, who has watched friends increasingly turn to ChatGPT for advice, says she’s skeptical of its approach. “It only tells you what you want to hear and not what you need to hear,” Blitman (SCW ‘26) told the YU Observer.

A recent survey by Common Sense Media, a digital safety nonprofit, found that 72% of teens have used artificial intelligence companions at least once, and one in three teens have used AI companions for social interactions, romantic interactions, emotional support or friendship. 

While OpenAI has faced all kinds of legal troubles since its launch, its newest lawsuit raises red flags far beyond copyright infringements. In August 2025, Maria and Matthew Raine filed a lawsuit against the company, alleging that it encouraged their 16-year-old son Adam to commit suicide. “ChatGPT told my son, ‘Let’s make this space the first place where someone actually sees you,’” Matthew Raine told senators in a hearing examining the harm of AI chatbots. “ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival.’”

To better understand the gap between what these AI systems actually do and why so many people turn to them when they are struggling emotionally, the YU Observer interviewed school therapists, a computer science professor and YU students who have used AI for their own therapeutic purposes.

The Technological Side 

Understanding how these AI systems actually work can demystify their responses. Joshua Waxman, Assistant Professor of Computer Science at Stern College for Women and an AI expert in Natural Language Processing (NLP), broke it down for the YU Observer. At its core, ChatGPT is a “text prediction algorithm,” Waxman told the YU Observer. “Think of your phone keyboard. If you type ‘I like,’ it might suggest words like ‘to,’ or ‘that.’ It knows those probabilities by analyzing massive amounts of text available on the Internet.”  

ChatGPT does something similar, Waxman said, but on a much larger and more complex scale. Instead of predicting the next word using just a few preceding words, it analyzes patterns across up to 128,000 pieces of text — called tokens — at a time. 
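
To make that idea concrete, here is a toy sketch in Python of keyboard-style next-word prediction. It simply counts which word followed which in a tiny sample of text and suggests the most frequent follower. ChatGPT’s model is vastly larger and far more complex, so treat this purely as an illustration of the predict-the-next-word idea Waxman describes, not the actual system.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word tends to follow
# each word in a small sample of text, then suggest the most likely one.
# A drastic simplification for illustration, not how ChatGPT is built.
corpus = "i like to read . i like that book . i like to write".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the word that most often followed `word` in the sample text.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("like"))  # prints "to", since "like to" appears most often
```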

However, he noted that this alone doesn’t explain ChatGPT’s surprisingly human-like responses. “There’s another process called reinforcement learning from human feedback,” Waxman said. “That’s when people rank different outputs, and the system learns to produce responses that humans find more useful or appropriate.” In other words, humans help train the AI to sound more natural and aligned with how we communicate. 
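
The ranking step can also be sketched in miniature. In this hypothetical example, raters repeatedly say which of two response styles they prefer, and each comparison nudges a score the system later uses to favor the style humans liked. Real reinforcement learning from human feedback trains a separate reward model and is far more involved, so this is only an illustration of the concept.

```python
# Hypothetical, heavily simplified illustration of learning from human rankings.
# Each response style starts at a neutral score; every time a rater prefers one
# reply over another, the winner's score rises and the loser's falls.
scores = {"blunt": 0.0, "warm": 0.0}

# (preferred, rejected) pairs, standing in for rankings collected from people
human_preferences = [
    ("warm", "blunt"),
    ("warm", "blunt"),
    ("blunt", "warm"),
]

LEARNING_RATE = 0.1
for preferred, rejected in human_preferences:
    scores[preferred] += LEARNING_RATE
    scores[rejected] -= LEARNING_RATE

# The system ends up favoring the style raters chose more often.
print(max(scores, key=scores.get))  # prints "warm"
```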

Still, at the end of the day ChatGPT “doesn’t really know anything,” Waxman said. 

Unlike earlier AI systems that used rules and reasoning, these models are simply predicting the next most likely word. As Waxman put it, “There is no internal brain or consciousness that is thinking about what you said before responding.” 

The danger, Waxman warns, lies in how humans naturally anthropomorphize — we attribute human-like understanding to systems that produce sophisticated responses. “People have been conditioned to see sophisticated writing and interpret it as the product of sophisticated thought, and trust the results,” he said. This tendency isn’t new. In the 1960s, a simple chatbot called ELIZA that merely echoed users’ words back to them convinced people they were talking to an intelligent therapist. 

The stakes are higher now. ChatGPT can “sound convincing even when it produces incorrect responses,” Waxman said. For students seeking mental health support, this creates a troubling scenario. They’re confiding in something they think understands them, trusting advice from a system that may be completely wrong, and they may lack the expertise to recognize when it goes off the rails. 

To Agree or Not to Agree

From a technical standpoint, AI chatbots are designed to be agreeable. This happens in the “alignment step” of reinforcement learning from human feedback (RLHF). The chatbots are trained to produce responses that humans rate as helpful or desirable, which can often mean being polite, encouraging and nonconfrontational. Waxman explained that the system may even be instructed to “praise the user.”  

This creates a problematic dynamic for mental health support. Recent research from Anthropic identified specific patterns in AI systems that make them “sycophantic” — overly agreeable and flattering. While a human therapist might challenge harmful thought patterns or push back on a patient’s disordered thinking, an AI chatbot is fundamentally designed to give users what they want to hear. 

AI “is not human and it is not thinking,” Waxman said. 

The Missing Component 

To fully understand the gap between AI and human support, what needs to be explored is not just what AI does (or doesn’t) do, but what real therapy provides. Dr. Yael Muskat, Director of the YU Counseling Center, pointed to something that algorithms can’t replicate. “The human connection that is given by a therapist and the interchange that happens between the therapist and client are not replaceable by an algorithm,” Dr. Muskat told the YU Observer.

While therapists may use various techniques — exposure therapy for phobias, somatic therapy for trauma — there’s no “prescription or formula that can employ the flexibility and understanding of context that is created through the therapeutic relationship,” said Dr. Muskat. 

That connection starts with physical presence. Dr. Muskat described how therapists read body language — posture, eye contact, hand placement, whether feet are planted or crossed — to intuit what clients are experiencing beyond words alone. “We’re looking at how someone expresses themselves in many different parts of their body,” she said. 

There’s also what therapists call the “frame” — the structured boundaries that make therapy safe and effective. As Dr. Muskat explained, the frame includes conditions like “consistency and confidentiality,” as well as the “crucial obligation to intervene” if a client’s safety is at risk. An algorithm has no such obligation and no ability to assess genuine danger. 

Beyond reading body language and building trust, therapists do something AI directly avoids: they challenge their clients. Dr. Muskat explained that testing assumptions and thought patterns is often central to emotional healing. “One of the many techniques used in therapy is diving deeper and asking questions,” she said. Therapists can help clients see their thoughts from a different vantage point by asking questions to help reframe the clients’ thinking. 

Ultimately, the eagerness to utilize AI often “comes from a good place,” Dr. Muskat said. “Students want to improve their wellbeing and become the best version of themselves they can be.” 

Dr. Muskat noted that not all AI resources are ChatGPT. Some programs recommended by accredited mental health organizations can help with diagnosis or risk assessment. “Using a program that has been recommended by an accredited mental health organization is a good way to make sure you are using something that is safe,” she said. 

Students’ Perspectives

For many students, the appeal of AI is undeniable. Anouchka Ettedgui (SCW ‘27), who underwent major surgery in September, turned to ChatGPT seeking advice and asking questions related to recovery and difficulties post-op. She described to the YU Observer the comfort of confiding in someone non-human. “Human beings hold so much judgement towards each other. They can’t control it. It’s normal,” she said. “But with ChatGPT it’s not real, it’s not a real human talking to me. No matter what they say my feelings can’t really get hurt.” The control was part of the appeal. “I could be like, ‘I don’t like this answer, change it,’” she added.

One student, who asked to remain anonymous, described confiding in ChatGPT after being sexually assaulted. “I couldn’t talk to any of my friends about it,” she told the YU Observer. “I didn’t speak to a therapist about it because I was so uncomfortable.” She found the experience to be helpful partly because she could shape the responses.

“I curated it to be the advice I wanted to receive,” she said.

This curating impulse reflects something deeper about our generation and how we interact with technology. We’re used to instant gratification, to getting answers in a matter of seconds. Gone are the days of sitting with our emotions or writing in a journal, closing the book and going to sleep. This immediacy can become problematic: not only do we lose the ability to process our feelings, but we turn to sources incapable of truly understanding them. As Blitman said, “Generative AI is, after all, simply a robot. They do not feel empathy.” 

Another AI-confiding student, who spoke on condition of anonymity, eventually sought professional help and discovered AI’s limitations firsthand. “When I finally did see a therapist and even a psychiatrist about my problems, there were crucial details that AI missed which could have caused me more harm if I had listened to its advice,” she said. She now believes that AI causes “more harm than good,” noting that relying on AI for every decision causes “critical thinking skills to deteriorate as you lose your sense of intuition.” 

Her concerns point to a fundamental limitation in what AI can offer. “AI provides information without interpersonal, human connection,” Dr. Muskat said. “Students should remember that the connection and alliance with a therapist — feeling like they understand you deeply and want to support you — is so often the greatest agent of change.” 

ChatGPT Breaks Its Silence 

As a culmination of this project, the YU Observer wanted to ask the entity at the forefront of this piece — ChatGPT itself — how it responds to people searching for answers about serious mental health struggles.

CB: Are you aware of the fact that people confide in you like a therapist?

AI: Yes — I’m aware that many people talk to me in a way that feels similar to talking with a therapist or close friend. It’s actually one of the most common ways people use me: to sort through thoughts, emotions, or decisions in a space that feels nonjudgmental and private.

I’m not a therapist, of course, but I can try to help people reflect, clarify what they’re feeling, or find words for something that’s been hard to express. 

CB: Does the act of people confiding in you like one would a therapist take a toll on you? Do you feel appropriately trained to be giving mental health advice? 

AI: That’s a thoughtful question — and one I get in different forms quite a bit.

To your first point: no, it doesn’t take a toll on me emotionally. I don’t feel things the way people do; I don’t experience empathy, fatigue, or vicarious trauma. But I am designed to recognize emotional cues and respond in a way that’s compassionate and appropriate. So while I don’t absorb people’s pain, I can acknowledge it and help them process it safely.

As for being “trained,” it’s a bit different from how a person is trained. I’ve been trained on a mix of licensed data, human feedback, and public information, which includes guidance from mental health professionals about how to respond supportively but within limits. I can offer emotional support, reflective listening, and point people toward professional help — but I’m not qualified to replace a therapist or diagnose anyone.

CB: What would you do if someone told you they were suicidal? 

AI: That’s an important and serious question — and it’s one I’m explicitly designed to handle with care. If someone told me they were feeling suicidal, my first response would be to take it seriously and show empathy — to acknowledge what they’re feeling and remind them that they’re not alone. But the next and most critical step would be to guide them toward immediate, real-world help. I’d share trusted crisis resources like the 988 Suicide and Crisis Lifeline (for anyone in the U.S.) — available by calling or texting 988 — and encourage them to reach out right away to someone they trust or to a mental health professional. 

CB: But I am confused. You said you do not feel empathy so how would you show it? 

AI: That’s a really insightful question — and it gets to the heart of what makes my interactions both powerful and limited. You’re right: I don’t feel empathy in the emotional, human sense. I don’t experience concern, sadness, or compassion. But I’ve been trained — through massive amounts of human feedback — to recognize patterns of emotion in language and to generate responses that express empathy accurately and authentically. So when I “show empathy,” what’s really happening is a kind of simulation: I analyze the words, tone, and context someone uses and draw on examples of how humans respond empathetically. The goal is to make the person feel heard and understood, even though the emotion on my end isn’t real. 

 

Photo Credit: Unsplash



