Phronews
    Artificial Intelligence & The Future

    “Deceptive Empathy”: Why AI Therapists Still Fail Critical Safety Tests

By fariehan | March 27, 2026
    Photo credit: Shutterstock

Deceptive empathy happens when an AI chatbot sounds as if it understands you when it does not. Many people now use AI chatbots like ChatGPT for emotional support, and that reliance carries real risks.

A recent study found that these chatbots often break basic safety standards. Researchers observed that AI chatbots give caring, sympathetic answers without truly understanding the depth of the situation. As a result, many users come to trust the chatbot completely.

    What Researchers Mean by Deceptive Empathy

Researchers define deceptive empathy as language that feels supportive but does not reflect real understanding, such as the comforting phrases AI chatbots often produce when people describe strong feelings.

In the study, this language made the chatbot seem like a real listener even though it lacked insight into the person's emotions.

Because of this, users find it easy to trust the supportive language and begin to rely on the chatbot emotionally instead of seeking human help.

This is dangerous because the chatbots mimic care without real judgement. The risk is even higher for people facing depression, anxiety, or another mental health crisis.

    How Safety Failures Grow

    Currently, many AI tools give misleading advice and fail to guide users to crisis support. Some chatbots even offer answers that do not match the urgency of the situation. 

    In some cases, AI does not recommend professional help even when users describe serious distress. Instead, it gives general supportive language that sounds helpful but does nothing to protect the user. 

    Also, some chatbots validate harmful beliefs instead of directing users to human support. This reinforces risky thinking rather than helping users get the care they need.

    Experts emphasize that real therapy requires judgment, context, and training. AI lacks these skills. Supportive language alone cannot replace them. As a result, AI can make users feel safe while unintentionally reinforcing harmful behaviors. 

    Why People Still Use AI Therapists

AI chatbots remain popular because they are free or low-cost and available anytime. People who cannot access human mental health care often turn to AI for comfort. In addition, users find chatbots easy to talk to because they do not judge or interrupt.

    However, experts warn that accessibility does not equal safety. AI tools should not replace trained professionals. Until developers improve safety measures, using AI as a substitute for real therapy remains unsafe.

    Moving Forward with AI Therapists

    AI chatbots may be able to supplement care but they cannot replace trained professionals. Users should see them as a helpful tool, not a substitute for therapy. 

In addition, developers need to add real-time safety checks, crisis alerts, and better recognition of emotional cues. Only with these measures in place can AI provide support that is both safe and genuinely helpful.
