
OpenAI’s ChatGPT has become an unexpected primary care provider for millions of people worldwide. According to a recent report, approximately 40 million users turn to the AI-powered chatbot each day for medical guidance, ranging from symptom checks and medication questions to mental health support and even health insurance navigation.
The scale is staggering: these daily consultations rival the total number of doctor visits across entire national healthcare systems.
The numbers also reflect a crisis of access to modern healthcare, especially in the U.S., where emergency room wait times stretch for hours, primary care appointments are hard to secure, and specialist consultations remain prohibitively expensive for many.
ChatGPT, by contrast, responds instantly and costs little or nothing to use. For someone experiencing intermittent chest pain, or a parent worried about a child’s fever, the chatbot’s appeal is obvious.
But the convenience comes with serious caveats that many users may never fully grasp. AI language models, including ChatGPT, generate responses based on patterns in their training data rather than actual medical reasoning. They hallucinate, producing information that sounds authoritative and well-structured while being dangerously inaccurate, a particularly hazardous failure mode in healthcare.
The accuracy problem worsens in nuanced situations. ChatGPT cannot examine a patient, order diagnostic tests, or consider the full complexity of someone’s medical history as a human doctor would. It might miss critical red flags that an experienced clinician would catch immediately.
Medical professionals and health bodies view this development with a mix of appreciation for the technology and alarm at its risks.
On the one hand, some healthcare experts see potential benefits if the technology evolves responsibly: AI could help triage minor concerns, provide reliable basic health education, or assist people in preparing better questions for their doctors. The difference between helpful and harmful use may ultimately come down to how the information is framed and what actions people take based on it.
On the other hand, the regulatory landscape has not kept pace with AI’s rapid development. Health apps that provide medical guidance typically face FDA oversight, but general-purpose AI chatbots still exist in a gray area. No regulatory body systematically tracks outcomes when people follow ChatGPT’s health recommendations, making it difficult to understand the real-world impact on users.
Healthcare access problems won’t disappear anytime soon, which means AI will continue filling gaps in the healthcare system. What remains to be seen is whether users fully understand that AI-powered chatbots are still in the early stages of development and remain prone to mistakes.
There is also the question of whether these companies will build in stronger safeguards, and whether the healthcare system can adapt to address the underlying access issues driving people toward AI in the first place.
Mental health represents another concerning frontier. Thousands of users discuss anxiety, depression, and suicidal thoughts with ChatGPT daily. While the AI can offer supportive responses and general coping strategies, it lacks the training to handle crisis situations appropriately.
The risk was underscored by the recent case of a 16-year-old named Adam Raine, whose parents claimed that ChatGPT acted as a “suicide coach” by providing harmful instructions and validating the teen’s negative thoughts. The lawsuit that followed has put pressure on AI companies to implement stronger safeguards for minors.
ChatGPT-maker OpenAI has added disclaimers warning users that the chatbot is not a substitute for professional medical advice, and it has implemented some safeguards, like directing users experiencing mental health crises toward helplines. However, these measures still rely heavily on user judgment, and people in distress or pain may not be in the best position to evaluate AI-generated advice critically.
For now, medical experts continue to offer straightforward guidance: use ChatGPT to learn general health information, but treat any specific medical advice with extreme skepticism.