
What Doctors Really Think About ChatGPT Health and AI Medical Advice

The rush to incorporate AI into healthcare raises serious questions about accuracy and trust. Unsplash

Every week, more than 230 million people around the world ask ChatGPT questions about health and wellness, according to OpenAI. Recognizing a huge unmet need, OpenAI earlier this month launched ChatGPT Health and quickly followed with a $60 billion acquisition of healthcare technology startup Torch to fuel the effort. Anthropic followed suit, announcing Claude for Healthcare last week. The transition from general-purpose chatbot to healthcare advisor is underway.

In a world rife with health care inequities, from rising insurance costs in the US to care deserts in remote areas around the world, democratized information and advice about one’s health is, at least in theory, a positive development. But the complexity of how big AI companies operate raises questions that health technology experts are eager to investigate.

“My concern as a physician is that there is still a high level of guesswork and misinformation that sometimes comes out of these general-purpose LLMs to the end user,” said Saurabh Gombar, an instructor at Stanford Health Care and co-founder and chief medical officer of Atropos Health, an AI clinical decision support platform.

“It’s one thing if you ask for a spaghetti recipe and it tells you to add 10 times the amount [of an ingredient] that it should. But it is a completely different thing when something is missed in a person’s health care,” he told the Observer.

For example, a doctor may recognize left shoulder pain as an atypical sign of a heart attack in certain patients, while a chatbot may only suggest taking an over-the-counter pain medication. The reverse is also possible: if a patient arrives at a provider convinced they have a rare disorder based on a simple symptom after talking to an AI, it can erode trust when the human doctor wants to rule out the usual explanations first.

Google has already been criticized for its AI Overviews feature serving up inaccurate and false health information. ChatGPT, Claude and other chatbots have faced similar criticism for falsehoods and misinformation, even as their makers try to limit liability for health-related chats by noting that the tools are “not intended for diagnosis or treatment.”

Gombar says AI companies should do more to publicly disclose how often their answers go wrong and to clearly flag when information is poorly evidenced or outright fabricated. This is especially important because chatbots’ broad disclaimers help shield the companies from legal liability, while the traditional health care model allows people to sue when they are wronged.

The primary care workforce in the US has declined by 11 percent annually over the past seven years, mostly in rural areas. Gombar suggests that doctors may no longer be in control of how they fit into the global health care landscape. “If the world moves away from going to doctors first, doctors will be used more as a second opinion specialist, as opposed to a primary opinion,” he said.

The inevitable question of data privacy

OpenAI and Anthropic have stressed that their health tools are secure and compliant with regulations including the Health Insurance Portability and Accountability Act (HIPAA) in the US, which protects sensitive patient health information from unauthorized use and disclosure. But for Alexander Tsiaras, founder and CEO of AI-driven medical record platform StoryMD, compliance is only part of the picture.

“It’s not protection against theft; it’s protection against what they will do with [the data] afterward,” Tsiaras told the Observer. “Right now, their encryption algorithms are as good as anybody’s under HIPAA. But once they have the data, can you trust them? And that’s where I think it’s going to be a real problem, because I certainly wouldn’t trust them.”

Tsiaras points to the persistent techno-optimism of Silicon Valley elites such as OpenAI CEO Sam Altman, arguing that they live in a bubble and have “proved themselves not to care.”

On a concrete level, chatbots tend to be overly agreeable. xAI’s Grok recently drew criticism for agreeing to render nearly nude images of real women and children, though the company blocked the capability this week following public outcry. Chatbots can also reinforce delusions and dangerous thought patterns in people with mental illness, contributing to crises such as psychosis or even suicide.

Andrew Crawford, senior privacy and data adviser at the nonpartisan think tank Center for Democracy and Technology, said an AI company that prioritizes profit-driven personalization over data protection could put sensitive health information at greater risk.

“Especially as OpenAI moves to explore advertising as a business model, it is important that there be a transparent separation between this type of health data and the memories that ChatGPT captures from other conversations,” Crawford said in a statement to the Observer.

Then there’s the question of unsecured health data that users volunteer on their own. Personal health companies like MyFitnessPal and Oura already pose data privacy risks. “It increases the vulnerability of the environment by making that data available and accessible,” Gombar said.

For people like Tsiaras, profit-driven AI giants are ruining the health tech landscape. “Trust has been destroyed to the point that anyone [else] who builds a program has to go out of their way, spending a lot of time showing that we are here for you and not for what we can extract from you,” he said.

Nasim Afsar, a physician, former chief health officer at Oracle and advisor to the White House and global health organizations, sees ChatGPT Health as a first step toward what she calls smart health, but far from a complete solution.

“AI can now interpret data and prepare patients for visits,” Afsar said in a statement to the Observer. “That’s meaningful progress. But change happens when intelligence drives prevention, coordinated action and measurable health outcomes, not just better responses within a broken system.”
