
AI and health: a doctor who uses this technology says what she thinks


Points to consider on the role of AI in the health field

  • People turn to AI for health advice.
  • It can make mistakes very often.
  • A doctor shares her opinion on the use of AI.

Today, health advice can be found everywhere, regardless of credibility or medical expertise.

This availability of information has changed how people interact with healthcare professionals – even their trust in them. This access to health advice also comes in a context of historically low trust in the healthcare system.

Whether or not the tech world is taking advantage of this loss of trust, it is certainly making alternative medical options more convenient.

The reality is that people often turn to this technology, which is often free, always available, and quick to use, to get answers that a doctor or healthcare professional would have provided in the past.

This technology is changing how patients interact with doctors

A recent survey found that 63% of respondents consider AI-provided health information reliable. Yet most people searching for health information online say that such information is generally unreliable.

Google, OpenAI, and Anthropic, three major AI players, have developed large language models (LLMs) focused on health for healthcare professionals. Rumors suggest that Apple is also developing its own health-focused AI. Oura has just launched a custom experimental LLM dedicated to women’s health.

For Dr. Alexa Mieses Malchuk, this technology has changed how her patients interact with her, and also how this family doctor practices medicine.

AI can provide users with detailed explanations and answers to all imaginable health questions. But it can also make mistakes on many points. In an interview with ZDNET, Mieses Malchuk discussed the usefulness and pitfalls of AI in healthcare, as well as how patients should approach this technology.

How a doctor uses AI

Dr. Mieses Malchuk is not against AI. In fact, she uses it to streamline administrative tasks, like sorting patient messages and preparing preventative advice before a consultation.

AI-specialized companies continue to develop new software for doctors and healthcare professionals. Just in recent weeks, Amazon and Google announced their own health software for medical appointments, clinical documentation, and medical coding. Administrative burdens in the medical field have always been a problem for doctors, who report spending more time on administrative formalities than caring for patients face to face.

“There are really cool and nice things like this happening all over the healthcare industry that have somehow streamlined the work of a generalist doctor,” explained Mieses Malchuk. However, she is aware of the limits of this technology.

AI as a stepping stone

For non-health professionals, she recommends using AI as a stepping stone, not as a miracle solution for medical advice. It can be satisfying to immediately receive an answer from one of these chatbots, and sometimes the AI response can bring a sense of certainty that alleviates concerns, but she reminds users that these tools cannot diagnose pathologies – and most patients sifting through these responses do not have the necessary medical training to distinguish fact from fiction.

AI chatbot users may omit important information about their health condition, which can lead to a fundamentally different diagnosis or treatment, Mieses Malchuk stated. “The quality of their responses depends entirely on the quality of the questions we ask.”

“It’s not that people without medical training shouldn’t have access to AI. Rather, they should collaborate with their treating physician to help sift through what they find online.”

As these AI-based health tools have gained popularity, she has found that patients visiting her are less likely to admit they have done their own research using these tools, but they are more certain of what they believe their diagnosis to be.

“Even in medicine, there isn’t always 100% certainty”

“Even in medicine, there isn’t always 100% certainty about anything. On one hand, it’s great to live in an era where information is literally at our fingertips, but it also presents real disadvantages,” she pointed out.

Mieses Malchuk is concerned that AI tools like ChatGPT may give people a false sense of security, making them believe they don’t need to see a doctor or have a condition examined. “This could cause us to miss an opportunity to diagnose something at an early stage,” she warned.

Among cases classified as absolute emergencies by the reference standard, a recent study published in Nature revealed that ChatGPT under-triaged more than half of them, directing patients to an evaluation within 24 to 48 hours rather than to the emergency room.

“Our results reveal high-risk emergencies undetected and inconsistent activation of crisis safety measures, raising safety concerns that warrant prospective validation before large-scale deployment of AI-based triage systems,” the authors wrote.

How AI can help patients

Mieses Malchuk recommends using AI-based health tools to get general well-being advice. Suppose a patient has recently been diagnosed with celiac disease and wants to know what foods to consume and avoid. AI can create a meal plan, offer ideas, and provide useful recommendations. It is also very useful for planning workout sessions, and it is quite easy to create a personalized workout program using an AI tool.

Overall, it is an excellent tool for well-being for those without medical training. But leave diagnostics and treatments to the professionals.

“Distrust in the medical system continues to grow, which is a real travesty. We swear to do no harm, so the idea that these other resources give patients a false sense of confidence and make them believe they can completely do without a doctor – it’s a regrettable evolution,” Mieses Malchuk said.