AI-generated health recommendations can be hazardous and should never take the place of professional medical advice.
Chatbot-generated health information may be outdated, misleading, or overly generic.
Experts suggest using AI only for general background research. Always consult with a healthcare provider regarding any health advice obtained from AI sources.
In one widely reported case, a 60-year-old man replaced table salt with sodium bromide based on a ChatGPT suggestion; he developed bromide toxicity and required a three-week psychiatric hospitalization.
The case illustrates the risks of relying on AI for health information. Even so, recent surveys find that many Americans consider AI-generated health information "somewhat reliable." Professionals emphasize that AI should never substitute for professional medical care.
AI chatbots lack access to individual patient records and therefore cannot provide accurate guidance on new symptoms, existing conditions, or urgent care needs.
According to Margaret Lozovatsky, MD, vice president of digital health innovations at the American Medical Association, AI advice is generally too vague for personalized medical situations.
Dr. Lozovatsky recommends using AI tools for background information that can help patients ask informed questions or understand medical terms better.
Generative AI relies on the data it was trained on, which may not include the most current medical guidance. For example, updated flu shot recommendations from the Centers for Disease Control and Prevention (CDC) might not yet be reflected in chatbot responses.
AI systems can present incorrect information confidently, as they patch together pieces of data to create seemingly authoritative answers.
A study published in Nutrients found that popular chatbots such as Gemini, Microsoft Copilot, and ChatGPT can generate reasonably good weight loss meal plans, but these plans often fall short on the balance of macronutrients (carbohydrates, protein, and fat) and of specific fatty acids.
Ainsley MacLean, MD, a health AI consultant and former chief AI officer for the MidAtlantic Kaiser Permanente Medical Group, stated, "I would be extremely reluctant to advise a patient to follow guidance from ChatGPT." She noted that generative AI chatbots aren't covered by health privacy laws such as HIPAA, so users should avoid entering personal medical information into these systems.
When browsing AI summaries on Google, it's important to verify the source of the information. Look for respected science journals or medical organizations and check when the material was last updated.
Dr. Lozovatsky underscored that individuals should still consult their physicians about new symptoms and disclose any chatbot-sourced health advice they may have acted upon.
It's reasonable to share AI-generated information with your doctor and ask questions like: "Is this accurate? Does it apply to my specific case?" You can also ask whether your physician recommends any trusted AI tools.