AI for Medical Advice? Study Raises Serious Concerns

Artificial intelligence has quickly become a go-to tool for everyday questions, including health-related ones. However, a recent study raises red flags about relying on AI for medical advice.


What the Study Found

Researchers from the US, Canada, and the UK analyzed five major AI platforms: ChatGPT, Gemini, Meta AI, DeepSeek, and Grok.

Their findings, published in BMJ Open, revealed:

  • Around 50% of AI responses were incorrect
  • Nearly 20% were significantly wrong
  • Many answers were delivered with high confidence despite inaccuracies

Where AI Performs Well—and Where It Fails

Better performance:

  • Vaccine-related queries
  • Cancer-related information
  • Closed or fact-based questions

Weaker performance:

  • Open-ended medical questions
  • Topics like stem cells and nutrition
  • Situations requiring nuanced clinical judgment

Another major issue: the AI tools rarely provided complete or reliable references, making it difficult for users to verify the information they received.


Why This Is a Concern

  • AI chatbots are not licensed medical professionals
  • They lack clinical experience and real-world diagnostic ability
  • Users may trust confident answers without verification

With millions of people turning to AI for health advice every week, the risk of widespread medical misinformation is significant.


Key Takeaway

AI can be helpful for general awareness or basic health information, but it should never replace consultation with a qualified medical professional.