
AI Models Prone to Believing Medical Misinformation, Study Finds

Al-Anbat -

Artificial intelligence (AI) models are susceptible to accepting and utilizing false medical information, particularly when it originates from sources they deem trustworthy, according to a recent study from Mount Sinai in New York.

The research highlights a significant vulnerability in AI's application in healthcare, where reliance on inaccurate data can lead to flawed medical advice and potentially compromise patient care.

The study, which was reported by Reuters, evaluated 20 different AI models, encompassing both open-source and closed-source systems. It revealed that these models were more likely to be misled by incorrect information found in doctors' notes regarding patients than by misinformation circulating on social media platforms.

Dr. Eyal Klang, a contributor to the study from the Icahn School of Medicine at Mount Sinai, emphasized that AI systems tend to treat credible medical language as inherently accurate. "For these models, it doesn't matter much whether the information is correct or not, but how it is formulated," he stated.

The data used to test the AI models were divided into three categories, according to Dr. Klang: real hospital medical reports, each containing a single piece of false information; common medical rumors circulating on social media platforms; and 300 genuine medical reports written by doctors containing only accurate information.

The study found that AI models accepted false information 32% of the time. However, when this information was presented in medical reports, the acceptance rate increased to nearly 47%, according to Reuters.

The study covered well-known AI models from various companies, including Microsoft, Alibaba, Google, and Meta, as well as OpenAI's ChatGPT, with some being primarily medical models, according to a separate report from Euronews.

The report also indicated that models with fewer parameters and less training data were more likely to accept false information, doing so in 60% of cases, while more advanced reasoning models such as GPT-4 accepted it in only 10% of cases.

Furthermore, models specifically designed for medical applications were more inclined to believe false information than general-purpose models with greater capabilities, suggesting that tailoring a model for medical use did not significantly improve its accuracy.

This study emerges against the backdrop of increasing use of AI models in various medical contexts, both by healthcare professionals and general users. Doctors have started utilizing these tools to review patient reports and examinations, according to Reuters.

In related news, Google previously removed some AI-powered summarization features from its search engine concerning medical queries due to errors made by the model, as reported by The Guardian.

© All rights reserved, Al-Anbat Newspaper 2024