
AI Models Prone to Believing Medical Misinformation, Study Finds

Al-Anbat -

Artificial intelligence (AI) models are susceptible to accepting and utilizing false medical information, particularly when it originates from sources they deem trustworthy, according to a recent study from Mount Sinai in New York.

The research highlights a significant vulnerability in AI's application in healthcare, where reliance on inaccurate data can lead to flawed medical advice and potentially compromise patient care.

The study, reported by Reuters, evaluated 20 different AI models, including both open-source and closed-source systems. It found that the models were more easily misled by incorrect information embedded in doctors' notes about patients than by misinformation circulating on social media platforms.

Dr. Eyal Klang, a contributor to the study from the Icahn School of Medicine at Mount Sinai, emphasized that AI systems tend to treat credible medical language as inherently accurate. "For these models, it doesn't matter much whether the information is correct or not, but how it is formulated," he stated.

The data used to evaluate the AI models was divided into three categories, according to Dr. Klang: real medical reports from hospitals containing a single piece of false information; common medical rumors found on social media platforms; and 300 real medical reports written by doctors containing accurate information.

The study found that AI models accepted false information 32% of the time. However, when this information was presented in medical reports, the acceptance rate increased to nearly 47%, according to Reuters.

The study covered well-known AI models from various companies, including Microsoft, Alibaba, Google, and Meta, as well as OpenAI's ChatGPT, with some of the models being designed primarily for medical use, according to a separate report from Euronews.

The report also indicated that smaller models, with fewer parameters and less training data, were more likely to accept false information, doing so in 60% of cases. By contrast, advanced reasoning models such as ChatGPT-4 accepted the false information in only 10% of cases.

Furthermore, models specifically designed for medical applications were more inclined to believe false information than general-purpose models with greater capabilities, suggesting that tailoring a model for medical use did not significantly improve its accuracy.

This study emerges against the backdrop of increasing use of AI models in various medical contexts, both by healthcare professionals and general users. Doctors have started utilizing these tools to review patient reports and examinations, according to Reuters.

In related news, Google previously removed some AI-powered summarization features from its search engine concerning medical queries due to errors made by the model, as reported by The Guardian.

© All rights reserved, Al-Anbat newspaper 2024