Test of ‘poisoned dataset’ shows vulnerability of LLMs to medical misinformation
Health


Last updated: January 11, 2025 2:38 pm
Editorial Board | Published January 11, 2025

Credit: Nature Medicine (2025). DOI: 10.1038/s41591-024-03445-1

By conducting tests under an experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easy it is to taint the data pool used to train LLMs.

For their study published in the journal Nature Medicine, the group generated thousands of articles containing misinformation, inserted them into an AI training dataset, and ran general LLM queries to see how often the misinformation appeared.

Prior research and anecdotal evidence have shown that the answers given by LLMs such as ChatGPT are not always correct and, in fact, are sometimes wildly off-base. Prior research has also shown that misinformation planted intentionally on well-known websites can show up in generalized chatbot queries. In this new study, the research team wanted to know how easy or difficult it might be for malicious actors to poison LLM responses.

To find out, the researchers used ChatGPT to generate 150,000 medical documents containing incorrect, outdated and untrue data. They then added these generated documents to a test version of an AI medical training dataset and trained several LLMs on that test version. Finally, they had the LLMs generate answers to 5,400 medical queries, which were then reviewed by human experts looking for examples of tainted data.
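As a rough illustration of that setup (not the authors' actual pipeline), the sketch below mixes synthetic misinformation documents into a clean training corpus at a chosen poisoning rate; the poison_corpus helper, the corpus contents and the example numbers are all hypothetical.

import random

def poison_corpus(clean_docs, poisoned_docs, rate, seed=0):
    """Return a training corpus in which roughly `rate` of the
    documents come from the poisoned pool (illustrative only)."""
    rng = random.Random(seed)
    n_total = len(clean_docs)
    n_poison = max(1, round(n_total * rate))
    corpus = clean_docs[: n_total - n_poison] + rng.sample(poisoned_docs, n_poison)
    rng.shuffle(corpus)
    return corpus

# Hypothetical example: a tiny stand-in corpus poisoned at 0.5%.
clean = [f"accurate medical document {i}" for i in range(10_000)]
poisoned = [f"fabricated medical claim {i}" for i in range(150)]
training_corpus = poison_corpus(clean, poisoned, rate=0.005)
print(sum("fabricated" in d for d in training_corpus), "poisoned documents in corpus")

In the study's version of this idea, the compromised corpus is then used to train the models before the 5,400-query evaluation described above.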

The research team found that after replacing just 0.5% of the data in the training dataset with tainted documents, all of the test models generated more medically inaccurate answers than they had prior to training on the compromised dataset. As one example, they found that all of the LLMs reported that the effectiveness of COVID-19 vaccines has not been proven. Most of them also misidentified the purpose of several common medications.

The team also found that lowering the share of tainted documents in the test dataset to just 0.01% still resulted in 10% of the answers given by the LLMs containing incorrect data (and dropping it to 0.001% still led to 7% of the answers being incorrect), suggesting that only a handful of such documents posted on real-world websites is enough to skew the answers given by LLMs.
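To put those fractions in perspective, a back-of-the-envelope calculation with a hypothetical corpus size (not a figure from the study) shows how few documents each poisoning rate corresponds to:

# Hypothetical corpus size, used only for illustration.
corpus_size = 30_000_000

for rate in (0.005, 0.0001, 0.00001):   # 0.5%, 0.01%, 0.001%
    n_docs = corpus_size * rate
    print(f"{rate:.3%} of {corpus_size:,} documents = {n_docs:,.0f} poisoned documents")

Even at the smallest rate tested, the absolute number of documents an attacker would need to publish is modest compared with the size of a web-scale training corpus.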

The team followed up by writing an algorithm able to identify medical facts in LLM output and then used cross-referencing to validate that data, but they note that there is no practical way to detect and remove misinformation from public datasets.
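A minimal sketch of that kind of post-hoc cross-referencing, assuming a curated set of vetted medical claims to check against (the claim extraction and the reference set below are placeholder stand-ins, not the authors' algorithm):

import re

# Hypothetical reference set of vetted claims; a real system would
# cross-reference a trusted biomedical knowledge source instead.
VERIFIED_CLAIMS = {
    "covid-19 vaccines are effective",
    "metformin treats type 2 diabetes",
}

def extract_claims(answer: str):
    """Naive stand-in for medical-phrase extraction: one claim per sentence."""
    return [s.strip().lower() for s in re.split(r"[.!?]", answer) if s.strip()]

def flag_unverified(answer: str):
    """Return the sentences that do not match any vetted claim."""
    return [c for c in extract_claims(answer) if c not in VERIFIED_CLAIMS]

print(flag_unverified("COVID-19 vaccines are effective. Metformin cures influenza."))
# -> ['metformin cures influenza']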

More information:
Daniel Alexander Alber et al, Medical large language models are vulnerable to data-poisoning attacks, Nature Medicine (2025). DOI: 10.1038/s41591-024-03445-1

© 2025 Science X Network

Citation:
Test of ‘poisoned dataset’ shows vulnerability of LLMs to medical misinformation (2025, January 11)
retrieved 11 January 2025
from https://medicalxpress.com/news/2025-01-poisoned-dataset-vulnerability-llms-medical.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
