The YouTube doctor: Why your AI health search is a prescription for misinformation
What you need to know:
- YouTube tops the list in AI citations for health queries, yet it only ranks 11th in organic search results.
- This shows that AI often prioritises video content even when more authoritative, easier-to-find sources exist.
You may be guilty of this: searching your symptoms online before seeking professional medical advice.
If you use Google, you have likely seen its AI Overview feature. Introduced in May 2024, it appears above the search results with a summary of the query you key in, along with links to the sources it drew the information from.
A new analysis by the AI-powered SEO platform SE Ranking reveals a surprising trend for medical searches: the information in these summaries is most often sourced from YouTube.
Researchers used Germany as a case study because of its strict healthcare regulations, reinforced by European Union directives and safety standards.
They found that most AI Overview citations for health queries come from YouTube, while only one per cent of citations link to peer-reviewed academic journals, considered the gold standard for medical information.
“YouTube tops the list in AI citations for health queries, yet it only ranks 11th in organic search results. This shows that AI often prioritises video content even when more authoritative, easier-to-find sources exist,” the analysis states.
It adds: “If AI systems rely heavily on non-medical or non-authoritative sources even in such an environment, it suggests the issue may extend beyond any single country.”
Dr Gideon Mutai, a medical officer at Gilgil Sub-county Hospital, told Healthy Nation that the danger of searching your symptoms online is that you may end up with misleading results.
“If a medic is the one Googling, they should quickly tell what's true from misleading information,” he said.
He noted that online results are often not localised for Kenyans’ needs and warned: “AI hallucinates a lot and people don’t stand a chance if they challenge it.”
Allan Cheboi, Data and Digital Technology lead at Build up and a graduate student of Artificial Intelligence, offered a theory that could explain why YouTube tops the citations, with the caveat that only the system developers would know for certain.
“One explanation could be that most medical content is on YouTube. Probably when Google was training its data, a lot of content was obtained from YouTube, which is why it ranks higher in searches,” he said.
“I would have expected it to pull more peer-reviewed academic articles because that is the gold standard.”
Cheboi warned that such summaries are dangerous because no one peer-reviews YouTube: content is uploaded without deliberate fact-checking.
“Some users are looking for clicks, so they put up emotive information so that it attracts many people to the content,” he said.
Victor Ndede, Technology and Human Rights manager at Amnesty International, describes AI-driven medical sourcing as a matter of “deep concern” and somewhat “skewed”.
“It changes our architecture of information trust.”
Ndede explained that an AI Overview could miss cautionary details found in a peer-reviewed medical journal.
“If you pull a YouTube video as a primary source, then the AI is prioritising a content creator. It is highly unlikely that you will find medical doctors,” he said.
He added that most content creators produce what satisfies an algorithm rather than what is factual.
For the average user, Ndede worries, the line between a certified clinician and a wellness content creator is thin.
“In the context of human rights, all of us have the right to the highest attainable standards of healthcare. Looking at this model, this is actually a direct threat to the highest attainable standards,” he said.
“If AI models are sourcing data from a platform where misinformation is easy to go viral, and it cannot be debunked easily…we are essentially automating the spread of medical malpractice,” he added.
He highlighted a digital literacy gap: people rarely check sources and don’t know how to verify digital information. There’s also an accountability gap—if someone is harmed by an AI summary leading to YouTube, assigning responsibility is difficult.
“Some could argue it’s the AI developer, others the platform, or the content creator. There’s a grey area,” he said.
The problem extends beyond YouTube to other social media platforms with misleading medical content.
“AI hallucination gives people a false confidence. If you look up symptoms, you may self-diagnose incorrectly,” Ndede cautioned.
“We are moving to a world where the truth is decided for you by an AI tool, or a large language model is indexing for you the kind of videos you should watch,” he added.
He cautioned that people need to understand that not everyone who creates content is qualified. “This is why we have professionals. People should consult professionals before acting on AI summaries.”
Nation reached out to a Google spokesperson, who said the company is working to ensure that AI Overviews are of high quality by integrating Google’s core web ranking systems into the feature. The spokesperson said these systems are designed to surface reliable and relevant information.
“The implication that AI Overviews provide unreliable information is refuted by the report’s own data. AI Overviews cite expert YouTube content from hospitals and clinics,” the spokesperson said.
They added that YouTube has, over the past five years, invested heavily in increasing the volume of high-quality content from trusted sources to ensure users are connected with credible information when searching for health-related content.