Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information.
The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.
But some of the summaries, which appear at the top of search results, served up inaccurate health information.
In one case that experts described as “dangerous” and “alarming”, Google provided bogus information about crucial liver function tests that could leave people with serious liver disease wrongly thinking they were healthy.
Typing “what is the normal range for liver blood tests” into Google served up masses of numbers with little context and no accounting for a patient’s nationality, sex, ethnicity or age, the Guardian found.
What Google’s AI Overviews presented as normal could vary drastically from what was actually considered normal, experts said. The summaries could lead seriously ill patients to wrongly believe they had a normal test result and skip follow-up appointments.
Following the investigation, the company removed AI Overviews for the search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.
A Google spokesperson said: “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
Vanessa Hebditch, the director of communications and policy at the British Liver Trust, a liver health charity, said: “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances.
“However, if the question is asked in a different way, a potentially misleading AI Overview may still be given and we remain concerned other AI‑produced health information can be inaccurate and confusing.”
The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range”, prompted AI Overviews. That was a big worry, Hebditch said.
“A liver function test or LFT is a collection of different blood tests. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers.
“But the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.
“In addition, the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. This false reassurance could be very harmful.”
Google, which has a 91% share of the global search engine market, said it was reviewing the new examples provided to it by the Guardian.
Hebditch said: “Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it’s not tackling the bigger issue of AI Overviews for health.”
Sue Farrington, the chair of the Patient Information Forum, which promotes evidence-based health information to patients, the public and healthcare professionals, welcomed the removal of the summaries but said she still had concerns.
“This is a good result but it is only the very first step in what is needed to maintain trust in Google’s health-related search results. There are still too many examples out there of Google AI Overviews giving people inaccurate health information.”
Millions of adults worldwide already struggle to access trusted health information, Farrington said. “That’s why it is so important that Google signposts people to robust, researched health information and offers of care from trusted health organisations.”
AI Overviews still pop up for other examples the Guardian originally highlighted to Google. They include summaries of information about cancer and mental health that experts described as “completely wrong” and “really dangerous”.
Asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources, and informed people when it was important to seek out expert advice.
A spokesperson said: “Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high quality websites.”
Victor Tangermann, a senior editor at the technology website Futurism, said the results of the Guardian’s investigation showed Google had work to do “to ensure that its AI tool isn’t dispensing dangerous health misinformation”.
Google said AI Overviews only show up on queries where it has high confidence in the quality of the responses. The company constantly measures and reviews the quality of its summaries across many different categories of information, it added.
In an article for Search Engine Journal, senior writer Matt Southern said: “AI Overviews appear above ranked results. When the topic is health, errors carry more weight.”