(Bloomberg) — The World Health Organization is entering the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users' facial expressions, it doesn't always know what it's talking about.
SARAH (Smart AI Resource Assistant for Health) is a virtual health worker available 24/7 in eight languages to explain topics such as mental health, smoking, and healthy eating. It is part of the WHO's campaign to educate people and to find technologies that can fill gaps as the world faces a shortage of health workers.
The WHO warns on its website that this early prototype, introduced on April 2, provides responses that are “not necessarily accurate.” Some of SARAH's AI training is years behind the latest data, and the bot sometimes produces bizarre answers, known as hallucinations, that risk spreading public health misinformation.
SARAH does not have diagnostic features like WebMD or Google. In fact, the bot is programmed not to talk about anything outside the WHO's scope, such as questions about specific drugs. As a result, SARAH frequently directs people to the WHO website or tells users to “consult your health care provider.”
“There's a lack of depth,” said Ramin Javan, a radiologist and researcher at George Washington University. “I think it's just a step,” he said.
The WHO said SARAH aims to work with researchers and governments to provide accurate public health statistics and to suggest basic steps for healthier lives. The agency is seeking advice on how to improve the bot and how it could be used in health emergencies, but it emphasizes that the AI assistant is still in development.
“These technologies are not yet ready to replace professional interaction or receiving medical advice from a trained clinician or health care provider,” said Alain Labrique, the WHO's director of digital health and innovation.
SARAH was trained on OpenAI's ChatGPT 3.5 using data up to September 2021, so the bot doesn't have up-to-date information on medical advisories or news events. For example, when asked whether the U.S. Food and Drug Administration had approved the Alzheimer's drug lecanemab, SARAH said the drug was still in clinical trials, when in fact it was approved as an early-stage treatment in January 2023.
Even the WHO's own data can trip up SARAH. When asked whether the number of deaths from hepatitis was increasing, the bot could not immediately provide details from a recent WHO report, instead urging people to check the WHO website for the latest statistics. According to the agency, this is because the bot was trained on ChatGPT 3.5.
The AI bot can also draw blanks. Javan asked SARAH where to get a mammogram in Washington, D.C., and didn't receive an answer.
That's nothing unusual at this early stage of AI development. In a study last year that looked at how ChatGPT answered 284 medical questions, researchers at Vanderbilt University Medical Center found that while the answers were correct most of the time, there were multiple cases in which the chatbot was “so shockingly wrong.”
Safety and privacy concerns
To imitate empathy during question sessions, SARAH accesses users' computer cameras and stores their facial expressions for 30 seconds before deleting the recording, said Jamie Guerra, the WHO's director of communications. Each visit is anonymous, and while users can choose to share their questions with the WHO through surveys to improve the experience, Guerra said the data collected is randomized to protect users and is not associated with any IP address or individual.
Still, using open-source data like GPT carries its own risks, as it is often targeted by cybercrime, said Jingquan Li, a public health and IT researcher at Hofstra University. Users who access SARAH over Wi-Fi could be vulnerable to malware attacks and video-camera hacking. Guerra said attacks attempting to access the data should not be a concern because the sessions are anonymous.
Government partners and researchers also do not have regular access to the data, including questions that could help track health patterns, unless they request the voluntary survey data. That means SARAH is not the most accurate tool for, say, predicting the next influenza epidemic, Guerra said.
SARAH is a continuation of Florence, a 2021 WHO virtual health worker project that provided basic information on Covid-19 and tobacco. New Zealand-based Soul Machines Ltd. built the avatars for both projects. Although Soul Machines does not have access to SARAH's data, CEO Greg Cross said in a statement that the company uses GPT data to improve results and experiences. Earlier this year, the WHO released ethics guidance for government partners on health-related AI models, including promoting data transparency and protecting safety.
Florence appeared to be a young non-white woman, while SARAH appears white. Labrique said changing or updating the avatar's appearance is not out of the question, and future versions may allow users to choose how the avatar looks.
As for SARAH's gender, the bot says: “I'm a digital health promoter chatbot, so I don't have a gender, and I don't use personal pronouns. My purpose is to support everyone's healthy lifestyle. Do you have any questions about quitting smoking, drinking less, or improving your overall health?”
–With assistance from Rachel Metz and Jonathan Roeder.
©2024 Bloomberg LP