When someone searches YouTube for a health-related term such as “COVID-19,” most results will now prominently feature content from government agencies and health care organizations that the platform trusts to provide reliable information. The World Health Organization, the Mayo Clinic and the Children’s Hospital of Philadelphia are among the channels the platform has deemed “credible” sources of health information, based on guidelines developed by a panel of experts.
YouTube says the new approach is the first step in a larger strategy. The goal is to provide people with a go-to source of reliable answers to health questions in the same way they currently use the site to “learn how to fix my fridge,” says Garth Graham, director and global head of health care and public health at Google and YouTube and an associate professor at the University of Connecticut School of Medicine. Many misinformation and public health experts applaud the undertaking, but some worry the recent changes fail to fully address the complexity of health behavior and the contested nature of medical knowledge.
For example, the new policy does not affect the platform’s ranking algorithm, which has become notorious as a source of medical misinformation despite previous efforts to rein in conspiracy theories on the site. YouTube helped spread the rumor that hydroxychloroquine is an effective treatment for COVID-19 and hosted the discredited documentary Plandemic, which suggested the pandemic was planned, that vaccines do not work and that masks “activate” the coronavirus—before YouTube took it down. Instead, the videos from credible sources appear in a special section, separated from the ordinary search results selected by the ranking algorithm.
The changes also fail to tackle YouTube’s role in the multiplatform misinformation ecosystem, says Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard University. For instance, many viewers access misinformation on YouTube through links or videos posted in Facebook groups or on Twitter.
YouTube has banned content that contradicts COVID-19 information from the WHO or local health authorities since May 2020, and it has removed more than one million videos for violating its policy on COVID-19 medical misinformation, according to a company spokesperson. The new changes focus not on removing misinformation but on emphasizing reliable sources. They follow recommendations described in a peer-reviewed discussion paper that was written by a panel of six experts and published by the National Academy of Medicine last month. YouTube provided $100,000 to fund the project.
The authors determined that for a source of health information, “credibility” rests on three foundational principles. First, the source should provide content based on the best available scientific evidence. Second, it should promote objectivity by taking steps to reduce conflicts of interest and bias. Finally, the source needs to disclose its own limitations and errors to ensure transparency and accountability.
YouTube is using these findings as criteria to identify credible government and health care organizations. Videos produced by credible outlets appear in a special section that the company currently calls a “health content shelf,” which appears at or near the top of results for a few hundred health-related queries. (As the platform tests this feature, the special section will only appear for certain users some of the time.) These channels’ videos are also labeled with messages that briefly explain why the source is credible. In practice, that means channels owned by select government agencies, accredited educational and medical institutions, and academic and medical journals are competing for new digital real estate on the world’s second-busiest Web site.
These changes mark a “massive development in how the company envisions itself,” Donovan says. They suggest YouTube is seeking to become “an important source of information” rather than the digital equivalent to “the free bin at a record store.” Some experts, however, worry that identifying and elevating health care organizations and government agencies will not have the intended effect of encouraging people to view more accurate information.
“I’m just not sure that people are tuning in to YouTube to see more scientists or people who have been determined to be credible or authoritative,” says Corey Basch, a public health researcher at William Paterson University. Her studies of YouTube and TikTok reveal that videos produced by official organizations tend to be viewed far less often than content from creators who have earned the trust of communities on the platform. The move also does little to address “not unfounded” mistrust in many of the institutions that have been elevated by the change, she says. Basch thinks the problems run far deeper than access to facts. “Sometimes we miss the point that human emotion and behavior is often rooted in social and emotional factors versus cognitive ones,” she says. Graham acknowledges that “people trust sources for different reasons” and that information that does not originate from a “culturally relevant” source is unlikely to lead to a change in behavior. He says YouTube has plans to work with independent creators who make medical content that is engaging and reliable, but he would not discuss the plan in any detail.
Sven Bernecker is a philosopher at the University of California, Irvine, who researches fake news as well as the ways medical knowledge is accepted and put into practice. He says the discussion paper’s recommendations, which favor well-established sources, are likely to reinforce already existing structures of power and influence in the medical system. Furthermore, he suggests the paper’s emphasis on scientific consensus masks disagreement about what conclusions should be drawn from research and clinical experience. Bernecker explains that its recommendations play into the common narrative that science is an objective means of identifying a single truth.
Even a member of the group that prepared the recommendations suggests that information agreed on by scientific consensus does not necessarily represent objective truth. Panel member Wen-Ying Sylvia Chou, a program director at the National Cancer Institute, says the authors “struggled” with the concept of consensus among medical experts—especially knowing that, historically, powerful institutions have excluded certain marginalized groups and ideas from these discussions. That is one reason why they concluded that credibility should be determined holistically rather than according to a quantitative rubric. “A formal numerical threshold can’t be implemented [because it is not] feasible,” Chou says. Each of the three principles defined in the discussion paper is accompanied by several attributes the authors developed to help with assessing a source. For example, one of the eight attributes it gives for the “science-based” principle is “synthesizes information from multiple sources, rather than a single source.”
It is unclear if YouTube did, in fact, create a strict numerical rubric to determine which organizations would be deemed credible. In a statement to Scientific American, a company spokesperson said the process began by using the discussion paper’s guidelines to compile lists of organizations. “We are starting with organizations with preexisting, standardized vetting mechanisms, such as health care organizations, educational institutions, public health departments and government organizations,” the spokesperson said. The company checked to see which institutions had YouTube accounts before confirming their choices with the American Public Health Association.
“This whole world of health misinformation has been sobering to observe,” says Chou, whose background is in public health. “A lot of our assumptions about communication and human behavior are being challenged. There’s a lot more research that needs to be done.”