WASHINGTON — QUESTION:
Are social media platforms doing anything to actively dispel misinformation about COVID-19?
ANSWER:
Yes.
SOURCES:
Joint industry statement
Facebook/Instagram/WhatsApp
Twitter
YouTube
Reddit
PROCESS:
A Pew Research Center survey found that about 70% of Americans reported that they had searched online for information about the coronavirus. Forty percent of respondents said they had shared or posted information about the outbreak on social media.
With so many people plugged in, and with news changing by the hour, we're all more susceptible to bad information.
Our Verify team sought to find out what social media platforms are doing to stop the spread of misinformation.
Several leaders in the tech industry put out a joint statement on March 16 that said they were working closely to combat "fraud and misinformation about the virus."
Our Verify researchers took a closer look at some of their policies.
Facebook and Instagram introduced educational pop-ups at the top of users' news feeds. Facebook also added a special "COVID-19 Information Center," which provides real-time updates from national health authorities. You can access it under the "Explore" tab.
The company announced on April 16 that it will send you a message if you've liked, reacted to or commented on a post containing harmful misinformation about the virus that has since been removed. Facebook said users will start seeing those messages in the coming weeks.
"We regularly update the claims that we remove based on guidance from the WHO and other health authorities," Facebook wrote. "For example, we recently started removing claims that physical distancing doesn’t help prevent the spread of the coronavirus. We’ve also banned ads and commerce listings that imply a product guarantees a cure or prevents people from contracting COVID-19."
Facebook also uses a network of third-party fact-checkers to sift through conspiracy theories and memes that spread false information.
On Twitter, your tweet may be removed if it denies official health guidance or endorses a dangerous treatment. For example, the company says a tweet could be flagged for claiming that "social distancing is not effective" or that "drinking bleach and ingesting colloidal silver will cure COVID-19."
Twitter is also banning tweets that attack specific groups or nationalities or that claim certain groups are more susceptible to the coronavirus.
"To help us proactively identify rule-breaking content before it's reported, our systems learn from past decisions by our review teams, so over time, the technology is able to help us rank content or challenge accounts automatically,"
Twitter wrote online. "For content that requires additional context, such as misleading information around COVID-19, our teams will continue to review those reports manually."
For videos that include coronavirus content, YouTube is continuing to provide information panels that link to health authorities such as the WHO and the CDC.
YouTube is also removing flagged videos that violate its policies, including those that "discourage people from seeking medical treatment or claim harmful substances have health benefits."
Reddit is scheduling "ask me anything" sessions and panels with medical experts. Most of the sessions take place in the subreddit r/coronavirus.
"To further help ensure that authoritative content is what redditors see first when they are looking for conversations about coronavirus, Reddit may also apply a quarantine to communities that contains hoax or misinformation content," Reddit wrote. "A quarantine will remove the community from search results, warn the user that it may contain misinformation, and require an explicit opt-in."
So yes, we can verify that social media platforms are actively trying to dispel misinformation about COVID-19.
RELATED: VERIFY: Fake message about helicopters spraying disinfectant to kill coronavirus goes global