Fakes and phonies: how Facebook plans to stop fake news


Fake news is a problem. There’s no doubt about that. Only 4% of UK adults are able to correctly identify whether a news story is true or fake. What really brought fake news to the forefront, though, was its power during the US election, where some of the most shared stories turned out to be completely false.

So, what exactly is fake news?

Let’s start with the basic definition: fake news is an umbrella term for any fabricated story posted online. According to the BBC, there are two types of fake news. The first is false stories deliberately published to make people believe something untrue, or to get them to visit a website (also known as ‘click-bait’); these are outright lies, written in the knowledge that they are false. The second is stories that have some amount of truth to them but aren’t completely accurate, usually because the writer hasn’t checked the facts or the sources before writing the story, or has exaggerated parts of it.

Is Facebook even a news site?

Social media has been crucial to the spread of fake news. The effect of fake news on the US election proved that false information has repercussions far beyond the web and that social media is no longer a neutral platform. All social media platforms have been affected, but as the world’s largest social network, it’s no surprise that Facebook came under fire for not doing more to stop the spread of fake news on its platform. Facebook has 1.8 billion active users worldwide, with 30 million in the UK. It drives a huge amount of news consumption: according to Reuters, half of online users say they use social media as a source of news, and 44% use Facebook to find, read, watch, and comment on the news each week. Yet Facebook doesn’t employ any journalists and has denied claims that it is a media company.

“Not it”

After a large number of stories widely shared on Facebook proved to be inaccurate, such as reports that the Pope had endorsed Trump for election, the company began to take action. Users are now encouraged to report any information they think is false through a dedicated tool on the site. Alongside this, Facebook has hired third-party fact checkers to vet stories and is trying to make sure those responsible for writing misleading articles don’t benefit financially. However, Facebook’s CEO, Mark Zuckerberg, has made it clear that Facebook will not put itself in the position of deciding what’s fake and what’s not. As The Verge puts it, Facebook is taking a “not it” approach to fake news, which isn’t surprising given how difficult it is to know which stories are fake and to control how many times they are shared. And of course, it’s in Facebook’s best interest that its 1.8 billion users keep sharing content with each other, even if that content is fake.
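
To make that reporting-and-review workflow concrete, here is a minimal sketch in Python. It is purely illustrative: the report threshold, the queue, and the function names are assumptions for the example, not details of Facebook’s actual reporting pipeline.

```python
# Illustrative sketch only: queue a story for third-party fact checkers
# once enough users have reported it. The threshold and data structures
# are assumptions, not Facebook's real reporting system.

from collections import Counter

REPORT_THRESHOLD = 100          # assumed number of user reports before review

user_reports = Counter()        # story_id -> number of user reports
review_queue = []               # story_ids awaiting fact-checker review


def report_story(story_id: str) -> None:
    """Record a user report; queue the story for review at the threshold."""
    user_reports[story_id] += 1
    if user_reports[story_id] >= REPORT_THRESHOLD and story_id not in review_queue:
        review_queue.append(story_id)


if __name__ == "__main__":
    for _ in range(REPORT_THRESHOLD):
        report_story("pope-endorsement-story")
    print(review_queue)  # ['pope-endorsement-story'] once the threshold is hit
```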

So, how does advertising feed fake news?

When it comes to fake news and ads, there are two key issues: first, fake content itself being sponsored to reach a wider audience; and second, genuine ads placed on fake news sites, which ultimately boost the publisher’s revenue and, therefore, its power.

Facebook becomes inextricably linked through its sharing culture, which accelerates the spread of misleading or outright untrue content, whether organically or as sponsored posts. But the wider problem stems from advertising on fake news sites, through what Facebook calls its Audience Network.

On-site ads: fuelling the fire

The two key players taking the heat for the spread and funding of fake news are Google and Facebook, largely because of their algorithmic advertising systems. Through online tracking, adverts placed through either Google or Facebook can follow us to any affiliated site online. If those ads happen to appear alongside a fake news article or other dubious content, that is good business for the fake news publisher in question, though hardly for the advertiser, or for Google or Facebook, whichever placed the ad. The more money these fake news sites make from hosting ads, the more they have to spend on new stories and on stirring up fresh furore and disruption. These sites are profitable precisely because they are click-bait: the more clicks they get, the more revenue they garner.

Infiltrating your News Feed

Fake news makes it onto your News Feed through shares, organic content, and paid posts. How much you see of this will depend on your own Facebook filter bubble, who you are friends with, and their sharing patterns. But it’s likely you will have seen your fair share of fake news nonetheless.

Like any viral content, fake news spreads across social feeds like wildfire, and Mark Zuckerberg still finds himself having to downplay Facebook’s impact on the result of the US election. So where does this leave the company and its stance on fake news?

Does Facebook have a plan?

The short answer is: it’s trying. But whether it’s trying hard enough is highly disputable. The long answer is far less clear, with various statements and shifting standpoints coming from the company’s head honchos, including Mr. Zuckerberg himself.

Zuckerberg started out by quite vehemently defending his company in light of the fake news frenzy that followed the US election. That stance has since softened, though reluctantly and still somewhat unclearly. Facebook’s trending topics were originally curated by a human editorial team; after that approach backfired, the company replaced the editors with algorithms to process stories and trends, and fake news items soon started appearing among the trends.

The News Feed algorithms have since been amended to push fake news further down users’ feeds. And that is where things seem to stand now, with changes still ongoing: a continual refinement of the News Feed algorithms, intended to show users more of the content they want to see and less dubious content.
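
To illustrate what “pushing fake news further down the feed” might look like in practice, here is a minimal sketch in Python of a ranking penalty applied to stories flagged as disputed. The field names, the flag, and the penalty weight are all assumptions for the example; Facebook has not published its News Feed ranking code.

```python
# Illustrative sketch only: a naive feed ranker that demotes stories
# flagged as disputed by third-party fact checkers. All field names and
# weights here are hypothetical, not Facebook's real News Feed logic.

from dataclasses import dataclass
from typing import List


@dataclass
class Story:
    title: str
    engagement_score: float          # e.g. predicted likes, shares, comments
    disputed_by_fact_checkers: bool  # set when fact checkers flag the story


DISPUTED_PENALTY = 0.2  # assumed multiplier that pushes flagged stories down


def rank_feed(stories: List[Story]) -> List[Story]:
    """Order stories by engagement, heavily demoting disputed ones."""
    def score(story: Story) -> float:
        base = story.engagement_score
        return base * DISPUTED_PENALTY if story.disputed_by_fact_checkers else base
    return sorted(stories, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Story("Pope endorses candidate", 9.5, disputed_by_fact_checkers=True),
        Story("Local council opens new library", 4.1, disputed_by_fact_checkers=False),
    ]
    for story in rank_feed(feed):
        print(story.title)
    # The flagged story drops below the genuine one despite higher engagement.
```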

So where does this lead us?

Speculation in online discussions suggests that Facebook will look to regulate its News Feed content further by asking users more detailed questions about the type of content they want to see. On this point specifically, TechCrunch has written an incredibly detailed report on where it thinks Facebook’s regulation is heading, and how this may further benefit the company’s ad targeting capabilities, for better or worse.

As for Facebook’s stance on advertising around fake news, the company recently updated its ad policies to add fake news websites to the list of sites barred from using its Audience Network adverts. The company said in a statement that it does “not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news”.
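
In practice, a policy like that comes down to publisher-level vetting before an ad is served. The snippet below is a minimal sketch assuming a simple domain blocklist; Facebook has not published how Audience Network enforcement actually works, so the domains and function names here are illustrative.

```python
# Illustrative sketch only: refuse to serve ads to publishers on a blocklist.
# The blocklist, domains, and function names are assumptions for the example,
# not details of Facebook's Audience Network.

BLOCKED_PUBLISHERS = {
    "totally-real-news.example",   # hypothetical fake news domain
    "shocking-truths.example",
}


def can_serve_ads(publisher_domain: str) -> bool:
    """Return False for publishers barred under the ad policy."""
    return publisher_domain.lower() not in BLOCKED_PUBLISHERS


if __name__ == "__main__":
    for domain in ("totally-real-news.example", "reputable-paper.example"):
        status = "serving ads" if can_serve_ads(domain) else "blocked by policy"
        print(f"{domain}: {status}")
```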

But can these policies really curb fake news?

Facebook, along with Google and other major online players, seems to think so. Or, at least, they want to be seen to think so. But the online landscape is, as it always has been, ever-changing. If they are going to truly stop the spread of fake news, they will need to stay one step ahead of whatever the fake news publishers themselves do next. They will have to act to curb it, but whether they can stop it altogether is questionable.

We are, however, at a tipping point in the fake news frenzy. Either something major will have to be done to the news and social landscape as a whole, or fake news will prove to be another online phase that settles down as quickly as it erupted. My spidey senses are telling me the latter is unlikely, though. Facebook and other key influencers will have to take some serious steps to rectify this issue before it gets even further out of control.

If you want to learn how to go about Facebook advertising the right way, get in contact and we can help you implement an effective — and definitely not fake — campaign!
