I wish I could forget the morning when I saw a picture of a dead child on my Facebook news feed.
The image was stark and brutal, posted to raise awareness about a war. For the rest of the day, I was stunned. Why had someone chosen to post such an image, and more importantly, why had Facebook not taken it down?
The same thing occurred more recently, when someone I know shared an image with detailed instructions, telling people how to kill themselves if they did not support U.S. President Donald Trump. Whether the intent was humour or a political statement, the post crossed the bounds of common decency.
Why would anyone post this image and text, and why didn’t Facebook take it down?
These posts are not the only ones that have made me wonder about the moderation policies on social media sites. There are times when discredited and dangerous medical advice is dispensed, or when hatred of an identifiable group is encouraged.
Before the days of social media, if such content had been shared through an internet forum or online discussion group, a moderator would have deleted it. Social media sites have traditionally taken a more hands-off approach.
Things are beginning to change. Some of the social media giants are now paying attention to the content posted on their sites.
Last week, Facebook announced it would crack down on vaccine misinformation on its platform. This announcement follows earlier moves by Facebook to exercise more oversight and to add fact-check tags to posts promoting inaccurate or misleading content. Other large social media platforms, including Twitter, Instagram and YouTube, are also doing more to monitor content.
The ability to monitor social media content has been in place for years. The major platforms have traditionally had community standards in place and have been able to remove posts or ban users for violating these standards. Why were these tools not used in the past?
Some will argue this latest move by Facebook to target vaccine misinformation is nothing more than a calculated censorship scheme — an effort to silence all speech other than that conforming to a specific ideology. If anti-vaccine posts are prohibited today, will content about religion or politics be banned tomorrow?
Such speculation fails to understand the nature of censorship or social media platforms. Censorship is a government’s attempt to control speech and other content. Facebook, YouTube, Twitter and Instagram are private businesses, not government entities.
A decision by social media platforms to flag or block certain content does not necessarily silence any voices. The big social media platforms are not the only options for social media users.
There are some U.S.-based platforms, modelled after Facebook, YouTube and Twitter, which allow registered users to post with little or no moderation. These unfiltered social media platforms are attracting a certain segment of the public, most notably the alt-right segment, extremists and those who are posting content that would be pulled down from the dominant social media platforms.
At present, these are small players in the world of social media, but if there is enough of a demand for an unfiltered platform, they could eventually become the dominant social media sites.
The question is whether the majority of social media users want an unfiltered platform, regardless of what appears in their feeds, or whether they instead want a degree of moderation in their social media experience.
John Arendt is the editor of the Summerland Review.