TLDR: After the Capitol riot, debate is swirling over how social media platforms moderate content. Facebook, Twitter and YouTube outsource most of the grueling work to third-party companies. Experts say machines can't detect everything, such as the nuances of hate speech and misinformation.
IN DETAIL: Following the riot at the Capitol on January 6, there is a lot of discussion about how platforms moderate content and what counts as free speech. It's a time-consuming and costly operation: Facebook alone spends billions of dollars on reviewers who evaluate thousands of pieces of content every day. While TikTok employs its own in-house content moderators, Facebook, Twitter, and YouTube outsource the majority of the moderation work to tens of thousands of third-party workers.
Because of the horrific material they see while reviewing hundreds or even thousands of posts every day, many moderators in the United States and abroad say they need better pay, better working conditions, and better mental health support.
This Is How Content Moderation on Platforms Like Facebook Is Growing
As a result, some companies are relying increasingly on algorithms in the hope of automating most of the grueling work. Experts, however, say algorithms can't detect everything, including the nuances of hate speech and misinformation. There are also alternative social networks, such as Parler and Gab, that have grown in popularity by promising minimal content moderation. That approach led Apple and Google to temporarily remove Parler from their app stores and Amazon Web Services to drop its hosting. Other platforms, such as Nextdoor and Reddit, rely almost entirely on large numbers of volunteer moderators.
Read on to learn how large the social media content moderation business has grown and the real-world consequences of platforms' decisions about what we can and can't see.