Why Purging Social Media of Extremist Speech Might Not Make Us Safer
It’s been a wild week on social media. Twitter and Facebook permanently suspended President Trump following the Capitol riots; Twitter removed 70,000 accounts for allegedly promoting violent and conspiratorial content; Facebook mistakenly locked former congressman Ron Paul out of his account, an incident that demonstrates the perils of overly broad moderation; Apple and Amazon moved to eliminate Parler from the former’s app store and the latter’s servers, effectively destroying the alternative platform used by many Trump supporters.
The social media companies’ treatment of both Trump and Parler has prompted furious criticism from conservative pundits and politicians. “Big Tech wants to control what we see, how we behave, and what we say,” said Rep. Matt Gaetz (R–Fla.) in a statement entirely characteristic of the right’s response.
Each of these moderation decisions can be defended on its own terms: Trump had repeatedly violated Twitter’s terms of service, and spokespersons for the company had frequently suggested that he was receiving leniency only because of his status as president. The Capitol riots, in which Trump’s inflammatory rhetoric likely played some role, marked a new low for his presidency, and the platforms are understandably worried that future calls to reverse the outcome of the 2020 election could inspire further violence. Moreover, Twitter, Facebook, Amazon, and Apple are all private companies, and thus they have broad latitude to curtail speech, even in cases where doing so is not wise or well-founded.
It’s also fair to criticize the platforms for decisions that appeared hypocritical. While Parler has certainly played host to extremist speech, so have Twitter and Facebook—but Apple and Amazon didn’t punish either of them, which makes it seem like Big Tech is enforcing its rules selectively.
Article from Latest – Reason.com