The FTC’s Probe Into ‘Potentially Illegal’ Content Moderation Is a Blatant Assault on the First Amendment
Today is the deadline for public comments regarding a “public inquiry” by the Federal Trade Commission (FTC) into the “potentially illegal” content moderation practices of social media platforms. As many of those comments note, that investigation impinges on the editorial discretion that the U.S. Supreme Court has repeatedly said is protected by the First Amendment.
“Tech firms should not be bullying their users,” FTC Chairman Andrew Ferguson said when the agency launched its probe in February. “This inquiry will help the FTC better understand how these firms may have violated the law by silencing and intimidating Americans for speaking their minds.”
Ferguson touts his investigation as a blow against “the tyranny of Big Tech” and “an important step forward in restoring free speech.” His chief complaint is that “Big Tech censorship” discriminates against Republicans and conservatives. But even if that were true, there would be nothing inherently illegal about it.
The FTC suggests that social media companies may be engaging in “unfair or deceptive acts or practices,” which are prohibited by Section 5 of the Federal Trade Commission Act. To substantiate that claim, the agency asked for examples of deviations from platforms’ “policies” or other “public-facing representations” concerning “how they would regulate, censor, or moderate users’ conduct.” It wanted to know whether the platforms had applied those rules faithfully and consistently, whether they had revised their standards, and whether they had notified users of those changes.
If platforms fall short on any of those counts, the FTC implies, they are violating federal law. But that position contradicts both the agency’s prior understanding of its statutory authority and the Supreme Court’s understanding of the First Amendment.
The FTC’s authority under Section 5 “does not, and constitutionally cannot, extend to penalizing social media platforms for how they choose to moderate user content,” Ashkhen Kazaryan, a senior legal fellow at the Future of Free Speech, argues in a comment that the organization submitted on Tuesday. “Platforms’ content moderation policies, even if controversial or unevenly enforced, do not fall within the scope of deception or unfairness as defined by longstanding FTC precedent or constitutional doctrine. Content moderation practices, whether they involve the removal of misinformation, the enforcement of hate speech policies, or the decision to abstain from moderating content users don’t want to see, do not constitute the type of economic or tangible harm the unfairness standard was designed to address. While such policies may be the subject of vigorous public debate, they do not justify FTC intervention.”
The FTC says “an act or practice is ‘unfair’ if it ‘causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition’” (emphasis in the original). “In most cases,” the FTC explains, “a substantial injury involves monetary harm, as when sellers coerce consumers into purchasing unwanted goods or services or when consumers buy defective goods or services on credit but are unable to assert against the creditor claims or defenses arising from the transaction. Unwarranted health and safety risks may also support a finding of unfairness.”
It is not obvious how that standard applies to, say, a Facebook user who complains that the platform erroneously or unfairly deemed one of his posts misleading. Nor does the FTC’s long-established definition of “deception” easily fit the “Big Tech censorship” to which Ferguson objects.
The FTC says “deception” requires “a representation, omission or practice that is likely to mislead the consumer.” It mentions several examples of “practices that have been found misleading or deceptive,” including “false oral or written representations, misleading price claims, sales of hazardous or systematically defective products or services without adequate disclosures, failure to disclose information regarding