For years, Facebook and other social media companies have erred on the side of lenience in policing their sites — allowing most posts with false information to stay up, as long as they came from a genuine human and not a bot or a nefarious actor.
The latest: Now, the companies are considering a fundamental shift with profound social and political implications: deciding what is true and what is false.
The big picture: The new approach, if implemented, would not affect every lie or misleading post. It would be meant only to rein in manipulated media — everything from sophisticated, AI-enabled video or audio deepfakes to super-basic video edits like a much-circulated, slowed-down clip of Nancy Pelosi that surfaced in May.
Still, it would be a significant concession to critics who say the companies have a responsibility to do much more to keep harmful false information from spreading unfiltered.
It would also be an inflection point in the companies’ approach to free speech, whose guiding assumption has thus far been that more speech is better and that the truth will bubble up on its own.
Disclosure: The article notes: “In June, the Carnegie Endowment for International Peace gathered experts and representatives from several big social media companies in San Francisco to focus on the threat to the 2020 election.” I participated in that meeting.
I opine on these issues in “Deep Fakes, Bots, and Siloed Justices: American Election Law in a Post-Truth World.”