“Lawmakers’ latest idea to fix Facebook: Regulate the algorithm”

Will Oremus for WaPo:

On Facebook, you decide who to befriend, which pages to follow, which groups to join. But once you’ve done that, it’s Facebook that decides which of their posts you see each time you open your feed — and which you don’t.

The software that makes those decisions for each user, based on a secret ranking formula devised by Facebook that includes more than 10,000 factors, is commonly referred to as “the news feed algorithm,” or sometimes just “the algorithm.” On a social network with nearly 3 billion users, that algorithm arguably has more influence over what people read, watch and share online than any government or media mogul.
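To make the mechanics concrete: a personalized ranking system of this sort boils down to scoring every candidate post against per-user signals and sorting by that score. The sketch below is purely illustrative; the Post fields, feature names, and weights are invented stand-ins, since Facebook's actual formula and its 10,000-plus factors are secret.

```python
# Illustrative sketch of a personalized feed-ranking step.
# NOT Facebook's actual algorithm: the Post fields, features,
# and weights here are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    age_hours: float        # how long ago the post was published
    predicted_like: float   # model-estimated probability the user engages

def score(post: Post, affinity: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum over a handful of ranking signals for one (user, post) pair."""
    features = {
        "affinity": affinity.get(post.author_id, 0.0),  # how often the user interacts with this author
        "predicted_like": post.predicted_like,
        "recency": 1.0 / (1.0 + post.age_hours),        # newer posts score higher
    }
    return sum(weights[name] * value for name, value in features.items())

def rank_feed(posts: list[Post], affinity: dict[str, float], weights: dict[str, float]) -> list[Post]:
    """Return candidate posts ordered by descending personalized score."""
    return sorted(posts, key=lambda p: score(p, affinity, weights), reverse=True)

if __name__ == "__main__":
    weights = {"affinity": 2.0, "predicted_like": 5.0, "recency": 1.0}
    affinity = {"close_friend": 0.9, "old_classmate": 0.1}
    posts = [
        Post("close_friend", age_hours=2.0, predicted_like=0.4),   # wedding photos
        Post("old_classmate", age_hours=1.0, predicted_like=0.8),  # divisive meme
    ]
    for p in rank_feed(posts, affinity, weights):
        print(p.author_id)  # the meme outranks the wedding photos
```

In this toy example the high engagement prediction on the meme outweighs the user's much stronger affinity for the close friend, which is the dynamic critics object to.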

It’s the invisible hand that helps to make sure you see your close friend’s wedding photos at the top of your feed, rather than a forgotten high school classmate’s post about what they had for lunch today. But because Facebook’s primary goal is to grab and hold your attention, critics say, it’s also prone to feed you that high school classmate’s post of a meme that demonizes people you disagree with, rather than, say, a balanced news story — or an engrossing conspiracy theory rather than a dry, scientific debunking….

Forcing tech companies to be more careful about what they amplify might sound straightforward. But it poses a challenge because the ranking algorithms themselves, while sophisticated, generally aren’t yet smart enough to fully grasp the message of every post. So the threat of being sued over even a few narrow categories of illegal content could force platforms to adjust their systems at a more fundamental level. For instance, they might find it prudent to build in human oversight of what gets amplified, or perhaps move away from automatically personalized feeds altogether.

To some critics, that would be a win. Roddy Lindsay, a former Facebook data scientist who worked on the company’s algorithms, argued in a New York Times op-ed this week that Section 230 reform should go further: he proposes eliminating the liability shield for any content that social platforms amplify via personalized recommendation software. The idea echoes a suggestion from whistleblower Frances Haugen. Both Lindsay and Haugen say companies such as Facebook would respond by abandoning their recommendation algorithms and reverting to feeds that simply show users every post from the people they follow.
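For contrast, the “show everything, newest first” feed Lindsay and Haugen describe drops the scoring step entirely. A minimal sketch, reusing the hypothetical Post type from the earlier example:

```python
def chronological_feed(posts: list[Post]) -> list[Post]:
    """Reverse-chronological feed: every post from followed accounts,
    newest first, with no engagement prediction or per-user weighting."""
    return sorted(posts, key=lambda p: p.age_hours)
```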

Nick Clegg, Facebook’s vice president for global affairs and communications, argued against that idea Sunday on ABC’s “This Week.”

Daphne Keller, who directs the Program on Platform Regulation at Stanford University’s Cyber Policy Center, has thrown cold water on the idea of regulating what types of speech platforms can amplify, arguing that bills such as Reps. Anna Eshoo and Tom Malinowski’s would probably violate the First Amendment.

“Every time a court has looked at an attempt to limit the distribution of particular kinds of speech, they’ve said, ‘This is exactly the same as if we had banned that speech outright. We recognize no distinction,’” Keller said.

Proposals to limit algorithmic amplification altogether, such as Lindsay’s, might fare better than those that target specific categories of content, Keller added, but then social media companies might argue that their algorithms are protected under their First Amendment right to set editorial policy.