If Mark Zuckerberg won’t fix Facebook’s algorithms problem, who will?

All eyes are on Facebook’s oversight board, which is expected to decide in the next few weeks if former President Donald Trump will be allowed back on Facebook. But some critics — and at least one member — of the independent decision-making group say the board has more important responsibilities than individual content moderation decisions like banning Trump. They want it to have oversight over Facebook’s core design and algorithms.

This idea of externally regulating the algorithms that determine almost everything you see on Facebook is catching on outside of the oversight board, too. At Thursday’s hearing on misinformation and social media, several members of Congress took aim at the company’s engagement algorithms, saying they spread misinformation in order to maximize profits. Some lawmakers are currently renewing efforts to amend Section 230 — the law that largely shields social media networks from liability for the content that users post to their platforms — so that these companies could be held responsible when their algorithms amplify certain types of dangerous content. At least one member of Congress is suggesting that social media companies might need a special regulatory agency.

All of this plays into a growing debate over who should regulate content on Facebook — and how it should be done.

Right now, the oversight board’s scope is limited

Facebook’s new oversight board — which can overrule even CEO Mark Zuckerberg on certain decisions and is meant to function like a Supreme Court for social media content moderation — has a fairly narrow scope of responsibilities. It’s currently tasked with reviewing users’ appeals if they object to a decision Facebook made to take down their posts for violating its rules. And only the board’s decisions on individual posts or questions that are directly referred to it by Facebook are actually binding.

When it comes to Facebook’s fundamental design and the content it prioritizes and promotes to users, all the board can do right now is make recommendations. Some say that’s a problem.

“The jurisdiction that Facebook has currently given it is way too narrow,” Evelyn Douek, a lecturer at Harvard Law School who analyzes social media content moderation policies, told Recode. “If it’s going to have any meaningful impact at all and actually do any good, [the oversight board] needs to have a much broader remit and be able to look at the design of the platform and a bunch of those systems behind what leads to the individual pieces of content in question.”

Facebook’s algorithms decide what shows up when you search for a given topic, which groups you’re recommended to join, and what appears at the top of your News Feed. To keep you on its platforms as long as possible, Facebook uses those algorithms to serve up content that encourages you to scroll, click, comment, and share — all while encountering the ads that fuel its revenue (Facebook has objected to this characterization).
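To make that engagement-optimization loop concrete, here is a minimal, hypothetical sketch of what ranking a feed by predicted engagement can look like. Every signal, weight, and name below is an illustrative assumption; Facebook’s actual ranking models are proprietary and far more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float     # how often the viewer interacts with this author, 0 to 1
    predicted_comments: float  # model's estimate that the viewer will comment, 0 to 1
    predicted_shares: float    # model's estimate that the viewer will share, 0 to 1
    age_hours: float           # time since the post was published

def engagement_score(post: Post) -> float:
    # Weight predicted interactions; shares are weighted higher because they
    # put the post (and the surrounding ads) in front of new users. These
    # signals and weights are illustrative guesses, not Facebook's real model.
    interactions = 1.0 * post.predicted_comments + 2.0 * post.predicted_shares
    freshness = 1.0 / (1.0 + post.age_hours / 24.0)  # newer posts score higher
    return post.author_affinity * interactions * freshness

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidate posts sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)
```

The critics’ point is that a loop like this optimizes for whatever keeps people interacting, with no term in the score for accuracy or harm.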

But these recommendation systems have long been criticized for exacerbating the spread of misinformation and fueling political polarization, racism, and extremist violence. This month, a man said he was able to become an FBI informant on a plot to kidnap Michigan Gov. Gretchen Whitmer because Facebook’s algorithms recommended he join the group where the plot was being organized. While Facebook has taken some steps to adjust its algorithms — after the January 6 insurrection at the US Capitol, the company said it would permanently stop recommending political groups — many think the company hasn’t taken aggressive enough action.

That’s what’s prompting calls for external regulation of the company’s algorithms — whether from the oversight board or from lawmakers.

Can the oversight board take on Facebook’s algorithms?

“The biggest disappointment of the board … is how narrow its jurisdictions are, right? Like, we were promised the Supreme Court, and we’ve been given a piddly little traffic court,” said Douek, while noting that Facebook has signaled the board’s jurisdiction could broaden over time. “Facebook is strongly going to resist letting the board have the kind of jurisdiction that we’re talking about because it goes to their core business interests, right? What is prioritized in the News Feed is the way that they get engagement and therefore the way that they make money.”

Some members of the board have also suggested a similar interest in the company’s algorithms. Recently, Alan Rusbridger, a journalist and oversight board member, told a House of Lords committee in the United Kingdom that he expects he and his fellow board members will eventually ask “to see the algorithm — I feel sure — whatever that means.”

“People say to me, ‘Oh, you’re on that board, but it’s well known that the algorithms reward emotional content that polarizes communities because that makes it more addictive,’” he told the committee. “Well, I don’t know if that’s true or not, and I think as a board we’re going to have to get to grips with that. Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying, our responsibility will be to understand what these machines are — the machines that are going in — rather than the machines that are moderating, what their metrics are.”

Oversight board member John Samples, of the libertarian Cato Institute, told Recode that the board, which only launched late last year, is just getting started but is “aware” of algorithms as an issue. He said the board could comment on algorithms in its non-binding recommendations.

Julie Owono, also an oversight board member and executive director of the organization Internet Sans Frontières, pointed to a recent case the board considered regarding an automated flagging system that wrongly removed a post in support of breast cancer awareness for violating Facebook’s rules about nudity. “We’ve proved in the decision that we’ve made that we’re completely aware of the problems that exist with AI, and algorithms, and automated content decisions,” she told Recode.

A Facebook spokesperson told Recode the company is not planning to refer any cases regarding recommendation or engagement algorithms to the board, and that content-ranking algorithms are not currently in the scope of the board’s appeal process. Still, the spokesperson noted that the board’s bylaws allow its scope to expand over time.

“I’d also point out that currently, as Facebook adopts the board’s policy recommendations, the board is impacting the company’s operations,” a spokesperson for the oversight board added. One example: In the recent case involving a breast cancer awareness post, Facebook says it changed the language of its community guidelines and improved its machine learning-based flagging systems.
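For readers curious what “improving” an automated flagging system can mean in practice, here is a hedged sketch of the general pattern the breast cancer case points at: a classifier threshold combined with a policy-exception check, so that content Facebook’s rules explicitly allow (like health-awareness nudity) is routed to human review rather than auto-removed. The class, threshold, and keyword list below are hypothetical, not Facebook’s actual system.

```python
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    text: str
    nudity_score: float  # classifier confidence in [0, 1]

# Illustrative keyword list; a real system would use a trained context model.
HEALTH_CONTEXT_TERMS = ("breast cancer", "mammogram", "self-exam")

def auto_remove(post: FlaggedPost, threshold: float = 0.9) -> bool:
    # Only act automatically on high-confidence classifier output.
    if post.nudity_score < threshold:
        return False
    # Policy-exception check: the rules allow nudity in health-awareness
    # contexts, which is what the breast cancer case hinged on.
    if any(term in post.text.lower() for term in HEALTH_CONTEXT_TERMS):
        return False  # send to human review instead of removing automatically
    return True
```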

But there are key questions related to algorithms that the board ought to be able to consider, said Katy Glenn Bass, a research director at the Knight First Amendment Institute. The oversight board, she told Recode, should have a “broader mandate” to learn about how Facebook’s algorithms decide what goes viral and what is prioritized in the News Feed, and should be able to study how well Facebook’s attempts to stop the spread of extremism and misinformation are actually working.

Recently, Zuckerberg promised to reduce “politics” in users’ feeds. The company has also instituted a fact-checking program and has tried to discourage people from sharing flagged misinformation with alerts. Following the 2020 election, Facebook tinkered with its News Feed to prioritize mainstream news, a temporary change it eventually rolled back.

“[The board] should have the power to ask Facebook those questions,” Bass told Recode in an email, “and to ask Facebook to let independent experts (like computer scientists) do research on the platform to answer those questions.” Bass, along with other leaders at the Knight First Amendment Institute, has recommended that the oversight board, before ruling on the Trump decision, analyze how Facebook’s “design decisions” contributed to the events at the Capitol on January 6.

Some critics already argue that the oversight board isn’t sufficient for regulating Facebook’s algorithms, and they want the government to institute reform. Better protection for data privacy and digital rights — and legal incentives to curb the platform’s most odious and dangerous content — could force Facebook to change its systems, said Safiya Umoja Noble, a professor at UCLA and a member of the Real Facebook Oversight Board, a group of activists and scholars who have raised concerns about the oversight board.

“The issues are the result of almost two decades of disparate and inconsistent human and software-driven content moderation, coupled with machine learning trained on consumer engagements with all kinds of harmful propaganda,” she told Recode. “[I]f Facebook were legally accountable for damages to the public, and to individuals, from the circulation of harmful and discriminatory advertising, or its algorithmic organization and mobilization of violent, hate-based groups, it would have to reimagine its product.”

Some lawmakers also think Congress should take a more aggressive role in regulating Facebook’s algorithms. On Wednesday, Reps. Tom Malinowski and Anna Eshoo reintroduced the Protecting Americans from Dangerous Algorithms Act, which would strip platforms of their legal immunity in cases where their algorithms amplify content that interferes with civil rights or involves international terrorism.

When asked about the oversight board, Rep. Eshoo told Recode: “If you ask me do I have confidence in this, and that someone on some committee said that they’re concerned about algorithms? I mean, I welcome that. But do I have confidence in it? I don’t.”

Madihha Ahussain, special counsel for anti-Muslim bigotry at Muslim Advocates — a civil rights organization that has sounded the alarm about anti-Muslim content on Facebook’s platform — told Recode that while the “jury is still out” on the oversight board’s legitimacy, she’s concerned it’s acting as “little more than a PR stunt” for the company, and she says the government should “step in.”

“Facebook’s algorithms drive people to hate groups and hateful content,” she told Recode. “Facebook needs to stop caving to political and financial pressures and ensure that their algorithms stop the spread of dangerous, hateful content — regardless of ideology.”

Beyond Facebook, Twitter CEO Jack Dorsey has floated another way to change how social media algorithms work: giving users more control. Ahead of Thursday’s House hearing on misinformation and disinformation, Dorsey pointed to Twitter’s efforts to let people choose what their algorithms prioritize (right now, Twitter users can choose to see tweets in reverse-chronological order or ranked by engagement), as well as a nascent, decentralized research effort called Bluesky, which Dorsey says is working on building “open” recommendation algorithms to provide greater user choice.
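As a rough sketch of the user-control idea Dorsey described, a timeline API might expose the ranking rule as a parameter the user sets rather than a choice the platform makes. This is a hypothetical illustration, not Twitter’s or Bluesky’s actual code.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Tweet:
    text: str
    posted_at: float   # Unix timestamp
    engagement: float  # e.g., likes + retweets + replies

def build_timeline(tweets: list[Tweet],
                   mode: Literal["chronological", "engagement"]) -> list[Tweet]:
    # The user, not the platform, picks the ranking rule.
    if mode == "chronological":
        # Reverse-chronological: newest first, no algorithmic ranking.
        return sorted(tweets, key=lambda t: t.posted_at, reverse=True)
    # Engagement-ranked: the algorithmic "Home"-style ordering.
    return sorted(tweets, key=lambda t: t.engagement, reverse=True)
```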

While there’s clearly growing enthusiasm for changing how social media algorithms work and who can influence them, it’s not yet clear what those changes will involve, or whether they will ultimately come from users’ individual decisions, government regulation, or the social networks themselves. Regardless, overseeing social media algorithms at the scale of Facebook’s is still uncharted territory.

“The law’s still really, really new at this, so it’s not like we have a good model of how to do it anywhere yet,” says Douek, of Harvard Law. “So in some sense, it’s a problem for the oversight board. And in some sense, it’s a bigger problem for sort of legal systems and the law more generally as we enter the algorithmic age.”

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
