Google says it’s committed to ethical AI research. Its ethical AI team isn’t so sure.

Six months after star AI ethics researcher Timnit Gebru said Google fired her over an academic paper scrutinizing a technology that powers some of the company’s key products, the company says it’s still deeply committed to ethical AI research. It has promised to double its research staff studying responsible AI to 200 people, and CEO Sundar Pichai has pledged his support for funding more ethical AI projects. Jeff Dean, the company’s head of AI, said in May that while the controversy surrounding Gebru’s departure was a “reputational hit,” it’s time to move on.

But some current members of Google’s tightly knit ethical AI group told Recode the reality is different from the one Google executives are publicly presenting. The 10-person group, which studies how artificial intelligence impacts society, is a subdivision of Google’s broader new responsible AI organization. Its researchers say the team has been in a state of limbo for months, and that they have serious doubts that company leaders can rebuild credibility in the academic community — or that they will listen to the group’s ideas. Google has yet to hire replacements for the team’s two former leaders. Many members feel so adrift that they convene daily in a private messaging group to complain about leadership, manage themselves on an ad-hoc basis, and seek guidance from their former bosses. Some are considering leaving to work at other tech companies or to return to academia, and say their colleagues are thinking of doing the same.

“We want to continue our research, but it’s really hard when this has gone on for months,” said Alex Hanna, a researcher on the ethical AI team. Despite the challenges, Hanna added, individual researchers are trying to continue their work and effectively manage themselves — but if conditions don’t change, “I don’t see much of a path forward for ethics at Google in any kind of substantive way.”

A spokesperson for Google’s AI and research department declined to comment on the ethical AI team.

Google has a vast research organization of thousands of people that extends far beyond the 10 people it employs specifically to study ethical AI. Other teams also focus on the societal impacts of new technologies, but the ethical AI team had a reputation for publishing groundbreaking papers about algorithmic fairness and bias in the data sets that train AI models. The team has lent Google’s research organization credibility in the academic community by demonstrating that it’s a place where seasoned scholars can do cutting-edge — and, at times, critical — research about the technologies the company develops. That’s important for Google, a company billions of people rely on every day to navigate the internet, and whose core products, such as Search, increasingly depend on AI.

While AI has the world-changing potential to help diagnose cancer, detect earthquakes, and replicate human conversation, the developing technology also has the ability to amplify biases against women and minorities, pose privacy threats, and contribute to carbon emissions. Google has a review process to determine whether new technologies are in line with its AI principles, which it introduced in 2018. And its AI ethics team is supposed to help the company find its own blind spots and ensure it develops and applies this technology responsibly. But in light of the controversy over Gebru’s departure and the upheaval of its ethical AI team, some computer science academics are concerned that Google is plowing ahead with world-changing new technologies without adequately addressing internal feedback.

In May, for example, Google was criticized for announcing a new AI-powered dermatology app that had a significant shortcoming: It vastly underrepresented darker skin tones in its test data compared with lighter ones. It’s the kind of issue the ethical AI team might have been able to help avoid, had it been consulted — and were it not in its current state.

The misfits of Google research

For the past several months, the leadership of Google’s ethical AI team has been in a state of flux.

In the span of only a few months, the team — which has been referred to as a group of “friendly misfits” due to its status-quo-challenging research — lost two more leaders after Gebru’s departure. In February, Google fired Meg Mitchell, a researcher who founded the ethical AI team and co-led it with Gebru. And in April, Mitchell’s former manager, top AI scientist Samy Bengio, who previously managed Gebru and said he was “stunned” by what happened to her, resigned. Bengio, who did not work for the ethical AI team directly but oversaw its work as the leader of the larger Google Brain research division, will lead a new AI research team at Apple.

In mid-February, Google appointed Marian Croak, a former VP of engineering, to be the head of its new Responsible AI department, which the AI ethics team is a part of. But several sources told Recode that she is too high-level to be involved in day-to-day operations of the team.

This has left the ethical AI unit running itself in an ad-hoc fashion and turning to its former managers, who no longer work at the company, for informal guidance and research advice. Researchers on the team have invented their own structure: They rotate the responsibility of running weekly staff meetings. And they’ve designated two of their own researchers to keep other teams at Google updated on what they’re working on, which was a key part of Mitchell’s job. Because Google employs more than 130,000 people around the world, it can be difficult for researchers like those on the AI ethics team to know whether their work will actually be implemented in products.

“But now, with me and Timnit not being there, I think the people threading that needle are gone,” Mitchell told Recode.

The past six months have been particularly difficult for newer members of the ethical AI team, who at times have been unsure who to ask for basic information, such as where to find information about their salary or how to access Google’s internal research tools, according to several sources.

And some researchers on the team feel at risk after watching Gebru and Mitchell’s fraught departures. They’re worried that, if Google decides their work is too controversial, they could be ousted from their jobs, too.

In meetings with the ethical AI team, Croak, an accomplished engineering research leader with little experience in the field of ethics, has tried to reassure staff that she is the ally the team is looking for. Croak is one of the highest-ranking Black executives at Google, where Black women represent only about 1.2 percent of the workforce. She has acknowledged Google’s lack of progress on improving the racial and gender diversity of its employees — an issue Gebru was vocal about while working at Google. And she has struck an apologetic tone, acknowledging the pain the team is going through, according to several researchers.

But the executive has gotten off on the wrong foot with the team, several sources say, because they feel she’s made a series of empty promises.

In the weeks before Croak was officially appointed to lead the new Responsible AI unit, she began having informal conversations with members of the ethical AI team about how to repair the damage done to it. Hanna and her colleagues on the team drafted a letter laying out demands, including “structural changes” to the research organization.

That restructuring happened. But ethical AI staff were blindsided when they first heard about the changes from a Bloomberg article.

“We happen to be the last people to know about it internally, even though we were the team that started this process,” said Hanna in February. “Even though we were the team that brought these complaints and said there needs to be a reorganization.”

“In the very beginning, Marian said, ‘We want your help in drafting a charter — you should have a say in how you’re managed,’” said another researcher on the ethical AI team who spoke on the condition of anonymity for fear of retaliation. “Then she disappeared for a month or two and said, ‘Surprise! Here’s the Responsible AI organization.’”

Croak told the team there was a miscommunication about the reorganization announcement. She continues to seek feedback from the researchers and assures them that leadership all the way up to Pichai recognizes the need for their work.

But several members of the ethical AI team say that even if Croak is well-intentioned, they question whether she has the institutional power to truly reform the dynamics at Google that led to the Gebru controversy in the first place.

Some are disillusioned about their future at Google and are questioning whether they have the freedom they need to do their work. Google has agreed to one of their demands, the restructuring, but it hasn’t taken action on several others: They want Google to publicly commit to academic freedom and to clarify its research review process. They also want it to apologize to Gebru and Mitchell and offer the researchers their jobs back — but at this point, that’s a highly unlikely prospect. (Gebru has said she would not take her old job back even if Google offered it to her.)

“There needs to be external accountability,” said Gebru in an interview in May. “And maybe once that comes, this team would have an internal leader who would champion them.”

Some researchers on the ethical AI team told Recode they are considering leaving the company, and that several of their colleagues are thinking of doing the same. In the highly competitive field of AI, where in-demand researchers at top tech companies can command seven-figure salaries, losing that talent to a competitor would be a significant blow for Google.

Google’s shaky standing in the research community

Google is one of the largest funders of research in the tech industry — it spent more than $27 billion on research and development last year, more than NASA’s annual budget.

But the controversies surrounding its ethical AI team have left some academics questioning its commitment to letting researchers do their work freely, without being muzzled by the company’s business interests.

Thousands of professors, researchers, and lecturers in computer science signed a petition criticizing Google for firing Gebru, calling it “unprecedented research censorship.”

Dean and other AI executives at Google know that the company has lost trust in the broader research community. Their strategy for rebuilding that trust is “to continue to publish cutting-edge work” that is “deeply interesting,” according to comments Dean made at a February staff research meeting. “It will take a little bit of time to regain trust with people,” Dean said.

That might take more time than Dean predicted.

“I think Google’s reputation is basically irreparable in the academic community at this point, at least in the medium term,” said Luke Stark, an assistant professor at Western University in Ontario, Canada, who studies the social and ethical impacts of artificial intelligence.

Stark recently turned down a $60,000 unrestricted research grant from Google in protest over Gebru’s ousting. He is reportedly the first academic ever to reject the generous and highly competitive funding.

Stark isn’t the only academic to protest Google over its handling of the ethical AI team. Since Gebru’s departure, two groups focused on increasing diversity in the field, Black in AI and Queer in AI, have said they will reject any funding from Google. Two academics invited to speak at a Google-run workshop boycotted it in protest. A popular AI ethics research conference, FAccT, suspended Google’s sponsorship.

And at least four Google employees, including an engineering director and an AI research scientist, have left the company and cited Gebru’s firing as a reason for their resignations.

Of course, these departures represent a handful of people out of a large group. Others are staying for now because they still believe things can change. One Google employee working in the broader research department but not on the ethical AI team said that they and their colleagues strongly disapproved of how leadership forced out Gebru. But they feel that it’s their responsibility to stay and continue doing meaningful work.

“Google is so powerful and has so much opportunity. It’s working on so much cutting-edge AI research. It feels irresponsible for no one who cares about ethics to be here,” the employee said.

And these internal and external concerns about how Google is handling ethical AI development extend far beyond the academic community. Regulators have started paying attention, too. In December, nine members of Congress sent a letter to Google demanding answers about Gebru’s firing. And the influential racial justice group Color of Change — which helped launch an advertiser boycott of Facebook last year — has called for an external audit of potential discrimination at Google in light of Gebru’s ouster.

These outside groups are paying close attention to what happens inside Google’s AI team because they recognize the increasing role that AI will play in our lives. Virtually every major tech company, including Google, sees AI as a key technology in the modern world. And with Google already in the political hot seat because of antitrust concerns, the stakes are high for the company to get this new technology right.

“It’s going to take a lot more than a PR push to shore up trust in responsible AI efforts, and I don’t think that’s being officially recognized by current leaders,” said Hanna. “I really don’t think they understand how much damage has been done to Google as a respectable actor in this space.”