On a Friday morning in October, in the lobby of a sleek San Francisco skyscraper, Matthew Butterick was headed toward the elevators when a security guard stopped him. Politely, the guard asked if he was lost.
It was an honest mistake. In checkerboard Vans, black baseball cap, and a windbreaker, Butterick didn’t look like the typical corporate warrior. He looked more like the type of guy who makes fun of the typical corporate warrior. He explained, equally politely, that he was in fact a lawyer with a legitimate reason to be in the building. His co-counsel, Joseph Saveri, leads an antitrust and class-action firm headquartered there.
Apologies, sir—right this way.
He might not look like it, but Butterick is the unlikely driving force behind the first wave of class-action lawsuits against big artificial-intelligence companies. He’s on a mission to make sure writers, artists, and other creative people have control over how their work is used by AI.
This is not where he expected to be. Until recently, Butterick wasn’t a practicing attorney at all, and he’s certainly not anti-technology. For most of his life, he’s worked as a self-employed designer and programmer, tinkering with specialty software. “I’m just a dude in his house,” he says, shrugging. “No assistant, no staff.” His idea of fun? Writing an app from scratch for personal use. He flies into the Bay Area for the requisite court dates—all the lawsuits have been filed in the Northern District of California—but he still spends most of his time working solo from the Los Angeles home he shares with his wife.
Yet when generative AI took off, he dusted off a long-dormant law degree specifically to fight this battle. He has now teamed up with Saveri as co-counsel on four separate cases, starting with a lawsuit filed in November 2022 against GitHub, claiming that the Microsoft subsidiary’s AI coding tool, Copilot, violates open-source licensing agreements. Now, the pair represent an array of programmers, artists, and writers, including comedian Sarah Silverman, who allege that generative AI companies are infringing upon their rights by training on their work without their consent.
The complaints all take slightly different legal approaches, but together, they represent a crusade to give creative people a say in how their work is used in AI training. “It’s pushback,” Butterick says. It’s a mission that AI companies vigorously oppose, because it frames the way they train their tools as fundamentally corrupt. Even many copyright and intellectual property scholars see it as a long shot.