Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model — and fend off serious accusations of bias, discrimination, and general creepiness — ever since.
The prospect of being judged by AI for something as important as an insurance claim was alarming to many who saw the thread, and it should be. We’ve seen how AI can discriminate against people on the basis of race, gender, economic class, and disability, among other categories, leading to those people being denied housing, jobs, education, or justice. Now we have an insurance company that prides itself on largely replacing human brokers and actuaries with bots and AI, collecting data about customers without them realizing they were giving it away, and using those data points to assess their risk.
Over a series of seven tweets, Lemonade claimed that it gathers more than 1,600 “data points” about its users — “100X more data than traditional insurance carriers,” the company claimed. The thread didn’t say what those data points are or how and when they’re collected, simply that they produce “nuanced profiles” and “remarkably predictive insights” which help Lemonade determine, in apparently granular detail, its customers’ “level of risk.”
Lemonade then provided an example of how its AI “carefully analyzes” videos that it asks customers making claims to send in “for signs of fraud,” including “non-verbal cues.” Traditional insurers are unable to use video this way, Lemonade said, crediting its AI for helping it improve its loss ratio: the share of what it collects in premiums that it pays back out in claims. Lemonade used to pay out a lot more than it took in, which the company said was “friggin terrible.” Now, the thread said, it takes in more than it pays out.
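In insurance terms, that’s a loss ratio that has dropped below 100 percent. A minimal sketch of the arithmetic, with invented figures rather than Lemonade’s actual numbers:

```python
def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Share of premium dollars paid back out as claims."""
    return claims_paid / premiums_earned

# Hypothetical figures for illustration only, not Lemonade's actuals.
early = loss_ratio(claims_paid=1_600_000, premiums_earned=1_000_000)
later = loss_ratio(claims_paid=700_000, premiums_earned=1_000_000)

print(f"early loss ratio: {early:.0%}")  # 160% -- paying out more than it takes in
print(f"later loss ratio: {later:.0%}")  # 70% -- taking in more than it pays out
```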
“It’s incredibly callous to celebrate how your company saves money by not paying out claims (in some cases to people who are probably having the worst day of their lives),” Caitlin Seeley George, campaign director of digital rights advocacy group Fight for the Future, told Recode. “And it’s even worse to celebrate the biased machine learning that makes this possible.”
Lemonade, which was founded in 2015, offers renters, homeowners, pet, and life insurance in many US states and a few European countries, with aspirations to expand to more locations and add a car insurance offering. The company has more than 1 million customers, a milestone that it reached in just a few years. That’s a lot of data points.
“At Lemonade, one million customers translates into billions of data points, which feed our AI at an ever-growing speed,” Lemonade’s co-founder and chief operating officer Shai Wininger said last year. “Quantity generates quality.”
The Twitter thread made the rounds to a horrified and growing audience, drawing the requisite comparisons to the dystopian tech television series Black Mirror and prompting people to ask if their claims would be denied because of the color of their skin, or if Lemonade’s claims bot, “AI Jim,” decided that they looked like they were lying. What, many wondered, did Lemonade mean by “non-verbal cues”? Threats to cancel policies (and screenshot evidence from people who did cancel) mounted.
By Wednesday, the company walked back its claims, deleting the thread and replacing it with a new Twitter thread and blog post. You know you’ve really messed up when your company’s apology Twitter thread includes the word “phrenology.”
So, we deleted this awful thread which caused more confusion than anything else.
TL;DR: We do not use, and we’re not trying to build AI that uses physical or personal features to deny claims (phrenology/physiognomy) (1/4)
— Lemonade (@Lemonade_Inc) May 26, 2021
“The Twitter thread was poorly worded, and as you note, it alarmed people on Twitter and sparked a debate spreading falsehoods,” a spokesperson for Lemonade told Recode. “Our users aren’t treated differently based on their appearance, disability, or any other personal characteristic, and AI has not been and will not be used to auto-reject claims.”
The company also maintains that it doesn’t profit from denying claims, and that it takes a flat fee from customer premiums and uses the rest to pay claims. Anything left over goes to charity (the company says it donated $1.13 million in 2020). But this model assumes that customers are paying more in premiums than they’re asking for in claims.
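Based on the company’s public description, the flow of money looks roughly like the sketch below; the flat-fee percentage and the dollar amounts are assumptions for illustration, not Lemonade’s disclosed terms:

```python
# Rough sketch of the flat-fee model Lemonade describes publicly.
# FLAT_FEE_RATE and all dollar amounts are illustrative assumptions.
FLAT_FEE_RATE = 0.25

def settle_year(premiums: float, claims: float) -> dict:
    fee = premiums * FLAT_FEE_RATE        # Lemonade keeps this regardless
    pool = premiums - fee                 # what's left over to pay claims
    giveback = max(pool - claims, 0.0)    # leftovers go to charity
    shortfall = max(claims - pool, 0.0)   # claims that exceed the pool
    return {"fee": fee, "claims_paid": claims,
            "giveback": giveback, "shortfall": shortfall}

print(settle_year(premiums=100.0, claims=60.0))
# {'fee': 25.0, 'claims_paid': 60.0, 'giveback': 15.0, 'shortfall': 0.0}
```

Note that the giveback only turns positive when claims stay below the pool, which is exactly the assumption described above.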
And Lemonade isn’t the only insurance company that relies on AI to power a large part of its business. Root offers car insurance with premiums based largely (but not entirely) on how safely you drive — as determined by an app that monitors your driving during a “test drive” period. But Root’s potential customers know they’re opting into this from the start.
So, what’s really going on here? According to Lemonade, the claim videos customers have to send are merely a way to let them explain their claims in their own words, and the “non-verbal cues” it touted are processed by facial recognition technology, which is used to make sure one person isn’t making claims under multiple identities. Any potential fraud, the company says, is flagged for a human to review and make the decision to accept or deny the claim. AI Jim doesn’t deny claims.
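Lemonade hasn’t published how that identity check works, but a duplicate-identity screen of the kind it describes typically compares face embeddings across claims and flags near-matches for a human to look at. The sketch below is a generic version of that pipeline under those assumptions; the threshold, embeddings, and function names are all made up, not Lemonade’s implementation:

```python
import numpy as np

# Assumed threshold; real systems tune this, and where it sits drives
# both false matches and false non-matches.
SIMILARITY_THRESHOLD = 0.9

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_duplicate_identities(claim_embedding: np.ndarray,
                              prior_embeddings: dict[str, np.ndarray]) -> list[str]:
    """Return prior claimant IDs whose face embedding is suspiciously close.

    In the process Lemonade describes, a match only flags the claim for
    human review; it does not deny anything automatically.
    """
    return [claimant_id
            for claimant_id, embedding in prior_embeddings.items()
            if cosine_similarity(claim_embedding, embedding) >= SIMILARITY_THRESHOLD]
```

The catch, as critics note, is that the embedding model itself can be far less accurate for some faces than others, so the same threshold produces very different error rates across groups.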
Advocates say that’s not good enough.
“Facial recognition is notorious for its bias (both in how it’s used and also how bad it is at correctly identifying Black and brown faces, women, children, and gender-nonconforming people), so using it to ‘identify’ customers is just another sign of how Lemonade’s AI is biased,” George said. “What happens if a Black person is trying to file a claim and the facial recognition doesn’t think it’s the actual customer? There are plenty of examples of companies that say humans verify anything flagged by an algorithm, but in practice it’s not always the case.”
The blog post also didn’t address — nor did the company answer Recode’s questions about — how Lemonade’s AI and its many data points are used in other parts of the insurance process, like determining premiums or deciding whether someone is too risky to insure at all.
Lemonade did give some interesting insight into its AI ambitions in a 2019 blog post written by CEO and co-founder Daniel Schreiber that detailed how algorithms (which, he says, no human can “fully understand”) can remove bias. He tried to make this case by explaining how an algorithm that charged Jewish people more for fire insurance because they light candles in their homes as part of their religious practices would not actually be discriminatory, because it would be evaluating them not as a religious group, but as individuals who light a lot of candles and happen to be Jewish:
The fact that such a fondness for candles is unevenly distributed in the population, and more highly concentrated among Jews, means that, on average, Jews will pay more. It does not mean that people are charged more for being Jewish.
The upshot is that the mere fact that an algorithm charges Jews – or women, or black people – more on average does not render it unfairly discriminatory.
Happy Hanukkah!
This is what Schreiber described as a “Phase 3 algorithm,” but the post didn’t say how the algorithm would determine this candle-lighting proclivity in the first place — you can imagine how this could be problematic — or if and when Lemonade hopes to incorporate this kind of pricing. But, he said, “it’s a future we should embrace and prepare for” and one that was “largely inevitable” — assuming insurance pricing regulations change to allow companies to do it.
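Schreiber’s candle example is easy to simulate. The toy numbers below are invented, but they show the mechanism critics worry about: a price computed from a facially neutral feature that is unevenly distributed across groups reproduces a group-level price gap, even though the algorithm never sees the protected attribute:

```python
import random

random.seed(0)

BASE_PREMIUM = 100.0
CANDLE_SURCHARGE = 40.0  # invented surcharge for "heavy candle use"

def premium(heavy_candle_user: bool) -> float:
    # The pricing rule never looks at religion, only at the proxy feature.
    return BASE_PREMIUM + (CANDLE_SURCHARGE if heavy_candle_user else 0.0)

def average_premium(proxy_rate: float, n: int = 100_000) -> float:
    # proxy_rate: invented share of the group with the surcharged trait.
    return sum(premium(random.random() < proxy_rate) for _ in range(n)) / n

print(f"group where the trait is common: ${average_premium(0.60):.2f}")  # ~$124
print(f"group where the trait is rare:   ${average_premium(0.10):.2f}")  # ~$104
```

That group-level gap is precisely what Schreiber argues is not “unfairly discriminatory,” and what critics describe as disparate impact.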
“Those who fail to embrace the precision underwriting and pricing of Phase 3 will ultimately be adversely-selected out of business,” Schreiber wrote.
This all assumes that customers want a future where they’re covertly analyzed across 1,600 data points they didn’t realize Lemonade’s bot, “AI Maya,” was collecting, and then assigned individualized premiums based on those data points — which remain a mystery.
The reaction to Lemonade’s first Twitter thread suggests that customers don’t want this future.
“Lemonade’s original thread was a super creepy insight into how companies are using AI to increase profits with no regard for peoples’ privacy or the bias inherent in these algorithms,” said George, from Fight for the Future. “The automatic backlash that caused Lemonade to delete the post clearly shows that people don’t like the idea of their insurance claims being assessed by artificial intelligence.”
But it also suggests that customers didn’t realize a version of it was happening in the first place, and that their “instant, seamless, and delightful” insurance experience was built on top of their own data — far more of it than they thought they were providing. It’s rare for a company to be so blatant about how that data can be used in its own best interests and at the customer’s expense. But rest assured that Lemonade is not the only company doing it.