Online Ad Targeting Does Work—As Long As It’s Not Creepy

If you click on the right-hand corner of any advertisement on Facebook, the social network will tell you why it was targeted to you. But what would happen if those buried targeting tactics were transparently displayed, right next to the ad itself? That’s the question at the heart of new research from Harvard Business School published in the Journal of Consumer Research. It turns out advertising transparency can be good for a platform—but it depends on how creepy marketer methods are.

The study has wide-reaching implications for advertising giants like Facebook and Google, which increasingly find themselves under pressure to disclose more about their targeting practices. The researchers found, for example, that consumers are reluctant to engage with ads that they know have been served based on their activity on third-party websites, a tactic Facebook and Google routinely use. Which also suggests that tech giants have a financial incentive to ensure users aren’t aware, at least up front, about how some ads are served.

Don’t Talk Behind My Back

For their study, Tami Kim, Kate Barasz and Leslie K. John conducted a number of online advertising experiments to understand the effect transparency has on user behavior. They found that if sites tell you they’re using unsavory tactics—like tracking you across the web—you’re far less likely to engage with their ads. The same goes for other invasive methods, like inferring something about your life when you haven’t explicitly provided that information. A famous example of this is from 2012, when Target began sending a woman baby-focused marketing mailers, inadvertently divulging to her father that she was pregnant.

“I think it will be interesting to see how firms respond in this age of increasing transparency,” says John, a professor at Harvard Business School and one of the authors of the paper. “Third-party data sharing obviously plays a big part in behaviorally targeted advertising. And behaviorally targeted advertising has been shown to be very effective—in that it increases sales. But our research shows that when we become aware of third-party sharing—and also of firms making inferences about us—we feel intruded upon and as a result ad effectiveness can decline.”

The researchers didn’t find, however, that users react poorly to all forms of ad transparency. If companies readily disclose that they employ targeting methods perceived to be acceptable, like recommending products based on items you’ve clicked in the past, people will make purchases all the same. And the study suggests that if people already trust the platform where those ads are displayed, they might even be more likely to click and buy.

‘When we become aware of third-party sharing—and also of firms making inferences about us—we feel intruded upon.’

Leslie K. John, Harvard Business School

The researchers say their findings mimic social truths in the real world. Tracking users across websites is viewed as an an inappropriate flow of information, like talking behind a friend’s back. Similarly, making inferences is often seen as unacceptable, even if you’re drawing a conclusion the other person would freely disclose. For example, you might tell a friend that you’re trying to lose weight, but find it inappropriate for him to ask if you want to shed some pounds. The same sort of rules apply to the online world, according to the study.

“And this brings to the topic that excites me the most—norms in the digital space are still evolving and less well understood,” says Kim, the lead author of the study and a marketing professor at the University of Virginia’s business school. “For marketers to build relationships with consumers effectively, it’s critical for firms to understand what these norms are and avoid practices that violate these norms.”

Where’d That Ad Come From?

In one experiment, the researchers recruited 449 people from Amazon’s Mechanical Turk platform to look at ads for a fictional bookstore. They were randomly shown two different ad-transparency messages, one saying they were targeted based on products they’ve clicked on in the past, and one saying they were targeted based on their activity on other websites. The study found that ads appended with the second message—revealing that users had been tracked across the web—were 24 percent less effective. (For the lab studies, “effectiveness” was based on how the subjects felt about the ads.)

In another experiment, the researchers looked at whether ads are less effective when companies disclose they’re making inferences about their users. In this scenario, 348 participants were shown an ad for an art gallery, along with a message saying either they were seeing the ad based on “your information that you stated about you,” or “based on your information that we inferred about you.” In this study, ads were less 17 percent effective when it was revealed that they were targeted based on things a website concluded about you on its own, rather than facts you actively provided.

The researchers found that their control ads, which didn’t have any transparency messages, performed just as well as those with “acceptable” ad-transparency disclosures—implying that being up-front about targeting might not impact a company’s bottom line, as long as it’s not being creepy. The problem is that companies do sometimes use unsettling tactics; the Intercept discovered earlier this month, for example, that Facebook has developed a service designed to serve ads based on how it predicts consumers will behave in the future.

In yet another experiment, the academics asked 462 participants to log into their Facebook accounts and look at the first ad they saw. They then were instructed to copy and paste Facebook’s “Why am I seeing this ad” message, as well as the name of the company that purchased it. Responses included standard targeting methods, like “my age I stated on my profile,” as well as invasive, distressing tactics like “my sexual orientation that Facebook inferred based on my Facebook usage.”

Journal of Consumer Research

The researchers coded these responses, and gave them each a “transparency score.” The higher the score, the more acceptable the ad-targeting practice. The subjects were then asked how interested they were in the ad, including whether they would purchase something from the company’s website. The results show participants who were served ads using acceptable practices were more likely to engage than those who were served ads based on practices perceived to be unacceptable.

Then, the researchers tested whether users who distrusted Facebook were less likely to engage with an ad; they found both that and the reverse to be true. People who trust Facebook more are more likely to engage with advertisements—though they have to be targeted in accepted ways. In other words, Facebook has a financial incentive beyond public relations to ensure users trust it. When they don’t, people engage with advertisements less.

Journal of Consumer Research

“What I think will be interesting moving forward is what users define for themselves as transparency. That definition is rapidly changing, and how platforms define it may not align with how users want or need it defined to feel like they understand,” says Susan Wenograd, a digital advertising consultant with a Facebook focus. “No one thought much of quizzes and apps being tied to Facebook before, but of course they do now since the testimony regarding Cambridge Analytica. It’s a fine line to be transparent without scaring users.”

When Transparency Works For Everyone

In some situations, according to the study, being honest about targeting practices can even lead to more clicks and purchases. In another experiment, the researchers worked with two loyalty point-redemption programs, which previous research has shown consumers trust highly. When they showed people messages next to ads saying things like “recommended based on your clicks on our site,” they were more likely to click and make purchases than if no message was present.

That says being honest can actually improve a company’s bottom line—as long as they’re not tracking and targeting users in an invasive way. As the researchers wrote, “even the most personalized, perfectly targeted advertisement will flop if the consumer is more focused on the (un)acceptability of how the targeting was done in the first place.”

The Ad Machine

Maybe Election Poll Predictions Aren’t Broken After All

No matter where you situate yourself on the political spectrum, don’t try to deny that the 2016 US presidential election made you go “whaaaaaaat?” This isn’t a judgment; if you believe Michael Wolff’s book, even Donald Trump didn’t think Donald Trump would be president. Partially that’s because of polls. Even if you didn’t spend 2016 frantically refreshing FiveThirtyEight and arguing the relative merits of Sam Wang versus Larry Sabato (no judgment), if you just watched the news, you probably thought that Hillary Clinton had anywhere from a 71 percent to a 99 percent chance of becoming president.

Yet.

That outcome, along with a similarly hinky 2015 election in the United Kingdom, kicked into life an ecosystem of mea maxima culpas from pollsters around the globe. (This being data, what you really want is a mea maxima culpa, a mea minima culpa, and mean, median, and standard-deviation culpas.) The American Association for Public Opinion Research published a 50-page “Evaluation of 2016 Election Polls.” The British report on polls in 2015 was 120 pages long. Pollsters were “completely and utterly wrong,” it seemed at the time, thanks to low response rates to telephone polls, which often go to landlines, which people often don’t answer anymore.

So now I’m gonna blow your mind: Those pollsters might have been wrong about being wrong. In fact, if you look at polling from 220 national elections since 1942—that’s 1,339 polls from 32 countries, from the days of face-to-face interviews to today’s online polls—you find that while polls haven’t gotten better at predicting winners, they haven’t gotten much worse, either. “You look at the last week of polls for all these countries, and essentially look at how those change,” says Will Jennings, a political scientist at the University of Southampton and coauthor of a new paper on polling error in Nature Human Behaviour. “There’s no overall trend of errors increasing.”

Jennings and his coauthor Christopher Wlezien, a political scientist at the University of Texas, actually examined the difference between how a candidate or party polled and the actual, final vote share. That absolute value became their dependent variable, the thing that changed over time. Then they did some math.
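The error metric the researchers built on can be sketched in a few lines. The poll numbers below are invented for illustration; they are not data from the study.

```python
# Sketch of the metric described above: the absolute difference between
# a candidate's poll share and the actual final vote share, averaged
# across polls. All figures here are hypothetical examples.

def mean_absolute_error(polls, actual):
    """Average |poll share - final vote share| across a set of polls."""
    return sum(abs(p - actual) for p in polls) / len(polls)

# Hypothetical last-week poll shares (in percent) for one candidate,
# against a hypothetical final result of 48 percent.
last_week_polls = [46.0, 47.5, 50.0, 49.0, 46.5]
final_share = 48.0

print(round(mean_absolute_error(last_week_polls, final_share), 2))  # prints 1.4
```

Averaging that quantity over every poll in a given year is what lets the authors compare, say, 1948 against 2016 on the same scale.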

First, they looked at an even bigger database of polls that covered entire elections, starting 200 days before Election Day. That far out, they found, the average absolute error was around 4 percent. Fifty days out, it declines to about 3 percent, and by the evening before the election it’s about 2 percent. That was consistent across years and countries, and it’s just what you’d expect: As more people start thinking about voting and more polls start polling, the results become more accurate.

The red line tracks the average error in political polls in the last week of the campaign over 75 years.

WILL JENNINGS

More important, if you look just at last-week polls over time and take the error for every year from 1943 to 2017, the mean stays at 2.1 percent. Actually, that’s not quite true—in this century it dropped to 2.0 percent. Polling is still pretty OK. “It isn’t what we quite expected when we started,” Jennings says.

In 2016 in the US, Jennings says, “the actual national opinion polls weren’t extraordinarily wrong. They were good examples of the kinds of errors we see historically.” It’s just that people kind of expected them to be less wrong. “Historically, technologically advanced societies think these methods are perfect,” he says, “when of course they have error built in.”

Sure, some polls are just lousy—go check the archives at the Dewey Presidential Library for more on that. Really, though, the shocks tend to stick out. When polls casually and stably barrel toward a formality, nobody remembers. “There weren’t a lot of complaints in 2008. There weren’t a lot of complaints in 2012,” says Peter Brown, assistant director of the Quinnipiac University Poll. But 2016 was a little different. “There were more polls than in recent times that didn’t perform up to their previous results in elections like ’08 and ’12.”

Also, according to AAPOR’s report on 2016, national polls actually reflected the outcome of the presidential race pretty well—Hillary Clinton did, after all, win the popular vote. Smaller state polls showed more uncertainty and underestimated Trump support—and had to deal with a lot of people changing their minds in the last week of the campaign. Polls that year also didn’t account for overrepresentation in their samples of college graduates, who were more likely to support Clinton.

In a similarly methodological vein, though, Jennings’ and Wlezien’s work has its own limitations. In a culture where civilians like you and me watch polls obsessively, their focus on the last week before Election Day might not be the right lens. That’s especially important if it’s true, as some observers hypothesize, that pollsters “herd” in the final days, trying to make sure their data is in line with their peers’ and competitors’.

“It’s a narrow and limited way to look at how good political polls are,” says Jon Cohen, chief research officer at SurveyMonkey. Cohen says he has plenty of respect for the researchers’ work, but that “these authors are telling a story that is in some ways orthogonal to how people experienced the election, not just because of polls that came out a week or 48 hours before Election Day but because of what the polls led them to believe over the whole course of the campaign.”

Generally speaking, pollsters agree that response rates remain a real problem. Online polling or so-called interactive voice response polling, in which a bot interviews you over the phone, might not be as good as random-digit-dial phone polls were a half-century ago. At the turn of the century, the paper notes, maybe a third of the people a pollster contacted would actually respond. Now it’s less than one in 10. That means surveys are less representative, less random, and more likely to miss trends. “Does the universe of voters with cell phones differ from the universe of voters who don’t have cell phones?” asks Brown. “If it was the same universe, you wouldn’t have to call cell phones.”

Internet polling has similar problems. If you preselect a sample to poll via the web, as some pollsters do, that’s by definition not random. That doesn’t mean it can’t be accurate, but as a technique it requires some new statistical thinking. “Pollsters are constantly grappling with issues around changing electorates and changing technology,” Jennings says. “Not many of them are complacent. But it’s some reassurance that things aren’t getting worse.”

In the meantime, it would be nice if polls could start working on ways to better express the uncertainty around their numbers, if more of us are going to watch them. (Cohen says that’s why SurveyMonkey issued multiple looks at the special election in Alabama last year, based in part on different turnout scenarios.) “Ultimately it would be nice if we could evaluate polls on their methodologies and inputs and not just on output,” Cohen says. “But that’s the long game.” And it’s worth keeping in mind when you start clicking on those midterm election polling results this spring.

Counting Votes

  • Voting ahead of the 2018 election has begun, and some systems remain insecure.
  • Two senators offer suggestions for securing US voting systems.
  • The 2016 election results surprised many people, but not the big-data guru in Trump’s campaign.

Clean Energy Is a Bright Spot Amid a Dark Tech Cloud

The mood around tech is dark these days. Social networks are a cesspool of harassment and lies. On-demand companies are creating a bleak economy of gig labor. AI learns to be racist. Is there anywhere the tech news is radiant with old-fashioned optimism? Where good cheer abounds?

Why, yes, there is: clean energy. It is, in effect, the new Silicon Valley—filled with giddy, breathtaking ingenuity and flat-out great news.

This might seem surprising given the climate-change denialism in Washington. But consider, first, residential solar power. The price of panels has plummeted in the past decade and is projected to drop another 30 percent by 2022. Why? Clever engineering breakthroughs, like the use of diamond wire to cut silicon wafers into ever-skinnier slabs, producing higher yields with less raw material.

Manufacturing costs are down. According to US government projections, the fastest-growing occupation of the next decade will be solar photovoltaic installer. And you know who switched to solar power last year, because it was so cheap? The Kentucky Coal Museum.


Tech may have served up Nazis in social media feeds, but, hey, it’s also creating microgrids—a locavore equivalent for the solar set. One of these efforts is Brooklyn-based LO3 Energy, a company that makes a paperback-sized device and software that lets owners of solar-equipped homes sell power to their neighbors—verifying the transactions with the blockchain, no less. LO3 is testing its system in 60 homes on its Brooklyn grid and hundreds more in other areas.

“Buy power and you’re buying from your own community,” LO3 founder Lawrence Orsini tells me. His chipsets can also connect to smart appliances, so you could save money by letting his system cycle down your devices when the grid is low on power. The company uses internet logic—smart devices that talk to one another over a dumb network—to optimize power consumption on the fly, making local clean energy ever more viable.

But wait, doesn’t blockchain number-crunching use so much electricity that it creates wasteful heat? It does. So Orsini invented DareHenry, a rack loaded with six GPUs; while it processes math, phase-changing goo absorbs the outbound heat and uses it to warm a home. Blockchain cogeneration, people! DareHenry is 4 feet of gorgeous, Victorianesque steampunk aluminum—so lovely you’d want one to show off to guests.

Solar and blockchain are just the tip of clean tech. Within a few years, we’ll probably see the first home fuel-cell systems, which convert propane to electricity. Such systems are “about 80 percent efficient,” marvels Garry Golden, a futurist who has studied clean energy. (He’s also on LO3’s grid, along with the rest of his block.)

The point is, clean energy has a utopian character that reminds me of the dawn of computers. The pioneers of the 1970s were crazy hackers, hell-bent on making machines cheap enough for the masses. Everybody thought they were nuts, or small potatoes—yet they revolutionized communication. When I look at Orsini’s blockchain-based energy-trading routers, I see the Altair. And there are oodles more inventors like him.

Mind you, early Silicon Valley had something crucial that clean energy now doesn’t: massive government support. The military bought a great deal of microchips, helping scale up computing. Trump’s band of climate deniers aren’t likely to be buyers of first resort for clean energy, but states can do a lot. California already has, for instance, by creating quotas for renewables. So even if you can’t afford this stuff yourself, you should pressure state and local officials to crank up their solar energy usage. It’ll give us all a boost of much-needed cheer.

Write to clive@clivethompson.net.


This article appears in the January issue. Subscribe now.

Are Tech Companies Trying to Derail Sex-Trafficking Bill?

Last month, tech companies, anti-sex-trafficking advocates, prosecutors, and legislators celebrated a hard-won compromise on a bill designed to help prosecutors and victims pursue sites such as Backpage.com that facilitate online sex trafficking. Now that consensus may be in jeopardy amid a controversial proposed amendment to the House version of the same bill, which had 170 cosponsors and was expected to sail through without incident.

Both bills had focused on altering Section 230 of the Communications Decency Act, which grants websites immunity for material posted by others. Those bills would remove the liability shield for “knowingly” publishing material related to sex trafficking.

The new proposal would only remove the shield for publishing with “reckless disregard” for sex trafficking, a tougher legal standard to prove. It would also create a new crime under the Mann Act, an infamous 1910 law also known as the White Slavery Act, for using a website to promote or facilitate prostitution. Anti-sex-trafficking advocates say looping in the Mann Act introduces a new element that could upset the delicate compromise; they also fear it will hurt the bill’s chances of becoming law, because groups like Black Lives Matter believe the Mann Act has been applied discriminatorily and should be repealed.

The advocates suspect tech-industry lobbyists are behind the new approach. In late November, more than 30 anti-sex-trafficking groups and activists, including Rights4Girls, Shared Hope International, Consumer Watchdog, and Cindy McCain sent a letter to members of the House to “express our objection to recent efforts by some in the tech sector to undermine this proposed legislation.” On Monday evening, the same group sent another letter addressed to the ranking members of the Judiciary Committee, ahead of a planned Tuesday committee meeting to mark up the new bill.

Although the new letter does not mention the tech industry’s role, some advocates point out that the language in the amendment closely mirrors a suggestion made by Chris Cox, a former congressman and lobbyist who serves as outside counsel for NetChoice, an advocacy group funded in part by Google. NetChoice declined to say whether Google was one of its larger donors, but noted that it has two dozen members. “We don’t speak for any one member, nor do we represent any members,” spokesperson Carl Szabo, the group’s vice president, told WIRED.

Advocates also point to an email from a lawyer for the Judiciary Committee as another sign that tech firms may have been involved. They believe the Nov. 8 email from Margaret Barr was intended for tech industry lobbyists, but mistakenly reached additional recipients. In the email, Barr outlines the changes to the bill, then writes that the committee believes the new language "will sufficiently protect your clients from criminal and civil liability, while permitting bad actors to be held accountable." The advocates think Barr was addressing tech lobbyists because the initial opposition to the bill from companies like Google was driven by concerns about liability. Barr referred questions to a spokesperson for the Judiciary Committee, who did not respond to a request for comment.

The new approach was introduced by Representative Ann Wagner (R-Missouri). Wagner's office says the changes were made with the support of the Department of Justice, local district attorneys, and advocates. Her office provided a letter of endorsement from the National Association of Assistant United States Attorneys and two nonprofits that support the new approach: the Freedom Coalition, a right-wing Christian organization that is not focused on human trafficking, and the US Institute Against Human Trafficking, another faith-based group.

In a statement to WIRED, Wagner says, “I am adamant that Congress passes legislation that will prevent victimization, not only via Backpage.com but also the hundreds of other websites that are selling America’s most vulnerable children and adults.”

Senate sponsors of the bill do not support the changes. In a statement to WIRED, Senator Richard Blumenthal, the Democratic cosponsor of the Senate bill, says, “This legislation’s priorities are shamefully misplaced. There is no good reason to proceed with a proposal that is opposed by the very survivors it claims to support, particularly when the alternative is a carefully crafted measure supported by all major stakeholders.”

Senator Rob Portman, the Republican cosponsor, says the new proposal "is opposed by advocates because they're concerned it is actually worse for victims than current law."

The Internet Association, a key tech trade group, switched its position to support the Senate bill, known as the Stop Enabling Sex Traffickers Act, shortly after representatives of Google, Facebook, and Twitter faced two days of criticism from lawmakers for their roles in enabling Russian meddling in the 2016 election. People familiar with the matter said Facebook was central to the group's reversal, and that Google went along reluctantly.

A few days after the Internet Association announced its support, Facebook COO Sheryl Sandberg wrote a Facebook post in support of the bill. Facebook declined to say whether it supports the new House approach, known as the Allow States and Victims to Fight Online Sex Trafficking Act.

In a statement to WIRED, Facebook said: “Facebook prohibits child exploitation of any kind, and we support giving victims of these horrible crimes more tools to fight platforms that support sex traffickers.”

After the Internet Association endorsed the bill, Google assured Senate offices that it would stop lobbying efforts to derail the bill, according to a person familiar with the matter.

“I hope Google is not working at cross purposes with the survivors who are desperately seeking redress,” says Mary Mazzio, a filmmaker who has been active in the effort to hold websites more accountable for trafficking on their pages.

The Department of Justice and Google did not respond to requests for comment.

Lauren Hersh, a former prosecutor and national director of World Without Exploitation, a national coalition of 130 groups, met with lawmakers Monday to tell them that she and other advocates do not support the House bill. "We just want to slow this process down in the House. Our ask is to not have this go to Judiciary [Tuesday]. All the steps that were taken to [achieve] compromise on SESTA, we want that to happen here," she says.