NSA Spy Buildings, Facebook Data, and More Security News This Week

It has been, to be quite honest, a fairly bad week, as far as weeks go. But despite the sustained downbeat news, a few good things managed to happen as well. So we’ll start with those.

California has passed the strongest digital privacy law in the United States, for starters, which as of 2020 will give consumers the right to know what data companies collect about them, and to bar those companies from selling it. It’s just the latest in a string of uncommonly good bits of privacy news, which included last week’s landmark Supreme Court decision in Carpenter v. US. That ruling will require law enforcement to get a warrant before accessing cell tower location data. And at the beginning of the week, the Wi-Fi Alliance detailed the full specifications of the WPA3 security standard that’s going to make the next generation of Wi-Fi much, much safer to use.

And then there’s the bad news. A marketing firm called Exactis left as many as 340 million personal information records sitting on the open internet for anyone to find. Anthony Kennedy announced that he’ll retire from the Supreme Court, an absence that will have ramifications for privacy and technology. The next arms race is going to happen in space, which will be less fun than it sounds. And Congress wants to talk with Cambridge Analytica alum Matt Oczkowski about whether his new firm, Data Propria, will just repeat the same indiscretions as its former employer.

But wait, there’s more! As always, we’ve rounded up all the news we didn’t break or cover in depth this week. Click on the headlines to read the full stories. And stay safe out there.

The NSA’s Secret Spy Hubs in 8 Major Cities

The Intercept this week published the locations of eight AT&T buildings that it says also serve as surveillance hubs for the National Security Agency. By piecing together public documents, classified files, and interviews, the outlet identified these networking equipment centers in Seattle, San Francisco, Los Angeles, Chicago, Dallas, New York, DC, and Atlanta. These locations are significant in that they route traffic not just from AT&T customers but from other internet backbone providers who have so-called peering agreements with the telecom giant. The facilities don’t exist specifically for the NSA; they simply offer the most bang for the buck in terms of watching data pass through. There’s nothing necessarily illegal about the arrangement, but the NSA is prohibited from spying on communications between two US citizens—a lot of which presumably travels through these eight sites.

Another Day, Another App That Exposed Data From Millions of Facebook Users

And you thought Cambridge Analytica got to have all the fun! This week, security researcher Inti De Ceukelaire outlined his discovery that a popular Facebook app called NameTests exposed personal data in a JavaScript file that any third party could have accessed. Facebook paid out $8,000 to the charity of De Ceukelaire’s choice as part of a bug bounty, but that doesn’t go very far toward helping the 120 million people—yep, 120 million—who had their data potentially exposed.

Another Data Leak Outs Law Enforcement Info

Texas State University’s Advanced Law Enforcement Rapid Response Training has a pretty self-explanatory mission. It also, reports ZDNet, exposed a database containing the personal information of thousands of officials who have gone through its program since April 2017. The database includes contact information like home addresses and phone numbers. Several email messages were also left vulnerable, including some that detailed a lack of law enforcement resources in certain communities—information that could be used by criminals looking to take advantage of soft spots.

Silk Road Kingpin Ross Ulbricht’s Legal Road Ends at Supreme Court

Ross Ulbricht, who went by the moniker Dread Pirate Roberts when operating the notorious dark web bazaar the Silk Road, is officially out of appeals. Ulbricht had asked the Supreme Court to reconsider his life sentence; the justices declined. Ulbricht had previously lost an appeal in 2017, after his initial sentencing in 2015.



Europe Considers a New Copyright Law. Here’s Why That Matters

Even as businesses all over the world raced to comply with the sweeping privacy rules that took effect in the European Union last month, EU lawmakers were focusing on another set of changes that could have a worldwide effect on the internet.

Today, a committee of the European Parliament approved a proposed copyright law that would likely lead many apps and websites to monitor uploaded content with automatic filters that identify copyrighted material. The proposal now moves to a vote by the full European Parliament.

The result would be similar to how YouTube tries to identify and block copyrighted audio and video from being posted to its site, but applied to all types of content, including text, images, software, audio, and video. Critics say this part of the proposal, Article 13, would cause legitimate content, particularly satire or short excerpts, to be blocked, even outside the EU.

Another part of the proposal would require online services to pay news publications for using their content. This has been widely referred to as a “link tax,” but hyperlinks and search engines are specifically exempted in the newest draft of the directive provided by European Parliament member Julia Reda, a member of the Pirate Party Germany. The rules are widely seen as a way to force services like Facebook and Twitter that show short snippets or other previews of news stories to pay a fee to publishers, but the draft doesn’t make clear whether snippets would remain permissible and, if so, how long they can be. The impact on Google is also unclear, since some of the material it shows, like its “featured snippet” information boxes, may not be considered search-engine listings.

The proposal is the latest effort by European governments to rein in US technology giants. In addition to its privacy rules, the EU has in recent years imposed steep antitrust fines on Google, handed Apple a hefty tax bill, and passed the digital “right to be forgotten.” Last year, Germany passed a law ordering social media companies to delete hate speech within 24 hours of it being posted. Unlike those other rules, which focus on fines and penalties, the copyright proposal tries to put more money into the pockets of publishers in Europe and elsewhere by mandating licensing fees.

A coalition of four European publishing groups circulated a statement applauding the European Parliament “for taking an essential step for the future of a free, independent press, for the future of professional journalism, for the future of fact-checked content, for the future of the rich, diverse and open internet and, ultimately, for the future of a healthy democracy.”

The copyright proposal would take the form of an EU “directive,” which would then be translated into laws in each EU country. Those laws could vary somewhat. That, along with the vague wording of some parts of the proposal, makes it hard to predict the precise effects of the rules.

Google’s head of global public policy, Caroline Atkinson, objected to the idea of preemptive filtering for all types of content in a 2016 post about an earlier version of the proposal. “This would effectively turn the internet into a place where everything uploaded to the web must be cleared by lawyers before it can find an audience,” she wrote. Atkinson wrote that paying to display snippets was not viable and would ultimately reduce the amount of traffic that Google sent publishers via Google News and search. Facebook and Twitter did not respond to requests for comment.

The proposal would shift the responsibility for publishing copyright-infringing work on the web from the users of a platform to the platforms themselves. It would mandate that services intended to store and publish copyrighted materials take “appropriate and proportionate measures” to ensure that copyrighted material is not available without the permission of its owner. It doesn’t specify that sites must apply YouTube-style automatic blocking, and it says the “implementation of measures by providers should not consist in a general monitoring obligation.” But critics argue the directive will result in the widespread use of automatic filters. In some cases, platforms could avoid blocking content by licensing it from rights holders.

The legislation would apply only within EU countries, but companies might implement filtering around the world, says Gus Rossi, director of global policy at the advocacy group Public Knowledge. He points to the way some companies, such as Microsoft, opted to follow the EU’s privacy rules globally, not just in Europe.

Automated filters typically work like this: Rights holders upload their content to a platform like YouTube, and the platform’s software automatically watches for copies of those works. When the filter detects what it suspects to be infringing content, the platform blocks it from being published, or deletes it if it has already been posted.
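To make that loop concrete, here is a minimal, purely illustrative sketch in Python rather than any platform’s actual system. It assumes a toy fingerprint (hashes of overlapping byte chunks), an arbitrary 50 percent match threshold, and invented names like CopyrightFilter; real filters use perceptual fingerprints of audio, video, and images instead of exact byte comparison.

```python
# Toy sketch of rights-holder registration and upload matching (not a real Content ID system).
from hashlib import sha256


def fingerprint(data: bytes, chunk: int = 64) -> set[str]:
    """Hash overlapping chunks so partial copies still share some fingerprints."""
    step = max(chunk // 2, 1)
    return {
        sha256(data[i:i + chunk]).hexdigest()
        for i in range(0, max(len(data) - chunk, 1), step)
    }


class CopyrightFilter:
    """Stand-in for a platform-side upload filter."""

    def __init__(self) -> None:
        # Maps a rights holder's name to the fingerprints of their registered works.
        self.registry: dict[str, set[str]] = {}

    def register(self, owner: str, work: bytes) -> None:
        """Rights holders submit reference copies of the works they want protected."""
        self.registry.setdefault(owner, set()).update(fingerprint(work))

    def check_upload(self, upload: bytes, threshold: float = 0.5) -> str | None:
        """Return the matching owner if enough of the upload overlaps a registered work."""
        fp = fingerprint(upload)
        for owner, reference in self.registry.items():
            if len(fp & reference) / len(fp) >= threshold:
                return owner  # the platform would block or delete this upload
        return None


# Hypothetical usage: a registered text blocks a near-identical upload.
if __name__ == "__main__":
    platform = CopyrightFilter()
    article = b"Exclusive: the full text of a paywalled article..." * 10
    platform.register("Example Publisher (hypothetical)", article)
    print("blocked, claimed by:", platform.check_upload(article))
```

The perceptual matching that real systems rely on, rather than this kind of exact comparison, is precisely where critics expect false positives on excerpts, parodies, and other legitimate uses.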

But critics say the filters will screen out content that should be permissible, such as short excerpts from another work. In one ironic instance, the French far-right political party National Rally (formerly known as the National Front), which supports the proposed copyright directive, recently had its YouTube channel briefly suspended over alleged copyright violations, Techdirt reported. The channel is available again. National Rally did not respond to a request for comment.

Automated filters could also be abused by people who don’t hold the rights to the content they attempt to protect, says Cory Doctorow, an author and special adviser to the Electronic Frontier Foundation. Someone could upload, say, the US Constitution to a site like Medium and claim it as their copyrighted work. Then, if Medium had implemented an automatic filtering system, the platform would block anyone from quoting long passages of the Constitution. Doctorow says this could be abused by pranksters, or by people who want to suppress particular content. The draft proposal doesn’t include any penalties for making false claims.

Automated filters can also be expensive for smaller organizations to implement. “Far from only affecting large American Internet platforms (who can well afford the costs of compliance), the burden of Article 13 will fall most heavily on their competitors, including European startups” and small businesses, says an open letter signed by more than 70 internet pioneers, including web inventor Tim Berners-Lee and Wikipedia founder Jimmy Wales. The letter says the filters are likely to be unreliable, and that the cost of installing them will be “expensive and burdensome.”

European Parliament member Axel Voss of the Christian Democratic Union of Germany admits the proposal isn’t perfect and will likely lead to some false positives. But he tells WIRED it would be better than the current system of allowing big platforms to profit by running advertising alongside copyright-infringing material. “We have to start somewhere,” he says.

Voss says the directive would apply only to a relatively small number of websites. The draft would apply only to sites that are meant to be used to publish content and that “optimize” that content by doing things like categorizing it. The draft has exceptions for online retailers that mostly sell physical goods, “open source software developing platforms,” and noncommercial sites like “online encyclopaedia.” But Reda contends that some sites might inadvertently be covered by the rules because the definition of which sites are included is vague. For example, dating apps might have to screen the photos users upload to ensure they don’t infringe copyrights.

The ultimate effect of the directive is murky, in part because it will be translated into law differently in different countries. That is especially problematic when it comes to defining when a site might need to pay to include a snippet or preview of a news article, since each country could come up with a different maximum amount of content that would be considered allowable.



Trump Stokes Outrage in Silicon Valley—But It’s Selective

Silicon Valley is in the middle of an awakening, the dawning but selective realization that its products can be used to achieve terrible ends.

In the past few months, this growing unease has bubbled up into outright rebellion from within the rank and file of some of the largest companies in the Valley, beginning in April when Google employees balked at the company’s involvement with a Pentagon artificial intelligence program called Project Maven. On Monday, Amazon shareholders sent an open letter asking CEO Jeff Bezos to halt a program developing facial recognition software for governments pending a review by the board of directors. Also this week, as general horror built up over the Trump administration’s new “zero tolerance” immigration policy, which has led to the separation of more than 2,000 children from their parents, Microsoft employees objected to their company’s contract with US Immigration and Customs Enforcement to use Microsoft’s Azure cloud services.

“We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm,” reads an open letter posted to the company’s internal message board Tuesday.

That same day, Microsoft president Brad Smith published a blog post calling on the government to end the zero-tolerance policy. He also pointed out that Microsoft cofounded Kids in Need of Defense, one of the largest immigrant advocacy groups that is working to reconnect children and parents, and whose board Smith himself chairs. CEO Satya Nadella sent a company-wide memo Wednesday, which he also published online, assuring employees that Azure was not used to support ICE’s separation of families. Other Silicon Valley leaders have followed suit in publicly opposing Trump’s immigration policy: Facebook CEO Mark Zuckerberg is raising money for organizations working at the border, Apple’s Tim Cook called the policy inhumane, and Cisco CEO Chuck Robbins called on Trump to end the policy, among others.

The question now is whether this is the start of a larger reflection on the role technology plays not just in government work but in all aspects of life. Silicon Valley’s internal outrage can have the most power when it’s aimed at what’s broken about itself.

So far, the tech employee objections have mostly centered on their companies’ work with the government on high-profile military or law enforcement projects. The pushback is powerful: Google CEO Sundar Pichai announced he would not renew the contract with the Department of Defense. Though Microsoft hasn’t canceled its ICE contract, it immediately moved to address its employees’ concerns.

Yet, government contracts like these are a tiny part of the problems in tech. “It’s easy to stand up against DOD and drones or ICE using your cloud. These are certain really easy tangible things to stand up against, but meanwhile your company is doing all this other stuff that deserves deeper scrutiny,” says Kathy Pham, a former product manager at Google and founding product lead at the United States Digital Service. As a fellow at the Berkman Klein Center for Internet and Society, she is currently studying how to make tech a more ethical industry.

Where, she and others wonder, is this level of concern over policies and products that originate within these companies themselves and that can disenfranchise, divide, or otherwise harm people?

Everyday Ethical Concerns

When Pham first read the Google Maven news, she wondered why Googlers were only now realizing that the company’s products could be used in damaging ways. Where was the outcry over the ways Google Maps is used for surveillance? Her question echoes the thoughts of author Yasha Levine, who pointed to ICE’s use of Google Maps, telling my colleague Nitasha Tiku on Monday, “Does that make Google complicit in Trump’s immigration policies? I say, yes.” Levine is concerned about all the mundane ways tech is used by powerful interests, writing on Twitter today: “When everyone was freaking out over Cambridge Analytica I reminded people that powerful interests use tech like that all the time, including Charles Koch and Co.”

The problem goes beyond government integrations, and beyond any one tech company. Where is the public outcry over biased search results? The mundane surveillance economy? Or racist facial recognition software? These issues have received sustained attention from academia and the press, but haven’t stoked rebellion from inside the companies using and developing them.

We haven’t seen public criticism from Google employees over the ways Google Plus is being coopted by Nazis after they are kicked off of Twitter and Facebook, or the privacy nightmare of how it tracks people. We haven’t even seen much public criticism from within Facebook over the role its platform plays in the dissemination of false political propaganda, such as during the 2016 US election and around the world in places like Sri Lanka, despite facing so much external criticism.

Facebook was forced to respond in some way to the Cambridge Analytica scandal, and has since taken steps to clean up fake news on the site. But those efforts seem to lack a wider self-awareness about the scope of the issues and the ways in which disinformation flourished on the site by taking advantage of features, not bugs, in the platform. Zuckerberg’s mealy-mouthed congressional testimony, and the subsequent silence in the valley, recently led longtime resident and management expert Tom Peters to tell Recode that Silicon Valley had become a “moral cesspool.”

Former Facebook employee Sandy Parakilas wrote on Twitter Tuesday, “To the tech execs who made the bad decisions that got us here, and who are tweeting their horror at the child separation policy: THIS IS YOUR FAULT! Don’t ever forget that.” In a follow-up with WIRED, he explained he was specifically upset that tech leaders, like Zuckerberg, whose design and product choices helped get Donald Trump elected, would now come out against his policies without any acknowledgment of their own culpability.

To make it worse, he says, “so few of them have called Trump out by name. I think it’s cowardly to express outrage at the policy while continuing to do business with the administration, without even naming the person directly responsible.”

Selective Outrage

So why does the tech industry have a louder voice speaking out about government contracts than work cooked up in its own kitchens?

Silicon Valley workers see themselves as part of the solution to society’s ills, not the problem. And the history of government-tech partnerships is not all bad. After all, the internet itself began as a government-funded project. The early days of the valley were nurtured by US government support. And many tech-government partnerships have admirable intentions. Take the USDS, which tries to act like a startup to solve technical problems more nimbly than government bureaucracy usually allows.

But the extreme polarization of American politics has seeped into everyday life. Everything feels political now, even tech. And because the Trump administration has been so defined by controversy and policies many people find objectionable, any government-tech alliance has become suspect. That, combined with the cacophony on social media, creates an environment where people feel obligated to speak out about whatever outrage is dominating the news cycle. We saw the same thing last year after white supremacists marched in Charlottesville: Google and GoDaddy refused to host Nazi websites, and Airbnb closed white supremacist accounts. (Though even here there are limits—the gun control debate, for instance, hasn’t received the same attention from the tech world.)

Pham points out that there were problematic policies under President Barack Obama, too. She remembers when she worked at USDS that her team had to write Obama a letter explaining why a security improvement he wanted to make was a very bad idea. “We probably should have scrutinized things then, too, but because he was a much more palatable president we ignored certain contracts more,” she says.

Silicon Valley analyst and writer Ben Thompson, who last year argued that tech CEOs can’t just refuse to work with Trump, says the zero-tolerance policy crosses a moral line that requires tech leaders to take action. Writing in his widely influential daily newsletter Wednesday, he concludes that “preserving – or, as has often been the case, pushing for – the fundamental human rights that underly those liberties is not just a civil responsibility but the ultimate fiduciary duty as well.”

Complicity with immoral government policies is an easy way for techies to draw a line in the sand. These contracts are clearly defined and publicized by the press. We’re familiar with the story of companies being complicit in immoral government actions—people remember how IBM worked directly with Nazi Germany, for instance. It can be harder to pinpoint how algorithms are eroding society, or what to do about it.

And while they are vocal, the employees speaking up about their companies’ cooperation with government agencies are still a minority. More than 4,000 Google employees signed a petition to cancel the Project Maven contract, but there are more than 85,000 employees at the company. As of Tuesday night more than 100 people signed the open letter at Microsoft—a company of more than 124,000.

Where Your Voice Is Loudest

Many employees are reluctant to speak out about policies within their own company even if they want to because doing so could get them fired or sued. In some cases, employees do post to internal message boards like the one used by Microsoft employees to voice their concerns, and those don’t always leak out to the press. Former employees are in a better position to speak out.

Additionally, taking a stand against something you or your team created is very hard, even if you’re watching that thing be abused or misused. “Google Maps and Google tracking are people’s babies, their hearts and souls are in them,” says Pham, picking an example at random. The same is true for News Feed at Facebook, the very product that Russia used to sow discord during the election.

Tech leaders are increasingly taking their cues from their employees. But they, too, can do more than talk. Zuckerberg’s Facebook post asking people to raise money for immigration advocates, for instance, rings a little hollow to some, considering his own vast personal wealth.

For the ethical awakening in Silicon Valley to be real, it needs to go beyond bandwagoning and turn its critical eye back on itself.

“Engineers have the loudest voices in companies. In my experience when engineers really rally around something the leadership really changes it,” says Pham. “You have a lot of power in these companies. Don’t waste your opportunity. There are so many other things to change… Many of these tools exacerbate injustices, many of these tools are not being used for good and it’s important to speak up.”
