The Next Great (Digital) Extinction

Somewhere between 2 and 3 billion years ago, what scientists call the Great Oxidation Event, or GOE, took place, causing the mass extinction of anaerobic bacteria, the dominant life form at the time. A new type of bacteria, cyanobacteria, had emerged, and it had the photosynthetic ability to produce glucose and oxygen out of carbon dioxide and water using the power of the sun. Oxygen was toxic to many of their anaerobic cousins, and most of them died off. In addition to being a massive extinction event, the oxygenation of the planet kicked off the evolution of multicellular organisms (620 to 550 million years ago), the Cambrian explosion of new species (540 million years ago), and an ice age that triggered the end of the dinosaurs and many cold-blooded species, leading to the emergence of the mammals as the apex group (66 million years ago) and eventually resulting in the appearance of Homo sapiens, with all of their social sophistication and complexity (315,000 years ago).

I’ve been thinking about the GOE, the Cambrian Explosion, and the emergence of the mammals a lot lately, because I’m pretty sure we’re in the midst of a similarly disruptive and pivotal moment in history that I’m calling the Great Digitization Event, or GDE. And right now we’re in that period where the oxygen, or in this case the internet as used today, is rapidly and indifferently killing off many systems while allowing new types of organizations to emerge.

As WIRED celebrates its 25th anniversary, the Whole Earth Catalog its 50th anniversary, and the Bauhaus its 100th anniversary, we’re in a modern Cambrian era, sorting through an explosion of technologies enabled by the internet that are the equivalent of the stunning evolutionary diversity that emerged some 500 million years ago. Just as in the Great Oxidation Event, in which early organisms that created the conditions for the explosion of diversity had to die out or find a new home in the mud on the ocean floor, the early cohort that set off the digital explosion is giving way to a new, more robust form of life. As Fred Turner describes in From Counterculture to Cyberculture, we can trace all of this back to the hippies in San Francisco in the 1960s and 1970s. They were the evolutionary precursor to the advanced life forms observable in the aftermath of the shooting at Stoneman Douglas High School. Let me give you a first-hand account of how the hippies set off the Great Digitization Event.

From the outset, members of that movement embraced nascent technological change. Stewart Brand, one of the Merry Pranksters, began publishing the Whole Earth Catalog in 1968, which spawned a collection of other publications that promoted a vision of society that was ecologically sound and socially just. The Whole Earth Catalog gave birth to one of the first online communities, the Whole Earth ‘Lectronic Link, or WELL, in 1985.

Around that time, R.U. Sirius and Morgan Russell started the magazine High Frontiers, which was later relaunched with Queen Mu and others as Mondo 2000. The magazine helped legitimize the burgeoning cyberpunk movement, which imbued the growing community of personal computer users and participants in online communities with an ’80s version of hippie sensibilities and values. A new wave of science fiction, represented by William Gibson’s Neuromancer, added a punk rock dystopian edge.

Timothy Leary, a “high priest” of the hippie movement and New Age spirituality, adopted me as his godson when we met during his visit to Japan in 1990, and he connected me to the Mondo 2000 community that became my tribe. Mondo 2000 was at the hub of cultural and technological innovation at the time, and I have wonderful memories of raves advertising “free VR” and artist groups like Survival Research Labs that connected the hackers from the emerging Silicon Valley scene with Haight-Ashbury hippies.

I became one of the bridges between the Japanese techno scene and the San Francisco rave scene. Many raves in San Francisco happened in the then-gritty area south of Market Street, near Townsend and South Park. ToonTown, a rave producer, set up its offices (and living quarters) there, which attracted designers and others who worked in the rave business, such as Nick Philip, a British BMX rider and designer. Nick, who started out designing flyers for raves using photocopy machines and collages, created a clothing brand called Anarchic Adjustment, which I distributed in Japan and which William Gibson, Deee-Lite, and Timothy Leary wore. He began using computer graphics tools from companies like Silicon Graphics to create the artwork for T-shirts and posters.

In August 1992, Jane Metcalfe and Louis Rossetto rented a loft in the South Park area because they wanted to start a magazine to chronicle what had evolved from a counterculture into a powerful new culture built around hippie values, technology, and the new Libertarian movement. (In 1971, Louis had appeared on the cover of The New York Times Magazine as coauthor, with Stan Lehr, of “Libertarianism, The New Right Credo.”) When I met them, they had a desk and a 120-page laminated prototype for what would become WIRED. Nicholas Negroponte, who had cofounded the MIT Media Lab in 1985, was backing Jane and Louis financially. The founding executive editor of WIRED was Kevin Kelly, formerly one of the editors of the Whole Earth Catalog. I got involved as a contributing editor. I didn’t write articles at the time, but I made my media debut in the first issue of WIRED, mentioned as a kid addicted to MMORPGs in an article by Howard Rheingold. Brian Behlendorf, who ran SFRaves, a mailing list announcing and discussing the SF rave scene, became the webmaster of HotWired, a groundbreaking exploration of the new medium of the Web.

WIRED came along just as the internet and the technology around it began to morph into something much bigger than a science fiction fantasy; in other words, on the cusp of the GDE. The magazine tapped into the design talent around South Park, literally connected to the design and development shop Cyborganic by Ethernet cables strung through the building where they shared a T1 line. It embraced the post-psychedelic design and computer graphics that distinguished the rave community and, with the greatest impact coming from people such as Barbara Kuhr and Erik Adigard, established its own distinct look, one that bled over into the advertisements in the magazine, like the one Nick Philip designed for Absolut.

Before long, Vice President Al Gore started talking about the internet as the Next Big Thing—I remember Jane excitedly telling me he had a whole box of first issues at Blair House. In 1996, John Perry Barlow, a lyricist for the Grateful Dead, wrote the hippie-inspired, libertarian-fueled manifesto “A Declaration of the Independence of Cyberspace,” which in many ways marked the pivotal moment when the dog caught the car: Silicon Valley emerged from the subculture and the dotcom boom began. WIRED became a global symbol of the dramatic transformation headquartered in Silicon Valley, one that made consumers lust after its products and struck fear into established businesses around the world.

The world also began to go through something like the Cambrian Explosion, as the internet lowered the cost of collaboration and invention to nearly zero, creating an explosion of new ideas and products. Meanwhile the culture began to shift away from its roots in the hippie movement and the cyberpunk-rave scene. Today, much of the carefree, welcoming early sensibility of the movement has given way to the Singularity movement’s obsession with exponential growth. Timothy Leary’s famous calls to “question authority and think for yourself” and “turn on, tune in, drop out” have turned from an aspirational rallying cry into the systematic destruction of our institutions, with Silicon Valley startups disrupting traditional companies.

This flourishing of technoculture had, and continues to have, a broad impact on business and society. People, companies, organizations, and communities that didn’t adapt struggled to stay alive and either died, like many of the anaerobic bacteria, or retreated to the equivalent of the bottom of the ocean, where anaerobic bacteria hide in the post-oxygen world—taxi companies protected from Uber by governments; paywalled Elsevier, protected by the conservative nature of academic publishing; the pirate cassette tape business in North Korea.

Legacy businesses have been disintermediated by the rise of companies built around the internet, which have, within a very short period, exerted dominion over the world. This is the GDE, and it reminds me of nothing so much as the GOE in its impact and implications. As our modern dinosaurs crash down around us, I sometimes wonder what kind of humans will eventually walk out of this epic transformation. Trump and the populism that’s rampaging around the world today, marked by xenophobia, racism, sexism, and rising inequality, are greatly amplified by the forces the GDE has unleashed. For someone like me who saw the power of connection build a vibrant, technologically meshed ecosystem distinguished by peace, love, and understanding, watching the polarization and hatred empowered by the internet today is like watching your baby turn into the little girl in The Exorcist.

Nonetheless, the same tools of post-internet collective action that fueled Trump and #gamergate also gave the kids from Stoneman Douglas High School in Parkland, Florida, the tools they needed to inspire students at some 3,800 schools across the country to walk out in protest over lax gun regulations, and to push stores like Dick’s Sporting Goods to stop selling guns. That sustains my hope. I see the #MeToo and Time’s Up movements also using new versions of the same methods to begin the long path to ending centuries of patriarchal power. Just as the photosynthesis that plants use to feed most life on Earth descends directly from the original cyanobacteria that caused the extinction event, the tools being used to spread progressive social change are derivatives of many of the tools that Trump and #gamergate have used.

The hippie culture that drove the rise of the GDE failed to completely fulfill the promise of new technology, but those anaerobic hippies did leave Gen Z a whole new set of tools to deploy. The new generation are the warm-blooded mammals able to thrive in an environment no longer appropriate for their cold-blooded ancestors. My generation and the hippies are the anaerobic bacteria heading toward the mud.



The Web’s Recommendation Engines Are Broken. Can We Fix Them?

I’ve been a Pinterest user for a long time. I have boards going back years, spanning past interests (art deco weddings) and more recent ones (rubber duck-themed first birthday parties). When I log in to the site, I’m served up a slate of relevant recommendations—pins featuring colorful images of baby clothes alongside pins of hearty Instant Pot meals. With each click, the recommendations get more specific. Click on one chicken soup recipe, and other varieties appear. Click on a pin of rubber duck cake pops, and duck cupcakes and a duck-shaped cheese plate quickly populate beneath the header “More like this.”

These are welcome, innocuous recommendations. And they keep me clicking.

But when a recent disinformation research project led me to a Pinterest board of anti-Islamic memes, one night of clicking through those pins—created by fake personas affiliated with the Internet Research Agency—turned my feed ugly. My babies-and-recipes experience morphed into a strange mish-mash of videos of Dinesh D’Souza, a controversial right-wing commentator, and Russian-language craft projects.

Renee DiResta (@noUpside) is an Ideas contributor for WIRED, writing about discourse and the internet. She studies narrative manipulation as the director of research at New Knowledge, is a Mozilla fellow on media, misinformation, and trust, and is affiliated with the Berkman Klein Center at Harvard and the Data Science Institute at Columbia University. In past lives she has been on the founding team of supply chain logistics startup Haven, a venture capitalist at OATV, and a trader at Jane Street.

Recommendation engines are everywhere, and while my Pinterest feed’s transformation was rapid and pronounced, it’s hardly an anomaly. BuzzFeed recently reported that Facebook Groups nudge people toward conspiratorial content, creating a built-in market for spammers and propagandists. Follow one ISIS sympathizer on Twitter, and several others will appear beneath the “Who to follow” banner. And sociology professor Zeynep Tufekci dubbed YouTube “the Great Radicalizer” in a recent New York Times op-ed: “It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm,” she wrote. “It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.”

Today, recommendation engines are perhaps the biggest threat to societal cohesion on the internet—and, as a result, one of the biggest threats to societal cohesion in the offline world, too. The recommendation engines we engage with are broken in ways that have grave consequences: amplified conspiracy theories, gamified news, nonsense infiltrating mainstream discourse, misinformed voters. Recommendation engines have become The Great Polarizer.

Ironically, the conversation about recommendation engines, and the curatorial power of the social giants, is also highly polarized. A creator showed up at YouTube’s offices with a gun last week, outraged that the platform had demonetized and downranked some of the videos on her channel. This, she felt, was censorship. It isn’t, but the Twitter conversation around the shooting clearly illustrated the simmering tensions over how platforms navigate content: there are those who hold an absolutist view of free speech and believe any moderation is censorship, and there are those who believe that moderation is necessary to facilitate norms that respect the experience of the community.

As the consequences of curatorial decisions grow more dire, we need to ask: Can we make the internet’s recommendation engines more ethical? And if so, how?

Finding a solution starts with understanding how these systems work, because they are doing exactly what they’re designed to do. Recommendation engines generally function in two ways. The first is a content-based system. The engine asks, is this content similar to other content that this user has previously liked? If you binge-watched two seasons of, say, Law & Order, Netflix’s recommendation engine will likely decide that you’ll like the other seventeen, and that procedural crime dramas in general are a good fit. The second type of filtering is what’s known as a collaborative filtering system. That engine asks, what can we determine about this user, and what do similar people like? These systems can be effective even before you’ve given the engine any feedback through your actions. If you join Twitter and your phone indicates you’re in Chicago, the initial “Who to follow” suggestions will feature popular Chicago sports teams and other accounts that people in your geographic area like. Recommender systems learn: as you reinforce by clicking and liking, they serve you things based on your clicks, likes, and searches—and those of people who resemble their ever-more-sophisticated profile of you. This is why my foray onto an anti-Islamic Pinterest board created by Russian trolls led to months of being served far-right videos and Russian-language craft pins; it was content that had been enjoyed by others who had spent time with those pins.
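
To make the two approaches concrete, here is a minimal sketch in Python of both engines side by side. The toy data, feature tags, and function names are invented for illustration, not any platform’s actual code; real systems operate on billions of interactions, but the core logic is the same: score by item similarity, or score by user similarity.

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are items (1 = liked).
# Users 0 and 1 have overlapping tastes; user 2 does not.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Toy item features for the content-based engine (tags: crime drama, cooking).
item_features = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [1, 1],
    [1, 0],
], dtype=float)

def cosine(a, b):
    """Cosine similarity, guarding against zero vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def content_based_scores(user):
    """Is this content similar to content the user has previously liked?"""
    liked = item_features[interactions[user] == 1]
    profile = liked.mean(axis=0)  # average of the user's liked items
    return np.array([cosine(profile, f) for f in item_features])

def collaborative_scores(user):
    """What do users similar to this user like?"""
    sims = np.array([cosine(interactions[user], interactions[u])
                     for u in range(len(interactions))])
    sims[user] = 0.0  # exclude the user's own history
    return sims @ interactions  # similarity-weighted votes per item

user = 0
unseen = np.where(interactions[user] == 0)[0]
for name, scores in [("content-based", content_based_scores(user)),
                     ("collaborative", collaborative_scores(user))]:
    best = unseen[np.argmax(scores[unseen])]
    print(f"{name}: recommend item {best}")  # both point to item 3 here
```

In practice the two are blended, and the feedback loop described above comes from retraining on every new click and like.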

Now imagine that a user is interested in content more extreme than Law & Order and Chicago sports. What then? The Pinterest algorithms don’t register a difference between suggesting duckie balloons and serving up extremist propaganda; the Twitter system doesn’t recognize that it’s encouraging people to follow additional extremist accounts, and Facebook’s Groups engine doesn’t understand why directing conspiracy theorists to new conspiracy communities is perhaps a bad idea. The systems don’t actually understand the content; they simply return what they predict will keep us clicking. That’s because their primary function is to help achieve the specific key performance indicators (KPIs) chosen by the company. We manage what we can measure. It’s much easier to measure time on site or monthly average user stats than to quantify the outcomes of serving users conspiratorial or fraudulent content. And when this complexity is combined with the overhead of managing outraged people who believe that moderating content violates free speech, it’s easy to see why the companies default to the hands-off approach.
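
As a cartoon of that incentive structure, here is a sketch of a ranker whose only objective is a KPI like predicted click-through rate; all titles and numbers are hypothetical. Nothing in the objective knows or cares what the content is, so the conspiratorial item wins the top slot simply because it is predicted to keep us clicking.

```python
# Hypothetical candidates, each with a single engagement KPI attached.
items = [
    {"title": "Hearty chicken soup recipe",    "predicted_ctr": 0.08},
    {"title": "Duck-themed cake pops",         "predicted_ctr": 0.06},
    {"title": "Shocking conspiracy 'exposed'", "predicted_ctr": 0.19},
]

# The entire ranking policy: maximize the KPI. Quality never enters.
feed = sorted(items, key=lambda item: item["predicted_ctr"], reverse=True)
for rank, item in enumerate(feed, start=1):
    print(rank, item["title"])  # the conspiracy item ranks first
```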

But it isn’t actually hands-off—there is no First Amendment right to amplification—and the algorithm is already deciding what you see. Content-based recommendation systems and collaborative filtering are never neutral; they are always ranking one video, pin, or group against another when they’re deciding what to show you. They’re opinionated and influential, though not in the simplistic or partisan way that some critics contend. And as extreme, polarizing, and sensational content continues to rise toward the top, it’s increasingly apparent that curatorial algorithms need to be tempered with additional oversight and reweighted to consider what they’re serving up.

Some of this work is already underway. Project Redirect, an effort by Google Jigsaw, redirects certain types of users who are searching YouTube for terrorist videos—people who appear to be motivated by more than mere curiosity. Rather than offer up more violent content, the approach of that recommendation system is to do the opposite—it points users to content intended to de-radicalize them. This project has been underway around violent extremism for a few years, which means that YouTube has been aware of the conceptual problem, and of the amount of power its recommender systems wield, for some time now. That makes its decision to address the problem in other areas by redirecting users to Wikipedia for fact-checking even more baffling.

Guillaume Chaslot, a former YouTube recommendation engine architect and now an independent researcher, has written extensively about the problem of YouTube serving up conspiratorial and radicalizing content—fiction outperforming reality, as he put it in The Guardian. “People have been talking about these issues for years,” he said. “The surveys, Wikipedia, and additional raters are just going to make certain issues less visible. But it won’t affect the main problem—that YouTube’s algorithm is pushing users in a direction they might not want.” Giving people more control over what their algorithmic feed serves up is one potential solution. Twitter, for example, created a filter that enables users to avoid content from low-quality accounts. Not everyone uses it, but the option exists.

In the past, companies have sporadically cracked down on content related to suicide, pro-anorexia, payday lending, and bitcoin scams. Sensitive topics are often dealt with via ad-hoc moderation decisions in response to a public outcry. Simple keyword bans are often overbroad, and lack the nuance to understand whether an account, Group, or Pin is discussing a volatile topic or promoting it. Reactive moderation often leads to outcries about censorship.

Platforms need to transparently, thoughtfully, and intentionally take ownership of this issue. Perhaps that involves creating a visible list of “Do Not Amplify” topics in line with the platform’s values. Perhaps it’s a more nuanced approach: inclusion in recommendation systems is based on a quality indicator derived from a combination of signals about the content, the way it is disseminated (are bots involved?), and the authenticity of the channel, group, or voice behind it. Platforms can decide to allow Pizzagate content to exist on their site while simultaneously deciding not to algorithmically amplify or proactively proffer it to users.
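
Here is a sketch of what such a quality indicator might look like in code; the signals, weights, and threshold are invented for illustration, not any platform’s actual policy. The key design choice is that the engagement KPI only ranks content that has already cleared the amplification bar, so content can exist without being proffered.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_ctr: float        # engagement KPI
    content_quality: float      # 0-1: e.g. fact-check and spam signals
    organic_spread: float       # 0-1: low if dissemination looks bot-driven
    source_authenticity: float  # 0-1: is the channel, group, or voice real?

# Invented weights for the composite quality score.
WEIGHTS = {"content_quality": 0.4, "organic_spread": 0.3, "source_authenticity": 0.3}
AMPLIFY_THRESHOLD = 0.5  # invented cutoff for algorithmic amplification

def quality_score(c: Candidate) -> float:
    return (WEIGHTS["content_quality"] * c.content_quality
            + WEIGHTS["organic_spread"] * c.organic_spread
            + WEIGHTS["source_authenticity"] * c.source_authenticity)

def rank_feed(candidates):
    # Content below the threshold may still exist on the platform,
    # but it is excluded from recommendation and amplification.
    eligible = [c for c in candidates if quality_score(c) >= AMPLIFY_THRESHOLD]
    return sorted(eligible, key=lambda c: c.predicted_ctr, reverse=True)

feed = rank_feed([
    Candidate("Instant Pot recipes", 0.07, 0.9, 0.9, 0.9),
    Candidate("Pizzagate 'evidence'", 0.21, 0.1, 0.2, 0.3),
])
for c in feed:
    print(c.title)  # only the recipe pin is amplified
```

The particular weights matter less than the separation of concerns: existence on the platform is one decision, and algorithmic amplification is another.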

Ultimately, we’re talking about choice architecture, a term for the way that information or products are presented to people in a manner that takes into account individual or societal welfare while preserving consumer choice. The presentation of choices has an impact on what people choose, and social networks’ recommender systems are a key component of that presentation; they are already curating the set of options. This is the idea behind the “nudge”—do you put the apples or the potato chips front and center in the school lunch line?

The need to reconsider the ethics of recommendation engines is only growing more urgent as curatorial systems and AI crop up in ever more sensitive places: local and national governments are using similar algorithms to determine who makes bail, who receives subsidies, and which areas need policing. As algorithms amass more power and responsibility in our everyday lives, we need to create the frameworks to rigorously hold them accountable—that means prioritizing ethics over profit.