The algorithm has won. The most powerful social, video, and shopping platforms have all converged on a philosophy of coddling users in automated recommendations. Whether through Spotify’s personalized playlists, TikTok’s all-knowing For You page, or Amazon’s product suggestions, the internet is hell-bent on micromanaging your online activity.

At the same time, awareness of the potential downsides of this techno-dictatorial approach has never been higher. The US Congress recently probed whether social media algorithms are threatening the well-being of children, and new scholarship and books have focused fresh attention on the broad cultural consequences of letting algorithms curate our feeds. “I do think it reifies a lot of our cultural tastes in a way that at least I find concerning,” says Ryan Stoldt, an assistant professor at Drake University and member of the University of Iowa’s Algorithms and Culture Research Group.

In response to the growing sense of unease surrounding Big Tech’s mysterious recommender systems, digital refuges from the algorithm have begun to emerge. Entrepreneur Tyler Bainbridge is part of a nascent movement attempting to develop less-fraught alternatives to automated recommendations. He’s the founder of PI.FYI, a social platform launched in January that hopes to, in Bainbridge’s words, “bring back human curation.”

PI.FYI grew out of Bainbridge’s recommendation newsletter, Perfectly Imperfect, and has a simple conceit: Humans should receive recommendations only from other humans, not machines. Users post recommendations for everything from consumer products to experiences such as “being in love” or “not telling men at bars you study philosophy,” and they also crowdsource answers to questions like “What did you read last week?” or “London dry cleaner?”

Posts on the platform are displayed in chronological order, although users can choose between seeing a feed of content only from friends and a firehose of everything posted to the service. PI.FYI’s homepage offers recommendations from a “hand-curated algorithm”—posts and profiles selected by site administrators and some carefully chosen users.
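Part of the appeal of going chronological is how simple and auditable it is. As a rough sketch (not PI.FYI’s actual code; the Post model and field names here are invented), a friends-only feed is just a filter plus a sort on timestamps, with no engagement model anywhere in the loop:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def feed(posts, friends=None):
    """Return posts newest-first. If `friends` is given, keep only
    posts from those accounts; otherwise return the full firehose."""
    visible = [p for p in posts if friends is None or p.author in friends]
    return sorted(visible, key=lambda p: p.created_at, reverse=True)

# Hypothetical sample data, loosely modeled on PI.FYI-style posts.
posts = [
    Post("ana", "London dry cleaner?", datetime(2024, 3, 1, 9, 0)),
    Post("ben", "rec: being in love", datetime(2024, 3, 1, 10, 30)),
    Post("cam", "What did you read last week?", datetime(2024, 3, 1, 8, 15)),
]
print([p.author for p in feed(posts)])                  # ['ben', 'ana', 'cam']
print([p.author for p in feed(posts, friends={"ana"})]) # ['ana']
```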

“People long for the days of not being bombarded by tailored ads everywhere they scroll,” Bainbridge says. PI.FYI’s revenue comes from user subscriptions, which start at $6 a month. While its design evokes an older version of the internet, Bainbridge says he wants to avoid creating an overly nostalgic facade. “This isn’t an app built for millennials who made MySpace,” he says, claiming that a significant portion of his user base is Gen Z.

Spread, a social app currently in closed beta testing, is another attempt to provide a supposedly algorithm-free oasis. “I don’t know a single person in my life that doesn’t have a toxic relationship with some app on their phone,” says Stuart Rogers, Spread’s cofounder and CEO. “Our vision is that people will be able to actually curate their diets again based on real human recommendations, not what an algorithm deems will be most engaging, therefore also usually enraging,” he says.

On Spread, users can’t create or upload original text or media. Instead, all posts on the platform are links to content from other services, including news articles, songs, and videos. Users can tune their chronological feeds by following other users or choosing to see more of certain types of media.

Brands and bots are barred from Spread, and, like PI.FYI, the platform doesn’t support ads. Instead of working to maximize time-on-site, Rogers says his primary metrics for success will be indicators of “meaningful” human engagement, such as when someone clicks on another user’s recommendation and later takes an action like signing up for a newsletter or subscription. He hopes this will align companies whose content is shared on Spread with the platform’s users. “I think there’s a nostalgia for what the original social meant to achieve,” Rogers says.
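Rogers doesn’t spell out how Spread will compute this, but the general shape of such a metric is easy to sketch: the share of recommendation clicks that are followed by a downstream action. Everything below is an assumption for illustration (the event-log format, the field names, the seven-day attribution window), not Spread’s implementation:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (user, event_type, item, timestamp).
events = [
    ("u1", "click",  "newsletter-x", datetime(2024, 3, 1)),
    ("u1", "signup", "newsletter-x", datetime(2024, 3, 3)),
    ("u2", "click",  "newsletter-x", datetime(2024, 3, 2)),
]

def meaningful_engagement_rate(events, window=timedelta(days=7)):
    """Fraction of clicks followed by a signup on the same item,
    by the same user, within `window` of the click."""
    clicks = [(u, i, t) for u, e, i, t in events if e == "click"]
    signups = {(u, i): t for u, e, i, t in events if e == "signup"}
    converted = sum(
        1 for u, i, t in clicks
        if (u, i) in signups and timedelta(0) <= signups[(u, i)] - t <= window
    )
    return converted / len(clicks) if clicks else 0.0

print(meaningful_engagement_rate(events))  # 0.5: one of two clicks converted
```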

The Prosocial Ranking Challenge, a competition with a $60,000 prize fund, aims to spur development of feed-ranking algorithms that prioritize socially desirable outcomes, based on measures of users’ well-being and how informative a feed is. From June through October, five winning algorithms will be tested on Facebook, X, and Reddit using a browser extension.
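The challenge’s premise is that a feed can be ranked on signals other than predicted engagement. A minimal sketch of that idea, with invented posts, scores, and weights (the actual entries’ scoring models are the contestants’ own work and aren’t shown here):

```python
def prosocial_rank(posts, w_informative=0.6, w_wellbeing=0.4):
    """Re-rank posts by a weighted blend of an informativeness score and a
    predicted effect on well-being, rather than by predicted engagement.
    Both scores are assumed to come from upstream models, scaled to [0, 1]."""
    return sorted(
        posts,
        key=lambda p: w_informative * p["informative"] + w_wellbeing * p["wellbeing"],
        reverse=True,
    )

posts = [
    {"id": "rage-bait",   "informative": 0.1, "wellbeing": 0.2},
    {"id": "explainer",   "informative": 0.9, "wellbeing": 0.6},
    {"id": "cute-animal", "informative": 0.2, "wellbeing": 0.9},
]
print([p["id"] for p in prosocial_rank(posts)])
# ['explainer', 'cute-animal', 'rage-bait']
```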

Until a viable replacement takes off, escaping engagement-seeking algorithms will generally mean going chronological. There’s evidence people are seeking that out beyond niche platforms like PI.FYI and Spread.

Group messaging, for example, is commonly used to supplement artificially curated social media feeds. Private chats—threaded by the logic of the clock—can provide a more intimate, less chaotic space to share and discuss gleanings from the algorithmic realm: the trading of jokes, memes, links to videos and articles, and screenshots of social posts.

Disdain for the algorithm could help explain the growing popularity in the US of WhatsApp, which has long been ubiquitous elsewhere. Meta’s messaging app saw a 9 percent increase in daily US users last year, according to Apptopia data reported by The Wrap. And even inside today’s dominant social apps, activity is shifting away from public feeds and toward direct messaging, where chronology rules, according to Business Insider.

Group chats might be ad-free and relatively controlled social environments, but they come with their own biases. “If you look at sociology, we’ve seen a lot of research that shows that people naturally seek out things that don’t cause cognitive dissonance,” says Stoldt of Drake University.

While it provides a more organic means of curation, group messaging can still produce echo chambers and other pitfalls associated with complex algorithms. And when the content in your group chat comes from each member’s highly personalized algorithmic feed, things get even more complicated. Despite the flight to algorithm-free spaces, the fight for a perfect information feed is far from over.

Humans and Algorithms Engage in the Newest Online Culture Battle

A new battle is emerging on the internet between humans and algorithms, a clash not of force but of cultures and values. As algorithms become more deeply embedded in our online experiences, they shape how we consume information, how we interact with one another, and ultimately how we perceive the world around us.

Algorithms are, in essence, sets of rules or instructions that technology platforms use to curate and personalize content. They determine which news articles, videos, or social media posts we see, based on our past behavior, interests, and preferences. Convenient and efficient as that is, it raises concerns about bias and manipulation.

One of the main issues is the filter bubble: algorithms tend to show users content that aligns with their existing beliefs and preferences, effectively creating an echo chamber. The result is a narrowing of perspectives and less exposure to diverse opinions and ideas, which can leave individuals more polarized and less open to alternative viewpoints.
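The feedback loop behind a filter bubble is easy to reproduce in a toy model: if a recommender always serves whatever category a user has clicked most, exposure collapses to a single category almost immediately. This is a deliberately crude simulation, not a description of any real platform’s system:

```python
import random
from collections import Counter

random.seed(0)

def simulate(rounds=50):
    """A user starts with a slight lean toward one category; a recommender
    that always serves the most-clicked category locks that lean in."""
    clicks = Counter({"sports": 2, "science": 1, "politics": 1})
    shown_history = []
    for _ in range(rounds):
        shown = clicks.most_common(1)[0][0]  # the recommender's single pick
        if random.random() < 0.9:            # the user usually clicks it
            clicks[shown] += 1
        shown_history.append(shown)
    return Counter(shown_history)

print(simulate())  # Counter({'sports': 50}): every slot goes to one category
```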

Algorithms can also inadvertently perpetuate stereotypes and discrimination. In online advertising, for example, algorithms that target audiences using demographic data can end up excluding certain racial or ethnic groups from seeing certain ads. That not only reinforces existing biases but also limits opportunities for marginalized communities.

Humans, of course, bring their own biases and subjectivity to the table. People who curate content or take part in online discussions are influenced by their beliefs, experiences, and emotions. That can produce diverse perspectives and lively debate, but it can also result in misinformation, hate speech, and the spread of harmful ideologies.

The battle between humans and algorithms is therefore not a simple dichotomy but a complex interplay. Algorithms are created and programmed by humans, who embed their own biases and values in the design; humans, in turn, are shaped by algorithms, which structure their online experiences and influence their behavior.

To address this culture battle, it is crucial to strike a balance between human curation and algorithmic personalization. Platforms should prioritize transparency and accountability, ensuring that users understand how algorithms work and have control over their own online experiences. Additionally, diversity and inclusivity should be at the forefront of algorithm design, with regular audits to identify and rectify any biases.

Education also plays a vital role in this battle. Users need to be aware of the potential pitfalls of algorithms and actively seek out diverse perspectives. Critical thinking skills should be emphasized to help individuals navigate the online world and discern between reliable information and misinformation.

Ultimately, the battle between humans and algorithms is not about choosing one over the other, but rather finding a harmonious coexistence. By recognizing the strengths and limitations of both humans and algorithms, we can create an online culture that promotes diversity, inclusivity, and informed decision-making.
