Om Malik is a partner at True Ventures, a Silicon Valley-based early-stage venture capital group. Prior to joining True, he was the founder of Gigaom, a pioneering technology blog and media company.
The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over.…In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems.
Algorithms optimized for engagement shape what we see on social media and can goad us into participation by showing us things that are likely to provoke strong emotional responses. But although we know that all of this is happening in aggregate, it’s hard to know specifically how large technology companies exert their influence over our lives.
The moment exposes the tension between how social networks wish people used their services and the reality … Asking users to unlearn the habit of relying on social media will take time and may not work at all.
I read these three articles and was reminded of something I have known for a while, though I had not synthesized it succinctly enough: the internet, as we have known it, has evolved from a quaint, quirky place to a social utopia, and then to an algorithmic reality. In this reality, the primary task of these platforms is not about idealism or even entertainment — it is about extracting as much revenue as possible from human vanity, avarice, and narcissism.
Frankly, none of this should be a surprise. Most of the social algorithms have been specifically designed and optimized to do just that. The Social Internet began as a place to forge “friendships” and engage in “social interactions.” It performed its role as intended until companies needed to generate profit. By then, we were all hooked on the likes, hearts, retweets, and followers and the boost they gave to our egos.
Looking back, the very idea that socially inept and maladjusted founders would define online social norms feels almost laughable. The notion of having 5,000 people as “friends” was as preposterous then as it is now. We were naive in our thinking, and happy to replace real-life friendships with an unlimited number of online friends. After all, digital friends are superior to real ones, right? In my column for Business 2.0 magazine, I wrote:
This new startup might seem like the bastard child of EdTV and Blogger, the latest in tech-enhanced West Coast narcissism. But it actually points the way to a future where we use technology to stay in close touch with our friends and families around the world. Companies that take advantage of this trend are poised to capture more than just our attention.
Whether in Parisian cafes, Bombay chai stalls, or Manhattan singles’ bars, humans have an overwhelming need to get together, talk, communicate, and interact. Our genes are coded that way. It’s no surprise that as we rush toward an always-on, ever more connected society, we want to mimic these offline interactions on the Net.
Back then, the internet was still seen as a utopian ideal — not a massive marketing machine. Our friendly chats and discussions didn’t generate the advertising revenue that giants like Facebook and Twitter needed to keep growing. Sharing news and media links, however, proved an effective way for social platforms to keep users engaged. Discussing the latest news was often easier than initiating a genuine online conversation. Hence, the social internet morphed into “social media.”
It was evident where this was all headed.
Over the past few years, I’ve argued that there’s nothing truly “social” about social media and that algorithms now guide the flow of information, a direction that serves primarily the deities of advertising and revenue. The algorithms are there to do two things: boost engagement and sell more ads.
Six months ago, I looked up how to humanely euthanize a sick fish on Reddit. I found a method and now my fish is dead. Since then, Reddit has sent an email every week with novel ways to kill fish of all sizes. The algorithm must think I’m a fish mass murderer. It won’t stop. – Dustin Curtis
While it’s fashionable to point fingers at Elon Musk for his systematic (and accelerated) undermining of Twitter, the truth is more nuanced. Musk’s influence is palpable — and, with his $44 billion purchase, entirely his prerogative — but social media platforms had been losing their social essence for years. Derek Powazek, an internet old-timer like me, noted in a seminal essay that Twitter was programmed to be an Argument Machine:
With enough people, and enough short thoughts, arguments are sure to occur. When they do, we’ll add heat to them by making sure everyone can see the individual thoughts outside of the argument’s participants. Nothing like a hooting crowd to make a bad situation worse. I’m not saying that Twitter was designed to create arguments. I’m just saying that, if you set out to create an Argument Machine, it’d come out looking a lot like Twitter.
No one cared — and that is because we were all busy looking out for our selfish interests. And, to be frank, the platforms began to atrophy from the moment we started treating them as our personal marketing channels. As I’ve previously pointed out:
every tweet, every selfie is a chance to virtue signal, an opportunity to market yourself as someone — pundit, guru, genius, or goofball. There is no other way of putting it — we are addicted to the idea of an audience. When we go online, we are programmed to react to engagement triggers — likes, shares, retweets, hearts, and thumb-ups. Social and this addiction of audience have made us addicted to something even harder to give up once tasted: a constant feeling of self-importance. To live in this post-social future, one has to embrace ideas that are the antithesis of self-importance. After two decades of being trained by micro-dosing on dopamine, I am not sure we can!
How do social media algorithms work? To put it rather crudely (and simply), most social media systems follow pretty much the same rules — mostly because many of the people who designed these systems have hopped from one company to another. The algorithms examine the who, what, and why of every piece of content.
The algorithm considers who posted the content to the network. This could be a source of information, a friend, or a business. The value of this “posting entity” becomes more important if they have a lot of followers. A big media company, a major brand, or a famous person (aka influencer) is treated preferentially by the algorithm.
Algorithms decide what kind of content they will prioritize — photos, videos, reels, links to articles, memes, or plain old-fashioned text. (I explained this in my piece titled ‘What do Instagram and TikTok have to do with Asparagus.’)
For instance, a post by Gigi Hadid will take precedence over one by a less-known individual, and a video will likely garner more engagement than mere words.
If half a million people “heart” what Gigi Hadid has to say, or a million people retweet “fake news videos,” the algorithm will amplify that point of view. To me, there’s no discernible difference between the content that the Kardashian-Jenner clan produces and any advertisement from a drop-shipper on Instagram or Twitter. So-called “influencers” are as inauthentic as any bot. It’s becoming impossible to distinguish between a “real” influencer, whose motives and intentions are dubious, and a virtual influencer, who is, by definition, a fabrication. It’s all a façade, as it always has been.
Moreover, the way social media is structured rewards extreme ideas, ideologies, and those on the fringe. A radical idea will likely get more engagement (comments, likes, or reshares) than a seemingly rational comment. One need look no further than Twitter to see this in action. If you want to be heard, you must be quirkier or more outrageous than the next person. If not, you aren’t going to get even a nanosecond of attention.
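The who/what/engagement weighting described above can be sketched as a toy scoring function. To be clear, this is a hypothetical simplification for illustration: the weights, field names, and formula are my own invention, not any platform’s actual ranker — but it captures why a celebrity’s video outranks an unknown user’s text post.

```python
import math
from dataclasses import dataclass

# Hypothetical content-type weights: video and photos are assumed to
# out-rank links and plain text, per the pattern described above.
TYPE_WEIGHT = {"video": 3.0, "photo": 2.0, "link": 1.5, "text": 1.0}

@dataclass
class Post:
    author_followers: int  # the "who": bigger accounts get a boost
    content_type: str      # the "what": video, photo, link, or text
    likes: int             # engagement signals the algorithm amplifies
    reshares: int          # reshares weighted more heavily than likes

def score(post: Post) -> float:
    """Toy engagement-optimized ranking score (illustrative only)."""
    who = math.log10(post.author_followers + 1)     # influencer boost
    what = TYPE_WEIGHT.get(post.content_type, 1.0)  # format preference
    engagement = post.likes + 5 * post.reshares
    return who * what * (1 + math.log10(engagement + 1))

# A celebrity's video outranks an unknown user's text post:
celebrity = Post(author_followers=50_000_000, content_type="video",
                 likes=500_000, reshares=80_000)
nobody = Post(author_followers=200, content_type="text", likes=3, reshares=0)
assert score(celebrity) > score(nobody)
```

Note that nothing in this sketch asks whether a post is true, useful, or kind — only who posted it, in what format, and how much reaction it provokes, which is precisely the problem.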
In a recent study, researchers from the University of California, Santa Barbara, highlighted that “after retweeting a fake political news story, the more hearts people received — the symbol indicating another user liked their post — the more they agreed with that story’s content.” Joseph Walther, a communication professor at UCSB who led the study, noted, “It’s social interaction with others, even through those small signals of social approval native to social media platforms, that magnifies false beliefs.”
Study after study is coming to the conclusion that algorithms can’t distinguish between information and misinformation because they aren’t programmed to do so. Academics Soroush Vosoughi and Deb Roy, both of the MIT Media Lab, along with MIT Sloan professor Sinan Aral, recently conducted a study finding that false news spreads faster than the truth.
Their research, published in Science, found that misinformation is “70 percent more likely to be retweeted on Twitter than the truth,” and that fake news “reached 1,500 people about six times faster than the truth.”
About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1,000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. (via Science)
This is based on a dataset from 2006 to 2017, long before the current management took over and foreign actors began using social platforms for propaganda. Like UCSB’s Walther, the MIT researchers reached the same conclusion: “This suggests that false news spreads farther, faster, deeper, and more broadly than the truth because humans, not robots, are more likely to spread it.”
The problems have, of course, worsened. Now, only $8 a month is required for someone to appear legitimate, and battling disinformation has become less of a priority for major social companies. This includes YouTube, whose algorithms are even more efficient at spreading misinformation than what surfaces on Facebook or Twitter. The rise of generative AI is also making misinformation even more sophisticated — fake photos, fake videos, and improved copy will only increase the density of noise and misinformation.
As social media platforms increasingly shift from human interactions to algorithms, it’s no surprise that we all feel overwhelmed by internet noise. Consequently, the proliferation of spam, bots, irrelevant content, and ads has become (or should become) more apparent than ever. This is precisely why the internet feels less enjoyable. Social media seems less social, and lately, it feels even less like “media.”
Where do we go from here? We find ourselves lost in a fog of misinformation, a reality we must acknowledge. Reluctantly, we must admit that the Social Web, as we knew it, is on its last legs, and we stand at the threshold of a new era marked by social disconnection.