Published in Clickbait Magazine · Apr 24, 2018


Image: John Yuyi

*Disclaimer: The writers do not condone the actions of Nasim Aghdam, or violence of any kind. This piece is an examination of the current moment in influencer culture and its implications for mental health & financial security.*

A wave of familiar panic and despair rippled throughout the nation this week when reports emerged that an active shooter was tearing through one of America’s prized corporate campuses: YouTube.

The media narrative zeroed in on the shooter’s ethnicity, immigration status, and gender in an attempt to make sense of her rage. Identity is a critical factor in how our society rationalizes and processes acts of violence, often more so than motive itself. But what do you make of the fact that she was actually a disgruntled micro-influencer?

We now know that Nasim Aghdam hated YouTube because her prolific (albeit bizarre) content was no longer reaching her audience. We also know that her motive was financially driven: she was no longer able to make a living on YouTube ad revenue alone. “She was always complaining that YouTube ruined her life,” Aghdam’s brother, Shahran, told the East Bay Times.

Aghdam’s actions are sadly more common than one might expect. “Economic suicide” — the desire to end one’s life because of financial loss — was well-documented during the global financial crisis, with over 10,000 such deaths estimated to have occurred between 2008 and 2010, according to The British Journal of Psychiatry.

Suicide at work is at a record high: 291 deaths in 2016, the most recent year of data and the highest number since the government began tallying such events 25 years ago, according to the US Bureau of Labor Statistics. Workplace homicides totaled 475 in 2012 and 417 in 2015.

While the shooter’s actions were outside the realm of sanity, they were triggered by an increasingly visible theme within our society: the condition of being “optimized out.”

Image: John Yuyi

On March 16th, Nicanor Ochisor, a 65-year-old yellow cab driver, committed suicide in his Queens home. According to family and friends, he had been drowning financially as his prized taxi medallion plummeted in value because of Uber and Lyft. The circumstances of his death echoed the February suicide of New York City cab driver Douglas Schifter, who shot himself outside City Hall after publishing a lengthy Facebook post blaming politicians for allowing Uber to destroy his livelihood.

Those who have been “optimized out” by technology fit into a historical narrative of economic progress buoyed by technology — and financial ruin among those left behind.

Take the Luddites. Not people with flip phones, but 19th-century textile workers famous for breaking into factories and smashing the machinery that was supplanting them. In response, Parliament made “machine breaking” a capital crime with the Frame Breaking Act of 1812. The message was clear: technological progress was more valuable than human fulfillment.

Today, fear of human redundancy at the hands of technology continues to pervade even the upper echelons of the tech elite (see: Musk). Just look at the Neo-Luddism movement: “a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age.”

In the past, and in cases like Uber’s, the market determined who got optimized out. But the rules other platforms use to restrict and demonetize content have been opaque. So who is in charge of optimizing people out, and how are their decisions made?

In a conversation with Vox founder Ezra Klein on the subject of how content might be regulated on Facebook in the future, Mark Zuckerberg proposed something like a “Facebook Supreme Court.”

What’s Facebook’s motive for this “Supreme Court”? Probably to avoid government regulation and scrutiny at all costs. Other platforms have revealed more about how they decide to promote or restrict content.

The New Yorker’s piece, “Reddit and the Struggle to Detoxify the Internet,” highlighted the very human problem major platforms face when deciding how to inhibit content, particularly the revolting or illegal kind. It turns out that the quality-control algorithm patrolling Reddit, the fourth most-visited site in the US, is not an algorithm at all, but a small team of humans using only a spreadsheet and a moral compass.

Like Reddit, YouTube has recently revealed more about how it decides to promote or restrict content on its site. As it turns out, the YouTube shooting occurred just six weeks after the company announced a new policy restricting monetization for creators on its platform. As of February 20th, creators must have tallied 4,000 hours of watch time on their channel within the past year and have at least 1,000 subscribers. The tension between what users want and what platforms need is at the core of this decision-making process.
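To make the bar concrete: the policy reduces to a blunt two-condition threshold check. Below is a minimal sketch in Python; the `Channel` type and function name are our own illustration, not YouTube’s actual API, and the real eligibility review also involves human policy screening.

```python
from dataclasses import dataclass

# Hypothetical sketch of the February 2018 monetization threshold described
# above. The names here are illustrative; YouTube exposes no such API.

@dataclass
class Channel:
    watch_hours_past_year: float  # total watch time over the trailing 12 months
    subscribers: int

def meets_monetization_bar(channel: Channel) -> bool:
    """True only if the channel clears BOTH requirements:
    4,000 watch hours in the past year and 1,000 subscribers."""
    return channel.watch_hours_past_year >= 4_000 and channel.subscribers >= 1_000

# A creator just under either line is cut off from ad revenue overnight:
print(meets_monetization_bar(Channel(watch_hours_past_year=3_990, subscribers=25_000)))  # False
print(meets_monetization_bar(Channel(watch_hours_past_year=4_200, subscribers=1_050)))   # True
```

What the sketch shows is how binary the cutoff is: there is no partial payout or glide path, which is exactly why a creator sitting at 3,990 hours can feel “optimized out” rather than merely underperforming.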

Attention is a finite resource, so it’s in a platform’s best interest to promote the most clickable pieces of content — content that will keep users spending more time within its walled garden. There’s also the significant issue of brand safety, which likely factored into the 4,000-hours/1,000-subscribers decision. It doesn’t make sense to actively filter or police content that 23 people will see, but it definitely doesn’t make sense to lose Procter & Gamble as an advertiser over a wing-nut video.

It turns out that the free speech users want and the “quality speech” platforms want to promote are increasingly competing values. And when free speech and a paycheck are part of the same equation — as they are for YouTube creators — that becomes a problem for those who have been selectively “optimized out” as redundant parts of the content machine.

It’s key to understand that YouTube’s demonetization policy isn’t just a problem of economic opportunity, but one of personal identity as well. Being “censored,” as Nasim Aghdam saw it, may be akin to the psychological trauma of being removed from one’s soapbox, ending the drip-feed of “follower dopamine.”

We are just beginning to understand the neurochemical response the human brain has to technology — and, increasingly, to one’s follower base. The Logan Paul Suicide Forest Saga of 2018 revealed that mental instability can live in tandem with social fame, and that one’s sense of self can be inextricable from one’s following. Did the YouTube shooter believe that she was simultaneously fired from her job and “kicked out of the village”?

Universal basic income in the face of widespread, AI-induced human redundancy has become a hot topic of conversation: essentially an extension of the “social safety” that public programs like welfare and SSI offer. But what about a social media safety net? One that brands, platforms, and users agree upon as we all become increasingly interdependent for financial and emotional well-being?

Some questions we should consider:

  1. Is a social following a human right?
  2. Should “loss of following” be considered a risk factor for trauma or mental illness?
  3. Is anger in the face of being optimized out another wrinkle of economic dislocation due to faceless circumstances, or will it become the status quo?
  4. This is the same “Owners of the Means of Production” problem that Marx and Engels were trying to solve. Should all social media platform users unionize and collectively bargain, or are distributed platforms immune to labor organization?
  5. Platforms make ad revenue off the content contributed by their users. Can advertisers and platforms work together to offer a financial safety net to those who have been suddenly demonetized?
  6. What emotional measures can platforms take when they “sunset” a once-popular influencer? Should they ease the pain of losing one’s followers, like morphine for an amputee? Offer advice on how to “rebrand” in order to regain influence?
  7. As tech monopolies continue to smother competition and consolidate power, what options do people have once they’ve been optimized out of a single, dominant platform?
  8. In this faceless, decentralized gig economy — how should employees or users be offered an opportunity for civil recourse? How might they dispute the status of being demonetized or poorly rated?
  9. YouTube took a step toward transparency by revealing how decisions are made about which accounts qualify for monetization. Should platforms consider a universal, scaled rating that influencers and creators can see — similar to how consumers have access to a credit rating?
  10. And finally: Should platforms increasingly be considered a “public good” — and regulated as such? In an attempt to evade regulation, Zuckerberg has proposed a “Facebook Supreme Court” as a space for discourse on legal matters related to the platform’s functions. Should this be allowed to exist?

Don’t forget your meds

This piece was written with additional support and oversight from Erica David, Liz Alexander, John Deschner, and Sean Monahan.
