“I no longer want a seat on this ride…I no longer want to scroll through post after post, feeling the endorphins surge through my veins and the helpless anger constrict in my stomach. The allure of the medium no longer outweighs its dangers. I want out.”

I wrote these lines in a 2016 blog post titled “Farewell Hot Takes and Hateful Eggs,” which explained the reasons I was making a calculated retreat from social media. I’ve mostly kept to the rules I laid out in that post, although I occasionally bend them to make an obligatory post about a major life event. This is not to say that I’ve truly managed to “get clean,” since I still spend more time, energy, and anger on Twitter than is good for me. Yet Twitter is mandatory, whether in journalism or in academia, and the fact that it is a perilous place full of trolls, witches, doppelgangers, and cultists can’t change its basic necessity.

What does concern me is that my personal abstention is becoming increasingly futile in this digital age. Or, in less abstract terms, even if I never post to Facebook again, Google already knows more about me than I do, my political representatives are incapable of dealing with the threat of cyber-warfare, and my online actions are being used to train Artificial Intelligences that will eventually destroy our economy and democracy.

Did I mention I had become one of those raving zealots on Twitter? Stare into the abyss long enough and it stares back at you. But anyway, let’s deal with these in order, and let’s start with a salutation straight from the semi-lucid mouth of Mr. Gonzo himself: 

Attention doomed youth of the American empire! I, Paul Esau, in the first of a three-part series, am here to explain how your world is being hacked:

  1. Information cannot be free

The free thinkers, coders, engineers, and entrepreneurs who built the internet were not a cohesive bunch, but once conservative propagandists swayed Silicon Valley towards the libertarian right in the 1980s, they embraced a seductive mantra: “Information should be (or wants to be) free.” It was a compelling vision in an age when most information was encoded in analog mediums like books, files, and microfilm, accessible only from official repositories, libraries, or archives, and only available to those with the prestige, time, and money to be granted access.

So what happened when this motley crew of digital freebooters got their wish and began “freeing” information? First came the rapid destruction of a number of industries that held monopolies on information, including the music industry, book publishing, newspaper journalism, video stores, and pornography. Then came the creation of new services, such as search engines, social media platforms, and multiplayer games, that were technically free. Consumers quickly became convinced that they deserved information for free, and rebelled against attempts to impose paywalls or service costs, leading to new cycles of destruction as digital startups imploded or gravitated towards sponsored content.

Yet information, like any valuable commodity, has a cost that must be paid. This is especially true if that information has been refined and interpreted into an accessible form, which requires time and expertise. So who pays this cost?

We do, of course. The stupid, mouth-breathing, knuckle-dragging consumers who think they are getting a service for free just because “free” was printed on the box. Yet Google isn’t free, nor is Facebook, or Instagram, or YouTube, or any of the dozen other services we use daily. We live amidst a glut of information so vast and so unprecedented that our ancestors would have wept to glimpse its possibilities, and yet its sparkle hides a substructure of exploitation that is fundamentally warping our society.

For what did the big tech companies do once “free” became the operational currency of the digital ecosystem? Well, first they consolidated in accordance with Metcalfe’s Law, which values a network in proportion to the square of its connected users, until only a few monstrosities remained to monopolize the digital space. Then they turned their attention from the consumer, who refused to pay for their product, to the advertiser, who would.
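To see why that law rewards consolidation, here is the back-of-envelope arithmetic (the constant c, standing for some fixed value per connection, is my simplification, not part of the original law):

```latex
% Metcalfe's Law: a network of n users has roughly n(n-1)/2 possible
% links, so its value grows with the square of n.
V(n) \approx c\,n^{2}
% Two rival networks of n users each are worth 2cn^2 combined, but one
% merged network of 2n users is worth twice that:
V(2n) \approx c\,(2n)^{2} = 4c\,n^{2} = 2 \times \left(2c\,n^{2}\right)
```

Doubling the user base quadruples the theoretical value, which is why two networks are worth more merged than apart, and why the winners kept swallowing the losers.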

Thus began the great journey of platforms like Facebook and Google away from their primary purpose of helping the unwashed masses connect with friends and search the internet, and towards the wonderful horizon of data exploitation. How is it, for example, that Facebook can be free, yet the company is valued at $500 billion and consistently turns a profit? The answer is that Facebook’s core service is not providing a social network for its 1.5 billion daily users, but providing ways for third parties to influence those users through targeted advertising and behavioural manipulation.* Everything Facebook does to ‘improve’ the user experience serves to keep users on the platform longer, so that more data can be extracted from each user, and more sponsored content (informed by that extracted data) can be shown to them. Facebook, like Twitter, Instagram, YouTube, and the rest, is locked in what Tristan Harris, the so-called conscience of Silicon Valley, describes as a race for our attention; a race in which each service actively tries to incite behavioural patterns that keep us on its platform longer, no matter the cost to our relationships, self-esteem, or emotional health.

In 2014, investor and video game developer Nir Eyal published Hooked: How to Build Habit-Forming Products, which quickly became a holy text for entrepreneurs and software engineers in Silicon Valley and beyond. Eyal explained how aspiring innovators could engineer their product to hijack user behaviour and influence their reward mechanisms, creating significant dependence upon said product. Of course, Eyal claims he wanted to help high-minded entrepreneurs incentivize people to make better choices, and consequently refused to admit that writing a guidebook to behavioural manipulation for commercial gain could have a downside.
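Eyal’s loop runs trigger, action, variable reward, investment, and the “variable” part does the heavy lifting. Here is a toy sketch in Python (my illustration, not anything from the book) of the slot-machine logic behind a refreshing feed:

```python
import random

# A toy sketch (mine, not Eyal's) of the "variable reward" step in a
# habit loop. Paying out unpredictably -- the logic of a slot machine --
# reinforces a behaviour far more strongly than paying out every time,
# which is why pulling down to refresh a feed feels so compulsive.
def refresh_feed() -> str:
    """Simulate one compulsive pull-to-refresh of a social feed."""
    if random.random() < 0.3:          # pay out on only ~30% of pulls...
        return random.choice([         # ...and vary what the payout is
            "3 new likes on your photo",
            "a friend tagged you",
            "someone new followed you",
        ])
    return "nothing new"               # most pulls come up empty

for pull in range(10):
    print(f"pull {pull + 1}: {refresh_feed()}")
```

Behavioural psychology has known since Skinner that this kind of variable-ratio schedule is the most compulsive reinforcement there is; Eyal’s contribution was packaging it for product teams.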

A skeptical observer might argue that behavioural modification and habit-formation have always been the goal of advertisers, so what makes Farmville or Candy Crush different from Nike or Cadillac? Additionally, what if the user wants to modify their behaviour to accomplish some important goal, like learning a language on Duolingo or getting in some exercise while pretending to be chased by zombies?

The answer is twofold. First, this sort of behavioural modification is not happening with the consent of the user, or at least not consent from a user who understands the consequences of their consent. To use an extreme example, arguing that a heroin or gambling addict is consenting in an informed and binding way to the consequences of their habit whenever they shoot up or enter a casino is to ignore the hold their addictions have over them. In other words, a person who is already using Instagram, whose friends are all on Instagram, and who is invested both emotionally and intellectually in the reward mechanisms of Instagram, is not able to give free and informed consent when Instagram implements some new feature engineered to increase user “engagement” through behavioural modification.

Second, the power disparity between Facebook (which owns Instagram) and its users is far greater than it ever was between Cadillac and the average American. Thanks to its data accumulation, Facebook can provide the most detailed demographic information, the most prescient understanding of user personality, the most nuanced construction of user relationships, and the most accurate recreation of user habits … of any corporation (with the possible exception of Google) in the history of humanity. Decades of cognitive research, social experimentation, and digital innovation have created an arms race between free will and corporate manipulation that many commentators believe the corporations will win.

For example, here is Yuval Noah Harari explaining how humanity’s habit of ceding minor decisions to digital helpers today (what route to take through a city, which neighbourhood restaurant to try next) will create complete reliance in the near future:

Take this to its logical conclusion, and eventually people may give algorithms the authority to make the most important decisions in their lives, such as who to marry … I will ask Google to choose. “Listen, Google,” I will say, “both John and Paul are courting me. I like both of them, but in a different way, and it’s so hard to make up my mind. Given everything you know, what do you advise me to do?”

And Google will answer: “Well, I know you from the day you were born. I have read all your emails, recorded all your phone calls, and know your favourite films, your DNA and the entire biometric history of your heart. I have exact data about each date you went on, and I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. And, naturally enough, I know them as well as I know you. Based on all this information, on my superb algorithms and on decades’ worth of statistics about millions of relationships — I advise you to go with John, with an 87 per cent probability of being more satisfied with him in the long run.

“Indeed, I know you so well that I even know you don’t like this answer. Paul is much more handsome than John and, because you give external appearances too much weight, you secretly wanted me to say ‘Paul’. Looks matter, of course, but not as much as you think. Your biochemical algorithms — which evolved tens of thousands of years ago in the African savannah — give external beauty a weight of 35 per cent in their overall rating of potential mates. My algorithms — which are based on the most up-to-date studies and statistics — say that looks have only a 14 per cent impact on the long-term success of romantic relationships. So, even though I took Paul’s beauty into account, I still tell you that you would be better off with John.”

Google won’t have to be perfect. It won’t have to be correct all the time. It will just have to be better on average than me. And that is not so difficult, because most people don’t know themselves very well, and most people often make terrible mistakes in the most important decisions of their lives.

Harari’s last point, I think, is key. People are notoriously bad at understanding themselves, or even at explaining their own decisions. Consequently, our threshold for outsourcing decisions to an algorithm is only that it prove a little more successful than our own decision-making. The threshold is probably even lower when we are not aware that we are making an active choice between our existing inclinations and the recommendations of an algorithm, an increasingly insidious form of behavioural modification being built into the digital environment.

During the 2016 American election, both the Russians and Cambridge Analytica (working with Republican strategists) used crude algorithms coupled with classic espionage to influence voters. No one is sure whether the confusion caused by these tactics helped Trump into the White House, but in an election which turned on fewer than 80,000 votes, it’s a distinct possibility. More recently, Democratic strategist David Goldstein mimicked Cambridge Analytica’s Facebook campaign (legally) during a 2017 special election in Alabama, and played a decisive role (he claims) in electing the first Democratic senator from Alabama in a generation. Granted, the Democratic candidate was running against the infamous Roy Moore, who spent much of the election fending off sexual assault allegations, but Goldstein swears he can prove the influence of his campaign.

It turns out that all that data Facebook and Google have compiled on users isn’t just useful for selling them products or monopolizing their attention; it can also be weaponized into a terrifyingly successful political tool. Historically, a political campaign might spend substantially to get 3-4 voter interactions a day – moments when that voter sees a campaign sign, advertisement, or mailer – without knowing whether those interactions created voter conversion or were immediately forgotten. Those are rookie numbers in the digital age. Today, for literally pennies, political campaigns can target statistically tiny demographics with hundreds of interactions across every digital platform. Even worse, voter behaviour will aid the campaign in adapting its message and tactics in real time, working to sap the enthusiasm of the most committed opposition voters while seducing the least committed with personalized appeals.
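For concreteness, here is a deliberately crude sketch, with invented voters and invented fields, of the slicing described above: filter a voter file down to a narrow demographic, then queue a tailored impression on every platform. A real campaign would push this into each platform’s ad-targeting tools; the toy version just prints what it would queue.

```python
# Invented voter file -- crude stand-ins for the demographic and
# behavioural data a real campaign would buy or scrape.
voters = [
    {"id": 1, "age": 24, "zip": "35203", "leans": "D", "turnout": 0.3},
    {"id": 2, "age": 61, "zip": "35203", "leans": "R", "turnout": 0.9},
    {"id": 3, "age": 29, "zip": "35209", "leans": "D", "turnout": 0.2},
]

# Target slice: young, left-leaning, low-propensity voters -- the "least
# committed" group a campaign wants to nudge toward the polls.
targets = [
    v for v in voters
    if v["age"] < 35 and v["leans"] == "D" and v["turnout"] < 0.5
]

for v in targets:
    for platform in ("facebook", "instagram", "youtube"):
        # A real campaign would call the platform's ad API here.
        print(f"queue turnout ad for voter {v['id']} on {platform}")
```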

Google knows if you spend five seconds staring at one ad on your smartphone, and not another. It knows which messages create a clickthrough, and what you searched for immediately after reading the advertorial that click produced. Facebook offers targeted advertising based upon a user’s friends, history, freely shared information, and “liked” groups and topics. Just as important, it allows advertisements to be hidden from other demographics, who might ask awkward questions about the purpose or veracity of an ad.

Facebook, by providing unprecedented access to the behaviours, psychology, and predilections of voters, is quickly becoming the kryptonite of our electoral system.

One might take comfort in the fact that this unprecedented level of surveillance is unique to the digital sphere, but that too is quickly changing. The city of London, which is the second-most monitored city in the world (with 420,000 CCTV cameras) after Beijing, is in the midst of experimenting with large-scale, algorithm-based facial recognition. In China, state surveillance is being extended into the classroom, where some schools use artificial intelligence to monitor a student’s level of attention and assign them a corresponding grade. The ability of corporations and governments to track individual citizens has expanded beyond digital platforms and webpages into the physical world, where an amalgamation of passive data from sources such as Google Maps, traffic cameras, Uber, cell phone towers, and financial transactions makes the process relatively easy.

In Canada, I was recently asked to use Naborly to apply for an apartment – a service which combines an internet scraper with an algorithm to compile my accessible data and then quantify it into a numerical score. Naborly claims that its algorithm removes the potential for human bias from landlord-renter interactions, but since algorithms themselves often manifest unintended bias, this argument simply swaps one form of bias for another. Additionally, since Naborly’s algorithm is proprietary, the resultant score isn’t significantly more transparent than the decision of a human landlord. As the renter, you don’t know what information services like Naborly have scraped from the digital sphere, whether that information is accurate, or what weight is given to it. Perhaps the algorithm decides you visit a certain bar too often, based upon your Facebook images, or that your politics don’t mesh with those of current renters (based upon your Twitter). Perhaps it finds evidence of a recent messy breakup on Instagram, or discovers that you’ve recently made several visits to the hospital and, based upon cryptic posts from the ex-boyfriend, might be pregnant or in financial trouble.
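Naborly’s actual model is proprietary, so what follows is a purely hypothetical toy scorer, meant only to show why a single opaque number is so hard to contest: the applicant sees the score, but never the signals or the weights.

```python
# Hypothetical toy tenant scorer -- NOT Naborly's actual model, which is
# proprietary. The point is structural: once scraped signals and hidden
# weights collapse into one number, the applicant can neither audit the
# inputs nor contest the weighting.
HIDDEN_WEIGHTS = {              # invisible to the person being scored
    "income_estimate": 0.5,
    "social_media_tone": 0.3,   # scraped posts, machine-guessed sentiment
    "address_changes": -0.2,    # frequent moves read as "risk"
}

def tenant_score(signals):
    """Collapse scraped signals (each roughly -1..1) into a 0-100 score."""
    raw = sum(HIDDEN_WEIGHTS[k] * signals.get(k, 0.0) for k in HIDDEN_WEIGHTS)
    return max(0.0, min(100.0, 50 + 50 * raw))

# The applicant sees only the final number, never this breakdown.
print(tenant_score({"income_estimate": 0.6,
                    "social_media_tone": -0.4,
                    "address_changes": 0.8}))
```

Change one hidden weight and the same applicant’s number moves for no visible reason; that is the transparency problem in miniature.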

I was chosen as the successful candidate even though I refused to use Naborly, yet that may be a luxury I can’t afford in the future. My wife and I had an extremely strong application and multiple other options, which is not true for many other prospective tenants. As well, if Naborly ever acquires a dominant market share and becomes the industry standard, then it may become part of the price of living in the modern age – just as job interviews, insurance rates, and potential dates could soon be evaluated by similar algorithms.

I’ve ranted this rant many times in the past few months, dragging friends and strangers alike into the uncanny valley which separates the promised digital utopia from the surveillance dystopia of the immediate future. The most common response I get to my paranoid speculations is that my hostage/interlocutor isn’t worried because they “have nothing to hide.” Even if information isn’t free, and the way their data is being extracted by Google resembles a Holstein hooked up to a milking machine on some stolid Dutch-Mennonite farm…they don’t care. They can’t foresee a way in which that data could have negative consequences, and anyway, the convenience of Google being able to advertise the right shoe size or anticipate traffic on their route to work is worth the invasion.

I think, fundamentally, this is a failure of imagination. Arguing that Google’s data extraction mechanism is irrelevant because you have nothing to hide is like arguing that global warming will be fine because you’re going to install an air conditioner. Google already knows things about your medical history, your mental health, your relationship, and your workplace satisfaction that you would not have voluntarily disclosed. As this information is incorporated into more sophisticated algorithms, the data foolishly squandered in your youth may have life-long consequences.

Perhaps, as Harari speculates, we will all freely decide to obey the suggestions of our increasingly sophisticated algorithms, and live happier, if less free, lives because of it. Yet perhaps, after our data is extracted, our patterns dissected, and our weaknesses exploited, we will have no choice but to obey, since our futures will hinge upon numerical scores spit out by algorithms fuelled by the data we now freely surrender.


*Saying that consumers are treated as “fuel” for Silicon Valley is an apt metaphor. The collection and sale of personal data, including location, is at the heart of the business models of Facebook, Google, and many smaller companies that live on advertising. McNamee describes a rather Faustian bargain in which consumers trade their data for services.

‘We’ve always said services are free, but that’s not true. It’s a barter of personal data for services. The price in data has been rising geometrically, or at least a steep slope,’ he maintains.

Data collection, he says, is no longer just a passive technique; it’s become much more sophisticated and is a form of behavioral manipulation or modification. Consider Pokémon Go, a mobile game in which smartphone users hunt for digital creatures stashed around the city. The game seems harmless. But it’s more than that. ‘If we put a Pokémon in a Starbucks we can get you to go in and buy some coffee,’ McNamee says. ‘That’s manipulation and it’s not what we signed up for.’

From Roger McNamee: “Facebook is Terrible for America”


