The Age of Surveillance Capitalism
| Book Author | Shoshana Zuboff |
|---|---|
| Published | October 4, 2018 |
| Pages | 691 |
| Greek Publisher | Καστανιώτης (Kastaniotis) |
The Fight for a Human Future at the New Frontier of Power
What’s it about?
The Age of Surveillance Capitalism (2019) provides a revealing look at just how committed companies like Google and Facebook are to tracking every one of your actions and selling that data to advertisers. Over the past few years, this business practice has become one of the most prominent worldwide, and the harmful effects it has on personal liberty and democracy are becoming more apparent.
Listed on The Guardian’s Best 100 Books of the 21st Century
About the author
Shoshana Zuboff has a PhD in social psychology from Harvard University, as well as a BA in philosophy from the University of Chicago. She is currently the Charles Edward Wilson Professor Emerita at Harvard Business School. Her previous books include In the Age of the Smart Machine.
Key Ideas
If you’ve found these blinks, then you certainly use the internet in some capacity. Let’s face it, it would be extremely difficult to get by in life these days without engaging with the digital world. And this is the perfect situation for today’s surveillance capitalists.
Surveillance capitalism is the business of taking people’s data and using it to make a profit. This includes location tracking, search history, contacts, browsing history, biometric data, when you go to sleep and wake up, how often you recharge your battery – the list goes on and on. This information is then analyzed for behavioral trends and sold to help advertisers better target customers.
Many books that are critical of surveillance capitalism are nonetheless helping to normalize it by simply recommending that people turn off their devices more often and minimize time spent on social media. But author Shoshana Zuboff is hoping that people won’t accept these invasive practices as the new status quo and holds out hope that we can find a way to establish better privacy laws in the digital sphere.
In these blinks, you’ll find out
- how many cookies your computer will collect by visiting the most popular websites;
- how the dubious field of behaviorism is guiding today’s business practices; and
- how the September 11 terrorist attacks put us on the path toward surveillance capitalism.
Do you know to what degree your movements, speech, actions, experiences, and behaviors are being processed and sold by businesses like Google, Facebook, Microsoft, and Amazon? Few of us do, and that’s just how the purveyors of surveillance capitalism would like to keep it.
The key message here is: In surveillance capitalism, all aspects of the human experience are turned into data and sold to a variety of businesses for a variety of reasons.
First and foremost, your personal data can help businesses better target their advertising efforts. Are you getting close to a McDonald’s? Here’s an ad for a Big Mac.
But it can also help to create predictive products, such as virtual assistants like Amazon’s Alexa, which are then used to collect more profitable data.
Google was the trailblazer in surveillance capitalism and it remains the frontrunner. But it wasn’t long before other companies recognized the value of this new personal data market. After all, once Google began using the data to improve the accuracy of targeted ads, the company went from bleeding money to seeing a 3,590-percent increase in revenue – in just four years!
Facebook was the first to follow in Google’s footsteps, and they’re the only ones who rival Google in the sheer amount of accumulated data. In a 2015 study at the University of Pennsylvania, researchers looked at the top one million most popular websites. They found that 90 percent of them leak personal data to an average of nine outside domains where this information is tracked and used for commercial purposes. Of the websites that leak data, 78 percent send information to Google-owned outside domains, while 34 percent send to Facebook-owned domains.
Like Google, Facebook sells advertisers targeting data that includes email addresses, contact information, phone numbers, and website visits from across the internet. In 2012, Facebook added a brief mention of this new tracking policy to a new terms-of-service agreement that was so lengthy that few people were likely to read every word. This kind of unreadable contract is a typical surveillance capitalism tactic.
Such tracking is not limited to internet browsing, however. Other studies have found that many apps sold for Google Android devices contain trackers that leak personal information even when they’re not actively being used. And, perhaps unsurprisingly, Google Android devices themselves, like most “smart” devices being sold these days, provide a constant stream of location and behavior data.
How did we get here? Why does using the internet or digital products now essentially mean opening the door to aggressive monitoring by unknown parties? In the next couple of blinks, we’ll look at how surveillance capitalism came to be.
The story of surveillance capitalism is a modern one. But to understand its rise and current dominance, we need to look to the 1970s and 1980s. During this time, the rules of capitalism itself underwent a significant change.
The key message here is: Prior changes to capitalism helped loosen regulations and change attitudes for the online age.
Prior to the 1970s, capitalism was something that involved a system of laws and policies, collectively known as the double movement, which was designed to protect society from capitalism run amok.
As the historian Karl Polanyi describes it, the double movement was integrated into the capitalist system to make sure that the institutions involved weren’t harming labor, land, and money. Polanyi, like Adam Smith and other economists before him, recognized that capitalism contained potentially destructive tendencies. Unchecked greed and power-mongering can have devastating effects, and the double movement was designed specifically to counteract these tendencies.
Nevertheless, two influential voices came to the forefront of economic policy in the 1970s, and they both suggested we’d be better off without the double movement. They were the Austrian economist Friedrich Hayek and the American economist Milton Friedman. These two men preached the gospel of a self-regulating free-market economy, unburdened by annoying things like laws and regulations that only served to limit the boundless potential of the capitalist enterprise.
Both Hayek and Friedman received Nobel Prizes in economics. This recognition validated their ideas and likely sped their adoption around the world. In the United States, double movement regulations were systematically dismantled – first under the Jimmy Carter administration, then during Ronald Reagan’s tenure. In Europe, free-market capitalism was seen as the perfect antidote to the threats of communism and totalitarianism.
But it’s no coincidence that in the years since the dismantling of the double movement, social and economic inequality has reached dangerously high levels. In recent decades, unprecedented amounts of money have been transferred to the highest income brackets. In 2016, a report from the International Monetary Fund went so far as to call this disproportionate accumulation of wealth a threat to stability.
In this unregulated corporate environment, surveillance capitalism thrives. The inventor Thomas Edison recognized what others, including the sociologist Émile Durkheim, have also noticed: the principles of capitalism become the principles of society at large. If Google is successful, it must be right and good. And if surveillance capitalism is successful within the self-governing rules of free-market capitalism, then it, too, must be right and good.
Surveillance capitalism hasn’t gone unnoticed. Indeed, many intelligent people are concerned. What’s interesting, though, is that when we look back, we see that these concerns can quickly fade and turn into acceptance.
The key message here is: Early concerns about online privacy were dashed in favor of loose surveillance laws.
Let’s explore the issue by looking at cookies. Unlike the delicious baked goods, the cookies on our computers are nothing to feel good about. They track us wherever we go on the internet, and they were not greeted with open arms. In 1996, the Federal Trade Commission (FTC) began taking steps to limit how much personal information cookies leaked. The FTC went against the wishes of advertisers to propose a simple automated protocol that would put personal information in the user’s control by default.
The FTC understood that self-regulation wasn’t ideal when it came to establishing and protecting online privacy. And, in 2000, it came close to establishing legislation that would make the rules of online commerce similar to those offline. Alas, those plans were interrupted by the events of September 11, 2001.
After the attacks, the US government didn’t tighten privacy laws in cyberspace; rather, it went the other way, creating the Patriot Act and the Terrorist Screening Program, which significantly loosened regulations around surveillance. The CIA and the NSA, in particular, quickly ramped up efforts to monitor internet activity. And, naturally, they turned to Google for support.
In 2003, Google worked with the NSA and the CIA to provide the agencies with better search technologies. The tools that Google passed on allowed them to analyze mountains of metadata, identify behavioral patterns, and predict future behaviors.
As it turns out, Google’s treasure trove of personal data is the exact kind of information for which both advertisers and law enforcement agencies will pay top dollar. After winning special contracts with the NSA and the CIA in 2003, Google continued to nurture a mutually beneficial relationship with the intelligence community. In 2010, former NSA director Mike McConnell wrote about the need for a “seamless” partnership with Google, so that data would continue to flow unobstructed.
This brings us back to cookies. A 2015 study showed that, by visiting the 100 most popular websites, your computer would collect over 6,000 cookies. The study also found that 83 percent of the cookies came from third parties – not the websites that were actually visited. How is this possible? Google’s “tracking infrastructure” was found to be active on 92 of the top 100 sites.
Initial concerns about the internet-wide tracking capabilities of cookies have clearly fallen by the wayside. And as we look at how surveillance capitalism came to be, we can see that this is a recurring trend. There is initial outrage upon discovering the invasive practices of surveillance capitalists. But this eventually turns into a sense of begrudging acceptance.
Sadly, this plays right into the hands of companies like Google and Facebook, who explicitly want the general public to believe that their practices are inevitable.
The key message here is: Google’s Street View and Glass operations are great examples of outrage turning to acceptance.
There’s a chance that you’ve seen the odd-looking Google car, with a 360-degree camera sticking out like a periscope. But perhaps you didn’t know that such cars were taking more than just pictures.
In 2010, a German federal agency found that the Google Street View cars were quietly scanning WiFi networks and collecting personal information from any of the unencrypted transmissions they came across. Naturally, this caused an international uproar. And, after investigations in 12 countries, Google was found to have broken laws in at least nine.
However, prosecuting cases like these isn’t so straightforward. The primary problem is that the practices of surveillance capitalism are unprecedented, so there usually aren’t any laws that specifically address privacy and boundary issues in the digital sphere. As you may already know, Google’s Street View program has only continued to expand.
In 2012, there was also a public outcry over the introduction of Google Glass, a wearable technology that allowed Google to see into private spaces. The negative reaction led to a rebranding and the introduction of the “Glass Enterprise Edition” in 2017, which positioned the product as being designed strictly for the workplace, where people may already have lowered expectations of privacy.
But Google had already found a wildly successful way of getting into the nooks and crannies of private life. Niantic, the gaming company incubated at Google and spun out of its parent, Alphabet Inc., released Pokémon Go in 2016. The game uses a device’s camera and GPS information to reveal the location of virtual Pokémon creatures that users can capture. Those Pokémon can be located in people’s backyards and inside businesses – places that Street View cameras may not yet have reached.
The game was a massively popular phenomenon. But it’s really an amazing means of capturing personal information. The reason the game requires access to your contacts and needs “to find accounts on device” has nothing to do with gameplay and everything to do with surveillance capitalism.
At this point, you may be thinking, sure, Google collects all sorts of data, but I don’t have anything to hide, so why should I care?
Well, even if you’re willing to live your life like an open book, if you’re a fan of democracy or free will, you should care. As we’ll see, collecting location and browsing habits on individuals is only one step in the process.
The key message here is: Surveillance capitalists are getting more granular in their data collection.
Google’s ambitions are wide-ranging. The company would like to know everything about your past and current situation so that, rather than asking Google a question, Google would be able to “know what you want and tell you before you ask the question.” At least, this is how Hal Varian, Google’s chief economist, explained the company’s intentions.
This means getting down to granular detail about your wants and needs, as well as your emotional state. The field of emotional analytics, sometimes known as “affective computing,” has developed so that even the microexpressions on your face can be detected and instantly recognized as representing a specific emotional state. Of course, one image of your face can also reveal age, ethnicity, and gender.
One of the more advanced companies in this field is Realeyes, which boasts a data set of over 5.5 million annotated frames of over 7,000 subjects from around the world – all in an effort to build the world’s largest collection of expressions, emotions, and behavioral cues.
All of these factors represent a goldmine of data for advertisers. A market research report on the subject clearly states, “Knowing real-time emotional state can help businesses to sell their product and thereby increase revenue.” Or, as the Realeyes website puts it, “the more people feel, the more they spend.”
Body posture and gestures are also clues into what someone is doing and what they are feeling. This is why Google is developing digitally enhanced fabrics that can be turned into clothes and worn by people. This will bring a whole new level of granular behavioral data to Google’s constantly growing collection.
But if a person is active on social media, their personal posts and news feed can also be analyzed to offer an accurate prediction of how the person is feeling. And when advertisers and other surveillance capitalists know what you’re doing and feeling, they’ll know the perfect time to nudge you in the desired direction.
But how can surveillance capitalists really modify someone’s behavior? We’ll take a closer look in the next blink.
Given that a significant portion of Silicon Valley is into analyzing behavioral data, it makes sense that companies like Google and Facebook would be interested in the murky field of behaviorism.
After all, behaviorism teaches that free will is but an illusion; all behavior can be explained by the circumstances that precede it. Expose people to specific stimuli and you’ll get a specific response.
The key message here is: Surveillance capitalists hope to identify key moments of sensitivity in order to increase the chances of purchase and behavior modification.
A towering figure in behaviorism is B. F. Skinner, who was a professor at Harvard University and a pioneer in both behavioral analysis and utopian thinking. In Skinner’s worldview, there is no such thing as freedom or free will, and if you think there is – well, that’s just an expression of your ignorance.
Under Skinner’s brand of extreme behaviorism, every action can be mathematically explained through behavioral data. And if someone’s actions seemingly defy explanation, then that’s only because we haven’t collected enough of the right data.
Skinner passed away in 1990, which means he didn’t live to see the day when so many people were carrying around smartphones, living with smart speakers, and using virtual assistants. These are exactly the kind of devices Skinner dreamed of being able to use to monitor and experiment on his subjects.
Make no mistake, Google and Facebook are already conducting experiments and following the guidelines that Skinner left behind. As the professor recommended, the ideal scenario for accurate behavioral analysis is when the subjects are unaware of those conducting the experiment and collecting the data.
Facebook has admitted to experimenting with the content of people’s news feeds, and an accurate way to look at Pokémon Go is as an experimental test, run by Google, to see whether people can be digitally manipulated to go where directed, and then spend money.
At the height of Pokémon Go’s popularity, the game allowed businesses to pay money in order to become hotspots – places where players were sure to find the virtual creatures they were after. These businesses saw reported boosts in business of up to 70 percent.
In 1948, B. F. Skinner published Walden Two. This presented his version of a utopian world where extreme behaviorism was understood and accepted, and people stopped concerning themselves with the silly illusion of personal freedom. A year later came George Orwell’s 1984, which also offered a look at a world without personal freedom. But rather than presenting it as a utopia, Orwell clearly saw it as a dystopia.
One of these books, Walden Two, was widely panned by critics upon its release, while the other continues to be a painfully relevant warning for what our world could look like if we give up too much control to those in positions of power.
The key message here is: The invasive, all-controlling future of surveillance capitalism doesn’t have to be seen as inevitable.
Despite the warnings in Orwell’s book, the purveyors of surveillance capitalism want to be in our homes, cars, stores, and workplaces, monitoring everything we say and do. From their perspective, this would allow for all kinds of conveniences.
One of the more popular examples in this vision of utopia is the automated car contract. Under such a contract, if you miss a car payment, your car will automatically stop working. No need for annoying paperwork or the hassle of sending someone to see what’s going on with you. Everything can be automated.
Never mind the glaring questions about the driver and how a sudden stoppage like this might separate a parent from her child or prevent someone from leaving a dangerous situation. Just think about how much bureaucracy we’d be able to bypass!
These kinds of automated contracts are something surveillance capitalists like to describe as inevitable. But the truth is, none of these things are inevitable.
Recently, we got a better look at what’s considered standard operating procedure at Facebook. In 2018, it was revealed that they’d given large amounts of personal data to Cambridge Analytica, a company that used the information to microtarget voters with a campaign of misinformation.
This has raised some troubling questions about the state of democracy today and the dangers that arise when the keepers of information are given free rein to collect whatever they want from us and put it to whatever use they see fit.
So, what can be done about surveillance capitalism?
First of all, it’s important for people to realize the true scope of what’s going on behind the scenes, and that there are other options.
The key message here is: Surveillance capitalism isn’t “inevitable,” and people aren’t ready and willing to give up privacy in the name of convenience.
Surveys conducted in 2009 and 2015 showed that between 73 and 91 percent of people reject the very idea of targeted advertising when told about the ways in which their personal data is being collected.
Right now, there is a hugely disproportionate balance in information. This extends to how companies are collecting personal information, what kinds of data are being collected and analyzed, and what that information is being used for. When this becomes clear, outrage soon follows.
It’s also important to fight back now. There is a generation of people growing up having never known a world without smartphones. Not only is this generation more prone to normalizing the practices of surveillance capitalism; they’re also especially vulnerable to the psychological effects of these practices.
In 2017, former Facebook president Sean Parker admitted that Facebook, like other social media platforms, uses behaviorist tactics such as variable reinforcement to keep people chasing after hits of dopamine – and, more importantly, to keep them glued to their news feed.
Unsurprisingly, this results in the same depressive psychological symptoms that people suffering from addiction and withdrawal experience. But along with addiction, the near-constant online exposure that today’s teens experience has also been shown to produce feelings of confusion, distress, boredom, and isolation.
Research has shown that “Facebook use does not promote well-being,” and the same could be said for the practices of surveillance capitalism in general. But it doesn’t have to be this way.
In 2000, researchers at Georgia Tech were developing the Aware Home. This was a vision of “ubiquitous computing” that isn’t far from the “smart home” that surveillance capitalists are bringing to reality. The big difference is that the Aware Home was designed with user privacy in mind.
The data produced by the users would be under their control. It honored the age-old concept of a person’s home being their sanctuary and a place where they could be free from surveillance.
Sadly, a year later, that concept was uprooted by the events of September 11. But that doesn’t mean we have to give up on this worthwhile dream.
The key message in these blinks:
Following the events of September 11, 2001, efforts to establish online privacy laws were pushed aside. Now, there are very few laws to protect your personal data from being collected and sold to advertisers and used to make more powerful predictive smart devices. This information includes browsing history, phone numbers, email addresses, location history, biometric data, and even a psychological profile based on your social media accounts. This information is becoming more specific and granular as more advanced “smart” devices are entering the market and diminishing the amount of space that is not being monitored for behavioral data.
What to read next: Dragnet Nation, by Julia Angwin
Once you know how much spying the surveillance capitalists are doing on you, you may well want to learn what you can do to stop the surveillance. Julia Angwin’s 2014 book Dragnet Nation offers more insight into online privacy and includes some tips on how to limit your exposure to the data collectors.
If you want to know more about who’s collecting your data and why, and how you can protect your personal information, we recommend you head over to our blinks on Dragnet Nation.
SECOND REVIEW FROM SHORTFORM
About the Book
Are you aware that you’re being watched by big tech companies? In The Age of Surveillance Capitalism, Shoshana Zuboff explores the concept and consequences of “surveillance capitalism”—a term she created to describe the invasive and controlling data collection practices that tech companies like Google, Microsoft, and Facebook have adopted to maximize their profits. By learning about these practices, you’ll be better equipped to protect your privacy and fight to protect the privacy of us all.
In this guide, we’ll explore what surveillance capitalism is, how it’s been able to thrive despite growing opposition, and what we can do to prevent it from destroying our freedom and democracy. We’ll also expand on Zuboff’s ideas by suggesting counterarguments to some of her claims, adding context where possible to enhance understanding, providing updated examples of tech companies’ actions, and recommending concrete steps you can take to combat surveillance capitalism.
Zuboff uses scientific research and numerous real-life examples to support her insights into the development and advancement of surveillance capitalism. She also warns of the potentially disastrous outcomes for our society if we don’t advocate for change in this area.
The Age of Surveillance Capitalism is one of three books that Zuboff has written about different defining stages of technological development. It’s the culmination of years of research that she conducted while a professor at Harvard University.
In this guide, we’ll explore:
- What surveillance capitalism is
- The conditions that spurred its development
- How it’s been able to thrive despite growing opposition
- How it intends to shape our future
- Why we should be concerned about it
- What we can do to prevent it from destroying our freedom and democracy
What Is Surveillance Capitalism?
According to Zuboff, surveillance capitalism is an emerging form of capitalism in which companies harvest data about our behavior, make predictions about our future behavior using that data, and sell those predictions for profit.
(Shortform note: Zuboff coined the term “surveillance capitalism” in 2014, in a paper exploring the future of Big Data and arguing that we have both the power and the responsibility to shape it as we want. The paper was the precursor to many of the ideas discussed in The Age of Surveillance Capitalism.)
Zuboff explains that although most of us are aware that big tech companies are doing something to us, we aren’t capable of understanding the situation’s complexities, implications, or magnitude because it’s entirely unprecedented—we have no other event in history to compare it to.
(Shortform note: As the first event of its kind, the rise of surveillance capitalism is one of many unprecedented events author Nassim Nicholas Taleb calls “Black Swans.” In his book, The Black Swan, Taleb explains that Black Swans are fundamentally unpredictable events that have a major impact on humanity. He argues that the only way we can prepare for these events is by accepting that they’re unpredictable. Other examples of Black Swans are World Wars I and II, 9/11, and the 2008 financial crisis.)
How Surveillance Capitalism Operates
Zuboff tells us that surveillance capitalism consists of four main components: an underlying philosophy, products, a means of production, and a marketplace. Let’s discuss each component in more detail.
Component #1: An Underlying Philosophy
Zuboff explains that although people tend to see surveillance capitalism as a type of advanced technology that’s capable of learning uncomfortably specific details about us, it’s actually a philosophy that guides how companies use technology.
The idea underlying surveillance capitalism is that serving people’s needs is less profitable and therefore less desirable than selling predictions about their future behavior. Through this lens, technology isn’t a means to make our lives better, but rather a means through which companies can better collect and control our data to maximize profits. In other words, in surveillance capitalism, the purpose of technology is to help companies collect more data, make more accurate predictions, and sell those predictions for more money.
(Shortform note: Zuboff says that the philosophy of surveillance capitalism is that selling behavioral predictions is more profitable than serving people’s needs. Why is this so concerning? In his book Basic Economics, Thomas Sowell explains that, in a free market economy, the promise of higher profits is supposed to incentivize businesses to produce goods and services that people want. If companies can earn higher profits without concerning themselves with the needs of consumers, the system breaks down and the people suffer.)
Component #2: Products
Zuboff says that in surveillance capitalism, the product that’s sold is predictions about your thoughts, actions, and emotions. Companies develop these predictions using data they collect about your behavior, both online and out in the real world. This includes everything from your online searches, text messages, and purchases to your facial expressions and attitudes.
(Shortform note: Zuboff makes it clear that large amounts of data make our behavior increasingly easy to anticipate, but just how predictable are humans in general? According to research, we’re extremely predictable. When researchers studied the way random cell phone users moved around, they found that users traveled in simple, regular patterns, regardless of age, gender, language, and other factors. These patterns were so regular that researchers could predict users’ whereabouts within the next hour with 93% accuracy.)
Component #3: Means of Production
According to Zuboff, companies like Google develop their predictions through machine intelligence. Machine intelligence feeds on behavioral data, constantly learning from what it takes in. The more data it collects about how people behave, the more accurately it can predict how people will behave in the future, and the more profitable its predictions become.
(Shortform note: Although Google’s use of machine learning to predict human behavior and sell those predictions for profit is unprecedented, machine learning has been around since the 1950s. In 1952, Arthur Samuel of IBM wrote a computer learning program that could improve its ability to play checkers with every game by studying the winning strategies.)
Component #4: Marketplace
Zuboff says the surveillance capitalism marketplace is where companies – the customers in this form of capitalism – trade in predictions about people’s behavior. While it was originally meant for advertisers, these days, any company that wants to take advantage of information about our future behaviors can participate by purchasing behavioral predictions from companies that gather user data.
(Shortform note: In her book, Zuboff paints the expansion of this marketplace to sectors other than advertising as a negative development that typically causes harm to society. However, in some cases, behavioral predictions can, and have, been used for good. For example, pet adoption agencies can use predictive analytics to determine which pets people are more likely to adopt. That way, they can focus their efforts on those pets that need more help being placed with families.)
The Conditions That Led to Surveillance Capitalism
Zuboff argues that surveillance capitalism was born out of a specific set of conditions—a perfect storm of factors that produced a new technological philosophy. These conditions include the rise of neoliberal ideology and the intensification of surveillance following the 9/11 terrorist attacks in the United States. Let’s explore each condition separately:
Condition #1: The Rise of Neoliberal Ideology
Zuboff explains that the anti-regulation mindset of neoliberalism contributed significantly to the inception of surveillance capitalism. Following WWII and the height of the Cold War, the Western world—in particular the US and UK—was in the midst of an economic downturn. In addition, these countries were extremely wary of the governmental control underlying totalitarianism and communism in other parts of the world. As a result, the people of the US and UK began calling for more democratic participation and equal rights for marginalized groups.
As a solution to both the economic decline and the public’s demand for less governmental authority, neoliberal economists began to advocate for a radical free market based on Friedrich Hayek and Milton Friedman’s economic theories. They pushed for an unimpeded market where competition would reign unchecked, and deregulation and privatization would replace government oversight, labor unions, and government-owned corporations.
While the US never adopted an entirely free market in practice, the essence of these neoliberal ideas took hold of the economy. Zuboff says that by the 1990s, the idea of self-regulation grew out of the absence of government regulation. Companies gained the ability to oversee themselves, which set up ideal conditions for Google to experiment with the data it had been collecting about people in whatever way it saw fit.
The Ties Between Economics and Politics
As Zuboff explains, neoliberal ideology grew in popularity because it seemed to address the public’s political concerns following WWII and the height of the Cold War. However, she doesn’t explain exactly why economists like Hayek and Friedman believed that neoliberalism was the answer to the public’s fears of governmental control and the lack of a political voice.
In Friedman’s book Capitalism and Freedom, which details his economic theories, he argues that economic freedom is essential to political freedom. This is because being able to choose which goods and services satisfy one’s individual needs puts significant power in the hands of the people, rather than the government. In other words, free markets give control of the economy to the public, thereby checking the power of government, maximizing individual freedoms, and preventing the oppression of individual rights.
However, one could argue that Friedman’s assertion that economic freedom gives control back to the public is misleading due to the existence of monopolies, which are a product of free, unimpeded markets. A monopoly exists when one person or organization becomes the only supplier of a product or service. Without competition, the monopolist can control prices and therefore the market itself. When monopolies have total control, it undermines the benefits of a free market outlined in Friedman’s argument, as the power is in the hands of a few corporations rather than distributed among the people. Therefore, perhaps neoliberalism wasn’t the answer to people’s fears that economists claimed it was.
Condition #2: Heightened Surveillance After 9/11
Another condition that helped pave the way for surveillance capitalism, continues Zuboff, is the US’s push for heightened surveillance after the September 11, 2001 terrorist attacks. This push influenced Google to collect more and more data about its users, and as already discussed, this behavior ultimately gave rise to the philosophy of surveillance capitalism.
Zuboff explains that before 9/11, the Federal Trade Commission (FTC) recommended that online privacy be regulated. Had these recommendations been adopted, many practices of surveillance capitalism would now be illegal. However, after the attacks, the US government disregarded these recommendations and instead intensified its own surveillance in the name of fighting terrorism, as seen with the passing of the Patriot Act.
(Shortform note: According to a report issued by the FTC in 2000, its motivation for recommending government regulation was the discovery that only 20% of the busiest commercial sites at the time met the FTC’s basic standards for privacy protection under self-regulation. This suggested that self-regulation alone was not enough to protect consumers’ privacy and that government action was required.)
Intelligence agencies became interested in working with Google and its search-engine capabilities to reliably predict and detect future threats by collecting internet data about anyone and everyone. These partnerships encouraged Google to invest in new technologies and participate in behaviors that would lead to surveillance capitalism.
The Patriot Act and Surveillance Capitalism
As Zuboff explains, the US government heightened surveillance after 9/11 in the name of fighting terrorism; for example, by passing the Patriot Act. However, Zuboff doesn’t explain the details of this act and how it may have influenced Google in the coming years.
The Patriot Act expanded existing surveillance laws to allow the government to access the phone, email, bank, and credit records, as well as the internet activity, of average Americans. But despite being advertised as a measure to combat terrorism, these investigations led to just one terrorism conviction from 2003 to 2006. In addition, the Patriot Act didn’t require the government to destroy the information it collected—even when it was from innocent Americans—and prohibited Americans from telling anyone that they were under investigation.
These secretive and invasive practices aimed at average citizens arguably set a dangerous precedent that would later play a major role in the development of surveillance capitalism.
Google’s Journey as the Pioneer of Surveillance Capitalism
According to Zuboff, Google was the pioneer of surveillance capitalism. It developed the foundations of surveillance capitalism in response to the bursting of the dot-com bubble, which caused the stock prices of internet companies to plummet and left companies like Google in a dire financial position.
(Shortform note: While Zuboff details the consequences of the dot-com bubble, she doesn’t explain what it was. In the 1990s, overly optimistic investments in technology companies led to a rapid rise in their stock equity valuations. These stock prices were far higher than the technology companies’ true value, creating a bubble that eventually burst in 2001.)
This bursting of the dot-com bubble prompted several phases in Google’s journey that ultimately led to the creation and expansion of surveillance capitalism. Let’s look at each phase in detail.
Phase #1: Google Starts Using Targeted Advertising
Zuboff explains that from its inception, Google’s founders Larry Page and Sergey Brin had intended for Google to be a free search engine. They refused to charge people for using their service and committed to excluding ads from their site. However, this left them with few opportunities to earn revenue, which made them extremely vulnerable when investors looked to pull out as the dot-com bubble burst.
To create a consistent source of revenue, Page and Brin finally decided to surrender to the idea of adding advertisements to their site. They used the data they had collected from searches to match advertisements to specific users, making the ads more relevant and therefore more valuable to advertisers.
(Shortform note: Page and Brin were forced to compromise their vision for an ad-free site to please investors, which business experts say is a common problem. In fact, some experts recommend that start-ups forgo venture capital funding altogether to avoid this exact predicament. They claim that the less external funding that start-ups raise, the more successful they are.)
The History of Targeted Advertising
Although targeted advertising became the solution to Page and Brin’s revenue issue, they were far from the first to implement it. Advertisers began targeting certain demographics as early as the mid-1990s, when they realized they could reach certain groups of consumers depending on where they placed their ads on websites.
In the years to follow, targeted advertising became increasingly advanced with new tracking tools. By the time Google entered the scene, sponsored search—which gave advertisers the chance to bid for top search engine results related to particular terms—was already a popular advertising model.
Phase #2: Google Discovers the Potential of Predictions
Then, Zuboff says, in the early 2000s, a seemingly insignificant event caught Google’s attention. It noticed a large spike in searches across different time zones for the maiden name of Carol Brady (a popular character from the American sitcom The Brady Bunch) after the question aired on Who Wants to Be a Millionaire? In other words, the increase in searches corresponded to a precise and predictable pattern that Google’s analysts could see within their data.
The Carol Brady incident made Google realize that it could use its search data to identify events and trends before the news media and predict with precision what users were looking for. With these predictions, it could then target users with more relevant advertisements. Zuboff argues that this marked the beginning of surveillance capitalism because Google recognized the value and power of its behavioral data for the first time.
Google Was Not the First Organization to Use TV for Mass Predictions
While it may have been a breakthrough moment for Google, the Carol Brady incident was not the first time that a company had been able to predict people’s behavior in connection with a televised event. Over a decade before the Carol Brady incident, the UK’s National Grid had successfully forecasted electricity needs based on major televised events it predicted would cause energy surges.
How do these events relate to energy surges? In the same way that the Carol Brady question on Who Wants to Be a Millionaire? caused a surge in search queries, certain popular televised events—major soccer games, royal weddings, or even highly anticipated episodes from popular TV shows—cause energy surges when UK households collectively go to the kitchen and make a cup of tea during commercial breaks. The largest energy surge to date occurred during the 1990 World Cup semifinal, after England missed a penalty against West Germany.
Phase #3: Surveillance Capitalism Expands
Over time, continues Zuboff, Google progressed from collecting data from only its search pages to extracting it from sites across the internet to improve its predictions and sell them to advertisers at a higher value. Once other tech companies (and eventually, non-tech companies) realized how profitable Google’s model was, they followed suit by finding ways to extract their own behavioral data. As an example, Zuboff cites Microsoft, which launched a personal assistant called Cortana to capture users’ personal information. Cortana encourages users to share as much of their data as possible to improve its functionality.
(Shortform note: Zuboff uses Microsoft’s Cortana as an example of how other companies have created their own data extraction tools. But just how concerned should users be about Microsoft’s collection of their personal information? A privacy report conducted by Common Sense Media gave Cortana an overall privacy rating of just 71%—a poor score in the context of online privacy. The report also rated Cortana as particularly poor at preventing the sale of data, prohibiting the exploitation of users’ decision-making process, and following student data privacy laws.)
Now, companies are continuously inventing new technologies capable of extracting more specific personal information from users, such as wearable technologies that capture people’s biometric data, surroundings, and even emotions.
(Shortform note: In addition to the wearable technologies that Zuboff mentions, researchers are now developing ways to implant consumer-driven monitors inside of our bodies that could extract specific medical data. These implants would be able to do things like screen patients before appointments or even monitor glucose levels by pairing with a mobile app.)
Why Surveillance Capitalism Has Managed to Thrive
How have Google (and, later, other companies) managed to keep mining user data despite showing such blatant disregard for privacy? Zuboff argues that there are a variety of factors that have contributed to surveillance capitalism’s ability to thrive. These factors fall into three categories: overcoming opposition, cornering the public, and mastering the art of disguise. Let’s discuss each of them further.
Surveillance Capitalism Has Overcome Opposition
Zuboff argues that Google and its competitors have learned to overcome any form of opposition, making it difficult for the public to demand—and lawmakers to enact—change.
Tech companies have refused to take accountability. These companies have never stopped to consider whether their actions are immoral or against public opinion and have proceeded unfazed by any and all attempts to raise concern.
(Shortform note: Do companies have moral responsibility, as Zuboff seems to suggest here? Some would argue that they don’t. According to the legal compliance view argued by Milton Friedman, corporations have no moral obligations outside of their legal obligations.)
Tech companies have developed strong defenses. Google (and, later, its competitors in a similar fashion) has defended itself from governmental threats by proving its value to political campaigns, investing in lobbying, and building close ties with Washington.
(Shortform note: Just how involved is Google in the political sphere? According to reports, in 2018, Google spent $21 million on federal lobbying, more than any other company in the US. In addition, as of 2019, its public policy division provided funding to 349 different organizations—including academic institutions, trade organizations, and advocacy groups—that work to defend Google and its practices.)
Surveillance Capitalism Has Cornered the Public
Zuboff maintains that Google and its competitors have cornered the public in such a way that they are effectively unable to resist the problematic practices of surveillance capitalism.
Surveillance capitalism has fostered dependency. By tying data extraction to free services that meet people’s needs, Google and other companies have forced customers to allow their invasive practices. These days, it’s difficult—if not impossible—to live without access to those resources.
(Shortform note: Is this dependency on Big Tech’s services as absolute as Zuboff claims? Our experience during the Covid-19 pandemic would suggest it is. According to a Pew Research survey conducted in 2021, 58% of Americans said their use of the internet and technology—like video calls—was essential, and 90% said it was at least important.)
Surveillance capitalism has prevented users from reclaiming their privacy. Because users are dependent on their services, Google and other companies have no incentive to prioritize user privacy. As a result, they either provide no alternative option where user privacy can be protected, or make the information regarding how to opt out of data collection extremely difficult to find.
(Shortform note: Zuboff argues that companies either provide no option to opt out of data collection or make the information about how to do so nearly impossible to find. Research conducted since the book’s publication supports this claim. In a 2020 study, over 50% of the 7,000 websites examined by researchers contained no option to opt out of data collection, and just over 11% provided only one opt-out hyperlink.)
Surveillance capitalism has exploited people’s desire for inclusion. Google and other companies—especially social media sites like Facebook—have taken advantage of the fact that people have a natural desire to feel included, which makes them highly likely to keep using their social services.
(Shortform note: How does this exploitation work? Psychologists say that social media platforms create a cycle of isolation and connection to keep us reliant on their apps. They isolate us by enticing us to connect with acquaintances and strangers rather than see our friends and family face-to-face. Then, when we’re feeling lonely, we’re influenced to look to their apps to feel like we’re connecting with our social networks, restarting the vicious cycle.)
Tech companies have leveraged their image of authority. Because of their innovative technologies, the public sees Google and its competitors as experts on the ways of the future. This means that people feel they can’t question them, and tech companies have taken advantage of that position to continue their data collection practices.
(Shortform note: Zuboff’s claim that people feel they can’t question tech companies may itself be called into question. According to a 2021 survey, trust in technology has dropped to a record low in the US and 17 other countries, including China, the UK, and Germany. Diminishing trust may indicate that people around the world are indeed questioning the authority of these companies.)
Surveillance Capitalism Has Mastered the Art of Disguise
Zuboff argues that Google and its competitors have mastered the art of disguise so that their intentions and practices are undetectable and therefore unstoppable.
Tech companies have masked their intentions. Google and other companies have learned to mask their intentions with innovative technologies like personalization and digital assistants. Because these technologies are undeniably useful, companies can distract users from the fact that they simultaneously harvest sensitive information.
(Shortform note: Perhaps companies’ intentions aren’t as well-masked as Zuboff implies. According to a survey, while 76% of Americans say they use smart assistants like Amazon’s Alexa and Apple’s Siri, 61% of those who use them are also worried that these devices are listening to their private conversations.)
Surveillance capitalism has operated in secret. Google and its competitors have worked hard to conceal the details of their data-mining practices and have actively opposed calls to reveal information.
(Shortform note: In some cases, companies are so secretive that employees themselves are out of the loop. For example, in 2018, Google employees requested that the company be more transparent about an ongoing project to develop a search engine for China. Employees cited concerns that they couldn’t make an ethically informed decision to continue working on the project without further information.)
Surveillance capitalism has progressed at lightning speed. As a side effect of Google and its competitors’ fast-moving technological innovation, the public and government have been incapable of processing and confronting these changes fast enough to raise concerns and enact regulatory policies.
(Shortform note: Technological progress is likely to become even faster (and even more difficult to regulate) as time goes on. Famously championed by Ray Kurzweil, the Law of Accelerating Returns states that technological progress occurs exponentially, not linearly—the rate at which technology transforms our world is constantly increasing. Kurzweil, currently in his 70s, predicts that technology will accelerate so quickly in the next few years that he’ll be able to live forever.)
The Ultimate Goal of Surveillance Capitalism
Zuboff argues that the ultimate goal of surveillance capitalism is to create a society in which our free will is replaced by behavioral conditioning that encourages predictable and machine-like patterns of behavior. This would eliminate human mistakes, accidents, and randomness. By guaranteeing specific human behavior, companies like Google can sell certainties instead of predictions and maximize their profits.
(Shortform note: Zuboff argues that the goal of surveillance capitalism is to replace human error with predictable, machine-like behavior. However, according to behavioral economics theory, humans are both irrational (and thus error-prone) and predictable. In his book Predictably Irrational, behavioral economist Dan Ariely argues that humans are systematically irrational, meaning that we tend to repeat the same mistakes in a predictable way without recognizing or correcting them. If this is true, then surveillance capitalism’s aim of behavioral conditioning may be misguided; to make behavior fully predictable, companies simply need to learn the patterns of our mistakes, rather than trying to eradicate mistakes entirely.)
Methods to Modify Behavior
Zuboff says that at present, tech companies use various methods to modify people’s behavior. One strategy they use is to provide subliminal cues that subtly influence people’s choices without them realizing it. For example, Airbnb displays how many other users are browsing for the same dates as you to create subconscious urgency to book a reservation.
Another method tech companies use to control users’ behavior is to reinforce actions that build a predictable routine—a routine that will reliably guarantee the outcomes companies want. For example, UberEats suggests ordering food at meal times, thereby reinforcing a routine of using the app on a regular schedule.
The Birth of Persuasive Technology
By describing these methods, Zuboff shows that companies have become remarkably good at modifying people’s behavior. How did they come to master this art of manipulation?
According to psychologist Richard Freed, the tech industry developed its powerful methods of persuasion by studying the behavioral research of B.J. Fogg. Fogg discovered that to modify behavior, you need to give your target motivation, ability, and triggers. In his book Tiny Habits, Fogg describes this model in detail, explaining that motivation is the desire to act, ability is the capacity to act, and triggers are the cues that prompt you to act. So, for example, Airbnb creates motivation to book a reservation by showing how many users are browsing for the same dates. Similarly, UberEats’ meal time notifications act as triggers to keep you using the app on a regular schedule.
Because Fogg taught classes at Stanford University, which is a hub for the tech industry, he was in close contact with many individuals who would go on to develop the technologies of surveillance capitalism, like Instagram. They learned Fogg’s research directly from him and went on to test and perfect it for their industry. The result is the methods of behavioral modification that Zuboff describes.
Creating a Fully Connected Society
Zuboff explains that to reach a point of total predictability, companies’ control over our behavior needs to be all-encompassing. To accomplish this, companies want to create a society in which people and devices are connected at all times.
(Shortform note: We may be closer to the connected society that Zuboff describes than you think. Meta (previously Facebook) is currently designing the Metaverse. This is a type of cyberspace that uses technology like virtual and augmented reality to blend the physical with the digital world. Once created, the Metaverse could facilitate the type of connection and control that Zuboff describes.)
As an example of what this would look like, Zuboff cites a patent application by Microsoft for a device that would monitor human behavior to detect anything abnormal, such as excessive shouting. The device could then report those abnormalities to individuals like family members, doctors, or law enforcement.
(Shortform note: Since the book’s publication, Microsoft has filed for similar patents, such as one for a system of sensors that would monitor employees’ body language, facial expressions, speech patterns, and mobile devices to track a meeting’s overall quality in real time. Although they haven’t stated it as their intention, Microsoft could use such sensitive data to surveil and control employees in the manner that Zuboff describes.)
Social Principles of a Connected Society
Zuboff argues that in this type of connected society, relationships within the community would fundamentally change, and algorithms would replace familiar social functions—like supervision, negotiation, communication, and problem solving—that govern current civilization.
(Shortform note: While Zuboff focuses her discussion on the future role of algorithms, researchers say that the current use of algorithms is already negatively impacting our society. In particular, our reliance on algorithms has led to the persistence of bias, deepening social divides, and the rise of unemployment.)
Zuboff identifies several social principles that would underlie this new reality. First, in a connected society, we would prioritize the collective over the individual. Companies would justify total control over our behavior by arguing that it’s “for the common good.” Furthermore, valued concepts like privacy and individuality would cease to exist for the sake of total connection and harmony.
(Shortform note: From a philosophical standpoint, there are counterarguments to Zuboff’s warnings about prioritizing the common good and forgoing freedom. For example, some would say that the concept of “common good” doesn’t exist, because each individual has unique experiences. Therefore, there’s no single policy that could benefit everyone, and companies couldn’t aim for such an ideal. On the other hand, others would argue that the total transparency of a hyper-connected, data-driven world would be its own kind of freedom, as it would allow us to understand far more about the world around us.)
In addition, instead of relying on the negotiation and compromise of politics to make decisions for society, automated systems would quickly compute certain solutions for the greater good.
Once they’ve determined specific solutions, companies would manipulate connections between people to drive change. In other words, they would influence your actions by exploiting your desire to “do what your friends are doing.”
Automation and the Power of Social Connection
Zuboff argues that in this connected society, automated decision making would replace politics and all of its relational components. But is this such an undesirable thing? According to a British poll conducted in 2018, one in four people would prefer robot politicians to human politicians, and research shows that AI is particularly good at understanding public issues.
That said, it’s important to consider the social implications of such innovation, such as whether people can connect with robots in the same way they do their human representatives. As Zuboff explains, social connection is a powerful tool—which is why she says that companies may be looking to manipulate it.
Consequences of Surveillance Capitalism
Zuboff stresses that surveillance capitalism has already caused a number of grave consequences for our society, and particularly for our democracy. Let’s discuss each consequence in more detail.
Consequence #1: Surveillance Capitalism Threatens Our Right to Privacy
Zuboff explains that companies often collect information without the knowledge or meaningful consent of their consumers. Additionally, they invade personal spaces—both physical and psychological—to do so.
For example, she says that even when we’re inside our homes, devices like TVs, thermostats, and even mattresses are monitoring and delivering information about what we say and do to company computers. In addition, companies can analyze metadata—like how often you change your profile picture—to determine extremely specific information that you never intentionally disclosed, such as whether or not you have depression.
How Far Does Privacy Invasion Go?
Here, Zuboff warns that tech companies are violating our privacy by invading our personal spaces and analyzing our metadata for extremely specific personal information. Arguably an even more severe threat to privacy is that these companies share this private information with law enforcement agencies without people’s knowledge or consent and without the required warrants.
For example, according to one report, law enforcement demanded seven days of location information from a man’s cell phone provider in connection with a criminal investigation. Fortunately, however, when the case was taken to the US Supreme Court in 2018, the Court ruled that his location information was protected by his Fourth Amendment rights, meaning law enforcement needed a warrant to access it. That said, whether this ruling will serve as a precedent for future cases involving privacy and technology remains to be seen.
Consequence #2: Surveillance Capitalism Removes Individual Autonomy
Zuboff argues that because companies aim to control people’s behavior—often outside of our awareness—they have removed our right to individual autonomy. Not only do companies engage in practices like eliminating the option to opt out of privacy invasion and fostering the public’s dependency on their services (as we’ve discussed earlier), but they also interfere with our emotions and choices without our knowledge or consent.
(Shortform note: While Zuboff’s argument presupposes the existence of free will, some would say that there’s no such thing. They insist that because we’re always influenced by biological and environmental factors outside of our control—like the health conditions we’re born with or the family we’re raised by—we don’t have the autonomy over our lives we think we do. Following this logic, no one can take away our free will, because it didn’t exist to begin with.)
For example, Zuboff says that in 2010, Facebook ran an experiment to test whether it could mobilize people to vote. It found that it could influence whether a person went to the polls by showing them photos of friends and family who had already voted. While it may seem minor, this subtle manipulation essentially removed their autonomous decision about whether to vote.
(Shortform note: Social media has been used not only to influence people to vote but also to influence whom they vote for. According to a report published by the US Senate in 2018, Russia tried to influence American voters through all major social media platforms prior to the 2016 election. This wide-scale attempted manipulation has concerning implications for both national security and democracy, which hinges on individuals’ right to have a voice.)
Consequence #3: Surveillance Capitalism Disregards Social Norms
According to Zuboff, surveillance capitalism’s goal of total predictability has influenced companies to disregard social norms in favor of machine automation. Because these social norms involve flexibility and risk—which are inherent in human-to-human interactions—they can’t facilitate the high level of behavioral control that companies seek.
For example, Zuboff says auto loan lenders install devices that deactivate the car’s engine if borrowers are late on payments. While this automatic process may help lenders avoid risk, it’s also devoid of the empathy for human struggles—like whether the person was short on money due to illness—that is essential to the social contracts of our current society.
Starter Interrupt Devices Disengage Morality
These starter interrupt devices and similar technologies of surveillance capitalism arguably encourage action that lacks empathy and regard for social norms because they depersonalize the action. Research shows that people disengage their sense of morality when they lose the sense that the people they are mistreating are unique individuals. Someone who presses a button to shut off a borrower’s engine might not do the same thing face-to-face with the car’s owner.
In addition to being devoid of empathy for human struggles—as Zuboff explains here—starter interrupt devices also unfairly target the poor. Dealers and lenders see them as a way to protect their assets from “risky” borrowers with poor credit scores and other financial challenges, so they most often install them for poor borrowers who have no choice but to apply for subprime auto loans. What’s more, the devices are not only troubling for their lack of empathy and unfair targeting, but they can also be genuinely dangerous: Some borrowers have claimed that their cars were shut off while idling or even while driving on the freeway.
Consequence #4: Surveillance Capitalism Damages Our Mental Health
Zuboff says that some of the methods companies use to extract our data damage our mental health, and social media is the chief offender. She argues that apps like Facebook and Instagram have encouraged extreme social comparison, which has caused damaging psychological effects like diminished self-esteem and self-worth, increased body judgment, and more frequent depressive moods.
Zuboff elaborates that because social media puts our lives on constant display, we exaggerate reality to gain standing among our peers. This causes others to feel inferior in comparison and pushes them to keep up with an unrealistic standard. Ultimately, this traps everyone in a vicious cycle of comparison and posturing that leads to a downward spiral of worsening mental health.
What’s Behind Our Social Comparison?
While Zuboff describes the damaging cycle of social comparison, she doesn’t speak to the cultural roots of our need to compete with our peers in the first place. According to Brené Brown in Daring Greatly, we live in a flaw-focused culture that makes us feel as if we’re never enough. In response to these feelings of inadequacy, we try to compensate by showing how incredible we are.
In today’s society, one of the ways we do this is by posting on social media, where we receive external validation in the form of likes and follows. While psychologists say that some level of validation seeking is normal, social media has made us rely exclusively on validation from others, which has led to the negative psychological effects that Zuboff describes. To overcome this tendency, psychologists recommend identifying when you’re seeking external validation and instead taking actions to self-validate: for example, journaling about your improvements and successes and learning to encourage yourself.
Consequence #5: Surveillance Capitalism Causes a Loss of “Self”
Zuboff explains that the hyper-connected world that is a major consequence of surveillance capitalism has caused young people—who have not yet matured and developed a strong sense of identity—to lose their sense of “self.” While we all have a desire for connection, she says that adolescents in particular have become so dependent on their connections to others and so incapable of escaping public view that it has threatened their ability to develop an identity that is separate from others’.
As a result of this loss of self, adolescents have become less able to tolerate solitude and more vulnerable to peer pressure and manipulation. They also try to control other people because they see others as an extension of themselves.
Developing a Sense of Self in the Face of Hyperconnectivity
Why does hyperconnectivity threaten adolescents’ ability to develop a sense of self? According to psychologist Erik Erikson’s Stages of Development theory, adolescents need sufficient opportunities for personal exploration of their beliefs, ideals, and values to develop a secure sense of independence and identity. Spending an excessive amount of time connected to others online may limit those opportunities and prevent adolescents from developing their sense of self.
To help adolescents build a stronger sense of identity and avoid the negative effects of hyperconnectivity that Zuboff mentions, parents can encourage their teens to explore their interests, avoid pushing their own agenda on their children, and let their children learn from their own choices.
How Society Has Tried to Fight Back
Zuboff argues that although Google and other companies have been overwhelmingly successful at avoiding and blocking any regulation that would curb their surveillance capitalism practices, this hasn’t stopped people and governments from trying to fight back.
For example, in 2011, 90 Spanish citizens submitted claims demanding that Google give them the right to have their private information removed from its site. Their reasons included staying hidden from abusive partners and leaving old arrests behind. The “right to be forgotten” became a fundamental principle of EU law in 2014.
Then, in 2018, the EU adopted the General Data Protection Regulation (GDPR), which forces companies to modify their data activities according to certain regulations. For example, companies are prohibited from making personal information public by default.
In addition, activists, artists, and inventors have created ways to evade the prying practices of surveillance capitalism. These include signal-blocking phone cases that help protesters hide their location by cutting off all wireless communication.
(Shortform note: You, too, can work to combat the practices of surveillance capitalism thanks to an inventor who developed a way to make signal-blocking phone pouches at home. The design requires materials that can be purchased online, as well as some light sewing.)
Opposing Viewpoints: The EU and US’s Take on Surveillance Capitalism
Zuboff cites two examples of laws that the European Union has used to fight back against surveillance capitalism. But what action has been taken in the United States, where core companies like Google, Microsoft, and Amazon are based, and how does that impact the actions of the EU?
With regard to the “right to be forgotten,” the US opposes the EU. From the US’s perspective, the right to be forgotten violates the First Amendment right to free speech because companies have a right to publish whatever information they want online, even if that information reveals undesirable truths about individuals. In addition, in 2018, the US Congress enacted the US CLOUD Act, overruling the GDPR before it could be enforced.
With the US at the heart of surveillance capitalism, is it possible for the EU and other countries to drive meaningful change without American support? Thus far, it seems unlikely. For example, as previously mentioned, the US CLOUD Act overrules the GDPR. That means that US-based companies can and must allow the US government access to all stored data, including data stored on EU servers. In other words, the GDPR is rendered useless in the US—a likely outcome for any legislation regarding foreign data.
What We Can Do to Stop Surveillance Capitalism
Despite these efforts, we haven’t been able to drive change fundamental enough to end surveillance capitalism. Zuboff insists that to defeat it, society must undergo a series of mindset shifts:
- First, we must slow down and become aware of what’s happening around us.
- Second, we have to recognize surveillance capitalism as inherently anti-democratic.
- Third, we need to reignite our anger and fight for our right to privacy and self-determination.
- Fourth, we must accept that, as individuals, we’re powerless. To stop the progression of surveillance capitalism, we need collective social action.
Ultimately, Zuboff argues that, regardless of others’ past decisions, it’s each new generation’s responsibility to make things right.
What Can We Do to Stop Surveillance Capitalism?
Although Zuboff doesn’t offer any specific action steps individuals can take to stop the advance of surveillance capitalism, other writers offer tips on how to achieve this kind of change. Since Zuboff asserts that it’s every new generation’s responsibility to make things right, she would likely encourage everyone to take active steps like these:
To become more aware of what’s happening, intentionally research political action regarding big tech and surveillance capitalism. Keep close tabs on the members of Congress who represent you: Sign up for their newsletters, follow them on social media, and create Google News alerts for their names. Use GovTrack.us to stay informed about current congressional legislation as it develops.
To help society recognize surveillance capitalism as anti-democratic and inspire ourselves (and others) to fight for our right to privacy, find ways to spread Zuboff’s message. In Contagious, Jonah Berger explains that effectively spreading ideas is all about influencing others to spread them in everyday conversation. To do this, make your idea as visible as possible and engage your audience with an emotional story. For example, 2017’s #MeToo movement successfully spread awareness of sexual abuse and harassment by urging its audience to use a specific hashtag (increasing the movement’s visibility) and share their personal, emotionally charged stories (engaging the audience).
To aid collective social action, join an existing activist group. There are plenty of activist organizations currently fighting Big Tech for our right to privacy, including the Electronic Privacy Information Center, the Electronic Frontier Foundation, and Privacy International. Seek a job opportunity or volunteer at one of these organizations—or just donate.