The numbers that describe data breaches are often so mind-boggling that, when they’re inevitably adjusted upward by a few tens of millions, it’s difficult to muster any additional shock. The number of Facebook accounts improperly accessed by Cambridge Analytica was initially reported as 50 million, then 87 million, now maybe more. Last year’s Equifax breach compromised 147 million accounts and climbing, and the infamous Yahoo breach of 2013 is now thought to have nabbed 3 billion. That’s more accounts than people living in the Americas, Africa, and Europe, combined.

Impossible as it may be to conceptualize the magnitude of these breaches, their exposure confronts us with a basic fact of our economic life, one that most of us prefer not to think about: that the potential value of our personal data is what underwrites any number of platforms and services that we’ve come to rely upon, and that we tend to consider “free.” A New Yorker piece on the Cambridge Analytica scandal helpfully frames the economic context in which it happened: “For years, tech critics warned, ‘You’re not the customer, you’re the product,’” author Andrew Marantz writes. “[Online platforms] give consumers free things, such as birthday reminders and quick bursts of quantifiable attention, in exchange for their private data, which digital marketers then use to sell them products, ideologies, or candidates.”

The term “potential value” points to part of what distinguishes this moment in economic history. “To suggest that a Facebook user consents to all the ways Facebook uses or might use their data is completely to misrepresent the logic at work here,” wrote William Davies last week in the London Review of Books. “Its potential value and use emerges after one has collected it, not before.” Author Nick Srnicek calls this economic state of affairs “platform capitalism.” Unlike earlier economic systems, which were rooted in the value of material things with generally known uses, platform capitalism assumes that companies haven’t yet discovered all the ways they might monetize personal data. This very unboundedness is part of what allows companies that have never turned a profit to achieve multi-billion-dollar valuations. And if your data is valuable enough to fund Silicon Valley unicorns, it’s inevitable that opportunistic innovators and imitators—like, for example, Cambridge Analytica—will be after it as well.

The Cambridge Analytica incident is significant in part because of its alleged entanglement with two major political events: the UK’s departure from the European Union, and the election of Donald Trump. As Davies reminds us, the degree to which Cambridge Analytica contributed to either event is unknowable, and the story’s notoriety is inflated by its utility as an explanation, however flawed and incomplete, for political events that many found shocking and confusing. In this way, Davies writes, the hidden-camera exposé of the Cambridge Analytica founders’ smug shadiness made them “grotesque personalities on which to focus anger and alarm”—no less guilty for being so, but clearly only representative of a larger phenomenon.

Naming that larger phenomenon (and its implications) correctly, without naïveté on the one hand or paranoia on the other, is the trick. On this point, Alan Jacobs has written a profoundly helpful piece for The Hedgehog Review titled “Tending the Digital Commons: A Small Ethics toward the Future.” Jacobs’ piece is a reflection upon a basic reality of contemporary life: our online lives are vulnerable, because we live them mostly on platforms, which are owned and maintained by corporations, whose primary incentive is to make the content we generate and the data we share profitable. This reality prompts Jacobs to ask a basic ethical question: given the growing extent to which life itself is lived online, what responsibility do we have for the state of the digital ecosystem we will hand down to future generations? What would it mean to leave them, borrowing a phrase from Tolkien, “clean earth to till”?

Jacobs reminds us that, digitally speaking, most of us are tilling land owned by massive social media companies, and we’ve begun to “forget that there’s any other way to live.” But there is: the Web itself is sustained by a set of decentralized organizations relying heavily on volunteer labor and open-source software, and anyone who takes the time to learn a few basic skills (which Jacobs lists) can contribute to it on their own terms, rather than those of a corporate platform. Many of us—and perhaps especially the “digital natives” among us—are so accustomed to our platform-dominated digital landscape that we’re oblivious to the possibility of an online life outside the high walls of big companies. By learning and teaching the value of being a “responsible citizen of the open Web,” we not only create space for living “extramurally,” but also develop and pass on a more intuitive understanding of how the internet shapes the way “knowledge is formed and shared; how identity is formed and expressed.”

The way Jacobs wants us to rethink our online lives has political implications as well (in a deep sense of the term): it’s one way to respond to a broad cultural shift in which public institutions are being eclipsed by private platforms. Platforms appear to decentralize power, but mostly they just depersonalize it, making it less visible, less accountable, and less responsive to people. Jacobs notes, for example, that “it is virtually impossible to contact anyone at Google, Facebook, Twitter, or Instagram, and that is so that those platforms can train us to do what they want us to do, rather than be accountable to our desires and needs.” Public accountability is simply not what corporate platforms are built for—which is precisely why our tendency to treat them as an “upgrade” over public institutions is dangerous. Jacobs’ predictions about the implications of this tendency remind us that the vulnerability of our personal data is just the tip of the iceberg:

To the extent that people accommodate themselves to the faceless inflexibility of platforms, they will become less and less capable of seeing the virtues of institutions, on any scale. One consequence of that accommodation will be an increasing impatience with representative democracy, and an accompanying desire to replace political institutions with platform-based decision making. … Among other things, these trends will bring, in turn, the exploitation of communities and natural resources by people who will never see or know anything about what they are exploiting.


To the degree that massive data breaches give cause for, in Davies’ words, “anger and alarm,” where should those emotions be directed? To be sure, a great deal of the responsibility lies with the corporations to whom we entrust personal data, and who invariably promise that this data is “secure.” But it’s difficult to escape the feeling that some of our frustration is with ourselves. As consumers, part of what may be in order is a renewed mindfulness of a basic economic principle, as applied to our digital economy: there’s no such thing as a free platform. This isn’t to say that the tradeoffs involved in exchanging data for services are never worthwhile; but it is to say that most of us, most of the time, don’t consider those tradeoffs very carefully. As citizens, media scholar Siva Vaidhyanathan prescribes a more difficult and far-reaching remedy: “Our long-term agenda should be to bolster institutions that foster democratic deliberation and the rational pursuit of knowledge,” he writes, “[including] scientific organizations, universities, libraries, museums, newspapers and civic organizations.” A renewed mindfulness, in other words, of the value of what we share.