Clinton Watts’ new book, Messing with the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News, is a sobering recounting of the nearly 20 years of information warfare that Russia, al-Qaeda, ISIS, and others have waged on Western alliances and their members. Watts, whose testimony last year before the Senate Intelligence Committee was a wake-up call to anyone watching, carefully describes the through-line from a simple HTML world where connectivity liberated information from its local stores to the intrusive, abusive, and divisive information landscape of today.
A filter bubble — a term coined by Eli Pariser in his 2011 book of the same name — is a familiar concept, but Watts believes that over the ensuing years something more fundamental has taken hold, something that has changed how we perceive ourselves, one another, and our relationship with the physical world and empirical reality. He writes:
The Internet brought the world together, but, over time, social media has torn the world apart. There are many reasons why this has happened, but one factor stands above the rest: preference. . . . The relentless pursuit of preferences turns smart crowds into dumb mobs, leads to the selection of preferred fictions over actual facts, and creates an environment where humans have access to more information than ever but actually understand less about the physical world. . . . Online, the pursuit of comfort and confirmation create an alternative reality.
What was once pursued as “personalization” has become a habit of inhabiting a world made up of only the things you prefer. For those of us who remember the early drivel about personalization, and our gut-level anxieties about its empty promise, the reality has turned out to be far worse than we could have imagined. Setting the individual at the center of the world always struck me as the start of a spiral of self-involved living — if I have my current preferences fed, how will I get new ideas, expanded preferences, new options?
The power to whip up hall-of-mirrors fantasies online has never been greater. Groups have become self-targeting. With smartphones extending everyone’s fantasy world deep into the real world, and online shopping, gossiping, and romancing further elaborating the preference fantasies, we expect only rewards and gratification. Boredom, doubt, and dissipated attention are no longer regular companions. At the same time, the companies and agents erecting these mirrors are able to target us without our knowing. “Fake news” is at its heart relativistic — fake to you is real to someone else. It’s all about what they prefer to believe. In a virtual fantasy world, finding approbation is easy.
Much of the success of the past few centuries of information has been the healthy relationship between what Watts calls “the core” and the crowd. That relationship has flipped in the new information economy, and has become toxic in many cases.
At the base of the changes wrought by social media — a euphemism at this point, surely, for coercive media — were at least three important factors:
- Coercive media was available to anyone — targets and agents — at no cost. It made no distinction between the core and the crowd.
- There was an initial assumption that the core — educated, experienced professionals — would remain highly influential within social media, which led to the lax development of guardrails. The engineers thought the guardrails would install themselves. But now, the crowd competes quite effectively with the core, even dominating it in many instances.
- The traditional and well-known core is being replaced by a new and unaccountable core — shadowy, corrupt, and power-hungry — that is using social media to seed dissent, confusion, and divisions in Europe, the US, the Philippines, Myanmar, and elsewhere.
The first element clearly enabled the second and third elements, as various disenfranchised groups — disenfranchised in some important cases because society had rightly marginalized them (e.g., racists, terrorists, and criminals) — were provided with free access to platforms where they could target like-minded individuals (i.e., other racists, other terrorists, and other criminals), masquerade as legitimate entities, and slip by societal screens put in place to marginalize them, thereby gaining new handholds in society. Not only that, but these platforms were instantly global, allowing racists, terrorists, and criminals to appeal to others and coordinate activities across continents, countries, and time zones.
The business model has evolved into one of surveillance and data dominance, as Watts writes. We now have companies with quasi-governmental powers, who have to worry about their influence on elections in multiple countries simultaneously.
Could a better business model help? Email provides an interesting analog to my mind. Multiple times, from the 1970s into the 1990s, the US Postal Service experimented with email, thinking it had a role in delivering paid email. Carrying a paid model forward might have meant a small charge levied on every 100 or 1,000 emails. However, for a variety of reasons, the USPS missed the opportunity, and email became an unregulated and “free” offering via AOL and others, bundled into Internet access at no apparent cost, or tied in with our employer or with services from companies that make money in other ways.
What followed from this “seems free” model is well-known to us all — spammers, phishers, hackers, and others using email to annoy, exploit, and intrude. It continues today, with Sci-Hub famously exploiting academia with phishing scams, and Iranians charged by the FBI with the same. A market of services to fight these problems was created, and spam filters and other imperfect tools became commonplace. CAN-SPAM and other laws in Canada and the EU have been somewhat effective, as well, but unsubscribing seems to fail as often as it works in actual practice. The new GDPR regulations simply led to a few days of hitting “delete” as mail purveyors went through the motions.
Making email a paid service would likely have nipped most, if not all, of these problems in the bud. Requiring business and individual emailers to pay to send emails is not anti-democratic — we pay to send physical mail — and it reveals a few key things about senders:
- Who is actually sending the emails?
- Who is serious enough about sending emails to be willing and able to pay for the privilege?
- Who is accountable if something goes wrong?
A paid model would control certain annoying aspects of email, including volume, frequency, and appropriateness — that is, you would send only to those you really think want the emails, rather than taking a “spray and pray” approach. It would make spam emailing economically non-viable. It would cleanly differentiate core from crowd far upstream.
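The economics behind this claim can be sketched with some simple arithmetic. The numbers below are illustrative assumptions, not figures from Watts’ book: spam works because a vanishingly small response rate is still profitable when sending is free, and even a one-cent fee per message flips the calculation.

```python
# Hypothetical illustration of spam economics under free vs. paid email.
# All parameter values are assumptions chosen for the sake of the example.

def campaign_profit(messages, response_rate, revenue_per_response, fee_per_message):
    """Net profit of a bulk email campaign: revenue from responses minus sending fees."""
    revenue = messages * response_rate * revenue_per_response
    cost = messages * fee_per_message
    return revenue - cost

MESSAGES = 10_000_000          # a large spam run (assumed)
RESPONSE_RATE = 0.00001        # ~1 in 100,000 recipients respond (assumed)
REVENUE_PER_RESPONSE = 30.00   # dollars per successful response (assumed)

free = campaign_profit(MESSAGES, RESPONSE_RATE, REVENUE_PER_RESPONSE, 0.0)
paid = campaign_profit(MESSAGES, RESPONSE_RATE, REVENUE_PER_RESPONSE, 0.01)

print(f"Profit when email is free:   ${free:,.2f}")   # a modest but positive profit
print(f"Profit at $0.01 per message: ${paid:,.2f}")   # deeply negative
```

Under these assumptions, a free-to-send campaign nets about $3,000, while the same campaign at one cent per message loses roughly $97,000. The fee barely registers for a legitimate sender mailing a few hundred willing recipients, which is the asymmetry a paid model exploits.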
Many of these same factors are at work in coercive media. Made freely available and without adequate controls over who posts what, social media was initially used in many beneficial or at least benign ways, but soon became a tool of increasingly destructive and purposely disruptive methods of incursion. Worse, malefactors are difficult to trace because a free service operating at surveillance scale leaves tangled and obscure commercial trails. It took months for sophisticated technology players like Twitter and Facebook to detect and trace illicit activities on their platforms, and many, myself included, are convinced they’ve only explored the tip of the iceberg. Exploiters come and go as they please, adopting various personas without consequence, and working from the shadows.
This shadow group of influencers is what Watts calls a “hidden elite core,” a group of unaccountable but influential people and organizations (or nations) who are able to instigate the crowd to achieve their goals while remaining hidden from view:
A hidden elite core will social-engineer an unwitting crowd into choosing the policies, politics, and preferences of the hidden elite, all without the crowd’s ever realizing it. Future social media influence will be dominated by those who can aggregate and harness the collective preferences of audiences, deceiving the crowd into willingly selecting the core’s agenda.
Russia, whose global influence is otherwise slight, has used these techniques to become a power once again.
A “hidden core” in scholarly publishing came up in a recent article in the Economist outlining how some journals that claim to do peer review do not actually do so. These journals can be propped up quickly because their revenue model is easier to create, has immediate payback, and gives the readers no leverage. Whether fly-by-night or more enduring like some other exploitative publishers, the enabler is the business model. The Economist article outlines the myriad ways that academics are exploited by fake and predatory journals, how efforts to combat them through whitelists or blacklists have proven only passingly effective, and how the system remains vulnerable despite increased awareness. The author ends with this bit of speculation:
One far-fetched solution is a return to journal subscriptions. These have for so long been excoriated as rent-seeking profit-inflators restricting the flow of information that a change of course would now be unthinkable. But those who pushed for their elimination might be wise to pause for thought.
I would disagree that a return is unthinkable — it is not too late to “return” to journal subscriptions, because they still dominate the market, and for good reasons: they spread costs more widely, they reflect and bolster actual trust relationships, they align reader and publisher interests, and they give readers leverage over their information sources.
Watts also points to the lack of barriers to entry, barriers that paid services erect natively — some call them paywalls, some paypoints, some subscriptions, but the point is that if something requires someone to pay to use it, there is a modest but important barrier. That’s not always a bad thing, even if some of us reflexively view such barriers as “undemocratic” or “elitist.” Democracies are usually democratic republics, and assume an accountable elite core will correct what the crowd gets wrong, something worth remembering as a lesson our forebears learned. Watts writes:
No barriers to entry and unlimited preference in the virtual world have overtaken compromise in the real world.
Lowering or eliminating barriers to entry sounds laudable, but in reality it has allowed fringe viewpoints to go to scale on the major platforms. Gatekeepers who believe in lower barriers to entry can be reluctant and ineffective. Facebook has famously resisted calls to be more involved in evaluating the content on its platform, with Twitter and YouTube also doing a poor job of monitoring the flood of content infesting their outlets — from bots to lies to terrorists. Our profession’s experimentation bends in this direction, whether it involves relaxed peer-review standards or post-publication review in place of pre-publication review. Even preprints smack of the mentality of a reluctant or confounded gatekeeper.
Information can be weaponized. Free platforms have given malefactors unbridled access to the raw materials of information warfare, while preference systems have handed them the targeting tools needed to divide and conquer far-flung constituencies, opponents, and rivals. That combination portends further problems. Scholarly publishing is not yet caught up wholly in the debacle that is modern coercive media, and one reason may be the dominance of the subscription model, which does keep out the crowd and limit access more or less to a discoverable and accountable elite core. Our core group is well-described, registering its activity with each paper published. Our goals of influence are clear — we want a more fact-based, empirical, and reality-based world, and we want to explain what we don’t understand.
I recently gave a talk about these issues to a group of academic editors and authors. One of them stopped me after the talk, telling me he had been in military psychological operations (PsyOps) for more than a decade, and was glad to see that people in our industry were actually starting to realize how the tools of psychological warfare were being used against them and the population at large. It was a sobering moment, a realization much like the one I had reading Watts’ book — the exploitation and weaponization of free information has been a campaign others have seen evolving for a long time. They’ve known we’ve been playing into it.
One of Facebook’s early advisors, Roger McNamee, wrote about this months ago, as well, urging Facebook to consider a subscription model to right its wrongs:
Facebook could adopt a business model that does not depend on surveillance, addiction, and manipulation: subscriptions.
Some in the media business have seen it coming, as well. In a recent Digiday podcast interview, New York Post CEO and Publisher Jesse Angelo described their success as having seen this coming:
I think our company [NewsCorp] has been really prescient about this. We are not just talking about this now. We were talking about this a decade ago, and people looked at us like we were crazy, and said, “But Google and Facebook are so cool.” And we said, “Yeah, but there are some very real issues here about the provenance of news and content, and separating publishers from their audiences.” Now, people are suddenly sitting up and going, “Oh, wow!” . . . People are now realizing there are some very, very real issues with the platforms. . . . I always think about it [publishers repeatedly caving into platform demands] like a horror movie where the person keeps going to the basement to check on the noise.
The platform threat is now not only at our doorstep, but in our basement — with anti-science people in power, emerging from preference bubbles into powerful positions that are maintained by those same preference bubbles. Decades of social and scientific progress are stalling out or being reversed because small preference bubbles have been pushed to the top of multiple cultures at once — from Europe to the US to the Philippines to the Middle East — all by social media.
Meanwhile, there seems to be no downside to this warping of cultures and societies. Billionaires continue to amass vast amounts of wealth, to the dismay of many, including Rose Marcario, the socially conscious CEO of Patagonia, who was asked in a recent interview with Kara Swisher on the Recode Decode podcast why she referred to these billionaire CEOs as “weenies”:
What’s Zuckerberg worth, $60 billion? What’s Larry Page worth, $100 billion? . . . I have family in the military that fight for this country, and it’s our democracy that’s at stake. We got attacked on their platforms, and they haven’t done anything about it. They won’t step up and explain the problem. They won’t come out and, in plain English, say what they’re doing about it. And it’s pathetic! That’s why I say they’re weenies.
The mental model of scholarly publishing has been changed by the belief that it could and should mirror the commercial model of the broader Internet — free to users, paid by producers, technocentric, technocratic. As a result, we dress up weak business models and lowered barriers to entry in terms like “open” and “democratic,” in a way reminiscent of society using “social media” as a euphemism for coercive media.
Pre-2016 Silicon Valley thinking about the information economy can safely be considered old-fashioned at this point. The information world has changed dramatically. Looking at how things actually turned out — by reading books like Watts’ or others — is a reminder of the destructiveness unleashed by the lack of barriers to entry, by the diminishment of the obvious and expert core in favor of a hidden illicit core and the crowd, and by commerce that depends on psychological exploitation. Yet we continue to ignore an upstream solution that is not only being more widely embraced in myriad other industries, but one at which we have traditionally excelled — the subscription model, paid access, and business models that foster trust and cooperation.
Business models are an instantiation of ethics, priorities, and alliances. Catering to producers, exploiting users, and feigning neutrality is a broken model from a societal standpoint. Catering to users, earning trust, and advocating for actual people is what the subscription model does at its best.
The business model may be a key to making things right again.