[From The Social Dilemma]
Social
media oligarchs have seduced us -- giving us bicycles for the mind that they have
spent years and billions engineering to "engage" our attention. The
problem is that they insist on steering those bicycles for us, because they get
rich selling advertising that they precisely target to us. Democracy and common
sense require that we, the people, keep control of our marketplace of ideas. It
is time to wrestle back the steering of our bicycles, so that we can guide our
attention where we want. Here is why, and how. Hint: it will probably require
regulation, but not in the ways currently being pursued.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
TL;DR: See the bolded "Key ideas" section a bit down from here…
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Special update:
This is “Version 0.1,” a discussion draft that was completed on 2/11/21, hours
before Casey Newton’s report made me aware of a move by Twitter to research the
direction proposed here. Pending analysis and revisions to reflect that, it
seemed useful to get this version online now for discussion. Newton’s report links
to Jack Dorsey’s
initial sketchy announcement of this "@bluesky" effort about a year ago, and items he
linked to at The Verge link to an interesting analysis on Techcrunch. My initial
take is that this is a very positive move, while recognizing that the Techcrunch
analysis rightly notes the risks that I had recognized below and consider
important to deal with, but ultimately manageable and necessary in a free
society. Dorsey's interest in this concept gives some reason to hope that this could occur as voluntary self-regulation, without need for the mandates that I suggest below are likely to be necessary. (late 2/11)
++2/13: There is a new piece by Richman and Fukuyama advocating this strategy in the WSJ.
Further updates are being posted in Growing Support for "Making Social Media Serve Society"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
“In case of emergency, break glass” …then what?
Our marketplace of ideas is clearly on
fire when two media oligarchs have such power that they can - and seemingly must - censor a President
on their own. Facebook then punted to an independent (but also unelected) “Oversight
Board” on whether to continue that censorship. This wicked problem of misinformation
and polarization is well on the way to destroying our consensus view of reality,
yet our current solutions have descended into absurdity.
This
has been over a decade in the making, yet the path to doing better remains widely
misunderstood. The Capitol insurrection made the “break-glass” urgency clear, and
the recent GameStop insurrection in our financial
markets highlighted how wide the scope is. It all comes down to rethinking
whether we, the people, manage our own unfolding digital views of the world, or
whether oligarchies (or governments) do it for us.
Both
our democracy and our financial markets depend on our marketplace of ideas.
Reddit-inspired mobs empowered by the Robinhood trading platform triggered
circuit breakers in trading. The financial “madness of crowds” gave rise to a long-established
– but continuously evolving – regulatory regime for financial markets. For
nearly a century it has been the mission of the SEC to keep the markets free of
manipulation -- free to be volatile, but subject to basic ground rules -- and
the occasional temporary imposition of circuit-breakers.
Regulatory
mechanisms have properly applied much more loosely to our marketplace of ideas.
But with Big Tech businesses moving so fast and breaking so much, it is now all
too clear that some form of nuanced control on them is needed. Both safety and freedom
are at risk. We need to contain the damage from our broken system right now,
even if that temporarily violates some principles that should be
preserved and protected. But we dare not lose sight of the distinction between stopgap
measures limited to this brief emergency period, and the path beyond that.
Compounding
the problem, network effects have created platform oligarchies with extractive
advertising and data profits so huge as to create strong perverse incentives
that distract from visions of how these powerful tools can serve society. Current
remediation efforts are focused on limiting harms, with little positive vision that
would nurture the unfulfilled benefits that should be demanded.
There
are many interrelated concerns of harm to privacy and competition --as well as
a broad underpinning of gaps in digital literacy, critical thinking, and civics
education -- that all badly need attention. But unless we turn the tidal force
driving this imminent danger to democracy, a rapidly growing inability to
achieve consensus will make the other problems insoluble. Our malfunctioning bicycles
for the mind are now making us stupid.
Here
are some strategies for the long game: how
to guide technology and policy to protect both safety and freedom, while also seeking
the benefits. What I propose would require ongoing oversight by a specialized Digital Regulatory Agency
that can work with industry and academic experts, much like the SEC and the FCC,
but with different expertise. My focus is not on regulation, but on a normative
vision for uses of this technology that we should regulate toward, so that
better business models and competition can drive progress toward consumer and
social welfare.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Key ideas:
Paradise lost …and found -- saving democracy by serving users and society
The
root causes of the crisis in our marketplace of ideas are that:
- The dominant social media platforms selectively
control what we see,
- and yet they are motivated not to present
what we value seeing, but instead to “engage” audiences to click ads.
They
use their control of our minds not to serve us, but to extract value from us.
The
best path to reduce the harm and achieve the lost promise of digital media is to
remove control over what users see in their feeds from the platforms. Instead,
create an open market in filtering “middleware” services that give users
more power to control what they see in their feeds.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
A number of
proposals advocate this – argued most prominently and persuasively
by Francis Fukuyama and colleagues in Foreign Affairs, “How to Save
Democracy From Technology” (summarizing a Stanford report). The following draws on that work and on my own related vision for how this approach can not only
protect, but systematically enhance wisdom and democracy. There are regulatory
precedents for similar functional divestitures, and models for open, user-driven
markets for filtering algorithms that can favor quality and value. Without such
a systemic change, social media will be increasingly toxic to democracy.
Free our feeds!
Democracy
depends on an open, diverse, and well-structured marketplace of ideas. Freedom
of speech and of association are essential to our social processes for organically
seeking a working consensus on ground-truth. But now, the “feeds” from Facebook,
Twitter, YouTube, and a few others have become the dominant filters controlling
which information, and which other users, billions of people see. Those
oligarchies have nearly total power over what they selectively present to each of
us, with almost no transparency or oversight – and systematically against our
interests!
People cannot do without algorithms to
filter what we drink from the “firehose” of social media, but we have
misapplied them disastrously.
- Network
effects lead toward concentration and scale in the platforms interconnecting
our global village -- every speaker rightly seeks to be heard by every listener
willing to hear them (with narrow limitations). That drives toward universality
of posting and access.
- But
filtering is personal – we each should be fed what we desire, and not
what we do not. For democracy to survive, each of us needs supervisory control
over how algorithms promote or demote information items and people to or from
our attention.
Network effects
are compounded by perverse incentives. A Facebook engineer observed in 2011 that “The best minds of my generation are
thinking about how to make people click ads.” A decade of algorithm design has
twisted “connecting people” to become a matter of targeting lucrative audiences
for advertisers. Oligarchs profit obscenely from selling
advertising in their “free” services -- but what a cost!
Those network effects and incentives are tidal forces,
but filtering can be pulled out and shielded from that. Businesses and governments must jointly facilitate doing
that -- but we, the people, must have autonomy over how that works for each of
us.
It might seem that
user control would worsen filter bubble-driven
echo chambers. But
the algorithms that divide and enrage us (so we click ads) could instead
stimulate thinking, understanding, and enlightenment. Now they drive factions
to lose touch with reality – feeding them lies,
connecting those susceptible to lies to create “lookalike audiences”
for advertisers -- and motivating users to disinform and sow division for profit or merely for attention.
A
marketplace of ideas functions well only if users control for themselves
whether they see “undesirable” items, as they individually define that. Instead
of expecting platforms to be responsible for managing the unruly beast of what
ideas are posted, we must empower markets for filters that manage how ideas are
consumed. Demand, not censorship (or advertising), should control the flow
of information to those who want it.
Social media are for people -- not advertisers or platform owners.
The
promise of digital technology has been that each user can potentially configure
their own customized filters and recommenders – or select services that curate
for them in the ways that they choose. But now our feeds are customized for
us, without our consent, in non-transparent ways. The platforms’ algorithms
draw on many “signals” of suitability -- but are engineered not to serve what
we desire, but to sell as much advertising as possible. We have no access to
filters designed to serve our own needs.
Technology
promised tools for augmenting human intellect and
our collective ability to solve problems – but now the platforms are “de-augmenting”
us, dividing us and making us stupid. The platforms’ obscene profits from advertising
remove any incentive to do better (or to let others do it for us). Now those
harms stem from reckless greed -- think how much worse it would be if they pursued a
political agenda (as some already fear they do here, and as China’s social
media already do). Oligopolistic thought-robber barons have hooked us on parasites
of our attention, “nuance destruction machines” that make us polarized and reactive. Can
we afford to pay that price for “free” services?
An open market in filtering services is the way to serve users.
Fukuyama
and colleagues suggest
…taking away the platforms’ role as
gatekeepers of content …inviting a new group of competitive ‘middleware’
companies to enable users to choose how information is presented to them. And
it would likely be more effective than a quixotic effort to break these
companies up.
They
make the case that the remedy is to give individuals power to control the
“middleware” that filters their view of the information flowing through the
platform in the ways that they desire.
Of course, controlling what goes into
one’s feed at a fine-grained level is beyond the skill or patience of most
users. The solution is to create a diverse open market of interoperable filtering
services that users can select from. Individual needs vary so widely that no
single provider can serve that diversity well. Open, competitive markets excel
at serving diverse needs – and untangling incentives. Breaking filtering “middleware”
out as an independent service that interoperates with the platforms enables user
choice to drive competition and innovation.
These middleware services can work “inside”
the platforms, using APIs (Application Program Interfaces) to combine filtering
algorithms with human oversight in an unlimited variety of ways. They could be
funded with a revenue share from the platforms. (That need not reduce platform
revenue, since better service could yield more activity and more users, making
the pie bigger.) They could use much the same “surveillance capitalism” data
that the platforms now use – with controls to limit that to only the
extent users are willing to permit, and subject to regulatory constraints on
privacy and how the data is used.
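To make this concrete, here is a minimal sketch, in Python, of what such a filtering-middleware interface might look like. Everything in it (the Item fields, the FilteringService class, the rank method) is an illustrative assumption for discussion, not any platform's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Item:
    """A candidate post delivered through a platform API (fields are illustrative)."""
    item_id: str
    author_id: str
    text: str
    signals: Dict[str, float] = field(default_factory=dict)  # e.g. {"likes": 12, "shares": 3}


class FilteringService:
    """Hypothetical middleware service: it receives candidate items through a
    platform API and returns them ranked for one user, so ordering decisions
    move from the platform to a service the user has chosen."""

    def __init__(self, scorer: Callable[[Item, dict], float]):
        # The scoring function could combine algorithms with human curation.
        self.scorer = scorer

    def rank(self, items: List[Item], user_prefs: dict) -> List[Item]:
        """Order a user's candidate feed by this service's scores (highest first)."""
        return sorted(items, key=lambda it: self.scorer(it, user_prefs), reverse=True)
```

The point of the sketch is only that ranking lives behind an interface the user selects; the platform supplies candidates and signals, and any number of competing services could implement the same interface.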
Paradise lost sight of -- filtering for social truth
Imagine
how different our online world would be with open and innovative filtering
services. We humans have evolved, as individuals and as a society, to test for and establish
truth in a social context, because we cannot possibly have direct knowledge of
everything that matters to us (“epistemic dependence”).
Renee DiResta nicely explains in “Mediating Consent” how these social
processes have been both challenged and enhanced by advances in technology from
Gutenberg to “social media.” Social media can augment similar processes in our
digital social context to determine what content to show us, and what people
(or groups) to suggest we connect with.
What
do we want done with that control? We do not want to rank on “engagement” (how
much time we spend glued to our screens) or on whose ad we will be disposed to
click -- but what criteria should apply? Surely, we can do better than just
counting “likes” from everyone equally, regardless of who they are and whether
they read and considered an item, or just mindlessly reacted to a clickbait
headline.
Consider
how the nuanced and layered process of mediating consent that society has
evolved over millennia has been lost in our digital feeds. Do people and
institutions whose reputations we trust agree on a truth? Should we
trust in them because of others who trust in them? Can we apply this within
small communities -- and more broadly? That is how science works – as do political
consensus and scholarly citation analysis. That is how we decide who and what
to listen to and to believe, to avoid being lemmings.
Technology
has already succeeded at extending that kind of process into Google’s original PageRank
search algorithm, weighing billions of human evaluations at Internet
speed and scale. Social media feeds can empower users to mediate consent in the
ways that they, and their communities, favor. They can draw on the plethora of
information quality “signals” that the platforms have (clicks, likes, shares,
comments, etc.) and combine that with rudimentary understanding of content. They
can factor in the reputations of those providing the signals, as humans have
always done to decide what to pay attention to and which people and groups to
connect with.
To
be effective and scalable, reputation and rating systems must go beyond simplistic
popularity metrics (mob rule) or empaneled raters (expert rule). To socially
mediate consensus in an enlightened democratic society, reputation must be organically
emergent from that society. Algorithms can draw on both explicit and implicit signals
of human judgement, to rate the raters and weight the ratings (as I have detailed elsewhere) -- in
transparent but privacy-protective ways. Better and more transparent tools
could help us consider the reputations behind the postings -- to make us
smarter and connect us more constructively. We could factor in multiple levels
of reputation to weight the human judgments of those whom other humans
we respect judge to be worth hearing (not just the most “liked”). We could
favor content from people and publishers we view as reputable, and factor in
human curation as desired.
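As one rough illustration of “rating the raters and weighting the ratings,” the sketch below (all names and constants are assumptions, not a specification) alternates between scoring items by reputation-weighted ratings and updating each rater’s reputation by how well their ratings track the emerging consensus.

```python
def update_reputations(ratings, reputations, damping=0.85, rounds=20):
    """Iteratively 'rate the raters and weight the ratings'.
    `ratings` maps item_id -> list of (rater_id, rating in [0, 1]);
    `reputations` maps rater_id -> initial reputation in (0, 1]."""
    item_scores = {}
    for _ in range(rounds):
        # 1. Score each item as the reputation-weighted average of its ratings.
        for item_id, votes in ratings.items():
            total_weight = sum(reputations[r] for r, _ in votes) or 1.0
            item_scores[item_id] = sum(reputations[r] * v for r, v in votes) / total_weight
        # 2. Raise or lower each rater's reputation by agreement with that consensus.
        agreements = {}
        for item_id, votes in ratings.items():
            for rater_id, value in votes:
                agreements.setdefault(rater_id, []).append(1.0 - abs(value - item_scores[item_id]))
        for rater_id, scores in agreements.items():
            mean_agreement = sum(scores) / len(scores)
            reputations[rater_id] = (1 - damping) + damping * mean_agreement
    return item_scores, reputations
```

This is only the skeleton of the idea; a real system would also weight by identity verification, history, and community context, and would need safeguards against coordinated gaming.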
This
can help us understand the world as we choose to view it, and to understand and
accept that other points of view may be reasonable. Fact-checking and warning
labels often just increase polarization, but if someone
we trust takes a contrary view we might think twice. Filters could seek those
“surprising validators” and sources of serendipity that offer new angles, without burying
us in noise.
To
make reputation-based filtering more effective, the platforms should better manage
user identity. Platforms could allow for anonymous users with arbitrary alias
names, as desirable to protect free speech, but distinguish among multiple levels
of identity verification (and distinguish humans from bots). Weighting of
reputation could reflect both how well validated a user identity is, and how
much history there is behind their reputation. This would help filter out bad
actors, idiots, and bots in accord with standards that we choose (not those
imposed on us).
Now
the advertiser-funded oligopoly platforms perversely apply similar kinds of signals
with great finesse to serve their own ends. As that Facebook engineer lamented, “The best minds of my
generation are thinking about how to make people click ads.” They have
engineered Facebook, Twitter, and the rest to work as digital Skinner boxes,
in which we are the lab rats fed stimuli to run the clickbait treadmill that
earns their profits. We cannot expect or entrust them to redirect that treadmill
to serve our ends -- even with increased regulation and transparency. If
revenue primarily depends on selling ads, efforts to counter that incentive to
favor quality over engagement swim against a powerful tide. That need not be
malice, but human nature: “It is difficult to get a man to understand
something, when his salary depends on his not understanding it.”
Driving our own filters
Users
should be able to combine filtering services to be selective in multiple ways,
favoring some attributes and disfavoring others. Algorithms can draw on human
judgements to filter undesirable items out of our feeds, by down-ranking
them, and recommend desirable items in, by up-ranking them. Given
the firehose of content that humans cannot keep up with, filters rarely need to do
an absolute block (censorship) or an absolute must-see. Instead, a
well-architected system of interoperable filters can be composed by each user
with just a few simple selections -- so their multiple filtering service suggestions
are weighted together to present a composite feed of the most desired items.
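A minimal sketch of that composition might look like the following; the scoring interface, service names, and weights are illustrative assumptions.

```python
from typing import Callable, Dict, List

# Each filtering service exposes a scoring function: (item, user_prefs) -> score in [0, 1].
Scorer = Callable[[dict, dict], float]


def composite_feed(items: List[dict], services: Dict[str, Scorer],
                   user_weights: Dict[str, float], user_prefs: dict,
                   top_n: int = 50) -> List[dict]:
    """Blend several filtering services' suggestions into one feed.
    Undesired items are down-weighted rather than blocked outright."""
    total = sum(user_weights.values()) or 1.0

    def blended(item: dict) -> float:
        return sum(user_weights.get(name, 0.0) * score(item, user_prefs)
                   for name, score in services.items()) / total

    return sorted(items, key=blended, reverse=True)[:top_n]
```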
That
way we can create our own “walled garden,” yet make the walls as permeable as
we like, based on the set of ranking services we activate at any given time.
Those may include specialized screening services that downrank items likely to
be undesirable, and specialized recommenders to uprank items corresponding to
our tastes in information, entertainment, socializing, and whatever special
interests we have. Such services could be from new kinds of providers, or from
publishers, community organizations, and even friends or other people we
follow. Services much like app stores might be needed to help users easily
select and control their middleware services at a basic level, or with more
advanced personalization. We have open markets in “adtech” – why not
“feedtech”?
Some
have argued that filtering should be prohibited from using personal data—that
would limit abusive targeting but would also severely restrict the power of
filters to positively serve each user. Better to 1) motivate filtering services
to do good, and 2) develop privacy-protective methods to apply whatever signals
can be useful in ways that prevent undue risk. To the extent that user
postings, comments, likes, and shares are public (or shared among connections),
it is only more private signals like clickstreams and dwell time that would
need protection.
Filtering
services might be offered by familiar publishers, community groups, friends,
and other influencers we trust. Established publishers could extend their
brands to reenergize their profitability (now impaired by platform control):
New York Times, Wall Street Journal, or local newspapers; CNN, Fox, or PBS;
Atlantic or Cosmopolitan, Wired or People; sports leagues, ACLU, NRA, or church
groups. Publishers and review services like Consumer Reports or Wirecutter can
offer recommendations. Or if lazy, we could select among
omnibus filters for a single default, much as we select a search engine.
Users
should be able to easily “shift gears,” sliding filters up or down to
accommodate changes in their flow of tasks, moods, and interests. Right now, do
you want to see items that stimulate lean-forward thinking or indulge in
lean-back relaxation? – to be more or less open to items that stimulate fresh
thinking? Just turn some filtering services up and others down, using sliders. Save
desired combinations. Swap a work setting for a relax setting. Filter suites
could be shared and modified like music playlists and learn with simple
feedback like Pandora. Or, just choose one trusted master service to make all those
decisions.
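Continuing the hypothetical composite_feed sketch above, saved “filter suites” might be nothing more than named sets of slider weights that a user swaps with one selection; the presets here are invented for illustration.

```python
# Hypothetical saved "filter suites": named slider settings a user can swap at will.
FILTER_SUITES = {
    "work":  {"news_quality": 0.9, "professional": 0.8, "entertainment": 0.1},
    "relax": {"news_quality": 0.3, "professional": 0.1, "entertainment": 0.9},
}


def feed_for_mode(items, services, mode, user_prefs):
    """Switch the whole feed by swapping one preset, much like choosing a playlist."""
    return composite_feed(items, services, FILTER_SUITES[mode], user_prefs)
```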
Instead
of censoring Trump and his ilk and driving them to platforms like Parler to
fester in isolation (and possible secrecy), user-filtered services could relegate
them to the fringes of our open marketplace of ideas, as society has always
tended to do. That could downrank their trash out of the view of those who do
not opt in to see it, while keeping it accessible to those who do (and facilitating
the monitoring of whatever mischief they brew).
Filter-driven
downranking could also drive mechanisms to introduce friction, slow viral
spread of abusive items, and precisely target fact-checks and warning labels
for maximum effect. Friction could include such measures as adding delays on promotion
of questionable items and downranking likes or shares done too fast to have
read more than the headline. Society has always done best when the marketplace
of ideas is open, and oversight is by reason and community influence not
repression. It is social media’s recommendations of harmful content and
groups that are so pernicious, far more than any unseen presence of such
content and groups.
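One of the friction heuristics just mentioned (down-weighting likes or shares made too fast to have read past the headline) might be sketched like this; the thresholds and names are purely illustrative.

```python
def discounted_reaction_weight(dwell_seconds: float, expected_read_seconds: float,
                               floor: float = 0.1) -> float:
    """Down-weight a like or share made too quickly to have read more than the
    headline. `expected_read_seconds` might be estimated from article length;
    the floor keeps even hasty reactions from being erased entirely."""
    if expected_read_seconds <= 0:
        return 1.0
    fraction_read = min(1.0, dwell_seconds / expected_read_seconds)
    return max(floor, fraction_read)
```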
Filtering
services can emerge to shine sunlight on blindness to reality and good sense.
They can entice reasonable people to cast a wider net and think critically. Simple
fact-checking often fails, because when falsehoods are denied from outside our
echo-chambers, confirmation bias increases polarization. But when
someone trusted within our group challenges a belief, we stop and think.
Algorithms can identify and alert us to these “surprising validators” of opposing views. A notable proof point
of such surprising validators is the Redirect Method experiments in dissuading potential ISIS
sympathizers by presenting critical videos made by former members. Clever
filtering can also augment serendipity, cross-fertilizing across information
silos and surfacing fringe ideas that deserve wider consideration (Galileo: “…and
yet it moves”). Clever design can enlist people to help with that, much like
Wikipedia, and even gamify it.
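A filtering service might flag such “surprising validators” with logic roughly like the sketch below, surfacing sources the user already trusts whose stance opposes the user’s own; the stance encoding and names are assumptions for illustration only.

```python
from typing import Dict, List


def surprising_validators(user_trusted_sources: List[str],
                          source_stances: Dict[str, float],
                          user_stance: float) -> List[str]:
    """Return trusted sources whose stance on a claim opposes the user's own.
    Stances are encoded in [-1.0, +1.0]; opposite signs indicate disagreement,
    which is exactly when a trusted voice is most worth surfacing."""
    validators = []
    for source in user_trusted_sources:
        stance = source_stances.get(source)
        if stance is not None and stance * user_stance < 0:
            validators.append(source)
    return validators
```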
User-driven
markets for filters can also better serve local community needs within the
global village. Network effects drive global platforms, but filtering can be
local and adaptive to community standards and national laws and norms. US, EU
and Chinese filters can be different, but open societies would make it easy to
swap alternative filters in when desired. The Wall Street Journal’s “Blue Feed, Red Feed” demonstrated strikingly
how you can walk in another’s shoes. Any community of thought –
religion, politics, culture, profession, hobby – could enable its members to
filter in accord with their community standards, at varying levels of
selectivity, without imposing those standards on those who seek broader
horizons. But now, social media users are subjected to “platform law,” which from a
human rights perspective is “both overbroad and underinclusive.”
Filters and circuit breakers -- parallels in financial marketplaces
The
parallels between our marketplace of ideas and our financial markets run deep.
There is much to learn from and adapt, both at a technical and a regulatory
level. Both kinds of markets require distilling the wisdom of the crowd -- and limiting
the madness. This January made it apparent that the marketplaces of ideas and
of securities feed into one another.
The sensitivity of financial markets to information and volatility has driven
development of sophisticated control regimes designed to keep the markets free
and fair while limiting harmful instabilities. Those regimes involve SEC
regulations affecting market participants, exchanges/dealers/brokers, and
clearing houses - and they continue to evolve.
Just
as in financial markets, it is now apparent that social media markets of ideas
need circuit breakers to limit instabilities by reducing extremes of velocity,
without permanently constraining media postings (unless clearly illegal or
harmful). That suggests social media restrictions on postings can be rare, just
as individual securities trades can be at foolish prices without great harm.
Securities trading circuit breakers are applied when the velocity of trades
leads to such large and rapid market swings that decisions become reactive and
likely to lose touch with reality. Those market pauses give participants time
to consider available information and regain an orderly flow in whatever
direction then seems sensible to the participants. There is a similar need for
friction and pauses in social media virality.
User-controlled
filtering that serves each user should be the primary control on what we see,
but the financial market analog supports the idea that circuit breakers are
sometimes needed in social media. Filters controlled by individual users will
not, themselves, limit flows to users who have different filters. To control
excesses of viral velocity, access and sharing must be throttled across a
critical mass of all filters that are in use.
The
specific variables relevant to the guardrails needed for our marketplace of
ideas are different from those in financial markets, but analogous. Broad
throttling can be done by coordinating the platform posting and access
functions using network-wide traffic data, plus consolidated feedback on
quality metrics from the filters combined with velocity data from the platforms.
A standard interface protocol could enable the filters to report problematic
items. Such reports could be sent back to the platforms that are sourcing them,
or to a separate coordination service, so it can be determined when such
reports reach a threshold level that requires a circuit breaker to introduce
friction into the user interfaces and delays in sharing. Signaling protocols
could support sharing among the platforms and all the filtering services to
coordinate warnings that downranking or other controls might be desired. (To
preserve individual user freedom, users might be free to opt-out of having
their filters adhere to some or all such warnings.) Think of this as a
decentralized cognitive immune system that integrates signals emerging
from many kinds of distributed sensors, in much the same way that our bodies
coordinate an emergently learned response to pathogens.
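As a rough sketch of that coordination logic (the threshold, window, and class names are illustrative assumptions, not proposed values), a coordination service might trip a circuit breaker when problem reports from many filters cross a threshold within a rolling time window:

```python
import time
from collections import defaultdict, deque


class CircuitBreakerCoordinator:
    """Hypothetical coordination service: filtering services report problematic
    items; when reports on an item cross a threshold within a rolling window,
    the coordinator signals platforms and filters to add friction (delays,
    downranking), which individual users' filters may honor or opt out of."""

    def __init__(self, threshold: int = 100, window_seconds: int = 600):
        self.threshold = threshold
        self.window = window_seconds
        self._reports = defaultdict(deque)  # item_id -> timestamps of reports

    def report(self, item_id: str, reporting_service: str, now: float = None) -> bool:
        """Record one report from a filtering service; return True if the
        breaker should trip for this item. `reporting_service` could later be
        used to weight reports by the reputation of the reporting filter."""
        now = time.time() if now is None else now
        timestamps = self._reports[item_id]
        timestamps.append(now)
        # Drop reports that have aged out of the rolling window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        return len(timestamps) >= self.threshold
```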
Much
as financial market circuit breakers are invoked by exchanges or clearinghouses
in accord with oversight by the SEC, social media circuit breakers might be
invoked by the network subsystems in accord with oversight by a Digital
Regulatory Agency based on information flows across this new digital
information market ecology.
Harmful content: controls and liability
User
control of filters enables society to again rely primarily on the best kind of
censorship: self-censorship -- and helps cut through much of the confusion that
surrounds the current controversy over whether Section 230 of the
Communications Decency Act of 1996 should be repealed or modified to remove the safe
harbor that limits the liability of the platforms. Many argue that Section 230
should not be repealed, but modified to limit amplification (including writers
from AOL, Facebook policy and Facebook data science). Harold Feld of Public
Knowledge argues in The Case for the
Digital Platform Act that “elimination of Section 230 would do little to
get at the kinds of harmful speech increasingly targeted by advocates” and is
“irrelevant” to the issues of harmful speech on the platforms. He provides
helpful background on the issues and suggests a variety of other routes and specific
strategies for limiting amplification of bad content and promoting good content
in ways sensitive to the nature of the medium.
Regardless
of the legal mechanism, Feld’s summary of a Supreme Court ruling on an earlier
law makes the central point that matters here: “the general rule for
handling unwanted content is to require people who wish to avoid unwanted
content to do so, rather than to silence speakers.” That puts the
responsibility for limiting distribution of harmful content (other than clearly
illegal content) squarely on users – or on the filtering services that should
be acting as more or less faithful agents for those users.
Nuanced
regulation could depend on the form of moderation/amplification, as well as its
transparency, degree of user “buy-in,” and scale of influence. So long as the
filters work as faithful agents (fiduciaries) for each user, in accord with
that user’s stated preferences, then they should not be liable for their operation.
Regulators could facilitate and monitor adherence to guidelines on how to do
that responsibly. Negligence in triggering and applying friction and
downranking to slow the viral spread of borderline content could be a criterion
for liability or regulatory penalties. Such nuanced guardrails would limit harm
while keeping our marketplace of ideas open to what we each choose to have
filtered for us.
If
independent middleware selected by users does this “moderation,” the platform
remains effectively blind and neutral (and within the Section 230 safe harbor,
to the extent that may be relevant). That narrowing of safe harbor (or other
regulatory burdens) might help motivate the platforms to divest themselves of
filtering -- or to at least yield control to the users. If the filtering
middleware is spun out, the responsibility then shifts from the platforms to
the filtering middleware services. Larger middleware services could dedicate
significant resources to doing moderation and limiting harmful amplification
well, while smaller ones would at worst be amplifying to few users. Users who
were not happy with how moderation and amplification was being handled for them
could switch to other service providers. But if the dominant platforms retain
the filters and fail to yield transparency and control to their essentially
captive users, regulation might need to take a heavy hand. That would threaten
free expression in our marketplace of ideas.
Realigning business incentives – thought-robber barons and attention capitalism
“The Internet's Original Sin”
is that advertising-based business models drive filtering/ranking/alerting
algorithms to feed us junk food for the mind, even when toxic. The oligopolies
that hold our filters hostage to advertising are loath to risk any change to
that, and uninterested in experimenting with emerging alternative business models. That is a powerful tide to swim against.
Regulators
hesitate to meddle in business models, but even partial steps to open just this
layer of filtering middleware could do much to decouple the filtering of our
feeds from the sale of advertising. A competitive open market in filtering
services would be driven by the demand of individual users, making them more “the
customer” and less “the product.” Now the pull of advertising demand funds an
industry of content farms that create clickbait for disinformation -- or simply
to generate ad revenue.
Shoshana
Zuboff’s tour de force diagnosis of the ills of surveillance capitalism
has rightly raised awareness of the abuses we now face, but I suggest a rather
different prescription. The
more deadly problem is attention capitalism. Our attention and thought are far more
valuable to us than our data, and the harms of misdirection of attention that
robs us of reasoned thought are far more insidious to us as individuals and to
our society than other harms from extraction of our data.
It
is improper use of the data that does the harm. As outlined above, the extraction
of our attention stems from the combination of platform control and perverse
incentives. The cure is to regain control of our feeds, and to decouple the
perverse incentives.
My
work on innovative business models suggests how an even more
transformative shift from advertising to user-based revenue could be feasible. Those
methods could allow for user funding of social media in ways that are
affordable for all -- and that would align incentives to serve users as the
customer, not the product.
As
a half step toward those broader business model reforms, advertising could be
more tolerable and less perverse in its incentives if users could negotiate its
level and nature, and how that offsets the costs of service. That could be done
with a form of “reverse metering” that credits
users for their attention and data when viewing ads. Innovators are showing
that even users who now block ads might be open to non-intrusive ads that
deliver relevance or entertainment value, and willing to provide their personal
data to facilitate that.
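A “reverse metering” arrangement could be as simple as the arithmetic sketched below; the fee and credit amounts are invented for illustration only.

```python
def monthly_bill(base_fee: float, ads_viewed: int, credit_per_ad: float,
                 data_sharing_credit: float = 0.0) -> float:
    """Credit the user for attention and data, offsetting a subscription fee."""
    credit = ads_viewed * credit_per_ad + data_sharing_credit
    return max(0.0, base_fee - credit)


# Example: a $10 base fee, 200 ads viewed at a 3-cent credit each, plus a $2
# credit for opting into data sharing, leaves $2 due for the month.
print(monthly_bill(10.00, 200, 0.03, 2.00))  # -> 2.0
```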
But
in any case, advertising should not be permitted to dictate how our social
media content is filtered. Given the hurdles of platform and/or regulator
buy-in, divesting control of our feeds from the platforms seems to be the best leverage
point for driving real transformation. I have advocated user control of filters
for many years, but I credit the Fukuyama article for highlighting its surgical
precision in addressing our current crisis.
Making this happen
Given
how far down the wrong path we have gone, reform will not be easy, and will
likely require complex regulation, but there is no other effective solution. To recap the options currently being pursued:
- Current
privacy and antitrust initiatives are aimed at harms to privacy and
competition, but even if broken up or regulated, monolithic, ad-driven social media
services have limited ability and motivation to protect our marketplace of
ideas.
- Simple
fact-checking and warning labels have very limited effect.
- More
sophisticated psychology-based interventions
have promise, but who combines the ability and motivation to apply them
effectively, even if mandated to do so?
- Banning Trump was a draconian measure
that dominant platforms rightly shied away from, understanding that censoring
who can post is antithetical to a free society. It clearly lacks legitimacy and
due protection for human rights when decided by private companies or even by
independent review boards.
As noted above (and outlined more fully in
a prior post),
a promising regulatory framework is emerging (to
little public attention). This goes beyond ad hoc remedies to specific harms,
and provides for ongoing oversight by a specialized Digital Regulatory Agency
that would work with industry and academic experts, much like the FCC and the
SEC. Hopefully, the Biden administration will have the wisdom and will to
undertake that (the UK is already
proceeding).
But
those proposals have yet to focus on the freeing of our feeds. That is
where the power to save democracy lies, but we can expect the platforms to
resist losing control of this profit-enhancing component of their systems. Of
course, regulators could just task the monolithic platforms with offering users
direct control without any functional divestiture -- that seems possible, but
problematic, for the reasons given above.
The
other deep remedy would be to end the Faustian bargain of the ad model very
broadly, but that will take considerable regulatory resolve – or a groundswell
of public revulsion. Neither seems imminent. One way to finesse that is the “ratchet”
model that I have proposed, inspired by how fuel economy regulations ratchet vehicle
manufacturers toward increasingly challenging standards, driving
the market to meet those challenges incrementally, in ways of their own
devising. The idea is simple: mandate or apply taxes to shift social media
revenue to small but increasing percentages of user-based revenue. But the
focus here is on this more narrowly targeted and clearly feasible divestiture
of filtering.
While
regulators seem reluctant to meddle with business models, there is precedent
for modularizing interoperating
elements of a complex monolithic business through a functional breakup. The
Bell System breakup separated services from equipment suppliers, and local
service from long-distance. That was part of a series of regulatory actions
that required modular jacks to allow competitive terminal equipment (phones,
faxes, modems, etc.), number portability and many other liberating reforms, all
far too complex for legislation or the judiciary alone, but solvable by the FCC
working with industry and independent experts.
Internet
e-mail also serves as a relevant design model – it was designed to bridge
incompatible networks, enabling users of any “user agent” (like Outlook or
Gmail) to send through any combination of cloud “transfer agents” to a
recipient with any other user agent. In the extreme, such models for liberation
could lead to “Protocols, Not Platforms.”
One move in that direction would let multiple competing platforms interoperate,
supporting posting and access across platforms, each acting
as a user messaging agent and a distributed data store. Also, as noted above,
the model of financial markets seems very relevant, offering proven guiding
principles.
But
in any case, even short of an open filtering middleware market, it is essential
to democracy to provide more control to each user over what information the
dominant platforms feed us. Even if the filters stay no better than they are
now and users just pick them randomly, our feeds would become more diverse – that,
alone, would reduce dangerous levels of virality and ad-driven sensationalism. The
incentive of engagement that drives recommendations of pernicious content,
people, and groups would be eliminated or at least weakened.
The
economics of network effects favors this functional separation in a way that
regulators may find compelling.
- Network
effects intrinsically favor universal interconnections for posting and access,
driving platform dominance for those basic functions. That borders on being a
universal utility service (whether monolithic or distributed and interoperating).
- But
filtering of how posts and users are matched to other users is largely immune
to network effects. A filter can please a single individual, regardless of
whether others use it. Users will select middleware services that seem to act
in their interest – motivating businesses to demonstrate value over ubiquity.
Key
steps toward returning control to users can build incremental impact:
1. Policies
should be reframed to treat filtering that targets and amplifies reach to users
as editorial authorship/curation/moderation of a feed, and thus subject to regulation
(and liability). That might, itself, motivate the platforms to divest that
function to avoid that risk to their core businesses. That would also motivate
them to help design effective APIs to support those independent filtering
services. They could retain the ability to provide the raw firehose, filtered only
in non-“editorial” ways -- by simple categories such as named friends or
groups, geography, and subject, in reverse chronological order, with no ranking
or amplification (using sampling to keep the flow at a desired level; a rough sketch of such a feed follows this list).
2. A
spinout could break out the platforms’ existing filtering services and staff into
one or several new companies with clear functional boundaries and distinct
subsets of the user base. The new units might begin with the current code base,
but then evolve independently to serve different communities of users -- but
with requirements for data portability to facilitate switching.
3. The
spin-out should be guided (and mandated as necessary) by well-crafted
regulation combined with ongoing adaptation. Regulators should define, enforce,
and evolve basic guardrails on the APIs and related practices and circuit
breakers on both sides -- and continually monitor and evolve that.
4. Such
structural changes alone would at least partially decouple filtering from the
perverse effects of the ad model. However, as noted above, regulation could
address that more aggressively by mandates (or taxes) that encourage a shift to
user-based revenue. A survey of some notable proposals for a Digital Regulatory
Agency, as well as suggestions of what we should regulate for, not just against,
is in Regulating our Platforms-- A Deeper Vision.
5. The
structural changes creating an open market would also motivate the new
filtering middleware services to devise user interfaces and new algorithms to better
provide value to users. The framework for reputation-based algorithms briefly outlined
above is more fully explained in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm.
6. A
new digital agency can also address the many other desirable objectives of Big
Tech platform regulation, including consumer privacy and data usage rights,
standards and processes to remove clearly impermissible content, and
anticompetitive behaviors, as well as other Big Tech oligopoly issues beyond
social media.
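As referenced in step 1 above, the retained non-“editorial” firehose might amount to nothing more than the sketch below: simple category filters, reverse chronological order, and random sampling to throttle volume, with no ranking or amplification. The field names are illustrative assumptions.

```python
import random
from typing import Iterable, List, Optional, Set


def raw_firehose(posts: Iterable[dict], friends: Optional[Set[str]] = None,
                 groups: Optional[Set[str]] = None, topic: Optional[str] = None,
                 sample_rate: float = 1.0) -> List[dict]:
    """A non-'editorial' feed the platform could retain: filter only by simple
    categories, keep reverse chronological order, and sample randomly to keep
    the flow at a desired level -- no ranking, no amplification."""
    selected = [
        p for p in posts
        if (friends is None or p.get("author_id") in friends)
        and (groups is None or p.get("group_id") in groups)
        and (topic is None or topic in p.get("topics", []))
    ]
    if sample_rate < 1.0:
        selected = [p for p in selected if random.random() < sample_rate]
    return sorted(selected, key=lambda p: p["timestamp"], reverse=True)
```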
Whatever
route we take in this direction, democracy requires that our marketplace of
ideas be controlled by “we the people,” not platforms or advertisers. We must take
back control as soon as possible. Current efforts at antitrust breakups and
privacy regulation that leave filtering in the hands of others with their own
agendas will perpetuate this mortal threat to democracy. Return of filtering
power to citizens can revitalize our marketplace of ideas. It can augment our
social processes for “mediating consent” and pursuing happiness – and provide a
healthy base for gradual evolution toward digital democracy. But so long as
others subvert control of our bicycles for the mind to their own ends, we have
no time to lose.
---
This
is a working draft for discussion.
+++Updates are here: Growing Support for "Making Social Media Serve Society."
Feedback
on support for these ideas, concerns, disagreements, and needs for
clarification is invited. Please use the comment section below or email to
interwingled [at] teleshuttle [dot] com.
---
Personal note: The roots of these ideas
These
ideas have been brewing throughout my career (bio), with bursts of activity very early on, then
around 2002, and increasingly in the past decade. They are part of a rich
network that intertwingles with my work on FairPay and several of my
patented inventions. Some background on these roots may be helpful.
I
was first enthused by the potential of what we now call social media around
1970, when I had seen early hypertext systems (precursors of the Web) by Ted Nelson and Doug Engelbart, and then studied systems for
collaborative decision support by Murray Turoff and others, rolling into a self-study
course on collaborative media systems in graduate school.
My
first proposals for an open market in media filtering were inspired by the
financial industry parallels. A robust open market in filters for news and for market
data analytics was emerging when I worked for Standard & Poor's and Dow
Jones around 1990. Filters and analytics would monitor raw news feeds and market
data (price ticker) feeds, select and analyze that raw information using algorithms
and parameters chosen by the user, and work within any of a variety of trading
platforms.
I
drew on all of that when designing a system for open innovation and
collaborative development of early-stage ideas around 2002. That design
featured an open market for reputation-based ranking algorithms very much like
those proposed here. Exposure to Google PageRank, which distilled human
judgment and reputation for ranking Web search results, inspired me to broaden
Google's design to distill the wisdom of the crowd as reflected in social media
interactions, based on a sophisticated and nuanced reputation system.
By
2012 it was becoming apparent that the Internet was seriously disrupting the
marketplace of ideas, and Cass Sunstein’s observations about surprising validators inspired me to adapt my designs to social media. I became
active in groups that were addressing those concerns and more fully recast my earlier
design to focus on social media. My other work on innovative business models
for digital services also gave me a unique perspective on alternatives to the
perverse incentives of social media.
The
recent Fukuyama article was gratifying validation of the need for an open,
competitive market for feed filtering services driven by users, and inspired me
to refocus on that as the most direct point of leverage for structural
remediation, as outlined here.
I
am very grateful to the many researchers and activists in this field whom I
have had the privilege of interacting with and who have provided invaluable
stimulation, feedback, suggestions, and support, especially over the past
several years as this has become a widely recognized problem.