This expands on my two new articles in Tech Policy Press that review and synthesize an important debate among scholars in the Journal of Democracy (as noted in my previous blog post, which carries running updates). Those articles address the growing interest in a set of ideas that regard the scale of the current platforms as dangerous to democracy, and that propose to address the danger by “unbundling” social media into distinct functions -- separating the basic network layer of the content from the higher-level function of content curation.
Here I add forward-looking comments that build on my continuing work on how technology can augment human collaboration. First some general comments, then some more specific ideas, and then a longer-term vision from my perspective as a technologist and futurist.
Much of the thinking about regulating social media is reactive to current harms, and to the sense that techno-optimism has failed. I say this is not a tech problem but one of poorly managed tech. We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works -- truth and value are messy. Our task is to leverage tech to help us manage that messiness ever more productively.
Some general perspectives on the debate so far
No silver bullet – but maybe a silver-jacketed bullet: No single remedy will address all the harms in an acceptable way. But I suggest that decentralized control of filtering and recommendation services is the silver-jacketed bullet that best de-scales the problems in the short and long term. That will not solve all problems of harmful speech by itself, but it can create a tractable landscape in which the other bullets are not overmatched. That is especially important because mandates for content-specific curation seem likely to fail First Amendment challenges, as Keller explains in her article (and here).
Multiple disciplines: The problems of harmful speech have risen from a historic background level to crisis conditions because of how technology was hijacked to serve its builders’ perverse business model, rather than its human users. Re-engineering the technology and business model is essential to manage new problems of scope, scale, and speed -- to contain newly destructive feedback cascades in ways that do not trample freedom of expression. That requires a blend of technology, policy, social science, and business, with significant inputs from all of those.
Unbundling control of impression versus expression: Digital social media have fundamentally changed the dynamics of how speech flows through society, in a way that is still underappreciated. Control of the impression of information on people’s attention (in news feeds and recommendations) has become far more consequential than the mere expression of that information. Historically, the focus of free speech has been on expression. Individuals generally enjoyed control over what was impressed on them -- by choosing their information sources. Now the dominant platforms have taken it upon themselves to control the unified newsfeeds that each of us sees, and which people and groups they recommend we connect to. Traditional intermediary publishers and other curators of impression have been disintermediated. Robust diversity in intermediaries must be restored. Now, freedom of impression is important. (More below.)
Time-frames: It will be challenging to balance this now-urgent crisis, remedies that will take time, and the fact that we are laying the foundations of digital society for decades to come. Digitally networked speech must support and enhance the richly evolving network of individuals, communities, institutions, and government that society relies on to understand truth and value – a social epistemic ecosystem. The platforms recklessly disintermediated that. A start on the long process of rejuvenating that ecosystem in our increasingly digital society is essential to any real solution.
On technological feasibility and curation at scale
Many have argued, as Faris and Donovan do, that “more technology cannot solve the problem of misinformation-at-scale.” I believe the positive contribution technology can offer in filtering for quality and value at scale is hugely underestimated -- because we have not yet sought to apply an effective strategy. Instead of blindly disintermediating and destroying the epistemic social networks society has used to mediate speech, we could deploy independent filtering services, motivated to serve users, that greatly augment those networks.
This may seem unrealistic to many, even foolishly techno-utopian, given that AI will almost certainly lack nuanced understanding and judgment for the foreseeable future. But the strategies I suggest in my Tech Policy Press synthesis, and in more detail elsewhere, are not based on artificial intelligence. Instead, they rely on augmented intelligence, drawing on and augmenting crowdsourced human wisdom. I noted that crowdsourcing has been shown to be nearly as effective as experts in judging the quality of social media content. While those experiments required explicit human ratings, the more scalable strategy I propose relies on available metadata about the sharing of social media content. That can be mined to make inferences about judgments of quality from massive numbers of users. AI cannot judge quality and value on its own, but it can help collate human judgments of quality and value.
Similar methods were central to how Google conquered search: mining the human judgment inherent in the network of Web links built by humans to infer which pages had authority, based on the authority and reputation of the other pages humans had linked to them (first “Webmasters,” later authors of all kinds). Google and other search engines also infer relevance by tracking which search results each user clicks on, and how long until they click something else. All of this is computationally dizzying, but now routine.
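For readers who want the intuition in concrete form, here is a toy sketch of that link-authority idea -- a few lines of PageRank-style power iteration over a made-up link graph. The graph, damping factor, and names are illustrative only, not Google’s actual implementation:

```python
# Toy link graph: each page "votes" for the pages it links to, and a vote
# from an authoritative page counts for more. (Illustrative only.)
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
pages = list(links)
authority = {p: 1.0 / len(pages) for p in pages}  # start uniform
damping = 0.85

for _ in range(50):  # iterate until the scores stabilize
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * authority[page] / len(outlinks)
        for target in outlinks:
            new[target] += share  # a link passes on its source's authority
    authority = new

print(sorted(authority.items(), key=lambda kv: -kv[1]))
```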
Social media already mine similar signals of human judgment (such as liking, sharing, and commenting) -- but they now use those signals to drive engagement. Filtering services could instead mine the human judgments of quality and value in likes and shares -- and in who is doing the liking and sharing -- as I have described in detail. By doing this multiple levels deep, augmentation algorithms can distill the wisdom of the crowd in a way that identifies and relies most heavily on the smartest of the crowd. Just as Google out-scaled Yahoo’s manually ranked search, this kind of augmented intelligence promises to enable services that are motivated to serve human objectives to do so far more scalably than armies of human curators. (That is not to exclude human curation, just as Web search now also draws on the human curation of Wikipedia.)
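As an illustration of what “multiple levels deep” could mean, here is a minimal sketch, assuming a made-up set of likes: an item’s quality is estimated from the reputations of those who liked it, and each user’s reputation is in turn updated from the quality of what they endorsed, iterated over a few rounds. Everything here (names, data, the update rule) is hypothetical:

```python
# Hypothetical sketch of reputation-weighted crowd wisdom (illustrative data).
likes = {  # item -> users who liked or shared it
    "post1": ["alice", "bob"],
    "post2": ["bob", "carol", "dave"],
    "post3": ["eve"],
}
users = {u for raters in likes.values() for u in raters}
reputation = {u: 1.0 for u in users}  # start everyone equal

for _ in range(10):  # alternate: item quality <-> rater reputation
    # an item's quality is the mean reputation of those who endorsed it
    quality = {item: sum(reputation[u] for u in raters) / len(raters)
               for item, raters in likes.items()}
    # a user's reputation is the mean quality of the items they endorsed
    endorsed = {u: [q for item, q in quality.items() if u in likes[item]]
                for u in users}
    reputation = {u: sum(qs) / len(qs) for u, qs in endorsed.items()}
    mean = sum(reputation.values()) / len(reputation)
    reputation = {u: r / mean for u, r in reputation.items()}  # normalize

print(quality, reputation)
```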
Several of the debaters (Ghosh and Srinivasan, Keller) raise the reasonable question of whether filtering service providers would actually emerge. As to the software investment, the core technology for this might be spun out from the platforms as open source, and further developed by analysis infrastructure providers that support multiple independent filtering services. Those providers would do the heavy lifting of data analysis (and compartmentalize sensitive details of user data); on top of that, the filtering services need only set the higher levels of the objective functions and weighting parameters that guide the rankings -- a far less technically demanding task. Much of the platforms’ existing filtering infrastructure could become the nucleus of one or more separate filtering infrastructure and service businesses. Existing publishers and other mediating institutions might welcome the opportunity to reestablish and extend their brands into this new infrastructure. The Bell System breakup provides a model for how a critical utility infrastructure business can be deeply rearchitected, unbundled, and opened to competition under the oversight of expert regulators, all without interruption of service, and with productive reassignment of staff. Not easy, but doable.
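To make that division of labor concrete, here is a minimal sketch with entirely hypothetical signal names and weights: the analysis infrastructure computes per-item signals from raw data it keeps compartmentalized, and each filtering service expresses its editorial objectives as nothing more than a set of weights over those signals.

```python
# Hypothetical sketch: infrastructure computes signals; a filtering service
# only supplies the weights that express its objectives. All names are
# illustrative, not an actual platform API.

def rank(items, signals, weights):
    """Order items by a weighted sum of infrastructure-computed signals."""
    def score(item):
        return sum(weights.get(name, 0.0) * value
                   for name, value in signals[item].items())
    return sorted(items, key=score, reverse=True)

# Per-item signals the infrastructure provider might expose, derived from
# sensitive data the filtering service never sees directly:
signals = {
    "post1": {"quality": 0.9, "recency": 0.2, "relevance": 0.7},
    "post2": {"quality": 0.4, "recency": 0.9, "relevance": 0.8},
}

quality_first = {"quality": 0.7, "relevance": 0.3}  # one service's objectives
news_first = {"recency": 0.6, "relevance": 0.4}     # another's

print(rank(list(signals), signals, quality_first))  # ['post1', 'post2']
print(rank(list(signals), signals, news_first))     # ['post2', 'post1']
```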
Formative ideas on impression ranking filters versus expression blocking filters
Triggered by Fukuyama’s comments about his group’s recent thinking about takedowns, I wonder if there may be a need to differentiate two categories of filtering services that would be managed and applied very differently. This relates to the difference between moderation/blocking/censorship of expression and curation/ranking of impression.
Some speak of moderation in ways that make me wonder whether they mean exclusion of illegal content (possibly extending to some similar but slightly broader censorship of expression), or are just using the term loosely to also refer to the more nuanced issue of curation of impression.
My focus has been primarily on filters that do ranking for users, providing curation services that support their freedom of impression. Illegal content can properly be blocked (or later taken down) to be inaccessible to all users, but the task of curation filtering is a discretionary ranking of items for quality, value, and relevance to each user. That should be controlled by users and the curation services they choose to act as their agents, as Fukuyama and I propose.
The essential difference is that blocking filters can eliminate items from all feeds in their purview, while curation filters can only downrank undesirable items -- the effect of those downrankings would be contingent on how other items are ranked and whether uprankings from other active filters counterbalance those downrankings.
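A minimal sketch of that difference, with made-up items and scores: the blocking filter removes an item for everyone, while curation filters merely contribute up- or down-adjustments whose net effect depends on the other active filters.

```python
# Illustrative only: blocking removes; curation filters only adjust rank.

def feed(items, blockers, curators):
    visible = [i for i in items if not any(b(i) for b in blockers)]
    # each curation filter contributes an up- or down-adjustment to the score
    return sorted(visible, key=lambda i: sum(c(i) for c in curators),
                  reverse=True)

items = ["news", "meme", "scam"]
blockers = [lambda i: i == "scam"]            # e.g., illegal content
curators = [
    lambda i: -1.0 if i == "meme" else 0.0,   # one filter downranks memes
    lambda i: +1.5 if i == "meme" else 0.0,   # another upranks them more
]
print(feed(items, blockers, curators))  # ['meme', 'news'] -- 'scam' is gone
```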
But even for blocking there may be a need for a degree of configurability and localization (and possibly user control). This could enable a further desirable shift from “platform law” to community law. Some combination of alternative blocking filters might be applied to offer more nuance in what is blocked or taken down. This might apply voting logic, such that content is blocked only when some threshold of votes from multiple filters from multiple sources agree (see the sketch below). It might provide for a user or community control layer, much as parents, schools, and businesses choose from a market of Internet blocking filters, and might permit other mediators to offer such filters to those who choose them.
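Here is one way such voting logic might look, as a minimal sketch with placeholder filters and a threshold that a user or community layer could configure:

```python
# Hypothetical voting logic: block only when enough independent filters agree.

def blocked(item, filters, threshold):
    votes = sum(1 for f in filters if f(item))
    return votes >= threshold

filters = [
    lambda item: "spam-signature" in item,  # platform-supplied filter
    lambda item: "spam-signature" in item,  # community-chosen filter
    lambda item: False,                     # a more permissive mediator
]
print(blocked("post with spam-signature", filters, threshold=2))  # True
print(blocked("ordinary post", filters, threshold=2))             # False
```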
The digitization of our epistemic social networks will force us to think clearly about how technology should support the processes of mediation that have evolved over centuries -- processes that will now be built into code -- so that they can continue to evolve in ways that are democratically decided. That is a task we should have begun a decade ago.
A Digital Constitution of Discourse
Whether free expression retains primacy is central to much of the debate in the Journal of Democracy. Fukuyama and Keller view that primacy as the reason the unbundling of filtering power is central to limiting platform abuses. An excellent refresher on why free expression is the pillar of democracy and the broader search for truth is Jonathan Rauch’s The Constitution of Knowledge: A Defense of Truth. Rauch digs deep into the social and political processes of mediating consent, and how an institutional ecosystem facilitates that. (I previously posted about how his book resonates with my work over the past two decades and suggests some directions for future development -- this post is a further step.)
Others (including Renee DiResta, Niall Ferguson, Matthew Hutson, and Marina Gorbis) have also elucidated how what people accept as true and valuable is not just a matter of individuals, nor of people talking to each other in flat networks, but is mediated through communities, institutions, and authorities. They review how technology affected that, breaking from the monolithic and “infallible” authority of the church when Gutenberg democratized the written word. That led to horribly disruptive wars of religion, and a rebuilding through many stages of evolution of a growing ecology of publishers, universities, professional societies, journalists, and mass media.
Cory Doctorow observed that “the term ‘ecology’ marked a turning point in environmental activism” and suggested “we are on the verge of a new ‘ecology’ moment dedicated to combating tech monopolies.” He speaks of “a pluralism movement or a self-determination movement.” I suggest this is literally an epistemic ecology. We had such an ecology, but we are letting tech move fast and break it. It is time to think broadly about how to rebuild and modernize this epistemic ecology.
Faris and Donovan criticize the unbundling of filters as “fragmentation by design” with concern that it would “work against the notion of a unified public sphere.” But fragmentation can be a virtue. Democracies only thrive when the unified public sphere tolerates a healthy diversity of opinions, including some that may seem foolish or odious. Infrastructures gain robustness from diversity, and technology thrives on functional modularity. While network effects push technology toward scale, that scale can be modularized and distributed -- much as the unbundling of filtering would do. It has long been established that functional modularity is essential to making large-scale systems practical, interoperable, adaptable, and extensible.
Toward a digital epistemic ecology
Now we face a first generation of dominant social media platforms that disintermediated our rich ecology of mediators with no replacement. Instead, the platforms channel and amplify the random utterances of the mob -- whether wisdom, drivel, or toxins -- into newsfeeds that they control and curate as they see fit. Now their motivation is to sell ads, with little concern for the truth or values they amplify. That is already disastrous, but it could turn much worse. In the future their motivation may be coopted to actively control our minds in support of some social or political agenda.
This ecological perspective leads to a vision of what to regulate for, not just against -- and makes an even stronger case for unbundling the construction of our individual newsfeeds, shifting it from platform control to user control.
- Do we want top-down control by government, platforms, or independent institutions (including oversight boards) that we hope will be benevolent? That leads eventually to authoritarianism. “Platform law” is “overbroad and underinclusive,” even when done with diligence and good intentions.
- Do we fully decentralize all social networks, to rely on direct democracy (or small bottom-up collectives)? That risks mob rule, the madness of crowds, and anarchy.
- Can a hybrid distributed solution balance both top-down and bottom-up power with an emergent dynamic of checks and balances? Can technology help us augment the wisdom of crowds rather than the madness? That seems the best hope.
The ecological depth of such a structure has not yet been appreciated. It is not simply to hope for some new kind of curatorial beast that may or may not materialize. Rather, it is to provide the beginnings of an infrastructure that the communities and institutions we already have can build on -- to reestablish their crucial role in mediating our discourse. They can be complemented and energized by whatever new kinds of communities and institutions may emerge as we learn to apply these powerful new tools. That requires tools not just for curation, but for integrating with the other aspects of these institutions and their broader missions.
Now our communities and institutions are treated little differently from any individual user of social media, which literally disintermediates them from their role as mediators. The platforms have arrogated to themselves, alone, the power to mediate what each of us sees from the mass of information flowing through social media. Unbundling filtering services to be independently operated would provide a ready foundation for our existing communities and institutions to restore their mediating role -- and create fertile ground for the emergence of new ones. The critical task ahead is to clarify how filtering services can become a foundation for curatorial mediators to regain their decentralized roles in the digital realm. How will they curate not only their own content, but that of others? What kinds of institutions will have what curatorial powers?
Conclusion – can truth, value and democracy survive?
A Facebook engineer lamented in 2011 that “The best minds of my generation are thinking about how to make people click ads.” After ten years of that, isn’t it time to get our best minds thinking about empowering us in whatever ways fulfill us? Some of those minds should be technologists, some not. Keller’s taxonomy of problem areas is a good place to start, not to end.
There is some truth to the counter-Brandeisian view that more speech is not a remedy for bad speech -- just as there is some truth to the view that more intolerance is not a remedy for intolerance. Democracies cannot eliminate either. All they have is the unsatisfyingly incomplete remedy of healthy dialog and mediation, supported by good governance. Churchill said, “democracy is the worst form of government -- except for all the others that have been tried.” Markets and technology have at times stressed democracy when not guided by democratic government, but they can dramatically enhance it when properly guided.
An assemblage of filtering services is the start of a digital support infrastructure for that. Some filtering services may gain institutional authority, and some may be given little authority, but we the people must have an ongoing say in that. This will lead to a new layer of social mediation functionality that can become a foundation for the ecology of digital democracy.
Which future do we want? One of platform law acting on its own, or as the servant of an authoritarian government, to control the seen reality and passions of an unstructured mob? Or a digitally augmented upgrade of the rich ecology of mediators of consent on truth and value that -- despite occasional lapses -- has given us a vibrant and robust marketplace of ideas?