Monday, August 09, 2021

The Need to Unbundle Social Media - Looking Ahead

This expands on my two new articles in Tech Policy Press that review and synthesize an important debate among scholars in the Journal of Democracy (as noted in my previous blog post, which carries running updates). Those articles address the growing interest in a set of ideas that regard the scale of the current platforms as dangerous to democracy, and that propose to address the danger by “unbundling” social media into distinct functions -- separating the basic network layer from the higher-level function of content curation.

Here I add forward-looking comments that build on my continuing work on how technology can augment human collaboration. First some general comments, then some more specific ideas, and then a longer-term vision from my perspective as a technologist and futurist.

 

Much of the thinking about regulating social media is reactive to current harms and to the sense that techno-optimism has failed. I say this is not a tech problem, but a problem of poorly managed tech. We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works – truth and value are messy. Our task is to leverage tech to help us manage that messiness to be ever more productive.

Some general perspectives on the debate so far

No silver bullet – but maybe a silver-jacketed bullet: No single remedy will address all of the harms in an acceptable way. But I suggest that decentralized control of filtering and recommendation services is the silver-jacketed bullet that best de-scales the problems in both the short and long term. It will not solve all problems of harmful speech on its own, but it can create a tractable landscape in which the other bullets are not overmatched. That is especially important because mandates for content-specific curation seem likely to fail First Amendment challenges, as Keller explains in her article (and here).

Multiple disciplines: The problems of harmful speech have risen from a historic background level to crisis conditions because of how technology was hijacked to serve its builders’ perverse business model, rather than its human users. Re-engineering the technology and business model is essential to manage new problems of scope, scale, and speed -- to contain newly destructive feedback cascades in ways that do not trample freedom of expression. That requires a blend of technology, policy, social science, and business, with significant inputs from all of those.

Unbundling control of impression versus expression: Digital social media have fundamentally changed the dynamics of how speech flows through society, in a way that is still underappreciated. Control of the impression of information on people’s attention (in news feeds and recommendations) has become far more consequential than the mere expression of that information. Historically, the focus of free speech has been on expression; individuals generally retained control over what was impressed on them by choosing their information sources. Now the dominant platforms have taken it upon themselves to control the unified newsfeeds that each of us sees, and which people and groups they recommend we connect to. Traditional intermediary publishers and other curators of impression have been disintermediated. Robust diversity in intermediaries must be restored. Now, freedom of impression matters as much as freedom of expression. (More below.)


Time-frames: It will be challenging to balance this now-urgent crisis, remedies that will take time, and the fact that we are laying the foundations of digital society for decades to come. Digitally networked speech must support and enhance the richly evolving network of individuals, communities, institutions, and government that society relies on to understand truth and value – a social epistemic ecosystem. The platforms recklessly disintermediated that. A start on the long process of rejuvenating that ecosystem in our increasingly digital society is essential to any real solution.

On technological feasibility and curation at scale

Many, like Faris and Donovan, have argued that “more technology cannot solve the problem of misinformation-at-scale.” I believe the positive contribution technology can make to filtering for quality and value at scale is hugely underestimated – because we have not yet sought to apply an effective strategy. Instead of blindly disintermediating and destroying the epistemic social networks society has used to mediate speech, independent filtering services motivated to serve users could greatly augment those networks.

This may seem unrealistic to many, even foolishly techno-utopian, given that AI will almost certainly lack nuanced understanding and judgment for the foreseeable future. But strategies I suggest in my Tech Policy Press synthesis and in more detail elsewhere are not based on artificial intelligence. Instead, they rely on augmented intelligence, drawing on and augmenting crowdsourced human wisdom. I noted that crowdsourcing has been shown to be nearly as effective as experts in judging the quality of social media content. While those experiments required explicit human ratings, the more scalable strategy I propose relies on available metadata about the sharing of social media content. That can be mined to make inferences about judgments of quality from massive numbers of users. AI cannot judge quality and value on its own, but it can help collate human judgments of quality and value.

Similar methods proved highly successful in how Google conquered search by mining the human judgment inherent in the network of Web links built by humans -- to infer which pages had authority, based on the authority and reputation of other pages humans had linked to them (first “Webmasters,” later authors of all kinds). Google and other search engines also infer relevance by tracking which search results each user clicks on, and how long until they click something else. All of this is computationally dizzying, but now routine.

Social media already mine similar signals of human judgment (such as liking, sharing, and commenting), but currently use them to drive engagement. Filtering services could instead mine the human judgments of quality and value in likes and shares -- and in who is doing the liking and sharing -- as I have described in detail. By doing this multiple levels deep, augmentation algorithms can distill the wisdom of the crowd in a way that identifies and relies most heavily on the smartest of the crowd. Just as Google out-scaled Yahoo’s manually ranked search, this kind of augmented intelligence promises to enable services that are motivated to serve human objectives to do so far more scalably than armies of human curators. (That is not to exclude human curation, just as Web search now also draws on the human curation of Wikipedia.)
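To make the idea concrete, here is a minimal, purely illustrative sketch (not any platform's actual algorithm, and all names hypothetical) of reputation-weighted aggregation: an item's quality is inferred from who endorses it, and each user's reputation is in turn refined from the quality of what they endorse -- the same fixed-point idea behind PageRank-style authority.

```python
def rank_items(endorsements, n_rounds=10):
    """Rank items by crowd judgment, weighting judges by earned reputation.

    endorsements: dict mapping user id -> set of item ids they liked/shared.
    Purely illustrative of the multi-level "wisdom of the smartest of the
    crowd" strategy described above.
    """
    users = list(endorsements)
    items = {i for liked in endorsements.values() for i in liked}
    if not items:
        return []
    reputation = {u: 1.0 for u in users}   # start everyone with equal weight
    quality = {i: 0.0 for i in items}

    for _ in range(n_rounds):
        # Item quality = sum of endorsing users' reputations, normalized.
        for i in items:
            quality[i] = sum(reputation[u] for u in users if i in endorsements[u])
        top = max(quality.values()) or 1.0
        for i in items:
            quality[i] /= top
        # User reputation = average quality of the items they endorsed.
        for u in users:
            liked = endorsements[u]
            reputation[u] = (sum(quality[i] for i in liked) / len(liked)) if liked else 0.0

    return sorted(items, key=quality.get, reverse=True)
```

A user whose endorsements track high-quality items gains weight, so their future judgments count for more -- a crude stand-in for the deeper inference over sharing metadata discussed above.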

Several of the debaters (Ghosh and Srinivasan, Keller) raise the reasonable question of whether filtering service providers would actually emerge. As to the software investment, the core technology for this might be spun out from the platforms as open source, and further developed by analysis infrastructure providers that support multiple independent filtering services. That would do the heavy lifting of data analysis (and compartmentalize sensitive details of user data), on top of which the filtering services need only set the higher levels of the objective functions and weighting parameters that guide the rankings – a far less technically demanding task. Much of the platforms' existing filtering infrastructure could become the nucleus of one or more separate filtering infrastructure and service businesses. Existing publishers and other mediating institutions might welcome the opportunity to reestablish and extend their brands into this new infrastructure. The Bell System breakup provides a model for how a critical utility infrastructure business can be deeply rearchitected, unbundled, and opened to competition as overseen  by expert regulators, all without interruption of service, and with productive reassignment of staff. Not easy, but doable.

Formative ideas on impression ranking filters versus expression blocking filters

Triggered by Fukuyama’s comments about his group’s recent thinking about takedowns, I wonder whether there may be a need to differentiate two categories of filtering services that would be managed and applied very differently. This relates to the difference between moderation/blocking/censorship of expression and curation/ranking of impression.

Some speak of moderation in ways that make me wonder whether they mean exclusion of illegal content (possibly including some similar but slightly broader censorship of expression), or are just using the term loosely to refer also to the more nuanced issue of curation of impression.

My focus has been primarily on filters that do ranking for users, providing curation services that support their freedom of impression. Illegal content can properly be blocked (or later taken down) so that it is inaccessible to all users, but the task of curation filtering is a discretionary ranking of items for quality, value, and relevance to each user. That ranking should be controlled by users and the curation services they choose to act as their agents, as Fukuyama and I propose.

The essential difference is that blocking filters eliminate items from all feeds in their purview, while curation filters can only downrank undesirable items -- and the effect of those downrankings is contingent on how other items are ranked, and on whether uprankings from other active filters counterbalance them.
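As a toy illustration of that difference (all names hypothetical, a sketch rather than a proposed design), a feed builder might treat blocking filters as hard gates, while curation filters contribute signed score adjustments that net out against each other:

```python
def build_feed(items, blocking_filters, curation_filters):
    """Blocking filters remove items outright; curation filters only adjust rank."""
    scored = []
    for item in items:
        # A blocking filter's verdict is absolute: the item vanishes from the feed.
        if any(blocks(item) for blocks in blocking_filters):
            continue
        # Curation filters return signed adjustments; an upranking from one
        # filter can counterbalance a downranking from another.
        score = sum(f(item) for f in curation_filters)
        scored.append((score, item))
    return [item for score, item in sorted(scored, reverse=True)]
```

For example, one filter's downranking of an item as spammy can be outweighed when another filter the user has chosen upranks it more strongly, so the item still appears -- something no blocking filter would allow.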

But even for blocking there may be a need for a degree of configurability and localization (and possibly user control). This could enable a further desirable shift from “platform law” to community law. Some combination of alternative blocking filters might be applied to offer more nuance in what is blocked or taken down. This might apply voting logic, such that content is blocked only when some threshold of votes from multiple filters from multiple sources agree. It might also provide a user or community control layer, much as parents, schools, and businesses now choose from a market of Internet blocking filters, and might permit other mediators to offer such filters to those who choose them.
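The voting logic suggested above could be as simple as a threshold over independently chosen blocking filters -- a sketch with hypothetical names, not a proposal for any particular threshold:

```python
def blocked_by_vote(item, blocking_filters, threshold):
    """Block content only when at least `threshold` of the chosen filters agree.

    blocking_filters: predicates supplied by different sources (communities,
    parents, schools, civil-society mediators), so no single filter's verdict
    is decisive on its own.
    """
    votes = sum(1 for f in blocking_filters if f(item))
    return votes >= threshold
```

A community could tune the threshold (or the set of filters consulted) to be more or less restrictive than platform-wide defaults, which is the shift from "platform law" to community law in miniature.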

The digitization of our epistemic social networks will force us to think clearly about how technology should support the processes of mediation that have evolved over centuries -- processes that will now be built into code -- so that they continue to evolve in ways that are democratically decided. That is a task we should have begun a decade ago.

A Digital Constitution of Discourse

Whether free expression retains primacy is central to much of the debate in the Journal of Democracy. Fukuyama and Keller view that primacy as the reason the unbundling of filtering power is central to limiting platform abuses. An excellent refresher on why free expression is the pillar of democracy and the broader search for truth is Jonathan Rauch’s The Constitution of Knowledge: A Defense of Truth. Rauch digs deep into the social and political processes of mediating consent and how an institutional ecosystem facilitates that. (I previously posted about how his book resonates with my work over the past two decades and suggests some directions for future development -- this post is a further step.)

Others (including Renee DiResta, Niall Ferguson, Matthew Hutson, and Marina Gorbis) have also elucidated how what people accept as true and valuable is not just a matter of individuals, nor of people talking to each other in flat networks, but is mediated through communities, institutions, and authorities. They review how technology affected that: the break from the monolithic and “infallible” authority of the church when Gutenberg democratized the written word, the horribly disruptive wars of religion that followed, and the rebuilding, through many stages of evolution, of a growing ecology of publishers, universities, professional societies, journalists, and mass media.

Cory Doctorow observed that “the term ‘ecology’ marked a turning point in environmental activism” and suggests “we are on the verge of a new ‘ecology’ moment dedicated to combating tech monopolies.” He speaks of “a pluralism movement or a self-determination movement.” I suggest this is literally an epistemic ecology. We had such an ecology, but we are letting tech move fast and break it. It is time to think broadly about how to rebuild and modernize this epistemic ecology.

Faris and Donovan criticize the unbundling of filters as “fragmentation by design” with concern that it would “work against the notion of a unified public sphere.” But fragmentation can be a virtue. Democracies only thrive when the unified public sphere tolerates a healthy diversity of opinions, including some that may seem foolish or odious. Infrastructures gain robustness from diversity, and technology thrives on functional modularity. While network effects push technology toward scale, that scale can be modularized and distributed -- much as the unbundling of filtering would do. It has long been established that functional modularity is essential to making large-scale systems practical, interoperable, adaptable, and extensible.

Toward a digital epistemic ecology

Now we face a first generation of dominant social media platforms that disintermediated our rich ecology of mediators with no replacement. Instead, the platforms channel and amplify the random utterances of the mob – whether wisdom, drivel, or toxins -- into newsfeeds that they control and curate as they see fit. Now their motivation is to sell ads, with little concern for the truth or values they amplify. That is already disastrous, but it could turn much worse. In the future their motivation may be coopted to actively control our minds in support of some social or political agenda.

This ecological perspective leads to a vision of what to regulate for, not just against -- and makes an even stronger case for unbundling construction of our individual newsfeeds from platform control to user control.

  • Do we want top-down control by government, platforms, or independent institutions (including oversight boards) that we hope will be benevolent? That leads eventually to authoritarianism. “Platform law” is “overbroad and underinclusive,” even when done with diligence and good intentions.
  • Do we fully decentralize all social networks, to rely on direct democracy (or small bottom-up collectives)? That risks mob rule, the madness of crowds, and anarchy.
  • Can a hybrid distributed solution balance both top-down and bottom-up power with an emergent dynamic of checks and balances? Can technology help us augment the wisdom of crowds rather than the madness? That seems the best hope.

The ecological depth of such a structure has not yet been appreciated. It is not simply a hope for some new kind of curatorial beast that may or may not materialize. Rather, it is to provide the beginnings of an infrastructure that the communities and institutions we already have can build on -- to reestablish their crucial role in mediating our discourse. They can be complemented and energized by whatever new kinds of communities and institutions may emerge as we learn to apply these powerful new tools. That requires tools not just for curation, but for integrating with the other aspects of these institutions and their broader missions.

Now our communities and institutions are treated little differently from any individual user of social media, which literally disintermediates them from their role as mediators. The platforms have arrogated to themselves, alone, the power to mediate what each of us sees from the mass of information flowing through social media. Unbundling filtering services to be independently operated would provide a ready foundation for our existing communities and institutions to restore their mediating role -- and create fertile ground for the emergence of new ones. The critical task ahead is to clarify how filtering services can become a foundation for curatorial mediators to regain their decentralized roles in the digital realm. How will they curate not only their own content, but that of others? What kinds of institutions will have what curatorial powers?

Conclusion – can truth, value and democracy survive?

A Facebook engineer lamented in 2011 that “The best minds of my generation are thinking about how to make people click ads.” After ten years of that, isn’t it time to get our best minds thinking about empowering us in whatever ways fulfill us? Some of those minds should be technologists, some not. Keller’s taxonomy of problem areas is a good place to start, not to end.

There is some truth to the counter-Brandeisian view that more speech is not a remedy for bad speech -- just as there is some truth to the view that more intolerance is not a remedy for intolerance. Democracies cannot eliminate either. All they have is the unsatisfyingly incomplete remedy of healthy dialog and mediation, supported by good governance. Churchill said, “democracy is the worst form of government – except for all the others that have been tried.” Markets and technology, when not guided by democratic government, have at times stressed democracy -- but when properly guided, they can dramatically enhance it.

An assemblage of filtering services is the start of a digital support infrastructure for that. Some filtering services may gain institutional authority, and some may be given little authority, but we the people must have an ongoing say in that. This will lead to a new layer of social mediation functionality that can become a foundation for the ecology of digital democracy.

Which future do we want? One of platform law acting on its own, or as the servant of an authoritarian government, to control the seen reality and passions of an unstructured mob? Or a digitally augmented upgrade of the rich ecology of mediators of consent on truth and value that -- despite occasional lapses -- has given us a vibrant and robust marketplace of ideas?

Thursday, August 05, 2021

Unbundling Social Media Filtering Services – Updates on Debate and Development

This is an informal work in progress, updating and expanding on my two articles in Tech Policy Press (8/9/21) that relate to an important debate in the Journal of Democracy on The Limits of Platform Power.

The focus is on how to manage social media, and specifically on the similar proposals by a number of prominent experts to unbundle the filtering services that curate the newsfeeds and recommendations served to users. The updates are best understood after reading those articles.


This visualization from my 4/22/21 Tech Policy Press article may also be helpful:



RUNNING UPDATES (most recent first):

  • [9/10/21]
    "Context collapse" is a critical factor in creating conflict in social media, as explained in The day context came back to Twitter (9/8/21), by Casey Newton. As he explains, Facebook Groups and the new Twitter Communities are a way to address this problem of "taking multiple audiences with different norms, standards, and levels of knowledge, and herding them all into a single digital space." Filters are a complementary tool for seeking context, especially when user controlled and applied with intentionality. Social media should offer both.

  • [8/25/21]
    The importance of a cross-platform view of the social media ecosystem is highlighted in one of the articles briefly reviewed in Tech Policy Press this week. The article by Zeve Sanderson et al. on off-platform spread of Twitter-flagged tweets (8/24/21) argues for “ecosystem-level solutions,” including such options as 1) multi-platform expansion of the Oversight Board, 2) unbundling of filters/recommenders as discussed here (citing the Francis Fukuyama et al. middleware proposal), and 3) “standards for value-driven algorithmic design” (as outlined in the following paper by Helberger).

    A conceptual framework On the Democratic Role of News Recommenders by Natali Helberger (6/12/19, cited by Sanderson) provides a very thought-provoking perspective on how we might want social media to serve society. This is the kind of thinking about what to regulate for, not just against, that I have suggested is badly needed. It suggests four very different (but in some ways complementary) sets of objectives to design for. This perspective – especially the liberal and deliberative models – can be read to make a strong case for unbundling of filters/recommenders in a way that offers user choice (plus perhaps some default or even required ones as well).

    I hope to do a future piece expanding on the Helberger and Goldman (cited in my 8/15 update below) frameworks and how they combine with some of the ideas in my Looking Ahead post about the need to rebuild the social mediation ecosystems that we built over centuries -- and that digital social media are now abruptly disintermediating with no replacement.
  • [8/17/21]
    Progress on Twitter's @BlueSky unbundling initiative: Jay Graber announces "I’ll be leading @bluesky, an initiative started by @Twitter to decentralize social media. Follow updates on Twitter and at blueskyweb.org" (8/16). Mike Masnick comments: "there has been a lot going on behind the scenes, and now they've announced that Jay will be leading the project, which is FANTASTIC news." Masnick expands: "There are, of course, many, many challenges to making this a reality. And there remains a high likelihood of failure. But one of the key opportunities for making a protocol future a reality -- short of some sort of major catastrophe -- is for a large enough player in the space to embrace the concept and bring millions of users with them. Twitter can do that. And Jay is exactly the right person to both present the vision and to lead the team to make it a reality. ...This really is an amazing opportunity to shape the future and move us towards a more open web, rather than one controlled by a few dominant companies."

    Helpful perspectives on improving and diversifying filtering services are in some articles by Jonathan Stray, Designing Recommender Systems to Depolarize (7/11/21) and Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals (1/21/21). One promising conflict transformation ranking strategy that has been neglected is “surprising validators,” suggested by Cass Sunstein, as I expanded on in 2012 (and since). All of these deserve research and testing -- and an open market in filtering services is the best way to make that happen.

  • [8/15/21]
    Additional rationales for demanding diversity in filtering services, and some of the forms this may take, are nicely surveyed in Content Moderation Remedies by Eric Goldman. He suggests "...moving past the binary remove-or-not remedy framework that dominates the current discourse about content moderation," and provides an extensive taxonomy of remedy options. He explains how expanded non-removal remedies can offer a workaround to the dilemma of remedies that are not proportional to different levels of harm. Diverse filtering services can have not only different content selection criteria, but different strategies for discouraging abuse. And, as he points out, "user-controlled filters have a venerable tradition in online spaces." (Thanks to Daphne Keller for suggesting this article to me as relevant to my Looking Ahead piece, and for her other helpful comments.)
  • [8/9/21]
    My review and synthesis of the Journal of Democracy debate mentioned in my 7/21 update are now published in Tech Policy Press.
    + I expand on those two articles in Tech Policy Press in The Need to Unbundle Social Media - Looking Ahead: We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works – truth and value are messy. Our task is to leverage tech to help us manage that messiness to be ever more productive.

Older updates -- carried over from the page of updates to my 4/22/21 Tech Policy Press article

  • [7/21/21]
    A very interesting five-article debate on these unbundling/middleware proposals, all headed The Future of Platform Power, is in the Journal of Democracy, responding to Fukuyama's April article there. Fukuyama responds to the other four commentaries (which include a reference to my Tech Policy Press article). The one by Daphne Keller, consistent with her items noted just below, is generally supportive of this proposal, while providing a very constructive critique that identifies four important concerns. As I tweeted in response, "“The best minds of my generation are thinking about how to make people click ads” – get our best minds to think about empowering us in whatever ways fulfill us! @daphnehk problem list is a good place to start, not to end." I plan to post further comments on this debate soon [now linked above, 8/9/21].

  • [6/15/21]
    Very insightful survey analysis of First Amendment issues relating to proposed measures for limiting harmful content on social media -- and how most run into serious challenges -- in Amplification and Its Discontents, by Daphne Keller (a former Google Associate General Counsel, now at Stanford, 6/8/21). Wraps up with discussion of proposals for "unbundling" of filtering services: "An undertaking like this would be very, very complicated. It would require lawmakers and technologists to unsnarl many knots.... But unlike many of the First Amendment snarls described above, these ones might actually be possible to untangle." Keller provides a very balanced analysis, but I read this as encouraging support on the legal merits of what I have proposed: the way to preserve freedom of expression is to protect users' freedom of impression -- not easy, but the only option that can work. Keller's use of the term "unbundling" is also helpful in highlighting how this kind of remedy has precedent in antitrust law.
    Interview with Keller on this article by Justin Hendrix of Tech Policy Press, Hard Problems: Regulating Algorithms & Antitrust Legislation (6/20/21).
    + Added detail on the unbundling issues is in Keller's 9/9/20 article, If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
  • Another perspective on how moderation conflicts with freedom is in On Social Media, American-Style Free Speech Is Dead (Gilad Edelman, Wired 4/27/21), which reports on Evelyn Douek's more international perspective. Key ideas are to question the feasibility of American-style binary free speech absolutism and shift from categorical limits to more proportionality in balancing societal interests. I would counter that the decentralization of filtering to user choice enables proportionality and balance to emerge from the bottom up, where it has a democratic validity as "community law," rather than being imposed from the top down as "platform law." The Internet is all about decentralized control -- why should we sacrifice freedom of speech to a failure of imagination in managing a technology that should enhance freedom? Customized filtering can provide a receiver-specific richness of proportionality that better balances rights of impression with nuanced freedom of expression. Douek rightly argues that we must accept an error rate in moderation -- why not expect a bottom-up, user-driven error rate to be more open and responsive to evolving wisdom and diverse community standards than one applied across the board?
  • [5/18/21]
    Clear insights on the new dynamics of social media - plus new strategies for controlling disinformation with friction, circuit-breakers, and crowdsourced validation in How to Stop Misinformation Before It Gets Shared, by Renee DiResta and Tobias Rose-Stockwell (Wired 3/26/21). Very aligned with my article (but stops short of the contention that democracy cannot depend on the platforms to do what is needed).
  • [5/17/21]
    Important support and suggestions related to Twitter's Bluesky initiative from eleven members of the Harvard Berkman Klein community are in A meta-proposal for Twitter's bluesky project (3/31/21). They are generally aligned with the directions suggested in my article.
  • [4/22/21]
    Another piece by Francis Fukuyama that addresses his Stanford group proposal is in the Journal of Democracy: Making the Internet Safe for Democracy (April 2021).
    (+See 7/21/21 update, above, for follow-ups.)