Wednesday, September 15, 2021

Reconciling Social Media & Democracy - Upcoming Mini-Symposium, 10/7/21, 1-4 pm ET

This will be a must-see event for those concerned about the difficult challenge of reversing the harms social media are doing to democracy. Hosted by Tech Policy Press, it brings together all sides of the debate in the Journal of Democracy that I reviewed and expanded on for Tech Policy Press, along with some other expert voices. 

RSVP here.

                       ------------------------

Topic: Reconciling Social Media & Democracy

Description: While various solutions to problems at the intersection of social media and democracy are under consideration, from regulation to antitrust action, some experts are enthusiastic about the opportunity to create a new social media ecosystem that relies less on centrally managed platforms like Facebook and more on decentralized, interoperable services and components.

In this mini-symposium, we will explore some of these ideas and critique them.

Participants include:

  • Tracy Chou, founder and CEO of Block Party, software engineer, and diversity advocate
  • Joan Donovan, Research Director of the Shorenstein Center on Media, Politics and Public Policy
  • Cory Doctorow, science fiction author, activist and journalist
  • Francis Fukuyama, Senior Fellow at Stanford University's Freeman Spogli Institute for International Studies, Mosbacher Director of FSI's Center on Democracy, Development, and the Rule of Law, and Director of Stanford's Ford Dorsey Master's in International Policy
  • Dipayan Ghosh, Co-Director of the Digital Platforms & Democracy Project at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School and faculty at Harvard Law School
  • Justin Hendrix, CEO and Editor, Tech Policy Press
  • Daphne Keller, Director of the Program on Platform Regulation at Stanford's Cyber Policy Center
  • Nathalie Maréchal, Senior Policy and Partnerships Manager at Ranking Digital Rights
  • Richard Reisman, innovator, entrepreneur, consultant, and investor
  • Ramesh Srinivasan, Professor, UCLA Department of Information Studies and Director of UC Digital Cultures Lab

Time: Oct 7, 2021 01:00 PM in Eastern Time (US and Canada)

------------------------

The core idea of the proposals to unbundle and decentralize control of what is recommended or filtered into our newsfeed is not just that the dominant platforms have done a horrible job, causing great harm to our democratic process -- but that this level of private power to control essential portions of our marketplace of ideas is incompatible with democracy, no matter how hard they try.

I am very pleased to have Justin Hendrix's support in organizing this event for Tech Policy Press, and will be honored to moderate portions of it.

Links to my summaries of the debate articles and my related work can be found in the Selected Items tab above.

Monday, August 09, 2021

The Need to Unbundle Social Media - Looking Ahead

This expands on my two new articles in Tech Policy Press that review and synthesize an important debate among scholars in the Journal of Democracy (as noted in my previous blog post, which carries running updates). Those articles address the growing interest in a set of ideas that regard the scale of the current platforms as dangerous to democracy and propose to address that danger by “unbundling” social media into distinct functions, separating the basic network layer of content from the higher-level function of content curation.

Here I add forward-looking comments that build on my continuing work on how technology can augment human collaboration. First some general comments, then some more specific ideas, and then a longer-term vision from my perspective as a technologist and futurist.

 

Much of the thinking about regulating social media is reactive, responding to current harms and to a sense that techno-optimism has failed. I say this is not a tech problem but a problem of poorly managed tech. We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works – truth and value are messy. Our task is to leverage tech to help us manage that messiness to be ever more productive.

Some general perspectives on the debate so far

No silver bullet – but maybe a silver-jacketed bullet: No single remedy will address all the harms in an acceptable way. But I suggest that decentralized control of filtering and recommendation services is the silver-jacketed bullet that best de-scales the problems in the short and long term. That alone will not solve every problem of harmful speech, but it can create a tractable landscape in which the other bullets will not be overmatched. That is especially important because mandates for content-specific curation seem likely to fail First Amendment challenges, as Keller explains in her article (and here).

Multiple disciplines: The problems of harmful speech have risen from a historic background level to crisis conditions because of how technology was hijacked to serve its builders’ perverse business model, rather than its human users. Re-engineering the technology and business model is essential to manage new problems of scope, scale, and speed -- to contain newly destructive feedback cascades in ways that do not trample freedom of expression. That requires a blend of technology, policy, social science, and business, with significant inputs from all of those.

Unbundling control of impression versus expression: Digital social media have fundamentally changed the dynamics of how speech flows through society, in a way that is still underappreciated. Control of the impression of information on people’s attention (in news feeds and recommendations) has become far more consequential than the mere expression of that information. Historically, the focus of free speech has been on expression; individuals generally retained control over what was impressed on them by choosing their information sources. Now the dominant platforms have taken it upon themselves to control the unified newsfeeds that each of us sees, and which people and groups they recommend we connect to. Traditional intermediary publishers and other curators of impression have been disintermediated. Robust diversity in intermediaries must be restored. Now it is freedom of impression that demands attention. (More below.)


Time-frames: It will be challenging to balance this now-urgent crisis, remedies that will take time, and the fact that we are laying the foundations of digital society for decades to come. Digitally networked speech must support and enhance the richly evolving network of individuals, communities, institutions, and government that society relies on to understand truth and value – a social epistemic ecosystem. The platforms recklessly disintermediated that. A start on the long process of rejuvenating that ecosystem in our increasingly digital society is essential to any real solution.

On technological feasibility and curation at scale

Many have argued, as Faris and Donovan do, that “more technology cannot solve the problem of misinformation-at-scale.” I believe the positive contribution technology can offer in filtering for quality and value at scale is hugely underestimated – because we have not yet sought to apply an effective strategy. Instead of blindly disintermediating and destroying the epistemic social networks society has used to mediate speech, independent filtering services motivated to serve users could greatly augment those networks.

This may seem unrealistic to many, even foolishly techno-utopian, given that AI will almost certainly lack nuanced understanding and judgment for the foreseeable future. But strategies I suggest in my Tech Policy Press synthesis and in more detail elsewhere are not based on artificial intelligence. Instead, they rely on augmented intelligence, drawing on and augmenting crowdsourced human wisdom. I noted that crowdsourcing has been shown to be nearly as effective as experts in judging the quality of social media content. While those experiments required explicit human ratings, the more scalable strategy I propose relies on available metadata about the sharing of social media content. That can be mined to make inferences about judgments of quality from massive numbers of users. AI cannot judge quality and value on its own, but it can help collate human judgments of quality and value.

Similar methods proved highly successful when Google conquered search by mining the human judgment inherent in the Web’s network of links -- inferring which pages had authority from the authority and reputation of the pages that humans had linked to them (first “Webmasters,” later authors of all kinds). Google and other search engines also infer relevance by tracking which search results each user clicks on, and how long until they click something else. All of this is computationally dizzying, but now routine.
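For readers who want to see the mechanics, here is a minimal, self-contained sketch of the kind of iterative link-based authority scoring that PageRank made famous. It is a toy illustration of the general technique, not Google's actual algorithm; the graph, damping factor, and iteration count are arbitrary.

```python
# Toy sketch of link-based authority scoring (PageRank-style power iteration).
# Illustrative only: an arbitrary tiny graph, not Google's production algorithm.

def authority_scores(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    score = {p: 1.0 / n for p in pages}              # start with uniform authority
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * score[page] / len(targets)
            for target in targets:                   # each link passes on a share of the linker's authority
                new[target] += share
        score = new
    return score

if __name__ == "__main__":
    toy_web = {"blogA": ["paper1", "paper2"], "blogB": ["paper1"], "paper1": ["paper2"], "paper2": []}
    for page, value in sorted(authority_scores(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {value:.3f}")
```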

Social media already mine similar signals of human judgment (such as liking, sharing, and commenting), but now use those signals to drive engagement. Filtering services can instead mine the human judgments of quality and value in likes and shares -- and in who is doing the liking and sharing -- as I have described in detail. By doing this multiple levels deep, augmentation algorithms can distill the wisdom of the crowd in a way that identifies and relies most heavily on the smartest of the crowd. Just as Google out-scaled Yahoo’s manually ranked search, this kind of augmented intelligence promises to enable services that are motivated to serve human objectives to do so far more scalably than armies of human curators. (That is not to exclude human curation, just as Web search now also draws on the human curation of Wikipedia.)
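As a rough sketch of what mining those judgments “multiple levels deep” could look like: a post's quality score depends on the reputation of the users who endorsed it, and each user's reputation is in turn earned from the quality of what they choose to share. Everything here (data shapes, weights, the normalization) is a hypothetical illustration of the approach, not a specification of any platform's system.

```python
# Toy sketch of reputation-weighted crowd scoring: a post's quality reflects WHO
# endorsed it, and a user's reputation reflects the quality of what they share.
# Hypothetical data shapes and weights -- an illustration of the approach only.

def crowd_scores(endorsements, shared_by, rounds=20):
    """
    endorsements: {post: [users who liked/shared it]}
    shared_by:    {user: [posts that user shared]}
    """
    users = set(shared_by) | {u for voters in endorsements.values() for u in voters}
    posts = set(endorsements) | {p for items in shared_by.values() for p in items}
    reputation = {u: 1.0 for u in users}
    quality = {p: 1.0 for p in posts}
    for _ in range(rounds):
        # A post's quality is the average reputation of its endorsers.
        for p in posts:
            voters = endorsements.get(p, [])
            quality[p] = sum(reputation[u] for u in voters) / len(voters) if voters else 0.0
        # A user's reputation is the average quality of the posts they chose to share.
        for u in users:
            shared = shared_by.get(u, [])
            reputation[u] = sum(quality[p] for p in shared) / len(shared) if shared else 0.0
        # Keep scores comparable across rounds.
        top = max(reputation.values()) if reputation else 1.0
        if top > 0:
            reputation = {u: r / top for u, r in reputation.items()}
    return quality, reputation
```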

Several of the debaters (Ghosh and Srinivasan, Keller) raise the reasonable question of whether filtering service providers would actually emerge. As to the software investment, the core technology for this might be spun out from the platforms as open source, and further developed by analysis infrastructure providers that support multiple independent filtering services. That would do the heavy lifting of data analysis (and compartmentalize sensitive details of user data), on top of which the filtering services need only set the higher levels of the objective functions and weighting parameters that guide the rankings – a far less technically demanding task. Much of the platforms' existing filtering infrastructure could become the nucleus of one or more separate filtering infrastructure and service businesses. Existing publishers and other mediating institutions might welcome the opportunity to reestablish and extend their brands into this new infrastructure. The Bell System breakup provides a model for how a critical utility infrastructure business can be deeply rearchitected, unbundled, and opened to competition as overseen  by expert regulators, all without interruption of service, and with productive reassignment of staff. Not easy, but doable.
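To make that proposed division of labor concrete, here is a hedged sketch of how thin the filtering-service layer could be: the analysis infrastructure provider does the heavy data mining and exposes per-item signals, while a filtering service merely declares the weights of its objective function. The signal names and interface are invented for illustration.

```python
# Sketch of the proposed division of labor (all names hypothetical): the analysis
# infrastructure computes per-item signals from platform data; a filtering service
# only declares how those signals are weighted into a ranking.

from dataclasses import dataclass

@dataclass
class ItemSignals:
    item_id: str
    relevance: float          # computed upstream by the infrastructure provider
    source_reputation: float
    crowd_quality: float
    engagement_bait: float    # higher means more clickbait-like

class FilteringService:
    """A filtering service reduces to an objective function over shared signals."""

    def __init__(self, name, weights):
        self.name = name
        self.weights = weights    # e.g. {"crowd_quality": 2.0, "engagement_bait": 3.0}

    def score(self, s: ItemSignals) -> float:
        return (self.weights.get("relevance", 0.0) * s.relevance
                + self.weights.get("source_reputation", 0.0) * s.source_reputation
                + self.weights.get("crowd_quality", 0.0) * s.crowd_quality
                - self.weights.get("engagement_bait", 0.0) * s.engagement_bait)

def ranked_feed(items, service):
    return sorted(items, key=service.score, reverse=True)
```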

Formative ideas on impression ranking filters versus expression blocking filters

Triggered by Fukuyama’s comments about his group’s recent thinking about takedowns, I wonder if there may be a need to differentiate two categories of filtering services that would be managed and applied very differently. This relates to the difference between moderation/blocking/censorship of expression and curation/ranking of impression.

Some speak of moderation in ways that make me wonder whether they mean exclusion of illegal content (perhaps extended to some similar but slightly broader censorship of expression), or are just using the term loosely to also cover the more nuanced issue of curation of impression.

My focus has been primarily on filters that do ranking for users, providing curation services that support their freedom of impression. Illegal content can properly be blocked (or later taken down) to be inaccessible to all users, but the task of curation filtering is a discretionary ranking of items for quality, value, and relevance to each user. That should be controlled by users and the curation services they choose to act as their agents, as Fukuyama and I propose. 

The essential difference is that blocking filters can eliminate items from all feeds in their purview, while curation filters can only downrank undesirable items -- the effect of those downrankings would be contingent on how other items are ranked and whether uprankings from other active filters counterbalance those downrankings.
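A minimal sketch of that difference, with hypothetical filter functions: a blocking filter removes an item from every feed in its purview, while each curation filter contributes only an up or down adjustment whose net effect depends on whatever other filters the user has active.

```python
# Sketch of the distinction (hypothetical filters): blocking removes an item for
# everyone; curation filters only contribute up/down adjustments whose net effect
# depends on the other filters a user has chosen to apply.

def curated_feed(items, blocking_filters, curation_filters):
    feed = []
    for item in items:
        if any(blocks(item) for blocks in blocking_filters):   # e.g. illegal content: gone for all users
            continue
        # A downrank from one filter matters only if other filters don't counterbalance it.
        net = sum(adjust(item) for adjust in curation_filters)
        feed.append((net, item))
    return [item for net, item in sorted(feed, key=lambda pair: pair[0], reverse=True)]
```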

But even for blocking there may be a need for a degree of configurability and localization (and possibly user control). This could enable a further desirable shift from “platform law” to community law. Some combination of alternative blocking filters might be applied to offer more nuance in what is blocked or taken down. This might apply voting logic, such that content is blocked when some threshold of votes from multiple filters from multiple sources agree. It might provide for a user or community control layer, much as parents, schools, and businesses choose from a market in Internet blocking filters, and might permit other mediators to offer such filters to those who might choose them.
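A tiny sketch of that voting idea, in which the set of blocking filters consulted and the agreement threshold are chosen by a community or jurisdiction rather than by a single platform; the parameters are placeholders.

```python
# Sketch of threshold voting for blocking: content is removed only when enough
# independently chosen blocking filters agree. The filters and the threshold are
# placeholders that a community or jurisdiction -- not a platform -- would set.

def should_block(item, blocking_filters, threshold=2):
    votes = sum(1 for votes_to_block in blocking_filters if votes_to_block(item))
    return votes >= threshold

# Example: a school community might combine a legal-compliance filter, a district
# policy filter, and a parental-control filter, requiring any two to agree.
```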

The digitization of our epistemic social networks will force us to think clearly about how technology should support the processes of mediation that have evolved over centuries -- but will now be built into code – to continue to evolve in ways that are democratically decided. That is a task we should have begun a decade ago.

A Digital Constitution of Discourse

Whether free expression retains primacy is central to much of the debate in the Journal of Democracy. Fukuyama and Keller view that primacy as the reason the unbundling of filtering power is central to limiting platform abuses. An excellent refresher on why free expression is the pillar of democracy and the broader search for truth is Jonathan Rauch’s The Constitution of Knowledge: A Defense of Truth. Rauch digs deep into the social and political processes of mediating consent and how an institutional ecosystem facilitates that. (I previously posted about how his book resonates with my work over the past two decades and suggests some directions for future development -- this post is a further step.)

Others (including Renee DiResta, Niall Ferguson, Matthew Hutson, and Marina Gorbis) have also elucidated how what people accept as true and valuable is not just a matter of individuals, nor of people talking to each other in flat networks, but is mediated through communities, institutions, and authorities. They review how technology affected that, breaking from the monolithic and “infallible” authority of the church when Gutenberg democratized the written word. That led to horribly disruptive wars of religion, and a rebuilding through many stages of evolution of a growing ecology of publishers, universities, professional societies, journalists, and mass media.

Cory Doctorow observed that “the term ‘ecology’ marked a turning point in environmental activism” and suggests “we are on the verge of a new ‘ecology’ moment dedicated to combating tech monopolies.” He speaks of “a pluralism movement or a self-determination movement.” I suggest this is literally an epistemic ecology. We had such an ecology but are letting tech move fast and break it. It is time to think broadly about how to rebuild and modernize this epistemic ecology.

Faris and Donovan criticize the unbundling of filters as “fragmentation by design” with concern that it would “work against the notion of a unified public sphere.” But fragmentation can be a virtue. Democracies only thrive when the unified public sphere tolerates a healthy diversity of opinions, including some that may seem foolish or odious. Infrastructures gain robustness from diversity, and technology thrives on functional modularity. While network effects push technology toward scale, that scale can be modularized and distributed -- much as the unbundling of filtering would do. It has long been established that functional modularity is essential to making large-scale systems practical, interoperable, adaptable, and extensible.

Toward a digital epistemic ecology

Now we face a first generation of dominant social media platforms that disintermediated our rich ecology of mediators with no replacement. Instead, the platforms channel and amplify the random utterances of the mob – whether wisdom, drivel, or toxins -- into newsfeeds that they control and curate as they see fit. Now their motivation is to sell ads, with little concern for the truth or values they amplify. That is already disastrous, but it could turn much worse. In the future their motivation may be coopted to actively control our minds in support of some social or political agenda.

This ecological perspective leads to a vision of what to regulate for, not just against -- and makes an even stronger case for unbundling construction of our individual newsfeeds from platform control to user control.

  • Do we want top-down control by government, platforms, or independent institutions (including oversight boards) that we hope will be benevolent? That leads eventually to authoritarianism. “Platform law” is “overbroad and underinclusive,” even when done with diligence and good intentions.
  • Do we fully decentralize all social networks, to rely on direct democracy (or small bottom-up collectives)? That risks mob rule, the madness of crowds, and anarchy.
  • Can a hybrid distributed solution balance both top-down and bottom-up power with an emergent dynamic of checks and balances? Can technology help us augment the wisdom of crowds rather than the madness? That seems the best hope.

The ecological depth of such a structure has not yet been appreciated. It is not simply to hope for some new kind of curatorial beast that may or may not materialize. Rather, it is to provide the beginnings of an infrastructure that the communities and institutions we already have can build on -- to reestablish their crucial role in mediating our discourse. They can be complemented and energized by whatever new kinds of communities and institutions may emerge as we learn to apply these powerful new tools. That requires tools not just for curation, but for integrating with the other aspects of these institutions and their broader missions.

Now our communities and institutions are treated little differently from any individual user of social media, which literally disintermediates them from their role as mediators. The platforms have arrogated to themselves alone the power to mediate what each of us sees from the mass of information flowing through social media. Unbundling filtering services to be independently operated would provide a ready foundation for our existing communities and institutions to restore their mediating role -- and create fertile ground for the emergence of new ones. The critical task ahead is to clarify how filtering services become a foundation for curatorial mediators to regain their decentralized roles in the digital realm. How will they curate not only their own content, but that of others? What kinds of institutions will have what curatorial powers?

Conclusion – can truth, value and democracy survive?

A Facebook engineer lamented in 2011 that “The best minds of my generation are thinking about how to make people click ads.”  After ten years of that, isn’t it time to get our best minds thinking about empowering us in whatever ways fulfill us? Some of those minds should be technologists, some not. Keller’s taxonomy of problem areas is a good place to start, not to end.

There is some truth to the counter-Brandeisian view that more speech is not a remedy for bad speech -- just as there is some truth to the view that more intolerance is not a remedy for intolerance. Democracies cannot eliminate either. All they have is the unsatisfyingly incomplete remedy of healthy dialog and mediation, supported by good governance. Churchill said, “democracy is the worst form of government – except for all the others that have been tried.” Markets and technology have at times stressed democracy when not guided by democratic government, but they can dramatically enhance it when properly guided.

An assemblage of filtering services is the start of a digital support infrastructure for that. Some filtering services may gain institutional authority, and some may be given little authority, but we the people must have ongoing say in that. This will lead to a new layer of social mediation functionality that can become a foundation for the ecology of digital democracy.

Which future do we want? One of platform law acting on its own, or as the servant of an authoritarian government, to control the seen reality and passions of an unstructured mob? Or a digitally augmented upgrade of the rich ecology of mediators of consent on truth and value that -- despite occasional lapses -- has given us a vibrant and robust marketplace of ideas?

Thursday, August 05, 2021

Unbundling Social Media Filtering Services – Updates on Debate and Development

This is an informal work in progress updating and expanding on my two articles in Tech Policy Press (8/9/21) that relate to an important debate in the Journal of Democracy on The Future of Platform Power.

The focus is on how to manage social media, and specifically on the similar proposals by a number of prominent experts to unbundle the filtering services that curate the newsfeeds and recommendations served to users. The updates are best understood after reading those articles.

Also relevant to this debate:

This visualization from my 4/22/21 Tech Policy Press article may also be helpful:



RUNNING UPDATES (most recent first):

  • [9/10/21]
    "Context collapse" is a critical factor in creating conflict in social media, as explained in The day context came back to Twitter (9/8/21), by Casey Newton. As he explains, Facebook Groups and the new Twitter Communities are a way to address this problem of "taking multiple audiences with different norms, standards, and levels of knowledge, and herding them all into a single digital space." Filters are a complementary tool for seeking context, especially when user controlled and applied with intentionality. Social media should offer both.

  • [8/25/21]
    The importance of a cross-platform view of the social media ecosystem is highlighted in one of the articles briefly reviewed in Tech Policy Press this week. The article by Zeve Sanderson et al. on off-platform spread of Twitter-flagged tweets (8/24/21) argues for “ecosystem-level solutions,” including such options as 1) multi-platform expansion of the Oversight Board, 2) unbundling of filters/recommenders as discussed here (citing the Francis Fukuyama et al. middleware proposal), and 3) “standards for value-driven algorithmic design” (as outlined in the following paper by Helberger).

    A conceptual framework On the Democratic Role of News Recommenders by Natali Helberger (6/12/19, cited by Sanderson) provides a very thought-provoking perspective on how we might want social media to serve society. This is the kind of thinking about what to regulate for, not just against, that I have suggested is badly needed. It suggests four very different (but in some ways complementary) sets of objectives to design for. This perspective -- especially the liberal and deliberative models – can be read to make a strong case for unbundling of filters/recommenders in a way that offers user choice (plus perhaps some default or even required ones as well).

    I hope to do a future piece expanding on the Helberger and Goldman (cited in my 8/15 update below) frameworks and how they combine with some of the ideas in my Looking Ahead post about the need to rebuild the social mediation ecosystems that we built over centuries -- and that digital social media are now abruptly disintermediating with no replacement.
  • [8/17/21]
    Progress on Twitter's @BlueSky unbundling initiative: Jay Graber announces "I’ll be leading @bluesky, an initiative started by @Twitter to decentralize social media. Follow updates on Twitter and at blueskyweb.org" (8/16). Mike Masnick comments: "there has been a lot going on behind the scenes, and now they've announced that Jay will be leading the project, which is FANTASTIC news." Masnick expands: "There are, of course, many, many challenges to making this a reality. And there remains a high likelihood of failure. But one of the key opportunities for making a protocol future a reality -- short of some sort of major catastrophe -- is for a large enough player in the space to embrace the concept and bring millions of users with them. Twitter can do that. And Jay is exactly the right person to both present the vision and to lead the team to make it a reality. ...This really is an amazing opportunity to shape the future and move us towards a more open web, rather than one controlled by a few dominant companies."

    Helpful perspectives on improving and diversifying filtering services are in some articles by Jonathan Stray, Designing Recommender Systems to Depolarize (7/11/21) and Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals (1/21/21). One promising conflict transformation ranking strategy that has been neglected is “surprising validators,” suggested by Cass Sunstein, as I expanded on in 2012 (and since). All of  these deserve research and testing -- and an open market in filtering services is the best way to make that happen.

  • [8/15/21]
    Additional rationales for demanding diversity in filtering services and understanding some of the forms this may take are nicely surveyed in Content Moderation Remedies by Eric Goldman. He suggests "...moving past the binary remove-or-not remedy framework that dominates the current discourse about content moderation," and provides an extensive taxonomy of remedy options. He explains how expanded non-removal remedies can provide a possible workaround to the dilemmas of remedies that are not proportional to different levels of harm. Diverse filtering services can not only have different content selection criteria, but different strategies for discouraging abuse. And, as he points out, "user-controlled filters have a venerable tradition in online spaces." (Thanks to Daphne Keller for suggesting this article to me as relevant to my Looking Ahead piece, and for her other helpful comments.)
  • [8/9/21]
    My review and synthesis of the Journal of Democracy debate mentioned in my 7/21 update are now published in Tech Policy Press.
    + I expand on those two articles in Tech Policy Press in The Need to Unbundle Social Media - Looking Ahead: We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works – truth and value are messy. Our task is to leverage tech to help us manage that messiness to be ever more productive.

Older updates -- carried over from the page of updates to my 4/22/21 Tech Policy Press article

  • [7/21/21]
    A very interesting five-article debate on these unbundling/middleware proposals, all headed The Future of Platform Power, is in the Journal of Democracy, responding to Fukuyama's April article there. Fukuyama responds to the other four commentaries (which include a reference to my Tech Policy Press article). The one by Daphne Keller, consistent with her items noted just below, is generally supportive of this proposal, while providing a very constructive critique that identifies four important concerns. As I tweeted in response, "“The best minds of my generation are thinking about how to make people click ads” – get our best minds to think about empowering us in whatever ways fulfill us! @daphnehk problem list is a good place to start, not to end." I plan to post further comments on this debate soon [now linked above, 8/9/21].

  • [6/15/21]
    Very insightful survey analysis of First Amendment issues relating to proposed measures for limiting harmful content on social media -- and how most run into serious challenges -- in Amplification and Its Discontents, by Daphne Keller (a former Google Associate General Counsel, now at Stanford, 6/8/21). Wraps up with discussion of proposals for "unbundling" of filtering services: "An undertaking like this would be very, very complicated. It would require lawmakers and technologists to unsnarl many knots.... But unlike many of the First Amendment snarls described above, these ones might actually be possible to untangle." Keller provides a very balanced analysis, but I read this as encouraging support on the legal merits of what I have proposed: the way to preserve freedom of expression is to protect users' freedom of impression -- not easy, but the only option that can work. Keller's use of the term "unbundling" is also helpful in highlighting how this kind of remedy has precedent in antitrust law.
    Interview with Keller on this article by Justin Hendrix of Tech Policy Press, Hard Problems: Regulating Algorithms & Antitrust Legislation (6/20/21).
    + Added detail on the unbundling issues is in Keller's 9/9/20 article, If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
  • Another perspective on how moderation conflicts with freedom is in On Social Media, American-Style Free Speech Is Dead (Gilad Edelman, Wired 4/27/21), which reports on Evelyn Douek's more international perspective. Key ideas are to question the feasibility of American-style binary free speech absolutism and shift from categorical limits to more proportionality in balancing societal interests. I would counter that the decentralization of filtering to user choice enables proportionality and balance to emerge from the bottom up, where it has a democratic validity as "community law," rather than being imposed from the top down as "platform law." The Internet is all about decentralized control -- why should we sacrifice freedom of speech to a failure of imagination in managing a technology that should enhance freedom? Customized filtering can provide a receiver-specific richness of proportionality that better balances rights of impression with nuanced freedom of expression. Douek rightly argues that we must accept an error rate in moderation -- why not expect a bottom-up, user-driven error rate to be more open and responsive to evolving wisdom and diverse community standards than one applied across the board?
  • [5/18/21]
    Clear insights on the new dynamics of social media - plus new strategies for controlling disinformation with friction, circuit-breakers, and crowdsourced validation in How to Stop Misinformation Before It Gets Shared, by Renee DiResta and Tobias Rose-Stockwell (Wired 3/26/21). Very aligned with my article (but stops short of the contention that democracy cannot depend on the platforms to do what is needed).
  • [5/17/21]
    Important support and suggestions related to Twitter's Bluesky initiative from eleven members of the Harvard Berkman Klein community are in A meta-proposal for Twitter's bluesky project (3/31/21). They are generally aligned with the directions suggested in my article.
  • [4/22/21]
    Another piece by Francis Fukuyama that addresses his Stanford group proposal is in the Journal of Democracy: Making the Internet Safe for Democracy, April 2021.
    (+See 7/21/21 update, above, for follow-ups.)

Tuesday, July 13, 2021

Toward the Digital Constitution of Knowledge [a teaser*]

How to Destroy Truth, the 7/1/21 David Brooks column, offers insight on the problems and opportunities of social media, drawing on Jonathan Rauch’s important new book “The Constitution of Knowledge: A Defense of Truth.” Brooks summarizes Rauch about empirical and propositional knowledge (and how that complements the emotional and moral knowledge that derives from the collective wisdom of shared stories):

…the acquisition of this kind of knowledge is also a collective process. It’s not just a group of people commenting on each other’s internet posts. It’s a network of institutions — universities, courts, publishers, professional societies, media outlets — that have set up an interlocking set of procedures to hunt for error, weigh evidence and determine which propositions pass muster.

My work on the future of networks for human collaboration has been in tune with this and suggests some urgent further directions, as detailed most recently in my Tech Policy Press article, The Internet Beyond Social Media Thought-Robber Barons. Having just read Rauch’s book (with close attention to his chapter on “Disinformation Technology: The Challenge of Digital Media”), I have two initial take-aways that I preview here:

Extending Rauch’s work: I was struck that Rauch might enhance his ideas by drawing on proposals for unbundling aspects of digital media, as I and others (including Jack Dorsey and Francis Fukuyama) have advocated. Rauch’s chapter on media is very resonant, but the final section stopped me short. He seems uncritical in support of Big Tech efforts at quasi-independent outsourcing of controls like the Facebook Oversight Board and fact-checking authorities. I see that as ineffective – and, more importantly, as a fig-leaf on overcentralized authoritarian control of these essential network utilities -- and counter to the more open emergence needed to seek effective consensus on truth.

Extending my work: I have built on similar ideas (notably Renee DiResta’s Mediating Consent) -- but Rauch convinces me to add focus on the role of institutional participants in that process, beyond the emergent bottom-up processes for reliance on such institutions that I have been emphasizing as the driving force.

As Rauch explains, the “constitution of knowledge” is a collective process based on rules and governance principles. As he says, the dominant social media companies have hijacked this process to serve their own business objectives of selling ads, rather than the objectives of their users and society to support the constitution of knowledge. It is now clear to everyone whose salary does not depend on the selling of ads that these two objectives are incompatible, and we are suffering the consequences.

But, to the extent it is the platforms that address this, directly or via surrogates, it devolves into undemocratic “platform law,” which as  Molly Land explains, lacks legitimacy and is “overbroad and underinclusive.” Rauch makes a similar point that the Web has become a “company town.”

To address that we need to unbundle key functions of the social network platforms. As all discourse moves to the digital domain, there is a core function of posting and access that seeks to be universal and thus very subject to network effects that favor a degree of concentration. But the function that is essential to the constitution of knowledge is the selection of what each of us sees in our newsfeeds. In a free society that must be largely a matter of individual choice. That can be decentralized and has limited network effects.

The solution this leads to is a functional unbundling: to create an open market in filtering services that each of us can select from and mix and match to customize a feed of what each of us wishes to view from the platform at any given time. That might be voluntary (if Dorsey has his way) or mandated (if Zuckerberg continues to overreach).

My article and the works of other proponents of such an unbundling explain how these feed filtering services can be offered by service providers that may include the kinds of traditional institutional gatekeepers Rauch refers to. We have argued that such decentralization breaks up the “platform law” we are stumbling into. Instead, it returns individual agency to our open marketplace of ideas, supporting it with an open marketplace of filters. We, not the platform, should decide when we want filters from a given source, and with what weight. Those sources can include all the kinds of institutions based on professionalism and institutionalism that Rauch refers to, but we should generally be the ones to decide. Rauch quotes Frederick Douglass on “the rights of the hearer.” Democracy and truth require that we free our feeds to protect “the rights of the hearer as well as those of the speaker.”

As one of Rauch’s chapter subtitles says, “Outsourcing reality to a social network is humankind’s greatest innovation.” Translating that to the digital domain, the core idea is that multiple filtering services can uprank and downrank items for possible inclusion in our feeds. Each filtering service should be able to assign weights to those up or down rankings, and users should be able to use knobs or sliders to give higher or lower weightings to each filtering service they want applied. Rauch's emphasis on institutions suggests that more official and authoritative gatekeepers might have special overweightings or other privileged ways to adjust what we see and how we see it (such as to provide warnings or introduce friction into viral sharing).
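As a hedged illustration of that control surface: each user's slider settings become weights on the up/down rankings of the filtering services they have enabled, and a designated authoritative gatekeeper could carry an extra multiplier (warnings and friction would be handled separately). All names and numbers are hypothetical.

```python
# Sketch of user-weighted combination of filtering services (hypothetical names and
# numbers). Each service returns an up/down adjustment per item; the user's sliders
# set the weights; a designated gatekeeper service can carry an extra multiplier.

def combined_score(service_adjustments, slider_weights, authority_boost=None):
    """
    service_adjustments: {service_name: up/down adjustment for this item}
    slider_weights:      {service_name: user's slider setting, e.g. 0.0 to 1.0}
    authority_boost:     optional {service_name: extra multiplier for gatekeepers}
    """
    total = 0.0
    for name, adjustment in service_adjustments.items():
        weight = slider_weights.get(name, 0.0)       # services the user hasn't enabled count for nothing
        if authority_boost and name in authority_boost:
            weight *= authority_boost[name]          # institutional overweighting
        total += weight * adjustment
    return total

# Example: a user weights a fact-checking service heavily and a partisan one lightly.
# combined_score({"factcheckers": -0.8, "my_community": +0.5},
#                {"factcheckers": 1.0, "my_community": 0.4},
#                authority_boost={"factcheckers": 1.5})
```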

My own work on filtering algorithms designs for truth through a man-machine partnership based on distilling human judgments of reputation. This generalizes how Google’s PageRank distills the human judgments of “Webmasters” as encoded into the emergent linking of the Web – adapting that to the likes and shares and comments of social media. Rauch seems to suggest a similar direction: “giving users an epistemic credit score.” (Social media already track users’ reputational credit scores, but they score for engagement, not for truth.) As Rauch observes, there can be "no comprehensive solutions to the disinformation threat," but this massive crowdsourcing of judgments offers what can become a robust "cognitive immune system."

I would be very interested to learn how Rauch might build on those proposals for 1) open filtering and 2) reputation-based truth-seeking algorithms -- to develop his vision of the constitution of knowledge into a more dynamically adaptive and emergent future – one that moves toward a more flexible structure of “community law.” (Similar issues of platform versus community law also apply to the softer emotional and moral knowledge that Brooks refers to.)

Pursuant to that, I plan to revisit ideas from my early work on how this digitally augmented constitution of knowledge can effectively combine 1) the open emergence of preferred filtering services from individual users with 2) the contingent establishment of more official and authoritative gatekeepers. My original 2003 design document (paragraph 0288) outlined a vision that extended this kind of decentralized selection of filtering services in a way that I hope Rauch might relate to:

Checks and balances could provide for multiple bodies with distinct responsibilities, such as executive, legislative, and judicial, and could draw on representatives to oversee critical decisions and methods. Such representatives may be elected by democratic methods, or through reputation-based methods, or some combination. Expert panels could also have key roles, again, possibly given limited charters and oversight by elected representatives to avoid abuse by a technocracy. External communities and governmental bodies may also have oversight roles in order to ensure broadly based input and sensitivity to the overall welfare. The use of multiple confederated and cooperative marketplaces, as described above, may also provide a level of checks and balances as well.

It seems most of our thinking about social media is currently reactive and rooted in the present, looking only to the very near future. But we are already far down a wrong path and need a deep rethinking and reformation. We need a new driving vision of how our increasingly digital society can reposition itself to deal with the constitution of knowledge for coming decades. That future must be flexible and emergent, able to deal with unimaginable scale, speed, and scope. If we do not set a course for that future now, we may well find ourselves in a dark age that will be increasingly hard to escape. That window may already be closing.

---

*I call this "a teaser" because it is a preliminary draft that I hope to refine and expand based on further thought and feedback.

Sunday, June 13, 2021

Beyond Deplatforming: The Next Evolution of Social Media May Make Banning Individual Accounts Less Necessary

As published in Tech Policy Press...

Since his accounts on major platforms were suspended following the violent insurrection at the US Capitol on January 6, Donald Trump has been less of a presence on social media. But a recent New York Times analysis finds that while Trump “lost direct access to his most powerful megaphones,” his statements can still achieve vast reach on Facebook, Instagram and Twitter. The Times found that “11 of his 89 statements after the ban attracted as many likes or shares as the median post before the ban, if not more. How does that happen? …after the ban, other popular social media accounts often picked up his messages and posted them themselves.”

Understanding how that happens sheds light on the growing controversy over whether “deplatforming” is effective in moderating extremism, or just temporarily drives it out of view, to intensify and potentially cause even more harm. It also illuminates the more fundamental question: is there a better way to leverage how social networks work to manage harmful speech in a way that is less draconian and more supportive of free expression? Should we really continue down this road toward “platform law” — restraints on speech applied by private companies (even if under “oversight” by others) — when it is inevitably “both overbroad and underinclusive” — especially as these companies provide increasingly essential services?

Considering how these networks work reveals that the common “megaphone” analogy that underlies rhetoric around deplatforming is misleading. Social media do not primarily enable a single speaker to achieve mass reach, as broadcast media do. Rather, reach grows as messages propagate through social networks, with information spreading person to person, account to account, more like rumors. Trump’s accounts are anomalous, given his many tens of millions of direct followers, so his personal network does give him something of a megaphone. But the Times article shows that, even for him, much of his reach is by indirect propagation — dependent on likes and shares by others. It is striking that even after being banned, comments he made elsewhere were often posted by his supporters (or journalists, and indeed his opponents), and then liked and further shared by other users hundreds of thousands of times.

The lesson is that we need to think of social networks as networks and manage them that way. Banning a speaker from the network does not fully stop the flow of harmful messages, because they come from many users and are reinforced by other users as they flow through the network. The Times report explains that Trump’s lies about the election were reduced far more substantially than his other messages not simply because Trump was banned, but because messages from anyone promoting false election fraud claims are now specifically moderated by the platforms. That approach can work to a degree, for specific predefined categories of message, but it is not readily applied more generally. There are technical and operational challenges in executing such moderation at scale, and the same concerns about “platform law” apply. 

Social media networks should evolve to apply more nuanced intervention at the network level. There is growing recognition of the need to enable a deeper level of individual control on how messages are filtered into each user’s newsfeed, and whether harmful speakers and messages are downranked based on feedback from the crowd to reduce propagation. Such controls would offer a flexible, scalable, and adaptive cognitive immune system to limit harmful viral cascades. That can limit not only how messages propagate, but how harmful users and groups are recommended to other users — and can moderate which speech is impressed upon users without requiring a binary shutdown of expression.
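Purely as an illustration of how such a network-level immune response might work: instead of banning a speaker, the propagation of a specific item is dampened once crowd feedback accumulates past a threshold, adding friction rather than a binary shutdown. The signals and thresholds below are invented for the sketch.

```python
# Illustrative sketch of a network-level "cognitive immune" response: rather than
# banning a speaker, further propagation of a specific item is dampened as crowd
# feedback accumulates. All signal names and thresholds are invented for the sketch.

def propagation_factor(share_velocity, crowd_flag_rate, flag_threshold=0.02, velocity_cap=1000.0):
    """
    share_velocity:  shares per hour for this item
    crowd_flag_rate: fraction of viewers whose chosen filters downranked or flagged it
    Returns a multiplier (0..1) applied to how widely further shares are distributed.
    """
    factor = 1.0
    if crowd_flag_rate > flag_threshold:
        factor *= flag_threshold / crowd_flag_rate     # damp in proportion to crowd concern
    if share_velocity > velocity_cap:
        factor *= velocity_cap / share_velocity        # add friction to runaway cascades
    return max(0.0, min(1.0, factor))
```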

Some experts propose that the best way to manage this at scale is to spin out the choice of filtering rules that work with the platforms to an open market of filtering services that users can choose from. The decentralization of this key aspect of current social media networks away from the dominant platforms, and the potential diversity of choices it may create for users, might prevent a speaker widely recognized to speak lies and hate from gaining many tens of millions of followers in the first place — and would break up the harmful feedback loops that reinforce the propagation of their dangerous messages. Perhaps such a system could have substantially prevented or reduced the propagation of the Big Lie, and therefore abrogated the necessity of deplatforming a President. Instead, it would apply more nuanced downstream control — a form of crowdsourced moderation emergent from the individual choices of users and communities of users. 

Under the status quo, we are left with the “platform law” policies set by a few dominant private companies, leaving no one satisfied. Instead, democracy would be far better served by digitally enhanced processes to apply more nuanced forms of  “community law,” as crowdsourced from each social network user and community as they interact with their networks. 

Wednesday, May 05, 2021

Ass-Backwards: The Facebook Oversight Board, Trump, and Freedom

[The Economist]
The Facebook Oversight Board decision on Trump “pleases no one” because we have it backwards. Social media have become a universal platform: We should individually control what we choose to hear, not globally control who can speak. 

The Internet is not like a soapbox with limited reach (if you don’t like the speech, you can walk away). Newsfeeds come to you all or not at all — except as filtered. We need to control our own filters! That is how we “walk away” as we desire. 

We can’t rely on control at the source. No one should decide for us what gets impressed upon our attention (except as we empower them to serve as our agent). The only solution is for each of us to control how we individually filter. We need to break out an open market in user-selectable filtering services that we each can choose from. Not perfect, but that is the nature of a free society. More at Tech Policy Press: The Internet Beyond Social Media Thought-Robber Barons.

Who should decide what you listen to? Not the speaker, not the government, not the platform, not some “oversight” board. Social Media cannot offer freedom of EXpression unless we each retain freedom of IMpression. We need individual control/delegation of what we see. #FreeOurFeeds! 

Tuesday, April 20, 2021

Tech Policy Press: The Internet Beyond Social Media Thought-Robber Barons

==============================================================
SEE IMPORTANT UPDATES BELOW, plus related items & background notes 
==============================================================

My new article, "The Internet Beyond Social Media Thought-Robber Barons," was published in Tech Policy Press on 4/22/21
  • It is now apparent that social media is dangerous for democracy, but few have recognized a simple twist that can put us back on track.  
  • A surgical restructuring -- an "unbundling" -- to an open market strategy that shifts control over our feeds to the users they serve -- is the only practical way to limit the harms and enable the full benefits of social media 
(This is an extensively updated and improved version of the discussion draft first posted on this blog in February, now integrating more proposals, addressing common objections, and drawing on feedback from a number of experts in the field -- and the very helpful editing of Justin Hendrix.)

I summarize and contrast these proposals:

  • Most prominently in Foreign Affairs and the Wall Street Journal by Francis Fukuyama, Barak Richman, Ashish Goel, and others in the report of the Stanford Working Group on Platform Scale. (Their use of the technical term "middleware" for this approach has been picked up by some other commentators.)
  • Independently by Stephen Wolfram, Mike Masnick, and me.
  • And with what might become important real-world traction in the exploratory Bluesky initiative by Jack Dorsey at Twitter.

The article covers new ground in presenting a concrete vision of what an open market in filtering services might enable -- how this can bring individual and social purpose back to social media, to not only protect, but systematically enhance democracy, and how that can augment human wisdom and social interaction more broadly. That vision should be of interest to thoughtful citizens as well as policy professionals.


I welcome your feedback and support for these proposals, and can be reached at intertwingled [at] teleshuttle [dot] com.

--------------------------

UPDATES:

  • [7/21/21]
    A very interesting five-article debate on these unbundling/middleware proposals, all headed The Future of Platform Power, is in the Journal of Democracy, responding to Fukuyama's April article there. Fukuyama responds to the other four commentaries (which include a reference to my Tech Policy Press article). The one by Daphne Keller, consistent with her items noted just below, is generally supportive of this proposal, while providing a very constructive critique that identifies four important concerns. As I tweeted in response, "“The best minds of my generation are thinking about how to make people click ads” – get our best minds to think about empowering us in whatever ways fulfill us! @daphnehk problem list is a good place to start, not to end." I plan to post further comments on this debate soon.

  • [6/15/21]
    Very insightful survey analysis of First Amendment issues relating to proposed measures for limiting harmful content on social media -- and how most run into serious challenges -- in Amplification and Its Discontents, by Daphne Keller (a former Google Associate General Counsel, now at Stanford, 6/8/21). Wraps up with discussion of proposals for "unbundling" of filtering services: "An undertaking like this would be very, very complicated. It would require lawmakers and technologists to unsnarl many knots.... But unlike many of the First Amendment snarls described above, these ones might actually be possible to untangle." Keller provides a very balanced analysis, but I read this as encouraging support on the legal merits of what I have proposed: the way to preserve freedom of expression is to protect users' freedom of impression -- not easy, but the only option that can work. Keller's use of the term "unbundling" is also helpful in highlighting how this kind of remedy has precedent in antitrust law.
    + Interview with Keller on this article by Justin Hendrix of Tech Policy Press, Hard Problems: Regulating Algorithms & Antitrust Legislation (6/20/21).
    + Added detail on the unbundling issues is in Keller's 9/9/20 article, If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
  • Another perspective on how moderation conflicts with freedom is in On Social Media, American-Style Free Speech Is Dead (Gilad Edelman, Wired 4/27/21), which reports on Evelyn Douek's more international perspective. Key ideas are to question the feasibility of American-style binary free speech absolutism and shift from categorical limits to more proportionality in balancing societal interests. I would counter that the decentralization of filtering to user choice enables proportionality and balance to emerge from the bottom up, where it has a democratic validity as "community law," rather than being imposed from the top down as "platform law." The Internet is all about decentralized control -- why should we sacrifice freedom of speech to a failure of imagination in managing a technology that should enhance freedom? Customized filtering can provide a receiver-specific richness of proportionality that better balances rights of impression with nuanced freedom of expression. Douek rightly argues that we must accept an error rate in moderation -- why not expect a bottom-up, user-driven error rate to be more open and responsive to evolving wisdom and diverse community standards than one applied across the board?
  • [5/18/21]
    Clear insights on the new dynamics of social media - plus new strategies for controlling disinformation with friction, circuit-breakers, and crowdsourced validation in How to Stop Misinformation Before It Gets Shared, by Renee DiResta and Tobias Rose-Stockwell (Wired 3/26/21). Very aligned with my article (but stops short of the contention that democracy cannot depend on the platforms to do what is needed).
  • [5/17/21]
    Important support and suggestions related to Twitter's Bluesky initiative from eleven members of the Harvard Berkman Klein community are in A meta-proposal for Twitter's bluesky project (3/31/21). They are generally aligned with the directions suggested in my article.
  • [4/22/21]
    Another piece by Francis Fukuyama that addresses his Stanford group proposal is in the Journal of Democracy: Making the Internet Safe for Democracy, April 2021.
    (+See 7/21/21 update, above, for follow-ups.)

--------------------------

Related items by me:  see the Selected Items tab.

--------------------------

Personal note: The roots of these ideas

This background might be useful to make it more clear where I am coming from...

These ideas have been brewing throughout my long career (bio), with a burst of activity very early on, then around 2002-3, and increasingly in the past decade. They are part of a rich network that intertwingles with my better-known work on FairPay and several of my patented inventions. Some background on these roots may be helpful.

I was first enthused by the potential of what we now call social media around 1970, when I had seen early hypertext systems (precursors of the Web) by Ted Nelson and Doug Engelbart, and then studied systems for collaborative “social” decision support by Murray Turoff and others, rolling into an independent-study graduate course on collaborative systems. All of this oriented me to the spirit of using computers for augmenting human intelligence (including social intelligence) -- not replacing it with artificial intelligence.

My first proposals for an open market in media filtering were inspired by the financial industry parallels. An open market in filters for news and market data analytics was emerging when I worked for Standard & Poor's and Dow Jones around 1990. Filters and analytics would monitor raw news feeds and market data (price ticker) feeds, select, and analyze that raw information using algorithms and parameters chosen by the user, and work within any of a variety of trading platforms.

I drew on all of that when designing a social decision support system for large-scale open innovation and collaborative development of early-stage ideas around 2002. That design featured an open market for reputation-based ranking algorithms essentially as proposed here. Exposure to Google PageRank, which distilled human judgment and reputation for ranking Web search results, inspired me to broaden Google's design to distill the wisdom of the crowd as reflected in social media interactions, using a nuanced multi-level reputation system.

By 2012 it was becoming apparent that the Internet was seriously disrupting the marketplace of ideas, and Cass Sunstein’s observations about surprising validators inspired me to adapt my methods to social media. I became active in groups that were addressing those concerns and more fully recast my earlier designs to focus on social media, and to address architectural and regulatory strategies (here and then here). My other work on innovative business models for digital services also gave me a unique perspective on better alternatives to the perverse incentives of the ad model.

The Fukuyama article late last year was gratifying validation on the need for an open, competitive market for feed filtering services driven by users, and inspired me to refocus on that as the most direct point of leverage for structural remediation, as expanded on here.

My thanks to the many researchers and activists in this field I have had the privilege of interacting with and who have provided invaluable stimulation, feedback, suggestions, and support. And special thanks to Justin Hendrix for his very helpful editing, and to those who reviewed and commented on earlier versions of this article: Renee DiResta, Yael Eisenstat, Gene Kimmelman, Ellen Goodman, Molly Land, and Sam Lessin.


Thursday, April 01, 2021

But Who Should Control the Algorithm, Nick Clegg? Not Facebook ...Us!

(Image adapted from cited Nick Clegg article)
Facebook's latest attempt to justify their stance on disinformation and other harms, and their plans to make minor improvements, actually points to the reason those improvements are not nearly enough -- and can never be. They need to make far more radical moves to free our feeds, as I have proposed previously.

Facebook’s VP of Global Affairs, Nick Clegg, put out an article yesterday that provides a telling counterpoint to those proposals. You and the Algorithm: It Takes Two to Tango defends Facebook in most respects, but accepts the view that users need more transparency and control:

You should be able to better understand how the ranking algorithms work and why they make particular decisions, and you should have more control over the content that is shown to you. You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes — to alter your personal algorithm…

He goes on to describe laudable changes Facebook has just made, with further moves in that direction intended. 

But the question is: how can this be more than Band-Aids covering the deeper problem? Seeking to put the onus on us -- “We need to look at ourselves in the mirror…” -- he goes on (emphasis added):

…These are profound questions — and ones that shouldn’t be left to technology companies to answer on their own…Promoting individual agency is the easy bit. Identifying content which is harmful and keeping it off the internet is challenging, but doable. But agreeing on what constitutes the collective good is very hard indeed.

Exactly the point of these proposals! No private company can be permitted to attempt that, even under the most careful regulation - especially in a democracy. That is especially true for a dominant social media service. Further, slow-moving regulation cannot be effective in an age of dynamic change. We need a free market in filters from a diversity of providers - for users to choose from. Twitter seems to understand that; it seems clear that Facebook does not.

Don't try to tango with a dancing bear

As I explain in my proposal:

Social media oligarchs have seduced us -- giving us bicycles for the mind that they have spent years and billions engineering to "engage" our attention. The problem is that they insist on steering those bicycles for us, because they get rich selling advertising that they precisely target to us. Democracy and common sense require that we, the people, keep control of our marketplace of ideas. It is time to wrestle back the steering of our bicycles, so that we can guide our attention where we want. Here is why, and how. Hint: it will probably require regulation, but not in the ways currently being pursued.

What I and others have proposed -- and that Jack Dorsey of Twitter has advocated -- is to spin out the filtering of our newsfeeds (and other recommendations of content, users, and groups) to a multitude of new "middleware" services that work with the platforms, but that users can choose from in an open market, and mix and match as they like. 

"Agreeing on what constitutes the collective good" has always been best done by the collective human effort of an open market of ideas. Algorithms can aid humans in doing that, but we, the people, must decide which algorithms, with what parameters and what objective functions. These open filtering proposals explain how and why. What Clegg suggests is good advice as far as it goes, but, ultimately, too much like trying to tango with a dancing bear.