Thursday, December 23, 2021

Tech Policy Press Had A Great First Year -- Illuminating the Critical Issues

Democracy owes thanks to Tech Policy Press and its CEO/Editor Justin Hendrix for a great first year of important reporting, analysis, and opinion on the increasingly urgent issues of tech policy, especially social media. It is becoming the place to keep up with news and ideas. 

They just published their list of Top 50 Contributor Posts of 2021, drawn from 330 posts by 120 guest contributors, and their list of Top 10 Tech Policy Press Podcasts of 2021, drawn from 54 episodes.

I am honored to be among the stellar contributors - and to have written two of the “Top 50” posts (plus four others) - and to have helped organize and moderate their special half-day event, Reconciling Social Media and Democracy.

Just a partial sampling of the many other contributors I have learned much from - Daphne Keller, Elinor Carmi, Nathalie Maréchal, Yael Eisenstat, Ellen Goodman, Karen Kornbluh, Renee DiResta, Chris Riley, Francis Fukuyama, Cory Doctorow, and Mike Masnick.

Great work by CEO/Editor Justin Hendrix.

Sign up for their newsletter!

Monday, December 20, 2021

Are You Covidscuous? [or Coviscuous?]

Are You Covidscuous? Have you been swapping air with those who are?

Covidscuous, adj. (co-vid-skyoo-us), Covidscuity, n. -- definition: demonstrating or implying an undiscriminating or unselective approach; indiscriminate or casual -- in regard to Covid contagion risks to oneself and those around one.

[Update 1/12/22:] Alternate form: Coviscuous, Coviscuity. Some may find this form easier to pronounce and more understandable.

We seem to lack a word for this badly needed concept. Many smart people -- who know Covid is real, are vaccinated and boosted, and wear masks -- still often seem oblivious to the cumulative and multiplicative nature of repeated exposures to risk. Many are aware that Omicron has thrown a new curveball, but give little thought to how often they expose themselves (and thus those they spend time with) by not limiting how much time they spend in large congregate indoor settings -- especially when rates and risks are increasing.

In July 2020, I wrote The Fog of Coronavirus: No Bright Lines, emphasizing that Covid spreads like a fog, depending on distance, airflow, and duration of exposure -- and that while a single interaction may have low risk, large numbers of low-risk interactions can amount to high risk. “You can play Russian roulette once or twice and likely survive. Ten or twenty times and you will almost certainly die. We must weigh level of risk, duration, and frequency.” A gathering of six friends or relatives exposes six people to each other. A party with dozens of people chatting and mingling in ever-changing close circles of a few people has far higher risk – even if all are boosted.

We need to constantly apply the OODA loop to our exposures – Observe, Orient, Decide, Act, and repeat. When rates and exposure levels are low, we can be more relaxed. As rates or other risk factors increase, we need to be far more judicious about our exposures.

We should think in terms of a Covidscuity Rating: an index that factors in how many people you interact with (each having their own Covidscuity Rating) and for how long. More people, higher individual Covidscuity, longer duration, closer contact, and less masking all multiply risk. Maybe epidemiologists can decide just how that math generally works and create a calculator app we can use to understand the relevant factors better (much like apps for home energy efficiency). Maybe display a Monte Carlo graph to show how this is never exact, but a fuzzy bell curve of probabilities. This could help us understand the risks we take -- and those we take on from those we choose to interact with.
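To make the compounding concrete, here is a minimal sketch in Python of how such a calculator might work. All of the numbers -- the per-contact rates, the 0.2 masking discount -- are illustrative placeholders that real epidemiologists would have to replace, and the function names are my own invention:

```python
def event_risk(base_rate, contacts, hours, masked):
    """Rough probability of infection at one gathering.
    base_rate is an assumed per-contact, per-hour transmission chance;
    the 0.2 masking discount is a placeholder, not a measured value."""
    per_contact = min(base_rate * hours * (0.2 if masked else 1.0), 1.0)
    # Chance of escaping infection from every contact, then the complement.
    return 1.0 - (1.0 - per_contact) ** contacts

def cumulative_risk(events):
    """Risk compounds across repeated events: 1 minus the product of the
    chances of escaping each one -- the Russian roulette effect."""
    escape = 1.0
    for base_rate, contacts, hours, masked in events:
        escape *= 1.0 - event_risk(base_rate, contacts, hours, masked)
    return 1.0 - escape

# One small dinner vs. ten large unmasked parties in a month.
dinner = [(0.005, 5, 2, False)]
party_season = [(0.005, 30, 3, False)] * 10
print(f"one dinner:   {cumulative_risk(dinner):.0%}")
print(f"party season: {cumulative_risk(party_season):.0%}")
```

Even though each party only modestly raises the per-event risk, the ten of them compound to many times the risk of the single dinner. A Monte Carlo version would sample each exposure instead of multiplying point probabilities, producing the fuzzy bell curve of outcomes rather than a single number.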

But in any case, the OODA loops must be continuous. Not from months ago, but weekly, and whenever there is new information. Observe, Orient, Decide, Act, repeat.

And of course we have a social responsibility. This risk is not just to you, but those you might next infect. And to all of us, as you help provide a breeding ground for new and more dangerous variants.

This is not to say some Covidscuity is always wrong, only that we should maintain updated awareness of what risk we take, for what reward -- and consider not just single events, but budget our activities for the compounding effect of repeated exposure. Consider your own Covidscuity, and that of those you expose yourself to.

Sunday, December 19, 2021

Tech Policy Press: The Ghost of Surveillance Capitalism Future

My short article in Tech Policy Press focuses on The Ghost of Surveillance Capitalism Future, AKA, The Ghost of Social Media Future. 

Concerned about what Facebook and other platforms know about you and use to manipulate you now? The "mind-reading" power of "biometric psychography" will make that look like the good old days. 

Now is the time for policy planners to look to the future – not just to next year, but the next decade. Whatever direction we choose, the underlying question is “whom does the technology serve?” These global networks are far too universal, and their future potential far too powerful, to leave this to laissez-faire markets with business models that primarily exploit users.

Plus two additional references that add to the vision of abuses:

    Monday, November 29, 2021

    Directions Toward Re-Architecting Social Media to Serve Society

    My current article in Tech Policy Press, Progress Toward Re-Architecting Social Media to Serve Society, reports briefly on the latest in a series of dialogs on a family of radical proposals that is gaining interest. These discussions have been driven by the Stanford Working Group on Platform Scale and their proposal to unbundle the filtering of items into our social media news feeds, from the platforms, into independent filtering “middleware” services that are selected by users in an open market.

    As that article suggests, the latest dialogue at the Stanford HAI Conference on "Radical Proposals" questions whether variations on these proposals go too far, or not far enough. That suggests that policy planners would benefit from more clarity on increases in scope that might be phased in over time, and on just what the long-term vision for the proposal is. The most recent session offered some hints of directions toward more ambitious variations – which might be challenging to achieve but might generate broader support by more fully addressing key issues. But these were just hints.

    Reflecting on these discussions, this post pulls together some bolder visions along the same lines that I have been sketching out, to clarify what we might work toward and how this might address open concerns. Most notably, it expands on the suggestion in the recent session that data cooperatives are another kind of “middleware” between platforms and users that might complement the proposed news feed filtering middleware.

    The current state of discussion

    This is best understood after reading my current Tech Policy Press article, but here is the gist:

       The unbundling of control of social media filtering to users -- via an open market of filtering services -- is gaining recognition as a new and potentially important tool in our arsenal for managing social media without crippling the freedom of speech that democracy depends on. Instead of platform control, it brings a level of social mediation by users and services that work as their agents.

       Speaking as members of the Stanford Group, Francis Fukuyama and Ashish Goel explained more of their vision of such an unbundling, gave a brief demo, and described how they have backed off to become a bit less radical -- to limit privacy concerns as well as platform and political resistance. However, others on the panel suggested that might not be ambitious enough.

       To the five open concerns about these proposals that I had previously summarized -- relating to speech, business models, privacy, competition and interoperability, and technological feasibility – this latest session highlighted a sixth issue -- relating to the social flow graph. That is the need for filtering to consider not just the content of social media but the dynamics of how that content flows among -- and draws reaction from -- chains of users, with sometimes-destructive amplification. How can we manage that harmful form of social mediation -- and can we achieve positive forms of social mediation?

       That, in turn, brings privacy back to the fore. Panelist Katrina Ligett suggested that another topic at the Stanford conference, Data Cooperatives, was also relevant to this need to consider the collective behavior of social media users. That is something I had written about after reflecting on the earlier discussion hosted by Tech Policy Press. The following section relates those ideas to this latest discussion.

    Infomediaries -- another level of middleware -- to address privacy and business model issues

    While adding another layer of intermediation and spinning more function out of the platforms may seem to complicate things, the deeper level of insight from the dynamics of the flow of discourse will enable more effective filtering -- and more effective management of speech across the board. It will not come easily or quickly -- but any stop-gap remediation should be done with care to not foreclose development toward mining this wellspring of collective human judgment.

    The connection of filtering service “middleware” to the other “middleware” of data collectives that Ligett and I have raised has relevance not only to privacy but also to the business and revenue model concerns that Fukuyama and Goel gave as reasons for scaling back their proposals. Data collectives are a variation on what were first proposed as “infomediaries” (information intermediaries) and later as “information fiduciaries.” I wrote in 2018 about how infomediary services could help resolve the business model problems of social media, and recently about how they could help resolve the privacy concerns. The core idea is that infomediaries act as user agents and fiduciaries to negotiate between users and platforms – and advertisers -- for user attention and data.

    My recent sketch of a proposal to use infomediaries to support filtering middleware, Resolving Speech, Biz Model, and Privacy Issues – An Infomediary Infrastructure for Social Media?, suggested not that the filtering services themselves be infomediaries, but be part of an architecture with two new levels:

    1. A small number of independent and competing infomediaries that could safeguard the personal data of users, coordinate limits on clearly harmful content, and help manage flow controls. They could use all of that data to run filtering on behalf of...
    2. A large diversity of filtering services – without exposing that personal data to the filtering services (which might have much more limited resources to process and safeguard the data)

    Such a two-level structure might enable powerful and diverse filtering services while providing a strong quasi-central, federated support service – insulated from both the platforms and the filtering services. That infomediary service could coordinate efforts to limit dangerous virality in ways that serve users and society, not advertisers. Those infomediaries could also negotiate as agents for the users for a share of any advertising revenue -- and take a portion of that to fund themselves, and the filtering services.

    With infomediaries, the business model concerns about sustaining filtering services, and insulating them from the perverse incentives of the advertising model to drive engagement, might become much less difficult than currently feared.

       Equitable revenue shares in any direction can be negotiated by the infomediaries, regardless of just how much data the filtering services or infomediaries control, who sells the ads, or how much of the user interface they handle. That is not a technical problem but one of negotiating power. The content and ad-tech industries already manage complex multi-party sales and revenue sharing for ads -- in Web, video, cable TV, and broadcast TV contexts -- which accommodate varying options for which party sells and places ads, and how the revenue is divided among the parties. (Complex revenue sharing arrangements through intermediaries have long been the practice in the music industry.)

       Filtering services and infomediaries could also shift incentives away from the perversity of the engagement model. Engagement is not the end objective of advertisers, but only a convenient surrogate for sales and brand-building. Revenue shares to filtering services and infomediaries could be driven by user-value-based metrics rather than engagement -- even as simple as MAUs (monthly active users). That would better align those services with the business objective of attracting and keeping users, rather than addicting them. Some users may choose to wear blinders, but few will agree to be manipulatively driven toward anger and hate if they have good alternatives. But now the platform's filters are the only game in the platform's town.

    Related strategies that build on this ecosystem to filter for quality

    There might be more agreement on the path toward social media that serve society if we shared a more fleshed-out vision of what constructively motivated social media might do, and how that would counter the abuses we currently face. Some aspects of the power that better filtering services might bring to human discourse are suggested in the following:

    Skeptics are right that user-selected filtering services might sometimes foster filter bubbles. But they fail to consider the power that multiple services that seek to filter for user value might achieve, working in “coopetition.” Motivated to use methods like these, a diversity of filtering services can collaborate to mine the wisdom of the crowd that is hidden in the dynamics of the social flow graph of how users interact with one another – and can share and build on these insights into reputation and authority. User-selected filtering services may not always drive toward quality for all users, but collectively, a powerful vector of emergent consensus can bend toward quality. The genius of democracy is its reliance on free speech to converge on truth – when mediated toward consensus by an open ecosystem of supportive institutions. Well-managed and well-regulated technology can augment that mediation, instead of disrupting it.
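    As a toy illustration of mining that emergent consensus, the sketch below iteratively weights each rater's judgments by a reputation derived from how well their past ratings track the evolving consensus. Every name and number here is a hypothetical simplification for illustration, not a description of any actual proposal or deployed system:

```python
def crowd_scores(ratings, rounds=10):
    """ratings: {user: {item: rating in [0, 1]}}.
    Returns consensus item scores, weighting raters by reputation.
    A deliberately simplified sketch of reputation-weighted filtering."""
    users = list(ratings)
    items = {i for r in ratings.values() for i in r}
    rep = {u: 1.0 for u in users}   # start everyone as equally credible
    score = {i: 0.5 for i in items}
    for _ in range(rounds):
        # 1. Item scores: reputation-weighted average of ratings.
        for i in items:
            raters = [u for u in users if i in ratings[u]]
            total = sum(rep[u] for u in raters)
            if total:
                score[i] = sum(rep[u] * ratings[u][i] for u in raters) / total
        # 2. Reputation: 1 minus mean disagreement with the consensus.
        for u in users:
            diffs = [abs(ratings[u][i] - score[i]) for i in ratings[u]]
            if diffs:
                rep[u] = 1.0 - sum(diffs) / len(diffs)
    return score

# Two raters broadly agree; a third consistently rates against them.
ratings = {
    "u1": {"x": 0.9, "y": 0.8},
    "u2": {"x": 0.9, "y": 0.8},
    "u3": {"x": 0.1, "y": 0.2},
}
consensus = crowd_scores(ratings)
```

    After a few rounds the dissenting rater's influence shrinks, and the consensus scores rise above the naive unweighted averages -- a crude picture of how emergent agreement can bend toward quality while still counting every voice.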

    Phases – building toward a social media architecture that serves society

       The Stanford Group’s concerns about “political realism” and platform pushback have led them to a basic level of independent, user-selectable labeling services. That is a limited remedy, but may be valuable in itself, and as a first step toward bolder action.

       Their intent is to extend from labeling to ranking and scoring, initially with little or no personal data. (It is unclear how useful that can be without user interaction flow data, but it is a step worth testing.)

       Others have proposed similar basic steps toward more user control of filtering. In addition to proposals I cited this spring, the proposed Filter Bubble Transparency Act would require that users be offered an unfiltered reverse-chronological feed. That might also enable independent services to filter that raw feed. Jack Balkin and Chris Riley have separately suggested that Section 230 be a lever for reform by restricting safe harbors to services that act as fiduciaries and/or that provide an unfiltered feed that independent services can filter. (But again, it is unclear how useful that filtering can be without access to user interaction flow data.)

       Riley has also suggested differential treatment of commercial and non-commercial speech. That could enable filtering that is better-tailored to each type.

       The greatest benefit would come with more advanced stages of filtering services that would apply more personal data about the context and flow of content through the network, as users interact with it, to gain far more power to apply human wisdom to filtering (as I have been suggesting). That could feed back to modulate forward flows, creating a powerful tool for selectively damping (or amplifying) the viral cascades that are now so often harmful.

        Infomediaries (data cooperatives) could be introduced to better support that more advanced kind of filtering, as well as to help manage other aspects of the value exchange with users relating to privacy and attention that are now abused by “surveillance capitalism.”

    Without this kind of long-term vision, we risk two harmful errors. One is overreliance on oppressive forms of mediation that stifle the free inquiry that our society depends on, and that the First Amendment was designed to protect. The other is overly restrictive privacy legislation that privatizes community data that should be used to serve the common good. Of course there is a risk that we may stumble at times on this challenging path, but that is how new ecosystems develop.


    Running updates on these important issues can be found here, and my updating list of Selected Items is on the tab above.

    Wednesday, November 03, 2021

    Resolving Speech, Biz Model, and Privacy Issues – An Infomediary Infrastructure for Social Media?

    A Quick Sketch for Discussion: Formative thoughts on addressing open concerns, posted in anticipation of an 11/9 conference session at Stanford on the “middleware” unbundling proposals. (This also suggests linkage to an 11/10 session on “data cooperatives” at the same event.) [Update: As noted at the end, there was some discussion at the 11/9 conference session that was generally supportive of the directions suggested here.]


    Recent proposals to unbundle filtering services from social media platforms to better serve user interests have generated support, tempered by concern -- notably about business models and privacy protection. Instead of the one-level functional unbundling that has been proposed, these concerns may be better handled by a two-level unbundling. 

    Between the platforms and the large numbers of unbundled filtering services that need resources and access to sensitive personal data to filter effectively on their users’ behalf, add a layer with a small number of better-resourced “infomediaries” that are fiduciaries for users. The infomediaries can manage coordination of services, data protection, and revenue sharing in service to user interests, and enable the many independent filtering services to share resources and run their filters in privacy-protected ways.

    The time may be ripe for the long-gestating idea of “infomediaries” to emerge as a linchpin for resolving some of the management and control dilemmas we now face with social media. The session with Francis Fukuyama and others that I moderated at the Tech Policy Press event on 10/7, Reconciling Social Media & Democracy: Fukuyama, Keller, Maréchal & Reisman (along with other speakers that followed) generated a wide-ranging discussion of issues with those proposals that provide context for the upcoming session on these proposals he will participate in at Stanford. 

    Knotty problems with the “middleware” proposal

    The unbundling proposals that Fukuyama, I, and others advocate have been viewed as having considerable appeal in principle, but the 10/7 discussion sharpened many previously raised questions about whether they can work -- relating to speech, business models, privacy, competition and interoperability, and technological feasibility.

    Reflecting on the privacy issues led me to refocus on “infomediaries” as an important part of a solution, and how they might clarify the business model issues, as well. Infomediaries were first proposed in the dot-com era, as agents of consumers that could negotiate with businesses over data and attention, to give consumers control and compensation for their information. The imbalance of power over consumers has grown in the world of e-commerce, but social media have given this even more importance and urgency.

    The unbundling proposal is to spin out the filtering of what users see in their newsfeeds from the platforms -- to create independent filtering “middleware” services that users select in an open market to serve as their agents. There is wide agreement that their ad-engagement-driven business model drives social media to promote harmful speech in powerful and dangerous ways. Fukuyama raised an even deeper concern that the concentration of power to control what we each see is a mortal threat to democracy, “a loaded gun sitting on the table” that we cannot rely on good actors to not pick up.

    Unbundling of the filtering services would take that loaded gun from the platforms (and those who might coerce them) and reduce its power -- by giving individual users more independent control of what they see in their social media newsfeeds and recommendations. But -- how can those unbundled services be funded, since users seem disinclined to pay for them? -- and how can the filtering services use the personal data needed to do filtering effectively without breaches of privacy?

    This problem is compounded because we would want a wide diversity of filtering services innovating and competing for users. Many would be small, and under-resourced -- and there is no simple, automated solution to understanding the content they filter and its authority.

    • How would they have the resources -- to not only do the basic filtering task of ranking, but also to moderate the overwhelming firehose of harmful content that already taxes the ability of giants like Facebook?
    • How would a multitude of small filtering services be able to protect non-public multi-party content, as well as the multi-party personal metadata, needed to understand the provenance and authority of what they filter?

    These are challenging tasks, and there is reluctance to proceed without a clear idea of how we might operationalize a solution.

    The role of infomediaries

    I suggest the answer to this dilemma could be a more sophisticated distribution of functions. Not just two levels -- platform, plus filtering services (as user agents) -- but three levels: platform; infomediaries (as a few, privileged user agents); and filtering services (as many, more limited user agents).

    "Infomediaries" (information intermediaries) were suggested in 1997 in Harvard Business Review -- trusted user agents that manage a consumer’s data and attention, and negotiate with businesses over how they are used and for what compensation. Similar ideas re-surfaced in a law review article in 2016 as "Information Fiduciaries" and then in HBR in 2018 as "Mediators of Individual Data" ("MIDs").

    (As I was writing this, I learned that another session at the Stanford event is on a more recent variant, “Data Cooperatives.” Despite that coincidence, I am not aware that a connection has been made, except for the observation in this recent work that social media data is not individual but “collective.” If the participants at those two sessions are not in communication, I suggest that might be productive.)

    Why have infomediaries not materialized in any significant way? It seems network effects and the "original sin of the Internet," advertising, have proven so hugely powerful that infomediaries never got critical mass in commerce beyond narrow uses. (I was CTO from ’98-’00 for a basic kind of infomediary service that had some success before the crash.)

    But now, with the harms of social media bringing the broader abuses of “attention capitalism” to a head, regulators may see that the only way to systematically limit these harms – and the harms of attention capitalism more broadly -- is to mandate the creation of infomediaries to serve as negotiating and custodial agents for consumers. They offer a way to enable business models that balance business power with consumer power, especially regarding compensation for attention and data -- in ways that empower users to decide what to allow, for what benefit. They also offer a new solution to protecting sensitive multi-party social media messages and related metadata -- while enabling society to refine and benefit from the wisdom of the crowd that it contains -- to help us manage our attention.

    Here is a sketch of how filtering services might be supported by infomediaries. Working out the details will be a complex task that should be guided by a dedicated Digital Regulatory Agency with significant business and independent expert participation.

    • Put all personal data of social media users under the control of carefully regulated infomediaries (IMs) who interface with the platforms and the filtering services (FSs), as fiduciary agents for their users. Create a small number of infomediaries (five to seven?) to support defined subsets of users. After that, users would be free to migrate among infomediaries in a free market -- and very limited numbers of new infomediary entrants might be enabled.
    • Spin out the filtering services from the platforms – and create processes to encourage new entrants. The infomediaries would cooperate to enable the filtering services to benefit from the data of all qualified infomediaries, while protecting personally identifiable data.
    • Empower the infomediaries to negotiate a share of advertising revenue from the platforms on behalf of their users, in compensation for their data and attention – to be shared with the filtering services (and perhaps the users). Provide alternatively for a mix of user support or public subsidy much like existing public media. Ideally that could grow to include user support for the platforms as an alternative to some or all advertising.
    • Use regulatory power to work with industry to manage interface standards and the ongoing conduct of these roles and negotiations, much as other essential, complex, and dynamic industries like finance, telecom, transport, power, and other utilities are regulated. Creation of new infomediaries might be strictly limited by regulators, much like banks or securities exchanges.

    The virtue of this two-level unbundling architecture is that it concentrates elements of the infomediary role that have network-wide impact and sensitive data in a small number of large competitive entities -- they could apply the necessary resources and technology to maintain privacy and provide complex services, with some competitive diversity. It enables much larger numbers of filtering services that serve diverse user needs to be lean and unburdened.

    Because the new infomediaries would be accredited custodians of sensitive messaging data, as fiduciaries for the users, they could share that data among themselves, providing a collective resource to safely power the filtering services.

    This could be done in two ways: 1) by providing purpose and time-limited, privacy protected data to the filtering services, or perhaps simpler and more secure, 2) by acting as a platform that runs filtering algorithms defined by the filtering services and returning rankings without divulging the data itself. (More on how that can be done, and why, is below). Either way, the platforms would no longer control or be gatekeepers for the filtering.
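    A rough code sketch may help make option (2) concrete. Here a hypothetical Infomediary object holds the sensitive items and flow metadata, runs a scoring function supplied by a filtering service, and returns only an ordered list of item IDs. The class, field, and function names are all illustrative assumptions, not any specified design:

```python
from typing import Callable, Dict, List

Item = Dict[str, object]           # content plus sensitive flow metadata
ScoreFn = Callable[[Item], float]  # policy supplied by a filtering service

class Infomediary:
    """Holds users' sensitive data as a fiduciary; exposes only rankings."""

    def __init__(self, private_items: List[Item]):
        self._items = private_items  # never returned to callers

    def rank_for(self, score: ScoreFn, limit: int = 10) -> List[str]:
        # In a real system the scoring code would run sandboxed, with
        # rate limits and auditing, to prevent data exfiltration.
        ordered = sorted(self._items, key=score, reverse=True)
        return [str(item["id"]) for item in ordered[:limit]]

# A filtering service defines its policy without ever holding the raw data.
def downrank_viral(item: Item) -> float:
    # Illustrative: penalize items that arrive via long reshare cascades.
    return float(item["quality"]) - 0.1 * float(item["reshare_depth"])

im = Infomediary([
    {"id": "a", "quality": 0.90, "reshare_depth": 12},
    {"id": "b", "quality": 0.70, "reshare_depth": 1},
    {"id": "c", "quality": 0.85, "reshare_depth": 2},
])
feed = im.rank_for(downrank_viral)
```

    The key property is that the filtering service receives only the ordering it asked for, while the infomediary retains custody of the underlying content and metadata and can police what scoring logic is allowed to do.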

    This multilevel breakup may sound very complex and raise questions of regulatory power, but it would be very analogous to the breakup of the Bell System, which unbundled the integrated AT&T into a long-distance service (AT&T), seven regional local-service operating companies (the RBOCs), and a manufacturing arm (Western Electric, later spun off as Lucent), with the resulting markets opened to new competitive entrants, unleashing a torrent of valuable innovation.

    As our social media ecosystem becomes the underlying fabric of most human discourse, a similarly ambitious undertaking is not only economically desirable and justifiable, but essential to the survival of democracy and free speech. Functional specialization multiplies the number of entities, but it simplifies the tasks of those entities – and enables competition, innovation, and resilience. To the fear of technical solutions to social problems that Nathalie Maréchal spoke of, I submit that the problem of algorithms that select for virality (thus exacerbating a social problem) is a newly-created technical problem, driven by an incentives problem – one that this architecture (or some improvement on it) can help solve.

    A casual reader might stop here. The following sections dig deeper into how this addresses, first, the business model challenges that infomediaries were conceived to solve, and then, the difficult privacy issues of middleware unbundling and other problems that they might help finesse.


    Looking deeper...

    Resolving the business model issues

    Even for one who read it then, it is now enlightening to turn the clock back to 1997 and read the original HBR article on infomediaries by John Hagel III and Jeffrey F. Rayport, “The Coming Battle for Customer Information,” for perspective on the current problems of surveillance and attention capitalism. The authors predicted:

    In order to help [consumers] strike the best bargain with vendors, new intermediaries will emerge. They will aggregate consumers and negotiate on their behalf within the economic definition of privacy determined by their clients. … When ownership of information shifts to the consumer, a new form of supply is created. By connecting information supply with information demand and by helping both parties involved determine the value of that information, infomediaries would be building a new kind of information supply chain.

    A 1999 book co-authored by Hagel greatly expands on this idea (and is also worth a look). It specifically refers to “filtering services” that include or exclude marketing messages to match the needs or preferences of their clients.

    Growing due to network effects and scale economies, vendors like Amazon and ad-tech services like Google and Facebook have effectively usurped the vendor side of the infomediary function. These powers are now so entrenched and engorged with obscene profits that there is little hope that infomediaries that do represent user interests can emerge without regulatory action.

    The proposal that unbundled filtering services be funded by a revenue share from the platforms has struck critics as implausible and complex. But if that role is not dispersed among large numbers of often-small filtering services, but instead managed by a small number of larger infomediaries who have a mandate from regulators, the task may be far more tractable.

    Yes, this would be a complex ecosystem, with multiple levels of cooperating businesses for which economically sound revenue shares would need to be negotiated: ad revenues from platforms to infomediaries, to filtering services, and possibly to consumers -- or alternatively, from consumers or sponsors or public funding, in whichever direction corresponds to the value transfer. But many industries -- such as financial services, ad-tech, telecom, and logistics -- flourish with equally complex revenue shares (whether called shares, fees, commissions, settlements, or whatever), often overseen by regulators that ensure fairness.

    Once such a multiplayer market begins to operate, innovation can enable better revenue models. My 2018 article “Reverse the Biz Model” explored some possible variations, and explained how they could work via infomediaries, or directly between business and consumer. It also suggested how consumer funding to eliminate ads on an individual basis could be commensurate with ability to pay. The inherent economics are more egalitarian than one might first think because those with low income have low value to advertisers. They would have to contribute less to compensate for lost ad revenue. Mediated well, users could even benefit from whatever level of non-intrusive and relevant advertising they desire, and platforms would still bring in sufficient funding to disperse through the ecosystem -- perhaps more than now, given that there would be less waste. (Note that filtering services might specialize in advertising/marketing messages or in user-generated content to better address the different issues for each.)

    Some fear that having filtering services receive funding from advertising, even indirectly, would continue the perverse incentives for engagement that are so harmful. But revenue shares to the infomediaries and filtering services need not be tied to engagement – they could be tied to monthly average users or other user-value-based metrics. With a multitude of filtering services, the value of engagement to the platform would be decoupled, so that no individual filtering service would materially affect engagement. These services might be structured as nonprofits, benefit corporations, or cooperatives, to further shift incentives toward user and social value.

    Resolving the privacy issues

    The other key opportunity for infomediaries is to manage data privacy. This takes on special significance because key aspects of filtering and recommendations depend on either message content or the metadata about how users interact with those messages -- both of which are often privacy-sensitive. Importantly, as noted by the recent proposals for data cooperatives, that data is not individual, but collective.

    Infomediaries may offer a way to finesse the concerns pinpointed in the 10/7 discussion. I suggested that the most promising strategy for filtering to understand quality -- given the limitations of AI and of human review of billions of content items in hundreds of languages and contexts -- is to use the metadata that signals how other users responded to that content. Daphne Keller nicely delineated the privacy concern:

    … I think a lot of content moderation does depend on metadata. For example, spam detection and demotion is very much driven by metadata. And Twitter has said that a lot of how they detect terrorist content, isn’t really by the content, it’s by the patterns of connections between accounts following each other or coming from the same IP address or appearing the same– those aren’t the examples they gave, but what I assume they’re using. And I think it’s a big part of what Camille Francois has called the ABC framework, the Actors-Behavior-Content, as these three frameworks for approaching responding to problematic online content.

    And I think it just makes everything much harder because if we pretend that metadata isn’t useful to content moderation, that kind of simplifies things. If we acknowledge that metadata is useful, that is often personally identifiable data about users, including users who haven’t signed up for this new middleware provider, and it’s a different kind of personally identifiable data than just the fact that they posted particular content at a particular time. And all of the concerns that I raised, but in particular, the privacy concern and just like how do we even do this? What is the technology that takes metadata structured around the backend engineering of Twitter or whomever and share it with a competitor? That gets really hard. So I’m scared to hear you bring up metadata because that adds another layer of questions I’m not sure how to solve.

    This is what drove me to refocus on infomediaries as the way to cut through the dilemma. The platforms could have filtered using as much of this data as they wished, since they now control that data. Similar data is central to Google search (the PageRank algorithm that was the key to their success) -- but search is less driven by engagement than social media.

    Privacy has been a sore point for the unbundling of filtering. The kind of issues that Keller raised led Fukuyama and his colleagues to back off from the broadest unbundling and advocate more limited ambitions, such as labelling, that are content-based rather than metadata-based. He points to services like NewsGuard that rate news sources for their credibility. As I have argued elsewhere, that is a useful service, but severely limited because it applies only to a limited number of established news services (which do represent large amounts of content), not the billions of user-generated content sources (obviously significant in aggregate, but intractable for expert ratings). Instead, I suggest using metadata to draw out the wisdom of crowds, much as Google does. Recent studies support the idea that crowdsourced assessment of quality can be as good as expert ratings, and there is no question that automated crowdsourced methods that draw on passively obtained metadata are far more capable of operating at Internet scale and speed -- the only solution that can really scale as needed.

    Thus, it would be a huge loss to society to not be able to filter social media based on interaction metadata -- an infomediary strategy for making that feasible is well worth some added complexity. A manageable number of infomediaries could manage this data to include most (but not necessarily all) users in this crowdsourcing. Each infomediary would hold only a subset of the users’ data, but that data could be pooled among properly regulated infomediaries and restricted to use only in filtering.

    More technical/operational detail on filtering and data protection

    As noted above, and drawing on work on trust and data sharing by Sandy Pentland (one of the speakers in the Stanford Data Cooperatives session), and similar suggestions by Stephen Wolfram (in his 2019 testimony to a US Senate subcommittee), there seem to be two basic alternatives: 1) providing limited, privacy-protected data to the filtering services, or, perhaps more simply and securely, 2) acting as a platform that runs filtering algorithms defined by the filtering services and returns rankings, without divulging the data itself.

    Perhaps emerging technologies for secure data sharing (such as those described by Pentland) might allow the fiduciaries to grant the filtering services controlled and limited access to this data. But that is not necessary to this architecture -- as noted above, the simpler solution appears to be that of having the infomediaries act as a platform for running filtering algorithms defined by the filtering services without divulging the data itself. Send the algorithm to the data.
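    As a concrete sketch of "send the algorithm to the data": the infomediary could expose a narrow interface that accepts a filtering service's scoring function and returns only a ranking, never the underlying metadata. All names and signal fields here are hypothetical -- a minimal illustration of the pattern, not any real implementation.

```python
from typing import Callable, Dict, List

class Infomediary:
    """Holds privacy-sensitive interaction metadata and runs filtering
    logic on behalf of filtering services, returning only rankings."""

    def __init__(self, metadata: Dict[str, dict]):
        # item_id -> sensitive metadata (share patterns, reputations, etc.)
        self._metadata = metadata  # never leaves this object

    def rank(self, score: Callable[[dict], float]) -> List[str]:
        """Apply a filtering service's scoring function to the private
        metadata and return item IDs in descending score order."""
        return sorted(self._metadata,
                      key=lambda item_id: score(self._metadata[item_id]),
                      reverse=True)

# A filtering service defines its criteria as a pure function of metadata;
# it never sees the data itself, only the resulting order.
def quality_score(meta: dict) -> float:
    return meta.get("sharer_reputation", 0.0) - 2.0 * meta.get("spam_signals", 0.0)

infomediary = Infomediary({
    "post-a": {"sharer_reputation": 0.9, "spam_signals": 0.0},
    "post-b": {"sharer_reputation": 0.2, "spam_signals": 0.5},
})
print(infomediary.rank(quality_score))  # post-a ranks above post-b
```

The essential property is in the interface: the filtering service supplies logic and receives an ordering, while the sensitive metadata stays inside the infomediary's trust boundary.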

    Adapting the approach and terminology suggested by Wolfram, the infomediary retains operational control of the filtering operation, and all the data used for it -- working essentially as a “final ranking provider,” a fully trusted level of user agent. But the setting of specific criteria for that ranking is delegated to one or more user-chosen filtering services, each operating essentially as a “constraint provider” that instructs that rankings be done in accord with the preferences it sets on behalf of its users. (In contrast, the platforms now serve as both constraint providers and final ranking providers -- and users have very little say in how that is done.)

    Note that, ideally, these rankings should be done in a composable format, such that rankings from multiple filters can be arithmetically combined into a composite ranking. This might be done with relative weightings that users can select for each filtering service, such as with sliders, to compose an overall ranking drawn from all the services they choose. Users might be enabled to change their filter selections and weightings at any time to suit varying objectives and moods. Thus, users control the filters by choosing the filtering services (and setting any variations they enable), but the actual process of filtering and the data needed for it remains within the cooperating array of secure infomediaries.
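    One simple way such composable rankings could be combined is a weighted, Borda-style aggregation. This is a hedged sketch: the service names, weights, and scoring scheme are illustrative assumptions, not a specification.

```python
from typing import Dict, List

def composite_ranking(rankings: Dict[str, List[str]],
                      weights: Dict[str, float]) -> List[str]:
    """Combine rankings from several user-chosen filtering services.
    Each service contributes points by rank position (Borda-style),
    scaled by the weight the user set for that service (the 'slider')."""
    scores: Dict[str, float] = {}
    for service, ranked_items in rankings.items():
        n = len(ranked_items)
        for position, item in enumerate(ranked_items):
            scores[item] = (scores.get(item, 0.0)
                            + weights.get(service, 0.0) * (n - position))
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical example: two filtering services, with user-set weights.
rankings = {
    "fact-check-filter": ["a", "c", "b"],
    "local-news-filter": ["b", "a", "c"],
}
weights = {"fact-check-filter": 0.7, "local-news-filter": 0.3}
print(composite_ranking(rankings, weights))  # heavily weighted service dominates
```

Because each service emits only an ordering (or scores in a common format), the infomediary can do this arithmetic without exposing any underlying data to the services being combined.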

    Back to Keller’s concerns: The boundaries between the platforms and the infomediaries are clear, with well-defined interfaces, much as in any complex, evolving ecosystem. Nothing is shared with competitors, only with partners. It is co-opetition among trusted peers over how shared data is used and protected, and at what price. The personal data never goes beyond a team of infomediaries, all trusted with purpose-specific portions of one another’s clients’ data. There is no more implementation complexity than in Google’s ad business. It won’t happen in a day, but it is eminently doable -- if we really want it.

    Improved functionality for filtering, blocking, and flow control

    Consider how this two-level architecture can enable rich functionality with diverse characteristics needed to address the multi-faceted challenges of filtering, blocking, and flow control as we face new technical/social issues like "ampliganda." Growing evidence favors not just filtering (ranking and recommenders) or blocking (takedowns and bans), but flow controls. These include circuit-breakers and other forms of friction that can slow the effects of virality (such as nudging users to take time and read an item before sharing it). The infomediaries could pool their real-time network flow data to serve as the empowered coordinating locus for such measures -- with diversity, and with independence from the platforms.

    The infomediaries might also be the independent coordinating locus for takedowns of truly illegal content in ways that protect user rights of privacy, explanation, and appeal, much as common carriers handle such roles in traditional telecom services. Criteria here might be relatively top-down (because takedowns are draconian binaries), in contrast to the bottom-up rankings of the filtering services (which are fuzzy, not preventing anyone from seeing content, merely making it less likely to be fed into one’s attention). The infomediaries could shield these functions from corporate or political interference better than leaving them with the platforms. They would serve as an institutional layer insulated from platform control. The infomediaries could outsource takedown decision inputs to specialized services (much like email spam blocking services) that could compete based on expertise in various domains. Here again, the co-opetition among trusted peers (and their agents) keeps private data secure.

    Note that this can evolve into a more general infrastructure that works across multiple social media platforms and user subsets. It can also support higher levels of user communities and special interest groups on this same infrastructure, so that the notion of independent platforms can blur into independent groups and communities, using a full suite of interaction modalities, all on a common backbone network infrastructure.

    Whatever the operational details, the primary responsibility for control of personal data would remain with the infomediaries, as data custodians for the data relating to the users they serve. To the extent that the platforms and/or filtering services (and other cooperating infomediaries) have access to that data at all, it could be limited to their specifically authorized transient needs and removed from their reach as soon as that need is satisfied -- subject to legal audits and enforcement. That enables powerful filtering based on rich data across platforms and user populations.

    This is not unlike how trust has long been enforced in financial services ecosystems. Is our information ecosystem less critical to our welfare than our financial ecosystem? Is our ability to exchange our ideas less critical than our financial exchanges?

    [Update 11/8/21:] Feedback from Sandy Pentland (a panelist for the upcoming Data Cooperatives session) led me to the introduction to his new book, which provides an excellent perspective on how this kind of infomediary can evolve, and be distributed in a largely bottom-up way. My description above highlights the institutional role of infomediaries and how they can balance top-down order to serve users -- but Sandy's book suggests how, as these new data technologies mature, they might provide a much more fully distributed blend of bottom-up control and cooperation that can still balance privacy and autonomy with constructive social mediation processes.

    [Update 11/10/21:] There was discussion of data cooperatives as relevant to filtering middleware in the 11/9 HAI middleware session. Panelist Katrina Ligett emphasized the need to consider not only content items, but the data about the social flow graph of how content moves through the network and draws telling reactions from users. She referred to data cooperatives as another kind of middleware, and Ashish Goel also saw promise in this other kind of middleware. I will be writing more on that.


    For additional background, see the Selected Items tab.

    Tuesday, October 26, 2021

    The Best Idea From Facebook Staffers for Fixing Facebook: Learn From Google

    [Image from Murat Yükselif/The Globe and Mail]
    The Facebook Papers trove of internal documents shows that Facebook employees understand the harms of their service and have many good ideas for limiting them. Many have value as part of a total solution. But only one has been proven to not only limit distribution of harmful content but also to select for quality content -- and to work economically at huge scale and across a multitude of languages and cultures.

    Facebook knows that filtering for quality in newsfeeds (and filtering out mis/disinformation and hate) doesn’t require advanced AI -- or humans to understand content -- or the self-defeating Luddite remedy of prohibiting algorithms. It takes clever algorithms that weigh external signals of quality to augment human user intelligence, much as done by Google PageRank. 

    I was pleased to read Gilad Edelman's capsule on this in Wired on 10/26, which brought me to Karen Hao's report in Tech Review from 9/16, both based on a leaked 10/4/19 report by Jeff Allen, a senior-level data scientist then leaving Facebook. I have long advocated such an approach -- seemingly as a voice in the wilderness -- and view this as a measure of validation. Here is a quick note (hopefully to be expanded).

    Jeff Allen's Facebook Paper "How Communities are Exploited on Our Platforms"

    As Karen Hao reports on Allen: 

    “It will always strike me as profoundly weird ... and genuinely horrifying,” he wrote. “It seems quite clear that until that situation can be fixed, we will always be feeling serious headwinds in trying to accomplish our mission.”

    The report also suggested a possible solution. “This is far from the first time humanity has fought bad actors in our media ecosystems,” he wrote, pointing to Google’s use of what’s known as a graph-based authority measure—which assesses the quality of a web page according to how often it cites and is cited by other quality web pages—to demote bad actors in its search rankings.

    “We have our own implementation of a graph-based authority measure,” he continued. If the platform gave more consideration to this existing metric in ranking pages, it could help flip the disturbing trend in which pages reach the widest audiences.

    When Facebook’s rankings prioritize engagement, troll-farm pages beat out authentic pages, Allen wrote. But “90% of Troll Farm Pages have exactly 0 Graph Authority … [Authentic pages] clearly win.” 

    And as Gilad Edelman reports,

    Allen suggests that Graph Authority should replace engagement as the main basis of recommendations. In his post, he posits that this would obliterate the problem of sketchy publishers devoted to gaming Facebook, rather than investing in good content. An algorithm optimized for trustworthiness or quality would not allow the fake-news story “Pope Francis Shocks World, Endorses Donald Trump for President” to rack up millions of views, as it did in 2016. It would kneecap the teeming industry of pages that post unoriginal memes, which according to one 2019 internal estimate accounted at the time for as much as 35 to 40 percent of Facebook page views within News Feed. And it would provide a boost to more respected, higher quality news organizations, who sure could use it.

    Allen's original 2018 report expands: "...this is far from the first time humanity has fought bad actors in our media ecosystems. And it is even far from the first time web platforms have fought similar bad actors. There is a proven strategy to aligning media ecosystems and distribution platforms with important missions, such as ours, and societal value." He capsules the history of Google's PageRank as "the algorithm that built the internet" and notes that graph-based authority measures date back to the '70s.
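    The family of graph-based authority measures Allen invokes can be illustrated with a toy power-iteration computation in the PageRank style. The graph, damping value, and page names below are invented for illustration; real systems like Google's blend many more signals than link structure alone.

```python
def authority(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """links maps each node to the nodes it endorses (links to / cites).
    Returns an authority score per node, computed by power iteration."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node keeps a small baseline, then receives shares of the
        # rank of the nodes that endorse it.
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outlinks in links.items():
            if outlinks:
                share = damping * rank[n] / len(outlinks)
                for m in outlinks:
                    new_rank[m] += share
            else:
                # Dangling node: spread its rank evenly across all nodes.
                for m in nodes:
                    new_rank[m] += damping * rank[n] / len(nodes)
        rank = new_rank
    return rank

# A troll-farm page that nobody reputable links to gets near-baseline
# authority, even though it pumps out links of its own.
graph = {
    "quality-news": ["local-blog"],
    "local-blog": ["quality-news"],
    "troll-farm": ["quality-news", "local-blog"],  # links out, never linked to
}
scores = authority(graph)
print(sorted(scores, key=scores.get, reverse=True))
```

This captures Allen's point in miniature: authority flows along endorsements, so a page with "exactly 0 Graph Authority" cannot buy its way up by producing output -- only by earning inbound endorsement from pages that themselves have authority.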

    He recounts the history of "yellow journalism" over a century ago, and how Adolph Ochs' New York Times changed that by establishing a reputation for quality, and then digs in to Google (emphasis added):

    So Let's Just Follow Googles Lead. Google has set a remarkable example of how to build Ochs’ idea into a web platform. How to encode company values and missions into ranking systems. Figuring out how to make some of it work for FB and IG would provide the whole company with enormous value.

    Google is remarkably transparent about how they work and how they fight these types of actors. If you haven't read “How Search Works" I highly recommend it. It is an amazing lesson in how to build a world class information retrieval system. And if you haven't read “How Google Fights Disinformation”, legitimately stop what you're doing right now and read it.

    The problem of information retrieval (And Newsfeed is 100% an information retrieval system) comes down to creating a meaningful definition of both the quality of the content producer and the relevance of the content. Google's basic method was to use their company mission to define the quality.

    Google's mission statement is to make the worlds information widely available and useful. The most important word in that mission statement is “useful”. A high quality content producer should be in alignment with the IR systems mission. In the case of Google, that means a content producer that makes useful content. A low quality producer makes content that isn't useful. Google has built a completely objective and defensible definition of what useful content is that they can apply at scale. This is done in their “Search Quality Rater Guidelines”, which they publish publicly.

    The way Google breaks down the utility of content basically lands in 3 buckets. How much expertise does the author have in the subject matter of the content, as determined by the credentials the author presents to the users. How much effort does the author put into their content. And the level of 3rd party validation the author has.

    If the author has 0 experience in the subject, doesn't spend any time on the content, and doesn't have any 3rd party validation, then that author is going to be labeled lowest quality by Google and hardly get any search traffic. Does that description sound familiar? It is a pretty solid description of the Troll Farms.

    Google calls their quality work their first line of defense against disinformation and misinformation. All we have to do are figure out what the objective and defensible criteria are for a Page to build community, and bring the world closer together. We are leaving a huge obvious win on the table by not pursuing this strategy.

    ...It seems quite clear that until that situation can be fixed, we will always be feeling serious headwinds in trying to accomplish our mission. Newsfeed and specifically ranking is such an integral part of our platform. For almost everything we want to accomplish, Feed plays a key role. Feed is essential enough that it doesn't particularly need any mission beyond our companies. FB, and IG, need to figure out what implications our company mission has on ordering posts from users inventory.

    Until we do, we should expect our platform to continue to empower actors who are antithetical to the company mission.

    My views on applying this

    Allen is focused here on troll-farm Pages rather than pure user generated content, and that is where Google's page ranking strategy is most directly parallel. It also may be the most urgent to remedy. 

    UGC is more of a long tail -- more items, harder to rate according to the first two of Allen's "3 buckets." But he did not explain the third bucket -- how Google uses massive data, such as links placed by human "Webmasters," plus feedback on which items in search hit lists users actually click on, and even dwell times on those clicks. That is similar to the data on likes, shares, and comments that I have suggested be used to create graph authority reputations for ordinary users and their posts and comments. For details on just how I see that working, see my 2018 post, The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings. 
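    A minimal sketch of that "rate the raters and weight the ratings" idea, with all names, numbers, and update rules hypothetical: an item's quality is a reputation-weighted average of user ratings, and each rater's reputation is then nudged by how well their rating agreed with the consensus. A real system would iterate this over many items and guard against manipulation.

```python
def item_quality(ratings: dict, reputation: dict) -> float:
    """ratings: user -> rating in [0, 1]; reputation: user -> weight.
    Weight each rating by the rater's reputation."""
    total_weight = sum(reputation.get(u, 0.0) for u in ratings)
    if total_weight == 0:
        return 0.5  # no trusted signal yet
    return sum(reputation.get(u, 0.0) * r for u, r in ratings.items()) / total_weight

def update_reputation(reputation: dict, ratings: dict, consensus: float,
                      rate: float = 0.1) -> dict:
    """Nudge each rater's reputation up or down by agreement with consensus."""
    updated = dict(reputation)
    for user, r in ratings.items():
        agreement = 1.0 - abs(r - consensus)  # 1.0 = perfect agreement
        updated[user] = max(0.0, updated.get(user, 0.5) + rate * (agreement - 0.5))
    return updated

# Hypothetical example: a troll's contrary rating carries little weight,
# and disagreeing with the consensus erodes the troll's reputation further.
ratings = {"alice": 0.9, "bob": 0.8, "troll": 0.0}
reputation = {"alice": 1.0, "bob": 0.8, "troll": 0.1}
q = item_quality(ratings, reputation)
reputation = update_reputation(reputation, ratings, q)
print(round(q, 2), round(reputation["troll"], 3))
```

The feedback loop is the point: raters earn influence by a track record of useful ratings, much as pages earn graph authority by a track record of inbound endorsements.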

    Of course there will be challenges for any effort to apply this to social media. Google has proven that technology can do this kind of thing efficiently at Internet scale, but social media UGC and virality is even more of a challenge than Web pages. 

    The biggest challenge is incentives -- to motivate Facebook to optimize for quality, rather than engagement. One way to make it happen is to unbundle the filtering/ranking services from Facebook, as I described in Tech Policy Press, and as discussed by eminent scholars in the recent Tech Policy Press mini-symposium (and in other writings listed in the Selected Items tab, above). That could realign the incentives to filter for quality, by making filtering a service to users, not platforms or advertisers.

    Maybe the level of anger at this increasingly blatant and serious abuse of society and threat to democracy will finally spur regulatory action -- and the realization that we need deep fixes, not just band-aids.

    In any case, it is great to finally see recognition of the merit of this strategy from within Facebook (even if by an employee reportedly departing in frustration). 

    Tuesday, October 12, 2021

    It Will Take a Moonshot to Save Democracy From Social Media

    A moonshot is what struck me, after some reflection on the afternoon’s dialog at the Tech Policy Press mini-symposium, Reconciling Social Media & Democracy on 10/7/21. It was crystalized by a tweet later that evening about “an optimistic note.” My optimism that there is a path to a much better future was reinforced, but so was my sense of the weight of the task.

    Key advocates now see the outlines of a remedial program, and many are now united in calling for reform. But the task is unlikely to be undertaken voluntarily by the platforms -- and is far too complex, laborious, and uncertain to be effectively managed by legislation or existing regulatory bodies. There seemed to be general agreement on an array of measures as promising -- despite considerable divergence on details and priorities. The clearest consensus was that a new, specialized, expert agency is needed to work with and guide the industry to serve users and society.

    While many of the remedies have been widely discussed, the focal point was a less-known strategy arising from several sources and recently given prominence by Francis Fukuyama and his Stanford-based group. The highly respected Journal of Democracy featured an article by Fukuyama, then a debate by other scholars plus Fukuyama’s response. Our event featured Fukuyama and most of those other debaters, plus several notable technology-focused experts. I moderated the opening segment with Fukuyama and two of the other scholars, drawing on my five-decade perspective on the evolution of social media to try to step back and suggest a long-term guiding vision.

    The core proposal is to unbundle the filtering of items in our newsfeeds, creating an open market in filtering services (“middleware”) that users can choose from to work as their agents. The idea is 1) to reduce the power of the platforms to control for each of us what we see, and 2) to decouple that from the harmful effects of engagement-driven business incentives that favor shock, anger, and divisiveness. Unbundling is argued to be the only strategy that limits unaccountable platform power over what individuals see -- a power that sits as a “loaded gun on the table,” waiting to be picked up by an authoritarian platform or government to threaten the very foundations of democracy.

    Key alternatives, favored by some, are the more familiar remedies of shifting from extractive, engagement-driven, advertising-based business models; stronger requirements for effective moderation and transparency; and corporate governance reforms. These too have weaknesses: moderation is very hard to do well no matter what, and government enforcement of content-based moderation standards would likely fail First Amendment challenges.

    Some of the speakers are proponents of even greater decentralization. My opening comments suggested that be viewed as a likely long-term direction, and that the unbundling of filters was an urgent first step toward a much richer blend of centralized and decentralized services and controls -- including greater user control and more granular competitive options.

    There was general agreement by most speakers that there is no silver bullet, and that most of these remedies are needed at some level as part of a holistic solution. There were concerns whether the unbundling of filters would do enough to stop harmful content or filter bubble echo chambers, but general agreement that shifting power from the platforms is important. The recent Facebook Files and hearings make it all too clear that platform self-regulation cannot be relied on and that all but the most innocuous efforts at regulation will be resisted or subverted. My suggested long-term direction of richer decentralization seemed to generate little dispute.

    This dialog may help bring more coherence to this space, but the deeper concern is just how hard reform will be. There seemed to be full agreement on the urgent need for a new Digital Regulatory Agency with new powers to draw on expertise from government, industry, and academia to regulate and monitor with an ongoing and evolving discipline (and that current proposals to expand the FTC role are too limited).

    The Facebook Files and recent whistleblower testimony may have stirred regulators to action (or not?), but we need a whole of society effort. We see the outlines of the direction through a thicket of complex issues, but cannot predict just where it will lead.  That makes us all uncomfortable.

    That is why this is much like the Apollo moonshot. Both are concerted attacks on unsolved, high-risk problems -- taking time, courage, dedication, multidisciplinary government/industry organization, massive financial and manpower resources, and navigation through a perilous and evolving course of trial and error.

    But this problem of social media is far more consequential than the moonshot. “The lamps are going out all over the free world, and we shall not see them lit again in our lifetime” (paraphrasing Sir Edward Grey as the First World War began) -- this could apply within a very few years. We face the birthing of the next stage of democracy -- much as after Gutenberg, industrialization, and mass media. No one said this would be easy, and our neglect over the past two decades has made it much harder. It is not enough to sound alarms – or to ride off in ill-considered directions. But there is reason to be optimistic -- if we are serious about getting our act together.


    This is my quick take, from my own perspective (and prior to access to recordings or transcripts) -- feedback reflecting other takes on this is welcome. More to follow...

    Running updates on these important issues can be found here, and my updating list of Selected Items is on the tab above.

    Friday, September 17, 2021

    Unbundling Social Media Filtering Services – Toward an Ecosystem Architecture for the Future [Working Draft]


    Raging issues concerning moderation of harmful speech on social media are most often viewed from the perspective of combating current harms. But the broader context is that society has evolved a social information ecosystem that mediates discourse and understanding through a rich interplay of people, publishers, and institutions. Now that is being disintermediated, but as digitization progresses, mediating institutions will reinvent themselves to leverage this new infrastructure. The urgent task for regulation is to facilitate that. Current proposals for unbundling of social media filtering services are just a first and most urgent step in that evolution. That can transform moderation remedies -- including ranking/recommenders, bans/takedowns, and flow controls -- to support epistemic health instead of treating epistemic disease.

    An unbundling proposal with growing, but still narrow, support was the subject of an important debate about social media platform power among scholars in the Journal of Democracy (as I summarized in Tech Policy Press). On further reflection, the case for unbundling as a way to resolve the dilemmas of today becomes stronger by looking farther ahead. Speaking as a systems architect, here are thoughts about these first steps on a long path – to limit current harms and finesse current dilemmas in a way that builds toward a flexible digital social media ecosystem architecture. What is currently seen as failings of individual systems should be viewed as birth pangs in a digital transformation of the entire ecosystem for social construction of truth and value.

    That debate was on proposals to unbundle and decentralize moderation decisions now made by the platforms -- to limit platform power and empower users. The argument is that the platforms have gained too much power, and that, in a democracy, we the people should each have control over what information is fed to us (directly or through chosen agents). Those decisions should serve users -- and not be subject to undemocratic, arbitrary “platform law” -- nor to improper government control (which the First Amendment constrains far more than many would-be reformers recognize). Common arguments against such unbundling are that shifting control to users would do too little to combat current harms and might even worsen them.* Meanwhile, some other observers favor a far more radical decentralization, but that seems beyond reach.

    Here I suggest how we might finesse some concerns about the unbundling proposals -- to position that as a first step that can help limit harms and facilitate other remedies -- while also beginning a path toward meeting the greater challenges of the future. That future should be neither centralized nor totally decentralized, but a constantly evolving hybrid of distributed services, authority, and control. Unbundling filters is a start.

    The moderation dilemma

    The trigger for this debate was Francis Fukuyama’s article on proposals that the ranking and recommender decisions made within the platforms should be spun out into an open market that users can select from. The platforms should not control decisions about what individuals choose to hear, and an open market would spur competition and innovation. Responding to debate comments, Fukuyama recognized concerns that some moderation of toxic content might be too complex and costly to decentralize. He also observed that we face a two-sided issue: not only the promotion of toxic content, but also bans or takedowns that censor some speakers or items of their speech. He suggested that perhaps the control of clearly toxic content – except for the sensitive case of political speech -- should remain under centralized control.

    That highlights the distinction between two mechanisms of moderation that are often confounded -- each having fundamentally different technical/operational profiles:

    Blocking moderation in the form of bans/takedowns that block speakers or their speech from being accessible to any user. Such items are entirely removed from the ecosystem. To the extent this censorship of speech (expression) is to be done, that cannot be a matter of listener choice.

    Promotional moderation in the form of ranking/recommenders that decide at the level of each individual listener what they should hear. Items are not removed, but merely downranked in priority for inclusion in an individual’s newsfeed so they are unlikely to be heard. Because this management of reach (impression) is listener-specific, democratic principles require this to be a matter of listener rights (at least for the most part). Users need not manage this directly but should be able to choose filtering services that fit their desires.
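    The operational difference can be made concrete with a toy sketch in Python (all names, fields, and services here are invented for illustration, not any platform's actual API): a takedown removes an item for every user, while a listener-chosen filtering service merely reorders what that listener sees.

```python
# Toy model contrasting the two moderation mechanisms.
# All names are illustrative; no real platform API is implied.

def apply_takedowns(items, banned_ids):
    """Blocking moderation: banned items vanish for every user."""
    return [it for it in items if it["id"] not in banned_ids]

def chronological_filter(items, user):
    """One hypothetical filtering service: newest first."""
    return sorted(items, key=lambda it: -it["time"])

def civility_filter(items, user):
    """A stricter service: downrank (not remove) items flagged as uncivil."""
    return sorted(items, key=lambda it: (it.get("uncivil", False), -it["time"]))

def build_feed(items, banned_ids, user, filter_service):
    visible = apply_takedowns(items, banned_ids)   # ecosystem-wide blocking
    return filter_service(visible, user)           # listener-chosen ranking

items = [
    {"id": 1, "time": 3},
    {"id": 2, "time": 2, "uncivil": True},
    {"id": 3, "time": 1},
    {"id": 4, "time": 4},  # will be taken down for everyone
]

# Two listeners, two chosen services, same underlying (post-takedown) items.
feed = build_feed(items, {4}, "alice", civility_filter)
feed_chrono = build_feed(items, {4}, "bob", chronological_filter)
print([it["id"] for it in feed])         # [1, 3, 2] -- uncivil item downranked
print([it["id"] for it in feed_chrono])  # [1, 2, 3] -- strictly newest first
```

    The point of the sketch: the takedown is the only decision that affects everyone; every ranking decision is made per listener by a service that listener selected.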

    The decision options map out as in this table:

    Content category:     OK for personal taste | Lawful but awful? | Illegal
    Moderation decision:  OK – listener control | ?                 | Block to all

    As a first approximation, setting aside those lawful but awful boundary cases, this suggests that OK content should be managed by filtering services that listeners choose in an open market -- as has been proposed -- but that blocking of illegal content should remain relatively centralized. That narrows the tricky questions to the boundary zone: How wide and clearly bounded is that zone? Should the draconian censorship of bans/takedowns apply there? How much of it can be trusted to the lighter hand of decentralized ranking/recommenders to moderate well enough? And who decides on bans/takedowns (platforms, government, independent boards)?

    Unbundling is a natural solution for ranking/recommenders, but a different kind of unbundling might also prove useful for bans/takedowns. That censorship authority might be delegated to specialized services that operate at different levels to address issues that vary by nation, region, community, language, and subject domain.

    Sensible answers to these issues will shake out over time – if we look ahead enough to assure the flexibility, adaptability, and freedom for competition and innovation to enable that. The answers may never be purely black or white, but instead, pragmatic matters of nuance and context.

    An ecosystem architecture for social media – from pre-digital to digital transformation

    How can we design for where the puck will be? It is natural to think of “social media” in terms of what Facebook and its ilk do now. The FTC defined that as “personal social networking services…built on a social graph that maps the connections between users and their friends, family, and other personal connections.” But it is already evident this blurs in many ways: with more media-centered services like YouTube and Pinterest; as users increasingly access much of their news via social media sharing and promotion rather than directly from publishers; and as institutions interact with their members via these digital social media channels.

    Meanwhile, it is easy to forget that long before our digital age, humanity evolved a highly effective social media ecology in which communities, publishers, and institutions mediated our discourse as gatekeepers and discovery services. These pre-digital social networks are now being disintermediated by new digital social network systems. But as digitization progresses, such mediating institutions will reinvent themselves and rebuild for this new infrastructure.

    Digital platforms seized dominance over this epistemic ecosystem by exploiting network effects – moving fast and breaking it. Now we are digitizing our culture – not just the information, but the human communication flows, processes, and tools – in a way that will determine human destiny.

    From this perspective, it is apparent that neither full centralization nor full decentralization can cope with this complexity. It requires an architecture that is distributed and coordinated in how it operates, how it is controlled, and by whom. It must be open to many social network services, media services, and intermediary services to serve many diverse communities and institutions. It must evolve organically and emergently, with varying forms of loose or tight interconnection, with semipermeable boundaries -- just as our existing epistemic ecosystem has. Adapting the metaphor of Jonathan Rauch, it will take a “constitution of discourse” that blends bottom-up, top-down, and distributed authority -- in ways that will embed freedom and democracy in software code. Open interoperability, competition, innovation, and adaptability -- overseen with smart governance and human intervention -- will be the only way to serve this complex need.+

    Network effects will continue to drive toward universality of network connectivity. Dominant platforms may resist change, but oligopoly control of this utility infrastructure will become untenable. Once the first step of unbundling filters is achieved, at least some elements of the decentralized and interoperable visions of Mike Masnick, Cory Doctorow, Ethan Zuckerman, and Twitter CEO Jack Dorsey that now may seem impractical will become more attainable and compelling. Just as email sent from one mail system (Gmail, Apple, Outlook, ...) reaches users of every other mail system over shared protocols, postings in one social media service should flow to users of every other social media service. This will be a long, complex evolution, with many business, technical, and governance issues to be resolved. But what alternative is viable?
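    The email analogy can be sketched in miniature (service names, handles, and fields below are hypothetical, and real federation protocols like ActivityPub are far richer): a post published on one service is routed to each follower's inbox on whatever service that follower happens to use.

```python
# Toy federation sketch: posts cross service boundaries the way email
# crosses mail providers. All names and fields are invented.

class Service:
    """One social media service holding inboxes for its own users."""
    def __init__(self, name):
        self.name = name
        self.inboxes = {}            # user -> list of received posts

    def register(self, user):
        self.inboxes[user] = []

    def deliver(self, post, user):
        self.inboxes[user].append(post)

def publish(post, followers, directory):
    """Route a post to each follower's home service, wherever that is."""
    for handle in followers:
        user, service_name = handle.split("@")
        directory[service_name].deliver(post, user)

# Two independent services, interoperating through a shared directory.
a, b = Service("alpha.example"), Service("beta.example")
directory = {"alpha.example": a, "beta.example": b}
a.register("bob")
b.register("carol")

post = {"author": "alice@alpha.example", "text": "hello, fediverse"}
publish(post, ["bob@alpha.example", "carol@beta.example"], directory)
print(len(b.inboxes["carol"]))  # carol received it despite using a different service
```

    As with mail relay, no single service needs to host every user; it only needs to speak the common delivery convention.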

    Facebook Groups and Pages are primitive community and institutional layers that complement our personal social network graphs. Filtering serves a complementary organizing function. Both can interact and grow in richness to support the reinvention of traditional intermediaries and institutions to ride this universal social media utility infrastructure and reestablish context that has “collapsed.”** Twitter’s Bluesky initiative envisions that this ecosystem will grow with a scope and vibrance that makes attempts at domination by monolithic services counterproductive -- and that this broader service domain presents a far larger and more sustainable business opportunity. The Web enabled rich interoperability in information services -- why not build similar interoperability into our epistemic ecosystem?

    Coping with ecosystem-level moderation challenges

    The ecosystem perspective is already becoming inescapable. Zeve Sanderson and colleagues point to “cross-platform diffusion of misinformation, emphasizing the need to consider content moderation at an ecosystem level.” That suggests that filtering services should be cross-platform. Eric Goldman provides a rich taxonomy of diverse moderation remedies, many of which call for cross-platform scope. No one filtering service can apply all of those remedies (nor can one platform), but an ecosystem of coordinating and interoperable tools can grow and evolve to meet the challenges -- even as bad actors keep trying to outwit whatever systems are used.

    Still-broader presumptions remain unstated -- they seem to underlie key disagreements in the Journal of Democracy debate, especially as they relate to tolerance for lawful but awful speech. Natali Helberger explores four different models of democracy -- liberal, participative, deliberative, and critical -- and how each leads to very different ideas for what news recommenders should do for citizens. My interpretation is that American democracy is primarily of the liberal model (high user freedom), but with significant deliberative elements (encouraging consideration of alternative views). This too argues for a distributed-control architecture for our social media ecosystem that can support different aspects of democracy suited to different contexts. Sorting this out rises to the level of Rauch’s “constitution” and warrants a wisdom of debate and design for diversity and adaptability not unlike that of the Federalist Papers and the Enlightenment philosophy that informed them.***

    Designing a constitution for human social discourse is now a crisis discipline. Simple point solutions that lack grounding in a broader ecological vision are likely to fail or create ever deeper crises. Given the rise of authoritarianism, information warfare, and nihilism now exploiting social media, we need to marshal multi-disciplinary thinking, democratic processes, innovation, and resolve.

    Ecosystem-level management of speech and reach

    Returning to filters, the user level of ranking/recommenders blurs into a cross-ecosystem layer of controls, including many suggested by Goldman -- especially flow controls affecting virality, velocity, friction, and circuit-breakers. As Sanderson and colleagues show, these may require distributed coordination across filtering services and platforms (at least those beyond some threshold scale). As suggested previously in Tech Policy Press, financial markets provide a very relevant and highly developed model of a networked global ecosystem with real-time risks of systemic instability, managed under a richly distributed and constantly evolving regulatory regime.****
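    One such flow control can be sketched as code (thresholds, names, and behavior invented here, loosely analogous to a market trading halt): a virality circuit-breaker tracks recent shares of an item and pauses further amplification when velocity exceeds a limit.

```python
# Toy "circuit breaker" for virality, loosely analogous to the trading
# halts used in financial markets. Thresholds and names are invented.
from collections import deque

class ViralityBreaker:
    def __init__(self, max_shares, window_seconds):
        self.max_shares = max_shares
        self.window = window_seconds
        self.share_times = {}        # item id -> recent share timestamps

    def record_share(self, item_id, now):
        """Return True if sharing may proceed, False if the breaker trips."""
        times = self.share_times.setdefault(item_id, deque())
        # Drop share events that have aged out of the velocity window.
        while times and now - times[0] > self.window:
            times.popleft()
        if len(times) >= self.max_shares:
            return False             # velocity too high: add friction, pause reach
        times.append(now)
        return True

breaker = ViralityBreaker(max_shares=3, window_seconds=60)
results = [breaker.record_share("post-1", t) for t in (0, 10, 20, 30)]
print(results)  # [True, True, True, False] -- the fourth rapid share is halted
```

    The appeal of such controls is that they act on reach and velocity without removing speech, which is why they sit naturally in the distributed filtering layer rather than in centralized takedown machinery.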

    These are primarily issues of reach (ranking/recommenders at the listener level) -- but some may be issues of speech (bans/takedowns). As platforms decentralize and interoperate to create a distributed social network that pools speech inputs to feed to listeners through a diversity of channels, the question arises whether bans/takedowns are always ecosystem-wide or may be effected differently in different contexts. There is a case for differential effect -- such as to provide for granularity in national- and community-level control. A related question is whether there should be free-speech zones that are protected from censorship by authoritarian regimes and communities.

    Fukuyama seems most concerned about authoritarian control of political speech, but important questions of wrongful constraint can apply to pornography, violence, incitement, and even science. Current debates over sex worker rights and authoritarian crackdowns on “incitement” by dissidents, as well as the Inquisition-era ban on Galileo’s statement that the Earth moves, show that there are no simple bright lines or infallible authorities. And, to protect against overenforcement, it seems there must be provision for appeals and for some form of retention (with more or less strictly controlled access) for even very objectionable speech.

    Which future?

    The unbundling of filtering systems becomes more compelling from this broader perspective. It is no panacea, and not without its own complications – but it is a first step toward distributed development of a new layer of social mediation functionality that can enable next-generation democracy. Most urgent to preserve democracy is a moderation regime that leverages decentralization to be very open and empowering on ranking/recommenders, while applying a light and context-sensitive hand on the censorship of bans/takedowns. This openness can restore our epistemic ecology to actively supporting health, not just reactively treating disease.

    Which future do we want? Private platform law acting on its own -- or as the servant of a potentially authoritarian government -- to control the seen reality and passions of an unstructured mob? A multitude of separate, uncoordinated platforms, some tightly managed as walled gardens of civility, but surrounded by nests of vipers? Or a flexible hybrid, empowering a digitally augmented upgrade of the richly distributed ecology of mediators of consent on truth and value that, despite occasional lapses and excesses, has given us a vibrant and robust marketplace of ideas -- an epistemic ecology that liberates, empowers, and enlightens us in ways we can only begin to imagine?


    This article expands on articles in Tech Policy Press and Reisman’s blog, listed here.


    *Some, notably Facebook, argue for just a limited opening of filter parameters to user control. That is worth doing as a stopgap. But such remaining private corporate control is too inflexible, and simply not consistent with democracy. Others, like Nathalie Maréchal, argue that the prime issue is the ad-targeting business model, and that fixing it is the priority. That is badly needed, too, but would still leave authoritarian platform law under corporate control, free to subvert democracy.

    A more direct criticism, by Robert Faris and Joan Donovan, is that unbundling filters is “fragmentation by design” that works against the notion of a “unified public sphere.” I would praise that instead as “functional modularity and diversity by design,” and suggest that apparent unity results from a dialectic of constantly boiling diversity. Designing for diversity is the only way to accommodate the context dependence and contingency required in humanity’s global epistemic ecosystem. Complex systems of all kinds thrive on an emergent, distributed order (whether designed or evolved) built on divisions of function and power. This applies to software systems (where it is designed, learning from the failures of monolithic systems) and to economic, political, and epistemic systems (where it evolves from a stew of collaboration).

    **Twitter recently announced similar community features explicitly intended to restore the context that has collapsed.

    ***A recent paper by Bridget Barrett, Katharine Dommett, and Daniel Kreiss (interviewed by Tech Policy Press) offers relevant research suggesting a need to be more explicit about what we are solving for, and that conflicting objectives must be balanced. Maybe what regulation policy should solve for is not the balanced solution itself, but an ecosystem for seeking balanced solutions in a balanced way.

    ****Note the parallels in the new challenges in regulating decentralized finance and digital currency.

    +[Added 9/20] Of course the even more fundamental metaphor comes from nature. Claire Evans likens the "context collapse" on social media to the disruption of monocultures and clear-cutting in forests, ignoring “the wood wide web” -- “just how sustainable, interdependent, life-giving systems work."