Monday, November 29, 2021

Directions Toward Re-Architecting Social Media to Serve Society

My current article in Tech Policy Press, Progress Toward Re-Architecting Social Media to Serve Society, reports briefly on the latest in a series of dialogues on a family of radical proposals that is gaining interest. These discussions have been driven by the Stanford Working Group on Platform Scale and its proposal to unbundle the filtering of items in our social media news feeds from the platforms, moving that function into independent filtering “middleware” services that users select in an open market.

As that article suggests, the latest dialogue, at the Stanford HAI Conference on "Radical Proposals," questioned whether variations on these proposals go too far or not far enough. That suggests policy planners would benefit from more clarity on increases in scope that might be phased in over time, and on just what the long-term vision for the proposal is. The most recent session offered some hints of directions toward more ambitious variations -- which might be challenging to achieve, but might generate broader support by more fully addressing key issues. But these were just hints.

Reflecting on these discussions, this post pulls together some bolder visions along the same lines that I have been sketching out, to clarify what we might work toward and how this might address open concerns. Most notably, it expands on the suggestion in the recent session that data cooperatives are another kind of “middleware” between platforms and users that might complement the proposed news feed filtering middleware.

The current state of discussion

This is best understood after reading my current Tech Policy Press article, but here is the gist:

   The unbundling of social media filtering -- shifting control to users via an open market of filtering services -- is gaining recognition as a new and potentially important tool in our arsenal for managing social media without crippling the freedom of speech that democracy depends on. Instead of platform control, it brings a level of social mediation by users and by services that work as their agents.

   Speaking as members of the Stanford Group, Francis Fukuyama and Ashish Goel explained more of their vision of such an unbundling, gave a brief demo, and described how they have scaled back to become a bit less radical -- to limit privacy concerns as well as platform and political resistance. However, others on the panel suggested that might not be ambitious enough.

   To the five open concerns about these proposals that I had previously summarized -- relating to speech, business models, privacy, competition and interoperability, and technological feasibility -- this latest session highlighted a sixth issue: the social flow graph. That is, the need for filtering to consider not just the content of social media but the dynamics of how that content flows among -- and draws reaction from -- chains of users, with sometimes-destructive amplification. How can we manage that harmful form of social mediation -- and can we achieve positive forms of social mediation?

   That, in turn, brings privacy back to the fore. Panelist Katrina Ligett suggested that another topic at the Stanford conference, Data Cooperatives, was also relevant to this need to consider the collective behavior of social media users. That is something I had written about after reflecting on the earlier discussion hosted by Tech Policy Press. The following section relates those ideas to this latest discussion.

Infomediaries -- another level of middleware -- to address privacy and business model issues

While adding another layer of intermediation and spinning more function out of the platforms may seem to complicate things, the deeper insight available from the dynamics of the flow of discourse will enable more effective filtering -- and more effective management of speech across the board. It will not come easily or quickly -- but any stop-gap remediation should be done with care not to foreclose development toward mining this wellspring of collective human judgment.

The connection of filtering service “middleware” to the other “middleware” of data collectives that Ligett and I have raised has relevance not only to privacy but also to the business and revenue model concerns that Fukuyama and Goel gave as reasons for scaling back their proposals. Data collectives are a variation on what were first proposed as “infomediaries” (information intermediaries) and later as “information fiduciaries.” I wrote in 2018 about how infomediary services could help resolve the business model problems of social media, and recently about how they could help resolve the privacy concerns. The core idea is that infomediaries act as user agents and fiduciaries to negotiate between users and platforms -- and advertisers -- for user attention and data.

My recent sketch of a proposal to use infomediaries to support filtering middleware, Resolving Speech, Biz Model, and Privacy Issues – An Infomediary Infrastructure for Social Media?, suggested not that the filtering services themselves be infomediaries, but that they be part of an architecture with two new levels:

  1. A small number of independent and competing infomediaries that could safeguard the personal data of users, coordinate limits on clearly harmful content, and help manage flow controls. They could use all of that data to run filtering on behalf of...
  2. A large diversity of filtering services – without exposing that personal data to the filtering services (which might have much more limited resources to process and safeguard the data)

Such a two-level structure might enable powerful and diverse filtering services while providing a strong quasi-central, federated support service – insulated from both the platforms and the filtering services. That infomediary service could coordinate efforts to limit dangerous virality in ways that serve users and society, not advertisers. Those infomediaries could also negotiate as agents for the users for a share of any advertising revenue -- and take a portion of that to fund themselves, and the filtering services.
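To make that division of labor more concrete, here is a minimal sketch in Python. All class names, method signatures, and data shapes here are hypothetical illustrations of my own, not part of any actual proposal: the point is only that the infomediary holds the raw interaction data in trust and hands filtering services derived signals rather than the data itself.

```python
# Hypothetical sketch of the two-level architecture described above.
# Names and interfaces are illustrative assumptions, not any proposal's spec.

from dataclasses import dataclass, field


@dataclass
class Infomediary:
    """Holds raw personal/interaction data in trust; exposes only derived signals."""
    _raw_interactions: dict = field(default_factory=dict)  # user_id -> interaction log

    def record(self, user_id: str, interaction: dict) -> None:
        self._raw_interactions.setdefault(user_id, []).append(interaction)

    def derived_signals(self, user_id: str) -> dict:
        """Return aggregate, privacy-preserving features -- never the raw log."""
        log = self._raw_interactions.get(user_id, [])
        return {
            "topics_of_interest": sorted({i["topic"] for i in log}),
            "activity_level": len(log),
        }


class FilteringService:
    """Ranks items using only the signals the infomediary chooses to share."""

    def __init__(self, infomediary: Infomediary):
        self.infomediary = infomediary

    def rank(self, user_id: str, candidate_items: list) -> list:
        signals = self.infomediary.derived_signals(user_id)
        interests = set(signals["topics_of_interest"])
        # Toy relevance rule: items on topics the user engages with rank first.
        return sorted(candidate_items,
                      key=lambda item: item["topic"] in interests,
                      reverse=True)
```

The design choice of exposing only aggregates is what would let a large diversity of small filtering services compete without each one becoming a privacy risk in its own right.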

With infomediaries, the business model concerns about sustaining filtering services, and insulating them from the perverse incentives of the advertising model to drive engagement, might become much less difficult than currently feared.

   Equitable revenue shares in any direction can be negotiated by the infomediaries, regardless of just how much data the filtering services or infomediaries control, who sells the ads, or how much of the user interface they handle. That is not a technical problem but one of negotiating power. The content and ad-tech industries already manage complex multi-party sales and revenue sharing for ads -- in Web, video, cable TV, and broadcast TV contexts -- which accommodate varying options for which party sells and places ads, and how the revenue is divided among the parties. (Complex revenue sharing arrangements through intermediaries have long been the practice in the music industry.)

   Filtering services and infomediaries could also shift incentives away from the perversity of the engagement model. Engagement is not the end objective of advertisers, but only a convenient surrogate for sales and brand-building. Revenue shares to filtering services and infomediaries could be driven by user-value-based metrics rather than engagement -- even something as simple as MAUs (monthly active users). That would better align those services with the business objective of attracting and keeping users, rather than addicting them. Some users may choose to wear blinders, but few will agree to be manipulatively driven toward anger and hate if they have good alternatives. But now the platform's filters are the only game in the platform's town.
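As a toy illustration of that incentive shift, the sketch below (with invented numbers and service names) splits an advertising revenue pool by MAU rather than by engagement time -- so a service earns more by retaining users, not by maximizing how long it keeps them scrolling.

```python
# Hypothetical sketch: splitting an ad revenue pool by monthly active users
# (MAU) rather than engagement minutes, per the incentive argument above.
# The pool size, service names, and proportional rule are invented examples.

def mau_revenue_shares(ad_revenue_pool: float, mau_by_service: dict) -> dict:
    """Each filtering service's share is proportional to its active users,
    not to how long it keeps them scrolling."""
    total_mau = sum(mau_by_service.values())
    return {service: ad_revenue_pool * mau / total_mau
            for service, mau in mau_by_service.items()}


shares = mau_revenue_shares(
    ad_revenue_pool=1_000_000.0,
    mau_by_service={"filter_a": 40_000, "filter_b": 10_000},
)
print(shares)  # {'filter_a': 800000.0, 'filter_b': 200000.0}
```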

Related strategies that build on this ecosystem to filter for quality

There might be more agreement on the path toward social media that serve society if we shared a more fleshed-out vision of what constructively motivated social media might do, and how that would counter the abuses we currently face. Some aspects of the power that better filtering services might bring to human discourse are suggested in the following:

Skeptics are right that user-selected filtering services might sometimes foster filter bubbles. But they fail to consider the power that multiple services seeking to filter for user value might achieve by working in “coopetition.” Motivated to use methods like these, a diversity of filtering services can collaborate to mine the wisdom of the crowd that is hidden in the dynamics of the social flow graph -- how users interact with one another -- and can share and build on these insights into reputation and authority. User-selected filtering services may not always drive toward quality for all users, but collectively, a powerful vector of emergent consensus can bend toward quality. The genius of democracy is its reliance on free speech to converge on truth -- when mediated toward consensus by an open ecosystem of supportive institutions. Well-managed and well-regulated technology can augment that mediation, instead of disrupting it.
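One hedged sketch of what “mining the social flow graph” for reputation might mean in practice: a PageRank-style power iteration in which users who are amplified by reputable users gain reputation themselves. The graph, damping factor, and interpretation below are illustrative assumptions of mine, not a description of any deployed system.

```python
# Toy power iteration over a reshare graph: a user's reputation grows when
# reputable users amplify that user's content. Simplified -- it ignores the
# leaked mass of "dangling" users who reshare nothing.

def flow_reputation(reshares: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """reshares maps each user to the users whose content they amplify."""
    users = set(reshares) | {u for targets in reshares.values() for u in targets}
    rep = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        new_rep = {u: (1 - damping) / len(users) for u in users}
        for sharer, targets in reshares.items():
            if targets:
                weight = damping * rep[sharer] / len(targets)
                for target in targets:
                    new_rep[target] += weight
        rep = new_rep
    return rep
```

Crucially, any one filtering service sees only a slice of this graph; the argument above is that services sharing such reputation signals could do collectively what none can do alone.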

Phases – building toward a social media architecture that serves society

   The Stanford Group’s concerns about “political realism” and platform pushback have led them to a basic level of independent, user-selectable labeling services. That is a limited remedy, but may be valuable in itself, and as a first step toward bolder action.

   Their intent is to extend from labeling to ranking and scoring, initially with little or no personal data. (It is unclear how useful that can be without user interaction flow data, but it is also a step worth testing.)

   Others have proposed similar basic steps toward more user control of filtering. In addition to proposals I cited this spring, the proposed Filter Bubble Transparency Act would require that users be offered an unfiltered reverse-chronological feed. That might also enable independent services to filter that raw feed. Jack Balkin and Chris Riley have separately suggested that Section 230 be a lever for reform by restricting safe harbors to services that act as fiduciaries and/or that provide an unfiltered feed that independent services can filter. (But again, it is unclear how useful that filtering can be without access to user interaction flow data.)

   Riley has also suggested differential treatment of commercial and non-commercial speech. That could enable filtering that is better-tailored to each type.

   The greatest benefit would come with more advanced stages of filtering services that would apply more personal data about the context and flow of content through the network, as users interact with it, to gain far more power to apply human wisdom to filtering (as I have been suggesting). That could feed back to modulate forward flows, creating a powerful tool for selectively damping (or amplifying) the viral cascades that are now so often harmful (a toy sketch of such modulation follows this list).

    Infomediaries (data cooperatives) could be introduced to better support that more advanced kind of filtering, as well as to help manage other aspects of the value exchange with users relating to privacy and attention that are now abused by “surveillance capitalism.”
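As promised above, here is a toy sketch of flow modulation. The velocity threshold, the 0-to-1 quality signal, and the idea of scaling reshare fan-out are placeholders of my own, meant only to show that damping can be selective and graduated rather than a blunt takedown.

```python
# Illustrative sketch of flow modulation: damping further amplification of a
# cascade as its velocity outruns evidence of quality. The threshold and
# quality signal are placeholder assumptions, not a proposed standard.

def amplification_factor(shares_per_hour: float, quality_score: float,
                         velocity_limit: float = 500.0) -> float:
    """Return a multiplier in [0, 1] applied to how widely new reshares fan out.
    Fast-moving, low-quality cascades are damped; trusted content flows freely."""
    if shares_per_hour <= velocity_limit:
        return 1.0
    overshoot = shares_per_hour / velocity_limit
    # A high quality_score (0..1) offsets the velocity penalty.
    return max(0.0, min(1.0, quality_score / overshoot))
```

For example, a cascade running at 2,000 shares per hour with a quality score of 0.5 would have its further fan-out scaled to 0.125, while a slow or well-vetted cascade would flow untouched.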

Without this kind of long-term vision, we risk two harmful errors. One is overreliance on oppressive forms of mediation that stifle the free inquiry that our society depends on, and that the First Amendment was designed to protect. The other is overly restrictive privacy legislation that privatizes community data that should be used to serve the common good. Of course there is a risk that we may stumble at times on this challenging path, but that is how new ecosystems develop.

---

Running updates on these important issues can be found here, and my updating list of Selected Items is on the tab above.
