Thursday, January 16, 2025

#FreeOurFeeds - Another Step Toward the Vision

As perhaps the first to use the phrase "free our feeds" and the Twitter hashtag #FreeOurFeeds, I am gratified to see the launch of the Free Our Feeds Foundation to embark on a major step toward that vision. 

There have been many small steps to free our feeds, now seen as an urgent need to "billionaire-proof" our social media connectivity. Musk and then Zuck have shown the perils of the "loaded weapon" we have left on the table of online discourse, by so shamelessly picking it up to use for their own ends. We can only guess where they -- and others like them -- or worse -- will point their weapons next.

Some see the Mastodon "fediverse" as a major step in this direction, arguably so, but many are coming to see Bluesky as a larger step toward the portability and open interoperability of the full range of functions needed to free us from platform lock-in and manipulation. It is also interesting that similar steps to more fully open the Bluesky and Mastodon ecosystems were announced on the same day, 1/13/25. I am hopeful that both efforts will succeed, and that the Mastodon and Bluesky ecosystems will grow -- and gain high levels of interoperability with each other.

Bluesky currently seems to be the most open to building high levels of function and extensibility, which I have always seen as very important. We are in the early days of social media, just learning to crawl. To leverage this technology so that we can walk, run, and fly -- while remaining democratic and free -- it must be kept open to user control and to the control of communities. That will enable us to re-energize the social mediation ecosystem I have written about recently, and in many other works listed here.

A key aspect of Bluesky and its AT Protocol (not yet in the Mastodon architecture, as I understand it) is that each of three levels can be separately managed and replicated: 1) the app, 2) the relays that tie app instances together, and 3) the independently selectable feed algorithms. The federation of the relays is important because they are resource-heavy services, not very amenable to lightly resourced community managers, but capable of being secured and managed by trusted organizations to support advanced algorithms in ways that can also preserve privacy, as I described on 11/3/21 and updated in Tech Policy Press. The Free Our Feeds Foundation promises to take a large step in that direction for the Bluesky ecosystem.

As Cory Doctorow, Mr. Enshittification himself, said of this effort:

If there's a way to use Bluesky without locking myself to the platform, I will join the party there in a hot second. And if there's a way to join the Bluesky party from the Fediverse, then goddamn I will party my ass off.

Back to my personal interest here, I began using the rallying cry of Free Our Feeds! in a blog post on 2/11/21 (the earliest use of that phrase I could find on Google), and then used the hashtag #FreeOurFeeds on Twitter on 2/13/21, apparently the first use of that hashtag. I continued using this hashtag often on Twitter, and featured a fuller treatment of the concept in a 4/22/21 article in Tech Policy Press that included the diagram here. 

Of course 2021 was not very long ago, and many people had already become advocates for algorithmic choice. But I also take pride in being perhaps the longest-serving advocate for these ideas.

The hope is that Bluesky Social PBC and Free Our Feeds Foundation can catalyze a vibrant open ecosystem -- to create a new infrastructure for social media that lets a thousand flowers bloom -- and can grow and evolve over many sociotechnical generations.

Thursday, January 09, 2025

New Logics for Social Media and AI - "Whom Does It Serve?"

[Pinned -- Originally published 12/7/24 at 3:47pm]

[UPDATED 12/17/24 to add Shaping the Future of Social Media with Middleware (with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and others; Foundation for American Innovation, Georgetown University McCourt School of Public Policy, and Stanford Cyber Policy Center; details below).] 

A collection of recent works presents related aspects of new logics for the development of social media and AI -- to faithfully serve individuals and society, and to protect democratic freedoms that are now in growing jeopardy. The core question is "Whom does it serve?"*

This applies to our technology -- first in social media, and now as we build out broader and more deeply impactful forms of AI. It is specifically relevant to our technology platforms, which now suffer from "enshittification" as they increasingly serve themselves at the expense of their users, advertisers, other business partners, and society at large. These works build to focus on how this all comes down to the interplay of individual choice (bottom-up) and social mediation of that choice (top-down, but legitimized from bottom-up). That dialectic shapes the dimension of "whom does it serve?"* for both social media and AI.

Consider the strong relationship between the “social” and “media” aspects of AI -- and how that ties to issues arising in problematic experience with social media platforms that are already large scale:

  • Social media increasingly include AI-derived content and AI-based algorithms, and conversely, human social media content and behaviors increasingly feed AI models.
  • The issues of maintaining strong freedom of expression, as central to democratic freedoms in social media, translate to and shed light on similar issues in how AI can shape our understanding of the world – properly or improperly.

These works focus on how: 1) the need for direct human agency applies to AI; 2) that same need in social media requires deeper remediation than commonly considered; 3) middleware interoperability for enabling user choice is increasingly being recognized as the technical foundation for this remediation; and 4) freedom (in both the natural and digital worlds) is not just a matter of freedom of expression, but of freedom of impression (choice of whom to listen to). 

Without constant, win-win focus on this essential question of "whom does it serve?" as we develop social media and AI, we risk the dystopia of "Huxwell" (a blend of Huxley's Brave New World and Orwell's 1984).**  

  • New Perspectives on AI Agentiality and Democracy: "Whom Does It Serve?"
     (with co-author Richard Whitt, Tech Policy Press, 12/6/24) - Building toward optimal AI relationships and capabilities that serve individuals, society, and freedom requires new perspectives on the functional dimensions of AI agency and interoperability. Individuals should be able to just say "Have your AI call my AI." To do that, agents must develop in two dimensions:
    1. Agenticity, a measure of capability - what can it do?
    2. Agentiality, a measure of relationship - whom does it serve?
  • Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) (Tech Policy Press, 10/24/24) - Overview of new framing that strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media. Human discourse is, and remains, a social process based on three essential pillars that must work together:
    1. Agency
    2. Mediation
    3. Reputation 
  • NEW: Shaping the Future of Social Media with Middleware (Foundation for American Innovation and Georgetown University McCourt School of Public Policy, 12/17/24) -- Major team effort with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and many other notables. A white paper building on the 4/30/24 symposium that I helped organize, held at the Stanford Cyber Policy Center, which assembled leading thinkers at the nexus of social media, middleware, and public policy. The only comprehensive white paper to offer a thoughtful assessment of middleware’s promise, progress, and issues since the 2020 Stanford Group paper. The goal is to operationalize the concept of middleware and provide a roadmap for innovators and policymakers. (The above two pieces extend this vision in broader and more forward-looking directions.)
  • New Logics for Governing Human Discourse in the Online Era (CIGI Freedom of Thought Project, 4/25/24) - Leading into the above pieces, this policy brief pulls together and builds on ideas about how freedom of impression guides freedom of expression without restricting it, and how combining 1) user agency, 2) a restored role for our traditional social mediation ecosystem, and 3) systems of social trust all combine to synergize that process for the online era. It offers a proactive vision of how that can enable social media to become ever more powerful and beneficial "bicycles for our minds."
*Alluding to the Arthurian legend of the Holy Grail.
**Suggested by Jeff Einstein and teased in his video.

(Originally published 12/7/24 at 3:47pm, revised 12/22/24 -- with dateline reset to pin it at or near the top of this blog)

Wednesday, January 08, 2025

Beyond the Pendulum Swings of Centralized Moderation (X/Twitter, Meta, and Fact Checking)

The crazy pendulum swings of centralized moderation by dominant social media platforms are all over the news again, as nicely summarized by Will Oremus and explored by a stellar Lawfare panel of experts. 

We have seen one swing toward what many (mostly on the right) perceive as blunt over-moderation and censorship that intensified around the 2016 election and its aftermath. And now, with the 2024 election and its aftermath, a swing away, to what others (mostly on the left) view as irresponsibly enabling uncontrolled cesspools of anger, hate, and worse. This pendulum is clearly driven in large part by the political winds (which it influences in turn), a question of whose ox gets gored and who has the power to influence the platforms -- "Free speech for me, but not for thee."

This will remain a disruptive pendulum -- one that can destroy the human community and its collective intelligence -- until we step back and take a smarter approach to context and diversity of our perceptions of speech. More reliance on community moderation, as X/Twitter and Meta/Facebook/Threads are now doing, points theoretically in the right direction: to democratize that control -- but is far from being effective. Even if they really try, centralized platforms are inherently incapable of doing that well.

Middleware as systems thinking on how to do better

Three of the speakers on the Lawfare panel were coauthors/contributors with me in a comprehensive white paper, based on a symposium on a partially decentralized approach called "middleware." It proposes an open market in independent curation and moderation services that sit between each user and their platforms. These services can do community-based moderation in a fuller range of ways, at a community level, much more like the way traditional communities have always done "moderation" (better thought of as "mediation") of how we communicate with others. This new middleware paper explains the basics, why it is a promising solution, and how to make it happen. (For a real-world example of middleware, but still in its infancy, consider Bluesky.)

As for the current platform approach to "community moderation," many have critiqued it, but I suggest a deeper way to think about this, drawing on how humans have always mediated their speech. Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) is a recent piece on extending current ideas on middleware to support this solution that has evolved over centuries of human society. The three pillars are: User Agency, Social Mediation, and Reputation. 

Toward effective community moderation

The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings (from 2018) digs deeper into why simplistic attempts at community surveys fail, and how the same kind of advanced analysis of human inputs that made Google win the search engine wars can be applied to social media. A 2021 post and a 2024 policy brief update that.

To understand why this is important, consider what I call The Zagat Olive Garden Problem. In the early 90s, I noticed this oddity in the popular Zagat guide, a community-rating service for restaurants: The top 10 or so restaurants in NYC were all high-priced, haute cuisine or comparably refined, except one: Olive Garden. Because Olive Garden food was just as good? No, because far more people knew it from their many locations, and they were attracted to a familiar brand with simple, but tasty, food at very moderate prices, and put off by very high prices. 

Doing surveys where all votes are counted equally may sound democratic, but foolishly so. We really want ratings from those with a reputation for tastes and values we relate to (but leavened with healthy diversity on how we should broaden our horizons). That is what good feed and recommender algorithms must do. For that, we need to "rate the raters and weight the ratings," and do so in the relevant context, as that post explains.

Back to the pendulum analogy, consider how pendulums work -- especially the subtle phenomenon of entrainment (perhaps blurring details, but suggestive): 

  • Back in the 1660s, Huygens, inventor of the pendulum clock, discovered that if two were mounted on the same wall, their pendulum swings gradually became synchronized. That is because each interacts with the shared wall to exchange energy in a way that brings them into phase.
  • Simplistically, moderation is a pendulum that can swing from false positives to false negatives. Each conventional platform has one big pendulum controlled by one owner or corporation that swings with the political wind (or other platform influences). Platform-level community moderation entrains everyone to that one pendulum, whether it fits or not -- resulting in many false positives and false negatives, often biased to one side or the other.
  • Alternatively, a distributed system of middleware services can serve many individuals or communities, each with their own pendulums that swing to their own tastes.
  • Within communities, these pendulums are tightly linked (the shared wall) and tend to entrain.
  • Across communities, there are also weaker linkages, in different dimensions, which still nudge toward some entrainment.
  • In addition to these linkages in many dimensions, instead of being rigid, the "walls" of human connection are relatively elastic in how they entrain.
  • The Google PageRank algorithm is based on advanced math (eigenvalues) and can treat individual search engine users and their intentions as clustering into diverse communities of interest and value -- much like a network of pendulums all linked to one another by elastic "walls" in a multidimensional array.
  • Similar algorithms can be used by diverse middleware services to distill community ratings with the same nuanced sensitivity to their diverse community contexts. Not perfectly, but far better than any centralized system.
In addition, part of the problem with current community notes, and any form of explicit ratings of content, is getting enough people to put in the effort. Just as Google PageRank uses implicit signals of approval that users do anyway (linking to a page), variations for social media can also use implicit signaling in the form of likes, shares, and comments (and more to be added) to draw on a far larger population of users, structured into communities of interest and values.
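To make "rate the raters and weight the ratings" concrete, here is a minimal sketch of one way such an algorithm could work (the function and its weighting scheme are my own illustrative assumptions, not Google's PageRank or any production system): item scores are reputation-weighted averages of ratings, and each rater's reputation is then updated from how closely their ratings track the consensus, iterating until both stabilize.

```python
import numpy as np

def rate_the_raters(ratings, iters=50):
    """Iteratively estimate item scores and rater reputations.

    ratings: (n_raters, n_items) array, with np.nan where a rater
    did not rate an item. Hypothetical sketch, not a production system.
    """
    mask = ~np.isnan(ratings)            # which (rater, item) cells exist
    r = np.nan_to_num(ratings)
    rep = np.ones(ratings.shape[0])      # start with equal reputations
    for _ in range(iters):
        w = rep[:, None] * mask          # reputation weight per rating
        scores = (w * r).sum(0) / w.sum(0)       # weighted mean per item
        err = np.where(mask, r - scores, 0.0)    # each rater's disagreement
        mse = (err ** 2).sum(1) / mask.sum(1)
        rep = 1.0 / (1.0 + mse)          # low disagreement -> high reputation
    return scores, rep
```

With two raters who broadly agree and one contrarian, the contrarian's reputation sinks and the consensus scores move toward the majority view -- the Zagat Olive Garden effect is damped because raw vote counts no longer dominate. Real systems would add the context-sensitivity and community clustering discussed above.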

Of course there are concerns that the decentralization of middleware might worsen fragmentation and polarization. While it might have some such effect in some contexts, there is also the opposing effect of reducing harmful virality feedback cascades. Consider the fluid dynamics of an old fashioned metal ice cube tray, and how water sloshing in the open tray forms much more uncontrollable waves than in the tray with the separating insert in place.

The only effective and scalable solution to social media moderation/curation/mediation is to build distributed middleware services, along with tools for orchestrating the use of a selection of them to compose our individual feeds. That too can be done well or badly, but only with a collective effort to do our best on a suitably distributed basis can we succeed. 

Thursday, October 24, 2024

Now on Tech Policy Press: Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three)

My new short article, Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three), is now on Tech Policy Press -- after extensive workshopping with dozens of experts. 

This new framing strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media

Human discourse is a social process. It depends on three pillars that must work together:
  1. Agency
  2. Mediation
  3. Reputation 
Lack of attention to all three pillars and their synergy has greatly harmed current social media. Without strong support for all three pillars -- enabled by middleware for interoperation and open innovation -- social media will likely struggle to balance chaos and control. 

Advocates of middleware have brought increasing attention to the need for user agency -- but without strong support for the other two pillars, there remain many issues. Agency must combine with mediation and reputation to rebuild the context of "social trust" that is being lost. By enabling attention to all three pillars, open, interoperable middleware can help to:
  • Organically maximize rights to expression, impression, and association in win-win ways,
  • Cut through speech governance dilemmas that lead to controversy and gridlock, and
  • Support democracy and protect against chaos, authoritarianism, or tyranny of the majority.
There are also helpful supplements on my blog. 
Broader background for these pillars and why we need to attend to them is in my CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

(My thanks to the many experts who have provided encouragement and helpful feedback in individual discussion and at the April FAI/Stanford symposium on middleware -- and special thanks to Luke Thorburn for invaluable suggestions on simplifying the presentation of these ideas.)

---
[Update 10/31/24:] In addition to the foundation on "social trust" by Laufer and Nissenbaum that I cited in my article, I just found an enlightening sociological perspective from Thorsten Jelinek, How Social Media and Tokenization Distort the Fabric of Human Relations.

Sunday, October 13, 2024

Making Social Media More Deeply Social with Branded Middleware

This vision of social media future is meant to complement and clarify the vision behind many of my other works (such as this, see list of selected pieces at the end). It assumes you have come here after seeing at least one of those (but includes enough background to also be read first).

Business opportunity – start now, and grow from there:

     Managers of the NY Times, small local news services, or any other organization that has built a strong community can use the following model to build a basic online middleware service business, starting now.

     For example, Bluesky could be a base platform for building initial proof-of-concept services along these lines that could develop and grow into a major business.

[If you are impatient, jump to the section on "Branding"]

It is clear that social media technology is not serving social values well. But it is not so clear how to do better. I have been suggesting that the answer begins in learning from how we, as a society, curated information flows offline. (These issues are also increasingly relevant to emerging AI.)

This piece envisions how an offline curation “brand” with an established following – like the New York Times, or many others, including non-commercial communities of all kinds – could extend their curatorial influence, and the role of their larger community, more deeply into the digital future of thought. (Of course, much the same kind of service can be built as a greenfield startup, as well, but having an established community reduces the cold-start problem.)

Building on middleware – the Three Pillars

I and many others have advocated for “middleware” services, a layer of enabling technology that sits between users and platforms to give control back to users over what goes into each of our individual feeds. But that is just the start of how that increased user agency can support healthy discourse and limit fragmentation and polarization in our globally online world.

 The pillars I have been writing about are:

  1. Individual agency, the starting point of democratic free choice over what we say to whom, what individuals we listen to, and what groups we participate in.
  2. Social mediation, the social processes, enabled by an ecosystem of communities and institutions of all kinds that influence and propagate our thoughts, expression, and impression. (For simple background, see What Is a Social Mediation Ecosystem?)
  3. Reputation, the quality metrics, intuitively developed and shared to decide which individuals and communities are trustworthy, and thus deserve our attention (or our skepticism).

Middleware can sit on top of our basic social networking platforms to support the synergistic operation of all three pillars, and thus help make our discourse productive.

In the offline world of open societies, there is no single source of “middleware” services that guide us, but an open, organic, and constantly adjusted mix of many sources of collective support. People grow up learning intuitively to develop and apply these pillars in ever-changing combinations.

Software is far more rigid than humans. Online middleware is a technique for enabling the same kind of diversity and “interoperation” – of attention agent services for us to choose from, and to help groups fully participate in them – so we can dynamically compose the view of the world we want at any point in time.

Bluesky currently offers perhaps the best hint at how middleware services will be composed, steered, and focused – as our desires, tasks, and moods change. Just keep in mind that current middleware offerings are still just infants learning to crawl.

As we may think …together

Vannevar Bush provided a prescient vision of the web in 1945 (yes, 1945!) – in his Atlantic article “As We May Think.” Its technology was quaint, but the vision of how humans can use machines to help us think was very on-point, and inspired the creation of the web. Now it is time for a next level vision – of how we may think together – even if the details of that vision are still crude.

Current notions of middleware have been focused primarily on user agency, and just beginning (as in Bluesky) to consider how we need not just a choice of a single middleware agent service, but to flexibly compose and steer among many attention agent services. Steve Jobs spoke of computers as “bicycles for our minds.” As we conduct our discourse, middleware-based attention agent services can give us handlebars to steer them and gear shifts to deal with varying terrain and motivations. They can give us “lenses,” for focusing what we see from our bicycles.

To build out this capability, we will need at least two levels of user-facing middleware services:

     Many low level service agents that curate for specific objectives of subject domain, styles, moods, sources, values, and other criteria.

     One or more high level service agents that make it easy to orchestrate those low level agents, as we steer them, shift gears, and change our focus, creating a consolidated ranking that gives us what we want, and screens out what we do not want, at any given time.

Just how those will work will change greatly over time as we learn to drive these bicycles, and providers learn to supply useful services – “we shape our tools and our tools shape us.” Emerging AI in these agents will increase the ease of use, and the usable power of the bicycles – but even in the age of AI, the primary intelligence and judgment must come from the humans that use these systems and create the terrain of existing and new information and ideas (not just mechanically reassembled tokens of existing data) that we steer through.
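As a sketch of how the two levels above might fit together, consider a high-level orchestrating agent that blends the ranked candidates from several low-level attention agents under user-set slider weights, with saved presets as "gear shifts." All names and weights here are hypothetical illustrations, not any existing service's API:

```python
def compose_feed(agent_results, weights, top_n=10):
    """Blend low-level agents' candidates into one consolidated ranking.

    agent_results: {agent_name: [(item_id, score), ...]}, scores in [0, 1].
    weights: {agent_name: slider value in [0, 1]} set by the user.
    """
    combined = {}
    for name, results in agent_results.items():
        w = weights.get(name, 0.0)       # slider at 0 screens an agent out
        for item, score in results:
            combined[item] = combined.get(item, 0.0) + w * score
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

# Saved presets act as gear shifts for different moods.
PRESETS = {
    "news": {"hard_news": 1.0, "sports": 0.2, "ideas": 0.5},
    "relax": {"hard_news": 0.1, "sports": 0.8, "ideas": 0.3},
}
```

Steering, in this picture, is just adjusting the weight vector -- continuously via sliders, or discretely via presets -- while the low-level agents do the domain-specific curation.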

====================================================
Here is the business opportunity:
====================================================

Branding – a “handle” for intuitively easy selection  -- and signaling value

Yes, choosing middleware services seems complicated, and skeptics rightly observe that most users lack the skill or patience to think very hard about how to steer these new bicycles for our minds. But there are ways to make this easy enough. One of the most promising and suggestive is branding – a powerful and user-friendly tool for reliably selecting a service to give desired results. Take the important case of news services:

     If we try to select news stories at the low level of all the different dimensions of choice – subject matter, style, values, and the like – of course the task would be very complex and burdensome.

     But many millions easily choose what mix of CNN, MSNBC, Fox News, PBS, or less widely used brands they want to watch at any time. The existing brand equity and curation capabilities of such media enterprises are now being squandered by digital platforms that offer such established service brands only rudimentary integration into their social media curation processes. With proper support, both established and new branded middleware services can establish distinctive sensibilities that can make choice easy.

Importantly, branding also serves marketing and revenue functions in powerful ways that can be exploited by middleware services. Once established and nurtured, a brand attracts users on the basis that it offers known levels of quality, and as catering to selective interests and tastes. "It's Not TV, It's HBO" encapsulated the power of HBO's brand in the heyday of premium TV.

The New York Times as a branded curation community: 

Consider the New York Times as just one example of branded curation middleware that could serve as a steerable lens into global online discourse. It could just as well be News Corp, CNN, Sports Illustrated, or Vogue – or your local newspaper (if you still have one!) – or your town or faith community, a school, a civil society organization, a political party, a library, a bowling league – or whatever group or institution that wants to support its uniquely focused (but overlapping and not isolated) segment of the total social mediation ecosystem.

Consider how all three pillars can work and synergize in such a service:

User agency comes in through our participation as readers, and as speakers in any relevant mode – posts, comments, likes, shares, letters to the editor, submissions for Times publications. This can be addressed on at least two levels:

     Low level attention service agents that find and rank candidate items for our feeds and recommenders. This is much as we now choose from an extensive list of available email newsletters from the Times.

     Higher level middleware composing agents would help compose these low-level choices – and facilitate interoperation with similar services from other communities – to build a composite feed of items from the Times and all our other chosen sources. They could offer sliders to decide what mix to steer into a feed at any given time, and saved presets to shift gears for various moods, such as news awareness/analysis, sports/entertainment, challenging ideas, light mind expansion, and diversion/relaxation.

(Different revenue models may apply to different services, levels, and modes of participation, just as some NY Times features now may cost extra.)

Social mediation processes come in to our user interface at two levels of curation:

     User-driven curation: Much like current platforms, the Times low-level services can rank items based on signals from the community of Times users – their likes, shares, comments, and other signals of interest and value. This might distinguish subscribers versus non-subscribing readers. Subscribers might be more representative of the community, but non-subscribers might bring important counterpoints. Other categories could include special users, such as public figures in various political, business, or professional categories. As such services mature, these signals can be expanded in variety to be far more richly nuanced, such as to give clearer feedback and be categorized by subject domains of primary involvement. 

     Expert-driven curation: The Times editorial team can be drawn on (and potentially augmented with supportive levels of AI) to provide high quality expert curation services in much the same way, in whatever mix desired. This could include both their own contributions, and their reactions to readers’ contributions.
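A minimal sketch of the user-driven side of this curation might rank items by engagement signals, weighting each signal by the class of user who sent it (all weights and category names here are illustrative assumptions, not the Times' actual systems):

```python
# Illustrative weights: richer signals and more trusted user classes count more.
SIGNAL_WEIGHTS = {"like": 1.0, "comment": 1.5, "share": 2.0}
USER_CLASS_WEIGHTS = {"staff": 3.0, "subscriber": 1.5, "reader": 1.0}

def rank_items(events, top_n=5):
    """events: iterable of (item_id, signal, user_class) tuples.

    Returns item ids ordered by community-weighted engagement score.
    """
    scores = {}
    for item, signal, user_class in events:
        scores[item] = scores.get(item, 0.0) + (
            SIGNAL_WEIGHTS.get(signal, 0.0)
            * USER_CLASS_WEIGHTS.get(user_class, 1.0)
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Expert-driven curation could then be folded in simply as another highly weighted user class, or as a separate agent blended at the composing level.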

Reputation systems that keep score of quality and trust feedback on both users and content items – that arise from those mediation processes – can also be valuably focused on the Times community:

     At a coarse level, we might make broad assumptions that differentiate the editorial and journalism staff, subscribers, and non-subscribing readers (as part of the basic mediation process), but a reputation system could distinguish among very different levels of reputation for quality of participation in many dimensions, such as expertise, judgment, clarity, wisdom, civility, and many more – in each of many subject domains.

     Reputation systems might also be tuned to Times reporters and editors, and their inputs to reputations of content items and users. But the true power of this kind of service is its crowdsourcing from not just the Times staff, but from its unique extended community. One could choose to ignore the staff, and just turn their lens on the community, or vice versa.

Enterprise-class community support integration – and simple beginnings

To fully enable this would require new operational support services that integrate the operation of open online social media platform services (like Bluesky now, or maybe someday Threads) with the operations of the Times. As the technology for multi-group participation is built out beyond current rudimentary levels, it can integrate with the operation of each group, including the enterprise-class systems that drive the operations of the Times. This might include the kind of functionality and integration offered by CRM (customer relationship management) systems for managing all of the Times’ interactions with its customers, as well as the CMS (content management system) used to manage its journalism content, and the SMS (subscription management systems) that manage revenue operations.

Doing all of this fully will take time and effort – but some of it could be done relatively easily, such as in an attention agent that ranks items based on the signals of Times community members, as distinct from those of the general network population. The Times could begin a trial of this in the near term by exploiting the basic middleware capabilities already available: creating a Bluesky server instance (using the open Bluesky server code and interoperation protocols) and their own custom algorithms. 

A large, profitable (or otherwise well-funded) business like the Times could develop and operate middleware software itself (if the social media platform allows that, as Bluesky does), but smaller organizations might need a shared “middleware as a service” (MaaS) software and operations provider to do much of that work.

A user steered, intuitively blended, mix of diverse sub-community feeds

Even at a basic level, imagine how doing this for many such branded ecosystem groups could enable users to easily compose feeds that bring them a diverse mix of quality inputs, and to steer and adjust the lenses in those feeds and searches to focus our view as we desire, when we desire.

Similar middleware services could be based on all kinds of groups – for example:

     Local news and community information services – much like the Times example, for where you live now, used to live, or want to live or visit.

     Leadership and/or supporters of political parties or civil society organizations – issues, platforms/policies, campaigns, turnout, surveys, fact-checking, and volunteering.

     Professional and/or amateur players and/or coaches for sports – catering to teams, fans, sports lore, and fantasy leagues.

     Faculty, students, and/or alumni from universities – selecting for students, faculty, alumni, applicants, parents.

     Librarians and/or card holders for library systems – selecting for discovery, reading circles, research, criticism, and authors.

     Leaders and/or adherents to faith communities – for community news, personal spiritual issues, and social issues.

Consider how the Times example translates to and complements any of these other kinds of groups (most easily if enabling software is made available from a "middleware as a service" provider). Users could easily orchestrate their control over diverse sources of curation and moderation – selecting from brands with identities they recognize – without requiring the prohibitive cognitive load of controlling all the details that critics now argue would doom middleware because few would bother to make selections. New brands can also emerge and gain critical mass, using this same technology base.

By drawing on signals from expert and/or ordinary members of groups that have known orientations and norms, users might easily select mixes that serve their needs and values – and shift them as often as desired.
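A minimal sketch of that blending step, with hypothetical community names, scores, and weights: each user-chosen community contributes a ranking signal, and a user-assigned weight (adjustable at any time, including zero to drop a source) steers the mix.

```python
# Sketch (hypothetical): blend ranking signals from several user-chosen
# communities, each with a user-assigned, user-adjustable weight.
def blend_feeds(community_scores, weights):
    """community_scores: {community: {item: score}}; weights: {community: float}.
    Returns item ids sorted by the weighted sum of per-community scores."""
    combined = {}
    for community, scores in community_scores.items():
        w = weights.get(community, 0.0)  # weight 0 drops a community entirely
        for item, score in scores.items():
            combined[item] = combined.get(item, 0.0) + w * score
    return sorted(combined, key=combined.get, reverse=True)

# A user currently favoring local news over sports lore (their choice):
scores = {
    "local-news": {"a": 0.9, "b": 0.4},
    "sports":     {"b": 0.8, "c": 0.7},
}
feed = blend_feeds(scores, {"local-news": 2.0, "sports": 0.5})
```

The user-facing control need only be a handful of sliders over recognized brands; the combinatorial detail stays under the hood.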

Context augmentation

Peter Steiner in The New Yorker

"On the Internet, nobody knows you're a dog" -- or a lunatic, or a bot. Famously observed in Peter Steiner's 1993 cartoon, this became known as "context collapse," broadly understood as a core reason why internet discourse is so problematic. Much of the meaning of a message derives from context external to the message itself -- who is speaking to whom, from and to what community, with what norms and assumptions. That context has largely been lost in current social media (and in emerging AIs). 

Consider how the kind of social mediation ecosystem processes envisioned here differ from what current major platforms offer in the way of community support -- and thus fail to provide essential context: 

  • They let you create a personal set (a unidirectional pseudo-community) of friends or those you follow, but increasingly focus on engagement-based ranking into feeds -- because they want to maximize advertising revenue, not the quality of your experience. 
  • They rank based on likes, shares, and comments from a largely undifferentiated global audience, with little opportunity for you to influence who is included. 
  • They may favor feedback from rudimentary "groups" that you join, but provide very limited support to organizers and members to make those groups rich and cohesive. 
  • They may cluster you into what they infer to be your communities of interest, but without any agency from you over which groups those are, except for the rudimentary "groups" you join.
  • And, even if they did want to serve your objectives, not theirs, they would be hard-pressed to come anywhere near the richness and diversity of truly independent, opt-in, community-driven middleware services that are tailored to diverse needs, contexts, and sustaining revenue models.

Doing moderation the old-fashioned way – enabled by middleware

Instead of being seen as a magical leap in technology, or an off-putting cognitive burden on users, middleware can be understood as a way to recreate in digital form the formal and informal social structures people have enjoyed for centuries – individually composed interaction with the wisdom of organically evolved social mediation ecosystems and intuitive informal reputation systems.

What at first seems complicated, from the perspective of current social media, is, at core, little more complicated than the structure of traditional human discourse – building on key functions and elements of the social mediation and reputation ecosystems – all legitimized by choices of individual agency. Yes, that is complicated, but humans have learned over millennia to intuitively navigate this traditional web of communities and reputations. Yes, make it as simple as possible, but no simpler!

Creating an online twin of such a web of community ecosystems will not happen overnight, but many industries have already built out online infrastructures of similar complexity – in finance, manufacturing, logistics, travel, and e-commerce. Middleware is just a tool for enabling software systems to work together in ways similar to what humans (and groups of humans) do intuitively. The time to start rebuilding those ecosystems is now.

____________________

Related works:

     My November 2023 post – A New, Broader, More Fundamental Case for Social Media Agent "Middleware" – introduced the Three Pillars framing, and embeds a deck that adds details and implications not yet fully addressed elsewhere.

     Core ideas addressed more formally in my April 2024 CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

     Very simply -- What Is a Social Mediation Ecosystem? (and Why We Need to Rebuild It). 

     Other related works are listed on my blog.



Tuesday, September 17, 2024

What Is a Social Mediation Ecosystem? (and Why We Need to Rebuild It)

(This post highlights some basic ideas from my prior publications,* and why they are of continuing relevance to social media issues.) 

  • The idea of a social mediation ecosystem integrating with social media feeds is a re-visioning of how things used to work. Society has been organically building on such sense-making ecosystems for millennia.
  • The groups that comprise the social mediation ecosystem have historically served as a “public square,” or “public sphere,” ranging from informal gathering places such as coffee shops and taverns to social and civic associations, the press, academia, workplaces, unions, faith communities, and other communities of interest.
  • This square or sphere is not unitary but an ecosystem, a polycentric web of interlinked groups in a multidimensional space.
  • Such associations develop norms and contexts for discourse. Our participation in a network of them shapes what we see and hear of the world. 
  • These processes of social influence nudge us to speak “freely,” but with sensitivity to those norms and values, so others will choose to listen to us.
  • Online media technology can enable restoration of that mediating role through enterprise-class middleware affordances that support community operation and let users interact both within and across the diverse communities they opt into.
  • Middleware can facilitate and enrich user-community interactions, and enable us to steer our feeds to blend content favored by any mix of communities we choose to include at a given time — depending on our tastes, objectives, tasks and moods.
  • For example, current curators of news could become attention agent services. Users might select a set of such services — for example, The New York Times, CNN, MSNBC, Fox, The Atlantic, People — to play a role in composing their feeds, assigning them different relative weights in ranking. Other groups in the social media ecosystem, such as civic, political, faith communities and special interest associations, could also be selected by the user to function as attention agents. Content ranking inputs could come from each community’s expert curators/editors or be crowdsourced from the user population that follows those curators, or from a combination of both.
  • Importantly — and as it has been historically — this ecosystem must be open and diverse, and users must be able to draw on combinations of many mediation sources to maintain an open and balanced understanding of the world.
  • Many fear that the involvement of independent attention agents or middleware might increase fragmentation and partisan sorting. That may be a concern while there are just one or a few mediators, but being able to selectively combine exposure to many loosely connected communities is how open societies have always limited that ever-present risk.
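The attention-agent idea above can be sketched at the level of a single community's ranking input. In this toy illustration (all names and parameters hypothetical), an agent blends its expert curators' editorial scores with a like-rate crowdsourced from the users who follow those curators, in whatever proportion the community chooses.

```python
# Sketch (hypothetical): one attention agent's ranking input, mixing expert
# curator scores with crowdsourced signals from the curators' followers.
def agent_score(item, curator_scores, crowd_likes, crowd_size, curator_share=0.6):
    """Blend a curator's editorial score (0..1) with a crowdsourced
    like-rate (0..1); curator_share sets the expert/crowd balance."""
    editorial = curator_scores.get(item, 0.0)
    crowd = crowd_likes.get(item, 0) / crowd_size if crowd_size else 0.0
    return curator_share * editorial + (1 - curator_share) * crowd

# An editorially valued story can outrank a merely popular one, or vice
# versa, depending on the curator_share the community sets.
curators = {"story-1": 0.9, "story-2": 0.2}
likes = {"story-1": 120, "story-2": 900}
s1 = agent_score("story-1", curators, likes, crowd_size=1000)
s2 = agent_score("story-2", curators, likes, crowd_size=1000)
```

Each community could tune its own balance; the user then weights whole communities against one another, never individual parameters.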

Related works:

     My November 2023 post – A New, Broader, More Fundamental Case for Social Media Agent "Middleware" – introduced the Three Pillars framing, and embeds a deck that adds details and implications not yet fully addressed elsewhere.

     Core ideas addressed more formally in my April 2024 CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

     A vision, with examples -- Making Social Media More Deeply Social with Branded Middleware. 

     Other related works are listed on my blog.

(*This was first published with minor variations as a sidebar to A New, Broader, More Fundamental Case for Social Media Agent "Middleware" (11/9/23), and then as a sidebar to a more formal Centre for International Governance Innovation policy brief (4/25/24).)

Thursday, April 25, 2024

A Policy Brief and a Symposium, Oh My! ++ On Middleware and Governing Online Discourse

Days apart by coincidence, my wide-ranging policy brief was published today, and next Tuesday brings an exciting symposium at Stanford that I helped organize -- both focus on middleware agent services and why we need them -- to re-envision the future of social media.

  • The policy brief is "New Logics for Governing Human Discourse in the Online Era" - part of the Freedom of Thought Project at the Centre for International Governance Innovation (CIGI). It expands on and updates my ongoing work (listed here) on these themes. It pulls together ideas about how freedom of impression guides freedom of expression without restricting it, and how 1) user agency, 2) a restored role for our traditional social mediation ecosystem, and 3) systems of social trust combine to synergize that process for the online era. It offers a proactive vision of how that can enable social media to become ever more powerful and beneficial "bicycles for our minds."

  • The symposium is "Shaping the Future of Social Media with Middleware" - to be held on April 30 at Stanford by the Foundation for American Innovation and the Stanford Cyber Policy Center. Leading thinkers at the nexus of social media, middleware, and public policy will delve into the complexities and potential of middleware as a transformative force. The discussions are to lead to a comprehensive white paper that offers recommendations and a roadmap for developers, investors, and policymakers. We have high hopes for this ambitious effort to bring new insight and energy into shaping the future of human discourse for the good.
Please do look at the policy brief, consider this broad vision that is gaining support, and stay tuned for reports on the Stanford/FAI symposium and where it leads.

+++Update -- Presentation at Public Knowledge Emerging Tech 2024 (6/14/24):
  • Middleware Agents for Distributed Control of Internet Services: Social Media …and AI
    Video (Intro at 12:55, Main comments at 30:20, also 43:30, 48:55 -- poor audio Intro segment only)
    Notes and background

Thursday, November 09, 2023

A New, Broader, More Fundamental Case for Social Media Agent "Middleware" (Revised)

(Discussion draft post and deck, restructured, expanded 1/7/24, as noted below.)

Despite the efforts of business, government, and academia, there seems to be no adequate solution to the dilemma of managing any-to-any online media at global scale. Too much central control by platforms or governments is a "loaded weapon on the table" ripe for authoritarian abuse, but media anarchy pollutes the public (and private) sphere -- and there are no bright lines. This is creating a deepening crisis not only in the world's political health, but in all aspects of public health: social, mental, and physical. 

How can we maintain freedom of thought while limiting harm from antisocial speech? Democracy is in crisis over who controls what is expressed online -- and what is impressed upon each of us in online feeds and recommendations. What are the legitimate roles of online platforms, government, communities, and individuals in such controls, and how does that depend on community and context? There are numerous efforts and proposals, many with significant support, but each has serious limitations. 

It recently struck me that three key solution elements that I have been advocating for many years have an importantly synergistic effect. I have become all too familiar with the objections to each element that have limited uptake -- and now see that the way to counter those concerns is to clarify and build on how these pillars work in combination – to reinforce one another and serve as a foundation for the full suite of remedies.

I offer this as a significant broadening of common thinking about "middleware" services (intermediaries between users and platforms) -- in a way that makes it far more powerful and important to civil discourse, and counters various concerns that have hindered its acceptance as a way to preserve democracy in the online era.

Middleware can support three essential pillars of discourse that synergize with each other to restore the human context that platforms have collapsed:
       1. Individual agency (the current focus)
       2. A social mediation ecosystem (now seen as apart, fragmentary, even conflicting)
       3. Reputation and trust (now considered only in basic form).

Here are some brief notes -- followed by an embedded deck that serves as a working outline with more depth on my suggestions.

Three pillars

The three pillars that synergize to restore human context as a foundation for managing online discourse are:

  1. Individual choice and agency, over how we each use online media – this creates speaker/listener context. This gained significant recognition after Francis Fukuyama and his group at Stanford proposed it be enabled via “middleware” that sits between users and the platforms, as a democratic way to limit how platform power threatens democracy. The idea is to return power to users to steer our online “bicycles for our minds” for ourselves.

  2. A social mediation ecosystem, which cooperatively applies collective intelligence, wisdom, judgment, and values, to serve users, as networked into social groups – this mediates context collectively. We have failed to directly integrate the traditional roles of more or less organized social groups into social media. The idea is for social media to leverage our social associations to promote “bridging” of the divides that social media now seem to highlight and reinforce – by rebuilding our processes for creating "social trust." Many have proposed aspects of this, but I take this much farther than I have seen suggested anywhere else (as explained further in the update below, and more deeply in the deck).*

  3. Reputation and trust, both in individuals and in what they say – to evaluate speaker/mediator context and trustworthiness both individually and collectively. This is less widely advocated, and most proposals for this are relatively basic, but some have seen that much more powerful reputation and trust systems are possible -- much like how Google has applied reputation and trust to web search. The idea is to apply the kind of rich combination of individual and social judgements of reputation that guided traditional (pre-online) discourse.*
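The Google analogy in pillar 3 can be made concrete with a toy sketch (hypothetical graph and parameters, not a proposal for an actual system): reputation propagates through a "who trusts whom" graph much as PageRank propagates through web links, so that being trusted by people who are themselves reputable counts for more than raw popularity.

```python
# Toy sketch of reputation propagation in the spirit of PageRank: a person's
# reputation derives from being trusted by reputable people. The graph,
# damping factor, and iteration count are illustrative assumptions.
def reputation(trust_edges, people, damping=0.85, iters=50):
    """trust_edges: (truster, trusted) pairs. Returns {person: score}."""
    out = {p: [t for s, t in trust_edges if s == p] for p in people}
    score = {p: 1.0 / len(people) for p in people}
    for _ in range(iters):
        new = {p: (1 - damping) / len(people) for p in people}
        for p in people:
            if out[p]:  # split this person's damped score among those they trust
                share = damping * score[p] / len(out[p])
                for t in out[p]:
                    new[t] += share
            else:  # trusts no one: spread their score evenly (dangling node)
                for t in people:
                    new[t] += damping * score[p] / len(people)
        score = new
    return score

# carol, trusted by two peers, ends up most reputable; bob, trusted by
# no one, decays toward the baseline.
edges = [("alice", "carol"), ("bob", "carol"), ("carol", "alice")]
rep = reputation(edges, ["alice", "bob", "carol"])
```

Real systems would of course need to weight edges, resist manipulation, and preserve privacy, but the core recursion is this simple.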

I now see user agent "middleware" as underlying all of the three pillars, enabling them to work together to restore the context that is essential to effective discourse. Most consideration of middleware seems to focus almost entirely on just the first of these pillars (important as it is), thus understating its true potential and raising concerns that the other pillars can reduce.

My primary focus here is “social” media – in its broadest sense. That also applies to hybrids of human and artificial intelligence (AI), as touched on briefly.

Context collapse

A key reason why online discourse is so problematic is that global any-to-any networks generally collapse the subjective mutual understanding of context -- who is speaking to what intended audience in what way. This has been understood as “context collapse.” These three pillars work together, through middleware, to restore this lost matrix of context, thus making the particular and subjective nuance of online discourse more understandable to both humans and algorithms. I suggest that can counter the feared pitfalls of each alone.

The broader need for middleware

As a long-time advocate for user agent middleware, I have seen it gain support with a primary focus on restoring the pillar of user choice and agency, but generally in ways that are narrowly centered on that, and open to important concerns. I now see the need to emphasize the synergy of each pillar with the other two more clearly – and to make the case that user agent middleware can and must support all three pillars as they work in concert – individual agency, social mediation, and reputation. The hope is that will provide a far more powerful benefit, and counter the common objections arising from narrower framings.

That might lead to much broader uptake of this important strategy for reestablishing human context that I believe can provide a strong foundation for cutting through current dilemmas, using these and other supplementary strategies to enable online discourse of all kinds to have a far more positive influence on society, and sustain democracy -- for both individual and collective welfare. 

The fundamental synergy is the dialectic of a flexibly optimized blend of human freedom gently balanced by a degree of social nudging toward responsibility. Underlying that synergy is the collective wisdom that humans embed in reputation. Middleware is the technology that supports this traditional human context in the online world of computer-mediated discourse. Think of it as contextware.

Working notes on this thesis -- a slide deck

As I have begun socializing this strategy, in preparation for a more formal presentation -- and to draw in others who might join in developing these ideas -- I am sharing this working version of a deck. It explains these elements in more detail, including how they work together, and how all three are facilitated by middleware as the underlying connector -- and so can together counter objections commonly raised in response to each when considered individually.

(The deck can be viewed on Google Slides without a Google account here.)

Feedback on this is invited (intertwingled [at] teleshuttle [dot] com).

Major update 1/7/24:

Drawing on comments from many experts (acknowledged below), this post has been revised and the slide deck has been expanded and reorganized. The deck adds new sidebars, and section breaks to facilitate easy scanning:

  • Summary
  • Overview
  • Synergies
  • Thought as a Social Process
  • Middleware as Foundation
  • Sidebar: The App Store Analogy (new) 
    Analogy suggesting how transformative a middleware ecosystem can be. 
  • Conclusion
  • Sidebar: Fear of Middleware (new)
    Addressing the fears, especially fragmentation and partisan sorting. 

Added Sidebar:
Migrating our traditional social mediation ecosystem into social media 

Part of the fear that middleware might increase social fragmentation derives from the narrow way that social media middleware is generally understood -- as apart from the broader social mediation ecosystem and its historically central role. Consider this broadened perspective:

What we've got here is failure to re-integrate. The idea of a social mediation ecosystem integrating with social media feeds is a re-visioning of how things used to work. Context collapse is not a problem internal to online social media, but a broader failure to migrate our existing social mediation ecosystem -- our processes for "social trust" -- into the digital domain. We seem to have forgotten how society has been building on such ecosystems for millennia. 

  • The groups that comprise the social mediation ecosystem have historically served as a public square, or public sphere, ranging from informal gathering places like coffee shops and taverns, social & civic associations, the press, academia, churches, unions, workplaces, and other communities of interest. 
  • Such associations develop norms and contexts for discourse. Our participation in them shapes what we see and hear of the world. That nudges us to speak "freely," but with sensitivity to those norms and values, so others will choose to listen to us. 
  • Online media technology can enable restoration of that mediating role through enterprise-class affordances that support community operation (including integration with CRM systems) and let users interact both within and across communities. 
  • Middleware can facilitate and enrich user-community interactions, and enable us to steer our feeds to blend content favored by whatever mix of communities we choose to include at a given time -- depending on our tastes, objectives, tasks, and moods. 
  • For example, current curators of news could become attention agent services. Users might select a set of them to compose their feeds, with different relative weights in ranking. E.g.: NY Times, CNN, MSNBC, Fox, The Atlantic, People (and other categories, such as civic, political, church, and special interest associations). Content ranking inputs could come from the community’s expert curators/editors and/or be crowdsourced from the population that follows those curators.
  • Importantly, as it was historically, this ecosystem must be open and diverse, and users must be able to draw on combinations of many mediation sources to maintain an open and balanced understanding of the world. 
  • Many fear that middleware might increase fragmentation and partisan sorting. That will be a concern while there are just one or a few mediators, but being able to selectively combine exposure to many loosely connected communities is how open societies have always limited that ever-present risk. 
  • (While there has been little attention to reintegrating this ecosystem into online social media, it has figured in many of my prior writings, as listed here, and dates back to my 2001-3 design for a richly distributed social media system that anticipated current and forward-looking ideas like multi-level, multi-homing federation, user-chosen middleware, and reputation-based attention agents -- as highlighted here. My inspiration for this perspective was from my work with the emerging open market in financial market data analytics around 1990, and from my work on the evolution of "intergroupware" around 1997.)

______________

Acknowledgements (apologies for any omissions):

Thanks to those who have provided stimulating feedback on these ideas, including Chris Riley, Luke Thorburn, Aviv Ovadya, Daphne Keller, Francis Fukuyama, Zoelle Egner, Ethan Zuckerman, Chand Rajendra-Nicolucci, Zach Graves, Luke Hogg, Harold Feld, Pri Bengani, Gabe Nicholas, Renée DiResta, Justin Hendrix, Richard Whitt, Ellen Goodman.

(Updated 2/3/24)

*[Added 7/26/24] To better clarify the distinction between pillars 2 and 3, note that social mediation is a process that occurs over time, while reputation/trust is a quality (a reputation for trustworthiness at a given point in time that results from the operation of the social mediation process).