Monday, August 04, 2025

Beyond the State of the Art in Social Media (and AI) -- Cruising Your "Vibes" with a Feed Mixer -- on "Bicycles for Our Minds"

Context: I have been thinking and writing about this future of "bicycles for our minds" for decades in ways that look well beyond the generally understood state of the art. Many of these ideas have flourished, and many are gaining recognition and being implemented, but many more remain largely unrecognized. This post highlights those that seem most important to the path forward. It assumes familiarity with the current state of the art -- and its discontents. 

Where should social media (and AI) be headed?

There is growing dissatisfaction and deepening concern about social media and its effects on people and society -- and its threats to democracy. But few really understand where we are -- and how we could be going in a far better direction. These same issues are also emerging for AI, as it fuses with social media by 1) incorporating user-created social content and 2) being used in social media feeds and recommenders.

Many long for a liberation of social media from the "enshittification" of centrally-controlled platforms -- and now with even greater urgency -- a counter to the incumbent platforms' capitulation to authoritarian influence that now threatens democracy, open discourse and the very foundations of human sense-making. 

The "ATmosphere" of Bluesky and the "Fediverse" of Mastodon, along with Project Liberty's DSNP and other similar efforts, have gained attention as more decentralized and as giving users more choice over how these powerful tools serve them. This shift to a "federated universe" of interoperating systems can better serve the context- and norm-specific needs of discourse among individuals and the diverse communities they participate in. Even Meta has given a partial nod to this trend -- by federating Threads with the ActivityPub protocol of Mastodon and other services -- thus edging toward what is better described as a "pluriverse".

Context: I draw heavily on Bluesky and AT protocol and the framing of the Free Our Feeds initiative to solidify its openness -- as currently farthest along in providing for the multidimensionality that will underlie a full-function pluriverse. But those directional ideas apply equally well to the fediverse of Mastodon and other ActivityPub-connected systems -- and to Project Liberty's Distributed Social Network Protocol -- and to other current and future protocols and services with similar objectives that might integrate and harmonize (or supplant) these early shoots of growth in a better direction. 

Global networks, insularity, and the problem of "vibe"

All social media services currently have issues of what communities they help us assemble and participate in, and of what norms apply to them. The dominant global platforms suffer from "context collapse," bringing diverse communities into collisions without sufficient context to avoid misunderstandings and polarization. The more decentralized pluriverse seeks to avoid that by empowering more community context. Bluesky with its AT protocol has pulled into the lead over the Mastodon fediverse by combining user choice with ease of use, openness, flexibility, and extensibility, reaching over 35 million users. However, its appeal has been limited by its reputation as being dominated by liberals (driven from X/Twitter) and by how some perceive its "vibe." People are wondering where to turn for an online experience that offers the people and vibe they want.

The answer is in a deeper vision of what these "bicycles for our minds" can do, and how we must allow time to shape these still-formative tools -- and to learn to manage how they shape us. What we see now is just the infancy of a radically new medium that is subsuming all media. This post (adapted from an earlier post) offers a vision of how this infant, which just barely crawls, will grow into the nimble bicycle that Steve Jobs had in mind (based on the observation that human locomotion is far less efficient than that of many animals, but a human on a bicycle travels far more efficiently than the most efficient animal, the condor). Our trick is that humans are tool makers, but our downfall might come when the tool makers build tools to serve themselves, and not those who use them.

While there is obvious need to develop near-term features to make each competing tech platform and universe of platforms into more efficient tools, there is also a need to articulate long-term objectives that many tools can build toward to be not only efficient, but effective in serving us, as their users. We are re-engineering human discourse for the online era -- that will be a long process -- but without a long-term vision of how our tools for discourse should work, and how we want to use them, it will be longer and more problematic.

Key ideas 

Technically: A "feed mixer" is a key missing layer. There is much current discussion of online service feeds -- the good, the bad, and the ugly -- and of ease of use as the hurdle limiting greater user control, but little recognition of the need for a user-controlled feed mixer. Such a mixer would simplify the combination of 1) handlebars for steering our bicycle, and 2) pedals, brakes, and gear shifts for controlling its speed and responsiveness. With flexible control of our feeds that orchestrates multiple algorithms and works across whatever networks we participate in, it will be easy to tune into whatever vibe we want. This post explains and puts that in context -- along with other layers that have been generally ignored.

Sociotechnically: Individual agency over feeds and other details is only #1 of three essential pillars. Neglected are #2, the "Social Mediation Ecosystem" that our ideas get mediated by, and #3, the Reputation Systems that determine whose mediating efforts we trust. All three synergize to help humans, as individuals in an open society, refine and apply our unique collective intelligence and human values to make sense of the world and flourish. As Marshall McLuhan and his colleagues said, “We shape our tools and thereafter our tools shape us.” Modern liberal society has been powerfully shaped by print and broadcast media. Now we must relearn again -- reshaping how individuals and society adapt, and how we shape online media to manage its far greater power, reach, and speed.

Further TL;DR of the pluriverse, as I envision it 

  • The move to decentralization, federation, and on toward the rich diversity of the pluriverse, reflects the realization that human society is far too complex, diverse, and nuanced to be served by any one centrally managed global "public square."

  • However, current steps toward decentralization will need to better support the hyperlinked multidimensionality of how individuals and communities interconnect. These communities reflect a diversity of interests, values, and norms. But IRL (In Real Life) individuals participate in many communities. They are rarely bound by any one community, and wish to have global views into many, as both speakers and listeners, depending on their interests, goals, and moods as they vary from time to time. Ted Nelson invented hypertext because "everything is deeply intertwingled."

  • Users will inevitably need multi-homing tools that give variable "lenses" for looking into and participating in many communities. Cross-community feeds and recommenders will be essential for individuals to navigate the abundance of riches in the pluriverse to meet their needs and find their vibe. This may work at at least two levels: 1) low-level recommenders for up- or down-ranking feed items based on specific objectives, and 2) higher-level UX tools for composing and steering mixes of lower-level rankings into a consolidated feed.

  • Think of that higher level UX tool as a feed mixer. Just as a music mixing console takes in many individual sound tracks and blends them into a dynamically orchestrated, multi-dimensional composition, an information feed mixer should do the same for individual sub-feeds. Music mixers take tracks from voices, instruments, and other listening points, then adjust overall volume, apply tonal adjustment effects or filters, and balance the levels of each track in the mix. This may be controlled by a specialized mixing operator (an agent) -- often using selectors, knobs, and slider controls.

  • Before objecting that such a tool would be too hard and time-consuming for lazy users to master, consider how feed services can be branded, and how that can make it easy for users to grasp a brand identity -- who is included with what vibe -- and mix feeds based on that intuition of a vibe. That is how we select CNN or Fox or MSNBC or PBS without studying a specification of their editorial curation policies. 

  • Bluesky seems farthest along in pointing to this multidimensionality in our feeds, providing for (but still in early stages of implementing) tools for separating the "speech layer" from the "reach layer" as described in their early blog posts on Composable Moderation, Moderation in a Public Commons, and Algorithmic Choice. My more detailed post from 6/23 suggests directions for taking that farther.

  • Mastodon seems to also be moving in that general direction, with discussion of a cross-instance groups structure, and shared moderation services that address the challenges of administering small communities, but seems to prefer to remain relatively insular. I suggest they can have both, making their communities semi-permeable. 

  • A similar effort by Project Liberty also has some traction (and significant funding from Frank McCourt) and a vision that seems similar to that of the AT Protocol, but based instead on the Distributed Social Network Protocol (DSNP).

  • The objective should be for all of these -- as well as services using alternative decentralized protocols and current closed platforms -- to harmonize to allow users to seamlessly participate in an integrated "pluriverse" with a multi-homing feed mixer. My recent discussions with activists from all three of these current efforts show a shared recognition of the need to converge from disparate silos to a true pluriverse with high interoperability.

  • The vision I suggest will take time to build, develop, and be fleshed out by users, but to get where we will want to go in the future will require having these ideas in mind as we architect and build toward that vision. But even given the limits of our imagination, the beauty of open interoperability is that it supercharges the ability of markets to innovate. Just consider App Stores, and how the open interoperation of smartphone apps enabled the growth of a vibrant ecosystem far beyond what Apple or Google could ever provide by themselves. Or how the openness of the web took us far beyond what the corporate walled gardens of AOL, Prodigy, or CompuServe could offer.

Summarizing key elements and features of the vision

Sections of my older and longer post are summarized and updated here. (Serious readers may wish to look at the fuller explanations there):
  • From Freedom of Speech and Reach to Freedom of Expression and Impression
    Managing society’s problems related to how (and by whom) social media news feeds are composed, and whether they are or should be censored, is rapidly reducing to the absurd. Focus on the other end of the proverbial “megaphone” -- not the speaker’s expression end, but the listener’s impression end. Freedom of expression can be strong only if we are free to associate with the information sources we want, by exercising our freedoms of association and assembly. Recognize and restore our Freedom of Impression! Free our feeds!

  • Hypercommunities
    Each person can be a member of many communities (/groups) at once, as many layers of overlapping Venn diagrams in many dimensions -- shifting our view and level of participation as desired (semi-permeability).

  • Ranking as the core task
    Nearly all "moderation" and "curation" recommendations boil down to ranking. Downranking can provide safety from bad content, and upranking can bubble up quality and value. Composability of ranking tools can work at both individual and community levels to blend a mix of rankings that draw on the wisdom of each community. ("Moderation as removal" should instead be effected by "downranking with extreme prejudice" that ensures items will not appear in feeds, but may remain accessible by direct request, subject to appropriate restrictions on access to illegal content)
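
To make "downranking with extreme prejudice" concrete, here is a minimal sketch in Python. It is a toy illustration under my own assumptions (item fields, score scales, and scorer functions are all hypothetical, not any platform's API): items flagged at the extreme by any scorer are excluded from feed ranking but remain marked as accessible by direct request rather than deleted.

```python
# Hypothetical sketch: "moderation as removal" as extreme downranking.
# Item fields, score scales, and scorers are illustrative assumptions.
EXTREME_DOWNRANK = float("-inf")  # never surfaces in a feed

def rank_items(items, scorers):
    """Blend several ranking signals; exclude extreme downranks from feeds,
    but leave items retrievable by direct request (not deleted)."""
    ranked = []
    for item in items:
        scores = [score(item) for score in scorers]
        if EXTREME_DOWNRANK in scores:
            item["feed_eligible"] = False  # hidden from feeds, still requestable
            continue
        item["score"] = sum(scores)
        item["feed_eligible"] = True
        ranked.append(item)
    return sorted(ranked, key=lambda i: i["score"], reverse=True)

# Example scorers: one upranks quality; one downranks spam "with extreme prejudice".
quality = lambda i: i.get("quality", 0.0)
spam_guard = lambda i: EXTREME_DOWNRANK if i.get("spam") else 0.0

feed = rank_items(
    [{"id": "a", "quality": 2.0}, {"id": "b", "quality": 5.0, "spam": True}],
    [quality, spam_guard],
)
```

The point of the sketch is composability: each scorer embodies one community's or service's judgment, and the blend (here a simple sum) is where individual and community weightings could be applied.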

  • Feed Mixing Agent Services as a core tool -- User-selectable, multilevel feed composition from multiple algorithms
    A truly composable, steerable feed would provide a higher level feed mixer interface that lets each of us easily manage and merge a user-selected mix of lower-level feeds, with user-defined relative weights. A steerable feed would allow those mixes and weights to be easily changed at will to suit our varying tasks and moods, including options for stored pre-sets. This would restore user agency to choose and orchestrate from an open market in independent attention agent services -- providing choices of UXs, algorithms, and human mediation providers. Branding of attention agents from both new and legacy mediating services would make it easy for users to select them, much as we now intuitively choose what mix of CNN, MSNBC, Fox News, PBS, or less widely used brands we want to watch at any time.
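
As a rough illustration of such a mixer, the following Python sketch blends lower-level ranked sub-feeds with user-set weights and stored presets for switching moods. The feed names, weights, and preset labels are my own hypothetical assumptions, not a proposal for any specific service.

```python
# Hypothetical sketch of a user-steerable feed mixer: blend several
# lower-level ranked sub-feeds with user-set weights; presets let a
# user switch "vibes" at will. All names and numbers are illustrative.
def mix_feeds(subfeeds, weights):
    """subfeeds: {name: {item_id: score}}; weights: {name: slider 0..1}.
    Returns item ids ranked by the weighted blend of sub-feed scores."""
    blended = {}
    for name, scores in subfeeds.items():
        w = weights.get(name, 0.0)
        for item_id, score in scores.items():
            blended[item_id] = blended.get(item_id, 0.0) + w * score
    return sorted(blended, key=blended.get, reverse=True)

# Stored presets act like the mixer's saved slider settings.
presets = {
    "news_focus": {"breaking_news": 0.8, "civil_discourse": 0.2, "cats": 0.0},
    "wind_down":  {"breaking_news": 0.1, "civil_discourse": 0.3, "cats": 0.6},
}
subfeeds = {
    "breaking_news":   {"post1": 0.9, "post2": 0.2},
    "civil_discourse": {"post2": 0.8, "post3": 0.7},
    "cats":            {"post3": 0.95},
}
evening_feed = mix_feeds(subfeeds, presets["wind_down"])
```

Each sub-feed here stands in for an independently branded attention agent; the user steers only the high-level sliders, just as one turns a channel dial without studying each outlet's editorial policy.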

  • Multi-dimensional reputation based on explicit and/or implicit signals
    Wiser use of algorithms is needed -- not to replace human wisdom, but to distill it based on human judgments and reputations as judged by other humans, all under user control. I view reputation as essential to making ranking work well, and have written frequently about “rate the raters and weight the ratings” as an extension of Google’s PageRank algorithm to develop a socially derived and reputation-weighted reputation. Reputations have multiple dimensions, including subject domain, value systems, and community context, and change over time, being slow to develop, but easy to lose. An effective reputation system motivates individuals to seek and maintain a good reputation.
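
"Rate the raters and weight the ratings" can be sketched as a PageRank-style fixed-point computation. This is a toy model under stated assumptions (the endorsement graph, damping value, and iteration count are illustrative, and this is not the author's actual system design): each rater's influence is itself weighted by the reputation other raters accord them.

```python
# Hypothetical sketch of "rate the raters and weight the ratings":
# a rater's influence is weighted by the reputation other raters give
# them, iterated to a fixed point (PageRank-style). Names and the
# damping constant are illustrative assumptions.
def reputations(endorsements, damping=0.85, iterations=50):
    """endorsements: {rater: [raters they endorse]}. Returns scores summing to 1."""
    raters = list(endorsements)
    rep = {r: 1.0 / len(raters) for r in raters}
    for _ in range(iterations):
        new = {r: (1 - damping) / len(raters) for r in raters}
        for rater, endorsed in endorsements.items():
            if not endorsed:
                continue
            share = damping * rep[rater] / len(endorsed)
            for e in endorsed:
                new[e] += share  # endorsed rater inherits weighted trust
        rep = new
    return rep

rep = reputations({
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob"],
})
# A rating's weight in a feed ranking can then be the rater's reputation:
weighted_rating = lambda rating, rater: rating * rep[rater]
```

Real reputations would be multi-dimensional (per subject domain, value system, and community context) and would decay over time; this sketch shows only the core recursion.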

  • Support for rebuilding our Social Mediation Ecosystem
    Communities and mediating services can be decoupled. The speech layer may be more tightly tied to specific communities than the reach layer. Real-life communities and institutions may be re-enabled to mediate our online discourse, both for their direct membership and those who wish to follow them. The ecosystem that shaped and stabilized discourse in the real world should be reconstituted in the virtual world. This mediation ecosystem shapes how messages flow and evolve, interacting and synergizing with both user agency and reputation, as discussed more fully in Tech Policy Press, Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three).

  • Classification/labelling and ranking
    Rankings can be based on many dimensions of attributes -- so rankings could take a hybrid form that includes classification or label attributes. Adding a quantifier for the strength of a classification/label (how strongly positive or negative it might be) would ultimately be essential to achieving nuance, and could also include quantification of the rater's confidence level in that value rating.
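
One way to represent such quantified labels is sketched below; the field names and the confidence-weighted aggregation are my own illustrative assumptions, not any labeling service's schema.

```python
# Hypothetical sketch: a label record carrying both a signed strength
# (how positive/negative) and the rater's confidence, so rankers can
# blend nuanced classifications. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Label:
    attribute: str      # e.g. "civility", "accuracy", "nsfw"
    strength: float     # -1.0 (strongly negative) .. +1.0 (strongly positive)
    confidence: float   # 0.0 (a guess) .. 1.0 (certain)

def label_score(labels, attribute):
    """Confidence-weighted mean strength for one attribute across raters."""
    relevant = [lab for lab in labels if lab.attribute == attribute]
    total_conf = sum(lab.confidence for lab in relevant)
    if total_conf == 0:
        return 0.0
    return sum(lab.strength * lab.confidence for lab in relevant) / total_conf

labels = [
    Label("accuracy", 0.8, 0.9),   # confident positive rating
    Label("accuracy", -0.5, 0.2),  # tentative negative rating
]
score = label_score(labels, "accuracy")
```

A ranker can then treat each attribute's aggregate as one more dimension to up- or down-weight, with rater confidences discounting tentative judgments.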

  • Broader issues and federation/subsidiarity in labeling and ranking
    Our notions of truth and value -- and authority about that -- are contingent, changeable, and heavily influenced by our broader social mediation ecosystem. That has been central to the generative success of human society. Thus our social media should reflect that social contingency, and provide for a high degree of subsidiarity in how decisions are made. That is the essence of what I call freedom of impression, and how it serves to balance freedom of expression. 

  • Further thoughts on federated architecture and feed mixers
    There is need for algorithmic choice at multiple levels. 
    At a lower level is an open market in basic algorithms with very specific objective functions in terms of subjects, values, and vibes/moods. At a higher level is an open market in UX-level services that enable composition and orchestration of those lower-level algorithmic rankings to present a consolidated view that blends multiple objective functions, and to allow steering that view dynamically as the user's moods and needs change.

  • Enabling subsidiarity of "moderation" of the "lawful but awful"
    Federation is based on the principle of subsidiarity: the idea that most moderation/mediation decisions should be local, to best reflect relevant local/community interests, values, and norms. This would apply a nuanced blend of top-down controls to limit dissemination of the truly unlawful (with trust and safety teams, tools, and services), along with mostly bottom-up tools and services to manage more contingent (context-, value-, and norm-dependent) levels of awfulness -- and goodness! -- in multiple dimensions. This should apply at the level of 1) membership communities (servers/instances plus other communities/groups) and 2) cross-community attention/mediation agent services that users choose to opt into. (Given the role of Mastodon instance operators as "benevolent dictators," the current ActivityPub "fediverse" is really more of a "confediverse." The ATmosphere (of AT protocol) seems more supportive of the nuanced multidimensional division of control in federation.)

  • "Vibe" -- seeking "the shmoo of social media"
    There is much talk of the "vibe" of different platforms, but "we ain't seen nothin' yet." With selectable, composable feeds, users will be able to create views that tune into whatever vibe they want (with whatever levels of moderation they want). This is the infancy of a flexible new social ecosystem online, and whatever initial vibe chaos we might see now will give way to a new order of shaping a vibe style and viewpoint, and tuning into it. A fully functional social media pluriverse will be a "shmoo" (a classic cartoon creature that tastes like whatever you want) -- with diverse communities, but flexible lenses into as many as desired. This provides a level of flexibility and user control over one's experience that will grow in importance as the pluriverse grows in scale and diversity and in the richness of interconnections desired by users with many interests and moods for diverse vibes.

These levels of choice may seem complex and overwhelming, but just as we easily choose what mix of CNN or Fox or MSNBC we want at any given time, branded middleware services could make that easy -- as I outline with an example for the NY Times. And consider that easily "channel surfed" linear TV channels have been largely replaced by an enshittified kludge of streaming walled gardens. The streaming platforms have resisted enabling a simple high-level feed mixer user interface that crosses programs and streaming services. We might see that open up into a better high-level user experience specific to video, but as social media eats the world, we might better hope to see such innovation applied to all of our information feeds, in all modalities.

Much of this flexible multidimensionality will emerge slowly, as technical, human, and social infrastructures co-evolve toward it -- a whole-of-society, sociotechnical process that will take decades, and may be very disruptive for a time (much as the era of warfare that accompanied society's sociotechnical absorption of Gutenberg's printing press). But if we do not plan for what we can foresee, and build for extensibility to what we do not yet foresee, it will be even harder to find a path toward a new stability that is robust and generative.

Sunday, July 20, 2025

Tech and Democracy: Busy Times in Tech Policy Press

[Working notes in progress, as I try to drink from and reflect on this firehose...]

Tech Policy Press has become increasingly essential reading and reveals increasing energy in "issues and ideas at the intersection of tech & democracy."

I begin this working draft post today because, in addition to being pleased to see my own new piece on AI and Democracy (7/16/25) join ten others I have done since 2021, I saw stimulating connections with other TPP pieces through the week that deserve comment. Today I found still more in editor Justin Hendrix's weekly newsletter recap, even though I can only keep up with a fraction of TPP's continuing growth as a key focal point for this community that Justin is catalyzing.

This post serves as my point of connection for comments that link some of them (and from other sources) to complementary ideas from my work. [This may expand.]

These two triggered this idea for a connection point for my commentary:

Reviglio's insightful Pluralism piece first struck me, as a variant perspective on issues in my new piece and throughout my work -- the challenge of balancing bottom-up individual agency against democratically legitimate top-down group influence on discourse. The one can drive us to silos, filter bubbles, echo chambers, and the madness of crowds. The other can lead to sterile, conformist groupthink, or authoritarian Huxwell dystopias.

Reviglio suggests that "plurality typically denotes market diversity and anti-monopoly safeguards, while pluralism generally refers to ensuring broad access and visibility of diverse voices and perspectives." I read his "algorithmic plurality" as a purely bottom-up influence, and his "algorithmic pluralism" as a lateral influence (for both serendipity and "prosocial" "bridging" of diverse viewpoints) that emerges from either a seeking from the bottom-up, or a positive nudging that can be either top-down or side-peer. (That contrasts with more negative top-down nudging to conformity or subservience.)

My latest piece emphasizes the need for attention agents to serve their users, and my Three Pillars piece broadens the ideas of algorithmic choice to factor in strong levels of social influence. Pieces out of and building on the Stanford symposium on middleware that I helped organize address these debates, and how the issue is not one of technology for user control, but of the sociotechnical question of how society chooses to support and influence the use of these tools, which cut in whatever direction we shape them. Our urgent task is to build the sociotechnical infrastructure to support development and use of more prosocial algorithmic services.

Algorithmic pluralism is also central to my earlier Delegation series with Chris Riley, with a broader look at these issues especially in the last two installments on the Community roles in moderation of a Digital Public Hypersquare, and on Contending as agonistic versus antagonistic.

Marechal's piece on AI Slop presents a thought-provoking summary of how algorithms and broader factors have driven us toward an "optimized" culture of "fast food" for the mind, the opposite of what the vision of "bicycles for the mind" was meant to offer us. He laments the downranking of outliers: "...a deeply illiberal optimization ethic that rejects “outlier” perspectives. Rather than seeing deviations from the “algorithmic models in our heads” as opportunities to grow, we increasingly see outliers as dangerous anomalies to be ignored or ridiculed."

That is where we have let ourselves be taken, but my half-century vision of bicycles for the mind has always been to enable the opposite, as my latest piece notes. I did a 2012 piece, Filtering for Serendipity -- Extremism, 'Filter Bubbles' and 'Surprising Validators' on how to optimize for serendipity and challenging ideas (drawing on my 2003 system design) and a forerunner to recent work by others on "bridging systems."

Here again, the problem is not in the tech or in algorithms in general, but in how we have let them be hijacked to serve platforms, not users and communities (my Three Pillars). Marechal suggests "becoming an algorithmic problem" by insisting on better algorithms. That is exactly why I advocate for middleware to "Free Our Feeds" -- not just for individual agency run wild, but in the context of a "social mediation ecosystem" that creates more enlightened and challenging algorithms to be mixed in with the junk food. People are beginning to realize that we are "amusing ourselves to death." What we need is a whole-of-society effort to change that, and middleware with algorithmic choice is the only technology that can enable it. It is up to us to use it wisely.*

One older piece that I finally read today (not from Tech Policy Press) also ties in with these issues of how algorithms work for or against us.

Berjon points out that "digital sovereignty had a bad reputation...[but] is a real problem that matters to real people and real businesses in the real world, it can be explained in concrete terms, and we can devise pragmatic strategies to improve it." The visions that many are now working toward for open infrastructure, and communities, especially the "semi-permeable" open "hyper-communities" referred to above can enable the positive forms of digital sovereignty that Berjon delineates, and he provides an excellent overview of a wide range of strategies for building on such an open infrastructure. My Delegation series (the Contending installment) and many other works have emphasized the need for "subsidiarity," as the basis for true federalism.

-----------------------

Comments? I invite comments, and posted about this on LinkedIn, as a vehicle to facilitate that -- please make any comments there.

*Apropos of this issue, I happened to just watch the 2005 movie Good Night and Good Luck, including the very on point "Wires and Lights in a Box" speech by Edward R. Murrow from 1958. I highly recommend the full speech, and this shortened rendition from the movie.

Wednesday, July 02, 2025

How to Reclaim Social Media from Big Tech (As published in Francis Fukuyama's Persuasion)

“Middleware” is an idea whose time has come.

By Renée DiResta and Richard Reisman

This article was published July 1 by American Purpose, the magazine and community founded by Francis Fukuyama in 2020, which is proudly part of the Persuasion family.


Social media platforms have long influenced global politics, but today their entanglement with power is deeper and more fraught than ever. Major tech CEOs, who once endeavored to appear apolitical, have increasingly taken far more partisan stances; Elon Musk, for example, served as a campaign surrogate in the 2024 U.S. presidential election, and spoke out in favor of specific political parties in the German election. Immediately following Trump’s re-election, Meta made radical shifts to align its content moderation policies with changing political winds, and TikTok’s CEO issued public statements flattering Trump and praising him for his assistance in deferring enforcement of regulation to ban the app. Both Meta and X chose to settle lawsuits that had been widely seen as easy wins for them in the courts, with their CEOs making donations to Trump’s presidential library, in presumptive apology for their fights over his post-January 6 deplatforming. Outside of the United States, there is growing tension between platforms and EU regulatory bodies, which Vice President JD Vance has opportunistically framed as concern about “free speech” amid increased European calls for “digital sovereignty.”

While companies have always sought to maintain favorable relationships with those in power—and while those in power have always sought to “work the referees”—the current dynamics are much more pronounced and consequential. Users’ feeds have long been at the mercy of opaque corporate whims (as underlined when Musk bought Twitter), but now it is clearer than ever that the pendulum of content moderation and curation can swing hard in response to political pressures.

It is users, regardless of where they live or their political leanings, who bear the brunt of such volatility. Exiting a platform comes at a high cost: we use social media for entertainment, community, and connection, and abandoning an app often means severing ties with online friends, or seeing less of our favorite creators. Yet when users try to push back against policies they don’t like—if they attempt to “work the referees” themselves—they are often hindered both by a lack of relative power and the lack of transparency about the internal workings of platform algorithms. Without collective action significant enough to inflict economic consequences, user concerns rarely outweigh the expediencies of CEOs or governments. Unaccountable private platforms continue to wield disproportionate control over public attention and social norms.

We need to shift this paradigm and find alternatives that empower users to take more control over their social media experience. But what would that look like?

As Francis Fukuyama and others at Stanford University argued in 2020—and as we expanded upon in a recent report coauthored with Fukuyama and others—one promising solution is middleware: independent software-enabled services that sit between the user and the platform, managing the information that flows between them. For example, a user might choose a middleware service that filters out spammy clickbait headlines from their feed, or one that highlights posts from trusted sources in a specific domain, like public health or local news. Middleware can help rebalance the scales, empowering users while limiting platforms’ ability to dictate the terms of online discourse.

Putting users in control of their attention

Middleware has the potential to transform two of the most contentious functions of social media: curation and moderation. Curation shapes how content is ranked in users’ feeds, determining which voices are amplified. Moderation governs what is allowed, labeled, demoted, or removed. Both functions have become politicized battlegrounds, with critics on all sides accusing platforms of bias, censorship, or failing to address harms.

Middleware cuts through this dynamic of overly-centralized control by offering users and communities control that is more direct and context-specific. An open market (think “app store”) of middleware software and services would allow users to freely choose from a variety of algorithms and/or human-in-the-loop moderation services to compose their feeds for them. For instance, one user might prefer to subscribe to a feed optimized for civil discourse, another might choose one that highlights breaking news, while a third wants cat pictures. On the moderation front, some users may want to see profanity and nudity; others may want to subscribe to a tool that hides or labels such posts in their feed. Flexibility allows people to tailor their online environment to their needs (which shift depending on task, mood, or context) or to their political orientation or membership in different communities. This supports a greater diversity of online experience in terms of politics, values, and norms, enabling users and communities to select for their desired “vibe”—not one imposed by platform overlords or a tyranny of some majority.

Middleware can also reduce the risk of political capture, making it more difficult for incumbent platforms, or governments, to exert undue pressure or outright manipulation over online discourse. It fosters competition and innovation by enabling a robust market of providers, which improves both transparency and responsiveness to user and community needs. Importantly, middleware replaces the binary choice between centralized control and total anarchy with an adaptive middle ground that empowers individuals, communities, and institutions to shape their own social experiences.

Retaking control

So how does increased user choice become a reality? Where Facebook, X, and the other incumbent giants are concerned, middleware’s success depends on their cooperation. Third-party tools need the ability to interoperate through open protocols or interfaces. So far, platforms have shown very limited interest in enabling this. However, as moderation becomes more politically fraught, they may decide that devolving more control to users—selectively opening their “walled gardens”—really is a smart choice. Meta’s Threads app is experimenting with a limited degree of such openness.

Whatever the centralized providers do, an alternative path is already emerging. Decentralized platforms based on open protocols, such as Mastodon and Bluesky, have been designed from the ground up to prioritize user choice and agency—without needing permission from a corporate gatekeeper. This is most apparent on Bluesky, which now serves well over 30 million users, some of whom already subscribe to alternative feeds for curation and independent content labeler services that flag porn or hate speech. Newly-formed non-profit foundations that serve as custodians for the Bluesky and Mastodon protocols (one using the very apt #FreeOurFeeds hashtag) promise to ensure that these infrastructures can remain “billionaire-proof” and open to competition, as public goods.

This open infrastructure model is not anti-commercial. On the contrary, it opens space for innovation, extensibility, and entrepreneurship. Just as Apple’s App Store created a flourishing ecosystem of third-party tools, middleware could spur new markets for feed curation, trust labeling, moderation filters, and more. News outlets might create branded options: the “Fox News Feed,” or the “New York Times Feed.” Trusted intermediaries—civil society groups, perhaps—might offer labels grounded in shared community values. Interoperable services can compete and cooperate across an ecosystem of distinct but connected communities. The goal is not to overwhelm users with technical choices, but to create options—similar to how users can now easily choose an email service or a browser extension.

Policy support

Policymakers can help promote user choice by removing barriers that entrench the status quo. On the regulatory front, lawmakers should reimagine outdated statutes like the Digital Millennium Copyright Act (DMCA) and Computer Fraud and Abuse Act (CFAA)—laws that, while originally designed to protect creators and national security, have too often become tools for corporate suppression of competition. By reforming these laws, barriers that favor entrenched monopolies can be dismantled, promoting a more open internet, and ensuring that the interests of users, communities, and innovators come before exploitative profit. There are also worthwhile legislative efforts like the proposed Senate ACCESS Act, which would require “the largest companies make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings.”

Middleware empowers communities to decide how they wish to balance competing democratic values—free speech, protection from harm, pluralism—even in a time of high polarization. It offers a path toward a more democratic and resilient information ecosystem, where users have more agency over their attention. The question is no longer whether such alternatives are necessary or feasible—it’s whether they can be scaled, enhanced, and sustained to meet the moment.

Renée DiResta is an Associate Research Professor at the McCourt School of Public Policy at Georgetown and author of Invisible Rulers: The People Who Turn Lies Into Reality.

Richard Reisman is Nonresident Senior Fellow at the Foundation for American Innovation. 

Sunday, April 06, 2025

Being Human in 2035 -- How Are We Changing in the Age of AI?

My recent predictive essay has been included in Being Human in 2035 -- How Are We Changing in the Age of AI? -- a very thought-provoking compendium from the Imagining the Digital Future Center at Elon University by Lee Rainie and Janna Anderson: 

Nearly 300 of the experts in this early 2025 study responded to a series of three quantitative questions, and nearly 200 wrote predictive essays about how the evolution of artificial intelligence (AI) systems and humans might affect essential qualities of being human in the next decade. Many are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act and relate to one another.

This snippet from my contribution was featured (with those from eight others) in the Executive Summary (p. 5):

Over the next decade we will be at a tipping point in deciding whether uses of AI as a tool for both individual and social (collective) intelligence augments humanity or de-augments it. We are now being driven in the wrong direction by the dominating power of the ‘tech-industrial complex,’ but we still have a chance to right that. Will our tools for thought and communication serve their individual users and the communities those users belong to and support, or will they serve the tool builders in extracting value from and manipulating those individual users and their communities?
… If we do not change direction in the next few years, we may, by 2035, descend into a global sociotechnical dystopia that will drain human generativity and be very hard to escape. If we do make the needed changes in direction, we might well, by 2035, be well on the way to a barely imaginable future of increasingly universal enlightenment and human flourishing.

My full contribution is in the full report (p. 112) -- with these snippets in sidebars:

While there is increasingly strong momentum in worsening dehumanization, there is also a growing techlash and entrepreneurial drive that seeks to return individual agency, openness and freedom with the drive to support the human flourishing of the early web era. Many now seek more human-centered technology governance, design architectures and business models.
...Human discourse is, and remains, a social process based on three essential pillars that must work together: Individual Agency, Social Mediation, Reputation. Without the other two pillars, individual agency might lead to chaos or tyranny. But without the pillars of the social mediation ecosystem that focuses collective intelligence and the tracking of reputation to favor the wisdom of the smart crowd – while remaining open to new ideas and values – we will not bend toward a happy middle ground.

…We need to return to how society once relied largely on self-governance that avoided the sterile thought control of walled gardens, centrally managed ‘public’ forums and the abuses of company towns. We relied instead on a social mediation ecosystem of individuals participating in and giving legitimacy to communities of interest and value to set norms and socially construct our reality.

I hope you will read my full contribution -- and of course the very insightful other contributions from the many eminent contributors.

Thursday, January 16, 2025

#FreeOurFeeds - Another Step Toward the Vision

As perhaps the first to use the phrase "free our feeds" and the Twitter hashtag #FreeOurFeeds, I am gratified to see the launch of the Free Our Feeds Foundation to embark on a major step toward that vision. 

Why? There have been many small steps to free our feeds, now seen as an urgent need to "billionaire-proof" our social media connectivity. Musk and then Zuck have shown the perils of the "loaded weapon" we have left on the table of online discourse, by so shamelessly picking it up to use for their own ends. We can only guess where they -- and others like them -- or worse -- will point their weapons next.

What? Many see the Mastodon "fediverse" as a major step in this direction, arguably so -- and a similar move to open governance of the fediverse, also on 1/13, is a second major step there.* But many are coming to see Bluesky as a larger step toward both horizontal and vertical interoperability for the full range of functions needed to free us from platform lock-in and manipulation. I am hopeful that both efforts will succeed, and that those ecosystems will grow -- and gain high levels of interoperability with one another (and with future protocols).

How? Bluesky seems to currently be the most open to building high levels of function and extensibility. We are in the early days of social media, just learning to crawl. To leverage this technology so that we can walk, run, and fly -- while remaining democratic and free -- it must be kept open to user control and to the control of communities. That will enable us to re-energize the social mediation ecosystem as I explained recently (and in many other works listed here). 

A key aspect of Bluesky and its AT Protocol (not yet in the Mastodon architecture, as I understand it) is that each of three levels can be separately managed and replicated: 1) the apps, 2) the relays that tie app instances together, and 3) the independently selectable feed algorithms. Federation of the relays is important because they are resource-heavy services, not very amenable to lightly resourced community managers, but capable of being secured and managed by trusted organizations to support advanced algorithms. That is also important for preserving privacy. The Free Our Feeds Foundation promises to take a large step in that direction for the Bluesky ecosystem.
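This layered separation can be sketched schematically. The following toy model is only an illustration of the three independently replaceable levels described above, not the actual AT Protocol interfaces:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the layered separation: apps, relays, and feed
# algorithms are independent components, each replaceable without the others.

@dataclass
class Relay:
    """Resource-heavy aggregation layer, run by a trusted organization."""
    firehose: list = field(default_factory=list)
    def publish(self, post): self.firehose.append(post)

@dataclass
class FeedGenerator:
    """Independently selectable feed algorithm."""
    name: str
    select: Callable[[list], list]

@dataclass
class App:
    """Lightweight client; the user picks both the relay and the feed."""
    relay: Relay
    feed: FeedGenerator
    def timeline(self):
        return self.feed.select(self.relay.firehose)

relay = Relay()
relay.publish({"author": "alice", "text": "hello"})
relay.publish({"author": "bob", "text": "world"})

only_alice = FeedGenerator("only-alice",
                           lambda posts: [p for p in posts if p["author"] == "alice"])
app = App(relay, only_alice)
# Swapping the feed generator (or pointing at a different relay) changes the
# experience without changing the app itself.
```

The design point, as described in the post, is that each layer is a separate point of competition and replication, which is what keeps any one operator from holding the whole stack hostage.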

As Cory Doctorow, Mr. Enshittification himself, said of this effort:

If there's a way to use Bluesky without locking myself to the platform, I will join the party there in a hot second. And if there's a way to join the Bluesky party from the Fediverse, then goddamn I will party my ass off.

An opening for entrepreneurship? These moves create potentially huge opportunities to build better app instances, better relays, better algorithms, and new levels of services and user experiences to make this all easy to use, powerful, trustworthy, and well-engineered. An effective ecosystem creates a large pie for many players -- and levels the playing field -- unlike the concentration and extractive business models of current platforms. The example of how the web ate CompuServe, Prodigy, and AOL to grow far larger businesses may repeat itself.

My personal interest: I began using the rallying cry of Free Our Feeds! in a blog post on 2/11/21 (the earliest use of that phrase I could find on Google), and the hashtag #FreeOurFeeds on Twitter on 2/13/21 (also apparently the first use). I continued using this hashtag often on Twitter, and for a fuller treatment of the concept in a 4/22/21 Tech Policy Press article that included the diagram here. 

On rereading that, "The Internet Beyond Social Media Thought-Robber Barons," I was pleased at how well it has stood up in articulating the vision that is now catching fire -- and at how forward-looking it still is on where that vision can take us.

Of course 2021 was not long ago, and many people were becoming advocates for algorithmic choice. But I also take pride in being perhaps the longest-serving advocate for these ideas -- and perhaps the one looking farthest ahead. For this forward view, see especially the recent works synopsized in this post, and this fuller list.

What I may have underestimated is how the dominant platforms would sow the seeds of their own destruction without need for regulatory action -- and how grassroots innovation might be enough to replace them. Movements like #FreeOurFeeds can create a digital republic…”if you can keep it.” (Which is not to say we should not legislate to support that as well.)

A re-formation of social media? The hope is that Bluesky Social PBC and Free Our Feeds Foundation (along with similar Mastodon efforts) can catalyze a vibrant open ecosystem -- to create a new infrastructure for social media that lets a thousand flowers bloom -- and can grow and evolve over many sociotechnical generations.

(*It is amusing that the image in the Mastodon announcement seems to show a Mastodon looking over a chasm toward a blue sky.)
(Minor revisions 1/19/25)

Thursday, January 09, 2025

New Logics for Social Media and AI - "Whom Does It Serve?"

[Pinned -- Originally published 12/7/24 at 3:47pm]

[UPDATE 4/2/25: Prosocial Design Network w/ Richard Reisman: Middleware & Prosocial Design - Recap and video of session with Julia Kamin covers key points from this recent work.]

[UPDATED 12/17/24 to add Shaping the Future of Social Media with Middleware  (with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and others, Foundation for American Innovation, Georgetown University McCourt School of Public Policy, and Stanford Cyber Policy Center (details below).] 

A collection of recent works present related aspects of new logics for the development of social media and AI - to faithfully serve individuals and society, and to protect democratic freedoms that are now in growing jeopardy. The core question is "Whom does it serve?"*

This applies to our technology -- first in social media, and now as we build out broader and more deeply impactful forms of AI. It is specifically relevant to our technology platforms, which now suffer from "enshittification" as they increasingly serve themselves at the expense of their users, advertisers, other business partners, and society at large. These works converge on how this all comes down to the interplay of individual choice (bottom-up) and social mediation of that choice (top-down, but legitimized from the bottom up). That dialectic shapes the dimension of "whom does it serve?"* for both social media and AI.

Consider the strong relationship between the “social” and “media” aspects of AI -- and how that ties to issues arising in problematic experience with social media platforms that are already large scale:

  • Social media increasingly include AI-derived content and AI-based algorithms, and conversely, human social media content and behaviors increasingly feed AI models.
  • The issues of maintaining strong freedom of expression, as central to democratic freedoms in social media, translate to and shed light on similar issues in how AI can shape our understanding of the world – properly or improperly.

These works focus on 1) how the need for direct human agency applies to AI, 2) how that same need in social media requires deeper remediation than commonly considered, 3) how middleware interoperability for enabling user choice is increasingly being recognized as the technical foundation for this remediation, and 4) how freedom (in both natural and digital worlds) is not just a matter of freedom of expression, but of freedom of impression (the choice of whom to listen to). 

Without constant, win-win focus on this essential question of "whom does it serve?" as we develop social media and AI, we risk the dystopia of "Huxwell" (a blend of Huxley's Brave New World and Orwell's 1984).**  

  • New Perspectives on AI Agentiality and Democracy: "Whom Does It Serve?"
     (with co-author Richard Whitt, Tech Policy Press, 12/6/24) - Building toward optimal AI relationships and capabilities that serve individuals, society, and freedom requires new perspectives on the functional dimensions of AI agency and interoperability. Individuals should be able to just say "Have your AI call my AI." To do that, agents must develop in two dimensions:
    1. Agenticity, a measure of capability - what can it do?
    2. Agentiality, a measure of relationship - whom does it serve?
  • Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) (Tech Policy Press, 10/24/24) - Overview of new framing that strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media. Human discourse is, and remains, a social process based on three essential pillars that must work together:
    1. Agency
    2. Mediation
    3. Reputation 
    ...Supplementary to this:
  • NEW: Shaping the Future of Social Media with Middleware (Foundation for American Innovation and Georgetown University McCourt School of Public Policy, 12/17/24) -- Major team effort with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and many other notables. This white paper builds on the 4/30/24 symposium that I helped organize, held at the Stanford Cyber Policy Center, which assembled leading thinkers at the nexus of social media, middleware, and public policy. It is the only comprehensive white paper to offer a thoughtful assessment of middleware’s promise, progress, and issues since the 2020 Stanford Group paper. The goal is to operationalize the concept of middleware and provide a roadmap for innovators and policymakers. (The above two pieces extend this vision in broader and more forward-looking directions.)
  • New Logics for Governing Human Discourse in the Online Era (CIGI Freedom of Thought Project, 4/25/24) - Leading into the above pieces, this policy brief pulls together and builds on ideas about how freedom of impression guides freedom of expression without restricting it, and how combining 1) user agency, 2) a restored role for our traditional social mediation ecosystem, and 3) systems of social trust can synergize that process for the online era. It offers a proactive vision of how that can enable social media to become ever more powerful and beneficial "bicycles for our minds."
*Alluding to the Arthurian legend of the Holy Grail.
**Suggested by Jeff Einstein and teased in his video.

(Originally published 12/7/24 at 3:47pm, revised 12/22/24 -- with dateline reset to pin it at or near the top of this blog)

Wednesday, January 08, 2025

Beyond the Pendulum Swings of Centralized Moderation (X/Twitter, Meta, and Fact Checking)

The crazy pendulum swings of centralized moderation by dominant social media platforms are all over the news again, as nicely summarized by Will Oremus, and explored by a stellar Lawfare panel of experts. 

We have seen one swing toward what many (mostly the right) perceive as blunt over-moderation and censorship that intensified around the 2016 election and its aftermath. And now, with the 2024 election and its aftermath, a swing away, to what others (mostly the left) view as irresponsibly enabling uncontrolled cesspools of anger, hate, and worse. This pendulum is clearly driven in large part by the political winds (which it influences, in turn), a question of whose ox gets gored, and who has the power to influence the platforms -- "Free speech for me, but not for thee."

This will remain a disruptive pendulum -- one that can destroy the human community and its collective intelligence -- until we step back and take a smarter approach to the context and diversity of our perceptions of speech. More reliance on community moderation, as X/Twitter and Meta/Facebook/Threads are now doing, points theoretically in the right direction -- to democratize that control -- but is far from being effective. Even if they really try, centralized platforms are inherently incapable of doing that well.

Middleware as systems thinking on how to do better

Three of the speakers on the Lawfare panel were coauthors/contributors with me on a comprehensive white paper, based on a symposium, about a partially decentralized approach called "middleware." It proposes an open market in independent curation and moderation services that sit between each user and their platforms. These services can do community-based moderation in a fuller range of ways, at a community level, much as traditional communities have always done "moderation" (better thought of as "mediation") of how we communicate with others. This new middleware paper explains the basics, why it is a promising solution, and how to make it happen. (For a real-world example of middleware, though still in its infancy, consider Bluesky.)

As for the current platform approach to "community moderation," many have critiqued it, but I suggest a deeper way to think about this, drawing on how humans have always mediated their speech. Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) is a recent piece on extending current ideas on middleware to support this solution that has evolved over centuries of human society. The three pillars are: User Agency, Social Mediation, and Reputation. 

Toward effective community moderation

The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings (from 2018) digs deeper into why simplistic attempts at community surveys fail, and how the same kind of advanced analysis of human inputs that made Google win the search engine wars can be applied to social media. A 2021 post and a 2024 policy brief update that.

To understand why this is important, consider what I call The Zagat Olive Garden Problem. In the early 90s, I noticed this oddity in the popular Zagat guide, a community-rating service for restaurants: The top 10 or so restaurants in NYC were all high-priced, haute cuisine or comparably refined, except one: Olive Garden. Because Olive Garden food was just as good? No, because far more people knew it from their many locations, and they were attracted to a familiar brand with simple, but tasty, food at very moderate prices, and put off by very high prices. 

Doing surveys where all votes are counted equally may sound democratic, but foolishly so. We really want ratings from those with a reputation for tastes and values we relate to (but leavened with healthy diversity on how we should broaden our horizons). That is what good feed and recommender algorithms must do. For that, we need to "rate the raters and weight the ratings," and do so in the relevant context, as that post explains.
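As a toy illustration of "rate the raters and weight the ratings" (a deliberately simplified sketch, not the algorithm from the cited posts, with all restaurant data invented), a rater's weight can be derived from agreement with one's own past ratings, so a familiar-but-mediocre chain no longer tops the list just by volume:

```python
# Toy "rate the raters, weight the ratings" sketch (illustrative only).
# Each rater is weighted by how well their past ratings agree with mine;
# item scores are then weighted averages rather than raw one-person-one-vote.

def rater_weight(my_ratings, their_ratings):
    """Agreement on commonly rated items -> reputation weight in [0, 1]."""
    shared = set(my_ratings) & set(their_ratings)
    if not shared:
        return 0.5  # unknown rater: neutral weight
    diffs = [abs(my_ratings[i] - their_ratings[i]) for i in shared]
    return 1.0 - sum(diffs) / (len(diffs) * 4)  # ratings on a 1..5 scale

def weighted_score(item, raters, my_ratings):
    """Reputation-weighted average rating for one item."""
    num = den = 0.0
    for r in raters:
        if item in r:
            w = rater_weight(my_ratings, r)
            num += w * r[item]
            den += w
    return num / den if den else None

me = {"Le Bernardin": 5, "Olive Garden": 2}
raters = [
    {"Le Bernardin": 5, "Olive Garden": 2, "Daniel": 5},  # tastes like mine
    {"Le Bernardin": 1, "Olive Garden": 5, "Daniel": 2},  # opposite tastes
]
score = weighted_score("Daniel", raters, me)
```

Here the like-minded rater gets weight 1.0 and the opposite-tasted rater 0.125, so the score reflects my taste community rather than a raw headcount. Real systems would add the diversity leavening noted above, so that weighting does not become a pure echo chamber.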

Back to the pendulum analogy, consider how pendulums work -- especially the subtle phenomenon of entrainment (perhaps blurring details, but suggestive): 

  • Back in 1665, Huygens, who had invented the pendulum clock, discovered that if two such clocks were mounted on the same wall, their pendulum swings gradually became synchronized. That is because each interacts with the shared wall to exchange energy in a way that brings them into phase.
  • Simplistically, moderation is a pendulum that can swing from false positives to false negatives. Each conventional platform has one big pendulum controlled by one owner or corporation that swings with the political wind (or other platform influences). Platform-level community moderation entrains everyone to that one pendulum, whether it fits or not -- resulting in many false positives and false negatives, often biased to one side or the other.
  • Alternatively, a distributed system of middleware services can serve many individuals or communities, each with their own pendulums that swing to their own tastes.
  • Within communities, these pendulums are tightly linked (the shared wall) and tend to entrain.
  • Across communities, there are also weaker linkages, in different dimensions, so they still nudge toward some entrainment.
  • In addition to these linkages in many dimensions, instead of being rigid, the "walls" of human connection are relatively elastic in how they entrain.
  • The Google PageRank algorithm is based on advanced math (eigenvalues) and can treat individual search engine users and their intentions as clustering into diverse communities of interest and value -- much like a network of pendulums all linked to one another by elastic "walls" in a multidimensional array.
  • Similar algorithms can be used by diverse middleware services to distill community ratings with the same nuanced sensitivity to their diverse community contexts. Not perfectly, but far better than any centralized system.
In addition, part of the problem with current community notes, and any form of explicit ratings of content, is getting enough people to put in the effort. Just as Google PageRank uses implicit signals of approval that users do anyway (linking to a page), variations for social media can also use implicit signaling in the form of likes, shares, and comments (and more to be added) to draw on a far larger population of users, structured into communities of interest and values.
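To illustrate the eigenvalue idea, here is a generic power-iteration sketch (in the spirit of PageRank, not Google's actual implementation, with all users invented) that treats implicit signals such as shares or likes as endorsement edges and iterates to a stable reputation score:

```python
# Generic power-iteration sketch: shares/likes treated as endorsement edges.
# endorsements[a] lists the users whose posts a endorses (shares/likes).
endorsements = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["bob"],
    "dave":  ["bob"],
}

def reputation(endorsements, damping=0.85, iters=50):
    """Iterate until each user's score reflects endorsements from
    other well-endorsed users (the dominant-eigenvector idea)."""
    users = list(endorsements)
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in users}
        for u, outs in endorsements.items():
            share = rank[u] / len(outs) if outs else 0.0
            for v in outs:
                new[v] += damping * share
        rank = new
    return rank

rep = reputation(endorsements)
```

In this tiny graph, bob and carol end up with high reputation because well-endorsed users endorse them, while dave, whom nobody endorses, stays near the floor. Community-aware middleware would run something like this within and across clusters of users rather than globally.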

Of course there are concerns that the decentralization of middleware might worsen fragmentation and polarization. While it might have some such effect in some contexts, there is also the opposing effect of reducing harmful virality feedback cascades. Consider the fluid dynamics of an old-fashioned metal ice cube tray: water sloshing in the open tray forms much more uncontrollable waves than it does with the separating insert in place.

The only effective and scalable solution to social media moderation/curation/mediation is to build distributed middleware services, along with tools for orchestrating a selection of them to compose our individual feeds. That, too, can be done well or badly -- but only a collective effort to do our best, on a suitably distributed basis, can make it succeed. 

Thursday, October 24, 2024

Now on Tech Policy Press: Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three)

My new short article, Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three), is now on Tech Policy Press -- after extensive workshopping with dozens of experts. 

This new framing strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media.

Human discourse is a social process. It depends on three pillars that must work together:
  1. Agency
  2. Mediation
  3. Reputation 
Lack of attention to all three pillars and their synergy has greatly harmed current social media. Without strong support for all three pillars -- enabled by middleware for interoperation and open innovation -- social media will likely struggle to balance chaos and control. 

Advocates of middleware have brought increasing attention to the need for user agency -- but without strong support for the other two pillars, there remain many issues. Agency must combine with mediation and reputation to rebuild the context of "social trust" that is being lost. By enabling attention to all three pillars, open, interoperable middleware can help to:
  • Organically maximize rights to expression, impression, and association in win-win ways,
  • Cut through speech governance dilemmas that lead to controversy and gridlock, and
  • Support democracy and protect against chaos, authoritarianism, or tyranny of the majority.
There are also helpful supplements on my blog: 
Broader background for these pillars and why we need to attend to them is in my CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

(My thanks to the many experts who have provided encouragement and helpful feedback in individual discussion and at the April FAI/Stanford symposium on middleware -- and special thanks to Luke Thorburn for invaluable suggestions on simplifying the presentation of these ideas.)

---
[Update 10/31/24:] In addition to the foundation on "social trust" by Laufer and Nissenbaum that I cited in my article, I just found an enlightening sociological perspective from Thorsten Jelinek, How Social Media and Tokenization Distort the Fabric of Human Relations.

Sunday, October 13, 2024

Making Social Media More Deeply Social with Branded Middleware

This vision of social media future is meant to complement and clarify the vision behind many of my other works (such as this, see list of selected pieces at the end). It assumes you have come here after seeing at least one of those (but includes enough background to also be read first).

Business opportunity – start now, and grow from there:

     Managers of the NY Times, small local news services, or any other organization that has built a strong community can use the following model to build a basic online middleware service business, starting now.

     For example, Bluesky could be a base platform for building initial proof-of-concept services along these lines that could develop and grow into a major business.

[If you are impatient, jump to the section on "Branding"]

It is clear that social media technology is not serving social values well. But it is not so clear how to do better. I have been suggesting that the answer begins in learning from how we, as a society, curated information flows offline. (These issues are also increasingly relevant to emerging AI.)

This piece envisions how an offline curation “brand” with an established following – like the New York Times, or many others, including non-commercial communities of all kinds – could extend their curatorial influence, and the role of their larger community, more deeply into the digital future of thought. (Of course, much the same kind of service can be built as a greenfield startup, as well, but having an established community reduces the cold-start problem.)

Building on middleware – the Three Pillars

I and many others have advocated for “middleware” services, a layer of enabling technology that sits between users and platforms to give control back to users over what goes into each of our individual feeds. But that is just the start of how that increased user agency can support healthy discourse and limit fragmentation and polarization in our globally online world.

 The pillars I have been writing about are:

  1. Individual agency, the starting point of democratic free choice over what we say to whom, which individuals we listen to, and what groups we participate in.
  2. Social mediation, the social processes, enabled by an ecosystem of communities and institutions of all kinds that influence and propagate our thoughts, expression, and impression. (For simple background, see What Is a Social Mediation Ecosystem?)
  3. Reputation, the quality metrics, intuitively developed and shared to decide which individuals and communities are trustworthy, and thus deserve our attention (or our skepticism).

Middleware can sit on top of our basic social networking platforms to support the synergistic operation of all three pillars, and thus help make our discourse productive.

In the offline world of open societies, there is no single source of “middleware” services that guide us, but an open, organic, and constantly adjusted mix of many sources of collective support. People grow up learning intuitively to develop and apply these pillars in ever-changing combinations.

Software is far more rigid than humans. Online middleware is a technique for enabling the same kind of diversity and “interoperation” – of attention agent services for us to choose from, and to help groups fully participate in them – so we can dynamically compose the view of the world we want at any point in time.

Bluesky currently offers perhaps the best hint at how middleware services will be composed, steered, and focused – as our desires, tasks, and moods change. Just keep in mind that current middleware offerings are still just infants learning to crawl.

As we may think …together

Vannevar Bush provided a prescient vision of the web in 1945 (yes, 1945!) – in his Atlantic article “As We May Think.” Its technology was quaint, but the vision of how humans can use machines to help us think was very on-point, and inspired the creation of the web. Now it is time for a next level vision – of how we may think together – even if the details of that vision are still crude.

Current notions of middleware have been focused primarily on user agency, and just beginning (as in Bluesky) to consider how we need not just a choice of a single middleware agent service, but to flexibly compose and steer among many attention agent services. Steve Jobs spoke of computers as “bicycles for our minds.” As we conduct our discourse, middleware-based attention agent services can give us handlebars to steer them and gear shifts to deal with varying terrain and motivations. They can give us “lenses,” for focusing what we see from our bicycles.

To build out this capability, we will need at least two levels of user-facing middleware services:

     Many low-level service agents that curate for specific objectives of subject domain, styles, moods, sources, values, and other criteria.

     One or more high-level service agents that make it easy to orchestrate those low-level agents, as we steer them, shift gears, and change our focus, creating a consolidated ranking that gives us what we want, and screens out what we do not want, at any given time.
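These two levels might be sketched as follows, with all agent names and topics invented for illustration: low-level agents each score posts against one objective, and a high-level orchestrator blends those scores under the user's current "steering" weights while screening out exclusions.

```python
# Hypothetical two-level middleware sketch: low-level agents score posts
# for one criterion each; a high-level orchestrator blends them per the
# user's current "steering" weights and screens out what the user excludes.

def by_topic(topic):
    return lambda post: 1.0 if post["topic"] == topic else 0.0

LOW_LEVEL_AGENTS = {          # each curates for one objective
    "local-news": by_topic("news"),
    "science":    by_topic("science"),
    "humor":      by_topic("humor"),
}

def orchestrate(posts, weights, exclude=()):
    """High-level agent: weighted blend of low-level scores, minus exclusions."""
    def score(post):
        return sum(w * LOW_LEVEL_AGENTS[name](post)
                   for name, w in weights.items())
    kept = [p for p in posts if p["topic"] not in exclude]
    return sorted(kept, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "news"},
    {"id": 2, "topic": "science"},
    {"id": 3, "topic": "humor"},
]
# Morning commute: mostly news, a little humor, no science right now.
feed = orchestrate(posts, {"local-news": 0.8, "humor": 0.2}, exclude=("science",))
```

Changing the weights is the "handlebars and gear shifts": the same low-level agents yield a different consolidated feed as the user's task or mood changes, without the user rebuilding anything.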

Just how those will work will change greatly over time as we learn to drive these bicycles, and providers learn to supply useful services – “we shape our tools and our tools shape us.” Emerging AI in these agents will increase the ease of use, and the usable power of the bicycles – but even in the age of AI, the primary intelligence and judgment must come from the humans that use these systems and create the terrain of existing and new information and ideas (not just mechanically reassembled tokens of existing data) that we steer through.

====================================================
Here is the business opportunity:
====================================================

Branding – a “handle” for intuitively easy selection – and signaling value

Yes, choosing middleware services seems complicated, and skeptics rightly observe that most users lack the skill or patience to think very hard about how to steer these new bicycles for our minds. But there are ways to make this easy enough. One of the most promising and suggestive is branding – a powerful and user-friendly tool for reliably selecting a service to give desired results. Take the important case of news services:

     If we try to select news stories at the low level of all the different dimensions of choice – subject matter, style, values, and the like – of course the task would be very complex and burdensome.

     But many millions easily choose what mix of CNN, MSNBC, Fox News, PBS, or less widely used brands they want to watch at any time. The existing brand equity and curation capabilities of such media enterprises are now being squandered by digital platforms that offer such established service brands only rudimentary integration into their social media curation processes. With proper support, both established and new branded middleware services can establish distinctive sensibilities that can make choice easy.

Importantly, branding also serves marketing and revenue functions in powerful ways that can be exploited by middleware services. Once established and nurtured, a brand attracts users on the basis that it offers known levels of quality and caters to selective interests and tastes. "It's Not TV, It's HBO" encapsulated the power of HBO's brand in the heyday of premium TV.

The New York Times as a branded curation community: 

Consider the New York Times as just one example of branded curation middleware that could serve as a steerable lens into global online discourse. It could just as well be News Corp, CNN, Sports Illustrated, or Vogue – or your local newspaper (if you still have one!) – or your town or faith community, a school, a civil society organization, a political party, a library, a bowling league – or whatever group or institution that wants to support its uniquely focused (but overlapping and not isolated) segment of the total social mediation ecosystem.

Consider how all three pillars can work and synergize in such a service:

User agency comes in through our participation as readers, and as speakers in any relevant mode – posts, comments, likes, shares, letters to the editor, submissions for Times publications. This can be addressed at (at least) two levels:

     Low level attention service agents that find and rank candidate items for our feeds and recommenders. This is much as we now choose from an extensive list of available email newsletters from the Times.

     Higher level middleware composing agents would help compose these low-level choices – and facilitate interoperation with similar services from other communities – to build a composite feed of items from the Times and all our other chosen sources. They could offer sliders to decide what mix to steer into a feed at any given time, and saved presets to shift gears for various moods, such as news awareness/analysis, sports/entertainment, challenging ideas, light mind expansion, and diversion/relaxation.
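
One way to picture the sliders and gear shifts: a saved preset is just a named weight vector over the low-level sources, and moving a slider adjusts one weight. A minimal sketch, with hypothetical preset and source names:

```python
# Hypothetical presets: named weight vectors over low-level feed sources.
# Shifting gears for a new mood swaps the active preset; a slider nudges
# one weight, with the rest renormalized so the mix still sums to 1.
PRESETS = {
    "news_awareness": {"nytimes_news": 0.7, "local_news": 0.2, "sports": 0.1},
    "diversion":      {"nytimes_news": 0.1, "local_news": 0.1, "sports": 0.8},
}

def shift_gears(mood: str) -> dict:
    """Return a fresh copy of the saved preset for the chosen mood."""
    return dict(PRESETS[mood])

def slide(weights: dict, source: str, value: float) -> dict:
    """Move one slider, then renormalize so the weights still sum to 1."""
    w = dict(weights)
    w[source] = value
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}
```

The user never touches the underlying ranking machinery – only a mood name and a handful of sliders.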

(Different revenue models may apply to different services, levels, and modes of participation, just as some NY Times features now may cost extra.)

Social mediation processes come into our user interface at two levels of curation:

     User-driven curation: Much like current platforms, the Times low-level services can rank items based on signals from the community of Times users – their likes, shares, comments, and other signals of interest and value. This might distinguish subscribers from non-subscribing readers. Subscribers might be more representative of the community, but non-subscribers might bring important counterpoints. Other categories could include special users, such as public figures in various political, business, or professional categories. As such services mature, these signals can be expanded to be far more richly nuanced – for example, giving clearer feedback and categorizing it by subject domains of primary involvement.

     Expert-driven curation: The Times editorial team can be drawn on (and potentially augmented with supportive levels of AI) to provide high quality expert curation services in much the same way, in whatever mix desired. This could include both their own contributions, and their reactions to readers’ contributions.

Reputation systems that keep score of quality and trust feedback on both users and content items – that arise from those mediation processes – can also be valuably focused on the Times community:

     At a coarse level, we might make broad assumptions that differentiate the editorial and journalism staff, subscribers, and non-subscribing readers (as part of the basic mediation process), but a reputation system could distinguish among very different levels of reputation for quality of participation in many dimensions, such as expertise, judgment, clarity, wisdom, civility, and many more – in each of many subject domains.

     Reputation systems might also be tuned to Times reporters and editors, and their inputs to reputations of content items and users. But the true power of this kind of service is its crowdsourcing from not just the Times staff, but from its unique extended community. One could choose to ignore the staff, and just turn their lens on the community, or vice versa.
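
As a rough illustration of the data structure involved (all names hypothetical), a reputation record might keep separate running scores per dimension within each subject domain, built up from quality/trust feedback arising in the mediation process:

```python
from collections import defaultdict

class Reputation:
    """Per-user reputation: separate scores per dimension within each domain."""

    def __init__(self):
        # domain -> dimension -> [sum of feedback, count of feedback events]
        self._scores = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))

    def record(self, domain: str, dimension: str, feedback: float) -> None:
        """Log one feedback event (e.g., 0.0..1.0) for a domain/dimension."""
        entry = self._scores[domain][dimension]
        entry[0] += feedback
        entry[1] += 1

    def score(self, domain: str, dimension: str) -> float:
        """Average feedback so far; 0.0 where there is no track record yet."""
        total, n = self._scores[domain][dimension]
        return total / n if n else 0.0
```

The point of the domain/dimension split is exactly the one made above: a user highly rated for expertise in economics earns no presumption of civility, or of expertise in sports.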

Enterprise-class community support integration – and simple beginnings

To fully enable this would require new operational support services that integrate the operation of open online social media platform services (like Bluesky now, or maybe someday Threads) with the operations of the Times. As the technology for multi-group participation is built out beyond current rudimentary levels, it can integrate with the operation of each group, including the enterprise-class systems that drive the operations of the Times. This might include the kind of functionality and integration offered by CRM (customer relationship management) systems for managing all of the Times’ interactions with its customers, as well as the CMS (content management system) used to manage its journalism content, and the SMS (subscription management systems) that manage revenue operations.

Doing all of this fully will take time and effort – but some of it could be done relatively easily, such as an attention agent that ranks items based on Times community members' signals as distinct from those of the general network population. The Times could begin a trial of this in the near term by exploiting the basic middleware capabilities already available – creating a Bluesky server instance (using the open Bluesky server code and interoperation protocols) with its own custom feed algorithms.
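
The ranking logic of such an attention agent could be as simple as weighting engagement signals from known community members more heavily than those from the general network. This sketch illustrates only that logic, under assumed inputs (a membership set and per-post liker lists) – it does not use the actual Bluesky/AT Protocol feed-generator API:

```python
def community_rank(posts, likes_by_post, community_members,
                   member_weight=3.0, general_weight=1.0):
    """Rank post ids by like counts, with community members' likes
    (e.g., Times subscribers) counting more than general-network likes."""
    def score(post_id):
        return sum(member_weight if user in community_members else general_weight
                   for user in likes_by_post.get(post_id, []))
    return sorted(posts, key=score, reverse=True)
```

A real deployment would draw the membership set from the organization's subscriber systems and serve the result through a custom feed, but the differential weighting above is the essential first step.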

A large, profitable (or otherwise well-funded) business like the Times could develop and operate middleware software itself (if the social media platform allows that, as Bluesky does), but smaller organizations might need a shared “middleware as a service” (MaaS) software and operations provider to do much of that work.

A user steered, intuitively blended, mix of diverse sub-community feeds

Even at a basic level, imagine how doing this for many such branded ecosystem groups could enable users to easily compose feeds that bring them a diverse mix of quality inputs, and to steer and adjust the lenses in those feeds and searches to focus their view as they desire, when they desire.

Similar middleware services could be based on all kinds of groups – for example:

     Local news and community information services – much like the Times example, for where you live now, used to live, or want to live or visit.

     Leadership and/or supporters of political parties or civil society organizations – issues, platforms/policies, campaigns, turnout, surveys, fact-checking, and volunteering.

     Professional and/or amateur players and/or coaches for sports – catering to teams, fans, sports lore, and fantasy leagues.

     Faculty, students, and/or alumni from universities – selecting for students, faculty, alumni, applicants, parents.

     Librarians and/or card holders for library systems – selecting for discovery, reading circles, research, criticism, and authors.

     Leaders and/or adherents to faith communities – for community news, personal spiritual issues, and social issues.

Consider how the Times example translates to and complements any of these other kinds of groups (most easily if enabling software is made available from a MaaS provider). Users could easily orchestrate their control over diverse sources of curation and moderation – selecting from brands with identities they recognize – without the prohibitive cognitive load of controlling all the details, the burden that critics now argue would doom middleware because few would bother to make selections. New brands can also emerge and gain critical mass, using this same technology base.

By drawing on signals from expert and/or ordinary members of groups that have known orientations and norms, users might easily select mixes that serve their needs and values – and shift them as often as desired.

Context augmentation

(Cartoon by Peter Steiner in The New Yorker)

"On the Internet, nobody knows you're a dog" -- or a lunatic, or a bot. Famously observed in Peter Steiner's 1993 cartoon, this loss of context became known as "context collapse," broadly understood as a core reason why internet discourse is so problematic. Much of a message's meaning derives from context external to the message itself -- who is speaking to whom, from and to what community, with what norms and assumptions. That context has largely been lost in current social media (and in emerging AIs).

Consider how the kind of social mediation ecosystem processes envisioned here differ from what current major platforms offer in the way of community support -- and thus fail to provide essential context: 

  • They let you create a personal set (a unidirectional pseudo-community) of friends or those you follow, but increasingly focus on engagement-based ranking into feeds -- because they want to maximize advertising revenue, not the quality of your experience. 
  • They rank based on likes, shares, and comments from a largely undifferentiated global audience, with little opportunity for you to influence who is included. 
  • They may favor feedback from rudimentary "groups" that you join, but provide very limited support to organizers and members to make those groups rich and cohesive. 
  • They may cluster you into what they infer to be your communities of interest, but without any agency from you over which groups those are, except for the rudimentary "groups" you join.
  • And, even if they did want to serve your objectives, not theirs, they would be hard-pressed to come anywhere near the richness and diversity of truly independent, opt-in, community-driven middleware services that are tailored to diverse needs, contexts, and sustaining revenue models.

Doing moderation the old-fashioned way – enabled by middleware

Instead of being seen as a magical leap in technology, or an off-putting cognitive burden on users, middleware can be understood as a way to recreate in digital form the formal and informal social structures people have enjoyed for centuries – individually composed interaction with the wisdom of organically evolved social mediation ecosystems and intuitive informal reputation systems.

What at first seems complicated, from the perspective of current social media, is, at its core, little more complicated than the structure of traditional human discourse – building on key functions and elements of the social mediation and reputation ecosystems – all legitimized by choices of individual agency. Yes, that is complicated, but humans have learned over millennia to intuitively navigate this traditional web of communities and reputations. Yes, make it as simple as possible, but no simpler!

Creating an online twin of such a web of community ecosystems will not happen overnight, but many industries have already built out online infrastructures of similar complexity – in finance, manufacturing, logistics, travel, and e-commerce. Middleware is just a tool for enabling software systems to work together in ways similar to what humans (and groups of humans) do intuitively. The time to start rebuilding those ecosystems is now.

____________________

Related works:

     My November 2023 post – A New, Broader, More Fundamental Case for Social Media Agent "Middleware" – introduced the Three Pillars framing, and embeds a deck that adds details and implications not yet fully addressed elsewhere.

     Core ideas addressed more formally in my April 2024 CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

     Very simply -- What Is a Social Mediation Ecosystem? (and Why We Need to Rebuild It). 

     Other related works are listed on my blog.