Sunday, April 06, 2025

Being Human in 2035 -- How Are We Changing in the Age of AI?

My recent predictive essay has been included in Being Human in 2035 -- How Are We Changing in the Age of AI? -- a very thought-provoking compendium from the Imagining the Digital Future Center at Elon University by Lee Rainie and Janna Anderson: 

Nearly 300 of the experts in this early 2025 study responded to a series of three quantitative questions, and nearly 200 wrote predictive essays on how the evolution of artificial intelligence (AI) systems and humans might affect essential qualities of being human in the next decade. Many are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act and relate to one another.

This snippet from my contribution was featured (with those from eight others) in the Executive Summary (p. 5):

Over the next decade we will be at a tipping point in deciding whether uses of AI as a tool for both individual and social (collective) intelligence augments humanity or de-augments it. We are now being driven in the wrong direction by the dominating power of the ‘tech-industrial complex,’ but we still have a chance to right that. Will our tools for thought and communication serve their individual users and the communities those users belong to and support, or will they serve the tool builders in extracting value from and manipulating those individual users and their communities?
… If we do not change direction in the next few years, we may, by 2035, descend into a global sociotechnical dystopia that will drain human generativity and be very hard to escape. If we do make the needed changes in direction, we might well, by 2035, be well on the way to a barely imaginable future of increasingly universal enlightenment and human flourishing.

My full contribution is in the full report (p. 112) -- with these snippets in sidebars:

While there is increasingly strong momentum in worsening dehumanization, there is also a growing techlash and entrepreneurial drive that seeks to return individual agency, openness and freedom with the drive to support the human flourishing of the early web era. Many now seek more human-centered technology governance, design architectures and business models.
...Human discourse is, and remains, a social process based on three essential pillars that must work together: Individual Agency, Social Mediation, Reputation. Without the other two pillars, individual agency might lead to chaos or tyranny. But without the pillars of the social mediation ecosystem that focuses collective intelligence and the tracking of reputation to favor the wisdom of the smart crowd – while remaining open to new ideas and values – we will not bend toward a happy middle ground.

…We need to return to how society once relied largely on self-governance that avoided the sterile thought control of walled gardens, centrally managed ‘public’ forums and the abuses of company towns. We relied instead on a social mediation ecosystem of individuals participating in and giving legitimacy to communities of interest and value to set norms and socially construct our reality.

I hope you will read my full contribution -- and of course the very insightful other contributions from the many eminent contributors.

Thursday, January 16, 2025

#FreeOurFeeds - Another Step Toward the Vision

As perhaps the first to use the phrase "free our feeds" and the Twitter hashtag #FreeOurFeeds, I find it gratifying to see the launch of the Free Our Feeds Foundation to embark on a major step toward that vision.

Why? There have been many small steps to free our feeds, now seen as an urgent need to "billionaire-proof" our social media connectivity. Musk and then Zuck have shown the perils of the "loaded weapon" we have left on the table of online discourse, by so shamelessly picking it up to use for their own ends. We can only guess where they -- and others like them -- or worse -- will point their weapons next.

What? Many see the Mastodon "fediverse" as a major step in this direction, arguably so -- and a similar move to open governance of the fediverse, also on 1/13, is a second major step there.* But many are coming to see Bluesky as a larger step toward both horizontal and vertical interoperability for the full range of functions needed to free us from platform lock-in and manipulation. I am hopeful that both efforts will succeed, and that those ecosystems will grow -- and gain high levels of interoperability with one another (and with future protocols).

How? Bluesky currently seems to be the most open to building high levels of function and extensibility. We are in the early days of social media, just learning to crawl. To leverage this technology so that we can walk, run, and fly -- while remaining democratic and free -- it must be kept open to the control of users and communities. That will enable us to re-energize the social mediation ecosystem, as I explained recently (and in many other works listed here).

A key aspect of Bluesky and its AT Protocol (not yet in the Mastodon architecture, as I understand it) is that three levels can each be separately managed and replicated: 1) the app, 2) the relays that tie app instances together, and 3) the independently selectable feed algorithms. Federation of the relays is important because they are resource-heavy services, not very amenable to lightly resourced community managers, but capable of being secured and managed by trusted organizations to support advanced algorithms. That is also important for preserving privacy. The Free Our Feeds Foundation promises to take a large step in that direction for the Bluesky ecosystem.
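
The layered independence described here can be sketched in code. This is a purely illustrative Python model (my own toy types, not the actual AT Protocol or its APIs), showing why separating the app, relay, and feed-algorithm levels lets each one be replaced without disturbing the others:

```python
# Illustrative sketch only -- invented types, not real AT Protocol interfaces.
from dataclasses import dataclass
from typing import Callable, List

Post = dict  # e.g. {"text": ..., "likes": ...}

@dataclass
class Relay:
    """Resource-heavy aggregation layer; federated across trusted operators."""
    name: str
    firehose: List[Post]

@dataclass
class FeedAlgorithm:
    """Independently selectable ranking logic."""
    name: str
    rank: Callable[[List[Post]], List[Post]]

@dataclass
class App:
    """User-facing instance; composes any relay with any feed algorithm."""
    name: str
    relay: Relay
    feed: FeedAlgorithm

    def timeline(self) -> List[Post]:
        return self.feed.rank(self.relay.firehose)

# Because the levels are decoupled, swapping one leaves the others untouched:
posts = [{"text": "a", "likes": 1}, {"text": "b", "likes": 5}]
relay = Relay("community-relay", posts)
chrono = FeedAlgorithm("chronological", lambda ps: list(ps))
popular = FeedAlgorithm("most-liked", lambda ps: sorted(ps, key=lambda p: -p["likes"]))

app = App("my-app", relay, chrono)
app.feed = popular  # switch feed algorithms without changing app or relay
```

The point of the sketch is only the decoupling: an app instance, a relay, and a feed algorithm are separate values that can each be independently replicated or replaced.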

As Cory Doctorow, Mr. Enshittification himself, said of this effort:

If there's a way to use Bluesky without locking myself to the platform, I will join the party there in a hot second. And if there's a way to join the Bluesky party from the Fediverse, then goddamn I will party my ass off.

An opening for entrepreneurship? These moves create potentially huge opportunities to build better app instances, better relays, better algorithms, and new levels of services and user experiences to make this all easy to use, powerful, trustworthy, and well-engineered. An effective ecosystem creates a large pie for many players -- and levels the playing field -- unlike the concentration and extractive business models of current platforms. The example of how the web ate CompuServe, Prodigy, and AOL to grow far larger businesses may repeat itself.

My personal interest: I began using the rallying cry of Free Our Feeds! in a blog post on 2/11/21 (the earliest use of that phrase I could find on Google), and the hashtag #FreeOurFeeds on Twitter on 2/13/21 (also apparently the first use). I continued using this hashtag often on Twitter, and gave a fuller treatment of the concept in a 4/22/21 Tech Policy Press article that included the diagram here.

On rereading that, "The Internet Beyond Social Media Thought-Robber Barons," I was pleased at how well it has stood up in articulating the vision that is now catching fire -- and at how forward-looking it still is on where that vision can take us.

Of course 2021 was not long ago, and many people were becoming advocates for algorithmic choice. But I also take pride in being perhaps the longest-serving advocate for these ideas -- and perhaps the one looking farthest ahead. For this forward view, see especially the recent works synopsized in this post, and this fuller list.

What I may have underestimated is how the dominant platforms would seed their own destruction without need for regulatory action -- and how grassroots innovation might be enough to replace them. Movements like #FreeOurFeeds can create a digital republic…"if you can keep it." (Which is not to say we should not legislate to support that as well.)

A re-formation of social media? The hope is that Bluesky Social PBC and Free Our Feeds Foundation (along with similar Mastodon efforts) can catalyze a vibrant open ecosystem -- to create a new infrastructure for social media that lets a thousand flowers bloom -- and can grow and evolve over many sociotechnical generations.

(*It is amusing that the image in the Mastodon announcement seems to show a Mastodon looking over a chasm toward a blue sky.)
(Minor revisions 1/19/25)

Thursday, January 09, 2025

New Logics for Social Media and AI - "Whom Does It Serve?"

[Pinned -- Originally published 12/7/24 at 3:47pm]

[UPDATE 4/2/25: Prosocial Design Network w/ Richard Reisman: Middleware & Prosocial Design -- Recap and video of session with Julia Kamin covers key points from this recent work.]

[UPDATED 12/17/24 to add Shaping the Future of Social Media with Middleware (with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and others; Foundation for American Innovation, Georgetown University McCourt School of Public Policy, and Stanford Cyber Policy Center; details below).]

A collection of recent works presents related aspects of new logics for the development of social media and AI -- to faithfully serve individuals and society, and to protect democratic freedoms that are now in growing jeopardy. The core question is "Whom does it serve?"*

This applies to our technology -- first in social media, and now as we build out broader and more deeply impactful forms of AI. It is specifically relevant to our technology platforms, which now suffer from "enshittification" as they increasingly serve themselves at the expense of their users, advertisers, other business partners, and society at large. These works build toward a focus on how this all comes down to the interplay of individual choice (bottom-up) and social mediation of that choice (top-down, but legitimized from the bottom up). That dialectic shapes the dimension of "whom does it serve?"* for both social media and AI.

Consider the strong relationship between the “social” and “media” aspects of AI -- and how that ties to issues arising in problematic experience with social media platforms that are already large scale:

  • Social media increasingly include AI-derived content and AI-based algorithms, and conversely, human social media content and behaviors increasingly feed AI models
  • The issues of maintaining strong freedom of expression, as central to democratic freedoms in social media, translate to and shed light on similar issues in how AI can shape our understanding of the world – properly or improperly.

These works focus on 1) how the need for direct human agency applies to AI, 2) how that same need in social media requires deeper remediation than commonly considered, 3) how middleware interoperability for enabling user choice is increasingly being recognized as the technical foundation for this remediation, and 4) how freedom (in both natural and digital worlds) is not just a matter of freedom of expression, but of freedom of impression (choice of whom to listen to).

Without constant, win-win focus on this essential question of "whom does it serve?" as we develop social media and AI, we risk the dystopia of "Huxwell" (a blend of Huxley's Brave New World and Orwell's 1984).**  

  • New Perspectives on AI Agentiality and Democracy: "Whom Does It Serve?"
     (with co-author Richard Whitt, Tech Policy Press, 12/6/24) - Building toward optimal AI relationships and capabilities that serve individuals, society, and freedom requires new perspectives on the functional dimensions of AI agency and interoperability. Individuals should be able to just say "Have your AI call my AI." To do that, agents must develop in two dimensions:
    1. Agenticity, a measure of capability - what can it do?
    2. Agentiality, a measure of relationship - whom does it serve?
  • Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) (Tech Policy Press, 10/24/24) - Overview of new framing that strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media. Human discourse is, and remains, a social process based on three essential pillars that must work together:
    1. Agency
    2. Mediation
    3. Reputation 
    ...Supplementary to this:
  • NEW: Shaping the Future of Social Media with Middleware (Foundation for American Innovation and Georgetown University McCourt School of Public Policy, 12/17/24) -- Major team effort with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and many other notables. This white paper builds on the 4/30/24 Symposium that I helped organize, held at Stanford Cyber Policy Center, which assembled leading thinkers at the nexus of social media, middleware, and public policy. It is the only comprehensive white paper to offer a thoughtful assessment of middleware’s promise, progress, and issues since the 2020 Stanford Group paper. The goal is to operationalize the concept of middleware and provide a roadmap for innovators and policymakers. (The above two pieces extend this vision in broader and more forward-looking directions.)
  • New Logics for Governing Human Discourse in the Online Era (CIGI Freedom of Thought Project, 4/25/24) - Leading into the above pieces, this policy brief pulls together and builds on ideas about how freedom of impression guides freedom of expression without restricting it, and how 1) user agency, 2) a restored role for our traditional social mediation ecosystem, and 3) systems of social trust combine to synergize that process for the online era. It offers a proactive vision of how that can enable social media to become ever more powerful and beneficial "bicycles for our minds."
*Alluding to the Arthurian legend of the Holy Grail.
**Suggested by Jeff Einstein and teased in his video.

(Originally published 12/7/24 at 3:47pm, revised 12/22/24 -- with dateline reset to pin it at or near the top of this blog)

Wednesday, January 08, 2025

Beyond the Pendulum Swings of Centralized Moderation (X/Twitter, Meta, and Fact Checking)

The crazy pendulum swings of centralized moderation by dominant social media platforms are all over the news again, as nicely summarized by Will Oremus, and explored by a stellar Lawfare panel of experts.

We have seen one swing toward what many (mostly on the right) perceive as blunt over-moderation and censorship that intensified around the 2016 election and its aftermath. And now, with the 2024 election and its aftermath, a swing away, to what others (mostly on the left) view as irresponsibly enabling uncontrolled cesspools of anger, hate, and worse. This pendulum is clearly driven in large part by the political winds (which it influences, in turn), a question of whose ox gets gored, and who has the power to influence the platforms -- "Free speech for me, but not for thee."

This will remain a disruptive pendulum -- one that can destroy the human community and its collective intelligence -- until we step back and take a smarter approach to context and diversity of our perceptions of speech. More reliance on community moderation, as X/Twitter and Meta/Facebook/Threads are now doing, points theoretically in the right direction: to democratize that control -- but is far from being effective. Even if they really try, centralized platforms are inherently incapable of doing that well.

Middleware as systems thinking on how to do better

Three of the speakers on the Lawfare panel were coauthors/contributors with me on a comprehensive white paper, based on a symposium on a partially decentralized approach called "middleware." It proposes an open market in independent curation and moderation services that sit in the middle between each user and their platforms. These services can do community-based moderation in a fuller range of ways, much more like the way traditional communities have always done "moderation" (better thought of as "mediation") of how we communicate with others. This new middleware paper explains the basics, why it is a promising solution, and how to make it happen. (For a real-world example of middleware, but still in its infancy, consider Bluesky.)

As for the current platform approach to "community moderation," many have critiqued it, but I suggest a deeper way to think about this, drawing on how humans have always mediated their speech. Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) is a recent piece on extending current ideas on middleware to support this solution that has evolved over centuries of human society. The three pillars are: User Agency, Social Mediation, and Reputation.

Toward effective community moderation

The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings (from 2018) digs deeper into why simplistic attempts at community surveys fail, and how the same kind of advanced analysis of human inputs that made Google win the search engine wars can be applied to social media. A 2021 post and a 2024 policy brief update that.

To understand why this is important, consider what I call The Zagat Olive Garden Problem. In the early 90s, I noticed this oddity in the popular Zagat guide, a community-rating service for restaurants: The top 10 or so restaurants in NYC were all high-priced, haute cuisine or comparably refined, except one: Olive Garden. Because Olive Garden food was just as good? No, because far more people knew it from their many locations, and they were attracted to a familiar brand with simple, but tasty, food at very moderate prices, and put off by very high prices. 

Doing surveys where all votes are counted equally may sound democratic, but foolishly so. We really want ratings from those with a reputation for tastes and values we relate to (but leavened with healthy diversity on how we should broaden our horizons). That is what good feed and recommender algorithms must do. For that, we need to "rate the raters and weight the ratings," and do so in the relevant context, as that post explains.
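
To make "rate the raters and weight the ratings" concrete, here is a minimal Python sketch. This is my own illustrative math, not the specific algorithm from the posts cited above: rater weights are computed PageRank-style by power iteration over an endorsement graph (e.g. who follows or trusts whom), and an item's score is then a reputation-weighted average of its ratings:

```python
# Illustrative sketch of "rate the raters and weight the ratings" --
# simplified PageRank-style power iteration, not the articles' exact method.

def rater_weights(endorsements, iters=50, damping=0.85):
    """endorsements[i] = list of raters whom rater i endorses.
    Returns a weight per rater. (Simplified: dangling raters with no
    endorsements simply keep their mass out of circulation.)"""
    n = len(endorsements)
    w = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n  # teleport term, as in PageRank
        for i, outs in enumerate(endorsements):
            if not outs:
                continue
            share = damping * w[i] / len(outs)
            for j in outs:
                new[j] += share  # endorsed raters inherit endorser's weight
        w = new
    return w

def weighted_rating(ratings, weights):
    """ratings: {rater_index: score}; returns the reputation-weighted mean."""
    total = sum(weights[i] * r for i, r in ratings.items())
    norm = sum(weights[i] for i in ratings)
    return total / norm if norm else 0.0

# Rater 2 is endorsed by both others, so its opinion counts for more:
weights = rater_weights([[2], [2], [1]])
score = weighted_rating({0: 1.0, 1: 1.0, 2: 5.0}, weights)
```

In this toy case the rater endorsed by the other two ends up with the most weight, so its dissenting score pulls the item's weighted rating well above the naive equal-vote mean.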

Back to the pendulum analogy, consider how pendulums work -- especially the subtle phenomenon of entrainment (perhaps blurring details, but suggestive): 

  • Back in 1665, Huygens, who had invented the pendulum clock, discovered that if two such clocks were mounted on the same wall, their pendulum swings gradually became synchronized. That is because each interacts with the shared wall to exchange energy in a way that brings them into phase.
  • Simplistically, moderation is a pendulum that can swing from false positives to false negatives. Each conventional platform has one big pendulum controlled by one owner or corporation that swings with the political wind (or other platform influences). Platform-level community moderation entrains everyone to that one pendulum, whether it fits or not -- resulting in many false positives and false negatives, often biased to one side or the other.
  • Alternatively, a distributed system of middleware services can serve many individuals or communities, each with their own pendulums that swing to their own tastes.
  • Within communities, these pendulums are tightly linked (the shared wall) and tend to entrain.
  • Across communities, there are also weaker linkages, in different dimensions, so they still nudge toward some entrainment.
  • In addition to these linkages in many dimensions, instead of being rigid, the "walls" of human connection are relatively elastic in how they entrain.
  • The Google PageRank algorithm is based on advanced math (eigenvalues) and can treat individual search engine users and their intentions as clustering into diverse communities of interest and value -- much like a network of pendulums all linked to one another by elastic "walls" in a multidimensional array.
  • Similar algorithms can be used by diverse middleware services to distill community ratings with the same nuanced sensitivity to their diverse community contexts. Not perfectly, but far better than any centralized system.
In addition, part of the problem with current community notes, and any form of explicit ratings of content, is getting enough people to put in the effort. Just as Google PageRank uses implicit signals of approval that users do anyway (linking to a page), variations for social media can also use implicit signaling in the form of likes, shares, and comments (and more to be added) to draw on a far larger population of users, structured into communities of interest and values.
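
As a toy illustration of drawing on implicit signals (the signal values here are arbitrary assumptions, not a proposed design), engagement events that users generate anyway can be folded into per-item, per-rater scores that rate-the-raters machinery could then weight:

```python
# Illustrative only: arbitrary assumed signal values, not a specified design.
SIGNAL_VALUE = {"like": 1.0, "comment": 2.0, "share": 3.0}

def implicit_ratings(events):
    """events: (rater, item, signal) tuples -> {item: {rater: score}}.
    Accumulates each rater's implicit signals into a rating per item."""
    ratings = {}
    for rater, item, signal in events:
        per_item = ratings.setdefault(item, {})
        per_item[rater] = per_item.get(rater, 0.0) + SIGNAL_VALUE[signal]
    return ratings

events = [("alice", "post1", "like"), ("bob", "post1", "share"),
          ("alice", "post2", "comment")]
ratings = implicit_ratings(events)
```

No one has to fill out a survey: the ratings emerge from actions users already take, over a far larger population than explicit community notes can reach.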

Of course there are concerns that the decentralization of middleware might worsen fragmentation and polarization. While it might have some such effect in some contexts, there is also the opposing effect of reducing harmful virality feedback cascades. Consider the fluid dynamics of an old-fashioned metal ice cube tray, and how water sloshing in the open tray forms much more uncontrollable waves than in the tray with the separating insert in place.

The only effective and scalable solution to social media moderation/curation/mediation is to build distributed middleware services, along with tools for orchestrating the use of a selection of them to compose our individual feeds. That too can be done well or badly, but only with a collective effort to do our best on a suitably distributed basis can we succeed. 

Thursday, October 24, 2024

Now on Tech Policy Press: Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three)

My new short article, Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three), is now on Tech Policy Press -- after extensive workshopping with dozens of experts. 

This new framing strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media.

Human discourse is a social process. It depends on three pillars that must work together:
  1. Agency
  2. Mediation
  3. Reputation 
Lack of attention to all three pillars and their synergy has greatly harmed current social media. Without strong support for all three pillars -- enabled by middleware for interoperation and open innovation -- social media will likely struggle to balance chaos and control.

Advocates of middleware have brought increasing attention to the need for user agency -- but without strong support for the other two pillars, there remain many issues. Agency must combine with mediation and reputation to rebuild the context of "social trust" that is being lost. By enabling attention to all three pillars, open, interoperable middleware can help to:
  • Organically maximize rights to expression, impression, and association in win-win ways,
  • Cut through speech governance dilemmas that lead to controversy and gridlock, and
  • Support democracy and protect against chaos, authoritarianism, or tyranny of the majority.
There are also helpful supplements on my blog: 
Broader background for these pillars and why we need to attend to them is in my CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

(My thanks to the many experts who have provided encouragement and helpful feedback in individual discussion and at the April FAI/Stanford symposium on middleware -- and special thanks to Luke Thorburn for invaluable suggestions on simplifying the presentation of these ideas.)

---
[Update 10/31/24:] In addition to the foundation on "social trust" by Laufer and Nissenbaum that I cited in my article, I just found an enlightening sociological perspective from Thorsten Jelinek, How Social Media and Tokenization Distort the Fabric of Human Relations.

Sunday, October 13, 2024

Making Social Media More Deeply Social with Branded Middleware

This vision of social media future is meant to complement and clarify the vision behind many of my other works (such as this, see list of selected pieces at the end). It assumes you have come here after seeing at least one of those (but includes enough background to also be read first).

Business opportunity – start now, and grow from there:

     Managers of the NY Times, small local news services, or any other organization that has built a strong community can use the following model to build a basic online middleware service business, starting now.

     For example, Bluesky could be a base platform for building initial proof-of-concept services along these lines that could develop and grow into a major business.

[If you are impatient, jump to the section on "Branding"]

It is clear that social media technology is not serving social values well. But it is not so clear how to do better. I have been suggesting that the answer begins in learning from how we, as a society, curated information flows offline. (These issues are also increasingly relevant to emerging AI.)

This piece envisions how an offline curation “brand” with an established following – like the New York Times, or many others, including non-commercial communities of all kinds – could extend their curatorial influence, and the role of their larger community, more deeply into the digital future of thought. (Of course, much the same kind of service can be built as a greenfield startup, as well, but having an established community reduces the cold-start problem.)

Building on middleware – the Three Pillars

I and many others have advocated for “middleware” services, a layer of enabling technology that sits between users and platforms to give control back to users over what goes into each of our individual feeds. But that is just the start of how that increased user agency can support healthy discourse and limit fragmentation and polarization in our globally online world.

 The pillars I have been writing about are:

  1. Individual agency, the starting point of democratic free choice over what we say to whom, what individuals we listen to, and what groups we participate in.
  2. Social mediation, the social processes, enabled by an ecosystem of communities and institutions of all kinds that influence and propagate our thoughts, expression, and impression. (For simple background, see What Is a Social Mediation Ecosystem?)
  3. Reputation, the quality metrics, intuitively developed and shared to decide which individuals and communities are trustworthy, and thus deserve our attention (or our skepticism).

Middleware can sit on top of our basic social networking platforms to support the synergistic operation of all three pillars, and thus help make our discourse productive.

In the offline world of open societies, there is no single source of “middleware” services that guide us, but an open, organic, and constantly adjusted mix of many sources of collective support. People grow up learning intuitively to develop and apply these pillars in ever-changing combinations.

Software is far more rigid than humans. Online middleware is a technique for enabling the same kind of diversity and “interoperation” – of attention agent services for us to choose from, and to help groups fully participate in them – so we can dynamically compose the view of the world we want at any point in time.

Bluesky currently offers perhaps the best hint at how middleware services will be composed, steered, and focused – as our desires, tasks, and moods change. Just keep in mind that current middleware offerings are still just infants learning to crawl.

As we may think …together

Vannevar Bush provided a prescient vision of the web in 1945 (yes, 1945!) – in his Atlantic article “As We May Think.” Its technology was quaint, but the vision of how humans can use machines to help us think was very on-point, and inspired the creation of the web. Now it is time for a next level vision – of how we may think together – even if the details of that vision are still crude.

Current notions of middleware have been focused primarily on user agency, and are just beginning (as in Bluesky) to consider how we need not just a choice of a single middleware agent service, but the ability to flexibly compose and steer among many attention agent services. Steve Jobs spoke of computers as “bicycles for our minds.” As we conduct our discourse, middleware-based attention agent services can give us handlebars to steer them and gear shifts to deal with varying terrain and motivations. They can give us “lenses” for focusing what we see from our bicycles.

To build out this capability, we will need at least two levels of user-facing middleware services:

     Many low level service agents that curate for specific objectives of subject domain, styles, moods, sources, values, and other criteria.

     One or more high level service agents that make it easy to orchestrate those low level agents, as we steer them, shift gears, and change our focus, creating a consolidated ranking that gives us what we want, and screens out what we do not want, at any given time.
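
The two levels just described can be sketched as follows. This is hypothetical Python (invented interfaces, not any existing middleware API): each low-level agent scores items for one objective, and a high-level composing agent blends those scores according to the user's current "slider" settings while screening out what is unwanted:

```python
# Hypothetical interfaces, invented for illustration only.
from typing import Callable, Dict, List

Item = str
Agent = Callable[[Item], float]  # low-level: score one item for one objective

def compose(agents: Dict[str, Agent], sliders: Dict[str, float],
            items: List[Item], blocked: Callable[[Item], bool]) -> List[Item]:
    """High-level composing agent: weight each low-level agent's score by its
    slider setting, screen out unwanted items, and return one consolidated
    ranking."""
    def blended(item: Item) -> float:
        return sum(w * agents[name](item) for name, w in sliders.items())
    kept = [it for it in items if not blocked(it)]
    return sorted(kept, key=blended, reverse=True)

# Toy low-level agents and one preset of slider settings:
agents = {"news": lambda it: 1.0 if "news" in it else 0.0,
          "fun":  lambda it: 1.0 if "fun" in it else 0.0}
morning_preset = {"news": 0.8, "fun": 0.2}
ranked = compose(agents, morning_preset, ["fun clip", "news brief", "spam"],
                 blocked=lambda it: it == "spam")
```

In this framing, a saved preset is just a stored dictionary of slider weights that can be swapped in to shift gears for a different mood or task.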

Just how those will work will change greatly over time as we learn to drive these bicycles, and providers learn to supply useful services – “we shape our tools and our tools shape us.” Emerging AI in these agents will increase the ease of use, and the usable power, of the bicycles – but even in the age of AI, the primary intelligence and judgment must come from the humans who use these systems and create the terrain of existing and new information and ideas (not just mechanically reassembled tokens of existing data) that we steer through.

====================================================
Here is the business opportunity:
====================================================

Branding – a “handle” for intuitively easy selection – and signaling value

Yes, choosing middleware services seems complicated, and skeptics rightly observe that most users lack the skill or patience to think very hard about how to steer these new bicycles for our minds. But there are ways to make this easy enough. One of the most promising and suggestive is branding – a powerful and user-friendly tool for reliably selecting a service to give desired results. Take the important case of news services:

     If we try to select news stories at the low level of all the different dimensions of choice – subject matter, style, values, and the like – of course the task would be very complex and burdensome.

     But many millions easily choose what mix of CNN, MSNBC, Fox News, PBS, or less widely used brands they want to watch at any time. The existing brand equity and curation capabilities of such media enterprises are now being squandered by digital platforms that offer such established service brands only rudimentary integration into their social media curation processes. With proper support, both established and new branded middleware services can establish distinctive sensibilities that can make choice easy.

Importantly, branding also serves marketing and revenue functions in powerful ways that can be exploited by middleware services. Once established and nurtured, a brand attracts users by offering known levels of quality and by catering to selective interests and tastes. "It's Not TV, It's HBO" encapsulated the power of HBO's brand in the heyday of premium TV.

The New York Times as a branded curation community: 

Consider the New York Times as just one example of branded curation middleware that could serve as a steerable lens into global online discourse. It could just as well be News Corp, CNN, Sports Illustrated, or Vogue – or your local newspaper (if you still have one!) – or your town or faith community, a school, a civil society organization, a political party, a library, a bowling league – or whatever group or institution that wants to support its uniquely focused (but overlapping and not isolated) segment of the total social mediation ecosystem.

Consider how all three pillars can work and synergize in such a service:

User agency comes in by our participation as readers, and as speakers in any relevant mode – posts, comments, likes, shares, letters to the editor, submissions for Times publications. This can be addressed at at least two levels:

     Low level attention service agents that find and rank candidate items for our feeds and recommenders. This is much as we now choose from an extensive list of available email newsletters from the Times.

     Higher level middleware composing agents would help compose these low-level choices – and facilitate interoperation with similar services from other communities – to build a composite feed of items from the Times and all our other chosen sources. They could offer sliders to decide what mix to steer into a feed at any given time, and saved presets to shift gears for various moods, such as news awareness/analysis, sports/entertainment, challenging ideas, light mind expansion, and diversion/relaxation.

(Different revenue models may apply to different services, levels, and modes of participation, just as some NY Times features now may cost extra.)
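As a rough sketch of how such a composing agent might work, consider blending several low-level feeds under user-set slider weights with saved presets. All names, preset values, and the `Item` structure here are hypothetical illustrations, not any existing API:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    source: str    # e.g. "nyt-news", "nyt-sports" (hypothetical low-level feed names)
    score: float   # relevance score assigned by that source's attention agent

# Hypothetical saved presets: slider weights over low-level source feeds.
PRESETS = {
    "news-awareness": {"nyt-news": 0.7, "nyt-opinion": 0.2, "nyt-sports": 0.1},
    "diversion":      {"nyt-news": 0.1, "nyt-sports": 0.5, "nyt-arts": 0.4},
}

def compose_feed(candidates, weights, limit=20):
    """Blend candidate items from several low-level feeds by slider weights."""
    eligible = [item for item in candidates if weights.get(item.source, 0) > 0]
    # Rank by source-level relevance scaled by the user's current slider mix.
    eligible.sort(key=lambda item: item.score * weights[item.source], reverse=True)
    return eligible[:limit]
```

Switching presets (or dragging sliders) simply swaps the `weights` dictionary, re-steering the same candidate pool into a different composite feed.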

Social mediation processes come into our user interface at two levels of curation:

     User-driven curation: Much like current platforms, the Times' low-level services can rank items based on signals from the community of Times users – their likes, shares, comments, and other signals of interest and value. This might distinguish subscribers from non-subscribing readers. Subscribers might be more representative of the community, but non-subscribers might bring important counterpoints. Other categories could include special users, such as public figures in various political, business, or professional categories. As such services mature, these signals can be expanded in variety to be far more richly nuanced – for example, giving clearer feedback and being categorized by subject domains of primary involvement.

     Expert-driven curation: The Times editorial team can be drawn on (and potentially augmented with supportive levels of AI) to provide high quality expert curation services in much the same way, in whatever mix desired. This could include both their own contributions, and their reactions to readers’ contributions.
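One way to picture how these two curation levels could combine: score each item from a weighted mix of community cohort signals, then blend that with an editorial rating under a user-chosen expert/community slider. The cohort names, weights, and function are all assumptions for illustration:

```python
# Hypothetical blend of community and expert curation signals for one item.
def curation_score(item_signals, expert_rating, expert_weight=0.5):
    """
    item_signals:  per-cohort engagement scores in [0, 1], e.g.
                   {"subscribers": 0.8, "readers": 0.4}
    expert_rating: editorial quality rating in [0, 1] (possibly AI-assisted)
    expert_weight: user-chosen mix between expert and community curation
    """
    # Assumed cohort mix: weigh subscriber signals more than casual readers.
    cohort_weights = {"subscribers": 0.7, "readers": 0.3}
    community = sum(cohort_weights[c] * s for c, s in item_signals.items())
    return expert_weight * expert_rating + (1 - expert_weight) * community
```

A user who wants a purely editor-driven lens sets `expert_weight=1.0`; one who trusts only the crowd sets it to `0.0`.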

Reputation systems – which keep score of the quality and trust feedback on both users and content items that arises from those mediation processes – can also be valuably focused on the Times community:

     At a coarse level, we might make broad assumptions that differentiate the editorial and journalism staff, subscribers, and non-subscribing readers (as part of the basic mediation process), but a reputation system could distinguish among very different levels of reputation for quality of participation in many dimensions – such as expertise, judgment, clarity, wisdom, civility, and many more – in each of many subject domains.

     Reputation systems might also be tuned to Times reporters and editors, and to their inputs to the reputations of content items and users. But the true power of this kind of service is its crowdsourcing not just from the Times staff, but from its unique extended community. One could choose to ignore the staff and turn the lens only on the community, or vice versa.
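A toy sketch of such a multi-dimensional ledger: scores keyed by (user, dimension, domain), updated from mediation feedback. The class and key structure are illustrative assumptions, not any existing system:

```python
from collections import defaultdict

class Reputation:
    """Toy multi-dimensional reputation ledger for one community (a sketch).

    Scores are keyed by (user, dimension, domain), e.g.
    ("alice", "civility", "politics"), and accumulate mediation feedback
    as a running average.
    """
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def record(self, user, dimension, domain, rating):
        """Fold one feedback rating (0..1) into the running average."""
        key = (user, dimension, domain)
        self.totals[key] += rating
        self.counts[key] += 1

    def score(self, user, dimension, domain):
        """Average rating for this key, or None if no signal yet."""
        key = (user, dimension, domain)
        if self.counts[key] == 0:
            return None
        return self.totals[key] / self.counts[key]
```

Because keys carry a subject domain, the same person can rank highly for, say, expertise in economics while having little standing in sports lore.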

Enterprise-class community support integration – and simple beginnings

To fully enable this would require new operational support services that integrate the operation of open online social media platform services (like Bluesky now, or maybe someday Threads) with the operations of the Times. As the technology for multi-group participation is built out beyond current rudimentary levels, it can integrate with the operation of each group, including the enterprise-class systems that drive the operations of the Times. This might include the kind of functionality and integration offered by CRM (customer relationship management) systems for managing all of the Times’ interactions with its customers, as well as the CMS (content management system) used to manage its journalism content, and the SMS (subscription management systems) that manage revenue operations.

Doing all of this fully will take time and effort – but some of it could be done relatively easily, such as an attention agent that ranks items based on the signals of Times community members as distinct from those of the general network population. The Times could begin a trial of this in the near term by exploiting the basic middleware capabilities already available – creating a Bluesky server instance (using the open Bluesky server code and interoperation protocols) and its own custom algorithms.
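The ranking step inside such a custom feed could be as simple as boosting posts from known community members. This is a simplified sketch only – the real Bluesky/atproto feed-generator interface differs, and the function, tuple shape, and boost factor here are illustrative assumptions:

```python
# Sketch of the ranking step inside a custom feed service: posts from known
# community members (identified by DID) are boosted relative to the general
# network population. Not the actual atproto API.
def rank_for_community(posts, community_dids, boost=2.0, limit=50):
    """posts: list of (did, uri, engagement) tuples; community_dids: set of DIDs."""
    def weight(post):
        did, _uri, engagement = post
        return engagement * (boost if did in community_dids else 1.0)
    ranked = sorted(posts, key=weight, reverse=True)
    return [uri for _did, uri, _eng in ranked[:limit]]
```

Even this crude distinction – community signal versus global signal – would already differentiate such a feed from engagement-only ranking.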

A large, profitable (or otherwise well-funded) business like the Times could develop and operate middleware software itself (if the social media platform allows that, as Bluesky does), but smaller organizations might need a shared “middleware as a service” (MaaS) software and operations provider to do much of that work.

A user-steered, intuitively blended mix of diverse sub-community feeds

Even at a basic level, imagine how doing this for many such branded ecosystem groups could enable users to easily compose feeds that bring them a diverse mix of quality inputs, and to steer and adjust the lenses in those feeds and searches to focus their view as they desire, when they desire.

Similar middleware services could be based on all kinds of groups – for example:

     Local news and community information services – much like the Times example, for where you live now, used to live, or want to live or visit.

     Leadership and/or supporters of political parties or civil society organizations – issues, platforms/policies, campaigns, turnout, surveys, fact-checking, and volunteering.

     Professional and/or amateur players and/or coaches for sports – catering to teams, fans, sports lore, and fantasy leagues.

     Faculty, students, and/or alumni from universities – selecting for students, faculty, alumni, applicants, parents.

     Librarians and/or card holders for library systems – selecting for discovery, reading circles, research, criticism, and authors.

     Leaders and/or adherents to faith communities – for community news, personal spiritual issues, and social issues.

Consider how the Times example translates to and complements any of these other kinds of groups (most easily if the enabling software is made available by a MaaS provider). Users could easily orchestrate their control over diverse sources of curation and moderation – selecting from brands with identities they recognize – without the prohibitive cognitive load of controlling every detail, which critics now argue would doom middleware because few users would bother to make selections. New brands could also emerge and gain critical mass using this same technology base.

By drawing on signals from expert and/or ordinary members of groups that have known orientations and norms, users might easily select mixes that serve their needs and values – and shift them as often as desired.

Context augmentation

Peter Steiner in The New Yorker

"On the Internet, nobody knows you're a dog" -- or a lunatic, or a bot. The loss of context famously observed in Peter Steiner's 1993 cartoon became known as "context collapse," broadly understood as a core reason why internet discourse is so problematic. Much of a message's meaning derives from context external to the message itself -- who is speaking to whom, from and to what community, with what norms and assumptions. That context has largely been lost in current social media (and in emerging AIs).

Consider how the kind of social mediation ecosystem processes envisioned here differ from what current major platforms offer in the way of community support -- and thus fail to provide essential context: 

  • They let you create a personal set (a unidirectional pseudo-community) of friends or those you follow, but increasingly focus on engagement-based ranking into feeds -- because they want to maximize advertising revenue, not the quality of your experience. 
  • They rank based on likes, shares, and comments from a largely undifferentiated global audience, with little opportunity for you to influence who is included. 
  • They may favor feedback from rudimentary "groups" that you join, but provide very limited support to organizers and members to make those groups rich and cohesive. 
  • They may cluster you into what they infer to be your communities of interest, but without any agency from you over which groups those are, except for the rudimentary "groups" you join.
  • And, even if they did want to serve your objectives, not theirs, they would be hard-pressed to come anywhere near the richness and diversity of truly independent, opt-in, community-driven middleware services that are tailored to diverse needs, contexts, and sustaining revenue models.

Doing moderation the old-fashioned way – enabled by middleware

Instead of being seen as a magical leap in technology, or an off-putting cognitive burden on users, middleware can be understood as a way to recreate in digital form the formal and informal social structures people have enjoyed for centuries – individually composed interaction with the wisdom of organically evolved social mediation ecosystems and intuitive informal reputation systems.

What at first seems complicated, from the perspective of current social media, is, at its core, little more complicated than the structure of traditional human discourse – building on key functions and elements of the social mediation and reputation ecosystems – all legitimized by choices of individual agency. Yes, that is complicated, but humans have learned over millennia to intuitively navigate this traditional web of communities and reputations. Yes, make it as simple as possible, but no simpler!

Creating an online twin of such a web of community ecosystems will not happen overnight, but many industries have already built out online infrastructures of similar complexity – in finance, manufacturing, logistics, travel, and e-commerce. Middleware is just a tool for enabling software systems to work together in ways similar to what humans (and groups of humans) do intuitively. The time to start rebuilding those ecosystems is now.

____________________

Related works:

     My November 2023 post – A New, Broader, More Fundamental Case for Social Media Agent "Middleware" – introduced the Three Pillars framing and embeds a deck that adds details and implications not yet fully addressed elsewhere.

     Core ideas are addressed more formally in my April 2024 CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

     Very simply -- What Is a Social Mediation Ecosystem? (and Why We Need to Rebuild It). 

     Other related works are listed on my blog.



Tuesday, September 17, 2024

What Is a Social Mediation Ecosystem? (and Why We Need to Rebuild It)

(This post highlights some basic ideas from my prior publications,* and why they are of continuing relevance to social media issues.) 

  • The idea of a social mediation ecosystem integrating with social media feeds is a re-visioning of how things used to work. Society has been organically building on such sense-making ecosystems for millennia.
  • The groups that comprise the social mediation ecosystem have historically served as a “public square,” or “public sphere,” ranging from informal gathering places such as coffee shops and taverns to social and civic associations, the press, academia, workplaces, unions, faith communities, and other communities of interest.
  • This square or sphere is not unitary but an ecosystem, a polycentric web of interlinked groups in a multidimensional space.
  • Such associations develop norms and contexts for discourse. Our participation in a network of them shapes what we see and hear of the world. 
  • These processes of social influence nudge us to speak “freely,” but with sensitivity to those norms and values, so others will choose to listen to us.
  • Online media technology can enable restoration of that mediating role through enterprise-class middleware affordances that support community operation and let users interact both within and across the diverse communities they opt into.
  • Middleware can facilitate and enrich user-community interactions, and enable us to steer our feeds to blend content favored by any mix of communities we choose to include at a given time — depending on our tastes, objectives, tasks and moods.
  • For example, current curators of news could become attention agent services. Users might select a set of such services — for example, The New York Times, CNN, MSNBC, Fox, The Atlantic, People — to play a role in composing their feeds, assigning them different relative weights in ranking. Other groups in the social media ecosystem, such as civic, political, faith communities and special interest associations, could also be selected by the user to function as attention agents. Content ranking inputs could come from each community’s expert curators/editors or be crowdsourced from the user population that follows those curators, or from a combination of both.
  • Importantly — and as it has been historically — this ecosystem must be open and diverse, and users must be able to draw on combinations of many mediation sources to maintain an open and balanced understanding of the world.
  • Many fear that the involvement of independent attention agents or middleware might increase fragmentation and partisan sorting. That may be a concern while there are just one or a few mediators, but being able to selectively combine exposure to many loosely connected communities is how open societies have always limited that ever-present risk.

Related works:

     My November 2023 post – A New, Broader, More Fundamental Case for Social Media Agent "Middleware" – introduced the Three Pillars framing and embeds a deck that adds details and implications not yet fully addressed elsewhere.

     Core ideas are addressed more formally in my April 2024 CIGI policy brief, New Logics for Governing Human Discourse in the Online Era.

     A vision, with examples -- Making Social Media More Deeply Social with Branded Middleware. 

     Other related works are listed on my blog.

(*This was first published with minor variations as a sidebar to A New, Broader, More Fundamental Case for Social Media Agent "Middleware" (11/9/23), and then as a sidebar to a more formal Centre for International Governance Innovation policy brief (4/25/24).)

Thursday, April 25, 2024

A Policy Brief and a Symposium, Oh My! ++ On Middleware and Governing Online Discourse

By coincidence, just days apart: my wide-ranging policy brief was published today, and next Tuesday brings an exciting symposium at Stanford that I helped organize. Both focus on middleware agent services and why we need them to re-envision the future of social media.

  • The policy brief is "New Logics for Governing Human Discourse in the Online Era" - part of the Freedom of Thought Project at the Centre for International Governance Innovation (CIGI). It expands on and updates my ongoing work (listed here) on these themes. It pulls together ideas about how freedom of impression guides freedom of expression without restricting it, and how combining 1) user agency, 2) a restored role for our traditional social mediation ecosystem, and 3) systems of social trust can synergize that process for the online era. It offers a proactive vision of how that can enable social media to become ever more powerful and beneficial "bicycles for our minds."

  • The symposium is "Shaping the Future of Social Media with Middleware" - to be held on April 30 at Stanford by the Foundation for American Innovation and the Stanford Cyber Policy Center. Leading thinkers at the nexus of social media, middleware, and public policy will delve into the complexities and potential of middleware as a transformative force. The symposium is intended to lead to a comprehensive white paper that offers recommendations and a roadmap for developers, investors, and policymakers. We have high hopes for this ambitious effort to bring new insight and energy into shaping the future of human discourse for the good.

Please do look at the policy brief, consider this broad vision that is gaining support, and stay tuned for reports on the Stanford/FAI symposium and where it leads.

+++Update -- Presentation at Public Knowledge Emerging Tech 2024 (6/14/24):
  • Middleware Agents for Distributed Control of Internet Services: Social Media …and AI
    Video (Intro at 12:55, Main comments at 30:20, also 43:30, 48:55 -- poor audio Intro segment only)
    Notes and background