Thursday, January 16, 2025

#FreeOurFeeds - Another Step Toward the Vision

As perhaps the first person to use the phrase "free our feeds" and the Twitter hashtag #FreeOurFeeds, I am gratified to see the launch of the Free Our Feeds Foundation to embark on a major step toward that vision.

There have been many small steps to free our feeds, now seen as an urgent need to "billionaire-proof" our social media connectivity. Musk and then Zuck have shown the perils of the "loaded weapon" we have left on the table of online discourse, by so shamelessly picking it up to use for their own ends. We can only guess where they -- and others like them -- or worse -- will point their weapons next.

Some see the Mastodon "fediverse" as a major step in this direction, arguably so, but many are coming to see Bluesky as a larger step toward the portability and open interoperability of the full range of functions needed to free us from platform lock-in and manipulation. It is also interesting that similar steps to more fully open the Bluesky and Mastodon ecosystems were announced on the same day, 1/13/25. I am hopeful that both efforts will succeed, and that the Mastodon and Bluesky ecosystems will grow -- and gain high levels of interoperability with each other.

Bluesky currently seems to be the most open to building high levels of function and extensibility, which I have always seen as very important. We are in the early days of social media, just learning to crawl. To leverage this technology so that we can walk, run, and fly -- while remaining democratic and free -- it must be kept open to user control and to the control of communities. That will enable us to re-energize the social mediation ecosystem I have written about recently, and in many other works listed here.

A key aspect of Bluesky and its AT Protocol (not yet in the Mastodon architecture, as I understand it) is that each of three levels can be separately managed and replicated: 1) the app, 2) the relays that tie app instances together, and 3) the independently selectable feed algorithms. The federation of the relays matters because relays are resource-heavy services, not very amenable to lightly resourced community managers, but capable of being secured and managed by trusted organizations to support advanced algorithms in ways that can also preserve privacy, as I described on 11/3/21 and updated in Tech Policy Press. The Free Our Feeds Foundation promises to take a large step in that direction for the Bluesky ecosystem.

As Cory Doctorow, Mr. Enshittification himself, said of this effort:

If there's a way to use Bluesky without locking myself to the platform, I will join the party there in a hot second. And if there's a way to join the Bluesky party from the Fediverse, then goddamn I will party my ass off.

Back to my personal interest here, I began using the rallying cry of Free Our Feeds! in a blog post on 2/11/21 (the earliest use of that phrase I could find on Google), and then used the hashtag #FreeOurFeeds on Twitter on 2/13/21, apparently the first use of that hashtag. I continued using this hashtag often on Twitter, and featured a fuller treatment of the concept in a 4/22/21 article in Tech Policy Press that included the diagram here. 

Of course 2021 was not very long ago, and many people had already become advocates for algorithmic choice. But I also take pride in being perhaps the longest-serving advocate for these ideas.

The hope is that Bluesky Social PBC and Free Our Feeds Foundation can catalyze a vibrant open ecosystem -- to create a new infrastructure for social media that lets a thousand flowers bloom -- and can grow and evolve over many sociotechnical generations.

Thursday, January 09, 2025

New Logics for Social Media and AI - "Whom Does It Serve?"

[Pinned -- Originally published 12/7/24 at 3:47pm]

[UPDATED 12/17/24 to add Shaping the Future of Social Media with Middleware (with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and others; Foundation for American Innovation, Georgetown University McCourt School of Public Policy, and Stanford Cyber Policy Center; details below).]

A collection of recent works presents related aspects of new logics for the development of social media and AI - to faithfully serve individuals and society, and to protect democratic freedoms that are now in growing jeopardy. The core question is "Whom does it serve?"*

This applies to our technology -- first in social media, and now as we build out broader and more deeply impactful forms of AI. It is specifically relevant to our technology platforms, which now suffer from "enshittification" as they increasingly serve themselves at the expense of their users, advertisers, other business partners, and society at large. These works converge on how this all comes down to the interplay of individual choice (bottom-up) and social mediation of that choice (top-down, but legitimized from the bottom up). That dialectic shapes the dimension of "whom does it serve?"* for both social media and AI.

Consider the strong relationship between the “social” and “media” aspects of AI -- and how that ties to issues arising in problematic experience with social media platforms that are already large scale:

  • Social media increasingly include AI-derived content and AI-based algorithms, and conversely, human social media content and behaviors increasingly feed AI models.
  • The issues of maintaining strong freedom of expression, as central to democratic freedoms in social media, translate to and shed light on similar issues in how AI can shape our understanding of the world – properly or improperly.

These works focus on 1) how the need for direct human agency applies to AI, 2) how that same need in social media requires deeper remediation than commonly considered, 3) how middleware interoperability for enabling user choice is increasingly being recognized as the technical foundation for this remediation, and 4) how freedom (in both natural and digital worlds) is not just a matter of freedom of expression, but of freedom of impression (the choice of whom to listen to).

Without constant, win-win focus on this essential question of "whom does it serve?" as we develop social media and AI, we risk the dystopia of "Huxwell" (a blend of Huxley's Brave New World and Orwell's 1984).**  

  • New Perspectives on AI Agentiality and Democracy: "Whom Does It Serve?"
     (with co-author Richard Whitt, Tech Policy Press, 12/6/24) - Building toward optimal AI relationships and capabilities that serve individuals, society, and freedom requires new perspectives on the functional dimensions of AI agency and interoperability. Individuals should be able to just say "Have your AI call my AI." To do that, agents must develop in two dimensions:
    1. Agenticity, a measure of capability - what can it do?
    2. Agentiality, a measure of relationship - whom does it serve?
  • Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) (Tech Policy Press, 10/24/24) - Overview of new framing that strengthens, broadens, and deepens the case for open middleware to address the dilemmas of governing discourse on social media. Human discourse is, and remains, a social process based on three essential pillars that must work together:
    1. Agency
    2. Mediation
    3. Reputation 
  • NEW: Shaping the Future of Social Media with Middleware (Foundation for American Innovation and Georgetown University McCourt School of Public Policy, 12/17/24) -- Major team effort with Francis Fukuyama, Renée DiResta, Luke Hogg, Daphne Keller, and many other notables. This white paper builds on the 4/30/24 Symposium that I helped organize, held at the Stanford Cyber Policy Center, which assembled leading thinkers at the nexus of social media, middleware, and public policy. It is the only comprehensive white paper to offer a thoughtful assessment of middleware’s promise, progress, and issues since the 2020 Stanford Group paper. The goal is to operationalize the concept of middleware and provide a roadmap for innovators and policymakers. (The above two pieces extend this vision in broader and more forward-looking directions.)
  • New Logics for Governing Human Discourse in the Online Era (CIGI Freedom of Thought Project, 4/25/24) - Leading into the above pieces, this policy brief pulls together and builds on ideas about how freedom of impression guides freedom of expression without restricting it, and how 1) user agency, 2) a restored role for our traditional social mediation ecosystem, and 3) systems of social trust combine to synergize that process for the online era. It offers a proactive vision of how that can enable social media to become ever more powerful and beneficial "bicycles for our minds."
*Alluding to the Arthurian legend of the Holy Grail.
**Suggested by Jeff Einstein and teased in his video.

(Originally published 12/7/24 at 3:47pm, revised 12/22/24 -- with dateline reset to pin it at or near the top of this blog)

Wednesday, January 08, 2025

Beyond the Pendulum Swings of Centralized Moderation (X/Twitter, Meta, and Fact Checking)

The crazy pendulum swings of centralized moderation by dominant social media platforms are all over the news again, as nicely summarized by Will Oremus and explored by a stellar Lawfare panel of experts.

We have seen one swing toward what many (mostly on the right) perceive as blunt over-moderation and censorship that intensified around the 2016 election and its aftermath. And now, with the 2024 election and its aftermath, a swing away, to what others (mostly on the left) view as irresponsibly enabling uncontrolled cesspools of anger, hate, and worse. This pendulum is clearly driven in large part by the political winds (which it influences in turn), a question of whose ox gets gored and who has the power to influence the platforms -- "Free speech for me, but not for thee."

This will remain a disruptive pendulum -- one that can destroy the human community and its collective intelligence -- until we step back and take a smarter approach to context and diversity in our perceptions of speech. More reliance on community moderation, as X/Twitter and Meta/Facebook/Threads are now doing, points theoretically in the right direction: to democratize that control -- but is far from being effective. Even if they really try, centralized platforms are inherently incapable of doing that well.

Middleware as systems thinking on how to do better

Three of the speakers on the Lawfare panel were coauthors/contributors with me on a comprehensive white paper, based on a symposium, about a partially decentralized approach called "middleware." It proposes an open market in independent curation and moderation services that sit in the middle between each user and their platforms. These services can do community-based moderation in a fuller range of ways, at a community level, much more like the way traditional communities have always done "moderation" (better thought of as "mediation") of how we communicate with others. This new middleware paper explains the basics, why it is a promising solution, and how to make it happen. (For a real-world example of middleware, but still in its infancy, consider Bluesky.)

As for the current platform approach to "community moderation," many have critiqued it, but I suggest a deeper way to think about this, drawing on how humans have always mediated their speech. Three Pillars of Human Discourse (and How Social Media Middleware Can Support All Three) is a recent piece extending current ideas on middleware to support this solution, one that has evolved over centuries of human society. The three pillars are: User Agency, Social Mediation, and Reputation.

Toward effective community moderation

The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings (from 2018) digs deeper into why simplistic attempts at community surveys fail, and how the same kind of advanced analysis of human inputs that enabled Google to win the search engine wars can be applied to social media. A 2021 post and a 2024 policy brief update that.

To understand why this is important, consider what I call The Zagat Olive Garden Problem. In the early 1990s, I noticed this oddity in the popular Zagat guide, a community-rating service for restaurants: the top ten or so restaurants in NYC were all high-priced, haute cuisine or comparably refined, except one: Olive Garden. Because Olive Garden food was just as good? No, because far more people knew it from its many locations, were attracted to a familiar brand with simple but tasty food at very moderate prices, and were put off by very high prices.

Doing surveys where all votes are counted equally may sound democratic, but foolishly so. We really want ratings from those with a reputation for tastes and values we relate to (but leavened with healthy diversity on how we should broaden our horizons). That is what good feed and recommender algorithms must do. For that, we need to "rate the raters and weight the ratings," and do so in the relevant context, as that post explains.
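As a toy illustration (my own sketch here, not the algorithm from that 2018 post), "rate the raters and weight the ratings" can be reduced to a simple fixed-point iteration: weight the ratings by each rater's reputation, then re-rate each rater by how well they agree with the weighted consensus. The function name and the agreement rule are illustrative assumptions:

```python
def rate_the_raters(ratings, iterations=50):
    """ratings: dict of rater -> {item: score in [0, 1]}.
    Returns (rater weights, weighted consensus per item)."""
    raters = list(ratings)
    weight = {r: 1.0 for r in raters}          # start everyone equal
    for _ in range(iterations):
        # 1. Weight the ratings: reputation-weighted consensus per item.
        sums = {}
        for r in raters:
            for item, score in ratings[r].items():
                num, den = sums.get(item, (0.0, 0.0))
                sums[item] = (num + weight[r] * score, den + weight[r])
        consensus = {i: num / den for i, (num, den) in sums.items()}
        # 2. Rate the raters: reputation = closeness to that consensus.
        for r in raters:
            errs = [abs(ratings[r][i] - consensus[i]) for i in ratings[r]]
            weight[r] = max(1e-6, 1.0 - sum(errs) / len(errs))
    return weight, consensus
```

With a handful of raters who broadly agree and one outlier, the outlier's votes end up discounted rather than counted equally -- the Zagat problem in miniature. Real systems would need context, diversity safeguards, and manipulation resistance far beyond this sketch.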

Back to the pendulum analogy, consider how pendulums work -- especially the subtle phenomenon of entrainment (perhaps blurring details, but suggestive): 

  • Back in 1665, Christiaan Huygens, inventor of the pendulum clock, discovered that if two such clocks were mounted on the same wall, their pendulum swings gradually became synchronized. That is because each interacts with the shared wall to exchange energy in a way that brings them into phase.
  • Simplistically, moderation is a pendulum that can swing from false positives to false negatives. Each conventional platform has one big pendulum controlled by one owner or corporation that swings with the political wind (or other platform influences). Platform-level community moderation entrains everyone to that one pendulum, whether it fits or not -- resulting in many false positives and false negatives, often biased to one side or the other.
  • Alternatively, a distributed system of middleware services can serve many individuals or communities, each with their own pendulums that swing to their own tastes.
  • Within communities, these pendulums are tightly linked (the shared wall) and tend to entrain.
  • Across communities, there are also weaker linkages, in different dimensions, so they still nudge toward some entrainment.
  • In addition to these linkages in many dimensions, instead of being rigid, the "walls" of human connection are relatively elastic in how they entrain.
  • The Google PageRank algorithm is based on advanced math (eigenvalues) and can treat individual search engine users and their intentions as clustering into diverse communities of interest and value -- much like a network of pendulums all linked to one another by elastic "walls" in a multidimensional array.
  • Similar algorithms can be used by diverse middleware services to distill community ratings with the same nuanced sensitivity to their diverse community contexts. Not perfectly, but far better than any centralized system.
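The entrainment in the list above can be shown with a toy simulation: two coupled oscillators (a Kuramoto-style model, not Huygens' actual clock physics) with slightly different natural frequencies pull each other into phase through a shared coupling term -- the "shared wall." All parameter values are arbitrary assumptions chosen only to make the effect visible:

```python
import math

def simulate(coupling, steps=20000, dt=0.001):
    """Integrate two phase oscillators; return their final phase
    difference folded into [-pi, pi]."""
    theta1, theta2 = 0.0, 2.0            # start far out of phase
    w1, w2 = 10.0, 10.5                  # slightly different natural frequencies
    for _ in range(steps):
        d = theta2 - theta1
        theta1 += dt * (w1 + coupling * math.sin(d))   # each is nudged
        theta2 += dt * (w2 - coupling * math.sin(d))   # toward the other
    return math.atan2(math.sin(theta2 - theta1), math.cos(theta2 - theta1))

uncoupled = abs(simulate(coupling=0.0))   # phases drift apart freely
coupled = abs(simulate(coupling=5.0))     # strong "wall": phases lock
```

With no coupling the phase difference wanders; with a strong shared wall it settles near zero -- the two pendulums entrain even though their natural rhythms differ.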

In addition, part of the problem with current community notes, and with any form of explicit rating of content, is getting enough people to put in the effort. Just as Google PageRank uses implicit signals of approval that users give anyway (linking to a page), variations for social media can use implicit signaling in the form of likes, shares, and comments (and more to be added) to draw on a far larger population of users, structured into communities of interest and values.
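A minimal sketch of that idea: treat each like or share as an endorsement edge in a user graph and run a PageRank-style power iteration over it to distill per-user reputation. The function name, damping value, and graph data are illustrative assumptions, not any platform's actual algorithm:

```python
def reputation_from_signals(endorsements, damping=0.85, iterations=50):
    """endorsements: dict of user -> list of users whose posts they
    liked or shared. Returns a PageRank-style reputation per user."""
    users = sorted(set(endorsements) |
                   {t for ts in endorsements.values() for t in ts})
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in users}   # baseline mass
        for u in users:
            targets = endorsements.get(u, [])
            if targets:
                # Split u's endorsement "vote" among those they boosted,
                # weighted by u's own current reputation.
                share = damping * rank[u] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling user: spread their mass evenly.
                for t in users:
                    new[t] += damping * rank[u] / n
        rank = new
    return rank
```

Endorsements from well-regarded users count for more than drive-by ones, and no one has to fill out a survey -- the signals are actions users take anyway. Clustering those signals by community of interest, as the bullets above suggest, would layer on top of this.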

Of course there are concerns that the decentralization of middleware might worsen fragmentation and polarization. While it might have some such effect in some contexts, there is also the opposing effect of reducing harmful virality feedback cascades. Consider the fluid dynamics of an old-fashioned metal ice cube tray: water sloshing in the open tray forms much more uncontrollable waves than in the tray with the separating insert in place.

The only effective and scalable solution to social media moderation/curation/mediation is to build distributed middleware services, along with tools for orchestrating the use of a selection of them to compose our individual feeds. That too can be done well or badly, but only with a collective effort to do our best on a suitably distributed basis can we succeed.