Tuesday, October 26, 2021

The Best Idea From Facebook Staffers for Fixing Facebook: Learn From Google

The Facebook Papers trove of internal documents shows that Facebook employees understand the harms of their service and have many good ideas for limiting them. Many of those ideas have value as part of a total solution. But only one has been proven not only to limit distribution of harmful content but also to select for quality content -- and to work economically at huge scale and across a multitude of languages and cultures.

Facebook knows that filtering for quality in newsfeeds (and filtering out mis/disinformation and hate) doesn’t require advanced AI -- or humans to understand content -- or the self-defeating Luddite remedy of prohibiting algorithms. It takes clever algorithms that weigh external signals of quality to augment human user intelligence, much as Google’s PageRank does.
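
To make the PageRank analogy concrete, here is a minimal sketch of a graph-based authority measure: each node's score derives from the scores of the nodes that cite it, iterated to a fixed point. This illustrates the general technique only -- it is not Google's or Facebook's implementation, and the example graph, damping factor, and iteration count are my own assumptions.

```python
# Minimal sketch of a graph-based authority measure (PageRank-style).
# Illustrative only -- not Google's or Facebook's actual code.

def authority_scores(inbound_links, damping=0.85, iterations=50):
    """inbound_links: dict mapping each node to the list of nodes that link to it."""
    nodes = set(inbound_links) | {s for srcs in inbound_links.values() for s in srcs}
    out_degree = {n: 0 for n in nodes}
    for sources in inbound_links.values():
        for s in sources:
            out_degree[s] += 1
    scores = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_scores = {}
        for n in nodes:
            # A node inherits authority from the nodes that cite it,
            # diluted by how many things each of those citing nodes cites.
            inherited = sum(scores[s] / out_degree[s]
                            for s in inbound_links.get(n, []) if out_degree[s])
            new_scores[n] = (1 - damping) / len(nodes) + damping * inherited
        scores = new_scores
    return scores

# Hypothetical example: a troll page cited by no quality pages ends up near the floor.
links = {"quality_a": ["quality_b"], "quality_b": ["quality_a"], "troll_page": []}
print(authority_scores(links))
```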

I was pleased to read Gilad Edelman's capsule on this in Wired on 10/26, which brought me to Karen Hao's report in Tech Review from 9/16, both based on a leaked 10/4/19 report by Jeff Allen, a senior-level data scientist then leaving Facebook. I have long advocated such an approach -- seemingly as a voice in the wilderness -- and view this as a measure of validation. Here is a quick note (hopefully to be expanded).

Jeff Allen's Facebook Paper "How Communities are Exploited on Our Platforms"

As Karen Hao reports on Allen: 

“It will always strike me as profoundly weird ... and genuinely horrifying,” he wrote. “It seems quite clear that until that situation can be fixed, we will always be feeling serious headwinds in trying to accomplish our mission.”

The report also suggested a possible solution. “This is far from the first time humanity has fought bad actors in our media ecosystems,” he wrote, pointing to Google’s use of what’s known as a graph-based authority measure—which assesses the quality of a web page according to how often it cites and is cited by other quality web pages—to demote bad actors in its search rankings.

“We have our own implementation of a graph-based authority measure,” he continued. If the platform gave more consideration to this existing metric in ranking pages, it could help flip the disturbing trend in which pages reach the widest audiences.

When Facebook’s rankings prioritize engagement, troll-farm pages beat out authentic pages, Allen wrote. But “90% of Troll Farm Pages have exactly 0 Graph Authority … [Authentic pages] clearly win.” 

And as Gilad Edelman reports,

Allen suggests that Graph Authority should replace engagement as the main basis of recommendations. In his post, he posits that this would obliterate the problem of sketchy publishers devoted to gaming Facebook, rather than investing in good content. An algorithm optimized for trustworthiness or quality would not allow the fake-news story “Pope Francis Shocks World, Endorses Donald Trump for President” to rack up millions of views, as it did in 2016. It would kneecap the teeming industry of pages that post unoriginal memes, which according to one 2019 internal estimate accounted at the time for as much as 35 to 40 percent of Facebook page views within News Feed. And it would provide a boost to more respected, higher quality news organizations, who sure could use it.

Allen's original 2019 report expands: "...this is far from the first time humanity has fought bad actors in our media ecosystems. And it is even far from the first time web platforms have fought similar bad actors. There is a proven strategy to aligning media ecosystems and distribution platforms with important missions, such as ours, and societal value." He capsules the history of Google's PageRank as "the algorithm that built the internet" and notes that graph-based authority measures date back to the '70s.

He recounts the history of "yellow journalism" over a century ago, and how Adolph Ochs' New York Times changed that by establishing a reputation for quality, and then digs into Google (emphasis added):

So Let's Just Follow Googles Lead. Google has set a remarkable example of how to build Ochs’ idea into a web platform. How to encode company values and missions into ranking systems. Figuring out how to make some of it work for FB and IG would provide the whole company with enormous value.

Google is remarkably transparent about how they work and how they fight these types of actors. If you haven't read “How Search Works" I highly recommend it. It is an amazing lesson in how to build a world class information retrieval system. And if you haven't read “How Google Fights Disinformation”, legitimately stop what you're doing right now and read it.

The problem of information retrieval (And Newsfeed is 100% an information retrieval system) comes down to creating a meaningful definition of both the quality of the content producer and the relevance of the content. Google's basic method was to use their company mission to define the quality.

Google's mission statement is to make the worlds information widely available and useful. The most important word in that mission statement is “useful”. A high quality content producer should be in alignment with the IR systems mission. In the case of Google, that means a content producer that makes useful content. A low quality producer makes content that isn't useful. Google has built a completely objective and defensible definition of what useful content is that they can apply at scale. This is done in their “Search Quality Rater Guidelines”, which they publish publicly.

The way Google breaks down the utility of content basically lands in 3 buckets. How much expertise does the author have in the subject matter of the content, as determined by the credentials the author presents to the users. How much effort does the author put into their content. And the level of 3rd party validation the author has.

If the author has 0 experience in the subject, doesn't spend any time on the content, and doesn't have any 3rd party validation, then that author is going to be labeled lowest quality by Google and hardly get any search traffic. Does that description sound familiar? It is a pretty solid description of the Troll Farms.

Google calls their quality work their first line of defense against disinformation and misinformation. All we have to do are figure out what the objective and defensible criteria are for a Page to build community, and bring the world closer together. We are leaving a huge obvious win on the table by not pursuing this strategy.

...It seems quite clear that until that situation can be fixed, we will always be feeling serious headwinds in trying to accomplish our mission. Newsfeed and specifically ranking is such an integral part of our platform. For almost everything we want to accomplish, Feed plays a key role. Feed is essential enough that it doesn't particularly need any mission beyond our companies. FB, and IG, need to figure out what implications our company mission has on ordering posts from users inventory.

Until we do, we should expect our platform to continue to empower actors who are antithetical to the company mission.

My views on applying this

Allen is focused here on troll-farm Pages rather than pure user-generated content, and that is where Google's page-ranking strategy is most directly parallel. It also may be the most urgent to remedy.

UGC is more of a long tail -- more items, harder to rate according to the first two of Allen's "3 buckets." But he did not explain the third bucket -- how Google uses massive data, such as links placed by human "Webmasters," plus feedback on which items in search hit lists users actually click on, and even dwell times on those clicks. That is similar to the data on likes, shares, and comments that I have suggested be used to create graph authority reputations for ordinary users and their posts and comments. For details on just how I see that working, see my 2018 post, The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings.
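
A minimal sketch of that "rate the raters and weight the ratings" idea, under assumptions of my own (the data shape and update rule are hypothetical, not anything Facebook runs): a user's reputation grows with the endorsements their posts receive, each endorsement weighted by the current reputation of the endorser, iterated so that endorsements from reputable users count for more.

```python
# Hedged sketch: user reputation from engagement, weighted by the engagers' own reputation.
# The data model (endorsements received, keyed by author) and the update rule are
# illustrative assumptions, not a description of any platform's system.

def user_reputation(endorsements, iterations=20):
    """endorsements: dict mapping each author to the list of users who
    liked/shared/positively commented on that author's posts."""
    users = set(endorsements) | {u for us in endorsements.values() for u in us}
    rep = {u: 1.0 for u in users}  # start everyone equal
    for _ in range(iterations):
        new_rep = {}
        for author in users:
            # An endorsement is worth more when it comes from a high-reputation user.
            new_rep[author] = 1.0 + sum(rep[e] for e in endorsements.get(author, []))
        total = sum(new_rep.values())
        rep = {u: v * len(users) / total for u, v in new_rep.items()}  # normalize
    return rep

# Hypothetical example: an account nobody credible endorses sinks in reputation.
print(user_reputation({"alice": ["bob", "carol"], "bob": ["alice"], "spam_account": []}))
```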

Of course there will be challenges for any effort to apply this to social media. Google has proven that technology can do this kind of thing efficiently at Internet scale, but social media UGC and virality are even more of a challenge than Web pages.

The biggest challenge is incentives -- to motivate Facebook to optimize for quality, rather than engagement. One way to make it happen is to unbundle the filtering/ranking services from Facebook, as I described in Tech Policy Press, and as discussed by eminent scholars in the recent Tech Policy Press mini-symposium (and in other writings listed in the Selected Items tab, above). That could realign the incentives to filter for quality, by making filtering a service to users, not platforms or advertisers.

Maybe the level of anger at this increasingly blatant and serious abuse of society and threat to democracy will finally spur regulatory action -- and the realization that we need deep fixes, not just band-aids.

In any case, it is great to finally see recognition of the merit of this strategy from within Facebook (even if by an employee reportedly departing in frustration). 


Tuesday, October 12, 2021

It Will Take a Moonshot to Save Democracy From Social Media

A moonshot is what struck me, after some reflection on the afternoon’s dialog at the Tech Policy Press mini-symposium, Reconciling Social Media & Democracy on 10/7/21. It was crystalized by a tweet later that evening about “an optimistic note.” My optimism that there is a path to a much better future was reinforced, but so was my sense of the weight of the task.

Key advocates now see the outlines of a remedial program, and many are now united in calling for reform. But the task is unlikely to be undertaken voluntarily by the platforms -- and is far too complex, laborious, and uncertain to be effectively managed by legislation or existing regulatory bodies. There seemed to be general agreement on an array of measures as promising -- despite considerable divergence on details and priorities. The clearest consensus was that a new, specialized, expert agency is needed to work with and guide the industry to serve users and society.

While many of the remedies have been widely discussed, the focal point was a less-known strategy arising from several sources and recently given prominence by Francis Fukuyama and his Stanford-based group. The highly respected Journal of Democracy featured an article by Fukuyama, then a debate by other scholars plus Fukuyama’s response. Our event featured Fukuyama and most of those other debaters, plus several notable technology-focused experts. I moderated the opening segment with Fukuyama and two of the other scholars, drawing on my five-decade perspective on the evolution of social media to try to step back and suggest a long-term guiding vision.

The core proposal is to unbundle the filtering of items in our newsfeeds, creating an open market in filtering services (“middleware”) that users can choose from to work as their agents. The idea is 1) to reduce the power of the platforms to control for each of us what we see, and 2) to decouple that from the harmful effects of engagement-driven business incentives that favor shock, anger, and divisiveness. That unbundling is argued to be the only strategy that limits unaccountable platform power over what individuals see, as a “loaded gun on the table” that could be picked up by an authoritarian platform or government to threaten the very foundations of democracy.

Key alternatives, favored by some, are the more familiar remedies of shifting from extractive, engagement-driven, advertising-based business models; stronger requirements for effective moderation and transparency; and corporate governance reforms. These too have weaknesses: moderation is very hard to do well no matter what, and government enforcement of content-based moderation standards would likely fail First Amendment challenges.

Some of the speakers are proponents of even greater decentralization. My opening comments suggested that be viewed as a likely long-term direction, and that the unbundling of filters was an urgent first step toward a much richer blend of centralized and decentralized services and controls -- including greater user control and more granular competitive options.

There was general agreement by most speakers that there is no silver bullet, and that most of these remedies are needed at some level as part of a holistic solution. There were concerns whether the unbundling of filters would do enough to stop harmful content or filter bubble echo chambers, but general agreement that shifting power from the platforms is important. The recent Facebook Files and hearings make it all too clear that platform self-regulation cannot be relied on and that all but the most innocuous efforts at regulation will be resisted or subverted. My suggested long-term direction of richer decentralization seemed to generate little dispute.

This dialog may help bring more coherence to this space, but the deeper concern is just how hard reform will be. There seemed to be full agreement on the urgent need for a new Digital Regulatory Agency with new powers to draw on expertise from government, industry, and academia to regulate and monitor with an ongoing and evolving discipline (and that current proposals to expand the FTC role are too limited).

The Facebook Files and recent whistleblower testimony may have stirred regulators to action (or not?), but we need a whole-of-society effort. We see the outlines of the direction through a thicket of complex issues, but cannot predict just where it will lead. That makes us all uncomfortable.

That is why this is much like the Apollo moonshot. Both are concerted attacks on unsolved, high-risk problems -- taking time, courage, dedication, multidisciplinary government/industry organization, massive financial and manpower resources, and navigation through a perilous and evolving course of trial and error.

But this problem of social media is far more consequential than the moonshot. “The lamps are going out all over the free world, and we shall not see them lit again in our lifetime” (paraphrasing Sir Edward Grey as the First World War began) -- this could apply within a very few years. We face the birthing of the next stage of democracy -- much as after Gutenberg, industrialization, and mass media. No one said this would be easy, and our neglect over the past two decades has made it much harder. It is not enough to sound alarms – or to ride off in ill-considered directions. But there is reason to be optimistic -- if we are serious about getting our act together.

-------------------------------

This is my quick take, from my own perspective (and prior to access to recordings or transcripts) -- feedback reflecting other takes on this is welcome. More to follow...

Running updates on these important issues can be found here, and my updating list of Selected Items is on the tab above.

Friday, September 17, 2021

Unbundling Social Media Filtering Services – Toward an Ecosystem Architecture for the Future [Working Draft]

Abstract

Raging issues concerning moderation of harmful speech on social media are most often viewed from the perspective of combating current harms. But the broader context is that society has evolved a social information ecosystem that mediates discourse and understanding through a rich interplay of people, publishers, and institutions. Now that is being disintermediated, but as digitization progresses, mediating institutions will reinvent themselves to leverage this new infrastructure. The urgent task for regulation is to facilitate that. Current proposals for unbundling of social media filtering services are just a first and most urgent step in that evolution. That can transform moderation remedies -- including ranking/recommenders, bans/takedowns, and flow controls -- to support epistemic health instead of treating epistemic disease.

An unbundling proposal with growing, but still narrow, support was the subject of an important debate about social media platform power among scholars in the Journal of Democracy (as I summarized in Tech Policy Press). On further reflection, the case for unbundling as a way to resolve the dilemmas of today becomes stronger by looking farther ahead. Speaking as a systems architect, here are thoughts about these first steps on a long path – to limit current harms and finesse current dilemmas in a way that builds toward a flexible digital social media ecosystem architecture. What is currently seen as failings of individual systems should be viewed as birth pangs in a digital transformation of the entire ecosystem for social construction of truth and value.

That debate was on proposals to unbundle and decentralize moderation decisions now made by the platforms -- to limit platform power and empower users. The argument is that the platforms have gained too much power, and that, in a democracy, we the people should each have control over what information is fed to us (directly or through chosen agents). Those decisions should serve users -- and not be subject to undemocratically arbitrary “platform law” -- nor to improper government control (which the First Amendment constrains far more than many would-be reformers recognize). Common arguments against such unbundling are that shifting control to users would do too little to combat current harms and might even worsen them.* Meanwhile, some other observers favor a far more radical decentralization, but that seems beyond reach.

Here I suggest how we might finesse some concerns about the unbundling proposals -- to position that as a first step that can help limit harms and facilitate other remedies -- while also beginning a path toward meeting the greater challenges of the future. That future should be neither centralized nor totally decentralized, but a constantly evolving hybrid of distributed services, authority, and control. Unbundling filters is a start.

The moderation dilemma

The trigger for this debate was Francis Fukuyama’s article on proposals that the ranking and recommender decisions made within the platforms should be spun out into an open market that users can select from. The platforms should not control decisions about what individuals choose to hear, and an open market would spur competition and innovation. Responding to debate comments, Fukuyama recognized concerns that some moderation of toxic content might be too complex and costly to decentralize. He also observed that we face a two-sided issue: not only the promotion of toxic content, but also bans or takedowns that censor some speakers or items of their speech. He suggested that perhaps the control of clearly toxic content – except for the sensitive case of political speech -- should remain under centralized control.

That highlights the distinction between two mechanisms of moderation that are often confounded -- each having fundamentally different technical/operational profiles:

Blocking moderation in the form of bans/takedowns that block speakers or their speech from being accessible to any user. Such items are entirely removed from the ecosystem. To the extent this censorship of speech (expression) is to be done, that cannot be a matter of listener choice.

Promotional moderation in the form of ranking/recommenders that decide at the level of each individual listener what they should hear. Items are not removed, but merely downranked in priority for inclusion in an individual’s newsfeed so they are unlikely to be heard. Because this management of reach (impression) is listener-specific, democratic principles require this to be a matter of listener rights (at least for the most part). Users need not manage this directly but should be able to choose filtering services that fit their desires.

The decision options map out as in this table:


                       | OK for personal taste  | Lawful but awful? | Illegal
Ranking/recommenders   | OK – listener control  | Boundary???       | n/a
Bans/takedowns         | n/a                    | Boundary???       | Block to all


As a first approximation, setting aside those lawful but awful boundary cases, this suggests that OK content should be managed by filtering services that are chosen by listeners in an open market -- as has been proposed, but that blocking of illegal content should remain relatively centralized. That narrows the tricky questions to the boundary zone: how wide and clearly bounded is that zone, should the draconian censorship of bans/takedowns apply, how much of this can be trusted to the lighter hand of decentralized ranking/recommenders to moderate well enough, and who decides on bans/takedowns (platforms, government, independent boards)?
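
Read as pseudologic, the routing that the table implies looks roughly like this (a sketch with invented category labels, not a proposed legal or policy standard):

```python
# Sketch of the decision table above. The category labels are illustrative assumptions.

def route(item_category):
    if item_category == "illegal":
        return "centralized ban/takedown -- blocked for all users"
    if item_category == "lawful_but_awful":
        return "boundary zone -- takedown vs. mere downranking still to be decided"
    # Everything else is a matter of personal taste.
    return "ranked by the listener's chosen filtering service"
```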

Unbundling is a natural solution for ranking/recommenders, but a different kind of unbundling might also prove useful for bans/takedowns. That censorship authority might be delegated to specialized services that operate at different levels to address issues that vary by nation, region, community, language, and subject domain.

Sensible answers to these issues will shake out over time – if we look ahead enough to assure the flexibility, adaptability, and freedom for competition and innovation to enable that. The answers may never be purely black or white, but instead, pragmatic matters of nuance and context.

An ecosystem architecture for social media – from pre-digital to digital transformation

How can we design for where the puck will be? It is natural to think of “social media” in terms of what Facebook and its ilk do now. The FTC defined that as “personal social networking services…built on a social graph that maps the connections between users and their friends, family, and other personal connections.” But it is already evident this blurs in many ways: with more media-centered services like YouTube and Pinterest; as users increasingly access much of their news via social media sharing and promotion rather than directly from publishers; and as institutions interact with their members via these digital social media channels.

Meanwhile, it is easy to forget that long before our digital age, humanity evolved a highly effective social media ecology in which communities, publishers, and institutions mediated our discourse as gatekeepers and discovery services. These pre-digital social networks are now being disintermediated by new digital social network systems. But as digitization progresses, such mediating institutions will reinvent themselves and rebuild for this new infrastructure.

Digital platforms seized dominance over this epistemic ecosystem by exploiting network effects – moving fast and breaking it. Now we are digitizing our culture – not just the information, but the human communication flows, processes, and tools – in a way that will determine human destiny.

From this perspective, it is apparent that neither full centralization nor full decentralization can cope with this complexity. It requires an architecture that is distributed and coordinated in its operation, how it is controlled, and by whom. It must be open to many social network services, media services, and intermediary services to serve many diverse communities and institutions. It must evolve organically and emergently, with varying forms of loose or tight interconnection, with semipermeable boundaries -- just as our existing epistemic ecosystem has. Adapting the metaphor of Jonathan Rauch, it will take a “constitution of discourse” that blends bottom-up, top-down, and distributed authority – in ways that will embed freedom and democracy in software code. Open interoperability, competition, innovation, and adaptability -- overseen with smart governance and human intervention -- will be the only way to serve this complex need.+

Network effects will continue to drive toward universality of network connectivity. Dominant platforms may resist change, but oligopoly control of this utility infrastructure will become untenable. Once the first step of unbundling filters is achieved, at least some elements of the decentralized and interoperable visions of Mike Masnick, Cory Doctorow, Ethan Zuckerman, and Twitter CEO Jack Dorsey that now may seem impractical will become more attainable and compelling. Just as electronic mail can be sent from one mail system (Gmail, Apple, Outlook, ...) through a universal backbone network that interoperates with every other mail system, postings in one social media service should interoperate with every other social media service. This will be a long, complex evolution, with many business, technical, and governance issues to be resolved. But what alternative is viable?

Facebook Groups and Pages are primitive community and institutional layers that complement our personal social network graphs. Filtering serves a complementary organizing function. Both can interact and grow in richness to support the reinvention of traditional intermediaries and institutions to ride this universal social media utility infrastructure and reestablish context that has “collapsed.”** Twitter’s Bluesky initiative envisions that this ecosystem will grow with a scope and vibrance that makes attempts at domination by monolithic services counterproductive -- and that this broader service domain presents a far larger and more sustainable business opportunity. The Web enabled rich interoperability in information services -- why not build similar interoperability into our epistemic ecosystem?

Coping with ecosystem-level moderation challenges

The ecosystem perspective is already becoming inescapable. Zeve Sanderson and colleagues point to “cross-platform diffusion of misinformation, emphasizing the need to consider content moderation at an ecosystem level.” That suggests that filtering services should be cross-platform. Eric Goldman provides a rich taxonomy of diverse moderation remedies, many of which beg for cross-platform scope. No one filtering service can apply all of those remedies (nor can one platform), but an ecosystem of coordinating and interoperable tools can grow and evolve to meet the challenges -- even as bad actors keep trying to outwit whatever systems are used.

Still-broader presumptions remain unstated – they seem to underlie key disagreements in the Journal of Democracy debate, especially as they relate to tolerance for lawful but awful speech. Natalie Helberger explores four different models of democracy -- liberal, participative, deliberative, and critical -- and how each leads to very different ideas for what news recommenders should do for citizens. My interpretation is that American democracy is primarily of the liberal model (high user freedom), but with significant deliberative elements (encouraging consideration of alternative views). This too argues for a distributed-control architecture for our social media ecosystem that can support different aspects of democracy suited to different contexts. Sorting this out rises to the level of Rauch’s “constitution” and warrants a wisdom of debate and design for diversity and adaptability not unlike that of the Federalist Papers, and the Enlightenment philosophers that informed that.***

Designing a constitution for human social discourse is now a crisis discipline. Simple point solutions that lack grounding in a broader ecological vision are likely to fail or create ever deeper crises. Given the rise of authoritarianism, information warfare, and nihilism now exploiting social media, we need to marshal multi-disciplinary thinking, democratic processes, innovation, and resolve.

Ecosystem-level management of speech and reach

Returning to filters, the user level of ranking/recommenders blurs into a cross-ecosystem layer of controls, including many suggested by Goldman – especially flow controls affecting virality, velocity, friction, and circuit-breakers. As Sanderson evidences, these may require distributed coordination across filtering services and platforms (at least those beyond some threshold scale). As suggested previously in Tech Policy Press, financial markets provide a very relevant and highly developed model of a networked global ecosystem with realtime risks of systemic instability, managed under a richly distributed and constantly evolving regulatory regime.****
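
As one concrete illustration of such flow controls, a virality "circuit breaker" might watch the velocity of shares and pause algorithmic amplification pending review once a threshold is tripped. The sketch below is my own; the threshold and window are invented for illustration.

```python
# Illustrative virality circuit breaker -- the threshold and window are assumptions, not a spec.
import time
from collections import deque

class ViralityCircuitBreaker:
    def __init__(self, max_shares_per_window=10_000, window_seconds=3600):
        self.max_shares = max_shares_per_window
        self.window = window_seconds
        self.share_times = deque()

    def record_share(self, now=None):
        now = time.time() if now is None else now
        self.share_times.append(now)
        # Drop shares that have aged out of the rolling window.
        while self.share_times and now - self.share_times[0] > self.window:
            self.share_times.popleft()

    def amplification_paused(self):
        """True when share velocity exceeds the threshold: the item stays visible to
        direct followers, but broader recommendation is paused pending review."""
        return len(self.share_times) > self.max_shares
```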

These are primarily issues of reach (ranking/recommenders at the listener level) -- but some may be issues of speech (bans/takedowns). As platforms decentralize and interoperate to create a distributed social network that pools speech inputs to feed to listeners through a diversity of channels, the question arises whether bans/takedowns are always ecosystem-wide or may be effected differently in different contexts. There is a case for differential effect – such as to provide for granularity in national- and community-level control. Related questions are whether there should be free-speech zones that are protected from censorship by authoritative regimes and communities.

Fukuyama seems most concerned about authoritarian control of political speech, but important questions of wrongful constraint can apply to pornography, violence, incitement, and even science. Current debates over sex worker rights and authoritarian crackdowns on “incitement” by dissidents, as well as the Inquisition-era ban on Galileo’s statement that the Earth moves, show that there are no simple bright lines or infallible authorities. And, to protect against overenforcement, it seems there must be provision for appeals and for some form of retention (with more or less strictly controlled access) for even very objectionable speech.

Which future?

The unbundling of filtering systems becomes more compelling from this broader perspective. It is no panacea, and not without its own complications – but it is a first step toward distributed development of a new layer of social mediation functionality that can enable next-generation democracy. Most urgent to preserve democracy is a moderation regime that leverages decentralization to be very open and empowering on ranking/recommenders, while applying a light and context-sensitive hand on the censorship of bans/takedowns. This openness can restore our epistemic ecology to actively supporting health, not just reactively treating disease.

Which future do we want? Private platform law acting on its own -- or as the servant of a potentially authoritarian government -- to control the seen reality and passions of an unstructured mob? A multitude of separate, uncoordinated platforms, some tightly managed as walled gardens of civility, but surrounded by nests of vipers? Or a flexible hybrid, empowering a digitally augmented upgrade of the richly distributed ecology of mediators of consent on truth and value that, despite occasional lapses and excesses, has given us a vibrant and robust marketplace of ideas -- an epistemic ecology that liberates, empowers, and enlightens us in ways we can only begin to imagine?

---

This article expands on articles in Tech Policy Press and Reisman’s blog, listed here.

________________________

*Some, notably Facebook, argue for just a limited opening of filter parameters to user control. That is worth doing as a stopgap. But that remaining private corporate control is too inflexible, and just not consistent with democracy. Others, like Nathalie Marechal, argue that the prime issue is the ad-targeting business model, and that fixing that is the priority. That is badly needed, too, but would still leave authoritarian platform law under corporate control, free to subvert democracy.

A more direct criticism, by Robert Faris and Joan Donovan, is that unbundling filters is “fragmentation by design” that works against the notion of a “unified public sphere.” I praise that as “functional modularity and diversity by design,” and suggest that apparent unity results from a dialectic of constantly boiling diversity. Designing for diversity is the only way to accommodate the context dependence and contingency required in humanity’s global epistemic ecosystem. Complex systems of all kinds thrive on an emergent, distributed order (whether designed or evolved) built on divisions of function and power. This applies to software systems (where it is designed, learning from the failures of monolithic systems) and to economic, political, and epistemic systems (where it evolves from a stew of collaboration).

**Twitter recently announced similar community features explicitly intended to restore the context that has collapsed.

***A recent paper by Bridget Barrett, Katherine Dommet, and Daniel Kreiss (interviewed by Tech Policy Press) offers relevant research suggesting a need to be more explicit about what we are solving for and that conflicting objectives must be balanced. Maybe what regulation policy should solve for is not the balanced solution itself, but an ecosystem for seeking balanced solutions in a balanced way.

****Note the parallels in the new challenges in regulating decentralized finance and digital currency.

+[Added 9/20] Of course the even more fundamental metaphor comes from nature. Claire Evans likens the "context collapse" on social media to the disruption of monocultures and clear-cutting in forests, ignoring “the wood wide web” -- “just how sustainable, interdependent, life-giving systems work."

Wednesday, September 15, 2021

Reconciling Social Media & Democracy - Upcoming Mini-Symposium, 10/7/21, 1-4 pm ET

Link to recording & transcript to follow

This [was] a must-see event for those concerned about the difficult challenge of reversing the harms social media are doing to democracy. Hosted by Tech Policy Press, it brings together all sides of the debate in the Journal of Democracy that I reviewed and expanded on for Tech Policy Press, along with some other expert voices. 

RSVP here. Agenda and readings here (and below).

                       ------------------------

Topic: Reconciling Social Media & Democracy

Description: While various solutions to problems at the intersection of social media and democracy are under consideration, from regulation to antitrust action, some experts are enthusiastic about the opportunity to create a new social media ecosystem that relies less on centrally managed platforms like Facebook and more on decentralized, interoperable services and components.

In this mini-symposium, we will explore some of these ideas and critique them.

Participants include:

  • Tracy Chou, founder and CEO of Block Party, software engineer, and diversity advocate
  • Joan Donovan, Research Director of the Shorenstein Center on Media, Politics and Public Policy
  • Cory Doctorow, science fiction author, activist and journalist
  • Francis Fukuyama, Senior Fellow at Stanford University's Freeman Spogli Institute for International Studies, Mosbacher Director of FSI's Center on Democracy, Development, and the Rule of Law, and Director of Stanford's Ford Dorsey Master's in International Policy
  • Dipayan Ghosh, Co-Director of the Digital Platforms & Democracy Project at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School and faculty at Harvard Law School
  • Justin Hendrix, CEO and Editor, Tech Policy Press
  • Daphne Keller, Director of the Program on Platform Regulation at Stanford's Cyber Policy Center
  • Nathalie Maréchal, Senior Policy and Partnerships Manager at Ranking Digital Rights
  • Michael Masnick, Founder, CEO and Editor, Techdirt
  • Richard Reisman, innovator, entrepreneur, consultant, and investor
  • Ramesh Srinivasan, Professor, UCLA Department of Information Studies and Director of UC Digital Cultures Lab

Time: Oct 7, 2021 01:00 PM in Eastern Time (US and Canada)

------------------------

The core idea of the proposals to unbundle and decentralize control of what is recommended or filtered into our newsfeed is not just that the dominant platforms have done a horrible job, causing great harm to our democratic process -- but that this level of private power to control essential portions of our marketplace of ideas is incompatible with democracy, no matter how hard they try.

I am very pleased at having Justin Hendrix's support in helping to organize this event for Tech Policy Press, and will be honored to be moderating portions of it.

------------------------

Background Readings 

Tech Policy Press reports on the Journal of Democracy debate:

The Journal of Democracy debate articles:

Other speakers’ articles:


Monday, August 09, 2021

The Need to Unbundle Social Media - Looking Ahead

This expands on my two new articles in Tech Policy Press that review and synthesize an important debate among scholars in the Journal of Democracy (as noted in my previous blog post that carries running updates). Those articles address the growing interest in a set of ideas that regard the scale of the current platforms as dangerous to democracy and propose that one way to address the danger is to “unbundle” social media into distinct functions to separate the basic network layer of the content from the higher level of content curation.

Here I add forward-looking comments that build on my continuing work on how technology can augment human collaboration. First some general comments, then some more specific ideas, and then a longer-term vision from my perspective as a technologist and futurist.

 

Much of the thinking about regulating social media is reactive to current harms, and the sense that techno-optimism has failed. I say this is not a tech problem but one of poorly managed tech. We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works – truth and value are messy. Our task is to leverage tech to help us manage that messiness to be ever more productive.

Some general perspectives on the debate so far

No silver bullet – but maybe a silver-jacketed bullet: No single remedy will address all the harms in an acceptable way. But I suggest that decentralized control of filtering recommendation services is the silver-jacketed bullet that best de-scales the problems in the short and long term. That will not solve all problems of harmful speech alone, but it can create a tractable landscape so that the other bullets will not be overmatched. That is especially important because mandates for content-specific curation seem likely to fail First Amendment challenges, as Keller explains in her article (and here).

Multiple disciplines: The problems of harmful speech have risen from a historic background level to crisis conditions because of how technology was hijacked to serve its builders’ perverse business model, rather than its human users. Re-engineering the technology and business model is essential to manage new problems of scope, scale, and speed -- to contain newly destructive feedback cascades in ways that do not trample freedom of expression. That requires a blend of technology, policy, social science, and business, with significant inputs from all of those.

Unbundling control of impression versus expression: Digital social media have fundamentally changed the dynamics of how speech flows through society, in a way that is still underappreciated. Control of the impression of information on people’s attention (in news feeds and recommendations) has become far more consequential than the mere expression of that information. Historically, the focus of free speech has been on expression. Individuals generally enjoyed control over what was impressed on them -- by choosing their information sources. Now the dominant platforms have taken it upon themselves to control the unified newsfeeds that each of us sees, and which people and groups they recommend we connect to. Traditional intermediary publishers and other curators of impression have been disintermediated. Robust diversity in intermediaries must be restored. Now, freedom of impression is important. (More below.)


Time-frames: It will be challenging to balance this now-urgent crisis, remedies that will take time, and the fact that we are laying the foundations of digital society for decades to come. Digitally networked speech must support and enhance the richly evolving network of individuals, communities, institutions, and government that society relies on to understand truth and value – a social epistemic ecosystem. The platforms recklessly disintermediated that. A start on the long process of rejuvenating that ecosystem in our increasingly digital society is essential to any real solution.

On technological feasibility and curation at scale

Many have argued, like Faris and Donovan, that “more technology cannot solve the problem of misinformation-at-scale.” I believe the positive contribution technology can offer to aid in filtering for quality and value at scale is hugely underestimated -- because we have not yet sought to apply an effective strategy. Instead of blindly disintermediating and destroying the epistemic social networks society has used to mediate speech, independent filtering services motivated to serve users could greatly augment those networks.

This may seem unrealistic to many, even foolishly techno-utopian, given that AI will almost certainly lack nuanced understanding and judgment for the foreseeable future. But strategies I suggest in my Tech Policy Press synthesis and in more detail elsewhere are not based on artificial intelligence. Instead, they rely on augmented intelligence, drawing on and augmenting crowdsourced human wisdom. I noted that crowdsourcing has been shown to be nearly as effective as experts in judging the quality of social media content. While those experiments required explicit human ratings, the more scalable strategy I propose relies on available metadata about the sharing of social media content. That can be mined to make inferences about judgments of quality from massive numbers of users. AI cannot judge quality and value on its own, but it can help collate human judgments of quality and value.

Similar methods proved highly successful in how Google conquered search by mining the human judgment inherent in the network of Web links built by humans -- to infer which pages had authority, based on the authority and reputation of the other pages that humans had linked to them (first “Webmasters,” later authors of all kinds). Google and other search engines also infer relevance by tracking which search results each user clicks on, and how long until they click something else. All of this is computationally dizzying, but now routine.

Social media already mine similar signals of human judgment (such as liking, sharing, and commenting) but instead, now use that to drive engagement. Filtering services can mine the human judgments of quality and value in likes and shares -- and in who is doing the liking and sharing -- as I have described in detail. By doing this multiple levels deep, augmentation algorithms can distill the wisdom of the crowd in a way that identifies and relies most heavily on the smartest of the crowd. Just as Google out-scaled Yahoo’s manually ranked search, this kind of augmented intelligence promises to enable services that are motivated to serve human objectives to do so far more scalably than armies of human curators. (That is not to exclude human curation, just as Web search now also draws on the human curation of Wikipedia.)
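
As a toy illustration of that kind of augmentation (my own construction, not a description of any platform's ranking code), an item's quality score can be built from the reputations of the people who endorse it, so that the judgments of the smartest of the crowd count for more than raw popularity:

```python
# Toy sketch: item quality inferred from reputation-weighted endorsements.
# The reputation inputs and weighting scheme are illustrative assumptions.

def item_quality(endorsers, reputation, baseline=0.1):
    """endorsers: users who liked/shared the item.
    reputation: per-user scores, e.g. from a graph-authority computation over past engagement."""
    if not endorsers:
        return baseline
    weighted = sum(reputation.get(u, baseline) for u in endorsers)
    # Dampen raw popularity so a flood of low-reputation endorsements
    # cannot outrank a few endorsements from highly reputable users.
    return weighted / (len(endorsers) ** 0.5)
```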

Several of the debaters (Ghosh and Srinivasan, Keller) raise the reasonable question of whether filtering service providers would actually emerge. As to the software investment, the core technology for this might be spun out from the platforms as open source, and further developed by analysis infrastructure providers that support multiple independent filtering services. That would do the heavy lifting of data analysis (and compartmentalize sensitive details of user data), on top of which the filtering services need only set the higher levels of the objective functions and weighting parameters that guide the rankings – a far less technically demanding task. Much of the platforms' existing filtering infrastructure could become the nucleus of one or more separate filtering infrastructure and service businesses. Existing publishers and other mediating institutions might welcome the opportunity to reestablish and extend their brands into this new infrastructure. The Bell System breakup provides a model for how a critical utility infrastructure business can be deeply rearchitected, unbundled, and opened to competition as overseen by expert regulators, all without interruption of service, and with productive reassignment of staff. Not easy, but doable.
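
A rough sketch of that division of labor, with hypothetical names and signals: the shared analysis infrastructure computes per-item signals from the underlying data, while each independent filtering service contributes only its objective, expressed as weights over those signals.

```python
# Hypothetical split between analysis infrastructure and independent filtering services.
# All names, signal fields, and weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ItemSignals:
    item_id: str
    author_authority: float      # graph-based authority of the author or page
    originality: float           # e.g., not a recycled meme
    predicted_engagement: float

class FilteringService:
    """An independent service whose 'objective function' is just a set of signal weights."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights   # e.g., {"author_authority": 0.6, "originality": 0.4}

    def score(self, s: ItemSignals) -> float:
        return (self.weights.get("author_authority", 0.0) * s.author_authority
                + self.weights.get("originality", 0.0) * s.originality
                + self.weights.get("predicted_engagement", 0.0) * s.predicted_engagement)

def build_feed(candidate_signals, service, limit=20):
    """The infrastructure supplies candidate_signals; the user's chosen service ranks them."""
    return sorted(candidate_signals, key=service.score, reverse=True)[:limit]
```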

Formative ideas on impression ranking filters versus expression blocking filters

Triggered by Fukuyama’s comments about his group’s recent thinking about takedowns, I wonder if there may be a need to differentiate two categories of filtering services that would be managed and applied very differently. This relates to the difference between moderation/blocking/censorship of expression and curation/ranking of impression.

Some speak of moderation in ways that make me wonder if they mean exclusion of illegal content, possibly including some similar but slightly broader censorship of expression, or are just using the term moderation loosely, to also refer to the more nuanced issue of curation of impression.

My focus has been primarily on filters that do ranking for users, providing curation services that support their freedom of impression. Illegal content can properly be blocked (or later taken down) to be inaccessible to all users, but the task of curation filtering is a discretionary ranking of items for quality, value, and relevance to each user. That should be controlled by users and the curation services they choose to act as their agents, as Fukuyama and I propose. 

The essential difference is that blocking filters can eliminate items from all feeds in their purview, while curation filters can only downrank undesirable items -- the effect of those downrankings would be contingent on how other items are ranked and whether uprankings from other active filters counterbalance those downrankings.

But even for blocking there may be a need for a degree of configurability and localization (and possibly user control). This could enable a further desirable shift from “platform law” to community law. Some combination of alternative blocking filters might be applied to offer more nuance in what is blocked or taken down. This might apply voting logic, such that content is blocked only when some threshold of votes from multiple filters from multiple sources is met. It might provide for a user or community control layer, much as parents, schools, and businesses choose from a market in Internet blocking filters, and might permit other mediators to offer such filters to those who might choose them.
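
One way to picture that voting logic (again a sketch; the threshold and the roster of filters are assumptions): an item is blocked only when enough independent blocking filters -- perhaps chosen at the community level -- agree.

```python
# Sketch of threshold voting across multiple blocking filters.
# The filters, their verdicts, and the threshold are illustrative assumptions.

def should_block(item, blocking_filters, threshold=0.66):
    """blocking_filters: callables, each returning True when that filter votes to block.
    Blocks only when the fraction of 'block' votes meets the threshold."""
    votes = [f(item) for f in blocking_filters]
    return bool(votes) and sum(votes) / len(votes) >= threshold
```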

The digitization of our epistemic social networks will force us to think clearly about how technology should support the processes of mediation that have evolved over centuries -- but will now be built into code -- to continue to evolve in ways that are democratically decided. That is a task we should have begun a decade ago.

A Digital Constitution of Discourse

Whether free expression retains primacy is central to much of the debate in the Journal of Democracy. Fukuyama and Keller view that primacy as the reason the unbundling of filtering power is central to limiting platform abuses. An excellent refresher on why free expression is the pillar of democracy and the broader search for truth is Jonathan Rauch’s The Constitution of Knowledge: A Defense of Truth. Rauch digs deep into the social and political processes of mediating consent and how an institutional ecosystem facilitates that. (I previously posted about how his book resonates with my work over the past two decades and suggests some directions for future development -- this post is a further step.)

Others (including Renee DiResta, Niall Ferguson, Matthew Hutson, and Marina Gorbis) have also elucidated how what people accept as true and valuable is not just a matter of individuals, nor of people talking to each other in flat networks, but is mediated through communities, institutions, and authorities. They review how technology affected that, breaking from the monolithic and “infallible” authority of the church when Gutenberg democratized the written word. That led to horribly disruptive wars of religion, and a rebuilding through many stages of evolution of a growing ecology of publishers, universities, professional societies, journalists, and mass media.

Cory Doctorow observed that “the term ‘ecology’ marked a turning point in environmental activism” and suggests “we are on the verge of a new ‘ecology’ moment dedicated to combating tech monopolies.” He speaks of “a pluralism movement or a self-determination movement.” I suggest this is literally an epistemic ecology. We had such an ecology but are letting tech move fast and break it. It is time to think broadly about how to rebuild and modernize this epistemic ecology.

Faris and Donovan criticize the unbundling of filters as “fragmentation by design” with concern that it would “work against the notion of a unified public sphere.” But fragmentation can be a virtue. Democracies only thrive when the unified public sphere tolerates a healthy diversity of opinions, including some that may seem foolish or odious. Infrastructures gain robustness from diversity, and technology thrives on functional modularity. While network effects push technology toward scale, that scale can be modularized and distributed -- much as the unbundling of filtering would do. It has long been established that functional modularity is essential to making large-scale systems practical, interoperable, adaptable, and extensible.

Toward a digital epistemic ecology

Now we face a first generation of dominant social media platforms that disintermediated our rich ecology of mediators with no replacement. Instead, the platforms channel and amplify the random utterances of the mob – whether wisdom, drivel, or toxins -- into newsfeeds that they control and curate as they see fit. Now their motivation is to sell ads, with little concern for the truth or values they amplify. That is already disastrous, but it could turn much worse. In the future their motivation may be coopted to actively control our minds in support of some social or political agenda.

This ecological perspective leads to a vision of what to regulate for, not just against -- and makes an even stronger case for unbundling construction of our individual newsfeeds from platform control to user control.

  • Do we want top-down control by government, platforms, or independent institutions (including oversight boards) that we hope will be benevolent? That leads eventually to authoritarianism. “Platform law” is “overbroad and underinclusive,” even when done with diligence and good intentions.
  • Do we fully decentralize all social networks, to rely on direct democracy (or small bottom-up collectives)? That risks mob rule, the madness of crowds, and anarchy.
  • Can a hybrid distributed solution balance both top-down and bottom-up power with an emergent dynamic of checks and balances? Can technology help us augment the wisdom of crowds rather than the madness? That seems the best hope.

The ecological depth of such a structure has not yet been appreciated. It is not simply to hope for some new kind of curatorial beast that may or may not materialize. Rather, it is to provide the beginnings of an infrastructure that the communities and institutions we already have can build on -- to reestablish their crucial role in mediating our discourse. They can be complemented and energized by whatever new kinds of communities and institutions may emerge as we learn to apply these powerful new tools. That requires tools not just for curation, but for integrating with the other aspects of these institutions and their broader missions.

Now our communities and institutions are treated little differently from any individual user of social media, which literally disintermediates them from their role as mediators. The platforms have arrogated to themselves, alone, the role of mediating what each of us sees from the mass of information flowing through social media. Unbundling filtering services to be independently operated would provide a ready foundation for our existing communities and institutions to restore their mediating role -- and create fertile ground for the emergence of new ones. The critical task ahead is to clarify how filtering services become a foundation for curatorial mediators to regain their decentralized roles in the digital realm. How will they curate not only their own content, but that of others? What kinds of institutions will have what curatorial powers?

Conclusion – can truth, value and democracy survive?

A Facebook engineer lamented in 2011 that “The best minds of my generation are thinking about how to make people click ads.”  After ten years of that, isn’t it time to get our best minds thinking about empowering us in whatever ways fulfill us? Some of those minds should be technologists, some not. Keller’s taxonomy of problem areas is a good place to start, not to end.

There is some truth to the counter-Brandeisian view that more speech is not a remedy for bad speech -- just as there is some truth to the view that more intolerance is not a remedy for intolerance. Democracies cannot eliminate either. All they have is the unsatisfyingly incomplete remedy of healthy dialog and mediation, supported by good governance. Churchill said, “democracy is the worst form of government – except for all the others that have been tried.” Markets and technology have at times stressed that balance when not guided by democratic government, but they can dramatically enhance it when properly guided.

An assemblage of filtering services is the start of a digital support infrastructure for that. Some filtering services may gain institutional authority, and some may be given little authority, but we the people must have ongoing say in that. This will lead to a new layer of social mediation functionality that can become a foundation for the ecology of digital democracy.

Which future do we want? One of platform law acting on its own, or as the servant of an authoritarian government, to control the seen reality and passions of an unstructured mob? Or a digitally augmented upgrade of the rich ecology of mediators of consent on truth and value that -- despite occasional lapses -- has given us a vibrant and robust marketplace of ideas?

Thursday, August 05, 2021

Unbundling Social Media Filtering Services – Updates on Debate and Development


This is an informal work in progress updating and expanding on my two articles that relate to an important debate in the Journal of Democracy on The Limits of Platform Power, in Tech Policy Press (8/9/21): 

...as well as this earlier article in Tech Policy Press (4/22/21):

The focus is on how to manage social media -- and specifically the similar proposals by a number of prominent experts to unbundle the filtering services that curate the news feeds and recommendations served to users. The updates are best understood after reading those articles.

A further round of notable discussion on this occurred around the Reconciling Social Media & Democracy mini-symposium hosted by Tech Policy Press on 10/7/21 that I helped organize, which brought together all of the Journal of Democracy debate authors, plus other notables. Recordings and transcripts are available for my session with Francis Fukuyama, Nathalie Marechal, and Daphne Keller.

Also relevant to this debate:

This visualization from my 4/22/21 Tech Policy Press article may also be helpful:

RUNNING UPDATES (most recent first):

  • [10/12/21]
    It Will Take a Moonshot to Save Democracy From Social Media -- that is my quick take on the 10/7 mini-symposium (see previous item). There was general agreement by most speakers that there is no silver bullet...but that shifting power from the platforms is important. (more...)

  • [10/7/21]
    The Tech Policy Press mini-symposium, Reconciling Social Media & Democracy, that I helped organize and moderate was held with eleven speakers prominent in this space. It was very gratifying to catalyze this excellent discussion reflecting diverse perspectives, which I hope will help further a movement toward productive reforms. [10/14:] Recordings and transcripts are available for my session with Francis Fukuyama, Nathalie Marechal, and Daphne Keller. Other sessions are being posted at TechPolicy.Press.

  • [9/20/21]
    "The wood wide web" provides a very relevant analog to the ecosystem perspective in my article preprint -- as explained in The word for web is forest, by Claire Evans (9/19/21). She likens the "context collapse" on social media to monocultures and clear-cutting in forests, which ignores "just how sustainable, interdependent, life-giving systems work." 

  • [9/10/21]
    "Context collapse" is a critical factor in creating conflict in social media, as explained in The day context came back to Twitter (9/8/21), by Casey Newton. As he explains, Facebook Groups and the new Twitter Communities are a way to address this problem of "taking multiple audiences with different norms, standards, and levels of knowledge, and herding them all into a single digital space." Filters are a complementary tool for seeking context, especially when user controlled, and applied in with intentionality. Social media should offer both.

  • [8/25/21]
    The importance of a cross-platform view of the social media ecosystem is highlighted in one of the articles briefly reviewed in Tech Policy Press this week. The article by Zeve Sanderson et al. on off-platform spread of Twitter-flagged tweets (8/24/21) argues for “ecosystem-level solutions,” including such options as 1) multi-platform expansion of the Oversight Board, 2) unbundling of filters/recommenders as discussed here (citing the middleware proposal by Francis Fukuyama et al.), and 3) “standards for value-driven algorithmic design” (as outlined in the following paper by Helberger).

    A conceptual framework, On the Democratic Role of News Recommenders by Natali Helberger (6/12/19, cited by Sanderson), provides a very thought-provoking perspective on how we might want social media to serve society. This is the kind of thinking about what to regulate for, not just against, that I have suggested is badly needed. It suggests four very different (but in some ways complementary) sets of objectives to design for. This perspective -- especially the liberal and deliberative models -- can be read to make a strong case for unbundling of filters/recommenders in a way that offers user choice (plus perhaps some default or even required ones as well).

    I hope to do a future piece expanding on the Helberger and Goldman (cited in my 8/15 update below) frameworks and how they combine with some of the ideas in my Looking Ahead post about the need to rebuild the social mediation ecosystems that we built over centuries -- and that digital social media are now abruptly disintermediating with no replacement.
  • [8/17/21]
    Progress on Twitter's @BlueSky unbundling initiative: Jay Graber announces "I’ll be leading @bluesky, an initiative started by @Twitter to decentralize social media. Follow updates on Twitter and at blueskyweb.org" (8/16). Mike Masnick comments: "there has been a lot going on behind the scenes, and now they've announced that Jay will be leading the project, which is FANTASTIC news." Masnick expands: "There are, of course, many, many challenges to making this a reality. And there remains a high likelihood of failure. But one of the key opportunities for making a protocol future a reality -- short of some sort of major catastrophe -- is for a large enough player in the space to embrace the concept and bring millions of users with them. Twitter can do that. And Jay is exactly the right person to both present the vision and to lead the team to make it a reality. ...This really is an amazing opportunity to shape the future and move us towards a more open web, rather than one controlled by a few dominant companies."

    Helpful perspectives on improving and diversifying filtering services are in two articles by Jonathan Stray: Designing Recommender Systems to Depolarize (7/11/21) and Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals (1/21/21). One promising conflict-transformation ranking strategy that has been neglected is “surprising validators,” suggested by Cass Sunstein, as I expanded on in 2012 (and since). All of these deserve research and testing -- and an open market in filtering services is the best way to make that happen.

  • [8/15/21]
    Additional rationales for demanding diversity in filtering services, and for understanding some of the forms this may take, are nicely surveyed in Content Moderation Remedies by Eric Goldman. He suggests "...moving past the binary remove-or-not remedy framework that dominates the current discourse about content moderation" and provides an extensive taxonomy of remedy options. He explains how expanded non-removal remedies can provide a possible workaround to the dilemmas of remedies that are not proportional to different levels of harm. Diverse filtering services can have not only different content selection criteria, but also different strategies for discouraging abuse. And, as he points out, "user-controlled filters have a venerable tradition in online spaces." (Thanks to Daphne Keller for suggesting this article to me as relevant to my Looking Ahead piece, and for her other helpful comments.)
  • [8/9/21]
    My review and synthesis of the Journal of Democracy debate mentioned in my 7/21 update are now published in Tech Policy Press.
    + I expand on those two Tech Policy Press articles in The Need to Unbundle Social Media - Looking Ahead. We need a multidisciplinary view of how tech can move democratic society into its digital future. Democracy is always messy, but that is why it works -- truth and value are messy. Our task is to leverage tech to help us manage that messiness to be ever more productive.

Older updates -- carried over from the page of updates to my 4/22/21 Tech Policy Press article

  • [7/21/21]
    A very interesting five-article debate on these unbundling/middleware proposals, all headed The Future of Platform Power, is in the Journal of Democracy, responding to Fukuyama's April article there. Fukuyama responds to the other four commentaries (which include a reference to my Tech Policy Press article). The one by Daphne Keller, consistent with her items noted just below, is generally supportive of this proposal, while providing a very constructive critique that identifies four important concerns. As I tweeted in response, "“The best minds of my generation are thinking about how to make people click ads” – get our best minds to think about empowering us in whatever ways fulfill us! @daphnehk problem list is a good place to start, not to end." I plan to post further comments on this debate soon [now linked above, 8/9/21].

  • [6/15/21]
    Very insightful survey analysis of First Amendment issues relating to proposed measures for limiting harmful content on social media -- and how most run into serious challenges -- in Amplification and Its Discontents, by Daphne Keller (a former Google Associate General Counsel, now at Stanford, 6/8/21). Wraps up with discussion of proposals for "unbundling" of filtering services: "An undertaking like this would be very, very complicated. It would require lawmakers and technologists to unsnarl many knots.... But unlike many of the First Amendment snarls described above, these ones might actually be possible to untangle." Keller provides a very balanced analysis, but I read this as encouraging support on the legal merits of what I have proposed: the way to preserve freedom of expression is to protect users' freedom of impression -- not easy, but the only option that can work. Keller's use of the term "unbundling" is also helpful in highlighting how this kind of remedy has precedent in antitrust law.
    Interview with Keller on this article by Justin Hendrix of Tech Policy Press, Hard Problems: Regulating Algorithms & Antitrust Legislation (6/20/21).
    + Added detail on the unbundling issues is in Keller's 9/9/20 article, If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
  • Another perspective on how moderation conflicts with freedom is in On Social Media, American-Style Free Speech Is Dead (Gilad Edelman, Wired 4/27/21), which reports on Evelyn Douek's more international perspective. Key ideas are to question the feasibility of American-style binary free speech absolutism and shift from categorical limits to more proportionality in balancing societal interests. I would counter that the decentralization of filtering to user choice enables proportionality and balance to emerge from the bottom up, where it has a democratic validity as "community law," rather than being imposed from the top down as "platform law." The Internet is all about decentralized control -- why should we sacrifice freedom of speech to a failure of imagination in managing a technology that should enhance freedom? Customized filtering can provide a receiver-specific richness of proportionality that better balances rights of impression with nuanced freedom of expression. Douek rightly argues that we must accept an error rate in moderation -- why not expect a bottom-up, user-driven error rate to be more open and responsive to evolving wisdom and diverse community standards than one applied across the board?
  • [5/18/21]
    Clear insights on the new dynamics of social media - plus new strategies for controlling disinformation with friction, circuit-breakers, and crowdsourced validation in How to Stop Misinformation Before It Gets Shared, by Renee DiResta and Tobias Rose-Stockwell (Wired 3/26/21). Very aligned with my article (but stops short of the contention that democracy cannot depend on the platforms to do what is needed).
  • [5/17/21]
    Important support and suggestions related to Twitter's Bluesky initiative from eleven members of the Harvard Berkman Klein community are in A meta-proposal for Twitter's bluesky project (3/31/21). They are generally aligned with the directions suggested in my article.
  • [4/22/21]
    Another piece by Francis Fukuyama that addresses his Stanford group's proposal is in the Journal of Democracy: Making the Internet Safe for Democracy (April 2021).
    (+See 7/21/21 update, above, for follow-ups.)
---

Grandfather clause: Many people have contributed to the idea of unbundling social media filters, but I believe I was the first, dating to 2002-3 -- see The Roots of My Thinking on Tech Policy.

============================================
============================================
[Original opening section of this updates post, as posted 8/5/21]

This is an informal work in progress updating and expanding on my two articles in Tech Policy Press (8/9/21) that relate to an important debate in the Journal of Democracy on The Limits of Platform Power

The focus is on how to manage social media and specifically the similar proposals by a number of prominent experts to unbundle the filtering services that curate the news feeds and recommendations served to users. The updates are best understood after reading those articles.

UPDATE: A further round of notable discussion on this occurred around the Reconciling Social Media & Democracy mini-symposium hosted by Tech Policy Press on 10/7/21, which I helped organize and which featured all of the Journal of Democracy debate authors, plus other notables. Recordings and transcripts are available for my session with Francis Fukuyama, Nathalie Marechal, and Daphne Keller.

Also relevant to this debate:

This visualization from my 4/22/21 Tech Policy Press article may also be helpful: