Tuesday, July 13, 2021

Toward the Digital Constitution of Knowledge [a teaser*]

How to Destroy Truth, the 7/1/21 David Brooks column, offers insight on the problems and opportunities of social media, drawing on Jonathan Rauch’s important new book “The Constitution of Knowledge: A Defense of Truth.” Brooks summarizes Rauch on empirical and propositional knowledge (and how that complements the emotional and moral knowledge that derives from the collective wisdom of shared stories):

…the acquisition of this kind of knowledge is also a collective process. It’s not just a group of people commenting on each other’s internet posts. It’s a network of institutions — universities, courts, publishers, professional societies, media outlets — that have set up an interlocking set of procedures to hunt for error, weigh evidence and determine which propositions pass muster.

My work on the future of networks for human collaboration has been in tune with this and suggests some urgent further directions, as detailed most recently in my Tech Policy Press article, The Internet Beyond Social Media Thought-Robber Barons. Having just read Rauch’s book (with close attention to his chapter on “Disinformation Technology: The Challenge of Digital Media”), I have two initial takeaways that I preview here:

Extending Rauch’s work: It struck me that Rauch could enhance his ideas by drawing on proposals for unbundling aspects of digital media, as I and others (including Jack Dorsey and Francis Fukuyama) have advocated. Rauch’s chapter on media is very resonant, but the final section stopped me short. He seems uncritical in his support of Big Tech efforts at quasi-independent outsourcing of controls like the Facebook Oversight Board and fact-checking authorities. I see that as ineffective -- and, more importantly, as a fig leaf on overcentralized, authoritarian control of these essential network utilities -- and counter to the more open emergence needed to seek effective consensus on truth.

Extending my work: I have built on similar ideas (notably Renee DiResta’s Mediating Consent) -- but Rauch convinces me to add focus on the role of institutional participants in that process, beyond the emergent bottom-up processes for reliance on such institutions that I have been emphasizing as the driving force.

As Rauch explains, the “constitution of knowledge” is a collective process based on rules and governance principles. As he says, the dominant social media companies have hijacked this process to serve their own business objectives of selling ads, rather than the objectives of their users and society to support the constitution of knowledge. It is now clear to everyone whose salary does not depend on the selling of ads that these two objectives are incompatible, and we are suffering the consequences.

But, to the extent it is the platforms that address this, directly or via surrogates, it devolves into undemocratic “platform law,” which, as Molly Land explains, lacks legitimacy and is “overbroad and underinclusive.” Rauch makes a similar point that the Web has become a “company town.”

To address that we need to unbundle key functions of the social network platforms. As all discourse moves to the digital domain, there is a core function of posting and access that seeks to be universal and is thus strongly subject to network effects that favor a degree of concentration. But the function that is essential to the constitution of knowledge is the selection of what each of us sees in our newsfeeds. In a free society that must be largely a matter of individual choice. That can be decentralized and has limited network effects.

The solution this leads to is a functional unbundling: create an open market in filtering services that each of us can select from, and mix and match, to customize a feed of what we wish to view from the platform at any given time. That might be voluntary (if Dorsey has his way) or mandated (if Zuckerberg continues to overreach).

My article and the works of other proponents of such an unbundling explain how these feed filtering services can be offered by service providers that may include the kinds of traditional institutional gatekeepers Rauch refers to. We have argued that such decentralization breaks up the “platform law” we are stumbling into. Instead, it returns individual agency to our open marketplace of ideas, supporting it with an open marketplace of filters. We, not the platform, should decide when we want filters from a given source, and with what weight. Those sources can include all the kinds of institutions based on professionalism and institutionalism that Rauch refers to, but we should generally be the ones to decide. Rauch quotes Frederick Douglass on “the rights of the hearer.” Democracy and truth require that we free our feeds to protect “the rights of the hearer as well as those of the speaker.”

As one of Rauch’s chapter subtitles says, “Outsourcing reality to a social network is humankind’s greatest innovation.” Translating that to the digital domain, the core idea is that multiple filtering services can uprank and downrank items for possible inclusion in our feeds. Each filtering service should be able to assign weights to those up or down rankings, and users should be able to use knobs or sliders to give higher or lower weightings to each filtering service they want applied. Rauch's emphasis on institutions suggests that more official and authoritative gatekeepers might have special overweightings or other privileged ways to adjust what we see and how we see it (such as to provide warnings or introduce friction into viral sharing).
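To make that mechanism concrete, here is a minimal sketch in Python of how such a feed could be composed: each filtering service the user has chosen scores an item up or down, and the user's "knobs or sliders" determine how much weight each service carries. The service names, scores, and weights are invented for illustration only; this is not any platform's actual API.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    text: str

# Each filtering service scores an item from -1.0 (strong downrank)
# to +1.0 (strong uprank). These two services are hypothetical stand-ins
# for independent providers in an open filtering market.
def fact_check_service(item: Item) -> float:
    return -0.8 if "miracle cure" in item.text.lower() else 0.2

def local_news_service(item: Item) -> float:
    return 0.5 if "city council" in item.text.lower() else 0.0

# The user's sliders: how much weight to give each chosen service.
user_weights = {fact_check_service: 1.0, local_news_service: 0.5}

def composite_score(item: Item) -> float:
    """Blend the services' up/down rankings using the user's weights."""
    total = sum(user_weights.values()) or 1.0
    return sum(w * svc(item) for svc, w in user_weights.items()) / total

def build_feed(candidates: list[Item]) -> list[Item]:
    """Order candidate items by the user's blended ranking, highest first."""
    return sorted(candidates, key=composite_score, reverse=True)

if __name__ == "__main__":
    feed = build_feed([
        Item("1", "Miracle cure shared thousands of times"),
        Item("2", "City council votes on the new budget"),
    ])
    for item in feed:
        print(f"{composite_score(item):+.2f}  {item.text}")
```

The composition step itself is small and well defined; the interesting competition -- and the institutional roles Rauch emphasizes -- would live in the services that supply the scores.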

My own work on filtering algorithms designs for truth through a man-machine partnership based on distilling human judgments of reputation. This generalizes how Google’s PageRank distills the human judgments of “Webmasters” as encoded in the emergent linking of the Web -- adapting that to the likes, shares, and comments of social media. Rauch seems to suggest a similar direction: “giving users an epistemic credit score.” (Social media already track users’ reputational credit scores, but they score for engagement, not for truth.) As Rauch observes, there can be "no comprehensive solutions to the disinformation threat," but this massive crowdsourcing of judgments offers what can become a robust "cognitive immune system."
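As a rough illustration of that direction -- a toy sketch, not the algorithm in my design documents -- the loop below alternates between scoring items by the reputation-weighted endorsements they receive and re-scoring each user's reputation by the quality of what they endorsed, the same fixed-point idea PageRank applies to links:

```python
# Hypothetical data: endorsements[user] is the set of items that user liked or shared.
endorsements = {
    "alice": {"post_a", "post_b"},
    "bob":   {"post_b"},
    "carol": {"post_c"},
}
items = {"post_a", "post_b", "post_c"}

reputation = {user: 1.0 for user in endorsements}  # start everyone equal
quality = {}

for _ in range(20):  # iterate toward a fixed point, PageRank-style
    # An item's quality is the reputation-weighted support it received.
    quality = {item: sum(rep for user, rep in reputation.items()
                         if item in endorsements[user])
               for item in items}
    # A user's reputation is the average quality of the items they endorsed.
    reputation = {user: sum(quality[i] for i in liked) / len(liked)
                  for user, liked in endorsements.items()}
    # Normalize so reputations stay comparable from round to round.
    total = sum(reputation.values()) or 1.0
    reputation = {u: r * len(reputation) / total for u, r in reputation.items()}

print("item quality:", {i: round(q, 2) for i, q in quality.items()})
print("user reputation:", {u: round(r, 2) for u, r in reputation.items()})
```

A real system would need far more nuance (topic-specific reputation, resistance to manipulation, decay over time), but the core remains distilling human judgments rather than replacing them.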

I would be very interested to learn how Rauch might build on those proposals for 1) open filtering and 2) reputation-based truth-seeking algorithms -- to develop his vision of the constitution of knowledge into a more dynamically adaptive and emergent future – one that moves toward a more flexible structure of “community law.” (Similar issues of platform versus community law also apply to the softer emotional and moral knowledge that Brooks refers to.)

Pursuant to that, I plan to revisit ideas from my early work on how this digitally augmented constitution of knowledge can effectively combine 1) the open emergence of preferred filtering services from individual users with 2) the contingent establishment of more official and authoritative gatekeepers. My original 2003 design document (paragraph 0288) outlined a vision that extended this kind of decentralized selection of filtering services in a way that I hope Rauch might relate to:

Checks and balances could provide for multiple bodies with distinct responsibilities, such as executive, legislative, and judicial, and could draw on representatives to oversee critical decisions and methods. Such representatives may be elected by democratic methods, or through reputation-based methods, or some combination. Expert panels could also have key roles, again, possibly given limited charters and oversight by elected representatives to avoid abuse by a technocracy. External communities and governmental bodies may also have oversight roles in order to ensure broadly based input and sensitivity to the overall welfare. The use of multiple confederated and cooperative marketplaces, as described above, may also provide a level of checks and balances as well.

It seems most of our thinking about social media is currently reactive and rooted in the present, looking only to the very near future. But we are already far down a wrong path and need a deep rethinking and reformation. We need a new driving vision of how our increasingly digital society can reposition itself to deal with the constitution of knowledge for coming decades. That future must be flexible and emergent, able to deal with unimaginable scale, speed, and scope. If we do not set a course for that future now, we may well find ourselves in a dark age that will be increasingly hard to escape. That window may already be closing.

---

*I refer to this as "a teaser" because it is a preliminary draft that I hope to refine and expand based on further thought and feedback.

Sunday, June 13, 2021

Beyond Deplatforming: The Next Evolution of Social Media May Make Banning Individual Accounts Less Necessary

As published in Tech Policy Press...

Since his accounts on major platforms were suspended following the violent insurrection at the US Capitol on January 6, Donald Trump has been less of a presence on social media. But a recent New York Times analysis finds that while Trump “lost direct access to his most powerful megaphones,” his statements can still achieve vast reach on Facebook, Instagram and Twitter. The Times found that “11 of his 89 statements after the ban attracted as many likes or shares as the median post before the ban, if not more. How does that happen? …after the ban, other popular social media accounts often picked up his messages and posted them themselves.”

Understanding how that happens sheds light on the growing controversy over whether “deplatforming” is effective in moderating extremism, or just temporarily drives it out of view, to intensify and potentially cause even more harm. It also illuminates the more fundamental question: is there a better way to leverage how social networks work to manage harmful speech in a way that is less draconian and more supportive of free expression? Should we really continue down this road toward “platform law” — restraints on speech applied by private companies (even if under “oversight” by others) — when it is inevitably “both overbroad and underinclusive” — especially as these companies provide increasingly essential services?

Considering how these networks work reveals that the common “megaphone” analogy that underlies rhetoric around deplatforming is misleading. Social media do not primarily enable a single speaker to achieve mass reach, as broadcast media do. Rather, reach grows as messages propagate through social networks, with information spreading person to person, account to account, more like rumors. Trump’s accounts are anomalous, given his many tens of millions of direct followers, so his personal network does give him something of a megaphone. But the Times article shows that, even for him, much of his reach is by indirect propagation — dependent on likes and shares by others. It is striking that even after being banned, comments he made elsewhere were often posted by his supporters (or journalists, and indeed his opponents), and then liked and further shared by other users hundreds of thousands of times.

The lesson is that we need to think of social networks as networks and manage them that way. Banning a speaker from the network does not fully stop the flow of harmful messages, because they come from many users and are reinforced by other users as they flow through the network. The Times report explains that Trump’s lies about the election were reduced far more substantially than his other messages not simply because Trump was banned, but because messages from anyone promoting false election fraud claims are now specifically moderated by the platforms. That approach can work to a degree, for specific predefined categories of message, but it is not readily applied more generally. There are technical and operational challenges in executing such moderation at scale, and the same concerns about “platform law” apply. 

Social media networks should evolve to apply more nuanced intervention at the network level. There is growing recognition of the need to enable a deeper level of individual control on how messages are filtered into each user’s newsfeed, and whether harmful speakers and messages are downranked based on feedback from the crowd to reduce propagation. Such controls would offer a flexible, scalable, and adaptive cognitive immune system to limit harmful viral cascades. That can limit not only how messages propagate, but how harmful users and groups are recommended to other users — and can moderate which speech is impressed upon users without requiring a binary shutdown of expression.
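As a sketch of what one such network-level control could look like -- the signals and thresholds here are invented for illustration, not any platform's actual mechanism -- a reshare keeps its full distribution weight until crowd feedback and share velocity suggest a harmful cascade, at which point it is dampened rather than banned:

```python
def propagation_weight(shares_last_hour: int,
                       crowd_flags: int,
                       crowd_endorsements: int) -> float:
    """Return a multiplier (0.0 to 1.0) applied to how widely a reshare
    is distributed; illustrative thresholds only."""
    flag_ratio = crowd_flags / max(crowd_flags + crowd_endorsements, 1)
    weight = 1.0
    if shares_last_hour > 1000:   # viral velocity: add friction
        weight *= 0.5
    if flag_ratio > 0.3:          # crowd feedback: downrank, don't ban
        weight *= 1.0 - flag_ratio
    return round(weight, 2)

# A fast-spreading, heavily flagged message is dampened, not deleted...
print(propagation_weight(shares_last_hour=5000, crowd_flags=400, crowd_endorsements=600))  # 0.3
# ...while an ordinary post propagates untouched.
print(propagation_weight(shares_last_hour=50, crowd_flags=2, crowd_endorsements=500))      # 1.0
```

The binary decision to ban a speaker becomes a continuous dial on propagation, applied message by message and adjustable by each user or their chosen filtering service.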

Some experts propose that the best way to manage this at scale is to spin out the choice of filtering rules that work with the platforms to an open market of filtering services that users can choose from. The decentralization of this key aspect of current social media networks away from the dominant platforms, and the potential diversity of choices it may create for users, might prevent a speaker widely recognized to speak lies and hate from gaining many tens of millions of followers in the first place — and would break up the harmful feedback loops that reinforce the propagation of their dangerous messages. Perhaps such a system could have substantially prevented or reduced the propagation of the Big Lie, and thereby obviated the need to deplatform a President. Instead, it would apply more nuanced downstream control — a form of crowdsourced moderation emergent from the individual choices of users and communities of users.

Under the status quo, we are left with the “platform law” policies set by a few dominant private companies, leaving no one satisfied. Instead, democracy would be far better served by digitally enhanced processes to apply more nuanced forms of “community law,” as crowdsourced from each social network user and community as they interact with their networks.

Wednesday, May 05, 2021

Ass-Backwards: The Facebook Oversight Board, Trump, and Freedom

[The Economist]
The Facebook Oversight Board decision on Trump “pleases no one” because we have it backwards. Social media have become a universal platform: We should individually control what we choose to hear, not globally control who can speak. 

The Internet is not like a soapbox with limited reach (if you don’t like the speech, you can walk away). Newsfeeds come to you all or not at all — except as filtered. We need to control our own filters! That is how we “walk away” as we desire. 

We can’t rely on control at the source. No one should decide for us what gets impressed upon our attention (except as we empower them to serve as our agent). The only solution is for each of us to control how we individually filter. We need to break out an open market in user-selectable filtering services that we each can choose from. Not perfect, but that is the nature of a free society. More at Tech Policy Press: The Internet Beyond Social Media Thought-Robber Barons.

Who should decide what you listen to? Not the speaker, not the government, not the platform, not some “oversight” board. Social Media cannot offer freedom of EXpression unless we each retain freedom of IMpression. We need individual control/delegation of what we see. #FreeOurFeeds! 

Tuesday, April 20, 2021

Tech Policy Press: The Internet Beyond Social Media Thought-Robber Barons

==============================================================
SEE IMPORTANT UPDATES BELOW, plus related items & background notes 
==============================================================

My new article, "The Internet Beyond Social Media Thought-Robber Barons," was published in Tech Policy Press on 4/22/21:
  • It is now apparent that social media is dangerous for democracy, but few have recognized a simple twist that can put us back on track.  
  • A surgical restructuring -- an "unbundling" to an open market strategy that shifts control over our feeds to the users they serve -- is the only practical way to limit the harms and enable the full benefits of social media.
(This is an extensively updated and improved version of the discussion draft first posted on this blog in February, now integrating more proposals, addressing common objections, and drawing on feedback from a number of experts in the field -- and the very helpful editing of Justin Hendrix.)

I summarize and contrast these proposals:

  • Most prominently in Foreign Affairs and the Wall Street Journal by Francis Fukuyama, Barak Richman, Ashish Goel, and others in the report of the Stanford Working Group on Platform Scale. (Their use of the technical term "middleware" for this approach has been picked up by some other commentators.)
  • Independently by Stephen Wolfram, Mike Masnick, and me.
  • And with what might become important real-world traction in the exploratory Bluesky initiative by Jack Dorsey at Twitter.

The article covers new ground in presenting a concrete vision of what an open market in filtering services might enable -- how this can bring individual and social purpose back to social media, to not only protect, but systematically enhance democracy, and how that can augment human wisdom and social interaction more broadly. That vision should be of interest to thoughtful citizens as well as policy professionals.


I welcome your feedback and support for these proposals, and can be reached at intertwingled [at] teleshuttle [dot] com.

--------------------------

UPDATES:

  • [7/21/21]
    A very interesting five-article debate on these unbundling/middleware proposals, all headed The Future of Platform Power, is in the Journal of Democracy, responding to Fukuyama's April article there. Fukuyama responds to the other four commentaries (which include a reference to my Tech Policy Press article). The one by Daphne Keller, consistent with her items noted just below, is generally supportive of this proposal, while providing a very constructive critique that identifies four important concerns. As I tweeted in response, "“The best minds of my generation are thinking about how to make people click ads” – get our best minds to think about empowering us in whatever ways fulfill us! @daphnehk problem list is a good place to start, not to end." I plan to post further comments on this debate soon.

  • [6/15/21]
    Very insightful survey analysis of First Amendment issues relating to proposed measures for limiting harmful content on social media -- and how most run into serious challenges -- in Amplification and Its Discontents, by Daphne Keller (a former Google Associate General Counsel, now at Stanford, 6/8/21). Wraps up with discussion of proposals for "unbundling" of filtering services: "An undertaking like this would be very, very complicated. It would require lawmakers and technologists to unsnarl many knots.... But unlike many of the First Amendment snarls described above, these ones might actually be possible to untangle." Keller provides a very balanced analysis, but I read this as encouraging support on the legal merits of what I have proposed: the way to preserve freedom of expression is to protect users' freedom of impression -- not easy, but the only option that can work. Keller's use of the term "unbundling" is also helpful in highlighting how this kind of remedy has precedent in antitrust law.
    + Interview with Keller on this article by Justin Hendrix of Tech Policy Press, Hard Problems: Regulating Algorithms & Antitrust Legislation (6/20/21).
    + Added detail on the unbundling issues is in Keller's 9/9/20 article, If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
  • Another perspective on how moderation conflicts with freedom is in On Social Media, American-Style Free Speech Is Dead (Gilad Edelman, Wired 4/27/21), which reports on Evelyn Douek's more international perspective. Key ideas are to question the feasibility of American-style binary free speech absolutism and shift from categorical limits to more proportionality in balancing societal interests. I would counter that the decentralization of filtering to user choice enables proportionality and balance to emerge from the bottom up, where it has a democratic validity as "community law," rather than being imposed from the top down as "platform law." The Internet is all about decentralized control -- why should we sacrifice freedom of speech to a failure of imagination in managing a technology that should enhance freedom? Customized filtering can provide a receiver-specific richness of proportionality that better balances rights of impression with nuanced freedom of expression. Douek rightly argues that we must accept an error rate in moderation -- why not expect a bottom-up, user-driven error rate to be more open and responsive to evolving wisdom and diverse community standards than one applied across the board?
  • [5/18/21]
    Clear insights on the new dynamics of social media - plus new strategies for controlling disinformation with friction, circuit-breakers, and crowdsourced validation in How to Stop Misinformation Before It Gets Shared, by Renee DiResta and Tobias Rose-Stockwell (Wired 3/26/21). Very aligned with my article (but stops short of the contention that democracy cannot depend on the platforms to do what is needed).
  • [5/17/21]
    Important support and suggestions related to Twitter's Bluesky initiative from eleven members of the Harvard Berkman Klein community are in A meta-proposal for Twitter's bluesky project (3/31/21). They are generally aligned with the directions suggested in my article.
  • [4/22/21]
  • Another piece by Francis Fukuyama that addresses his Stanford group proposal is in the Journal of Democracy: Making the Internet Safe for Democracy (April 2021).
    (+See 7/21/21 update, above, for follow-ups.)

--------------------------

Related items by me:  see the Selected Items tab.

--------------------------

Personal note: The roots of these ideas

This background might be useful to make it more clear where I am coming from...

These ideas have been brewing throughout my long career (bio), with a burst of activity very early on, then around 2002-3, and increasingly in the past decade. They are part of a rich network that intertwingles with my better-known work on FairPay and several of my patented inventions. Some background on these roots may be helpful.

I was first enthused by the potential of what we now call social media around 1970, when I had seen early hypertext systems (precursors of the Web) by Ted Nelson and Doug Engelbart, and then studied systems for collaborative “social” decision support by Murray Turoff and others, rolling into an independent-study graduate school course on collaborative systems. All of this oriented me to the spirit of using computers for augmenting human intelligence (including social intelligence) -- not replacing it with artificial intelligence.

My first proposals for an open market in media filtering were inspired by the financial industry parallels. An open market in filters for news and market data analytics was emerging when I worked for Standard & Poor's and Dow Jones around 1990. Filters and analytics would monitor raw news feeds and market data (price ticker) feeds, select, and analyze that raw information using algorithms and parameters chosen by the user, and work within any of a variety of trading platforms.

I drew on all of that when designing a social decision support system for large-scale open innovation and collaborative development of early-stage ideas around 2002. That design featured an open market for reputation-based ranking algorithms essentially as proposed here. Exposure to Google PageRank, which distilled human judgment and reputation for ranking Web search results, inspired me to broaden Google's design to distill the wisdom of the crowd as reflected in social media interactions, using a nuanced multi-level reputation system.

By 2012 it was becoming apparent that the Internet was seriously disrupting the marketplace of ideas, and Cass Sunstein’s observations about surprising validators inspired me to adapt my methods to social media. I became active in groups that were addressing those concerns and more fully recast my earlier designs to focus on social media, and to address architectural and regulatory strategies (here and then here). My other work on innovative business models for digital services also gave me a unique perspective on better alternatives to the perverse incentives of the ad model.

The Fukuyama article late last year was gratifying validation on the need for an open, competitive market for feed filtering services driven by users, and inspired me to refocus on that as the most direct point of leverage for structural remediation, as expanded on here.

My thanks to the many researchers and activists in this field I have had the privilege of interacting with and who have provided invaluable stimulation, feedback, suggestions, and support. And special thanks to Justin Hendrix for his very helpful editing, and to those who reviewed and commented on earlier versions of this article: Renee DiResta, Yael Eisenstat, Gene Kimmelman, Ellen Goodman, Molly Land, and Sam Lessin.


Thursday, April 01, 2021

But Who Should Control the Algorithm, Nick Clegg? Not Facebook ...Us!

(Image adapted from cited Nick Clegg article)
Facebook's latest attempt to justify their stance on disinformation and other harms, and their plans to make minor improvements, actually points to the reason those improvements are not nearly enough -- and can never be. They need to make far more radical moves to free our feeds, as I have proposed previously.

Facebook’s VP of Global Affairs, Nick Clegg, put out an article yesterday that provides a telling counterpoint to those proposals. You and the Algorithm: It Takes Two to Tango defends Facebook in most respects, but accepts the view that users need more transparency and control:

You should be able to better understand how the ranking algorithms work and why they make particular decisions, and you should have more control over the content that is shown to you. You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes — to alter your personal algorithm…

He goes on to describe laudable changes Facebook has just made, with further moves in that direction intended. 

But the question is: how can this be more than Band-Aids covering the deeper problem? Seeking to put the onus on us -- “We need to look at ourselves in the mirror…” -- he goes on (emphasis added):

…These are profound questions — and ones that shouldn’t be left to technology companies to answer on their own…Promoting individual agency is the easy bit. Identifying content which is harmful and keeping it off the internet is challenging, but doable. But agreeing on what constitutes the collective good is very hard indeed.

Exactly the point of these proposals! No private company can be permitted to attempt that, even under the most careful regulation - especially in a democracy. That is especially true for a dominant social media service. Further, slow-moving regulation cannot be effective in an age of dynamic change. We need a free market in filters from a diversity of providers - for users to choose from. Twitter seems to understand that; it seems clear that Facebook does not.

Don't try to tango with a dancing bear

As I explain in my proposal:

Social media oligarchs have seduced us -- giving us bicycles for the mind that they have spent years and billions engineering to "engage" our attention. The problem is that they insist on steering those bicycles for us, because they get rich selling advertising that they precisely target to us. Democracy and common sense require that we, the people, keep control of our marketplace of ideas. It is time to wrestle back the steering of our bicycles, so that we can guide our attention where we want. Here is why, and how. Hint: it will probably require regulation, but not in the ways currently being pursued.

What I and others have proposed -- and that Jack Dorsey of Twitter has advocated -- is to spin out the filtering of our newsfeeds (and other recommendations of content, users, and groups) to a multitude of new "middleware" services that work with the platforms, but that users can choose from in an open market, and mix and match as they like. 

"Agreeing on what constitutes the collective good" has always been best done bthe collective human effort of an open market of ideas. Algorithms can aid humans in doing that, but we, the people, must decide which algorithms, with what parameters and what objective functions. These open filtering proposals explain how and why. What Clegg suggest is good advice as far as it goes, but, ultimately, too much like trying to tango with a dancing bear.

Friday, March 26, 2021

3 Tech CEOs Zoom into a Congress... Two Ways Forward on Disinformation (Yes or No?)

My quick take on yesterday's tech CEO Congressional hearing on disinformation is that the two best ways forward now seem very slightly less of a long shot (dysfunctional as much of the questioning was):

  1. Congress seems increasingly convinced that an end to the addictive ad model may have to be mandated.
  2. Twitter's prominent support for opening up the filtering to free our feeds gives that important strategy increased credibility (even though Congress seems to not yet have a clue about it).
We are still very far from any agreement on specific actions, but either of these two directions could make a real difference -- and doing both would go far to limit disinformation. Neither is easy, but both are doable.

On the ad-model 

Much of the concern seems focused on eliminating the targeting of ads (as Gilad Edelman reports in Wired) -- that could help, but would leave the overall drive for addictive engagement intact. It would also limit the value that advertising could have, if done right.

I have suggested a more market-based solution to advertising using "reverse metering" that negotiates payment to users for their attention and data. That would make the user the customer of the advertiser, motivating them to make ads be relevant, non-intrusive, and privacy protective.

A much broader elimination or reduction of advertising in favor of user-supported models would be more effective in eliminating perverse incentives to addict users with bad content. I have proposed innovative (and award-winning) models to make user funding effective, fair, and affordable -- as well as a strategy for "ratcheting" toward that goal.

On freeing our feeds

Oddly enough, Jack Dorsey of Twitter is offering what may be the best, most powerful, and most achievable solution -- but this remains largely ignored in legislative and policy circles. His Bluesky initiative proposes "...giving more people choice around what relevance algorithms they're using, ranking algorithms they're using. You can imagine a more market-driven and marketplace approach to algorithms." He highlighted it in his prepared testimony -- and in a tweetstorm sent during the hearings that may have been the best material of the day.

I proposed such a solution years ago, and summarized its potential in my recent post, Making Social Media Serve Society (before learning of Bluesky):
Social media oligarchs have seduced us -- giving us bicycles for the mind that they have spent years and billions engineering to "engage" our attention. The problem is that they insist on steering those bicycles for us, because they get rich selling advertising that they precisely target to us. Democracy and common sense require that we, the people, keep control of our marketplace of ideas. It is time to wrestle back the steering of our bicycles, so that we can guide our attention where we want.
Giving users an open market in filtering services that work within the platforms is the best way to do that -- by enabling competition that stimulates diversity and innovation to meet user needs. A marketplace of ideas functions well only if users control for themselves whether they see “undesirable” items, as they individually define that. Instead of expecting platforms to be responsible for managing the unruly beast of what ideas are posted, we must empower markets for filters that manage how ideas are consumed. Demand, not censorship (nor advertising), should control the flow of information to those who want it.

I have since posted brief updates on this strategy, citing Bluesky and others who have made similar proposals, and will publish a fully updated and expanded article on this soon.

Yes or no?

And, on the dysfunction in Congress, @Jack's "Yes or No" poll during his testimony was a testament (and won him a Congressional kudo for multitasking):

[@Jack in the Congress 3/25/21]

Will the answer be yes, oh magic eight ball? Sources say maybe.

Monday, February 22, 2021

"I am large, I Contain Multitudes" -- The Internet vs Splinternets

Do I contradict myself? / Very well then I contradict myself,

(I am large, I contain multitudes.)

--Walt Whitman, channeling the Internet in one grain of humanity.

The Internet is in crisis as a medium for human communications in a "global village," as Shira Ovide nicely summarizes in "The Internet is Splintering:"

[2BE Splinternet movie]

Each country has its own car safety regulations and tax codes. But should every country also decide its own bounds for appropriate online expression?

If you have a quick answer, let me ask you to think again. We probably don’t want internet companies deciding on the freedoms of billions of people, but we may not want governments to have unquestioned authority, either.

...a messy set of trade offs with no easy solutions. ...Is there a middle ground? The splinternet fear is often presented as a binary choice between one global Facebook or Google, or 200 versions. But there are ideas floating around to set a global baseline of online expression, and a process for adjudicating disputes.

...If you’re thinking all of this is a mess — yes, it is. Speech on the internet is a relatively new thing, and we’re still very much figuring it out.

Ovide focuses on political divides in the global village, but that is just one dimension of splintering... 

Sam Lessin focuses on cultural divides in "Clubhouse and the Future of Cult-Driven Social Platforms." He provides a scary dystopia where the ideal of “places where people value the viewpoints and stories of other group members and care about their standing in the minds of those others” is giving way to “places where people want to hear from a powerful leader and care about their standing in the eyes of the leader but not necessarily other cult members.”

There is a way to deal with this messiness -- humans have been doing that for millennia. Ovide observes that "Speech on the internet is a relatively new thing, and we’re still very much figuring it out," but I suggest we already have -- we have just not had the clarity or will to build it into our Internet media.

Humanity has always had a messy competitive battle in our marketplace of ideas. Sometimes authority dominates, other times it is the unruly mob, but on balance, the marketplace contains multitudes of ideas at varying levels of commonality and splintering and yet we haltingly evolve toward an emergently mediated consensus.  

What we have failed to properly address is how the Internet greatly amplifies voices, and how that amplification has fallen into the hands of a few media platform oligarchies seeking to keep control while assuaging government concerns. We have forgotten that these are services for users, and that it is users who should have primacy in controlling what information their media feed them.

A number of proposals have recently converged on a powerful structural remedy for letting these multitudes of views work through the process of mediating consent more freely and effectively -- to whatever level is feasible at any given time. My recent post Making Social Media Serve Society outlines these proposals (with links to some running updates). Aside from my own work, other key advocates are Francis Fukuyama and Barak Richman (in Foreign Affairs and The WSJ), Stephen Wolfram (in Senate testimony), and Jack Dorsey’s Bluesky initiative on Twitter.

Free our feeds!

The solution becomes clear when we identify the true nature of the problem: Social media oligarchs have seduced us -- giving us bicycles for the mind that they have spent years and billions engineering to "engage" our attention. The problem is that they insist on steering those bicycles for us, because they get rich selling advertising that they precisely target to us. Democracy and common sense require that we, the people, keep control of our marketplace of ideas. It is time to wrestle back the steering of our bicycles, so that we can guide our attention where we want. 

The best way to restore facts, reason, and civility is to be smarter about the viral spread of ideas, not to censor them (even though some authorities may want the power to censor). Users should have control over their filters, so that - in the eye of each receiver - desirable content is promoted, and desirable communities are proselytized. Giving users an open market in filtering “middleware” that works within the platforms is the best way to do that -- by enabling competition that stimulates diversity and innovation to meet the multitude of user needs.

Even Jack Dorsey sees the logic in that. Of course we could try to achieve the same effect within the monolithic platforms, but as Ovide says, "We probably don’t want internet companies deciding on the freedoms of billions of people, but we may not want governments to have unquestioned authority, either." The "global baseline of online expression, and ...process for adjudicating disputes" that she refers to are helpful steps, but far too slow and inflexible to make much of a dent in this problem without a deeper structural change to who controls filtering.

Letting users decide among independent middleware services that can help them steer their bicycles in accord with their individual desires is the best way to augment our marketplace of ideas. Unless we find an emergent and flexible way to contain an Internet of multitudes, our marketplace of ideas, democracy, and human society as a whole will splinter into a new dark age. My recent post explains how to do better.

[Update 2/23:] I should add that the effort to create new social media platforms that have "better" values and quality is laudable and valuable, but fails to address the deeper problem of getting both quality and diversity. Only by being large and containing multitudes can we maximize the power of our marketplace of ideas. We must tolerate fringe views, but limit their ability to do harm -- only by being cautiously open to the fringe do we learn and evolve. Flexible filters can let us decide when we want to stay in a comfort zone, and when to cast a wider net -- and in what directions. Niche services can take the form of walled-garden communities that layer on top of a universal interconnect infrastructure. That enables the walls to be semi-permeable or semi-transparent, to a degree that we can vary to suit our tastes and moods at any given time.

Friday, February 12, 2021

Growing Support for "Making Social Media Serve Society"

It was nice to learn that Jack Dorsey of Twitter was exploring ideas similar to what I have proposed -- in a project called Bluesky. As I was finalizing my (just prior) 2/11/21 post, Making Social Media Serve Society, I learned of this important development in a report by Casey Newton. That led me to other supportive items, including Senate testimony by prominent AI expert Stephen Wolfram advancing similar ideas. 

  • The prior post has a brief preliminary update addressing the Twitter actions (duplicated below). 
  • Rather than update the body of that post at this time (except to add missing links and correct formatting and typos), I provide a running commentary on my ongoing findings and views here.

First a summary of the key ideas of the original, then running updates (most recent first).

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Key ideas:

Paradise lost …and found -- saving democracy by serving users and society

The root causes of the crisis in our marketplace of ideas are that:

  1. The dominant social media platforms selectively control what we see, 
  2. and yet they are motivated not to present what we value seeing, but instead to “engage” audiences to click ads 

      They use their control of our minds not to serve us, but to extract value from us.

The best path to reduce the harm and achieve the lost promise of digital media is to remove control over what users see in their feeds from the platforms. Instead, create an open market in filtering “middleware” services that give users more power to control what they see in their feeds.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Summary of updates:

So far, the additional information and analysis seems to be very encouraging:

  • Adding support for this idea of shifting control of filters from the platforms to the users
  • Offering some slim hope (at least from Jack Dorsey of Twitter) that these reforms might be possible in part as self-regulation, rather than having to be imposed by regulators.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

2/16

Crowdsourcing for the "cognitive immune system" to downrank fake news: I had intended to include citations to important research that shows crowdsourcing of news source quality can compete with professional fact-checking as to quality -- and is clearly superior as to speed, cost, and scalability. See studies by Pennycook and Rand, and by Epstein, Pennycook and Rand.

2/13

Barak Richman and Francis Fukuyama's How to Quiet the Megaphones of Facebook, Google and Twitter (2/12, WSJ) reinforces and updates their prior call for this strategy:

  • The subtitle: "Today’s often toxic social-media environment calls for a fix that puts choices back in the hands of consumers. A new layer of ‘middleware’ can do that."
  • The closing paragraph: "Middleware offers a structural fix and avoids collateral damage. It will not solve the problems of polarization and disinformation, but it will remove an essential megaphone that has fueled their spread. And it will enhance individual autonomy, leveling the playing field in what should be a democratic marketplace of ideas."

Twitter's 2/9/21 earnings call includes comments by Jack Dorsey on the related Bluesky project.

  • "...we're excited to build to address some of the problems that is facing Section 230 is giving more people choice around what relevance algorithms they're using, ranking algorithms they're using. You can imagine a more market-driven and marketplace approach to algorithms. And that is something that not only we can host, but we can participate in."
  • "...we will have access to a much larger conversation, have access to much more content, and we'll be able to put many more ranking algorithms that suit different people's needs on top of it. And you can imagine an app store like VU, our ranking algorithms that give people optimal flexibility in terms of how they see it. And that will not only help our business, but drive more people into participating in social media in the first place. So this is something we believe in, not just from an open Internet standpoint, but also we believe it's important and it really helps our business thrive in a significantly new way, given how much bigger it could be.

2/12

Casey Newton's Twitter seeks the wisdom of crowds (2/11/21, Platformer) updates on two separate Twitter initiatives: 

  • Bluesky is the one most central to my prior post, breaking out filtering to support an open market in competing services that would let users choose one or more filtering services that suited their needs. This is still just conceptual, but the fact that Dorsey actively supports exploration of divesting control to an open market "app-store-like view of ranking algorithms that give people ultimate flexibility in terms of” what posts are put in front of them seems a positive sign.
  • Most importantly, it suggests some possibility this might be embraced voluntarily, as self-regulation.
  • Birdwatch relates to crowdsourcing feedback on fact-checking, doing the basics of what I also referred to in the prior post and covered more deeply in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News. Newton's reporting is that Birdwatch is still in an embryonic state.

Stephen Wolfram's Testifying at the Senate about A.I.‑Selected Content on the Internet (6/25/19) makes very similar suggestions about an open market in user-selected filters:

  • He explains how problematic it is to get AI to do this task, and how neither government nor monolithic oligopoly platforms should make filtering decisions -- and that user selection can be done at two levels "based on mixing technical ideas with market mechanisms. The basic principle of both suggestions is to give users a choice about who to trust, and to let the final results they see not necessarily be completely determined by the underlying ACS business."
  • One is to have the independent "final ranking providers" make the selections
  • The other is to have the independent "constraint providers" define "sets of constraints," such as for balance or leanings or types of content, on how the platforms make the selections
  • "There’s been debate about whether ACS businesses are operating as “platforms” that more or less blindly deliver content, or whether they’re operating as “publishers” who take responsibility for content they deliver. Part of this debate can be seen as being about what responsibility should be taken for an AI. But my suggestions sidestep this issue, and in different ways tease apart the “platform” and “publisher” roles."
  • He suggests "both Suggestions...attempt to leverage the exceptional engineering and commercial achievements of the [Automatic Content Selection] businesses, while diffusing current trust issues about content selection, providing greater freedom for users, and inserting new opportunities for market growth."
Wolfram's commentary seems to provide very strong support for the ideas in my post, along with Fukuyama's article and the report that I cited there.
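To make Wolfram's two-level division concrete, here is a minimal sketch with invented interfaces (nothing here is from his testimony): the platform supplies candidate posts, the user's chosen constraint providers narrow or rebalance the pool, and the user's chosen final ranking provider orders what remains.

```python
from typing import Callable

# A constraint provider narrows or rebalances the candidate pool.
ConstraintProvider = Callable[[list], list]
# A final ranking provider orders whatever survives the constraints.
RankingProvider = Callable[[list], list]

def balance_political_lean(posts: list) -> list:
    """Illustrative constraint: keep equal numbers of left- and right-leaning posts."""
    left = [p for p in posts if p.get("lean") == "left"]
    right = [p for p in posts if p.get("lean") == "right"]
    other = [p for p in posts if p.get("lean") not in ("left", "right")]
    n = min(len(left), len(right))
    return left[:n] + right[:n] + other

def rank_by_score(posts: list) -> list:
    """Illustrative final ranking provider: highest relevance score first."""
    return sorted(posts, key=lambda p: p.get("score", 0.0), reverse=True)

def assemble_feed(candidates: list,
                  constraints: list[ConstraintProvider],
                  ranker: RankingProvider) -> list:
    for constrain in constraints:  # the user's chosen constraint providers
        candidates = constrain(candidates)
    return ranker(candidates)      # the user's chosen final ranking provider

feed = assemble_feed(
    candidates=[
        {"id": "p1", "lean": "left", "score": 0.9},
        {"id": "p2", "lean": "right", "score": 0.4},
        {"id": "p3", "lean": "left", "score": 0.8},
        {"id": "p4", "lean": None, "score": 0.6},
    ],
    constraints=[balance_political_lean],
    ranker=rank_by_score,
)
print([p["id"] for p in feed])  # ['p1', 'p4', 'p2'] -- one left, one right, plus neutral
```

Either level could be unbundled on its own; the essential point, as with the other proposals above, is that the user, not the platform, chooses the providers.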

Lucas Matney's Twitter’s decentralized future (1/15/21, Techcrunch) raises the dark side: "The platform’s vision of a sweeping open standard could also be the far-right’s internet endgame:"
  • "Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would also provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe."
  • “I think the solution to the problem of algorithms isn’t getting rid of algorithms — because sorting posts chronologically is an algorithm — the solution is to make it an open pluggable system by which you can go in and try different algorithms and see which one suits you or use the one that your friends like,” quoting a member of the working group.
  • This is seen as having appeal as a standard beyond Twitter: "Right at this moment I think that there’s going to be a lot of incentive to adopt, and I don’t just mean by end users, I mean by platforms, because Twitter is not the only one having these really thorny moderation problems ...I think people understand that this is a critical moment,” quoting another group member.
I see Matney's concerns as valid and important to deal with, but ultimately manageable and necessary in a free society, as the prior post explains in the section on "Driving our own filters."

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

As noted on the prior post (near midnight 2/11):

Special update: This is “Version 0.1,” a discussion draft that was completed on 2/11/21, hours before Casey Newton’s report made me aware of a move by Twitter to research the direction proposed here. Pending analysis and revisions to reflect that, it seemed useful to get this version online now for discussion. Newton’s report links to Jack Dorsey’s initial sketchy announcement of this "@bluesky" effort about a year ago, and items he linked to at The Verge link to an interesting analysis on Techcrunch. My initial take is that this is a very positive move, while recognizing that the Techcrunch analysis rightly notes the risks that I had recognized below, and have thought to be important to deal with, but ultimately manageable and necessary in a free society. Dorsey's interest in this concept gives some reason to hope that this could occur as voluntary self-regulation, without need for the mandates I suggested likely to be necessary below. (late 2/11)

Thursday, February 11, 2021

Making Social Media Serve Society [Discussion Draft]

[From The Social Dilemma]
Social media oligarchs have seduced us -- giving us bicycles for the mind that they have spent years and billions engineering to "engage" our attention. The problem is that they insist on steering those bicycles for us, because they get rich selling advertising that they precisely target to us. Democracy and common sense require that we, the people, keep control of our marketplace of ideas. It is time to wrestle back the steering of our bicycles, so that we can guide our attention where we want. Here is why, and how. Hint: it will probably require regulation, but not in the ways currently being pursued.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

TL;DR:  See the bolded "Key ideas" section a bit down from here…

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Special update: This is “Version 0.1,” a discussion draft that was completed on 2/11/21, hours before Casey Newton’s report made me aware of a move by Twitter to research the direction proposed here. Pending analysis and revisions to reflect that, it seemed useful to get this version online now for discussion. Newton’s report links to Jack Dorsey’s initial sketchy announcement of this "@bluesky" effort about a year ago, and items he linked to at The Verge link to an interesting analysis on Techcrunch. My initial take is that this is a very positive move, while recognizing that the Techcrunch analysis rightly notes the risks that I had recognized below, and have thought to be important to deal with, but ultimately manageable and necessary in a free society. Dorsey's interest in this concept gives some reason to hope that this could occur as voluntary self-regulation, without need for the mandates I suggested likely to be necessary below. (late 2/11)
++2/13: There is a new piece by Richman and Fukuyama advocating this strategy in the WSJ.

Further updates are being posted in Growing Support for "Making Social Media Serve Society"

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

“In case of emergency, break glass” …then what?

Our marketplace of ideas is clearly on fire when two media oligarchs have such power that they can - and seemingly must - censor a President on their own. Facebook then punted to an independent (but also unelected) “Oversight Board” on whether to continue that censorship. This wicked problem of misinformation and polarization is well on the way to destroying our consensus view of reality, yet our current solutions have come to a reduction to the absurd.

This has been over a decade in building, yet the path to doing better remains widely misunderstood. The Capitol insurrection made the “break-glass” urgency clear, and the recent GameStop insurrection in our financial markets highlighted how wide the scope is. It all comes down to rethinking whether we, the people, manage our own unfolding digital views of the world, or whether oligarchies (or governments) do it for us.

Both our democracy and our financial markets depend on our marketplace of ideas. Reddit-inspired mobs empowered by the Robinhood trading platform triggered circuit breakers in trading. The financial “madness of crowds” led toward a long-established – but continuously evolving – regulatory regime for financial markets. For nearly a century it has been the mission of the SEC to keep the markets free of manipulation -- free to be volatile, but subject to basic ground rules -- and the occasional temporary imposition of circuit-breakers.

Regulatory mechanisms have properly been applied much more loosely to our marketplace of ideas. But with Big Tech businesses moving so fast and breaking so much, it is now all too clear that some form of nuanced control on them is needed. Both safety and freedom are at risk. We need to contain the damage from our broken system right now, even if that temporarily violates some principles that should be preserved and protected. But we dare not lose sight of the distinction between stopgap measures limited to this brief emergency period, and the path beyond that.

Compounding the problem, network effects have created platform oligarchies with extractive advertising and data profits so huge as to create strong perverse incentives that distract from visions of how these powerful tools can serve society. Current remediation efforts are focused on limiting harms, with little positive vision that would nurture the unfulfilled benefits that should be demanded.

There are many interrelated concerns of harm to privacy and competition -- as well as a broad underpinning of gaps in digital literacy, critical thinking, and civics education -- that all badly need attention. But unless we turn the tidal force driving this imminent danger to democracy, a rapidly growing inability to achieve consensus will make the other problems insoluble. Our malfunctioning bicycles for the mind are now making us stupid.

Here are some strategies for the long game:  how to guide technology and policy to protect both safety and freedom, while also seeking the benefits. What I propose would require ongoing oversight by a specialized Digital Regulatory Agency that can work with industry and academic experts, much like the SEC and the FCC, but with different expertise. My focus is not on regulation, but on a normative vision for uses of this technology that we should regulate toward, so that better business models and competition can drive progress toward consumer and social welfare.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Key ideas:

Paradise lost …and found -- saving democracy by serving users and society

The root causes of the crisis in our marketplace of ideas are that:

  1. The dominant social media platforms selectively control what we see, 
  2. and yet they are motivated not to present what we value seeing, but instead to “engage” audiences to click ads.  

      They use their control of our minds not to serve us, but to extract value from us.

The best path to reduce the harm and achieve the lost promise of digital media is to remove control over what users see in their feeds from the platforms. Instead, create an open market in filtering “middleware” services that give users more power to control what they see in their feeds.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

A number of proposals advocate this – most prominently and well-argued by Francis Fukuyama and colleagues in Foreign Affairs, How to Save Democracy From Technology (summarizing a Stanford report). The following draws on that and my own related vision for how that can not only protect, but systematically enhance wisdom and democracy. There are regulatory precedents for similar functional divestitures, and models for open, user-driven markets for filtering algorithms that can favor quality and value. Without such a systemic change, social media will be increasingly toxic to democracy.

Free our feeds!

Democracy depends on an open, diverse, and well-structured marketplace of ideas. Freedom of speech and of association are essential to our social processes for organically seeking a working consensus on ground-truth. But now, the “feeds” from Facebook, Twitter, YouTube, and a few others have become the dominant filters controlling which information, and which other users, billions of people see. Those oligarchies have nearly total power over what they selectively present to each of us, with almost no transparency or oversight – and systematically against our interests!

People cannot do without algorithms to filter what we drink from the “firehose” of social media, but we have misapplied them disastrously.

  • Network effects lead toward concentration and scale in the platforms interconnecting our global village -- every speaker rightly seeks to be heard by every listener willing to hear them (with narrow limitations). That drives toward universality of posting and access.
  • But filtering is personal – we each should be fed what we desire, and not what we do not. For democracy to survive, each of us needs supervisory control over how algorithms promote or demote information items and people to or from our attention.

Network effects are compounded by perverse incentives. A Facebook engineer observed in 2011 that “The best minds of my generation are thinking about how to make people click ads.” A decade of algorithm design has twisted “connecting people” to become a matter of targeting lucrative audiences for advertisers. Oligarchs profit obscenely from selling advertising in their “free” services -- but what a cost!

Those network effects and incentives are tidal forces, but filtering can be pulled out and shielded from that. Businesses and governments must jointly facilitate doing that -- but we, the people, must have autonomy over how that works for each of us.

It might seem that user control would worsen filter bubble-driven echo chambers. But the algorithms that divide and enrage us (so we click ads) could instead stimulate thinking, understanding, and enlightenment. Now they drive factions to lose touch with reality – feeding them lies, connecting those susceptible to lies to create “lookalike audiences” for advertisers -- and motivating users to disinform and sow division for profit or merely for attention.

A marketplace of ideas functions well only if users control for themselves whether they see “undesirable” items, as they individually define that. Instead of expecting platforms to be responsible for managing the unruly beast of what ideas are posted, we must empower markets for filters that manage how ideas are consumed. Demand, not censorship (or advertising), should control the flow of information to those who want it.

Social media are for people -- not advertisers or platform owners.

The promise of digital technology has been that each user can potentially configure their own customized filters and recommenders – or select services that curate for them in the ways that they choose. But now our feeds are customized for us, without our consent, in non-transparent ways. The platforms’ algorithms draw on many “signals” of suitability -- but are engineered not to serve what we desire, but to sell as much advertising as possible. We have no access to filters designed to serve our own needs.

Technology promised tools for augmenting human intellect and our collective ability to solve problems – but now the platforms are “de-augmenting” us, dividing us and making us stupid. The platforms’ obscene profits from advertising remove any incentive to do better (or to let others do it for us). Today those harms stem from reckless greed -- think how much worse it would be if the platforms pursued a political agenda (as some already fear they do here, and as China’s social media already do). Oligopolistic thought-robber barons have hooked us on parasites of our attention, “nuance destruction machines” that make us polarized and reactive. Can we afford to pay that price for “free” services?

An open market in filtering services is the way to serve users.

Fukuyama and colleagues suggest:

…taking away the platforms’ role as gatekeepers of content …inviting a new group of competitive ‘middleware’ companies to enable users to choose how information is presented to them. And it would likely be more effective than a quixotic effort to break these companies up.

They make the case that the remedy is to give individuals power to control the “middleware” that filters their view of the information flowing through the platform in the ways that they desire.

Of course, controlling what goes into one’s feed at a fine-grained level is beyond the skill or patience of most users. The solution is to create a diverse open market of interoperable filtering services that users can select from. Individual needs vary so widely that no single provider can serve that diversity well. Open, competitive markets excel at serving diverse needs – and untangling incentives. Breaking filtering “middleware” out as an independent service that interoperates with the platforms enables user choice to drive competition and innovation.

These middleware services can work “inside” the platforms, using APIs (Application Programming Interfaces) to combine filtering algorithms with human oversight in an unlimited variety of ways. They could be funded with a revenue share from the platforms. (That need not reduce platform revenue, since better service could yield more activity and more users, making the pie bigger.) They could use much the same “surveillance capitalism” data that the platforms now use – with controls to limit that to only the extent users are willing to permit, and subject to regulatory constraints on privacy and how the data is used.
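To make that architecture concrete, here is a minimal sketch in Python of what such an interface might look like. All names and signatures are illustrative assumptions, not an existing platform API: the platform exposes candidate items and their signals, each user-chosen middleware service returns its own rankings, and user consent settings limit which signals a service may see.

    # Hypothetical sketch of a filtering-middleware interface.
    # Names and signatures are assumptions for illustration only.
    from dataclasses import dataclass, field
    from typing import Protocol

    @dataclass
    class Item:
        item_id: str
        author_id: str
        text: str
        signals: dict = field(default_factory=dict)   # e.g. {"likes": 12, "shares": 3}

    @dataclass
    class Consent:
        allow_engagement_history: bool = False   # clickstream, dwell time
        allow_social_graph: bool = True          # who the user follows

    class FilteringService(Protocol):
        """Any middleware provider implements this; users choose which ones to run."""
        def rank(self, items: list[Item], consent: Consent) -> dict[str, float]:
            """Return a score per item_id, using only the signals the user permits."""
            ...

    class LikesOverSharesFilter:
        """Toy example: favor items liked more than they are shared (no virality chasing)."""
        def rank(self, items: list[Item], consent: Consent) -> dict[str, float]:
            return {i.item_id: i.signals.get("likes", 0) - i.signals.get("shares", 0)
                    for i in items}

A platform-side API would then pass each user’s candidate items through whichever registered services that user has activated, rather than through the platform’s own engagement-optimized ranker.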

Paradise lost sight of -- filtering for social truth

Imagine how different our online world would be with open and innovative filtering services. Humans have evolved, as individuals and as societies, to test for and establish truth in a social context, because we cannot possibly have direct knowledge of everything that matters to us (“epistemic dependence”). Renee DiResta nicely explains in “Mediating Consent” how these social processes have been both challenged and enhanced by advances in technology from Gutenberg to “social media.” Social media can augment similar processes in our digital social context to determine what content to show us, and what people (or groups) to suggest we connect with.

What do we want done with that control? We do not want to rank on “engagement” (how much time we spend glued to our screens) or on whose ads we will be disposed to click -- but what criteria should apply? Surely, we can do better than just counting “likes” from everyone equally, regardless of who they are and whether they read and considered an item, or just mindlessly reacted to a clickbait headline.

Consider how the nuanced and layered process of mediating consent that society has evolved over millennia has been lost in our digital feeds. Do people and institutions whose reputations we trust agree on a claim? Should we trust them because others we trust do? Can we apply this within small communities -- and more broadly? That is how science works – as do political consensus and scholarly citation analysis. That is how we decide who and what to listen to and to believe, to avoid being lemmings.

Technology has already succeeded at extending that kind of process: Google’s original PageRank search algorithm weighs billions of human evaluations at Internet speed and scale. Social media feeds can empower users to mediate consent in the ways that they, and their communities, favor. They can draw on the plethora of information quality “signals” that the platforms have (clicks, likes, shares, comments, etc.) and combine that with rudimentary understanding of content. They can factor in the reputations of those providing the signals, as humans have always done to decide what to pay attention to and which people and groups to connect with.

To be effective and scalable, reputation and rating systems must go beyond simplistic popularity metrics (mob rule) or empaneled raters (expert rule). To socially mediate consensus in an enlightened democratic society, reputation must be organically emergent from that society. Algorithms can draw on both explicit and implicit signals of human judgment to rate the raters and weight the ratings (as I have detailed elsewhere) -- in transparent but privacy-protective ways. Better and more transparent tools could help us consider the reputations behind the postings -- to make us smarter and connect us more constructively. We could factor in multiple levels of reputation, weighting the judgments of those whom the people we respect judge to be worth hearing (not just the most “liked”). We could favor content from people and publishers we view as reputable, and factor in human curation as desired.
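As a rough illustration of “rate the raters and weight the ratings,” here is a minimal two-level sketch in Python. It is only a toy under assumed inputs, not the fuller framework detailed elsewhere: an item’s score weights each rating by its rater’s reputation, and a rater’s reputation is in turn earned from the judgments of already-reputable users.

    # Toy sketch of reputation-weighted rating; inputs and scales are assumptions.

    def item_score(ratings, reputation, default_rep=0.1):
        """ratings: {rater_id: +1 or -1}; reputation: {rater_id: 0..1}."""
        weights = {r: reputation.get(r, default_rep) for r in ratings}
        total = sum(weights.values())
        if total == 0:
            return 0.0
        return sum(vote * weights[r] for r, vote in ratings.items()) / total

    def update_reputation(reputation, endorsements, default_rep=0.1):
        """endorsements: {rater_id: [reputations of users who found that rater's
        ratings useful]}. Reputation is earned from the already reputable,
        iteratively, in the spirit of PageRank."""
        new_rep = {}
        for rater, endorser_reps in endorsements.items():
            earned = sum(endorser_reps) / (len(endorser_reps) + 1)   # damped average
            new_rep[rater] = 0.5 * reputation.get(rater, default_rep) + 0.5 * earned
        return new_rep

Iterating update_reputation over successive rounds of activity lets reputation emerge organically from the community, rather than being assigned by a platform or an empaneled authority.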

This can help us understand the world as we choose to view it, and to understand and accept that other points of view may be reasonable. Fact-checking and warning labels often just increase polarization, but if someone we trust takes a contrary view we might think twice. Filters could seek those “surprising validators” and sources of serendipity that offer new angles, without burying us in noise.

To make reputation-based filtering more effective, the platforms should better manage user identity. Platforms could still allow anonymous users with arbitrary aliases, as is desirable to protect free speech, but distinguish among multiple levels of identity verification (including distinguishing humans from bots). Weighting of reputation could reflect both how well validated a user identity is and how much history there is behind that reputation. This would help filter out bad actors, idiots, and bots in accord with standards that we choose (not those imposed on us).
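A small sketch of how such identity weighting might combine with the reputation scores above -- the verification tiers and factors are purely illustrative assumptions:

    # Hypothetical identity-assurance weighting; tiers and numbers are illustrative.
    VERIFICATION_WEIGHT = {
        "unverified_alias": 0.3,    # anonymous aliases still count, just less
        "verified_human": 0.7,      # confirmed human, still pseudonymous
        "verified_identity": 1.0,   # strongly validated identity
        "suspected_bot": 0.0,
    }

    def effective_reputation(base_reputation, verification_level, account_age_days):
        # New accounts carry less weight until they build history.
        history_factor = 0.5 + 0.5 * min(account_age_days / 365, 1.0)
        return base_reputation * VERIFICATION_WEIGHT[verification_level] * history_factor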

Now the advertiser-funded oligopoly platforms perversely apply similar kinds of signals with great finesse to serve their own ends. As that Facebook engineer lamented, the best minds of a generation are busy getting people to click ads. They have engineered Facebook, Twitter, and the rest to work as digital Skinner boxes, in which we are the lab rats fed stimuli to run the clickbait treadmill that earns their profits. We cannot expect or entrust them to redirect that treadmill to serve our ends -- even with increased regulation and transparency. If revenue primarily depends on selling ads, efforts to counter that incentive and favor quality over engagement swim against a powerful tide. That need not be malice, but human nature: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Driving our own filters

Users should be able to combine filtering services to be selective in multiple ways, favoring some attributes and disfavoring others. Algorithms can draw on human judgments to filter undesirable items out of our feeds, by down-ranking them, and to recommend desirable items, by up-ranking them. Given the firehose of content that humans cannot keep up with, filters rarely need to impose an absolute block (censorship) or an absolute must-see. Instead, a well-architected system of interoperable filters can be composed by each user with just a few simple selections -- so the suggestions of their multiple filtering services are weighted together to present a composite feed of the most desired items.

That way we can create our own “walled garden,” yet make the walls as permeable as we like, based on the set of ranking services we activate at any given time. Those may include specialized screening services that downrank items likely to be undesirable, and specialized recommenders that uprank items corresponding to our tastes in information, entertainment, socializing, and whatever special interests we have. Such services could come from new kinds of providers, or from publishers, community organizations, and even friends or other people we follow. Services much like app stores might be needed to help users easily select and control their middleware services at a basic level, or with more advanced personalization. We have open markets in “adtech” – why not “feedtech”?
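A minimal sketch of that composition step, in Python with illustrative names: each selected service contributes scores, the user’s “slider” settings weight them, and the composite ranking is what reaches the feed. Nothing is absolutely blocked; low-ranked items simply fall below the fold.

    # Sketch of composing one feed from several user-chosen filtering services.
    # The service names and slider settings are hypothetical examples.

    def composite_feed(item_ids, services, sliders, limit=50):
        """services: {name: rank_fn returning {item_id: score}};
        sliders: {name: 0..1 weight set by the user}."""
        totals = {item_id: 0.0 for item_id in item_ids}
        for name, rank_fn in services.items():
            weight = sliders.get(name, 0.0)
            if weight == 0.0:
                continue                       # service turned all the way down
            for item_id, score in rank_fn(item_ids).items():
                totals[item_id] += weight * score
        return sorted(totals, key=totals.get, reverse=True)[:limit]

    # A saved "work" preset might set sliders = {"news_quality": 0.9, "memes": 0.1},
    # and a "relax" preset might reverse them -- the shifting of gears described below.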

Some have argued that filtering should be prohibited from using personal data—that would limit abusive targeting but would also severely restrict the power of filters to positively serve each user. Better to 1) motivate filtering services to do good, and 2) develop privacy-protective methods to apply whatever signals can be useful in ways that prevent undue risk. To the extent that user postings, comments, likes, and shares are public (or shared among connections), it is only more private signals like clickstreams and dwell time that would need protection.

Filtering services might be offered by familiar publishers, community groups, friends, and other influencers we trust. Established publishers could extend their brands to reenergize their profitability (now impaired by platform control): New York Times, Wall Street Journal, or local newspapers; CNN, Fox, or PBS; Atlantic or Cosmopolitan, Wired or People; sports leagues, ACLU, NRA, or church groups. Publishers and review services like Consumer Reports or Wirecutter can offer recommendations. Or if lazy, we could select among omnibus filters for a single default, much as we select a search engine.

Users should be able to easily “shift gears,” sliding filters up or down to accommodate changes in their flow of tasks, moods, and interests. Right now, do you want to see items that stimulate lean-forward thinking or indulge in lean-back relaxation? – to be more or less open to items that stimulate fresh thinking? Just turn some filtering services up and others down, using sliders. Save desired combinations. Swap a work setting for a relax setting. Filter suites could be shared and modified like music playlists and learn with simple feedback like Pandora. Or, just choose one trusted master service to make all those decisions.

Instead of censoring Trump and his ilk and driving them to platforms like Parler to fester in isolation (and possible secrecy), user-filtered services could relegate them to the fringes of our open marketplace of ideas, as society has always tended to do. That could downrank their trash out of the view of those who do not opt in to see it, while keeping it accessible to those who do (and facilitating monitoring of whatever mischief they brew).

Filter-driven downranking could also drive mechanisms to introduce friction, slow the viral spread of abusive items, and precisely target fact-checks and warning labels for maximum effect. Friction could include such measures as adding delays on promotion of questionable items and downranking likes or shares made too quickly for the user to have read more than the headline. Society has always done best when the marketplace of ideas is open, and oversight comes from reason and community influence, not repression. It is social media’s recommendations of harmful content and groups that are so pernicious, far more than any unseen presence of such content and groups.
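Two of those friction measures could be as simple as the following sketch, with thresholds that are purely illustrative assumptions:

    # Illustrative friction rules, not a specification.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class FeedItem:                      # hypothetical minimal item record
        posted_at: datetime
        flagged_questionable: bool = False
        rank_multiplier: float = 1.0

    def apply_friction(item: FeedItem, now: datetime) -> FeedItem:
        # Delay amplification of questionable items while checks catch up.
        if item.flagged_questionable and (now - item.posted_at) < timedelta(hours=1):
            item.rank_multiplier *= 0.2
        return item

    def reaction_weight(seconds_since_exposure: float) -> float:
        # A like or share seconds after exposure likely reflects only the headline.
        return 1.0 if seconds_since_exposure >= 15 else 0.2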

Filtering services can emerge to shine sunlight on blindness to reality and good sense. They can entice reasonable people to cast a wider net and think critically. Simple fact-checking often fails, because when falsehoods are denied from outside our echo-chambers, confirmation bias increases polarization. But when someone trusted within our group challenges a belief, we stop and think. Algorithms can identify and alert us to these “surprising validators” of opposing views. A notable proof point of such surprising validators is the Redirect Method experiments in dissuading potential ISIS sympathizers by presenting critical videos made by former members. Clever filtering can also augment serendipity, cross-fertilizing across information silos and surfacing fringe ideas that deserve wider consideration (Galileo: “…and yet it moves”). Clever design can enlist people to help with that, much like Wikipedia, and even gamify it.

User-driven markets for filters can also better serve local community needs within the global village. Network effects drive global platforms, but filtering can be local and adaptive to community standards and national laws and norms. US, EU, and Chinese filters can be different, but open societies would make it easy to swap alternative filters in when desired. The Wall Street Journal’s “Blue Feed, Red Feed” demonstrated strikingly how you can walk in another’s shoes. Any community of thought – religion, politics, culture, profession, hobby – could enable its members to filter in accord with their community standards, at varying levels of selectivity, without imposing those standards on those who seek broader horizons. But now, social media users are subjected to “platform law,” which from a human rights perspective is “both overbroad and underinclusive.”

Filters and circuit breakers -- parallels in financial marketplaces

The parallels between our marketplace of ideas and our financial markets run deep. There is much to learn from and adapt, both at a technical and a regulatory level. Both kinds of markets require distilling the wisdom of the crowd -- and limiting the madness. This January made it apparent that the marketplaces of ideas and of securities feed into one another. The sensitivity of financial markets to information and volatility has driven development of sophisticated control regimes designed to keep the markets free and fair while limiting harmful instabilities. Those regimes involve SEC regulations affecting market participants, exchanges/dealers/brokers, and clearing houses -- and they continue to evolve.

Just as in financial markets, it is now apparent that social media markets of ideas need circuit breakers to limit instabilities by reducing extremes of velocity, without permanently constraining media postings (unless clearly illegal or harmful). That suggests social media restrictions on postings can be rare, just as individual securities trades can be at foolish prices without great harm. Securities trading circuit breakers are applied when the velocity of trades leads to such large and rapid market swings that decisions become reactive and likely to lose touch with reality. Those market pauses give participants time to consider available information and regain an orderly flow in whatever direction then seems sensible to the participants. There is a similar need for friction and pauses in social media virality.

User-controlled filtering that serves each user should be the primary control on what we see, but the financial market analog supports the idea that circuit breakers are sometimes needed in social media. Filters controlled by individual users will not, themselves, limit flows to users who have different filters. To control excesses of viral velocity, access and sharing must be throttled across a critical mass of all filters that are in use.

The specific variables relevant to the guardrails needed for our marketplace of ideas are different from those in financial markets, but analogous. Broad throttling can be done by coordinating the platform posting and access functions using network-wide traffic data, plus consolidated feedback on quality metrics from the filters combined with velocity data from the platforms. A standard interface protocol could enable the filters to report problematic items. Such reports could be sent back to the platforms that are sourcing them, or to a separate coordination service, so it can be determined when such reports reach a threshold level that requires a circuit breaker to introduce friction into the user interfaces and delays in sharing. Signaling protocols could support sharing among the platforms and all the filtering services to coordinate warnings that downranking or other controls might be desired. (To preserve individual user freedom, users might be free to opt-out of having their filters adhere to some or all such warnings.) Think of this as a decentralized cognitive immune system that integrates signals emerging from many kinds of distributed sensors, in much the same way that our bodies coordinate an emergently learned response to pathogens.
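Under the kind of protocol just described, a coordination service would aggregate problem reports from the independent filters with velocity data from the platforms, and trip a breaker when both cross thresholds. A minimal sketch, with thresholds and message formats as purely illustrative assumptions:

    # Sketch of a circuit-breaker coordinator; thresholds and fields are assumed.
    REPORT_RATE_THRESHOLD = 0.02     # problem reports per impression
    VELOCITY_THRESHOLD = 1000        # shares per minute considered runaway virality

    def should_trip_breaker(filter_reports, impressions, shares_per_minute):
        """filter_reports: problem reports from independent filtering services;
        impressions: recent views of the item; shares_per_minute: from the platform."""
        report_rate = filter_reports / max(impressions, 1)
        return report_rate > REPORT_RATE_THRESHOLD and shares_per_minute > VELOCITY_THRESHOLD

    def breaker_advisory(item_id):
        # Advisory, not a block: filters may add friction or downrank in response,
        # and users could opt out of honoring some or all such advisories.
        return {"item_id": item_id, "action": "add_friction", "suggested_delay_minutes": 30}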

Much as financial market circuit breakers are invoked by exchanges or clearinghouses in accord with oversight by the SEC, social media circuit breakers might be invoked by the network subsystems in accord with oversight by a Digital Regulatory Agency based on information flows across this new digital information market ecology.

Harmful content:  controls and liability

User control of filters enables society to again rely primarily on the best kind of censorship: self-censorship -- and helps cut through much of the confusion that surrounds the current controversy over whether Section 230 of the Communications Decency Act of 1996 should be repealed or modified to remove the safe harbor that limits the liability of the platforms. Many argue that Section 230 should not be repealed, but modified to limit amplification (including writers from AOL, Facebook policy, and Facebook data science). Harold Feld of Public Knowledge argues in The Case for the Digital Platform Act that “elimination of Section 230 would do little to get at the kinds of harmful speech increasingly targeted by advocates” and is “irrelevant” to the issues of harmful speech on the platforms. He provides helpful background on the issues and suggests a variety of other routes and specific strategies for limiting amplification of bad content and promoting good content in ways sensitive to the nature of the medium.

Regardless of the legal mechanism, Feld’s summary of a Supreme Court ruling on an earlier law makes the central point that matters here: “the general rule for handling unwanted content is to require people who wish to avoid unwanted content to do so, rather than to silence speakers.” That puts the responsibility for limiting distribution of harmful content (other than clearly illegal content) squarely on users – or on the filtering services that should be acting as more or less faithful agents for those users.

Nuanced regulation could depend on the form of moderation/amplification, as well as its transparency, degree of user “buy-in,” and scale of influence. So long as the filters work as faithful agents (fiduciaries) for each user, in accord with that user’s stated preferences, then they should not be liable for their operation. Regulators could facilitate and monitor adherence to guidelines on how to do that responsibly. Negligence in triggering and applying friction and downranking to slow the viral spread of borderline content could be a criterion for liability or regulatory penalties. Such nuanced guardrails would limit harm while keeping our marketplace of ideas open to what we each choose to have filtered for us.

If independent middleware selected by users does this “moderation,” the platform remains effectively blind and neutral (and within the Section 230 safe harbor, to the extent that may be relevant). That narrowing of safe harbor (or other regulatory burdens) might help motivate the platforms to divest themselves of filtering -- or to at least yield control to the users. If the filtering middleware is spun out, the responsibility then shifts from the platforms to the filtering middleware services. Larger middleware services could dedicate significant resources to doing moderation and limiting harmful amplification well, while smaller ones would at worst be amplifying to few users. Users who were not happy with how moderation and amplification was being handled for them could switch to other service providers. But if the dominant platforms retain the filters and fail to yield transparency and control to their essentially captive users, regulation might need to take a heavy hand. That would threaten free expression in our marketplace of ideas.

Realigning business incentives – thought-robber barons and attention capitalism

“The Internet’s Original Sin” is that advertising-based business models drive filtering/ranking/alerting algorithms to feed us junk food for the mind, even when toxic. The oligopolies that hold our filters hostage to advertising are loath to risk any change to that, and uninterested in experimenting with emerging alternative business models. That is a powerful tide to swim against.

Regulators hesitate to meddle in business models, but even partial steps to open just this layer of filtering middleware could do much to decouple the filtering of our feeds from the sale of advertising. A competitive open market in filtering services would be driven by the demand of individual users, making them more “the customer” and less “the product.” Now the pull of advertising demand funds an industry of content farms that create clickbait for disinformation -- or just for the sole purpose of generating ad revenue.

Shoshana Zuboff’s tour de force diagnosis of the ills of surveillance capitalism has rightly raised awareness of the abuses we now face, but I suggest a rather different prescription. The more deadly problem is attention capitalism.  Our attention and thought are far more valuable to us than our data, and the harms of misdirection of attention that robs us of reasoned thought are far more insidious to us as individuals and to our society than other harms from extraction of our data.

It is improper use of the data that does the harm. As outlined above, the extraction of our attention stems from the combination of platform control and perverse incentives. The cure is to regain control of our feeds, and to decouple the perverse incentives.

My work on innovative business models suggests how an even more transformative shift from advertising to user-based revenue could be feasible. Those methods could allow for user funding of social media in ways that are affordable for all -- and that would align incentives to serve users as the customer, not the product.

As a half step toward those broader business model reforms, advertising could be more tolerable and less perverse in its incentives if users could negotiate its level and nature, and how that offsets the costs of service. That could be done with a form of “reverse metering” that credits users for their attention and data when viewing ads. Innovators are showing that even users who now block ads might be open to non-intrusive ads that deliver relevance or entertainment value, and willing to provide their personal data to facilitate that.

But in any case, advertising should not be permitted to dictate how our social media content is filtered. Given the hurdles of platform and/or regulator buy-in, divesting control of our feeds from the platforms seems to be the best leverage point for driving real transformation. I have advocated user control of filters for many years, but I credit the Fukuyama article for highlighting its surgical precision in addressing our current crisis.

Making this happen

Given how far down the wrong path we have gone, reform will not be easy, and will likely require complex regulation, but there is no other effective solution.  To recap the options currently being pursued:

  • Current privacy and antitrust initiatives are aimed at harms to privacy and competition, but even if broken up or regulated, monolithic, ad-driven social media services have limited ability and motivation to protect our marketplace of ideas.
  • Simple fact-checking and warning labels have very limited effect.
  • More sophisticated psychology-based interventions have promise, but who combines the ability and motivation to apply them effectively, even if mandated to do so?
  • Banning Trump was a draconian measure that the dominant platforms long shied away from, understanding that censoring who can post is antithetical to a free society. It clearly lacks legitimacy and due protection for human rights when decided by private companies or even by independent review boards.

As noted above (and outlined more fully in a prior post), a promising regulatory framework is emerging (to little public attention). This goes beyond ad hoc remedies to specific harms, and provides for ongoing oversight by a specialized Digital Regulatory Agency that would work with industry and academic experts, much like the FCC and the SEC. Hopefully, the Biden administration will have the wisdom and will to undertake that (the UK is already proceeding).

But those proposals have yet to focus on the freeing of our feeds. That is where the power to save democracy lies, but we can expect the platforms to resist losing control of this profit-enhancing component of their systems. Of course, regulators could just task the monolithic platforms with offering users direct control without any functional divestiture -- that seems possible, but problematic, for the reasons given above.

The other deep remedy would be to end the Faustian bargain of the ad model very broadly, but that will take considerable regulatory resolve – or a groundswell of public revulsion. Neither seems imminent. One way to finesse that is the “ratchet” model that I have proposed, inspired by how regulations that ratchet vehicle manufacturers toward increasingly challenging fuel economy standards have driven the market to meet those challenges incrementally, in ways of their own devising. The idea is simple: mandate or apply taxes to shift social media revenue to small but increasing percentages of user-based revenue. But the focus here is on this more narrowly targeted and clearly feasible divestiture of filtering.

While regulators seem reluctant to meddle with business models, there is precedent for modularizing interoperating elements of a complex monolithic business through a functional breakup. The Bell System breakup separated services from equipment suppliers, and local service from long-distance. That was part of a series of regulatory actions that required modular jacks to allow competitive terminal equipment (phones, faxes, modems, etc.), number portability and many other liberating reforms, all far too complex for legislation or the judiciary alone, but solvable by the FCC working with industry and independent experts.

Internet e-mail also serves as a relevant design model – it superseded incompatible proprietary mail networks with an interoperable design, enabling users of any “user agent” (like Outlook or Gmail) to send through any combination of cloud “transfer agents” to a recipient with any other user agent. In the extreme, such models for liberation could lead to “Protocols, Not Platforms.” One move in that direction would allow multiple competing platforms to interoperate, so that posting and access work across platforms, each acting as a user messaging agent and a distributed data store. Also, as noted above, the model of financial markets seems very relevant, offering proven guiding principles.

But in any case, even short of an open filtering middleware market, it is essential to democracy to provide more control to each user over what information the dominant platforms feed us. Even if the filters stay no better than they are now and users just pick them randomly, they will become more diverse – that, alone, would reduce dangerous levels of virality and ad-driven sensationalism. The incentive of engagement that drives recommendations of pernicious content, people, and groups would be eliminated or at least weakened.

The economics of network effects favors this functional separation in a way that regulators may find compelling.

  •  Network effects intrinsically favor universal interconnections for posting and access, driving platform dominance for those basic functions. That borders on being a universal utility service (whether monolithic or distributed and interoperating).
  • But filtering of how posts and users are matched to other users is largely immune to network effects. A filter can please a single individual, regardless of whether others use it. Users will select middleware services that seem to act in their interest – motivating businesses to demonstrate value over ubiquity.

Key steps toward returning control to users can build incremental impact:

1.   Policies should be reframed to treat filtering that targets and amplifies reach to users as editorial authorship/curation/moderation of a feed, and thus subject to regulation (and liability). That might, itself, motivate the platforms to divest that function to avoid that risk to their core businesses. It would also motivate them to help design effective APIs to support those independent filtering services. They could retain the ability to provide the raw firehose, filtered only in non-“editorial” ways -- by simple categories such as named friends or groups, geography, and subject, in reverse chronological order, with no ranking or amplification (using sampling to keep the flow at a desired level; see the sketch after this list).

2.   A spinout could break out the platforms’ existing filtering services and staff into one or several new companies with clear functional boundaries and distinct subsets of the user base. The new units might begin with the current code base, but then evolve independently to serve different communities of users -- with requirements for data portability to facilitate switching.

3.   The spinout should be guided (and mandated as necessary) by well-crafted regulation combined with ongoing adaptation. Regulators should define and enforce basic guardrails on the APIs, related practices, and circuit breakers on both sides -- and continually monitor and evolve them.

4.   Such structural changes alone would at least partially decouple filtering from the perverse effects of the ad model. However, as noted above, regulation could address that more aggressively by mandates (or taxes) that encourage a shift to user-based revenue. A survey of some notable proposals for a Digital Regulatory Agency, as well as suggestions of what we should regulate for, not just against, is in Regulating our Platforms -- A Deeper Vision.

5.   The structural changes creating an open market would also motivate the new filtering middleware services to devise user interfaces and new algorithms to better provide value to users. The framework for reputation-based algorithms briefly outlined above is more fully explained in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm.

6.   A new digital agency can also address the many other desirable objectives of Big Tech platform regulation, including consumer privacy and data usage rights, standards and processes to remove clearly impermissible content, and anticompetitive behaviors, as well as other Big Tech oligopoly issues beyond social media.
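As promised in step 1, here is a minimal sketch of the kind of non-“editorial” baseline feed a platform could still provide itself: explicit category selection, strict reverse-chronological order, and random sampling to keep the volume manageable, with no ranking or amplification. The field names are illustrative assumptions.

    # Sketch of a non-editorial baseline feed: categories, recency, and sampling only.
    import random

    def baseline_feed(items, followed_authors, topics=None, max_items=200):
        """items: list of dicts with "author", "topic", and "posted_at" fields."""
        selected = [i for i in items
                    if i["author"] in followed_authors
                    and (topics is None or i["topic"] in topics)]
        if len(selected) > max_items:
            selected = random.sample(selected, max_items)   # sampling, not ranking
        return sorted(selected, key=lambda i: i["posted_at"], reverse=True)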

Whatever route we take in this direction, democracy requires that our marketplace of ideas be controlled by “we the people,” not platforms or advertisers. We must take back control as soon as possible. Current efforts at antitrust breakups and privacy regulation that leave filtering in the hands of others with their own agendas will perpetuate this mortal threat to democracy. Return of filtering power to citizens can revitalize our marketplace of ideas. It can augment our social processes for “mediating consent” and pursuing happiness – and provide a healthy base for gradual evolution toward digital democracy. But so long as others subvert control of our bicycles for the mind to their own ends, we have no time to lose.

---

This is a working draft for discussion.

+++Updates are here: Growing Support for "Making Social Media Serve Society."

Feedback on these ideas -- support, concerns, disagreements, and needs for clarification -- is invited. Please use the comment section below or email to interwingled [at] teleshuttle [dot] com.

---

Personal note: The roots of these ideas

These ideas have been brewing throughout my career (bio), with bursts of activity very early on, then around 2002, and increasingly in the past decade. They are part of a rich network that intertwingles with my work on FairPay and several of my patented inventions. Some background on these roots may be helpful.

I was first enthused by the potential of what we now call social media around 1970, when I had seen early hypertext systems (precursors of the Web) by Ted Nelson and Doug Engelbart, and then studied systems for collaborative decision support by Murray Turoff and others, rolling into a self-study course on collaborative media systems in graduate school.

My first proposals for an open market in media filtering were inspired by the financial industry parallels. A robust open market in filters for news and for market data analytics was emerging when I worked for Standard & Poor's and Dow Jones around 1990. Filters and analytics would monitor raw news feeds and market data (price ticker) feeds, select and analyze that raw information using algorithms and parameters chosen by the user, and work within any of a variety of trading platforms.

I drew on all of that when designing a system for open innovation and collaborative development of early-stage ideas around 2002. That design featured an open market for reputation-based ranking algorithms very much like those proposed here. Exposure to Google PageRank, which distilled human judgment and reputation to rank Web search results, inspired me to broaden that approach to distill the wisdom of the crowd as reflected in social media interactions, based on a sophisticated and nuanced reputation system.

By 2012 it was becoming apparent that the Internet was seriously disrupting the marketplace of ideas, and Cass Sunstein’s observations about surprising validators inspired me to adapt my designs to social media. I became active in groups that were addressing those concerns and more fully recast my earlier design to focus on social media. My other work on innovative business models for digital services also gave me a unique perspective on alternatives to the perverse incentives of social media.

The recent Fukuyama article was gratifying validation on the need for an open, competitive market for feed filtering services driven by users, and inspired me to refocus on that as the most direct point of leverage for structural remediation, as outlined here.

I am very grateful to the many researchers and activists in this field I have had the privilege of interacting with and who have provided invaluable stimulation, feedback, suggestions, and support, especially over the past several years as this has become a widely recognized problem.