Tuesday, February 14, 2023

Back From Absurdity: The Essential Reframing Needed to Manage Online Speech - Now on Tech Policy Press

My latest article in Tech Policy Press distills the core ideas needed to reform how we manage online speech: From Freedom of Speech and Reach to Freedom of Expression and Impression.

Key ideas: Problems in managing social media news feeds, including disinformation, extremism, and hate speech, are reducing to the absurd because of a misguided focus on censorious removal. A new framing is needed to reverse these problems, from the “Twitter Files” and Musk’s confusion about “freedom of speech, but not freedom of reach,” to the absurd pairing of cases on online speech headed to the Supreme Court. 

The solution is to shift focus to manage the other end of the proverbial “megaphone” – not the speaker’s end, but the listener’s. Dominant social media platforms are co-opting the “freedom of impression” that we listeners did not realize we had. Proposed remedies based on “delegation” and “middleware” promise to address this, but this new framing in terms of “freedom of impression” is necessary to clarify for all concerned why that is needed from human rights, legal, governance, economic, cultural, and technological perspectives -- and how it can work.

This new framing offers a way to apply both/and thinking to cut through many current dilemmas, and to set a direction for the future of freedom and democracy. It also illuminates a new path toward competition in this market.

Operationally, it suggests how to manage networked speech by refactoring three control points to balance full freedom of expression with full freedom of impression:

  • Censorship as posts and responses enter the network, entirely banning users or removing individual posts before they reach anyone at all -- a threat to freedom of expression.
  • Selection of what is fed or recommended out to each user, individually -- a threat or exercise of freedom of impression, depending on who controls it.
  • Friction and other measures to enhance the deliberative quality of human social mediation activity -- with little threat to freedom of thought.
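
To make this concrete, here is a minimal sketch (in Python, with purely hypothetical names and labels, not any real system) of how the three control points might compose in a feed pipeline -- sparing removal at entry, user-controlled selection, and friction at presentation:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Post:
    author: str
    text: str
    labels: Set[str] = field(default_factory=set)

def admit(post: Post) -> bool:
    """Control point 1: censorship at entry -- used sparingly, e.g. for clearly illegal content."""
    return "illegal" not in post.labels

def select_for_user(posts: List[Post], ranker: Callable[[Post], float]) -> List[Post]:
    """Control point 2: selection -- ranked by an agent the *user* chooses, not the platform."""
    return sorted(posts, key=ranker, reverse=True)

def add_friction(post: Post) -> Post:
    """Control point 3: friction -- context and speed bumps instead of removal."""
    if "disputed" in post.labels:
        post.text = "[context: this claim is disputed]\n" + post.text
    return post

def build_feed(posts: List[Post], ranker: Callable[[Post], float]) -> List[Post]:
    admitted = [p for p in posts if admit(p)]      # minimal censorship
    selected = select_for_user(admitted, ranker)   # freedom of impression
    return [add_friction(p) for p in selected]     # deliberative quality
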
There is no quick fix to the problems of social media, but we can quickly change course to begin to undo the disaster of the past decade or two -- and to avoid ill-conceived remedies that will fail or make things worse. As shown here, doing that means: 
  • relying less on censorship (bans and removals that have questionable legitimacy, even when guided by the most well-intended proportionality), and
  • giving users agency over the selection of what they see (to legitimately balance each speaker's freedom of expression with each listener's freedom of impression), and
  • re-creating a truly open and generative social mediation ecosystem that is like what we have been evolving over the past centuries of analog society, but now augmented by digital media tech (instead of de-augmented and disintermediated by it).

...And dig deeper into the rationale and concepts behind this distillation and update of the Delegation series (co-authored with Chris Riley).

---
Podcast: For those who prefer listening to reading, most of these ideas are covered, along with some additional commentary (but a few months less currency), in the Lincoln Network's podcast, The Dynamist, hosted by Evan Swarztrauber.

Sunday, February 12, 2023

Reisman on The Dynamist Podcast - Reforming Social Media with Delegation and Freedom of Impression

Lincoln Network's The Dynamist podcast (Episode 5, 2/7/23) features an interview with me by Evan Swarztrauber that covers many of the key ideas in my Delegation series in Tech Policy Press with Chris Riley, plus more that will be in an article to be published soon. [Update 2/14: Now online at Tech Policy Press: From Freedom of Speech and Reach to Freedom of Expression and Impression.]

It steps back to consider the inherent absurdity of current approaches to content moderation and the dilemmas of freedom of expression vs. censorship -- as exemplified in the cases now headed to the Supreme Court: some that would require platforms to carry "lawful but awful" speech, and others that would make them liable for carrying it.

I suggest the way to cut through those dilemmas is by giving users agency to restore their freedom of impression, which the platforms have co-opted -- and to delegate that agency to services that can shape their feeds and recommendations in accord with criteria and values that they choose. (This was recorded on 10/13/22.)

Monday, January 30, 2023

Thought as a Cyclic Social Process: Thought => Expression => Social Mediation => Impression => ...

[Update 2/14/23: The article this previews is now published on Tech Policy Press.]

This is a preview of ideas from an article in the works, introducing some new diagrams seeking to distill and simplify key ideas addressed in the Delegation Series (with Chris Riley).


Freedom of thought, expression, and impression are not just isolated, individual matters, but an ongoing, cyclic, social process. Thought leads to expression, which then flows through a network of others – a social mediation ecosystem. That feeds impression, in cycles that reflexively lead to further thought.

Cutting through the dilemmas of managing networked speech will depend on balancing full freedom of expression with full freedom of impression, by augmenting the social mediation ecosystem with the right balance of three control points:

  • Censorship as posts and responses enter the network, entirely banning users or removing individual posts before they reach anyone at all -- a threat to freedom of expression.
  • Selection of what is fed or recommended out to each user, individually -- a threat or exercise of freedom of impression, depending on who controls it.
  • Friction and other measures to enhance the deliberative quality of human social mediation activity -- with little threat to freedom of thought.
There is no quick fix to the problems of social media, but we can quickly change course to begin to undo the disaster of the past decade or two -- and to avoid ill-conceived remedies that will fail or make things worse. As shown here, doing that means: 
  • relying less on censorship (bans and removals that have questionable legitimacy), and
  • giving users agency over the selection of what they see (to legitimately balance each speaker's freedom of expression with each listener's freedom of impression), and
  • re-creating a truly open and generative social mediation ecosystem that is like what we have been evolving over the past few centuries of analog society, but now augmented by digital media tech (instead of de-augmented and disintermediated by it).

Tuesday, December 20, 2022

In Tech Policy Press: Into the Plativerse… Through Fiddleware?

[Shutterstock, via Tech Policy Press]
My latest short piece in Tech Policy Press is Into the Plativerse… Through Fiddleware?

It speculates on how the seemingly approaching demise of the platforms, and the push for less centralization of their power, can lead to a renaissance, building on my earlier piece, The Future of Twitter is Open, or Bust (11/4/22). Think of that as the "Twilight of the Platforms" (the Platformdämmerung, for those into Wagner).

This has driven a migration to Mastodon and its "fediverse" of locally-controlled federated systems, which in turn has spurred the creation of bridges between Twitter and Mastodon. From that, I suggest some further evolution. Here are some snippets:

The Plativerse (A Fediverse That Includes Platforms)

Because these bridges are still crude, Twitter is effectively a huge instance (platstance?) that is poorly federated. ...It seems inevitable that those beginnings of a hybrid fediverse/plativerse can be improved on to enable full interoperability between the fediverse and Twitter (or any other platform)...

Fiddleware (Federated Middleware)

I have long advocated for user choice in how our online feeds are organized and moderated as the only effective way for a democratic society to deal with this complexity and nuance. Enabling such choice has recently gained advocates who see a role for “middleware” services that act as user-agents between users and their media distribution systems. I envision this not as choosing a single middleware service to be granted sole control, but as composing and steering combinations of services to blend a range of algorithms that distill selected sources of human judgments – and to use them to draw from a multiplicity of what I called confederated systems as far back as 2003.

The importance of that level of flexibility in middleware has been little recognized, but the fediverse/plativerse may provide just the environment for it to emerge organically. If users can be given powerful tools to manage their navigation of the fediverse, shouldn’t they be able to shape these tools to feed them what they want, drawing from any of a multiplicity of instances/platstances, in whatever ways they choose – rather than being under the control of any single home instance with its home community and single benevolent dictator? Shouldn’t they be able to compose multiple ranking services to generate composite rankings, and shift the gears – weighting and steering those systems as their moods, tasks and domains change? Shouldn’t middleware be federated? Call it fiddleware.
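
As a rough illustration of what "composing and steering combinations of services" might mean in practice, here is a minimal sketch (hypothetical names only, not any real fediverse API) of weighted rank blending that a user could re-steer at will:

```python
from typing import Callable, Dict, List

# Each ranking service maps an item id to a relevance/quality score in [0, 1].
RankingService = Callable[[str], float]

def composite_rank(items: List[str],
                   services: Dict[str, RankingService],
                   weights: Dict[str, float]) -> List[str]:
    """Blend scores from several user-chosen services into one composite ranking."""
    total = sum(weights.values()) or 1.0
    def blended(item: str) -> float:
        return sum(w * services[name](item) for name, w in weights.items()) / total
    return sorted(items, key=blended, reverse=True)

# "Shifting the gears": the user re-weights services as moods, tasks, and domains
# change, e.g. weights = {"local_news": 0.6, "science": 0.3, "friends": 0.1}
```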

Shaping a Diverse Information Ecosystem

...The fediverse is surging in reaction to the platforms’ abuse of our attention and failure to scale moderation well. But scalable participatory governance is the crucial failing of the fediverse as well as the platforms. A plativerse can allow platforms to interoperate with less centralized systems – and can create an open marketspace in which shared infrastructure services such as middleware can emerge and find their place organically.

This builds on the ideas that Chris Riley and I explored in depth in our four-part series in Tech Policy Press on delegation of user choice.

 

Friday, November 04, 2022

#4 Contributor Post in Tech Policy Press for 2022 -- The Future of Twitter is Open, or Bust


Update 12/26/22: It is gratifying to see that my most recent article with Chris Riley was listed as #4 in the Top 20 Tech Policy Press Contributor Posts for 2022.


The Future of Twitter is Open, or Bust
(11/4/22, co-authored with Chris Riley) explores how a more open strategy might save Twitter from demise.

Some snippets:

Twitter’s best — and most likely, only — hope to survive as a service and as a business is to find an exit ramp off of the highway to hell it’s on. History offers one such path: Open up the platform. Let others build their own Twitter apps, and do their own filtering and moderation, while preserving the advantages of a centralized discovery and sharing mechanism through the underlying platform. And when other, independent Twitter apps succeed, so too will Twitter.

Many years ago, it was hard to imagine the World Wide Web winning in the market over AOL and CompuServe. Yet that’s exactly what happened. It turned out that letting the users of the Web, including other businesses, sit in the driver’s seat unlocked a powerful creative force, and gave the Web an advantage that saw it outlast its platform competitors.

Twitter can take one last swing for the fences and try to recreate the power of the open Web — and in the same move, perhaps sidestep much of the coming maelstrom of content policy criticism — by separating out the platform it manages from the “presentation layer” that sits between the platform and its users, and includes both the user-facing app and the behind-the-scenes content filtering, prioritization, and recommendation.

That means opening up the platform’s interfaces and data enough to let others create new kinds of Twitter tools and apps. And not just customizing at the level of colors and fonts, but deeply, at the level of freely selecting what content is made available when, and how it is presented to users.

...Separating the platform from the presentation means letting go of sole responsibility for filtering and content moderation.

...Twitter has never known what to do with the incredible network it has...It’s time to let others take a swing at it. 

Friday, October 07, 2022

"Delegation, or, The Twenty Nine Words that the Internet Forgot" -- A Series in Tech Policy Press

“It is the policy of the United States…to encourage the development of technologies which maximize user control over what information is received by individuals…who use the Internet…” (from Section 230 of the Communications Decency Act)

Part 1. (2/27/22)
Delegation, or, The Twenty Nine Words that the Internet Forgot
 

The series begins with an exploration of why this emphasis on user control is far more important than generally recognized, and how an architecture designed to make high levels of user control manageable can enhance the nuance, context, balance, and value in human discourse that current social media are tragically degrading.

While that portion of the much-discussed "Section 230" has been neglected, those ideas have re-emerged -- most prominently in the 2019 ACCESS Act introduced in the U.S. Senate, which included among its provisions a requirement to provide “delegatability” – enabled through APIs that allow a user to authorize a third party to manage the user’s content and settings directly on the user’s behalf.

This opening essay concludes: 

User choice is essential to a social and media ecosystem that preserves and augments democracy, self-actualization, and the common welfare – instead of undermining it. And delegation is the linchpin that can make that a reality.

Part 2. (4/27/22)
Understanding Social Media: An Increasingly Reflexive Extension of Humanity
 

We shape our tools and thereafter our tools shape us. (Marshall McLuhan)

Social media do not behave like other media. Speech is not primarily broadcast, as through megaphones and amplification, but rather propagates more like word-of-mouth, from person to person. Feedback loops of reinforcing interactions by other users can snowball, or just fizzle out. Understanding how to modulate the harmful aspects of wild messaging cascades requires stepping back and, instead of viewing the messages as individual items of content, seeing them as stages in reflexive flows in which we and these new media tools shape each other. The reflexivity is the message. A media ecology perspective can help us understand where current social media have gone wrong and orchestrate the effort to manage increasing reflexivity in a holistic, coherent, inclusive, and effective way.

Part 3. (6/17/22)
Community and Content Moderation in the Digital Public Hypersquare

Current news is awash with acute concerns about social media content and how it is or is not moderated in the so-called “digital public square.” However, this is not really a single, discrete square, but is better seen as the “digital public hypersquare:” a hyperlinked environment made up of a multitude of digital spaces, much as the World Wide Web is a hyperlinked web made up of a multitude of websites.

Recognizing the multidimensionality and interconnected nature of these social squares (or spaces) can facilitate flexible, context-specific content modulation, as opposed to the blunter, less context-specific tool of moderation-as-removal. Instead of framing content policies as centralized, top-down policing – with all of that frame’s inherent associations with oppression, at one extreme, or anarchy, at the other – social media governance can be envisioned as a network of positive community-built, community-building layers, running in their own contextually appropriate ways, over the top of modern-day networks. This provides a new logic for diagnosing and beginning to treat how social media now exacerbate many of the disease symptoms that present with increasing severity.

Efforts are already in the works to start layering community-centric approaches onto broader platforms...

Part 4. (9/22/22)
Contending for Democracy on Social Media and Beyond

Conflict is part of democracy, and will continue to be, especially in an age of rapid change that only promises to accelerate. Just as democracy is weakened by the prevalence of unhealthy conflict, so too it is weakened by attempts to suppress healthy conflict that is agonistic, rather than antagonistic. 

Faced with the challenges of harmful online content, some argue that more paternal—some might say more principled, others authoritarian—governance is needed to deal with these stressors, but robust and healthy democratic processes are arguably the most adaptable, and therefore ensuring they work effectively is more important than ever.

This series is being published in Tech Policy Press -- co-authored with tech policy executive Chris Riley... [series is currently on hiatus]

-----------------

***Background and running updates below [last updated 2/14/23]*** 

New shorter pieces that build on the Delegation series 

  • Summation and update - Start with this!
    From Freedom of Speech and Reach to Freedom of Expression and Impression (Tech Policy Press, 2/14/23) - Distilling and updating essential reframings from the Delegation series. Managing society’s problems related to how (and by whom) social media news feeds are composed is rapidly reducing to the absurd. Focus on the other end of the proverbial “megaphone” – not speaker’s end, but listener’s. Restore our Freedom of Impression!
  • Into the Plativerse… Through Fiddleware? (Tech Policy Press, 12/20/22) - Suggesting a future that is neither fully centralized platforms, nor a fully decentralized fediverse, but a distributed hybrid (a plativerse?) that enables nuanced control -- and may enable the emergence of federated middleware (fiddleware?) to best serve users.

  • The Future of Twitter is Open, or Bust (Tech Policy Press, 11/4/22, with Chris Riley) -- Twitter’s best — and most likely, only — hope to survive as a service and as a business is to find an exit ramp off of the highway to hell it’s on by opening up the platform.

Background

This page is to be updated as the series unfolds -- with my own personal perspectives and links to relevant materials. All views expressed here are my own (but owe much to wise insights from Chris). 

My other works related to this are listed in the Selected Items tab, above [updates here are now very intermittent - check Selected Items tab for more current items]. Some that are most relevant to expand on the themes introduced in this first article:

This diagram from my The Internet Beyond... article may also be helpful:


Chris and I are very pleased with how this collaboration is synergizing our ideas, and how we draw on very complementary backgrounds: his in internet policy, governance, and law; mine in the technology and business of media as a tool for augmenting human discourse and intellect.

Running updates

[1/30/23:] This new diagram of mine (to be published soon, see fuller teaser) distills the core dynamic:

Freedom of thought, expression, and impression are not just isolated, individual matters, but an ongoing, cyclic, social process. Thought leads to expression, which then flows through a network of others – a social mediation ecosystem. That feeds impression, in cycles that reflexively lead to further thought.

Cutting through the dilemmas of managing networked speech will depend on balancing full freedom of expression with full freedom of impression, by augmenting the social mediation ecosystem with the right balance of three control points:


[6/17/22:] While many have good reason to fear that control of Twitter by Elon Musk could be a disaster, there are some further hopeful signs in his 6/16 comments to employees:
"There's freedom of speech and freedom of reach," he said. "Anyone could just go into the middle of Times Square right now and say anything they want. They can just walk into the middle of Times Square and deny the Holocaust ... but that doesn't mean that needs to be promoted to millions of people. So I think people should be allowed to say pretty outrageous things that are in the bounds of the law but that don't get amplified and don't get a ton of reach."
Our Delegation piece supports this idea in a form that is more clearly desirable and operationalizable, by shifting from the negative frame of Free Speech is Not the Same As Free Reach (which Musk may have gotten from Renee DiResta via Jack Dorsey), with its focus on the speaker/advertiser, to our positive frame of freedom of impression, with its focus on the rights of each listener.

[5/6/22:] The Dorsey-funded Bluesky project published an architecture paper that helps clarify key ideas in the vision of decentralized, user-delegated control of social media filtering. It is suggestive of possible directions by Twitter under Musk, and more broadly. I posted some excerpts from this (somewhat technical) document, with some light context and links.

[5/6/22:] Today I was reminded how surprisingly early the roots of the media ecology of reflexivity augmented by human-machine loops go. I first dug into that around 1970, including Licklider's 1960 Man-Computer Symbiosis, which I now see again was very pointed about this symbiosis going beyond the levels of "mechanically extended man" (a very McLuhanesque phrase that Licklider cited to 1954) and "artificial intelligence." Licklider inspired (and sponsored) Engelbart's "Augmenting Human Intellect," which inspired my views on making social media augment human society -- and also anticipates the related resurgence of thinking about more "human-centered AI," and AI Delegability. And of course Bush's 1945 As We May Think inspired all of this.

This reflexive intertwingling of ideas is also apropos of the question of our original attribution of our opening quote ("Man shapes his tools and thereafter our tools shape us") to McLuhan -- we removed any specific attribution because it may have been taken from others -- what matters to us is that McLuhan adopted it and gave it added attention.

[4/29/22:] Opening sections revised to add the second in the series.

[2/28/22:] Very pleased to see this:


Acknowledgements

My thanks to the many outstanding thinkers in this space who have been helpful in developing these ideas -- and especially to Justin Hendrix, co-founder and editor of Tech Policy Press for his support and skilled editing. ...And of course to Chris Riley for this very stimulating and enjoyable collaboration.

[This post was first published 2/27/22 when the series began, and has since been updated and expanded as additional essays are published.]

Monday, June 20, 2022

Cited in FTC Report to Congress on Combatting Online Harms

A major new Federal Trade Commission Report to Congress, Combatting Online Harms Through Innovation, offers an outstanding review of the state of thinking about this urgent topic that has been a focus of my work.

It was especially pleasing to see that it cited one of my essays in Tech Policy Press, Progress Toward Re-Architecting Social Media to Serve Society, in a section of recommendations focused on "User Tools" that would delegate more control over what we see on social media to individual users -- and to independent services that users choose to serve as their agents -- rather than leaving it unilaterally to the platforms. It also referred to the conference with leading thinkers on the pros and cons of such tools that I helped organize and moderate for Tech Policy Press.

I especially recommend reading the Introduction, the section on Platform AI Interventions (which also looks beyond simplistic and inadequate remedies of moderation-as-removal), and the one on User Tools.

Thursday, June 02, 2022

Reisman Appointed as Nonresident Senior Fellow at Lincoln Network

I am very pleased to be appointed as a nonresident senior fellow at Lincoln Network.

Lincoln Network is a 501(c)(3) nonprofit founded in 2014 with a mission to help bridge the gap between Silicon Valley and DC, advancing a more perfect union between technology and the American republic. We believe in a world of free people and competitive markets, and that fostering a robust innovation ecosystem is crucial to creating a better, freer, and more abundant future.

...With a cross-partisan portfolio of issues and staff from diverse backgrounds in technology and policy located on both coasts, we have built expansive networks across different sectors and ideological lines.

My interests in social media (in the broadest, forward-looking sense) seem to be in particular alignment with those expressed in this piece by Lincoln Network Executive Director Zach Graves, with the tagline that "Interoperability and open protocols can solve many of the problems of centralized cyber power without a heavy regulatory hand," and also noting that "achieving (optimal) interoperability may sometimes require government action to address coordination problems, misaligned incentives, or overcome existing regulatory barriers."

Tuesday, May 17, 2022

Boiling Elon Musk – Jumping Out Of The Pot Of Platform Law?

My take on the deeper issues for democracy of Musk's on-again/off-again bid for Twitter, Boiling Elon Musk – Jumping Out Of The Pot Of Platform Law?, has been published on Techdirt.  
The boiling frog syndrome suggests that if a frog jumps into a pot of boiling water, it immediately jumps out — but if a frog jumps into a slowly heating pot, it senses no danger and gets cooked. Mark Zuckerberg’s Facebook has been gradually coming to a boil of dysfunction for a decade – some are horrified, but many fail to see any serious problem. Now Elon Musk has jumped into a Twitter that he may quickly bring to a boil. Many expect either him – or hordes of non-extremist Twitter users – to jump out.
The frog syndrome may not be true of frogs, and Musk may not bring Twitter to an immediate boil, but the deeper problem that could boil us all is “platform law:” Social media, notably Twitter, have become powerful platforms that are bringing our new virtual “public square” to a raging boil. Harmful and polarizing disinformation and hate speech are threatening democracy here, and around the world.

The apparent problem is censorship versus free speech (whatever those may mean) -- but the deeper problem is who sets the rules for what can be said, to what audience? Now we are facing a regime of platform law, where these private platforms have nearly unlimited power to set and enforce rules for censoring who can say what...

The article goes on to suggest ways to take the pot off the burner. 

Thursday, May 05, 2022

Musk, Twitter, and Bluesky -- How to Rethink Free Speech and Moderation in Social Media

Beyond the hope, fear, and loathing wrapped in the enigma of Elon Musk's Twitter, there are some hints of possible blue skies and sunlight, whatever your politics. A new architecture document from the Bluesky project that Jack Dorsey funded points to an important strategy for how that might be achieved -- whether by Twitter, or by others. Here are some quick notes on the key idea and why it matters.

That document is written for the technically inclined, so here are some important highlights (emphasis added):

It’s not possible to have a usable social network without moderation. Decentralizing components of existing social networks is about creating a balance that gives users the right to speech, and services the right to provide or deny reach.

Our model is that speech and reach should be two separate layers, built to work with each other. The “speech” layer should remain neutral, distributing authority and designed to ensure everyone has a voice. The “reach” layer lives on top, built for flexibility and designed to scale.

Source: Bluesky 

The base layer...creates a common space for speech where everyone is free to participate, analogous to the Web where anyone can put up a website. ...Indexer services then enable reach by aggregating content from the network. Moderation occurs in multiple layers through the system, including in aggregation algorithms, thresholds based on reputation, and end-user choice. There's no one company that can decide what gets published; instead there is a marketplace of companies deciding what to carry to their audiences.

Separating speech and reach gives indexing services more freedom to moderate. Moderation action by an indexing service doesn't remove a user's identity or destroy their social graph – it only affects the services' own indexes. Users choose their indexers, and so can choose a different service or to supplement with additional services if they're unhappy with the policies of any particular service.

There is growing recognition that something along these lines is the only feasible way to manage the increasing reach of social media that is now running wild in democracies that value free speech. I have been writing extensively about this on this blog, and in Tech Policy Press (see the list of selected items).

The Bluesky document also suggests a nice two-level structure that separates the task of labeling from the actioning task that actually controls what gets into your feed:

The act of moderation is split into two distinct concepts. The first is labeling, and the second is actioning. In a centralized system the process of content review can lead directly to a moderation decision to remove content across the site. In a distributed system the content reviewers can provide information but cannot force every moderator in the system to take action.

Labels

In a centralized system there would be a Terms of Service for the centralized service. They would hire a Trust and Safety team to label content which violates those terms. In a decentralized system there is no central point of control to be leveraged for trust and safety. Instead we need to rely on data labelers. For example, one data labeling service might add safety labels for attachments that are identified as malware, while another may provide labels for spam, and a third may have a portfolio of labels for different kinds of offensive content. Any indexer or home server could choose to subscribe to one or more of these labeling services.

The second source of safety labels will be individuals. If a user receives a post that they consider to be spam or offensive they can apply their own safety labels to the content. These signals from users can act as the raw data for the larger labeling services to discover offensive content and train their labelers.

By giving users the ability to choose their preferred safety labelers, we allow the bar to move in both directions at once. Those that wish to have stricter labels can choose a stricter labeler, and those that want more liberal labels can choose a more liberal labeler. This will reduce the intense pressure that comes from centralized social networks trying to arrive at a universally acceptable set of values for moderating content.

Actions

Safety labels don’t inherently protect users from offensive content. Labels are used in order to determine which actions to take on the content. This could be any number of actions, from mild actions like displaying context, to extreme actions like permanently dropping all future content from that source. Actions such as contextualizing, flagging, hiding behind an interstitial click through, down ranking, moving to a spam timeline, hiding, or banning would be enacted by a set of rules on the safety labels.

This divide empowers users with increased control of their timeline. In a centralized system, all users must accept the Trust and Safety decisions of the platform, and the platform must provide a set of decisions that are roughly acceptable to all users. By decomposing labels and the resulting actions, we enable users to choose labelers and resulting actions which fit their preferences.

Each user’s home server can pull the safety labels on the candidate content for the home timeline from many sources. It can then use those labels in curating and ranking the user timeline. Once the events are sent to the client device the same safety labels can be used to feed the UX in the app.

This just hints at the wide array of factors that can be used in ranking and recommending that I have explored in a major piece in Tech Policy Press, and in more detail in my blog (notably this post). One point of special interest is the suggestion that a "source of safety labels will be individuals" -- I have suggested that crowdsourcing can be a powerful tool for creating a "cognitive immune system" that can be more powerful, scalable, and responsive in real time than conventional moderation.
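
To make that label/action decomposition concrete, here is a minimal sketch (illustrative names only -- not Bluesky's actual data model or API) of labels from user-chosen labelers being turned into per-user timeline actions:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class Post:
    text: str
    labels: Set[str] = field(default_factory=set)  # applied by subscribed labeling services

# Each user (or their home server) maps labels to actions of varying severity.
my_rules: Dict[str, str] = {
    "malware": "drop",           # extreme: never delivered
    "spam": "hide",
    "disputed": "interstitial",  # mild: context behind a click-through
}

def apply_actions(post: Post, rules: Dict[str, str]) -> Optional[Post]:
    """Labels don't protect by themselves; rules turn them into timeline actions."""
    for label in post.labels:
        action = rules.get(label)
        if action == "drop":
            return None
        if action == "hide":
            post.text = f"[hidden: {label}]"
        elif action == "interstitial":
            post.text = f"[click through to view: {label}]\n" + post.text
    return post
```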

The broader view of what this means for social media and society is the subject of the series I am doing with Chris Riley in Tech Policy Press. But this Bluesky document provides a nice explanation of some basic ideas, and demonstrates progress toward making such systems a reality.

The hope is that Twitter applies such ideas -- and that others do.

 

Friday, February 04, 2022

The Wrong Way to Preserve Journalism

Experts Spar At Hearing on Journalism, Tech and Market Power, as Justin Hendrix nicely reports today in Tech Policy Press.

Here is a brief commentary [still in progress] on why the proposed "Journalism Competition and Preservation Act of 2021" is harmful law in terms of its effects on journalism, competition, business models, and the essential nature of the Internet.

This bill provides for a badly designed subsidy, in a way that is the very opposite of enhancing competition or access to information via the Internet, and removes motivation for news publishers to move beyond their failed business models.

There is a case for subsidizing the preservation of news (especially local news), and for limiting the monopoly rents that the platforms extract from advertisers. Until the market can do that on its own, the way to do it is with a tax on platform ad revenues that is used to fund a subsidy for journalism and to support efforts to find better business models, so that journalism can sustain itself in the new digital world of abundance.

My work on the FairPay framework suggests how the latter might be accomplished, in ways that few yet understand, as outlined below. But in the meantime a tax + subsidy strategy seems the only viable option.

The problems with this approach

Hendrix links to a statement by multiple public interest organizations on why this is the wrong remedy:

Free Press led a letter signed also by Public Knowledge, Wikimedia and Common Cause, among others, that said the JCPA “may actually hurt local publishers by entrenching existing power relationships between the largest platforms and largest publishers. News giants with the greatest leverage would dominate the negotiations and small outlets with diverse or dissenting voices would be unheard if not hurt.”

He also cites the hearing testimony of Dan Gainor and Daniel Francis, which makes compelling arguments as to why the good intentions of the advocates are misguided.

Joshua Benton at NiemanLab provides an excellent analysis of why “Australia’s latest export is bad media policy, and it’s spreading fast” (see the "third" idea, part way down): 

The base problem here is that these governments are telling the tech giants that their use of their country’s publishers’ news content has a monetary value that is somehow different from all other content in existence. And that’s the important word here: use.

...You can have a million complaints about these companies — I do! — but at a fundamental level, the ways in which they “use” content are simply inherent to their natures as a search engine and a social platform.

...The core issue is misdirection. Publishers complain about Google and Facebook’s use of their stories — but that’s not what they’re actually angry about. What they’re angry about is that Google and Facebook dominate the digital advertising business — just as they used to dominate the print advertising business. And those are two really different things!

...It’s also why I get cross with media reporters who let sloppy language seep into their stories — like that this is all about setting “a price for news content published on the companies’ platforms.” None of this content is being published on Google and Facebook unless the publishers have specifically asked it to be. It’s being linked to, in the same way everything else in the world is being linked to. And unless you think the very concept of a search engine or a social platform is immoral, linking to things is just a fundamental part of how these things work.

...So tax them. Say you’re going to put a 1.5% tax on the targeted digital advertising revenue of all companies with a market cap over $1 trillion, or annual revenues over $20 billion, or whatever cutoff you want. That would generate billions of dollars a year in a way that doesn’t warp competition or let Google and Facebook use their cash as a tool for targeted PR payoffs.

Better approaches

As for the question of sustainable business models for journalism, my work on FairPay explains why current models based on artificial scarcity and flat-rate pricing fail in the world of digital abundance, where prices should map to highly diverse and variable customer value propositions -- and how more adaptively win-win models based on customer value in an ongoing relationship can change that.

A large body of work on that is cited on my FairPayZone blog, including work with academic co-authors in Harvard Business Review and two scholarly marketing journals. Some notable items are (start with the first):

Of course, getting to this kind of model will take time, experimentation, and learning, which the news publishers have been too distracted, stretched thin, or simply too unimaginative to do. So...

In the meantime, Congress should provide a stop-gap that sustains journalism and helps it move toward being self-sustaining in this new digital world:

  • Tax the ad revenues of the dominant platforms to limit their obscene monopoly profits on advertising and help drive them toward better business models of their own (see Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data).
  • Temporarily, use much of that tax revenue to subsidize news -- directly to publishers of quality news, especially local, and to create new public interest publishers much like public radio and television. 
  • As a long-term remedy, use a significant portion of that tax revenue to fund experiments in better business models for publishers, and for operational platforms that help them generate direct reader/patron revenue in consumer-value-efficient ways.
Importantly, publishers should not be rewarded for their lack of business model innovation. Subsidies should be narrowly channeled to preserving actual journalism work itself in the short term, as emergency relief, while also supporting business model innovation projects  -- including development of shared Revenue-as-a-Service platforms. (These qualifications on timing in the use of the tax revenue were spurred by a comment from Chris Riley, referring to his post from a year ago, The Great War, Part 3: The Internet vs Journalism.)

This alternative approach would address the symptoms of failed business models for journalism, with none of the damage that would be caused by embracing cartels of businesses that failed to adapt, eliminating "fair use," and destroying the fundamental structure of linking on the Internet that has created so much value for all of us -- even the publishers who downplay the significant promotional value it has created for them.

Thursday, December 23, 2021

Tech Policy Press Had A Great First Year -- Illuminating the Critical Issues

Democracy owes thanks to Tech Policy Press and its CEO/Editor Justin Hendrix for a great first year of important reporting, analysis, and opinion on the increasingly urgent issues of tech policy, especially social media. It is becoming the place to keep up with news and ideas. 

They just published their list of Top 50 Contributor Posts of 2021 from 330 posts by 120 guest contributors, and their list of Top 10 Tech Policy Press Podcasts of 2021 from 54 episodes.

I am honored to be among the stellar contributors - and to have written two of the “Top 50” posts (plus four others) - and to have helped organize and moderate their special half-day event, Reconciling Social Media and Democracy.

Just a partial sampling of the many other contributors I have learned much from - Daphne Keller, Elinor Carmi, Nathalie Maréchal, Yael Eisenstat, Ellen Goodman, Karen Kornbluh, Renee DiResta, Chris Riley, Francis Fukuyama, Cory Doctorow, and Mike Masnick.

Great work by CEO/Editor Justin Hendrix.

Sign up for their newsletter!

Monday, December 20, 2021

Are You Covidscuous? [or Coviscuous?]

Are You Covidscuous? Have you been swapping air with those who are?

Covidscuous, adj. (co-vid-skyoo-us), Covidscuity, n. -- definition: demonstrating or implying an undiscriminating or unselective approach; indiscriminate or casual -- in regard to Covid contagion risks to oneself and those around one.

[Update 1/12/22:] Alternate form: Coviscuous, Coviscuity. Some may find this form easier to pronounce and more understandable.

We seem to lack a word for this badly needed concept. Many smart people who know Covid is real, and who have been vaccinated and boosted and often wear masks, still seem oblivious to the cumulative and multiplicative nature of repeated exposures to risk. Many are aware that Omicron has thrown a new curveball, but give little thought to how often they expose themselves (and thus those they spend time with) by not limiting how much time they spend in large congregate indoor settings -- especially when rates and risks are increasing.

In July 2020, I wrote The Fog of Coronavirus: No Bright Lines, emphasizing that Covid spreads like a fog, depending on distance, airflow, and duration of exposure -- and that while a single interaction may carry low risk, large numbers of low-risk interactions can amount to high risk. “You can play Russian roulette once or twice and likely survive. Ten or twenty times and you will almost certainly die. We must weigh level of risk, duration, and frequency.” A gathering of six friends or relatives exposes six people to each other. A party with dozens of people chatting and mingling in ever-changing close circles of a few people has far higher risk – even if all are boosted.

We need to constantly apply the OODA loop to our exposures – Observe, Orient, Decide, Act, and repeat. When rates and exposure levels are low, we can be more relaxed. As rates or other risk factors increase, we need to be far more judicious about our exposures.

We should think in terms of a Covidscuity Rating: an index that factors in how many people you interact with (each having their own Covidscuity Rating), and for what duration. More people, higher individual Covidscuity, longer duration, closer contact, and less masking all multiply risk. Maybe epidemiologists can decide just how that math generally works and create a calculator app we can use to understand the relevant factors better (much like apps for home energy efficiency). Maybe display a Monte Carlo graph to show how this is never exact, but a fuzzy bell curve of probabilities. This could help us understand the risks we take -- and those we take on from those we choose to interact with.
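
I am no epidemiologist, but the core arithmetic of compounding is simple enough to sketch (in Python, with placeholder numbers, not real risk estimates): independent exposures compound as one minus the product of the escape probabilities.

```python
import random

def cumulative_risk(per_exposure_risks):
    """Probability of at least one infection across repeated independent exposures."""
    escape_all = 1.0
    for p in per_exposure_risks:
        escape_all *= (1.0 - p)
    return 1.0 - escape_all

print(cumulative_risk([0.005]))       # one 0.5% interaction: ~0.005
print(cumulative_risk([0.005] * 30))  # thirty of them: ~0.14 -- low risks add up

def monte_carlo_risk(per_exposure_risks, trials=10_000):
    """A crude Monte Carlo version -- a fuzzy estimate, not an exact number."""
    infected = sum(
        any(random.random() < p for p in per_exposure_risks)
        for _ in range(trials)
    )
    return infected / trials
```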

But in any case, the OODA loops must be continuous. Not from months ago, but weekly, and whenever there is new information. Observe, Orient, Decide, Act, repeat.

And of course we have a social responsibility. This risk is not just to you, but also to those you might next infect. And to all of us, as you help provide a breeding ground for new and more dangerous variants.

This is not to say some Covidscuity is always wrong, only that we should maintain updated awareness of what risk we take, for what reward -- and budget our activities for the compounding effect of repeated exposures, not just single events. Consider your own Covidscuity, and that of those you expose yourself to.

Sunday, December 19, 2021

Tech Policy Press: The Ghost of Surveillance Capitalism Future

My short article in Tech Policy Press focuses on The Ghost of Surveillance Capitalism Future, AKA, The Ghost of Social Media Future. 

Concerned about what Facebook and other platforms know about you and use to manipulate you now? The "mind-reading" power of "biometric psychography" will make that look like the good old days. 

Now is the time for policy planners to look to the future – not just to next year, but the next decade. Whatever direction we choose, the underlying question is “whom does the technology serve?” These global networks are far too universal, and their future potential far too powerful, to leave this to laissez-faire markets with business models that primarily exploit users.

--
Plus two additional references that add to the vision of abuses:

    Monday, November 29, 2021

    Directions Toward Re-Architecting Social Media to Serve Society

    My current article in Tech Policy Press, Progress Toward Re-Architecting Social Media to Serve Society, reports briefly on the latest in a series of dialogs on a family of radical proposals that is gaining interest. These discussions have been driven by the Stanford Working Group on Platform Scale and their proposal to unbundle the filtering of items into our social media news feeds, from the platforms, into independent filtering “middleware” services that are selected by users in an open market.

    As that article suggests, the latest dialogue at the Stanford HAI Conference on "Radical Proposals" questions whether variations on these proposals go too far, or not far enough. That suggests that policy planners would benefit from more clarity on increases in scope that might be phased over time, and on just what the long-term vision for the proposal is. The most recent session offered some hints of directions toward more ambitious variations – which might be challenging to achieve but might generate broader support by more fully addressing key issues. But these were just hints.

    Reflecting on these discussions, this post pulls together some bolder visions along the same lines that I have been sketching out, to clarify what we might work toward and how this might address open concerns. Most notably, it expands on the suggestion in the recent session that data cooperatives are another kind of “middleware” between platforms and users that might complement the proposed news feed filtering middleware.

    The current state of discussion

    This is best understood after reading my current Tech Policy Press article, but here is the gist:

       The unbundling of control of social media filtering to users -- via an open market of filtering services -- is gaining recognition as a new and potentially important tool in our arsenal for managing social media without crippling the freedom of speech that democracy depends on. Instead of platform control, it brings a level of social mediation by users and services that work as their agents.

       Speaking as members of the Stanford Group, Francis Fukuyama and Ashish Goel explained more of their vision of such an unbundling, gave a brief demo, and described how they have backed off to become a bit less radical -- to limit privacy concerns as well as platform and political resistance. However, others on the panel suggested that might not be ambitious enough.

       To the five open concerns about these proposals that I had previously summarized -- relating to speech, business models, privacy, competition and interoperability, and technological feasibility – this latest session highlighted a sixth issue -- relating to the social flow graph. That is the need for filtering to consider not just the content of social media but the dynamics of how that content flows among -- and draws reaction from -- chains of users, with sometimes-destructive amplification. How can we manage that harmful form of social mediation -- and can we achieve positive forms of social mediation?

       That, in turn, brings privacy back to the fore. Panelist Katrina Ligett suggested that another topic at the Stanford conference, Data Cooperatives, was also relevant to this need to consider the collective behavior of social media users. That is something I had written about after reflecting on the earlier discussion hosted by Tech Policy Press. The following section relates those ideas to this latest discussion.

    Infomediaries -- another level of middleware -- to address privacy and business model issues

    While adding another layer of intermediation and spinning more function out of the platforms may seem to complicate things, the deeper level of insight from the dynamics of the flow of discourse will enable more effective filtering -- and more effective management of speech across the board. It will not come easily or quickly -- but any stop-gap remediation should be done with care to not foreclose development toward mining this wellspring of collective human judgment.

    The connection of filtering service “middleware” to the other “middleware” of data collectives that Ligett and I have raised has relevance not only to privacy but also to the business and revenue model concerns that Fukuyama and Goel gave as reasons for scaling back their proposals. Data collectives are a variation on what were first proposed as “infomediaries” (information intermediaries) and later as “information fiduciaries.” I wrote in 2018 about how infomediary services could help resolve the business model problems of social media, and recently about how they could help resolve the privacy concerns. The core idea is that infomediaries act as user agents and fiduciaries to negotiate between users and platforms – and advertisers -- for user attention and data.

    My recent sketch of a proposal to use infomediaries to support filtering middleware, Resolving Speech, Biz Model, and Privacy Issues – An Infomediary Infrastructure for Social Media?, suggested not that the filtering services themselves be infomediaries, but that they be part of an architecture with two new levels:

    1. A small number of independent and competing infomediaries that could safeguard the personal data of users, coordinate limits on clearly harmful content, and help manage flow controls. They could use all of that data to run filtering on behalf of...
    2. A large diversity of filtering services – without exposing that personal data to the filtering services (which might have much more limited resources to process and safeguard the data).

    Such a two-level structure might enable powerful and diverse filtering services while providing a strong quasi-central, federated support service – insulated from both the platforms and the filtering services. That infomediary service could coordinate efforts to limit dangerous virality in ways that serve users and society, not advertisers. Those infomediaries could also negotiate as agents for the users for a share of any advertising revenue -- and take a portion of that to fund themselves, and the filtering services.
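
    As a rough sketch of that data boundary (hypothetical interfaces only, not a real design), the key point is that raw personal data stays inside the infomediary, which hands each user-chosen filtering service only derived, de-identified signals:

    ```python
    from typing import Dict, List

    class FilteringService:
        """One of many user-chosen services; it never sees raw personal data."""
        def rank(self, item_ids: List[str], signals: Dict[str, float]) -> List[str]:
            return sorted(item_ids, key=lambda i: signals.get(i, 0.0), reverse=True)

    class Infomediary:
        """Fiduciary user agent: safeguards raw data, runs filtering on users' behalf."""
        def __init__(self) -> None:
            self._raw_interactions: List[dict] = []  # personal data never leaves this boundary

        def derived_signals(self, item_ids: List[str]) -> Dict[str, float]:
            # e.g., reputation/authority scores distilled from the social flow graph
            return {i: 0.0 for i in item_ids}  # placeholder computation

        def build_feed(self, item_ids: List[str], service: FilteringService) -> List[str]:
            return service.rank(item_ids, self.derived_signals(item_ids))
    ```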

    With infomediaries, the business model concerns about sustaining filtering services, and insulating them from the perverse incentives of the advertising model to drive engagement, might become much less difficult than currently feared.

       Equitable revenue shares in any direction can be negotiated by the infomediaries, regardless of just how much data the filtering services or infomediaries control, who sells the ads, or how much of the user interface they handle. That is not a technical problem but one of negotiating power. The content and ad-tech industries already manage complex multi-party sales and revenue sharing for ads -- in Web, video, cable TV, and broadcast TV contexts -- which accommodate varying options for which party sells and places ads, and how the revenue is divided among the parties. (Complex revenue sharing arrangements through intermediaries have long been the practice in the music industry.)

       Filtering services and infomediaries could also shift incentives away from the perversity of the engagement model. Engagement is not the end objective of advertisers, but only a convenient surrogate for sales and brand-building. Revenue shares to filtering services and infomediaries could be driven by user-value-based metrics rather than engagement -- even as simple as MAUs (monthly active users). That would better align those services with the business objective of attracting and keeping users, rather than addicting them. Some users may choose to wear blinders, but few will agree to be manipulatively driven toward anger and hate if they have good alternatives. But now the platform's filters are the only game in the platform's town.

    Related strategies that build on this ecosystem to filter for quality

    There might be more agreement on the path toward social media that serve society if we shared a more fleshed-out vision of what constructively motivated social media might do, and how that would counter the abuses we currently face. Some aspects of the power that better filtering services might bring to human discourse are suggested in the following:

    Skeptics are right that user-selected filtering services might sometimes foster filter bubbles. But they fail to consider the power that multiple services that seek to filter for user value might achieve, working in “coopetition.” Motivated to use methods like these, a diversity of filtering services can collaborate to mine the wisdom of the crowd that is hidden in the dynamics of the social flow graph of how users interact with one another – and can share and build on these insights into reputation and authority. User-selected filtering services may not always drive toward quality for all users, but collectively, a powerful vector of emergent consensus can bend toward quality. The genius of democracy is its reliance on free speech to converge on truth – when mediated toward consensus by an open ecosystem of supportive institutions. Well-managed and well-regulated technology can augment that mediation, instead of disrupting it.

    Phases – building toward a social media architecture that serves society

       The Stanford Group’s concerns about “political realism” and platform pushback have led them to a basic level of independent, user-selectable labeling services. That is a limited remedy, but may be valuable in itself, and as a first step toward bolder action.

       Their intent is to extend from labeling to ranking and scoring, initially with little or no personal data. (It is unclear how useful that can be without user interaction flow data, but it is also a step worth testing.)

       Others have proposed similar basic steps toward more user control of filtering. In addition to proposals I cited this spring, the proposed Filter Bubble Transparency Act would require that users be offered an unfiltered reverse-chronological feed. That might also enable independent services to filter that raw feed. Jack Balkin and Chris Riley have separately suggested that Section 230 be a lever for reform by restricting safe harbors to services that act as fiduciaries and/or that provide an unfiltered feed that independent services can filter. (But again, it is unclear how useful that filtering can be without access to user interaction flow data.)

       Riley has also suggested differential treatment of commercial and non-commercial speech. That could enable filtering that is better-tailored to each type.

       The greatest benefit would come with more advanced stages of filtering services that would apply more personal data about the context and flow of content through the network, as users interact with it, to gain far more power to apply human wisdom to filtering (as I have been suggesting). That could feed back to modulate forward flows, creating a powerful tool for selectively damping (or amplifying) the viral cascades that are now so often harmful.

        Infomediaries (data cooperatives) could be introduced to better support that more advanced kind of filtering, as well as to help manage other aspects of the value exchange with users relating to privacy and attention that are now abused by “surveillance capitalism.”

    Without this kind of long-term vision, we risk two harmful errors. One is overreliance on oppressive forms of mediation that stifle the free inquiry that our society depends on, and that the First Amendment was designed to protect. The other is overly restrictive privacy legislation that privatizes community data that should be used to serve the common good. Of course there is a risk that we may stumble at times on this challenging path, but that is how new ecosystems develop.

    ---

    Running updates on these important issues can be found here, and my updating list of Selected Items is on the tab above.