Tuesday, May 17, 2022

Boiling Elon Musk – Jumping Out Of The Pot Of Platform Law?

My take on the deeper issues for democracy of Musk's on-again/off-again bid for Twitter, Boiling Elon Musk – Jumping Out Of The Pot Of Platform Law?, has been published on Techdirt.  
The boiling frog syndrome suggests that if a frog jumps into a pot of boiling water, it immediately jumps out — but if a frog jumps into a slowly heating pot, it senses no danger and gets cooked. Mark Zuckerberg’s Facebook has been gradually coming to a boil of dysfunction for a decade – some are horrified, but many fail to see any serious problem. Now Elon Musk has jumped into a Twitter that he may quickly bring to a boil. Many expect either him – or hordes of non-extremist Twitter users – to jump out.
The frog syndrome may not be true of frogs, and Musk may not bring Twitter to an immediate boil, but the deeper problem that could boil us all is “platform law”: social media, notably Twitter, have become powerful platforms that are bringing our new virtual “public square” to a raging boil. Harmful and polarizing disinformation and hate speech are threatening democracy here and around the world.

The apparent problem is censorship versus free speech (whatever those may mean) -- but the deeper problem is who sets the rules for what can be said, to what audience? Now we are facing a regime of platform law, where these private platforms have nearly unlimited power to set and enforce rules for censoring who can say what...

The article goes on to suggest ways to take the pot off the burner. 

Thursday, May 05, 2022

Musk, Twitter, and Bluesky -- How to Rethink Free Speech and Moderation in Social Media

Beyond the hope, fear, and loathing wrapped in the enigma of Elon Musk's Twitter, there are some hints of possible blue skies and sunlight, whatever your politics. A new architecture document from the Bluesky project that Jack Dorsey funded points to an important strategy for how that might be achieved -- whether by Twitter, or by others. Here are some quick notes on the key idea and why it matters.

That document is written for the technically inclined, so here are some important highlights (emphasis added):

It’s not possible to have a usable social network without moderation. Decentralizing components of existing social networks is about creating a balance that gives users the right to speech, and services the right to provide or deny reach.

Our model is that speech and reach should be two separate layers, built to work with each other. The “speech” layer should remain neutral, distributing authority and designed to ensure everyone has a voice. The “reach” layer lives on top, built for flexibility and designed to scale.

Source: Bluesky 

The base layer...creates a common space for speech where everyone is free to participate, analogous to the Web where anyone can put up a website. ...Indexer services then enable reach by aggregating content from the network. Moderation occurs in multiple layers through the system, including in aggregation algorithms, thresholds based on reputation, and end-user choice. There's no one company that can decide what gets published; instead there is a marketplace of companies deciding what to carry to their audiences.

Separating speech and reach gives indexing services more freedom to moderate. Moderation action by an indexing service doesn't remove a user's identity or destroy their social graph – it only affects the services' own indexes. Users choose their indexers, and so can choose a different service or to supplement with additional services if they're unhappy with the policies of any particular service.
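To make that separation concrete, here is a minimal sketch in TypeScript -- my own illustration, with invented names and types, not the actual Bluesky/AT Protocol API -- of how a neutral speech layer and competing reach layers might relate:

```typescript
// Hypothetical illustration of the speech/reach separation; the names
// and types here are invented, not the actual AT Protocol schema.

// Speech layer: a neutral record of posts tied to a portable identity.
// Nothing at this layer decides who sees what.
interface Post {
  authorDid: string; // decentralized identifier -- survives moderation
  createdAt: number;
  text: string;
}

// Reach layer: competing indexers each apply their own moderation
// policy when deciding what to aggregate for their audiences.
interface Indexer {
  name: string;
  includes(post: Post): boolean; // this indexer's editorial policy
}

const strictIndexer: Indexer = {
  name: "family-friendly",
  includes: (post) => !/slur|gore/i.test(post.text),
};

const openIndexer: Indexer = {
  name: "anything-legal",
  includes: () => true,
};

// Being dropped by one indexer affects only that indexer's own index;
// the post, the identity, and the social graph remain intact, and the
// user can switch to (or supplement with) a different indexing service.
function buildIndex(indexer: Indexer, speechLayer: Post[]): Post[] {
  return speechLayer.filter((p) => indexer.includes(p));
}
```

The design choice to note: moderation here is a property of the indexer a user chooses, not of the underlying network.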

There is growing recognition that something along these lines is the only feasible way to manage the increasing reach of social media that is now running wild in democracies that value free speech. I have been writing extensively about this on this blog, and in Tech Policy Press (see the list of selected items).

The Bluesky document also suggests a nice two-level structure that separates the task of labeling from the actioning task that actually controls what gets into your feed:

The act of moderation is split into two distinct concepts. The first is labeling, and the second is actioning. In a centralized system the process of content review can lead directly to a moderation decision to remove content across the site. In a distributed system the content reviewers can provide information but cannot force every moderator in the system to take action.

Labels

In a centralized system there would be a Terms of Service for the centralized service. They would hire a Trust and Safety team to label content which violates those terms. In a decentralized system there is no central point of control to be leveraged for trust and safety. Instead we need to rely on data labelers. For example, one data labeling service might add safety labels for attachments that are identified as malware, while another may provide labels for spam, and a third may have a portfolio of labels for different kinds of offensive content. Any indexer or home server could choose to subscribe to one or more of these labeling services.

The second source of safety labels will be individuals. If a user receives a post that they consider to be spam or offensive they can apply their own safety labels to the content. These signals from users can act as the raw data for the larger labeling services to discover offensive content and train their labelers.

By giving users the ability to choose their preferred safety labelers, we allow the bar to move in both directions at once. Those that wish to have stricter labels can choose a stricter labeler, and those that want more liberal labels can choose a more liberal labeler. This will reduce the intense pressure that comes from centralized social networks trying to arrive at a universally acceptable set of values for moderating content.
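As a concrete illustration of how such labels might be represented as data -- again a hypothetical sketch of my own, not the Bluesky schema -- each label is simply a claim about content, attributed to whichever labeler issued it:

```typescript
// Hypothetical sketch of safety labels as data; names are illustrative.

// A label is an assertion about a piece of content, attributed to its
// issuer -- a labeling service or an individual user.
interface SafetyLabel {
  subjectUri: string; // the post or attachment being labeled
  label: string;      // e.g. "spam", "malware", "offensive"
  issuerDid: string;  // which labeler is making the claim
  createdAt: number;
}

// An indexer or home server subscribes only to the labelers it trusts,
// so different subscribers see different label sets over the same content.
function labelsFor(
  subjectUri: string,
  allLabels: SafetyLabel[],
  trustedIssuers: Set<string>
): SafetyLabel[] {
  return allLabels.filter(
    (l) => l.subjectUri === subjectUri && trustedIssuers.has(l.issuerDid)
  );
}
```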

Actions

Safety labels don’t inherently protect users from offensive content. Labels are used in order to determine which actions to take on the content. This could be any number of actions, from mild actions like displaying context, to extreme actions like permanently dropping all future content from that source. Actions such as contextualizing, flagging, hiding behind an interstitial click through, down ranking, moving to a spam timeline, hiding, or banning would be enacted by a set of rules on the safety labels.

This divide empowers users with increased control of their timeline. In a centralized system, all users must accept the Trust and Safety decisions of the platform, and the platform must provide a set of decisions that are roughly acceptable to all users. By decomposing labels and the resulting actions, we enable users to choose labelers and resulting actions which fit their preferences.

Each user’s home server can pull the safety labels on the candidate content for the home timeline from many sources. It can then use those labels in curating and ranking the user timeline. Once the events are sent to the client device the same safety labels can be used to feed the UX in the app.
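Putting the pieces together, a home server's curation step might look roughly like the following sketch, which reuses the hypothetical Post, SafetyLabel, and labelsFor types from above (again my own illustration, not the actual implementation): rules chosen by the user map labels to actions, and the actions modulate ranking and display:

```typescript
// Hypothetical actioning sketch: user-chosen rules map labels to actions.

type ModAction = "show" | "contextualize" | "downrank" | "interstitial" | "hide";

// Ordered from mildest to most severe, so conflicting labels resolve
// to the most severe applicable action.
const SEVERITY: ModAction[] = ["show", "contextualize", "downrank", "interstitial", "hide"];

// A user's chosen policy: which label triggers which action.
type ActionRules = Record<string, ModAction>;

const cautiousRules: ActionRules = {
  spam: "hide",
  malware: "hide",
  offensive: "interstitial",
  disputed: "contextualize",
};

interface RankedPost {
  post: Post;
  score: number;     // ranking score after label-based adjustments
  action: ModAction; // how the client UX should treat the post
}

// The home server pulls labels from the user's subscribed labelers,
// resolves them to actions via the user's rules, and folds the result
// into timeline curation and ranking.
function curate(
  candidates: { post: Post; uri: string; score: number }[],
  allLabels: SafetyLabel[],
  trustedIssuers: Set<string>,
  rules: ActionRules
): RankedPost[] {
  return candidates
    .map(({ post, uri, score }) => {
      const labels = labelsFor(uri, allLabels, trustedIssuers);
      const action = labels
        .map((l) => rules[l.label] ?? "show")
        .reduce(
          (worst, a) => (SEVERITY.indexOf(a) > SEVERITY.indexOf(worst) ? a : worst),
          "show" as ModAction
        );
      return { post, action, score: action === "downrank" ? score * 0.5 : score };
    })
    .filter((r) => r.action !== "hide") // "hide" drops it from this feed only
    .sort((a, b) => b.score - a.score);
}
```

Note that a stricter or more liberal user changes only `rules` and `trustedIssuers`; the underlying speech layer is untouched.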

This just hints at the wide array of factors that can be used in ranking and recommending, which I have explored in a major piece in Tech Policy Press, and in more detail on my blog (notably this post). One point of special interest is the suggestion that a "source of safety labels will be individuals" -- I have suggested that crowdsourcing can be a powerful tool for creating a "cognitive immune system" that can be more powerful, scalable, and responsive in real time than conventional moderation.

The broader view of what this means for social media and society is the subject of the series I am doing with Chris Riley in Tech Policy Press. But this Bluesky document provides a nice explanation of some basic ideas, and demonstrates progress toward making such systems a reality. 

The hope is that Twitter applies such ideas -- and that others do.

 

Friday, April 29, 2022

"Delegation, or, The Twenty Nine Words that the Internet Forgot" -- A Series in Tech Policy Press

“It is the policy of the United States…to encourage the development of technologies which maximize user control over what information is received by individuals…who use the Internet…” (from Section 230 of the Communications Decency Act)

***Background and running updates below*** 

This series is being published in Tech Policy Press -- co-authored with tech policy executive Chris Riley...

Part 1. (2/27/22)
Delegation, or, The Twenty Nine Words that the Internet Forgot
 

The series begins with an exploration of why this emphasis on user control is far more important than generally recognized, and how an architecture designed to make high levels of user control manageable can enhance the nuance, context, balance, and value in human discourse that current social media are tragically degrading.

While that portion of the much-discussed "Section 230" has been neglected, those ideas have re-emerged -- most prominently in the 2019 ACCESS Act introduced in the U.S. Senate, which included among its provisions a requirement to provide “delegatability” – enabled through APIs that allow a user to authorize a third party to manage the user’s content and settings directly on the user’s behalf.
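To make "delegatability" more tangible, here is a hypothetical sketch of what such an API surface might look like -- the endpoint names, scopes, and types are invented for illustration, not drawn from the ACCESS Act or any actual platform:

```typescript
// Hypothetical delegation API in the spirit of the ACCESS Act's
// "delegatability" provision; all names here are invented.

// The user grants a third-party agent limited, revocable authority
// over parts of their account, such as filtering settings.
interface DelegationGrant {
  userId: string;
  delegateId: string; // the third-party service acting for the user
  scopes: ("read_feed" | "set_filters" | "manage_blocks")[];
  expiresAt: number;  // grants should be time-limited and revocable
}

// Operations a platform might expose behind such a grant.
interface PlatformDelegationApi {
  authorize(grant: DelegationGrant): Promise<string>; // returns an access token
  revoke(userId: string, delegateId: string): Promise<void>;
  // A delegate acting on the user's behalf, within the granted scopes:
  setFilterSettings(token: string, settings: Record<string, unknown>): Promise<void>;
}
```

The essential point is that the user, not the platform, decides which agent manages their feed, and can fire that agent at any time.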

This opening essay concludes: 

User choice is essential to a social and media ecosystem that preserves and augments democracy, self-actualization, and the common welfare – instead of undermining it. And delegation is the linchpin that can make that a reality.

Part 2. (4/27/22)
Understanding Social Media: An Increasingly Reflexive Extension of Humanity 

We shape our tools and thereafter our tools shape us. (Marshall McLuhan)

Social media do not behave like other media. Speech is not primarily broadcast, as through megaphones and amplification, but rather propagates more like word-of-mouth, from person to person. Feedback loops of reinforcing interactions by other users can snowball, or just fizzle out. Understanding how to modulate the harmful aspects of wild messaging cascades requires stepping back and, instead of viewing the messages as individual items of content, seeing them as stages in reflexive flows in which we and these new media tools shape each other. The reflexivity is the message. A media ecology perspective can help us understand where current social media have gone wrong and orchestrate the effort to manage increasing reflexivity in a holistic, coherent, inclusive, and effective way.
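One way to see the snowball-or-fizzle dynamic is as a branching process, in the spirit of epidemic models. The toy sketch below is my own illustration, not from the article: each share triggers, on average, r further shares, so r above 1 snowballs into a cascade while r below 1 dies out -- and moderation, ranking, and friction choices effectively modulate r:

```typescript
// Toy branching-process model of a messaging cascade (deterministic
// expected-value approximation, for illustration only).
function simulateCascade(r: number, generations: number): number {
  let active = 1; // the original post
  let total = 1;
  for (let g = 0; g < generations; g++) {
    active = active * r; // expected new shares in this generation
    total += active;
  }
  return Math.round(total);
}

console.log(simulateCascade(1.2, 20)); // ~225 shares: snowballs
console.log(simulateCascade(0.8, 20)); // ~5 shares: fizzles out
```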

Background

This page is to be updated as the series unfolds -- with my own personal perspectives and links to relevant materials. All views expressed here are my own (but owe much to wise insights from Chris). 

My other works related to this are listed in the Selected Items tab, above. Some that are most relevant to expand on the themes introduced in this first article:

This diagram from my The Internet Beyond... article may also be helpful:


Chris and I are very pleased with how this collaboration is synergizing our ideas, and how we draw on very complementary backgrounds: his in internet policy, governance, and law; mine in the technology and business of media as a tool for augmenting human discourse and intellect.

Running updates

[5/6/22:] The Dorsey-funded Bluesky project published an architecture paper that helps clarify key ideas in the vision of decentralized, user-delegated control of social media filtering -- suggestive of possible directions for Twitter under Musk, and more broadly. I posted some excerpts from this (somewhat technical) document, with some light context and links.

[5/6/22:] Today I was reminded how the media ecology of reflexivity, augmented by human-machine loops, has surprisingly early roots. I first dug into that around 1970, including Licklider's 1960 Man-Computer Symbiosis, which I now see was very pointed about this symbiosis as going beyond the levels of "mechanically extended man" (a very McLuhanesque phrase that Licklider cited to 1954) and "artificial intelligence." Licklider inspired (and sponsored) Engelbart's "Augmenting Human Intellect," which inspired my views on making social media augment human society -- and also anticipates the related resurgence of thinking about more "human-centered AI," and AI Delegability. And of course Bush's 1945 As We May Think inspired all of this.

This reflexive intertwingling of ideas is also apropos of the original attribution of our opening quote ("We shape our tools and thereafter our tools shape us") to McLuhan -- we removed any specific attribution because the line may have been taken from others; what matters to us is that McLuhan adopted it and gave it added attention.

[4/29/22:] Opening sections revised to add the second in the series.

[2/28/22:] Very pleased to see this:


Acknowledgements

My thanks to the many outstanding thinkers in this space who have been helpful in developing these ideas -- and especially to Justin Hendrix, co-founder and editor of Tech Policy Press, for his support and skilled editing. ...And of course to Chris Riley for this very stimulating and enjoyable collaboration.

[This post was first published 2/27/22 when the series began, and has since been updated and expanded as additional essays are published.]

Friday, February 04, 2022

The Wrong Way to Preserve Journalism

Experts Spar At Hearing on Journalism, Tech and Market Power, as Justin Hendrix nicely reports today in Tech Policy Press.

Here is a brief commentary [still in progress] on why the proposed "Journalism Competition and Preservation Act of 2021" would be harmful law in terms of its effects on journalism, competition, business models, and the essential nature of the Internet. 

This bill provides for a badly designed subsidy, in a way that is the very opposite of enhancing competition or access to information via the Internet, and removes motivation for news publishers to move beyond their failed business models.

There is a case for subsidizing the preservation of news (especially local news), and for limiting the monopoly rents that the platforms extract from advertisers. Until the market can do that on its own, the way to get there is a tax on platform ad revenues, used to fund a subsidy for journalism and to support efforts to find better business models, so that journalism can sustain itself in the new digital world of abundance.

My work on the FairPay framework suggests how the latter might be accomplished, in ways that few yet understand, as outlined below. But in the meantime a tax + subsidy strategy seems the only viable option.

The problems with the JCPA approach

Hendrix links to a statement by multiple public interest organizations on why this is the wrong remedy:

Free Press led a letter signed also by Public Knowledge, Wikimedia and Common Cause, among others, that said the JCPA “may actually hurt local publishers by entrenching existing power relationships between the largest platforms and largest publishers. News giants with the greatest leverage would dominate the negotiations and small outlets with diverse or dissenting voices would be unheard if not hurt.”

He also cites the hearing testimony of Dan Gainor and Daniel Francis, who make compelling arguments as to why the good intentions of the advocates are misguided.

Joshua Benton at NiemanLab provides an excellent analysis of why “Australia’s latest export is bad media policy, and it’s spreading fast” (see the "third" idea, part way down): 

The base problem here is that these governments are telling the tech giants that their use of their country’s publishers’ news content has a monetary value that is somehow different from all other content in existence. And that’s the important word here: use.

...You can have a million complaints about these companies — I do! — but at a fundamental level, the ways in which they “use” content are simply inherent to their natures as a search engine and a social platform.

...The core issue is misdirection. Publishers complain about Google and Facebook’s use of their stories — but that’s not what they’re actually angry about. What they’re angry about is that Google and Facebook dominate the digital advertising business — just as they used to dominate the print advertising business. And those are two really different things!

...It’s also why I get cross with media reporters who let sloppy language seep into their stories — like that this is all about setting “a price for news content published on the companies’ platforms.” None of this content is being published on Google and Facebook unless the publishers have specifically asked it to be. It’s being linked to, in the same way everything else in the world is being linked to. And unless you think the very concept of a search engine or a social platform is immoral, linking to things is just a fundamental part of how these things work.

...So tax them. Say you’re going to put a 1.5% tax on the targeted digital advertising revenue of all companies with a market cap over $1 trillion, or annual revenues over $20 billion, or whatever cutoff you want. That would generate billions of dollars a year in a way that doesn’t warp competition or let Google and Facebook use their cash as a tool for targeted PR payoffs.
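For rough scale, a back-of-envelope check of that "billions of dollars a year" claim (my own arithmetic, using round approximations of the companies' reported 2021 ad revenues):

```typescript
// Back-of-envelope check of Benton's "billions a year" -- the revenue
// figures are round approximations of 2021 reports, not exact values.
const googleAdRevenue = 210e9; // ~$210B (2021, approximate)
const metaAdRevenue = 115e9;   // ~$115B (2021, approximate)
const taxRate = 0.015;         // Benton's illustrative 1.5%

const annualFund = (googleAdRevenue + metaAdRevenue) * taxRate;
console.log(`~$${(annualFund / 1e9).toFixed(1)}B per year`); // ~$4.9B per year
```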

Better approaches

As for the question of sustainable business models for journalism, my work on FairPay explains why current models based on artificial scarcity and flat-rate pricing fail in the world of digital abundance, where prices should map to highly diverse and variable customer value propositions -- and how more adaptive, win-win models based on customer value in an ongoing relationship can change that.

A large body of work on that is cited on my FairPayZone blog, including work with academic co-authors in Harvard Business Review and two scholarly marketing journals. Some notable items are (start with the first):

Of course, getting to this kind of model will take time, experimentation, and learning -- which the news publishers have been too distracted, too stretched thin, or simply too unimaginative to do. So...

In the meantime, Congress should provide a stop-gap that sustains journalism and helps it move toward being self-sustaining in this new digital world:

  • Tax the ad revenues of the dominant platforms to limit their obscene monopoly profits on advertising and help drive them toward better business models of their own (see Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data).
  • Temporarily, use much of that tax revenue to subsidize news -- directly to publishers of quality news, especially local, and to create new public interest publishers much like public radio and television. 
  • For a long-term remedy, use a significant portion of that tax revenue to fund experiments in better business models for publishers, and in operational platforms that help them generate direct reader/patron revenue in consumer-value-efficient ways.
Importantly, publishers should not be rewarded for their lack of business model innovation. Subsidies should be narrowly channeled to preserving actual journalism work itself in the short term, as emergency relief, while also supporting business model innovation projects -- including development of shared Revenue-as-a-Service platforms. (These qualifications on the timing and use of the tax revenue were spurred by a comment from Chris Riley, referring to his post from a year ago, The Great War, Part 3: The Internet vs Journalism.)

This alternative approach would address the symptoms of failed business models for journalism, with none of the damage that would be caused by embracing cartels of businesses that failed to adapt, eliminating "fair use," and destroying the fundamental structure of linking on the Internet that has created so much value for all of us -- even for the publishers who downplay the significant promotional value it has created for them.