Tuesday, April 20, 2021

Tech Policy Press: The Internet Beyond Social Media Thought-Robber Barons


My new article, "The Internet Beyond Social Media Thought-Robber Barons," was published in Tech Policy Press on 4/22/21:
  • It is now apparent that social media is dangerous for democracy, but few have recognized a simple twist that can put us back on track.
  • A surgical restructuring -- an "unbundling" to an open market strategy that shifts control over our feeds to the users they serve -- is the only practical way to limit the harms and enable the full benefits of social media.
(This is an extensively updated and improved version of the discussion draft first posted on this blog in February, now integrating more proposals, addressing common objections, and drawing on feedback from a number of experts in the field -- and the very helpful editing of Justin Hendrix.)

I summarize and contrast these proposals:

  • Most prominently in Foreign Affairs and the Wall Street Journal by Francis Fukuyama, Barak Richman, Ashish Goel, and others in the report of the Stanford Working Group on Platform Scale. (Their use of the technical term "middleware" for this approach has been picked up by some other commentators.)
  • Independently by Stephen Wolfram, Mike Masnick, and me.
  • And with what might become important real-world traction in the exploratory Bluesky initiative by Jack Dorsey at Twitter.

The article covers new ground in presenting a concrete vision of what an open market in filtering services might enable -- how this can bring individual and social purpose back to social media, not only to protect but to systematically enhance democracy, and how that can augment human wisdom and social interaction more broadly. That vision should be of interest to thoughtful citizens as well as policy professionals.


I welcome your feedback and support for these proposals, and can be reached at intertwingled [at] teleshuttle [dot] com.

--------------------------

UPDATES:

  • [7/21/21]
    A very interesting five-article debate on these unbundling/middleware proposals, all headed The Future of Platform Power, is in the Journal of Democracy, responding to Fukuyama's April article there. Fukuyama responds to the other four commentaries (which include a reference to my Tech Policy Press article). The one by Daphne Keller, consistent with her items noted just below, is generally supportive of this proposal, while providing a very constructive critique that identifies four important concerns. As I tweeted in response: "'The best minds of my generation are thinking about how to make people click ads' – get our best minds to think about empowering us in whatever ways fulfill us! @daphnehk problem list is a good place to start, not to end." I plan to post further comments on this debate soon.

  • [6/15/21]
    Very insightful survey analysis of First Amendment issues relating to proposed measures for limiting harmful content on social media -- and how most run into serious challenges -- in Amplification and Its Discontents, by Daphne Keller (a former Google Associate General Counsel, now at Stanford, 6/8/21). Wraps up with discussion of proposals for "unbundling" of filtering services: "An undertaking like this would be very, very complicated. It would require lawmakers and technologists to unsnarl many knots.... But unlike many of the First Amendment snarls described above, these ones might actually be possible to untangle." Keller provides a very balanced analysis, but I read this as encouraging support on the legal merits of what I have proposed: the way to preserve freedom of expression is to protect users' freedom of impression -- not easy, but the only option that can work. Keller's use of the term "unbundling" is also helpful in highlighting how this kind of remedy has precedent in antitrust law.
    + Interview with Keller on this article by Justin Hendrix of Tech Policy Press, Hard Problems: Regulating Algorithms & Antitrust Legislation (6/20/21).
    + Added detail on the unbundling issues is in Keller's 9/9/20 article, If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
  • Another perspective on how moderation conflicts with freedom is in On Social Media, American-Style Free Speech Is Dead (Gilad Edelman, Wired 4/27/21), which reports on Evelyn Douek's more international perspective. Key ideas are to question the feasibility of American-style binary free speech absolutism and to shift from categorical limits to more proportionality in balancing societal interests. I would counter that the decentralization of filtering to user choice enables proportionality and balance to emerge from the bottom up, where it has a democratic validity as "community law," rather than being imposed from the top down as "platform law." The Internet is all about decentralized control -- why should we sacrifice freedom of speech to a failure of imagination in managing a technology that should enhance freedom? Customized filtering can provide a receiver-specific richness of proportionality that better balances rights of impression with nuanced freedom of expression. Douek rightly argues that we must accept an error rate in moderation -- why not expect a bottom-up, user-driven error rate to be more open and responsive to evolving wisdom and diverse community standards than one applied across the board?
  • [5/18/21]
    Clear insights on the new dynamics of social media -- plus new strategies for controlling disinformation with friction, circuit-breakers, and crowdsourced validation -- in How to Stop Misinformation Before It Gets Shared, by Renee DiResta and Tobias Rose-Stockwell (Wired 3/26/21). Very aligned with my article (but stops short of the contention that democracy cannot depend on the platforms to do what is needed).
  • [5/17/21]
    Important support and suggestions related to Twitter's Bluesky initiative from eleven members of the Harvard Berkman Klein community are in A meta-proposal for Twitter's bluesky project (3/31/21). They are generally aligned with the directions suggested in my article.
  • [4/22/21]
    Another piece by Francis Fukuyama that addresses his Stanford group proposal is in the Journal of Democracy: Making the Internet Safe for Democracy (April 2021).
    (+See 7/21/21 update, above, for follow-ups.)

--------------------------

Related items by me:  see the Selected Items tab.

Where I am coming from: The Roots of My Ideas on Tech Policy

Thursday, April 01, 2021

But Who Should Control the Algorithm, Nick Clegg? Not Facebook ...Us!

(Image adapted from cited Nick Clegg article)
Facebook's latest attempt to justify their stance on disinformation and other harms, and their plans to make minor improvements, actually points to the reason those improvements are not nearly enough -- and can never be. They need to make far more radical moves to free our feeds, as I have proposed previously.

Facebook’s VP of Global Affairs, Nick Clegg, put out an article yesterday that provides a telling counterpoint to those proposals. You and the Algorithm: It Takes Two to Tango defends Facebook in most respects, but accepts the view that users need more transparency and control:

You should be able to better understand how the ranking algorithms work and why they make particular decisions, and you should have more control over the content that is shown to you. You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes — to alter your personal algorithm…

He goes on to describe laudable changes Facebook has just made, with further moves in that direction intended. 

But the question is: how can this be more than Band-Aids covering the deeper problem? Seeking to put the onus on us -- “We need to look at ourselves in the mirror…” -- he goes on (emphasis added):

…These are profound questions — and ones that shouldn’t be left to technology companies to answer on their own…Promoting individual agency is the easy bit. Identifying content which is harmful and keeping it off the internet is challenging, but doable. But agreeing on what constitutes the collective good is very hard indeed.

Exactly the point of these proposals! No private company can be permitted to attempt that, even under the most careful regulation -- especially in a democracy. That is all the more true for a dominant social media service. Further, slow-moving regulation cannot be effective in an age of dynamic change. We need a free market in filters from a diversity of providers -- for users to choose from. Twitter seems to understand that; it seems clear that Facebook does not.

Don't try to tango with a dancing bear

As I explain in my proposal:

Social media oligarchs have seduced us -- giving us bicycles for the mind that they have spent years and billions engineering to "engage" our attention. The problem is that they insist on steering those bicycles for us, because they get rich selling advertising that they precisely target to us. Democracy and common sense require that we, the people, keep control of our marketplace of ideas. It is time to wrestle back the steering of our bicycles, so that we can guide our attention where we want. Here is why, and how. Hint: it will probably require regulation, but not in the ways currently being pursued.

What I and others have proposed -- and that Jack Dorsey of Twitter has advocated -- is to spin out the filtering of our newsfeeds (and other recommendations of content, users, and groups) to a multitude of new "middleware" services that work with the platforms, but that users can choose from in an open market, and mix and match as they like. 
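
To make the architecture concrete, here is a minimal, purely hypothetical sketch (in TypeScript) of what the contract between a platform and user-chosen filtering services might look like. The type names, fields, and the composeFeed function are illustrative assumptions of mine, not part of any platform's actual API or of the cited proposals.

```typescript
// Hypothetical sketch only: names and fields are illustrative assumptions,
// not an API published by any platform or by the Bluesky project.

// A candidate item the platform exposes to third-party filtering services.
interface FeedItem {
  id: string;
  authorId: string;
  text: string;
  engagementSignals: Record<string, number>; // e.g. likes, reshares
}

// The contract a "middleware" filtering service would implement.
// The platform supplies candidate items; the service returns a ranking
// chosen according to the user's selected criteria, not the platform's.
interface FilteringService {
  name: string;
  rankFeed(userId: string, candidates: FeedItem[]): Promise<FeedItem[]>;
}

// A user composes their feed from several services they have chosen,
// interleaving results -- the "mix and match" idea in the proposal.
async function composeFeed(
  userId: string,
  services: FilteringService[],
  candidates: FeedItem[],
  limit = 50
): Promise<FeedItem[]> {
  const rankings = await Promise.all(
    services.map((s) => s.rankFeed(userId, candidates))
  );
  const seen = new Set<string>();
  const merged: FeedItem[] = [];
  // Round-robin across the user's chosen services until the feed is full.
  for (let i = 0; merged.length < limit; i++) {
    let added = false;
    for (const ranking of rankings) {
      const item = ranking[i];
      if (item && !seen.has(item.id)) {
        seen.add(item.id);
        merged.push(item);
        added = true;
        if (merged.length >= limit) break;
      }
    }
    if (!added) break; // all rankings exhausted
  }
  return merged;
}
```

The point of the sketch is the division of labor: the platform supplies the candidate items, the user's chosen services decide the ranking, and the user can combine several services at once rather than accepting a single platform-controlled feed.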

"Agreeing on what constitutes the collective good" has always been best done bthe collective human effort of an open market of ideas. Algorithms can aid humans in doing that, but we, the people, must decide which algorithms, with what parameters and what objective functions. These open filtering proposals explain how and why. What Clegg suggest is good advice as far as it goes, but, ultimately, too much like trying to tango with a dancing bear.