Friday, February 12, 2021

Growing Support for "Making Social Media Serve Society"

It was nice to learn that Jack Dorsey of Twitter was exploring ideas similar to what I have proposed -- in a project called Bluesky. As I was finalizing my (just prior) 2/11/21 post, Making Social Media Serve Society, I learned of this important development in a report by Casey Newton. That led me to other supportive items, including Senate testimony by prominent AI expert Stephen Wolfram advancing similar ideas. 

  • The prior post has a brief preliminary update addressing the Twitter actions (duplicated below). 
  • Rather than update the body of that post at this time (except to add missing links and correct formatting and typos), I provide a running commentary on my ongoing findings and views here. 

First a summary of the key ideas of the original, then running updates (most recent first).

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Key ideas:

Paradise lost …and found -- saving democracy by serving users and society

The root causes of the crisis in our marketplace of ideas are that:

  1. The dominant social media platforms selectively control what we see, 
  2. and yet they are motivated not to present what we value seeing, but instead to “engage” audiences to click ads 

      They use their control of our minds not to serve us, but to extract value from us.

The best path to reduce the harm and achieve the lost promise of digital media is to remove control over what users see in their feeds from the platforms. Instead, create an open market in filtering “middleware” services that give users more power to control what they see in their feeds.
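To make the middleware idea concrete, here is a minimal, purely illustrative sketch (all names and scores are hypothetical, not any platform's actual API): the platform supplies candidate posts, and a user-chosen filtering service, rather than the platform's engagement optimizer, decides the feed order.

```python
# Hypothetical sketch of "middleware" filtering: the platform hosts content,
# but a user-selected service ranks it. Names and scores are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float  # what the platform's ad model would optimize
    quality_score: float     # what a user-chosen filter might optimize

# A "filtering middleware" is simply a ranking function the user selects.
FilterService = Callable[[List[Post]], List[Post]]

def engagement_filter(posts: List[Post]) -> List[Post]:
    """The status quo: rank by predicted engagement (clicks)."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def quality_filter(posts: List[Post]) -> List[Post]:
    """An alternative a user might choose: rank by assessed quality."""
    return sorted(posts, key=lambda p: p.quality_score, reverse=True)

def build_feed(posts: List[Post], chosen_filter: FilterService) -> List[Post]:
    # The platform delivers candidates; the user's middleware decides the order.
    return chosen_filter(posts)

posts = [
    Post("a", "outrage bait", engagement_score=0.9, quality_score=0.2),
    Post("b", "careful reporting", engagement_score=0.4, quality_score=0.9),
]
feed = build_feed(posts, quality_filter)
```

The point of the sketch is the separation of roles: the platform keeps hosting and delivery, while ranking becomes a pluggable choice in an open market.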

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Summary of updates:

So far, the additional information and analysis seem very encouraging:

  • Adding support for this idea of shifting control of filters from the platforms to the users
  • Offering some slim hope (at least from Jack Dorsey of Twitter) that these reforms might be possible in part as self-regulation, rather than having to be imposed by regulators.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

2/16

Crowdsourcing for the "cognitive immune system" to downrank fake news: I had intended to include citations to important research that shows crowdsourcing of news source quality can compete with professional fact-checking as to quality -- and is clearly superior as to speed, cost, and scalability. See studies by Pennycook and Rand, and by Epstein, Pennycook and Rand.
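The mechanism those studies point to can be sketched in a few lines (hypothetical data and function names, in the spirit of the research rather than any study's actual method): average lay ratings of each news source become a weight that downranks posts from low-quality sources.

```python
# Minimal sketch of crowdsourced source-quality downranking (hypothetical
# names/data): crowd ratings per source are averaged, then used to discount
# each post's base score before ranking.
from statistics import mean
from typing import Dict, List, Tuple

def source_quality(ratings: Dict[str, List[int]]) -> Dict[str, float]:
    """Average crowd ratings (1-5) per source, scaled to 0-1."""
    return {src: mean(r) / 5.0 for src, r in ratings.items()}

def rerank(posts: List[Tuple[str, str, float]],
           quality: Dict[str, float]) -> List[Tuple[str, str, float]]:
    """Multiply each post's base score by its source's crowd-assessed
    quality (unknown sources get a neutral 0.5), then sort."""
    return sorted(posts,
                  key=lambda p: p[2] * quality.get(p[1], 0.5),
                  reverse=True)

ratings = {"reliable.example": [5, 4, 5], "junk.example": [1, 2, 1]}
posts = [("viral hoax", "junk.example", 0.9),
         ("sober report", "reliable.example", 0.6)]
feed = rerank(posts, source_quality(ratings))
```

Because ratings aggregate continuously from many lay raters, this kind of weighting can be applied far faster and more cheaply than professional fact-checking of individual claims.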

2/13

Barak Richman and Francis Fukuyama's How to Quiet the Megaphones of Facebook, Google and Twitter (2/12, WSJ) reinforces and updates their prior call for this strategy:

  • The subtitle: "Today’s often toxic social-media environment calls for a fix that puts choices back in the hands of consumers. A new layer of ‘middleware’ can do that."
  • The closing paragraph: "Middleware offers a structural fix and avoids collateral damage. It will not solve the problems of polarization and disinformation, but it will remove an essential megaphone that has fueled their spread. And it will enhance individual autonomy, leveling the playing field in what should be a democratic marketplace of ideas."

Twitter's 2/9 earnings call includes comments by Jack Dorsey on the related Bluesky project.

  • "...we're excited to build to address some of the problems that is facing Section 230 is giving more people choice around what relevance algorithms they're using, ranking algorithms they're using. You can imagine a more market-driven and marketplace approach to algorithms. And that is something that not only we can host, but we can participate in."
  • "...we will have access to a much larger conversation, have access to much more content, and we'll be able to put many more ranking algorithms that suit different people's needs on top of it. And you can imagine an app store-like view of ranking algorithms that give people optimal flexibility in terms of how they see it. And that will not only help our business, but drive more people into participating in social media in the first place. So this is something we believe in, not just from an open Internet standpoint, but also we believe it's important and it really helps our business thrive in a significantly new way, given how much bigger it could be."

2/12

Casey Newton's Twitter seeks the wisdom of crowds (2/11/21, Platformer) updates on two separate Twitter initiatives: 

  • Bluesky is the one most central to my prior post, breaking out filtering to support an open market in competing services that would let users choose one or more filtering services suited to their needs. This is still just conceptual, but the fact that Dorsey actively supports exploration of divesting control to an open market "app-store-like view of ranking algorithms that give people ultimate flexibility in terms of" what posts are put in front of them seems a positive sign. 
  • Most importantly, it suggests some possibility this might be embraced voluntarily, as self-regulation.
  • Birdwatch relates to crowdsourcing feedback on fact-checking, doing the basics of what I also referred to in the prior post and covered more deeply in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News. Newton's reporting is that Birdwatch is still in an embryonic state.

Stephen Wolfram's Testifying at the Senate about A.I.‑Selected Content on the Internet (6/25/19) makes very similar suggestions about an open market in user-selected filters:

  • He explains how problematic it is to get AI to do this task, and why neither government nor monolithic oligopoly platforms should make filtering decisions -- and that user selection can be done at two levels "based on mixing technical ideas with market mechanisms. The basic principle of both suggestions is to give users a choice about who to trust, and to let the final results they see not necessarily be completely determined by the underlying ACS business."
  • One is to have the independent "final ranking providers" make the selections
  • The other is to have independent "constraint providers" define "sets of constraints," such as for balance or leanings or types of content, on how the platforms make the selections
  • "There’s been debate about whether ACS businesses are operating as “platforms” that more or less blindly deliver content, or whether they’re operating as “publishers” who take responsibility for content they deliver. Part of this debate can be seen as being about what responsibility should be taken for an AI. But my suggestions sidestep this issue, and in different ways tease apart the “platform” and “publisher” roles."
  • He suggests "both Suggestions...attempt to leverage the exceptional engineering and commercial achievements of the [Automatic Content Selection] businesses, while diffusing current trust issues about content selection, providing greater freedom for users, and inserting new opportunities for market growth."
Wolfram's commentary seems to provide very strong support for the ideas in my post, along with the Fukuyama article and report that I cited there.
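The difference between Wolfram's two suggestions can be sketched as follows (all names here are hypothetical stand-ins, not Wolfram's code): a "final ranking provider" replaces the platform's ranking outright, while a "constraint provider" leaves the platform's ranking in place but requires it to satisfy user-chosen properties, such as balance across leanings.

```python
# Illustrative contrast (hypothetical names) between Wolfram's two suggestions:
# (1) an independent "final ranking provider" produces the feed itself;
# (2) an independent "constraint provider" only checks properties of the
#     platform's own feed, e.g. balance across political leanings.
from typing import List, Tuple

Item = Tuple[str, str]  # (headline, leaning), e.g. ("story A", "left")

def platform_ranking(items: List[Item]) -> List[Item]:
    """Stand-in for the platform's own opaque selection."""
    return items  # whatever order the platform produced

def final_ranking_provider(items: List[Item]) -> List[Item]:
    """Suggestion 1: a user-chosen third party orders the feed itself
    (here, by an arbitrary rule of its own -- alphabetical headline)."""
    return sorted(items, key=lambda it: it[0])

def balance_constraint(feed: List[Item], top_n: int = 4) -> bool:
    """Suggestion 2: a constraint provider verifies a property of the
    platform's feed -- here, that the top N items span multiple leanings."""
    leanings = {leaning for _, leaning in feed[:top_n]}
    return len(leanings) > 1

items = [("story A", "left"), ("story B", "left"),
         ("story C", "right"), ("story D", "left")]

feed = platform_ranking(items)
ok = balance_constraint(feed)  # platform feed must satisfy the constraint
```

The first option moves the "publisher" role entirely to the third party; the second leaves selection with the platform but makes it accountable to externally defined, user-chosen constraints.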

Lucas Matney's Twitter’s decentralized future (1/15/21, TechCrunch) raises the dark side: "The platform’s vision of a sweeping open standard could also be the far-right’s internet endgame:"
  • "Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would also provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe."
  • “I think the solution to the problem of algorithms isn’t getting rid of algorithms — because sorting posts chronologically is an algorithm — the solution is to make it an open pluggable system by which you can go in and try different algorithms and see which one suits you or use the one that your friends like,” quoting a member of the working group.
  • This is seen as having appeal as a standard beyond Twitter: "Right at this moment I think that there’s going to be a lot of incentive to adopt, and I don’t just mean by end users, I mean by platforms, because Twitter is not the only one having these really thorny moderation problems ...I think people understand that this is a critical moment,” quoting another group member.
I see Matney's concerns as valid and important to deal with, but ultimately manageable and necessary in a free society, as the prior post explains in the section on "Driving our own filters."

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

As noted in the prior post (near midnight 2/11):

Special update: This is “Version 0.1,” a discussion draft that was completed on 2/11/21, hours before Casey Newton’s report made me aware of a move by Twitter to research the direction proposed here. Pending analysis and revisions to reflect that, it seemed useful to get this version online now for discussion. Newton’s report links to Jack Dorsey’s initial sketchy announcement of this "@bluesky" effort about a year ago, and items linked at The Verge led to an interesting analysis on TechCrunch. My initial take is that this is a very positive move, while recognizing that the TechCrunch analysis rightly notes the risks that I had recognized below, and have thought to be important to deal with, but ultimately manageable and necessary in a free society. Dorsey's interest in this concept gives some reason to hope that this could occur as voluntary self-regulation, without need for the mandates I suggested below as likely to be necessary. (late 2/11)
