Wednesday, December 16, 2020

Biden Campaign Shows How Social Media Can Reduce Polarization

A compelling report by Kevin Roose in the NY Times explains how the Biden campaign used "surprising validators" like Fox News to penetrate the filter bubbles of Trump supporters. The broader lesson is that social media algorithms can automate this strategy for each of us -- surfacing turncoat and contrarian views from sources we trust that can make us stop and think. That can disarm polarization far more effectively than the “neutral” fact-checking and warning labels that the platforms have been pressured to try.

...So the campaign pivoted…expanding Mr. Biden’s reach by working with social media influencers and “validators,” people who were trusted by the kinds of voters the campaign hoped to reach.

...Perhaps the campaign’s most unlikely validator was Fox News. Headlines from the outlet that reflected well on Mr. Biden were relatively rare, but the campaign’s tests showed that they were more persuasive to on-the-fence voters than headlines from other outlets. So when they appeared — as they did in October when Fox News covered an endorsement that Mr. Biden received from more than 120 Republican former national security and military officials — the campaign paid to promote them on Facebook and other platforms.

“The headlines from the sources that were the most surprising were the ones that had the most impact …When people saw a Fox News headline endorsing Joe Biden, it made them stop scrolling and think.”

“Stop scrolling and think?” Does that happen when a social media user sees an “independent” fact-check warning label? Or an “authoritative” article that presents a contrary view? 

Cass Sunstein introduced the term “surprising validators” in a 2012 Times op-ed, explaining how they could cut through filter bubbles and echo chambers -- while “balanced” critiques were “likely to increase polarization rather than reduce it.” 

That spurred my suggestions that social media should build alerting to surprising validators directly into their algorithms, as a way to help combat growing polarization and disinformation. Unlike fact-checks and warning labels, which are not only slow and labor-intensive but fail to convince those already polarized, surprising validators can be identified automatically by social media with Internet speed, scale, and economy -- and work like Trojan horses to penetrate closed minds. 
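To make the automation idea concrete, here is a toy sketch of how a feed algorithm might flag a surprising validator: a story counts as surprising when a source the user trusts takes a stance sharply at odds with that source's typical lean. The source names, stance scores, and threshold are all illustrative assumptions, not any platform's actual data or API.

```python
# Toy sketch: flag "surprising validator" items in a candidate feed.
# A story is surprising when a source this user trusts takes a stance
# far from that source's typical lean on the claim in question.

from dataclasses import dataclass


@dataclass
class Story:
    source: str
    stance: float  # -1.0 (against the claim) .. +1.0 (for the claim)


# Illustrative data only: each source's typical lean on this claim,
# and which sources this particular user trusts.
TYPICAL_LEAN = {"FoxNews": -0.8, "MSNBC": 0.7, "Reuters": 0.0}
USER_TRUSTS = {"FoxNews"}


def is_surprising_validator(story: Story, gap: float = 1.0) -> bool:
    """True if a source the user trusts departs sharply from its usual lean."""
    if story.source not in USER_TRUSTS:
        return False  # only trusted sources can validate for this user
    typical = TYPICAL_LEAN.get(story.source, 0.0)
    # Large gap between this story's stance and the source's usual lean
    # is what makes the reader "stop scrolling and think."
    return abs(story.stance - typical) >= gap


feed = [
    Story("FoxNews", 0.6),   # trusted source, against its usual lean
    Story("FoxNews", -0.9),  # trusted source, but on-brand
    Story("MSNBC", 0.8),     # not a trusted source for this user
]
surprising = [s for s in feed if is_surprising_validator(s)]
```

Only the first story is flagged: it comes from a source the user trusts, taking a position far from that source's usual lean -- exactly the Fox-News-endorses-Biden pattern the campaign exploited by hand.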

I am seeking to publish a fuller discussion of this and why it is the best way to reverse polarization and create a "cognitive immune system" that can help protect our democracy.

Friday, December 11, 2020

Across a Crowded Zoom, But No Enchantment...

 ♫ Some enchanted evening…You may see a stranger…Across a crowded Zoom 

Just published in Techonomy, my "No Enchantment Across a Crowded Zoom" offers some musings on why virtual conferencing is so unsatisfying: it fails to convey the subtle energies and interpersonal mirroring of live interaction. (The article plays off what many of you know as perhaps the "greatest song ever written for a musical.")

Hopefully that article offers food for thought on what might be improved in Zoom and similar tools. (My thanks to Pip Mothersill, Ph.D. from MIT Media Lab, for some stimulating conversations on serendipity and Zoom.)

Here I add one tangential idea about the subtleties of communication that I was reminded of while writing the article:

GLENDOWER. I can call spirits from the vasty deep. 

HOTSPUR. Why, so can I, or so can any man; But will they come when you do call for them on Zoom?

Many years ago, I heard the famous Gyuto Monks* of Tibet do some of their striking meditative chants -- noted for the deep harmonics that their practiced throat-singing techniques create. The next day they were empaneled in a classroom with Robert Thurman leading an intimate chat about their experiences. 

One of the tidbits was the story of how reluctant they had been to allow their chants to be recorded. The reason for their reluctance was that the chants are part of a meditative process in which fierce demigods are summoned to appear, and then entreated to be beneficent.

The fear was that if the chanting was recorded, on playback, the summoned spirits might hear the call and actually appear. Finding no spiritually adept monks there to greet them, those demigods might become angry. 

Happily, after some thought and meditation, the monks concluded that the spirits would be called only by the live voices of monks in prayer, not the disembodied sounds of a recording!

[*The quality of YouTube audio does not do the chants justice, even for mere mortals -- quality recordings give more sense of the live experience. I can attest that no angry spirits materialized on playing the chants in my home (neither vinyl nor CD).]

Wednesday, December 09, 2020

“How to Save Democracy From Technology" (Fukuyama et al.)

[This is a quick preliminary alert -- a fuller commentary is NOW ONLINE -- see this page for supporting information.]

An important new article in Foreign Affairs by Francis Fukuyama and others makes a compelling case: Few realize that the real harm from Big Tech* platforms is not just bigness or failure to self-regulate, but that they threaten democratic society itself. They go on to suggest a fundamental remedy (emphasis added): 
“Fewer still have considered a practical way forward: taking away the platforms’ role as gatekeepers of content …inviting a new group of competitive ‘middleware’ companies to enable users to choose how information is presented to them. And it would likely be more effective than a quixotic effort to break these companies up.”

The article makes a strong case that the systemic cure for this problem is to give users the power to control the “middleware” that filters their view of the information flowing through the platform, in whatever ways they each desire. Controlling that at a fine-grained level is beyond the skill or patience of most users, so the best way to do it is to create a diverse open market of interoperable middleware services that users can select from. The authors point out that this middleware could be funded with a revenue share from the platforms – and that, instead of reducing platform revenue, it might actually increase it by providing better service that brings in more users and more activity. Their article is backed up by an excellent Stanford white paper that provides much more detail.

This resonates with similar proposals I have written over the past two decades. The threat to democracy is not platform control over what is posted, but their unilateral and non-transparent control over what is seen by whom. The platforms control the filters/recommenders of what we each see -- and subvert that so they can engage us and sell ads. The only real solution is to delegate that control to users, so that undesirable (in the eye of the receiver) content is not amplified, and bad communities are not proselytized – all without censorship except in extreme cases. An open market is the best way to do that, to ensure the competition that brings us choice, diversity, and innovation -- and to decouple these decisions from the perverse incentives of the platforms to favor advertising revenue over user welfare.

The basic idea of an open market of filtering middleware is described in my Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm (building on work I began in 2002). Part of the relevant section:

Filtering rules

Filters are central to the function of Facebook, Google, and Twitter. ... there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart...
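A minimal sketch of what such a pluggable filter interface could look like: the platform exposes the raw candidate feed and delegates ranking to whichever middleware filter the user has selected. All names, item fields, and the two sample filters are illustrative assumptions, not any platform's actual API.

```python
# Sketch of pluggable filtering middleware: the platform supplies the
# candidate items; a user-chosen filter (possibly from a third party)
# decides what is surfaced and in what order.

from typing import Callable, Dict, List

Item = Dict                      # e.g. {"id": ..., "posted_at": ..., "score": ...}
Filter = Callable[[List[Item]], List[Item]]


def platform_feed(candidates: List[Item], user_filter: Filter) -> List[Item]:
    """The platform delegates selection/ranking to the user's chosen filter."""
    return user_filter(candidates)


# Two competing middleware filters a user might choose between:
def chronological(items: List[Item]) -> List[Item]:
    """Newest first -- no engagement optimization at all."""
    return sorted(items, key=lambda i: i["posted_at"], reverse=True)


def engagement_ranked(items: List[Item]) -> List[Item]:
    """Highest predicted-engagement score first."""
    return sorted(items, key=lambda i: i["score"], reverse=True)


candidates = [
    {"id": 1, "posted_at": 100, "score": 0.9},
    {"id": 2, "posted_at": 200, "score": 0.1},
]
feed_a = platform_feed(candidates, chronological)
feed_b = platform_feed(candidates, engagement_ranked)
```

The same candidate pool yields different feeds depending on which filter the user plugged in -- the design point being that this choice sits with the user (or their chosen middleware provider), not with the platform's ad-driven defaults.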

Don't just prevent harm, empower benefit

My deeper proposals explore how changes in algorithms and business models could make such an open market in filtering middleware even more effective. Instead of just preventing platforms from doing harm, this could empower social media to do good, in the ways that each of us choose. That is the essence of democracy and our marketplace of ideas. Good technology can empower us, serving as "bicycles for the mind."

* While Fukuyama's article is entitled “How to Save Democracy From Technology,” this is really not a problem of technology itself, but of badly applied technology -- bad algorithms and architectures, motivated by bad business models.

[Revised 1/23/21]