[A post-2016-election update on this theme:
2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect]
Balanced information may actually inflame extreme views -- that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, "Breaking Up the Echo" (9/17/12). Sunstein is drawing on some very interesting research,* and this points toward an important new direction for our media systems.
I suggest this is especially important to digital media, in that we can counter this problem with more intelligent filters for managing our supply of information. This could be one of the most important ways for technology to enhance modern society. Technology has made us more foolish in some respects, but the right technology can make us much smarter.
Sunstein's suggestion is that what we need are what he calls "surprising validators," people one gives credence to who suggest one's view might be wrong. While all media and public discourse can try to leverage this insight, an even greater opportunity is for electronic media services to exploit this insight that "what matters most may be not what is said, but who, exactly, is saying it."
Much attention has been given to the growing lack of balance in our discourse, and there have been efforts to address it.
- It has been widely lamented that the mass media are creating an "echo chamber" -- such as Fox News on the right vs. MSNBC on the left.
- It has also been noted that Internet media bring a further vicious cycle of polarization, as nicely described in the 2011 TED talk (and related book) by Eli Pariser, "Beware online 'filter bubbles'" -- services that filter out things not to one's taste.
- Similarly, extremist views that were once muted in communities that provided balance are now finding kindred spirits in global niches, and feeding upon their own lunacy.
This is increasingly damaging to society, as we see the nasty polarization of our political discourse, the gridlock in Washington, and growing extremism around the world. The "global village" that promises to bring us together is often doing the opposite.
It would seem that the remedy is to try to bring greater balance into our media. There have been laudable efforts to build systems that recognize disagreement and suggest balance, such as SettleIt, FactCheck, and Snopes, and, a particularly interesting effort, the Intel Dispute Finder (no longer active).
- The notable problem with this is Sunstein's warning that even if we can expose people to greater balance, that may not be enough to reduce such polarization, and that balancing corrections can even be counter-productive, because "biased assimilation" causes people to dismiss the opposing view and become even more strident.
- Thus it is not enough to simply make our filter bubbles more permeable, to let in more balanced information. What we need is an even smarter kind of filter and presentation system. We have begun to exploit the "wisdom of crowds," but we have done little to refine that wisdom by applying tools to shape it intelligently.
As Sunstein puts it in the op-ed:
People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”
Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.
It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.
Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it.

This struck a chord with me, as something to build on. Applying the idea of "surprising validators" (people who can make us think again):
- The media and social network systems that are personalized to serve each of us can understand who says what, whom I identify with and agree with in a given domain, and when a person I respect holds views that differ from views I have expressed -- views I might be wrong about. Such people may be "friends" in my social network, or distant figures that I am known to consider wise. (Of course it is the friends I consider wise, not those I like but view as misguided, who need to be identified and leveraged.)
- By alerting me that people I identify with and agree with think differently on a given point, such systems can make me think again -- if not to change my mind, at least to consider the idea that reasonable people can differ on this point.
- Such an approach could build on the related efforts, noted above, for systems that recognize disagreement and suggest balance. But as Sunstein suggests, the trick is to focus on the surprising validators.
- Surprising validators can be identified along a variety of dimensions of values, beliefs, tastes, and stature that can be sensed and algorithmically categorized (both overall and by subject domain). In this way the voices for balance that are most likely to be given credence by each individual can be selectively raised to their attention (a minimal sketch of such an algorithm follows this list).
- Such surprising validations (or reasons to re-think) might be flagged as such, to further aid people in being alert to the blinders of biased assimilation and to counter foolish polarization.
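To make this concrete, here is a minimal sketch (in Python) of what a surprising-validator filter might do. Everything in it is a placeholder assumption rather than any existing system's API: it presumes a service can already estimate, per topic, an "affinity" score for how strongly a user identifies with each source and a "stance" score for each party's position.

```python
# A minimal sketch of surfacing "surprising validators", assuming hypothetical
# per-topic scores a real service would have to estimate: how much the user
# trusts/identifies with each source, and each party's stance on the topic.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    affinity: float   # 0..1: how strongly the user identifies with / trusts this source
    stance: float     # -1..+1: the source's position on the topic in question

def surprising_validators(user_stance: float,
                          sources: list[Source],
                          min_affinity: float = 0.7,
                          min_disagreement: float = 0.8) -> list[Source]:
    """Return sources the user gives credence to but who disagree with the user.

    These are the items worth raising to the user's attention, flagged as
    'reasons to re-think' rather than filtered out as uncongenial.
    """
    hits = []
    for s in sources:
        disagreement = abs(s.stance - user_stance)
        if s.affinity >= min_affinity and disagreement >= min_disagreement:
            hits.append(s)
    # Rank so the most-trusted, most-surprising voices come first.
    return sorted(hits, key=lambda s: s.affinity * abs(s.stance - user_stance),
                  reverse=True)

if __name__ == "__main__":
    my_stance = +0.9   # my expressed position on some topic
    candidates = [
        Source("friend_A", affinity=0.90, stance=+0.8),  # agrees: not surprising
        Source("friend_B", affinity=0.85, stance=-0.4),  # trusted but disagrees: surprising
        Source("pundit_C", affinity=0.10, stance=-0.9),  # disagrees but easily dismissed
    ]
    for s in surprising_validators(my_stance, candidates):
        print(f"Worth a second look: {s.name} (affinity {s.affinity}, stance {s.stance})")
```

The point of the ranking is exactly Sunstein's: the most-trusted, most-divergent voices are the ones a filter should raise, not suppress.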
More broadly, what we need to counter the filter bubble are ways to engineer serendipity into our information filters -- we need methods for exposing us to the things we don't realize we should know, and don't know how to set filters for. Identifying surprising validators is just one aspect of this, but this might be one of the easiest to engineer (since it builds directly on the relationship of what we know and who we know, a relationship that is increasingly accessible to technology), and one of the most urgently needed.
Of course the reason that engineering serendipity is hard is that it is something of an oxymoron--how can we define a filter for the accident of desirable surprise? But with surprising validators we have a model that may be extended more broadly--focused not on disputes, but on crossing other kinds of boundaries--based on who else has made a similar crossing--still in terms of what we know and who we know, and other predictors of what is likely to resonate as desirable surprise. Perhaps we might think of these as "surprising combinators."
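One way to picture a "surprising combinator" filter is as a rough sketch, with entirely hypothetical data: look at which interest clusters people similar to me have crossed into that I have not, and weight those candidate crossings by how much those people's interests overlap with mine.

```python
# A rough sketch of "surprising combinators": suggest boundary crossings based
# on who else has made a similar crossing. Names and data are hypothetical;
# 'crossings' maps each user to the interest clusters they have engaged with.
from collections import Counter

def suggest_crossings(user: str, crossings: dict[str, set[str]], top_n: int = 3) -> list[str]:
    """Recommend clusters the user hasn't visited, weighted by how often
    users with overlapping interests have crossed into them."""
    mine = crossings.get(user, set())
    scores: Counter[str] = Counter()
    for other, theirs in crossings.items():
        if other == user:
            continue
        overlap = len(mine & theirs)      # how "like me" this person is
        for cluster in theirs - mine:     # crossings I have not yet made
            scores[cluster] += overlap
    return [c for c, score in scores.most_common(top_n) if score > 0]

if __name__ == "__main__":
    crossings = {
        "me":    {"jazz", "urbanism"},
        "alice": {"jazz", "urbanism", "permaculture"},
        "bob":   {"jazz", "astronomy"},
        "carol": {"opera", "astronomy"},
    }
    print(suggest_crossings("me", crossings))  # -> ['permaculture', 'astronomy']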
[Update 10/21/19:] Serendipity and flow. Some specific hints on how to engineer serendipity can be drawn from a recent article, "Why Aren't We Curious about the Things We Want to Be Curious About?" This reinforces my suggestion (in the paragraph just above) that it be "in terms of what we know and who we know," adding the insight that "We’re maximally curious when we sense that the environment offers new information in the right proportion to complement what we already know," and suggesting that it has to do with finding "the just-right match to your current knowledge that will maintain your curiosity." This seems to be another case of seeking a "flow state," the energized and enjoyable happy medium: not so challenging or alien as to be frustrating, yet not so easy and familiar as to be boring. I suggest that smart filtering technology will help us find flow, and do it in ways that adapt in real time to our moods and our ongoing development.
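A small sketch of that "flow" idea, under stated assumptions: suppose a filter can estimate, for each item, a novelty score relative to what the user already knows (the estimation method -- topic models, embeddings, reading history -- is left open and hypothetical). Items are then ranked by how close they sit to a "just right" novelty target, rather than by familiarity alone.

```python
# A small sketch of flow-seeking ranking: prefer items whose novelty, relative
# to what the user already knows, sits near a just-right target rather than at
# either extreme. Novelty estimates here are hypothetical stand-ins.
def flow_score(novelty: float, target: float = 0.5) -> float:
    """Score in [0, 1]: highest when novelty is near the target, lower when the
    item is either too familiar (boring) or too alien (frustrating)."""
    return 1.0 - abs(novelty - target) / max(target, 1.0 - target)

def rank_for_flow(items: dict[str, float], target: float = 0.5) -> list[str]:
    """items maps item id -> estimated novelty for this user (0 = fully familiar,
    1 = completely alien). Returns ids ordered by closeness to the flow target."""
    return sorted(items, key=lambda i: flow_score(items[i], target), reverse=True)

if __name__ == "__main__":
    estimated_novelty = {"rehash_of_known_story": 0.05,
                         "adjacent_new_angle": 0.45,
                         "dense_unfamiliar_field": 0.95}
    print(rank_for_flow(estimated_novelty))
    # -> ['adjacent_new_angle', 'rehash_of_known_story', 'dense_unfamiliar_field']
```

The target itself need not be fixed; adapting it over time (or by mood and context) is one way a filter could track a user's ongoing development rather than a static profile.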
This offers a way to more intelligently shape the "wisdom of crowds," a process that could become a powerful force for moderation, balance, and mutual understanding. We need not just to make our "filter bubbles" more permeable; much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.
Applying this kind of strategy to conventional discourse would be complex and difficult to do without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm. Just as Google built on older academic ideas about citation analysis and link-based authority, applying those seemingly subtle and insignificant signals to make search engines dramatically more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.
That may be society's most urgent need in information and media services. Only when we can bring a new level of collaboration, a more intelligently shaped wisdom of crowds, will we benefit from the full potential of the Internet. We need our technology to be more a part of the solution, and less a part of the problem. If we can't learn to understand one another better, and reverse the current slide into extremism, nothing else will matter very much.
[Update:] Note that the kind of filtering suggested here would ideally be personalized to each individual user, fully reflecting the "everything is deeply intertwingled" and non-binary nuance of their overlapping Venn diagram of values, beliefs, tastes, communities of interest, and domains of expertise. However, in use cases where that level of individual data analysis is impractical or impermissible, it could be done at a less granular level, based on simple categories, personas, or the like. For example, a news service that lacks detailed user data might categorize readers based on just a current session to identify who might be a surprising validator, or what might be serendipitous.
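A hedged sketch of that coarser fallback, with entirely made-up personas, keywords, and source names: bucket the current session into a simple persona, then look up, from a hand-curated table, which sources tend to act as surprising validators for readers of that persona.

```python
# A sketch of the session-level fallback when no user profile is available.
# Personas, keywords, and validator names below are all hypothetical examples.
PERSONA_KEYWORDS = {
    "market_skeptic":  {"regulation", "inequality", "antitrust"},
    "market_optimist": {"deregulation", "growth", "entrepreneurship"},
}

SURPRISING_VALIDATORS_BY_PERSONA = {
    "market_skeptic":  ["labor_economist_who_backs_this_trade_deal"],
    "market_optimist": ["business_council_report_urging_carbon_pricing"],
}

def infer_persona(session_terms: set[str]) -> str | None:
    """Pick the persona whose keyword set best overlaps the current session."""
    best, best_overlap = None, 0
    for persona, keywords in PERSONA_KEYWORDS.items():
        overlap = len(keywords & session_terms)
        if overlap > best_overlap:
            best, best_overlap = persona, overlap
    return best

def validators_for_session(session_terms: set[str]) -> list[str]:
    """Return candidate surprising validators for an anonymous session."""
    persona = infer_persona(session_terms)
    return SURPRISING_VALIDATORS_BY_PERSONA.get(persona, [])

if __name__ == "__main__":
    print(validators_for_session({"inequality", "antitrust", "housing"}))
```

This loses the nuance of true personalization, but it illustrates that the basic mechanism (trusted-yet-disagreeing sources surfaced per audience) does not strictly require individual-level data.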
[Update 12/7/20:] Biden wins in 2020 with Surprising Validators!
A compelling report by Kevin Roose in the NY Times shows how Surprising Validators enabled Biden's "Rebel Alliance" to cut a hole in Trump's "Death Star" -- "…the sources that were most surprising were the ones who had the most impact." "Perhaps the campaign's most unlikely validator was Fox News." This was done by the campaign, external to the platforms' algorithms, but think how much more powerful this could be when fully integrated.
---
See the Selected Items tab for more on this theme.
[See also my earlier post on this theme:
Full Frontal Reality: how to combat the growing lunatic fringe.]
-------------
*The work Sunstein apparently refers to can be found by searching for "Biased Assimilation and Attitude Polarization," the title of a much-cited 1979 paper. I found some very interesting research and plan to review this further, seeking methods suited to algorithmic use. One interesting current center of study is the Yale Law School Cultural Cognition Project.
(On a personal note, this is an effort I have seen as having huge benefit to society since my first exposure to early work on computer-aided conferencing and decision support systems in the early 1970's. I continue to see this as a vital challenge to pursue, and I welcome dialog and collaboration with others who share that mission.)