
Monday, October 08, 2018

A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News

To counter fake news, it's more important to manage and filter its spread than to try to interdict its creation -- or to inoculate people against its influence.

A recent NY Times article offering an inside look at Facebook's election "war room" highlights the problem, quoting cybersecurity expert Priscilla Moriuchi:
If you look at the way that foreign influence operations have changed these last two years, their focus isn’t really on propagating fake news anymore. It’s on augmenting stories already out there which speak to hyperpartisan audiences.
That is why much of the growing effort to respond to the newly recognized crisis of fake news, Russian disinformation, and other forms of disruption in our social media fails to address the core of the problem. We cannot solve the problem by trying to close our systems off from fake news, nor can we expect to radically change people's natural tendency toward cognitive bias. The core problem is that our social media platforms lack an effective "cognitive immune system" that can resist our own tendency to spread the "cognitive pathogens" that are endemic in our social information environment.

Consider how living organisms have evolved to contain infections. We did that not by developing impermeable skins that could be counted on to keep all infections out, nor by making all of our cells so invulnerable that they can resist whatever infectious agents may unpredictably appear.

We have powerfully complemented what we can do in those ways by developing a richly nuanced internal immune system that is deeply embedded throughout our tissues. That immune system uses emergent processes at a system-wide level -- to first learn to identify dangerous agents of disease, and then to learn how to resist their replication and virulence as they try to spread through our system.

The problem is that our social media lack an effective "cognitive immune system" of this kind. 

In fact, many of our social media platforms are designed by the businesses that operate them to maximize engagement so they can sell ads. In doing so, they have learned that spreading incendiary disinformation that makes people angry and upset, polarizing them into warring factions, increases their engagement. As a result, these platforms actually learn to spread disease rather than to build immunity. They learn to exploit the fact that people have cognitive biases that make them want to be cocooned in comfortable filter bubbles and feel-good echo chambers, and to ignore or reject anything that might challenge beliefs that are wrong but comfortable. They work against our human values, not for them.

What are we doing about it? Are we addressing this deep issue of immunity, or are we just putting on band-aids and hoping we can teach people to be smarter? (As a related matter, are we addressing the underlying issue of business model incentives?) Current efforts seem to be focused on measures at the end-points of our social media systems:
  • Stopping disinformation at the source. We certainly should apply band-aids to prevent bad actors from injecting our media with news, posts, and other items that are intentionally false and dishonest, and we should seek to block both those items and those who inject them. Band-aids are useful when we find an open wound that germs are gaining entry through -- but band-aids are still just band-aids.
  • Making it easier for individuals to recognize when items they receive may be harmful because they are not what they seem. We certainly should provide "immune markers" in the form of Consumer Reports-style ratings of items and of the publishers or people who produce them (as many are seeking to do). Making such markers visible to users can help prime them to be more skeptical, and perhaps apply more critical thinking -- much like applying an antiseptic. But that depends on the willingness of users to pay attention to such markers and apply the antiseptic. There is good reason to doubt this will have more than modest effectiveness, given people's natural laziness and instinct for thinking fast rather than slow. (Many social media users "like" items based only on click-bait headlines that are often inflammatory and misleading, without even reading the item -- and that is often enough to cause those items to spread massively.)
These end-point measures are helpful and should be aggressively pursued, but we also urgently need a more systemic strategy of defense. We need to address the problem of dissemination and amplification itself. We need to be much smarter about what gets spread -- from whom, to whom, and why.

Doing that means getting deep into the guts of how our media are filtered and disseminated, step by step, through the "viral" amplification layers of the media systems that connect us. That means integrating a cognitive immune system into the core of our social media platforms. Getting the platform owners to buy in to that will be challenging, but it is the only effective remedy.

Building a cognitive immune system -- the biological parallel

This perspective comes out of work I have been doing for decades, and have written about on this blog (and in a patent filing since released into the public domain). That work centers on ideas for augmenting human intelligence with computer support. More specifically, it centers on augmenting the wisdom of crowds. It is based on the idea that our wisdom is not the simple result of a majority vote -- it results from an emergent process that applies smart filters to rate the raters and weight the ratings. That provides a way to learn which votes should be more equal than others (in a way that is democratic and egalitarian, but also merit-based). This approach is explained in the posts listed below. It extends an approach that has been developing for centuries.
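
As a rough illustration of what "rate the raters and weight the ratings" can mean in practice, here is a minimal sketch in Python. It is not the architecture from the posts referenced below; the function name augmented_crowd_scores, the update rule, and all parameters are illustrative assumptions. It shows only the circular core idea: item scores depend on rater reputations, and reputations depend on how well raters track the emerging consensus.

```python
# Minimal sketch (illustrative assumptions only) of "rate the raters and
# weight the ratings": item scores are reputation-weighted averages of
# ratings, and rater reputations rise or fall with how closely each rater
# tracks the emerging consensus.

from collections import defaultdict

def augmented_crowd_scores(ratings, iterations=10):
    """ratings: list of (rater_id, item_id, value) tuples, value in [0, 1]."""
    reputation = defaultdict(lambda: 1.0)   # start all raters as equals
    item_score = {}

    for _ in range(iterations):
        # Weight the ratings: score each item by a reputation-weighted average.
        totals, weights = defaultdict(float), defaultdict(float)
        for rater, item, value in ratings:
            totals[item] += reputation[rater] * value
            weights[item] += reputation[rater]
        item_score = {item: totals[item] / weights[item] for item in totals}

        # Rate the raters: reputation reflects agreement with the consensus.
        error, count = defaultdict(float), defaultdict(int)
        for rater, item, value in ratings:
            error[rater] += abs(value - item_score[item])
            count[rater] += 1
        for rater in error:
            reputation[rater] = max(0.05, 1.0 - error[rater] / count[rater])

    return item_score, dict(reputation)

# Example: two careful raters and one contrarian rate two posts; the
# contrarian's influence on the consensus shrinks as iterations proceed.
scores, reps = augmented_crowd_scores([
    ("alice", "post1", 0.9), ("bob", "post1", 0.8), ("mallory", "post1", 0.1),
    ("alice", "post2", 0.2), ("bob", "post2", 0.3), ("mallory", "post2", 0.9),
])
print(scores, reps)
```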

In support of those perspectives, I recently turned to some work on biological immunity that uses the term "cognitive immune system." That work highlights the rich informational aspects of actual immune systems, as a model for understanding how these systems work at a systems level. As noted in one paper (see longer extract below*), biological immune systems are "cognitive, adaptive, fault-tolerant, and fuzzy conceptually." I have only begun to think about the parallels here, but it is apparent that the system architecture I have proposed in my other posts is at least broadly parallel, being also "cognitive, adaptive, fault-tolerant, and fuzzy conceptually." (Of course being "fuzzy conceptually" makes it not the easiest thing to explain and build, but when that is the inherent nature of the problem, it may also necessarily be the essential nature of the solution -- just as it is for biological immune systems.)

An important aspect of this being "fuzzy conceptually" is what I call The Tao of Truth. We can't definitively declare good-faith "speech" as "fake" or "false" in the abstract. Validity is "fuzzy" because it depends on context and interpretation. ("Fuzzy logic" recognizes that in the real world, facts are often not entirely true or false but, rather, have degrees of truth.) That is why only the clearest cases of disinformation can be safely cut off at the source. But we can develop a robust system for ranking the probable (fuzzy) value and truthfulness of speech, revising those rankings, and using them to decide how to share it with whom. For practical purposes, truth is a filtering process, and we can get much smarter about how we apply our collective intelligence to do our filtering. It seems the concepts of "danger" and "self/not-self" in our immune systems have a similarly fuzzy Tao -- many denizens of our microbiome that are not "self" are beneficial to us, and our immune systems have learned that we live better with them inside of us.
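
To make the "truth is a filtering process" point concrete, here is a minimal sketch of how a fuzzy truthfulness estimate might modulate amplification rather than trigger binary removal. The function name, threshold, and exponent are illustrative assumptions, not part of the proposal itself.

```python
# Minimal sketch of fuzzy down-ranking: a truth estimate in [0, 1] damps how
# widely an item is amplified instead of deciding whether to delete it.
# Threshold and exponent below are illustrative assumptions, not tuned values.

def amplification_factor(truth_estimate: float, confidence: float) -> float:
    """Scale an item's reach by its estimated truthfulness.

    truth_estimate: fuzzy score in [0, 1] from the rate-the-raters process.
    confidence: how much evidence backs that estimate, also in [0, 1].
    """
    if truth_estimate < 0.05 and confidence > 0.9:
        return 0.0  # only the clearest cases are cut off at the source
    # Otherwise down-rank smoothly: low-confidence items keep some reach so
    # the ranking can be revised as more evidence arrives.
    return truth_estimate ** (1 + confidence)

print(amplification_factor(0.9, 0.8))  # well-supported item: near-full reach
print(amplification_factor(0.3, 0.5))  # dubious item: sharply reduced reach
```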

My proposals

Expansions on the architecture I have proposed for a cognitive immune system -- and on the need for it -- are here:
  • The Tao of Fake News – the essential need for fuzziness in our logic: the inherent limits of experts, moderators, and rating agencies – and the need for augmenting the wisdom of the crowd (as essential to maintaining the intellectual openness of our democratic/enlightenment values).
(These works did not explicitly address the parallels with biological cognitive immune systems -- exploring those parallels might well lead to improvements on these strategies.)

To those without a background in the technology of modern information platforms, this brief outline may seem abstract and unclear. But as noted in these more detailed posts, these methods are a generalization of the methods Google uses (in its PageRank algorithm) to do highly context-relevant filtering of search results with a similar rate-the-raters and weight-the-ratings strategy. (That, too, is "cognitive, adaptive, fault-tolerant, and fuzzy conceptually.") These methods are not simple, but they are not a big stretch from the current computational methods of search engines, or from the ad-targeting methods already well developed by Facebook and others. They can be readily applied -- if the platforms can be motivated to do so.
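
For readers who want a concrete anchor for that comparison, here is a generic power-iteration sketch of PageRank, the precedent the paragraph points to. It is a textbook-style illustration, not the proposed system; applying the same idea to social media would mean treating endorsements of people and items, rather than web links, as the underlying graph.

```python
# Generic power-iteration sketch of PageRank-style scoring: a node's authority
# comes from being endorsed by nodes that themselves have authority.
# This is an illustration of the precedent, not the posts' own code.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each node to the list of nodes it endorses."""
    nodes = set(links) | {n for targets in links.values() for n in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}

    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for source, targets in links.items():
            if not targets:
                continue
            share = damping * rank[source] / len(targets)
            for target in targets:  # endorsements pass on weighted trust
                new_rank[target] += share
        rank = new_rank
    return rank

# Example: "c" is endorsed by both "a" and "b", so it accumulates the most authority.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```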

Broader issues of support for our cognitive immune system

The issue of motivation to do this is crucial. For the kind of cognitive immune system I propose to be effective, it must be built deeply into the guts of our social media platforms (whether directly, or via APIs). As noted above, getting incumbent platforms to shift their business models to align their internal incentives with that need will be challenging. But I suggest it need not be as difficult as it might seem.

A related non-technical issue that many have noted is the need for education of citizens 1) in critical thinking, and 2) in the civics of our democracy. Both seem to have been badly neglected in recent decades. Aggressively remedying that is important, to help inoculate users against disinformation and sloppy thinking -- but it will have limited effectiveness unless we also alter the overwhelmingly fast dynamics of our information flows (with the cognitive immune system suggested here) to help make us smarter, not dumber, in the face of this deluge of information.

---
[Update 10/12:] A subsequent Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and on the difficulty of definitively identifying items as fakes that can rightly be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.
-----
*Background on our Immune Systems -- from the introduction to the paper mentioned above, "A Cognitive Computational Model Inspired by the Immune System Response" (emphasis added):
The immune system (IS) is by nature a highly distributed, adaptive, and self-organized system that maintains a memory of past encounters and has the ability to continuously learn about new encounters; the immune system as a whole is being interpreted as an intelligent agent. The immune system, along with the central nervous system, represents the most complex biological system in nature [1]. This paper is an attempt to investigate and analyze the immune system response (ISR) in an effort to build a framework inspired by ISR. This framework maintains the same features as the IS itself; it is cognitive, adaptive, fault-tolerant, and fuzzy conceptually. The paper sets three phases for ISR operating sequentially, namely, “recognition,” “decision making,” and “execution,” in addition to another phase operating in parallel which is “maturation.” This paper approaches these phases in detail as a component based architecture model. Then, we will introduce a proposal for a new hybrid and cognitive architecture inspired by ISR. The framework could be used in interdisciplinary systems as manifested in the ISR simulation. Then we will be moving to a high level architecture for the complex adaptive system. IS, as a first class adaptive system, operates on the body context (antigens, body cells, and immune cells). ISR matured over time and enriched its own knowledge base, while neither the context nor the knowledge base is constant, so the response will not be exactly the same even when the immune system encounters the same antigen. A wide range of disciplines is to be discussed in the paper, including artificial intelligence, computational immunology, artificial immune system, and distributed complex adaptive systems. Immunology is one of the fields in biology where the roles of computational and mathematical modeling and analysis were recognized...
The paper supposes that immune system is a cognitive system; IS has beliefs, knowledge, and view about concrete things in our bodies [created out of an ongoing emergent process], which gives IS the ability to abstract, filter, and classify the information to take the proper decisions.

Monday, October 08, 2012

Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators"

[The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings (2018) puts this in a much broader framework and outlines an architecture for augmenting social media and other collaborative systems.]

[A post-2016-election update on this theme:
2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect]


Balanced information may actually inflame extreme views -- that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, "Breaking Up the Echo" (9/17/12).   Sunstein is drawing on some very interesting research,* and this points toward an important new direction for our media systems.

I suggest this is especially important to digital media, in that we can counter this problem with more intelligent filters for managing our supply of information.  This could be one of the most important ways for technology to enhance modern society. Technology has made us more foolish in some respects, but the right technology can make us much smarter.

Sunstein's suggestion is that what we need are what he calls "surprising validators," people one gives credence to who suggest one's view might be wrong.  While all media and public discourse can try to leverage this insight, the greater opportunity is for electronic media services to exploit the observation that "what matters most may be not what is said, but who, exactly, is saying it."

Much attention has been given to the growing lack of balance in our discourse, and there have been efforts to address it.
  • It has been widely lamented that the mass media are creating an "echo chamber" -- such as Fox News on the right vs. MSNBC on the left.  
  • It has also been noted that Internet media bring a further vicious cycle of polarization, as nicely described in the 2011 TED talk (and related book) by Eli Pariser, "Beware Online 'Filter Bubbles'" -- about services that filter out things not to one's taste.
  • Similarly, extremist views that were once muted in communities that provided balance are now finding kindred spirits in global niches, and feeding upon their own lunacy.
This is increasingly damaging to society, as we see the nasty polarization of our political discourse, the gridlock in Washington, and growing extremism around the world. The "global village" that promises to bring us together is often doing the opposite.

It would seem that the remedy is to try to bring greater balance into our media. There have been laudable efforts to build systems that recognize disagreement and suggest balance, such as SettleIt, FactCheck, and Snopes, and, a particularly interesting effort, the Intel Dispute Finder (no longer active).
  • The notable problem with this is Sunstein's warning that even if we can expose people to greater balance, that may not be enough to reduce such polarization, and that balancing corrections can even be counter-productive, because "biased assimilation" causes people to dismiss the opposing view and become even more strident. 
  • Thus it is not enough to simply make our filter bubbles more permeable, to let in more balanced information.  What we need is an even smarter kind of filter and presentation system.  We have begun to exploit the "wisdom of crowds," but we have done little to refine that wisdom by applying tools to shape it intelligently.
From that perspective, consider Sunstein's suggestions:
People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”
Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.
It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.
Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it. 
This struck a chord with me, as something to build on.  Applying the idea of "surprising validators"  (people who can make us think again):
  • The media and social network systems that are personalized to serve each of us can understand who says what, whom I identify and agree with in a given domain, and when a person I respect holds views that differ from views I have expressed -- views I might be wrong about.  Such people may be "friends" in my social network, or distant figures I am known to consider wise.  (Of course it is the friends I consider wise, not those I like but view as misguided, who need to be identified and leveraged.)
  • By alerting me that people I identify and agree with think differently on a given point, such systems can make me think again -- if not to change my mind, at least to consider the idea that reasonable people can differ on this point. 
  • Such an approach could build on the related efforts for systems that recognize disagreement and suggest balance noted above.  ...But as Sunstein suggests, the trick is to focus on the surprising validators.
  • Surprising validators can be identified in terms of a variety of dimensions of values, beliefs, tastes, and stature that can be sensed and algorithmically categorized (both overall and by subject domain).  In this way the voices for balance most likely to be given credence by each individual can be selectively raised to their attention.  (A simple sketch of this matching step appears after this list.)
  • Such surprising validations (or reasons to re-think) might be flagged as such, to further aid people in being alert to the blinders of biased assimilation and to counter foolish polarization.
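
A minimal sketch of the matching step referenced at the end of the list above: flag voices the reader strongly identifies with whose stance on a claim differs sharply from the reader's own. The Voice fields, the thresholds, and the idea that affinity and stance scores come from upstream models are all illustrative assumptions.

```python
# Minimal sketch of spotting a "surprising validator": someone the reader
# closely identifies with (high affinity) who nevertheless disagrees with the
# reader's stance on a given claim. Names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Voice:
    name: str
    affinity: float   # 0..1: how strongly the reader identifies with this person
    stance: float     # -1..+1: this person's position on the claim

def surprising_validators(reader_stance, voices, affinity_min=0.7, gap_min=1.0):
    """Return trusted voices whose stance differs sharply from the reader's."""
    return [
        v for v in voices
        if v.affinity >= affinity_min and abs(v.stance - reader_stance) >= gap_min
    ]

# Example: a trusted friend who disagrees is surfaced; a distrusted critic is not.
voices = [
    Voice("trusted_friend", affinity=0.9, stance=-0.6),
    Voice("distant_critic", affinity=0.2, stance=-0.9),
    Voice("likeminded_peer", affinity=0.8, stance=+0.7),
]
for v in surprising_validators(reader_stance=+0.8, voices=voices):
    print(f"Flag for the reader: {v.name} sees this differently")
```
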
This provides a specific, practical method for directly countering the worst aspects of the echo chambers and filter bubbles.

More broadly, what we need to counter the filter bubble are ways to engineer serendipity into our information filters -- we need methods for exposing us to the things we don't realize we should know, and don't know how to set filters for.  Identifying surprising validators is just one aspect of this, but this might be one of the easiest to engineer (since it builds directly on the relationship of what we know and who we know, a relationship that is increasingly accessible to technology), and one of the most urgently needed.

Of course the reason that engineering serendipity is hard is that it is something of an oxymoron -- how can we define a filter for the accident of desirable surprise?  But with surprising validators we have a model that may be extended more broadly -- focused not on disputes, but on crossing other kinds of boundaries -- based on who else has made a similar crossing -- still in terms of what we know and who we know, and other predictors of what is likely to resonate as desirable surprise. Perhaps we might think of these as "surprising combinators."

[Update 10/21/19:] Serendipity and flow. Some specific hints on how to engineer serendipity can be drawn from a recent article, "Why Aren’t We Curious about the Things We Want to Be Curious About?"  This reinforces my suggestion (in the paragraph just above) that it be "in terms of what we know and who we know," adding the insight that "We’re maximally curious when we sense that the environment offers new information in the right proportion to complement what we already know" and suggesting that it has to do with finding "the just-right match to your current knowledge that will maintain your curiosity." This seems to be another case of seeking a "flow state," the energized and enjoyable happy medium between material so challenging or alien as to be frustrating and material so easy and familiar as to be boring. I suggest that smart filtering technology will help us find flow, and do it in ways that adapt in real time to our moods and our ongoing development.
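
As a toy illustration of that "right proportion" idea, here is a sketch that scores candidate items by how close their novelty (relative to what the reader already knows) sits to a sweet spot. The Gaussian form, the target value, and the width are purely illustrative assumptions.

```python
# Minimal sketch of a "flow"-style serendipity score: items are most engaging
# when their novelty relative to the reader's current knowledge is neither too
# low (boring) nor too high (alien). Parameters are illustrative assumptions.

import math

def serendipity_score(novelty: float, target: float = 0.35, width: float = 0.15) -> float:
    """novelty: fraction of an item's concepts the reader has not seen (0..1)."""
    return math.exp(-((novelty - target) ** 2) / (2 * width ** 2))

for novelty in (0.05, 0.35, 0.9):  # too familiar, sweet spot, too alien
    print(f"novelty={novelty:.2f} -> score={serendipity_score(novelty):.2f}")
```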

This offers a way to more intelligently shape the "wisdom of crowds," a process that could become a powerful force for moderation, balance, and mutual understanding. We need not just to make our "filter bubbles" more permeable, but much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.

Applying this kind of strategy to conventional discourse would be complex and difficult to do without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm. Just as Google took old academic ideas about hubs and authorities and applied those seemingly subtle and insignificant signals to make search engines significantly more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.

That may be society's most urgent need in information and media services.  Only when we can bring a new level of collaboration, a more intelligently shaped wisdom of crowds, will we benefit from the full potential of the Internet.  We need our technology to be more a part of the solution, and less a part of the problem.  If we can't learn to understand one another better, and reverse the current slide into extremism, nothing else will matter very much.

[Update:]  Note that the kind of filtering suggested here would ideally be personalized to each individual user, fully reflecting the "everything is deeply intertwingled" and non-binary nuance of their overlapping Venn diagram of values, beliefs, tastes, communities of interest, and domains of expertise. However, in use cases where that level of individual data analysis is impractical or impermissible, it could be done at a less granular level, based on simple categories, personas, or the like. For example, a news service that lacks detailed user data might categorize readers based on just the current session to identify who might be a surprising validator, or what might be serendipitous.

[Update 12/7/20:] Biden wins in 2020 with Surprising Validators!
A compelling report by Kevin Roose in the NY Times shows how Surprising Validators enabled Biden's "Rebel Alliance" to cut a hole in Trump's "Death Star": "…the sources that were most surprising were the ones who had the most impact." "Perhaps the campaign's most unlikely validator was Fox News." This was done by the campaign itself, external to the platforms' algorithms -- but think how much more powerful it could be when fully integrated into those platforms.

---
See the Selected Items tab for more on this theme.

[See also my earlier post on this theme:
Full Frontal Reality: how to combat the growing lunatic fringe.]

-------------
*The work Sunstein apparently refers to can be found by searching for "Biased Assimilation and Attitude Polarization," the title of a much-cited 1979 paper. I found some very interesting research and plan to review this further, seeking methods suited to algorithmic use. One interesting current center of study is the Yale Law School Cultural Cognition Project.

(On a personal note, this is an effort I have seen as having huge benefit to society since my first exposure to early work on computer-aided conferencing and decision support systems in the early 1970's.  I continue to see this as a vital challenge to pursue, and I welcome dialog and collaboration with others who share that mission.)