
Friday, April 26, 2019

"Non-Binary" means "Non-Binary"...Mostly...Right?

A "gender non-binary female?"

Seeing the interview of Asia Kate Dillon on Late Night with Seth Meyers, I was struck by one statement -- one that suggests an insidious problem of binary thinking that pervades many of the current ills in our society. Dillon (who prefers the pronoun "they") reported gaining insight into their gender identity from the character description for their role in Billions as "a gender non-binary female," saying: “I just didn’t understand how those words could exist next to each other.”

What struck me was the questioning of how these words could be sensibly put together. Why would anyone ask that question? As I thought more, I saw this as a perfect example of a much broader problem.

The curse of binary thinking

The question I ask is at a semantic level (regardless of one's views on gender identity): how could that not be obvious? Doesn't the issue arise only if one interprets "female" in a binary way? I would have thought that one who identifies as "non-binary" would see beyond this conceptual trap of simplistic duality. Wouldn't a non-binary person be more non-binary in their thinking? Wouldn't it be obvious to a non-binary thinker that this is a matter of being non-binary and female, not of being non-binary or female?

It seems that binary thinking is so ingrained in our culture that we default to black and white readings when it is clear that most of life (outside of pure mathematics) is painted in shades of gray. It is common to think of some "females" as masculine, and some "males" as effeminate. Some view such terms as pejorative, but what is the reality? Why wouldn't a person presumed at birth to be female (for the usual blend of biological reasons) be able to be non-binary in a multitude of ways? Even biologically "female" has a multitude of aspects, which generally align, but sometimes diverge. Clearly, as to behavior in general and as to sexual orientation, there seems to be a spectrum, with many degrees in each of many dimensions (some barely noticed, some hard to miss).

So I write about this as an object lesson in how deeply the binary, black-or-white thinking of our culture distorts our view of a more deeply nuanced reality. Even one who sees themself as non-binary has a hard time escaping binary thinking. Why can the word "female" not be appropriate for a non-binary person (as we all are to some degree) -- one who has birth attributes that were ostensibly female? Isn't it just a fallacy of binary thinking to think it is not OK for a non-binary person to also be female? That a female cannot be non-binary?

I write about this because I have long taken issue with binary thinking. This is not meant to criticize this actor in any way, but to sympathize broadly with the prevalence of this kind of blindness and absolutism in our culture. It is to empathize with those who suffer from being thought of in binary ways that fail to recognize the non-binary richness of life -- and those who suffer from thinking of themselves in a binary way. That is a harm that occurs to most of us at one time or another. As Whitman said:
Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)
The bigger picture

Gender is just one of the host of current crises of binary thinking that lead to extreme polarization of all kinds. Political divides. The more irreconcilable divide over whether leadership must serve all of their constituency, or just those who support the leader, right or wrong. Fake news. Free speech and truth on campus vs. censorship for some zone of safety for binary thinkers. Trickle-down versus progressive economics. Capitalism versus socialism. Immigrant versus native. One race or religion versus another. Isn't the recent focus of some on "intersectionality" just an extension of binary thinking to multiple binary dimensions? Thinking in terms of binary categories (rather than category spectrums) distances and demonizes the other, blinding us to how much common ground there is.

The Tao symbol (which appears elsewhere in this blog) is a perfect illustration of my point, and an age-old symbol of the non-dualistic thinking central to some Asian traditions (I just noticed the irony of the actor's first name as I wrote this sentence!). We have black and white intertwined, and the dot of contrast indicates that each contains its opposite. That suggests that all females have some male in them (however large or small, and in whatever aspect) and all males have some female in them (much as some males would think that a blood libel).

Things are not black or white, but black and white. And even if nearly black or white in a single dimension, single dimensions rarely matter to the larger picture of any issue. I think we should all make a real effort to remind ourselves that that is the case for almost every issue of importance.

---

(I do not profess to be "woke," but do very much try to be "awakened" and accepting of the wondrous richness of our world. My focus here is on binary and non-binary thinking itself. I use gender identity as the example only because of this statement that struck me. If I misunderstand or express my ideas inartfully in this fraught domain, that is not my intent. I hope it is taken in the intended spirit of seeking greater understanding.)

(In that vein, I accept that there may be complex issues specific to gender and identity that go counter to my semantic argument in some respects. But my non-binary view is that that broader truth of non-duality still over-arches. And in an awakened non-binary world, the current last word can never be known to be the future last word.)

(See also the short post just below on the theme of this blog.)

Monday, January 14, 2019

The Real Crisis: The War to Save Democracy in 2020 Has Begun - Journalism Needs to Mobilize


This is one of the more complete of many prescriptions for journalists to manage this real crisis (and deflate the fake ones) -- but it is just another cry against the storm with no concrete plan for action.

The imminent crisis

The 2016 election in the US, similar problems around the world, "fake news," and disinformation have surfaced as crucial problems. Many are at work on solutions, but most will take time to be effective. We do not have that time.

In the US, and for the rest of the world, the most imminent threat is that Trump will use the press as he did in 2016 -- and still does. He still orchestrates a Trump-centric media circus. As Frank Bruni points out in his New York Times op-ed, we need to restore meaningful conversation on the issues, whatever the policies at issue.

Even most of those who support many of Trump's policies are dismayed at the dysfunction of this media circus (entertaining as it may be) -- this is not a partisan issue, but one all reasonable people can support.

Journalism needs to rally around best practices for containing this real and present danger now. Define them, follow them, and call out those who do not. To do that, leading journalists, publishers and J-schools should organize a Manhattan Project to unify and act now! If you do not do it right now, you may never have another chance.

Such a project should be inclusive, drawing in all who share the core values of intelligent discourse. 

Are you journalists, or cheerleaders (and profiteers) in a flame war?

Bruni's starter list

It is a long op-ed, well worth reading, and no doubt there are other important practices and tactics, but let's begin with some extracts from Bruni's op-ed (see the original for attribution of quotes):
“Pocahontas” won’t be lonely for long. …how much heed will we in the media pay to this stupidity? …That’s a specific question but also an overarching one — about the degree to which we’ll let him set the terms of the 2020 presidential campaign, about our appetite for antics versus substance, and about whether we’ll repeat the mistakes that we made in 2016 
Trump tortures us. Deliberately, yes, but I’m referring to the ways in which he keeps yanking our gaze his way.
“When you cover this as spectacle…what’s lost is context, perspective and depth. And when you cover this as spectacle, he is the star.” 
Trump was and is a perverse gift to the mainstream, establishment media, a magnet for eyeballs at a juncture when we were struggling economically and desperately needed one. Just present him as the high-wire act and car crash that he is; the audience gorges on it. But readers…[are] starved of information about the fraudulence of his supposed populism and the toll of his incompetence. And he wins. He doesn’t hate the media...He uses us.
Regarding their fitness for office, they [Trump and Clinton] were treated identically? In retrospect, that’s madness. It should have been in real time, too.
We need to do something else, too, which is to recognize that Trump now has an actual record in office and to discuss that with as much energy as we do his damned Twitter feed.
“Instead of covering Trump’s tweets on a live, breaking basis, just cover them in the last five minutes of a news show. They’re presidential statements, but we can balance them.”
We can also allow his challengers to talk about themselves as much as they do about him. …“It was deeply unfair… the question was always, ‘What’s your reaction to what Trump just said?,’ there’s no way to drive your own message.”
“It got to the point where it was one outrage after another, and we just moved on each time” …Instead, we should hold on to the most outrageous, unconscionable moments. We should pause there…. We can’t privilege the incremental over …the enduring. It lets Trump off the hook.
"…if you put enough experts on arguing… people will watch. And that’s what we’re doing with our politics. The media is not using their strength, their franchise, to elevate and illuminate the conversation. They’re just getting you all jazzed up about the game.”
But the lure of less demanding labors …is always there, especially because readers and viewers…reward it. What they lap up …is Trump the Baby…the Buffoon…the Bully… And that’s on them.
The real story of Trump isn’t his amorality and outrageousness. It’s Americans’ receptiveness to that. 
“Trump basically ran on blowing the whole thing up.…It’s critically important that we find ways to get at what it is people imagine government should be doing and…really look at what kind of leadership we need.”
A Manhattan Project for Journalism - the war to extinguish the flame war

When America became the "arsenal of democracy" in the battle against fascism, we mobilized for conventional warfare -- and with a massive Manhattan Project to change the game with an A-bomb. The best minds were assembled, tested many alternative strategies, and then focused the best resources in the world on what worked.

Trump has conquered the presidency with an artful flame war. Many have written very intelligently about the issues and strategies that Bruni raises. There is no silver bullet (or A-bomb), but there is a suite of strategies that promises to contain the nonsense -- but only if widely understood and practiced. No one person or organization has the knowledge or ability to do this alone. Bruni's points (and similar suggestions from many others) can be distilled, formalized, and supplemented to provide a guide to best practices, both at a high level and in the guts of how journalism is practiced. Our best minds in journalism must come together and quickly define these best practices, and then we must see to it that all understand them and work to enforce them.

If we have clear guidelines, we can call out and marginalize those who fan the flames - whether Trump and his supporters, or others.

Fair process is not partisan - the real challenge for "mainstream media"

Such a focus on process is not partisan, but simply a matter of a fairness to all citizens, and to the spirit of enlightened democracy that made America great. To the extent Trump or others (on either side of any issue) make responsible policy proposals and argue them responsibly, this would treat them fairly. To the extent they do not, it marginalizes them fairly.

Obviously our current government will not make this happen - no new "fairness doctrine" can be expected now. Journalists are uniquely positioned to step up to their responsibility. It must be a voluntary effort. Some prominent pundits and outlets will not cooperate, for political or business reasons. But a truly responsible "mainstream media" can work together to become a powerful force for reason. If we do not all hang together to fight the flame war, we will all hang separately.

Real Journalists of the World, Unite!

---
(If any broad effort to do this already exists, please let me know.)

(I am not a journalist, but one focus of my career has been on how technology can augment our collaborative intelligence. Journalism in this age is a form of such augmentation -- or more lately, de-augmentation.  I am ready to contribute to this effort as I can.)

Originally posted on my User-Centered Media blog.


Wednesday, October 10, 2018

In the War on Fake News, All of Us are Soldiers, Already!

This is intended as a supplement to my posts "A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News" and "The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings." (But hopefully this stands on its own as well.) Maybe this can make a clearer point of why the methods I propose are powerful and badly needed...
---

A NY Times article titled "Soldiers in Facebook’s War on Fake News Are Feeling Overrun" provides a simple context for showing how I propose to use information already available from all of us, on what is valid and what is fake.

The Times article describes a fact checking organization that works with Facebook in the Philippines (emphasis added):
On the front lines in the war over misinformation, Rappler is overmatched and outgunned - and that could be a worrying indicator of Facebook’s effort to curb the global problem by tapping fact-checking organizations around the world.
...it goes on to describe what I suggest is the heart of the issue:
When its fact checkers determine that a story is false, Facebook pushes it down on users’ News Feeds in favor of other material. Facebook does not delete the content, because it does not want to be seen as censoring free speech, and says demoting false content sharply reduces abuse. Still, falsehoods can resurface or become popular again.
The problem is that the fire hose of fake news is too fast and furious, and too diverse, for any specialized team of fact-checkers to keep up with it. Plus, the damage is done by the time they do identify the fakes and begin to demote them.

But we are all fact checking to some degree without even realizing it. We are all citizen-soldiers. Some do it better than others.

The trick is to draw out all of the signals we provide, in real time -- and use our knowledge of which users' signals are reliable -- to get smarter about what gets pushed down and what gets favored in our feeds. That can serve as a systemic cognitive immune system -- one based on rating the raters and weighting the ratings.

We are all rating all of our news, all of the time, whether implicitly or explicitly, without making any special effort:

  • When we read, "like," comment, or share an item, we provide implicit signals of interest, and perhaps approval.
  • When we comment or share an item, we provide explicit comments that may offer supplementary signals of approval or disapproval.
  • When we ignore an item, we provide a signal of disinterest (and perhaps disapproval).
  • When we return to other activity after viewing an item, the time elapsed signals our level of attention and interest.
Individually, inferences from the more implicit signals may be erratic and low in meaning. But when we have signals from thousands of people, the aggregate becomes meaningful. Trends can be seen quickly. (Facebook already uses such signals to target its ads -- that is how it makes so much money).
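As a rough illustration of how such signals might be folded together, here is a minimal sketch in Python. The event names and weights are purely hypothetical illustrations of the idea, not any platform's actual scoring:

```python
from collections import defaultdict

# Hypothetical weights for implicit/explicit signals; a real platform would
# learn these from data rather than hard-coding them.
SIGNAL_WEIGHTS = {
    "view": 0.1,      # the item was seen
    "dwell": 0.3,     # the user lingered on the item after opening it
    "like": 1.0,
    "comment": 0.8,   # may signal approval or disapproval; text analysis could refine this
    "share": 1.5,
    "ignore": -0.3,   # scrolled past without engaging
}

def aggregate_item_signals(events):
    """events: iterable of (item_id, user_id, signal_type) tuples.
    Returns a rough per-item engagement score by summing weighted signals."""
    scores = defaultdict(float)
    for item_id, _user_id, signal in events:
        scores[item_id] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return dict(scores)

# Example: a handful of users react to two items.
events = [
    ("item_A", "u1", "like"), ("item_A", "u2", "share"), ("item_A", "u3", "ignore"),
    ("item_B", "u1", "ignore"), ("item_B", "u2", "view"),
]
print(aggregate_item_signals(events))  # item_A scores well; item_B slightly negative
```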

But simply adding all these signals can be misleading. 
  • Fake news can quickly spread through groups who are biased (including people or bots who have an ulterior interest in promoting an item) or are simply uncritical and easily inflamed -- making such an item appear to be popular.
  • But our platforms can learn who has which biases, and who is uncritical and easily inflamed.
  • They can learn who is respected within and beyond their narrow factions, and who is not, who is a shill (or a malicious bot) and who is not.
  • They can use this "rating" of the raters to weight their ratings higher or lower.
Done at scale, that can quickly provide probabilistically strong signals that an item is fake or misleading or just low quality. Those signals can enable the platform to demote low quality content and promote high quality content. 
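Here is a minimal sketch of that "rate the raters and weight the ratings" step, again in Python with invented names and numbers: each user's vote on an item is weighted by a reliability score learned from that user's track record, so a burst of approval from unreliable or bot-like accounts carries little weight.

```python
def weighted_item_score(ratings, rater_reliability):
    """ratings: list of (user_id, rating) pairs, where rating is +1 (looks valid)
    or -1 (looks fake or low quality).
    rater_reliability: dict of user_id -> weight in [0, 1], learned from each
    user's track record. Returns a reliability-weighted average in [-1, +1]."""
    numerator = sum(rater_reliability.get(u, 0.5) * r for u, r in ratings)
    denominator = sum(rater_reliability.get(u, 0.5) for u, _ in ratings)
    return numerator / denominator if denominator else 0.0

# Three low-reliability accounts boost an item, two trusted users flag it:
reliability = {"bot1": 0.05, "bot2": 0.05, "bot3": 0.05, "alice": 0.9, "bob": 0.8}
ratings = [("bot1", +1), ("bot2", +1), ("bot3", +1), ("alice", -1), ("bob", -1)]
print(weighted_item_score(ratings, reliability))  # about -0.84, despite the 3-to-2 "majority"
```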

To expand just a bit:
  • Facebook can use outside fact checkers, and can build AI to automatically signal items that seem questionable as one part of its defense.
  • But even without any information at all about the content and meaning of an item, it can make realtime inferences about its quality based on how users react to it.
  • If most of the amplification is from users known to be malicious, biased, or unreliable, it can downrank items accordingly.
  • It can test that downranking by monitoring further activity.
  • It might even enlist "testers" by promoting a questionable item to users known to be reliable, open, and critical thinkers -- and may even let some generally reliable users self-select as validators (being careful not to overload them). (See the sketch below.)
  • By being open-ended in this way, such downranking is not censorship -- it is merely a self-regulating learning process that works at Internet scale, on Internet time.
That is how we can augment the wisdom of the crowd -- in real time, with increasing reliability as we learn. That is how we build a cognitive immune system (as my other posts explain further).
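To make the "tester" idea above a bit more concrete, here is a small sketch of the probe-and-revise loop. The function names, verdict values, and thresholds are all hypothetical illustrations, not a specification of any platform's behavior:

```python
import random

def probe_questionable_item(item_id, validators, ask_validator, sample_size=5):
    """Show a provisionally downranked item to a small sample of users with strong
    reliability and critical-thinking records, and use their verdicts to confirm
    or reverse the downranking.
    ask_validator(user_id, item_id) should return +1 (looks legitimate) or -1 (looks fake)."""
    if not validators:
        return "keep_probing"
    sample = random.sample(validators, min(sample_size, len(validators)))
    verdicts = [ask_validator(user, item_id) for user in sample]
    score = sum(verdicts) / len(verdicts)
    if score <= -0.5:
        return "keep_downranked"   # validators agree it looks bad
    if score >= 0.5:
        return "restore_ranking"   # the downranking was probably a false alarm
    return "keep_probing"          # inconclusive -- widen the sample and keep monitoring
```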

This strategy is not new or unproven. It is the core of Google's wildly successful PageRank algorithm for finding useful search results. And (as I have noted before), it was recently reported that Facebook is now beginning to do a similar, but apparently still primitive, form of rating the trustworthiness of its users to try to identify fake news -- they track who spreads fake news and who reports abuse truthfully or deceitfully.*

What I propose is that we take this much farther, and move rapidly to make it central to our filtering strategies for social media -- and more broadly. An all out effort to do that quickly may be our last, best hope for enlightened democracy.

----
(*More background from Facebook on their current efforts was cited in the Times article: "Hard Questions: What is Facebook Doing to Protect Election Security?")

[Update 10/12:] A subsequent Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and the difficulty in definitively identifying items as fakes that can rightly be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.

Monday, October 08, 2018

A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News

To counter the spread of fake news, it's more important to manage and filter its spread than to try to interdict its creation -- or to try to inoculate people against its influence. 

A recent NY Times article on their inside look at Facebook's election "war room" highlights the problem, quoting cybersecurity expert Priscilla Moriuchi:
“If you look at the way that foreign influence operations have changed these last two years, their focus isn’t really on propagating fake news anymore. It’s on augmenting stories already out there which speak to hyperpartisan audiences.”
That is why much of the growing effort to respond to the newly recognized crisis of fake news, Russian disinformation, and other forms of disruption in our social media fails to address the core of the problem. We cannot solve the problem by trying to close our systems off from fake news, nor can we expect to radically change people's natural tendency toward cognitive bias. The core problem is that our social media platforms lack an effective "cognitive immune system" that can resist our own tendency to spread the "cognitive pathogens" that are endemic in our social information environment.

Consider how living organisms have evolved to contain infections. We did that not by developing impermeable skins that could be counted on to keep all infections out, nor by making all of our cells so invulnerable that they can resist whatever infectious agents may unpredictably appear.

We have powerfully complemented what we can do in those ways by developing a richly nuanced internal immune system that is deeply embedded throughout our tissues. That immune system uses emergent processes at a system-wide level -- to first learn to identify dangerous agents of disease, and then to learn how to resist their replication and virulence as they try to spread through our system.

The problem is that our social media lack an effective "cognitive immune system" of this kind. 

In fact many of our social media platforms are designed by the businesses that operate them to maximize engagement so they can sell ads. In doing so, they have learned that spreading incendiary disinformation that makes people angry and upset, polarizing them into warring factions, increases their engagement. As a result, these platforms actually learn to spread disease rather than to build immunity. They learn to exploit the fact that people have cognitive biases that make them want to be cocooned in comfortable filter bubbles and feel-good echo-chambers, and to ignore and refute anything that might challenge beliefs that are wrong but comfortable. They work against our human values, not for them.

What are we doing about it? Are we addressing this deep issue of immunity, or are we just putting on band-aids and hoping we can teach people to be smarter? (As a related issue, are we addressing the underlying issue of business model incentives?) Current efforts seem to be focused on measures at the end-points of our social media systems:
  • Stopping disinformation at the source. We certainly should apply band-aids to prevent bad-actors from injecting our media with news, posts, and other items that are intentionally false and dishonest. Of course we should seek to block such items and those who inject them. Band-aids are useful when we find an open wound that germs are gaining entry through. But band-aids are still just band-aids.
  • Making it easier for individuals to recognize when items they receive may be harmful because they are not what they seem. We certainly should provide "immune markers" in the form of consumer-reports-like ratings of items and of the publishers or people who produce them (as many are seeking to do). Making such markers visible to users can help prime them to be more skeptical, and perhaps apply more critical thinking -- much like applying an antiseptic. But that depends on the willingness of users to pay attention to such markers and apply the antiseptic. There is good reason to doubt that will have more than modest effectiveness, given people's natural laziness and instinct for thinking fast rather than slow. (Many social media users "like" items based only on click-bait headlines that are often inflammatory and misleading, without even reading the item -- and that is often enough to cause those items to spread massively.)
These end-point measures are helpful and should be aggressively pursued, but we need to urgently pursue a more systemic strategy of defense. We need to address the problem of dissemination and amplification itself. We need to be much smarter about what gets spread -- from whom, to whom, and why.

Doing that means getting deep into the guts of how our media are filtered and disseminated, step by step, through the "viral" amplification layers of the media systems that connect us. That means integrating a cognitive immune system into the core of our social media platforms. Getting the platform owners to buy in to that will be challenging, but it is the only effective remedy.

Building a cognitive immune system -- the biological parallel

This perspective comes out of work I have been doing for decades, and have written about on this blog (and in a patent filing since released into the public domain). That work centers on ideas for augmenting human intelligence with computer support. More specifically, it centers on augmenting the wisdom of crowds. It is based on the idea that our wisdom is not the simple result of a majority vote -- but results from an emergent process that applies smart filters that rate the raters and weight the ratings. That provides a way to learn which votes should be more equal than others (in a way that is democratic and egalitarian, but also merit-based). This approach is explained in the posts listed below. It extends an approach that has been developing for centuries.

Supportive of those perspectives, I recently turned to some work on biological immunity that uses the term "cognitive immune system." That work highlights the rich informational aspects of actual immune systems, as a model for understanding how these systems work at a systems level. As noted in one paper (see longer extract below*), biological immune systems are "cognitive, adaptive, fault-tolerant, and fuzzy conceptually." I have only begun to think about the parallels here, but it is apparent that the system architecture I have proposed in my other posts is at least broadly parallel, being also "cognitive, adaptive, fault-tolerant, and fuzzy conceptually." (Of course being "fuzzy conceptually" makes it not the easiest thing to explain and build, but when that is the inherent nature of the problem, it may also necessarily be the essential nature of the solution -- just as it is for biological immune systems.)

An important aspect of this being "fuzzy conceptually" is what I call The Tao of Truth. We can't definitively declare good-faith "speech" as "fake" or "false" in the abstract. Validity is "fuzzy" because it depends on context and interpretation. ("Fuzzy logic" recognizes that in the real world, it is often the case that facts are not entirely true or false but, rather, have degrees of truth.)  That is why only the clearest cases of disinformation can be safely cut off at the source. But we can develop a robust system for ranking the probable (fuzzy) value and truthfulness of speech, revising those rankings, and using that to decide how to share it with whom. For practical purposes, truth is a filtering process, and we can get much smarter about how we apply our collective intelligence to do our filtering. It seems the concepts of "danger" and "self/not-self" in our immune systems have a similarly fuzzy Tao -- many denizens of our microbiome that are not "self" are beneficial to us, and our immune systems have learned that we live better with them inside of us.

My proposals

Expansion on the architecture I have proposed for a cognitive immune system -- and the need for it -- is here:
  • The Tao of Fake News – the essential need for fuzziness in our logic: the inherent limits of experts, moderators, and rating agencies – and the need for augmenting the wisdom of the crowd (as essential to maintaining the intellectual openness of our democratic/enlightenment values).
(These works did not explicitly address the parallels with biological cognitive immune systems -- exploring those parallels might well lead to improvements on these strategies.)

To those without a background in the technology of modern information platforms, this brief outline may seem abstract and unclear. But as noted in these more detailed posts, these methods are a generalization of methods used by Google (in its PageRank algorithm) to do highly context-relevant filtering of search results using a similar rate the raters and weight the ratings strategy. (That is also "cognitive, adaptive, fault-tolerant, and fuzzy conceptually.") These methods are not simple, but they are only a small stretch from the current computational methods of search engines, or from the ad targeting methods already well-developed by Facebook and others. They can be readily applied -- if the platforms can be motivated to do so.

Broader issues of support for our cognitive immune system

The issue of motivation to do this is crucial. For the kind of cognitive immune system I propose to be effective, it must be built deeply into the guts of our social media platforms (whether directly, or via APIs). As noted above, getting incumbent platforms to shift their business models to align their internal incentives with that need will be challenging. But I suggest it need not be as difficult as it might seem.
A related non-technical issue that many have noted is the need for education of citizens 1) in critical thinking, and 2) in the civics of our democracy. Both seem to have been badly neglected in recent decades. Aggressively remedying that is important, to help inoculate users from disinformation and sloppy thinking -- but that will have limited effectiveness unless we alter the overwhelmingly fast dynamics of our information flows (with the cognitive immune system suggested here) -- to help make us smarter, not dumber in the face of this deluge of information.

---
[Update 10/12:] A subsequent Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and the difficulty in definitively identifying items as fakes that can rightly be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.
-----
*Background on our Immune Systems -- from the introduction to the paper mentioned above, "A Cognitive Computational Model Inspired by the Immune System Response" (emphasis added):
The immune system (IS) is by nature a highly distributed, adaptive, and self-organized system that maintains a memory of past encounters and has the ability to continuously learn about new encounters; the immune system as a whole is being interpreted as an intelligent agent. The immune system, along with the central nervous system, represents the most complex biological system in nature [1]. This paper is an attempt to investigate and analyze the immune system response (ISR) in an effort to build a framework inspired by ISR. This framework maintains the same features as the IS itself; it is cognitive, adaptive, fault-tolerant, and fuzzy conceptually. The paper sets three phases for ISR operating sequentially, namely, “recognition,” “decision making,” and “execution,” in addition to another phase operating in parallel which is “maturation.” This paper approaches these phases in detail as a component based architecture model. Then, we will introduce a proposal for a new hybrid and cognitive architecture inspired by ISR. The framework could be used in interdisciplinary systems as manifested in the ISR simulation. Then we will be moving to a high level architecture for the complex adaptive system. IS, as a first class adaptive system, operates on the body context (antigens, body cells, and immune cells). ISR matured over time and enriched its own knowledge base, while neither the context nor the knowledge base is constant, so the response will not be exactly the same even when the immune system encounters the same antigen. A wide range of disciplines is to be discussed in the paper, including artificial intelligence, computational immunology, artificial immune system, and distributed complex adaptive systems. Immunology is one of the fields in biology where the roles of computational and mathematical modeling and analysis were recognized...
The paper supposes that immune system is a cognitive system; IS has beliefs, knowledge, and view about concrete things in our bodies [created out of an ongoing emergent process], which gives IS the ability to abstract, filter, and classify the information to take the proper decisions.

Monday, August 27, 2018

The Tao of Fake News / The Tao of Truth

We are smarter than this!

Everyone with any sense sees "fake news" disinformation campaigns as an existential threat to "truth, justice, and the American Way," but we keep looking for a Superman to sort out what is true and what is fake. A moment's reflection shows that, no, Virginia, there is no SuperArbiter of truth. No matter who you choose to check or rate content, there will always be more or less legitimate claims of improper bias.
  • We can't rely on "experts" or "moderators" or any kind of "Consumer Reports" of news. We certainly can't rely on the Likes of the Crowd, a simplistic form of the Wisdom of the Crowd that is too prone to "The Madness of Crowds." 
  • But we can Augment the Wisdom of the Crowd.
  • We can't definitively declare good-faith "speech" as "fake" or "false." 
  • But we can develop a robust system for ranking the probable value and truthfulness of speech, revising those rankings, and using that to decide how to share it with whom.
For practical purposes, truth is a filtering process, and we can get much smarter about how we apply our collective intelligence to do our filtering.

The Tao of Fake News, Truth, and Meaning

Truth is a process. Truth is complex. Truth depends on interpretation and context. Meaning depends on who is saying something, to whom, and why (as Humpty-Dumpty observed). The truth in Rashomon is different for each of the characters. Truth is often very hard for individuals (even "experts") to parse.

Truth is a process, because there is no practical way to ensure that people speak the truth, nor any easy way to determine if they have spoken the truth. Many look to the idea of flagging fake news sources, but who judges, on what basis and what aspects? (A recent Nieman Lab assessment of NewsGuard's attempt to do this shows how open to dispute even well-funded, highly professional efforts to do that are.)

Truth is a filtering process: How do we filter true speech from false speech? Over centuries we have come to rely on juries and similar kinds of panels, working in a structured process to draw out and "augment" the collective wisdom of a small crowd. In the sciences, we have a more richly structured process for augmenting the collective wisdom of a large crowd of scientists (and their experiments), informally weighing the authority of each member of the crowd -- and avoiding over-reliance on a few "experts." Our truths are not black and white, absolute, and eternal -- they are contingent, nuanced, and tentative -- but this Tao of truth has served us well.

It is now urgent that our methods for augmenting and filtering our collective wisdom be enhanced. We need to apply computer-mediated collaboration to apply a similar augmented wisdom of the crowd at Internet scale and speed. We can make quick initial assessments, then adapt, grow, and refine our assessments of what is true, in what way, and with regard to what.

Filtering truth -- networks, context, and community

If our goal is to exclude all false and harmful material, we will fail. The nature of truth, and of human values, is too complex. We can exclude the most obviously pernicious frauds -- but for good-faith speech from humans in a free society, we must rely on a more nuanced kind of wisdom.

Our media filter what we see. Now the filters in our dominant social media are controlled by a few corporations motivated to maximize ad revenue by maximizing engagement. They work to serve the advertisers that are their customers, not us users (who now are really their product). We need to get them to change how the filters operate, to maximize value to their users.

We need filters to be tuned to the real value of speech as communication from one person to other people.  Most people want the "firehose" of items on the Internet to be filtered in some way, but just how may vary. Our filters need to be responsive to the desires of the recipients. Partisans may like the comfort of their distorting filter bubbles, but most people will want at least some level of value, quality, and reality, at least some of the time. We can reinforce that by doing well at it.

There is also the fact that people live in communities. Standards for what is truthful and valuable vary from community to community -- and communities and people change over time. This is clearer than ever, now that our social networks are global.

Freedom of speech requires that objectionable speech be speak-able, with very narrow exceptions. The issue is who hears that speech, and what control they have over what they hear. A related issue is when third parties have a right to influence those listener choices, and how to keep that intrusive hand as light as possible. Some may think we should never see a swastika or a heresy, but who has the right to draw such lines for everyone in every context?

We cannot shut off objectionable speech, but we can get smarter about managing how it spreads. 

To see this more clearly, consider our human social network as a system of collective intelligence, one that informs an operational definition of truth. Whether at the level of a single social network like Facebook, or all of our information networks, we have three kinds of elements:
  • Sources of information items (publishers, ordinary people, organizations, and even bots) 
  • Recipients of information items  
  • Distribution systems that connect the sources and recipients using filters and presentation services that determine what we see and how we see it (including optional indicators of likely truthfulness, bias, and quality). A minimal data-model sketch of these three elements follows below.
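Here is that sketch, in Python; the field names are my own illustration of the three elements, not any platform's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Participant:
    """A source and/or recipient of information items."""
    user_id: str
    reliability: float = 0.5              # learned "rating of the rater"
    communities: List[str] = field(default_factory=list)

@dataclass
class Item:
    """Any piece of content: a post, comment, like, or share."""
    item_id: str
    source_id: str
    content: str
    about_item_id: Optional[str] = None   # a comment or share implicitly rates another item
    rating: Optional[float] = None        # explicit or inferred rating carried by this item

@dataclass
class FeedDecision:
    """The distribution system's output: what to show a recipient, and how prominently."""
    item_id: str
    recipient_id: str
    rank_score: float                     # higher = shown more prominently; low = downranked
    quality_indicator: Optional[str] = None  # optional label on likely truthfulness, bias, quality
```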
Controlling truth at the source may, at first, seem the simple solution, but requires a level of control of speech that is inconsistent with a free society. Letting misinformation and harmful content enter our networks may seem unacceptable, but (with narrow exceptions) censorship is just not a good solution.

Some question whether it is enough to "downrank" items in our feeds (not deleted, but less likely to be presented to us), but what better option do we have than to do that wisely? The best we can reasonably do is manage the spread of low quality and harmful information in a way that is respectful of the rights of both sources and recipients, to limit harm and maximize value.*

How can we do that, and who should control it? We, the people, should control it ourselves (with some limited oversight and support).  Here is how.

Getting smarter -- The Augmented Wisdom of Crowds

Neither automation nor human intelligence alone is up to the scale and dynamics of the problem.  We need a computer-augmented approach to managing the wisdom of the crowd -- as embodied in our filters, and controlled by us. That will pull in all of the human intelligence we can access, and apply algorithms and machine learning (with human oversight) to refine and apply it. The good news is that we have the technology to do that. It is just a matter of the will to develop and apply it.

My previous post outlines a practical strategy for doing that -- "The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings." Google has already shown how powerful a parallel form of this strategy can be to filter which search results should be presented to whom -- on Internet scale. My proposal is to broaden these methods to filter what our social media present to us.

The method is one of considering all available "signals" in the network and learning how to use them to inform our filtering process. The core of the information filtering process -- that can be used for all kinds of media, including our social media -- is to use all the data signals that our media systems have about our activity. We can consider activity patterns across these three dimensions:
  • Information items (content of any kind, including news items, personal updates, comments/replies, likes, and shares/retweets).
  • Participants (and communities and sub-communities of participants), who can serve as both sources and recipients of items (and of items about other items)
  • Subject and task domains (and sub-domains) that give important context to information items and participants.
We can apply this data with the understanding that any item or participant can be rated, and any item can contain one or more ratings (implicit or explicit) of other items and/or participants. The trick is to tease out and make sense of all of these interrelated ratings and relationships. To be smart about that, we must recognize that not all ratings are equal, so we "rate the raters, and weight the ratings" (using any data that signals a rating). We take that to multiple levels -- my reputational authority depends not only on the reputational authority of those who rate me, but on those who rate them (and so on).

This may seem very complicated (and at scale, it is), but Google proved the power of such algorithms to determine which search results are relevant to a user's query (at mind-boggling scale and speed). Their PageRank algorithm considers what pages link to a given page to assess the imputed reputational authority of that page -- with weightings based on the imputed authority of the pages that link to it (again to multiple levels). Facebook uses similarly sophisticated algorithms to determine what ads should be targeted to whom -- tracking and matching user interests, similarities, and communities and matching that with information on their response to similar ads.
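For readers who have not seen it, here is a toy, power-iteration version of the PageRank idea in Python (a simplified sketch, not Google's actual implementation): a node's authority is repeatedly recomputed from the authority of the nodes pointing to it. Substituting users for pages, and endorsements (shares, upvotes, reposts) for links, gives the same recursive "rate the raters" effect described above.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each node to the list of nodes it links to (or endorses).
    Returns an authority score for each node; a node becomes authoritative when
    authoritative nodes point to it, applied recursively."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[node] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: C is endorsed by everyone, so it ends up with the highest authority.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))
```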

In some encouraging news, it was recently reported that Facebook is now also doing a very primitive form of rating the trustworthiness of its users to try to identify fake news -- they track who spreads fake news and who reports abuse truthfully or deceitfully. What I propose is that we take this much farther, and make it central to our filtering strategies for social media and more broadly.

With this strategy, we can improve our media filters to better meet our needs, as follows:
  • Track explicit and implicit signals to determine authority and truthfulness -- both of the speakers (participants) and of the things they say (items) -- drawing on the wisdom of those who hear and repeat it (or otherwise signal how they value it).
  • Do similar tracking to understand the desires and critical thinking skills of each of the recipients
  • Rate the raters (all of us!) -- and weight the votes to favor those with better ratings. Do that n-levels deep (much as Google does).
  • Let the users signal what levels and types of filtering they want. Provide defaults and options to accommodate users desiring different balances of ease or of fine control and reporting. Let users change that as they desire, depending on their wish to relax, to do focused critical thinking, or to open up to serendipity.
  • Provide transparency and auditability -- to each user (and to independent auditors) -- as to what is filtered for them and how.**
  • Open the filtering mechanisms to independent providers, to spur innovation in a competitive marketplace of filtering algorithms for users to choose from (a minimal sketch of such an interface follows this list).
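Here is a minimal sketch of what such an open filtering interface might look like, in Python. The class and method names are hypothetical illustrations of the architecture, not any platform's actual API:

```python
from abc import ABC, abstractmethod

class FilterProvider(ABC):
    """Hypothetical plug-in interface a platform could expose so that independent
    providers can compete on filtering algorithms that users choose among."""

    @abstractmethod
    def rank(self, candidate_items, recipient_profile, settings):
        """Return a list of (item, rank_score, explanation) tuples for this recipient.
        The explanation string supports the per-user transparency and auditability
        called for above."""

    @abstractmethod
    def settings_schema(self):
        """Describe the knobs a user can adjust (e.g. strictness, serendipity,
        community emphasis), so easy defaults and fine-grained control can coexist."""
```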
That is the best broad solution that we can apply. As we get good at it we will be amazed at how effective it can be. But given the catastrophic folly of where we have let this get to...

First, do no harm!

Most urgently, we need to change the incentives of our filters to do good, not harm. At present, our filters are pouring gasoline on the fires (even as their corporate owners claim to be trying to put them out). As explained in a recent HBR article, "current digital advertising business models incentivize the spread of false news." That article explains the insidious problem of the ad model for paying for services (others have called it "the original sin of the Web") and offers some sensible remedies.  

I have proposed more innovative approaches to better-aligning business models -- and to using a light-handed, market-driven, regulatory approach to mandate doing that -- in "An Open Letter to Influencers Concerned About Facebook and Other Platforms."

We have learned that the Internet has all the messiness of humanity and its truths. We are facing a Pearl Harbor of a thousand pin-pricks that is rapidly escalating. We must mobilize onto a war footing now, to halt that before it is too late.
  • First we need to understand the nature and urgency of this threat to democracy, 
  • Then we must move on both short and longer time horizons to slow and then reverse the threat. 
The Tao of fake news contains its opposite, the Tao of Augmented Wisdom. If we seek that, the result will be not only to manage fake news, but to be smarter in our collective wisdom than we can now imagine.

---
*Of course some information items will be clearly malicious, coming from fraudulent human accounts or bots -- and shutting some of that off at the source is feasible and desirable. But much of the spread of "fake news" (malicious or not) is from real people acting in good faith, in accord with their understanding and beliefs. We cannot escape that non-binary nature of human reality, and must come to terms with our world in nuanced shades of gray. But we can get very sophisticated at distinguishing when news is spread by participants who are usually reliable from when it is spread by those who have established a reputation for being credulous, biased, or malicious.

**The usual concern with transparency is that if the algorithms are known, then bad-actors will game them. That is a valid concern, and some have suggested that even if the how of the filtering algorithm is secret, we should be able to see and audit the why for a given result.  But to the extent that there is an open market in filtering methods (and in countermeasures to disinformation), and our filters vary from user to user and time to time, there will be so much variability in the algorithms that it will be hard to game them effectively.

---
[Update 8/30/18:]  Giuliani and The Tao of Truth 

To indulge in some timely musing, the Tao of Truth gives a perspective on the widely noted recent public statement that "truth isn't truth." At the level of the Tao, we can say that "truth is/isn't truth," or more precisely, "truth is/isn't Truth" (with one capital T). That is the level at which we understand truth to be a process in which the question "what is truth?" depends on what we mean, at what level, in what context, with what assurance -- and how far we are in that process. We as a society have developed a broadly shared expectation of how that process should work. But as the process does its never-ending work, there are no absolutes -- only more or less strong evidence, reasoning, and consensus about what we believe the relevant truth to be. (That, of course, is an Enlightenment social perspective, and some disagree with this very process, and instead favor a more absolute and authoritarian variation. Perhaps most fundamentally, we are now in a reactionary time in which our prevailing process for truth is being prominently questioned. The hope here is that continuing development of a free, open, and wise process prevails over return to a closed, authoritarian one -- and prevails over the loss of any consensus at all.)

[Update 10/12/18:] A Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and the difficulty in definitively identifying items as fakes that can be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.

[Update 11/2/20:] A nice article on the importance of understanding the social nature of truth ("epistemic dependence" -- our reliance on others' knowledge -- "knowing vicariously"), and the interplay of evidence, trust, and authority, is in MIT Tech Review. It refers to a much-cited fundamental paper on epistemic dependence from 1985.

---
See the Selected Items tab for more on this theme.