Showing posts with label disinfo.

Monday, January 27, 2020

Make it So, Now! - 10 Ways Tech Platforms Can Safeguard the 2020 Election

"Ten things technology platforms can do to safeguard the 2020 U.S. election" is an urgent and vital statement that we should all read -- and do all we can to make happen -- especially if you have any connection to the platforms, Congress, or regulators (or the press). Hopefully, anyone reading this understands why this is urgent (but the article begins with a brief reminder).

Thirteen prominent thought leaders "met...to discuss immediate steps the major social media companies can take to help safeguard our democratic process and mitigate the weaponization of their platforms in the run-up to the 2020 U.S. elections." They published this as a "living document."

Here is their list of  "What can be done … now" (the article explains each):
  1. Remove and archive fraudulent and automated accounts
  2. Clearly identify paid political posts — even when they’re shared
  3. Use consistent definitions of an ad or paid post
  4. Verify and accurately disclose advertising entities in political ads
  5. Require certification for political ads to receive organic reach
  6. Remove pricing incentives for presidential candidates that reward virality (including a limit on microtargeting)
  7. Provide detailed resources with accurate voting information at top of feeds
  8. Provide a more transparent and consistent set of data in political ad archives
  9. Clarifying where they draw the line on “lying”
  10. Be transparent about the resources they are putting into safety and security
All of these should be do-able in a matter of months.  While many of the signatories "...are working on longer-term ways to create a healthier, safer internet, [they] are proposing more immediate steps that could be implemented before the 2020 election for Facebook and other social media platforms to consider." 

The writers include "a Facebook co-founder, former Facebook, Google and Twitter employees, early Facebook and Twitter investors, academics, non-profit leaders, national security and public policy professionals:" John Borthwick, Sean Eldridge, Yael Eisenstat, Nir Erfat, Tristan Harris, Justin Hendrix, Chris Hughes, Young Mie Kim, Roger McNamee, Adav Noti, Eli Pariser, Trevor Potter and Vivian Schiller.

I, too, am working on longer term issues, as outlined in this recent summary in the context of some important think tank reports: Regulating our Platforms -- A Deeper Vision. Similarly, I have addressed one of the most urgent stop-gap issues (which is part of their #6) in 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging.

Friday, January 10, 2020

The Dis-information Choke Point: Dis-tribution (Not Supply or Demand) [Stub]

Demand for Deceit: How the Way We Think Drives Disinformation, is an excellent report from the National Endowment for Democracy (by Samuel Woolley and Katie Joseff, 1/8/20). It highlights the dual importance of both supply and demand side factors in the problem of disinformation (fake news). That crystallizes in my mind an essential gap in this field -- smarter control of distribution. The importance of this third element that mediates between supply and demand was implicit in my comments on algorithms (in section #2 of the prior post).

[This is a stub for a fuller post yet to come. (It is an adaptation of a brief update to my prior post on Regulating the Platforms, but deserves separate treatment.)]

There is little fundamentally new about the supply or the demand for disinformation.  What is fundamentally new is how disinformation is distributed.  That is what we most urgently need to fix. If disinformation falls in a forest… but appears in no one’s feed, does it disinform?

In social media a new form of distribution mediates between supply and demand.  The media platform does filtering that upranks or downranks content, and so governs what users see.  If disinformation is downranked, we will not see it -- even if it is posted and potentially accessible to billions of people.  Filtered distribution is what makes social media not just more information, faster, but an entirely new kind of medium.  Filtering is a new, automated form of moderation and amplification.  That has implications for both the design and the regulation of social media.

[Update: see comments below on Facebook's 2/17/20 White Paper on Regulation.] 

Controlling the choke point

By changing social media filtering algorithms we can dramatically reduce the distribution of disinformation.  It is widely recognized that there is a problem of distribution: current social media promote content that angers and polarizes because that increases engagement and thus ad revenues.  Instead the services could filter for quality and value to users, but they have little incentive to do so.  What little effort they have ever made to do that has been lost in their quest for ad revenue.

Social media marketers speak of "amplification." It is easy to see the supply and demand for disinformation, but marketing professionals know that it is amplification in distribution that makes all the difference. Distribution is the critical choke point for controlling this newly amplified spread of disinformation. (And as Feld points out, the First Amendment does not protect inappropriate uses of loudspeakers.)

While this is a complex area that warrants much study, as the report observes, the arguments cited against the importance of filter bubbles in the box on page 10 are less relevant to social media, where the filters are largely based on the user’s social graph (who promotes items to be fed to them, in the form of posts, likes, comments, and shares), not just active search behavior (what they search for). 

Changing the behavior of demand is clearly desirable, but a very long and costly effort. It is recognized that we cannot stop the supply. But we can control distribution -- changing filtering algorithms could have significant impact rapidly, and would apply across the board, at Internet scale and speed -- if the social media platforms could be motivated to design better algorithms.

How can we do that? A quick summary of key points from my prior posts...

We seem to forget what Google’s original PageRank algorithm had taught us.  Content quality can be inferred algorithmically based on human user behaviors, without intrinsic understanding of the meaning of the content.  Algorithms can be enhanced to be far more nuanced.  The current upranking is based on likes from all of one’s social graph -- all treated as equally valid.  Instead, we can design algorithms that learn to recognize the user behaviors on page 8, to learn which users share responsibly (reading more than headlines and showing discernment for quality) and which are promiscuous (sharing reflexively, with minimal dwell time) or malicious (repeatedly sharing content determined to be disinformation).  Why should those users have more than minimal influence on what other users see?

The spread of disinformation could be dramatically reduced by upranking “votes” on what to share from users with good reputations, and downranking votes from those with poor reputations.  I explain further in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News and In the War on Fake News, All of Us are Soldiers, Already!  More specifics on designing such algorithms are in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings.  Social media now reflect the wisdom of the mob -- instead we need to seek the wisdom of the smart crowd.  That is what society has sought to do for centuries.
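To make this concrete, here is a minimal sketch of the "rate the raters and weight the ratings" idea -- in Python, with hypothetical names and toy reputation values, not anything any platform actually runs. Each share or like is treated as a vote on an item, and each vote is weighted by the voter's learned reputation, so low-reputation or malicious accounts have little influence on what gets fed to others:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    user_id: str
    strength: float   # e.g. 1.0 for a share, 0.5 for a like, 0.25 for a long dwell

# Hypothetical reputations, learned elsewhere from past sharing behavior
# (0.0 = known bad actor, 1.0 = consistently reliable sharer).
REPUTATION = {"alice": 0.9, "bob": 0.6, "troll_farm_17": 0.05}

def item_score(votes, default_rep=0.3):
    """Weight each user's 'vote' by that user's reputation, so that
    promiscuous or malicious sharers add little to an item's rank."""
    if not votes:
        return 0.0
    weighted = sum(REPUTATION.get(v.user_id, default_rep) * v.strength for v in votes)
    return weighted / len(votes)   # normalize so raw volume alone cannot dominate

# A post amplified mostly by low-reputation accounts ranks below one
# shared thoughtfully by a few reliable users, despite far fewer shares.
viral_junk = [Vote("troll_farm_17", 1.0)] * 50 + [Vote("bob", 0.5)]
quiet_quality = [Vote("alice", 1.0), Vote("bob", 1.0)]
print(item_score(viral_junk))      # low, ~0.06
print(item_score(quiet_quality))   # higher, 0.75
```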

Beyond that, better algorithms could combat the social media filter bubble effects by applying judo to the active drivers noted on page 8.  Cass Sunstein suggested “surprising validators” in 2012 as one way this might be done, and I built on that to explain how it could be applied in social media algorithms:  Filtering for Serendipity -- Extremism, 'Filter Bubbles' and 'Surprising Validators’.

If platforms and regulators focused more on what such distribution algorithms could do, they might take action to make that happen (as addressed in Regulating our Platforms -- A Deeper Vision).

Yes, "the way we think drives disinformation," and social media distribution algorithms drive how we think -- we can drive them for good, not bad!

---
Background note: NiemanLab today pointed to a PNAS paper showing evidence that "... ratings given by our [lay] participants were very strongly correlated with ratings provided by professional fact-checkers. Thus, incorporating the trust ratings of laypeople into social media ranking algorithms may effectively identify low-quality news outlets and could well reduce the amount of misinformation circulating online." The study was based on explicit quality judgments, but using implicit data on quality judgments, as I suggest, should be similarly correlated -- and could apply the imputed judgments of every social media user who interacted with an item, with no added user effort.

[Update:] 
Comments on Facebook's 2/17/20 White Paper, Charting a Way Forward on Online Content Regulation

This is an interesting document, with some good discussion, but while it provides evidence that leads to the point I make here, it totally misses seeing it. Again this seems to be a case in which "It is difficult to get a man to understand something when his job depends on not understanding it."

The report makes the important point that:
Companies may be able to predict the harmfulness of posts by assessing the likely reach of content (through distribution trends and likely virality), assessing the likelihood that a reported post violates (through review with artificial intelligence), or assessing the likely severity of reported content
So Facebook understands that they can predict "the likely reach of content" -- why not influence it??? It is their distribution process and filtering algorithms that control "the likely reach of content." Why not throttle distribution to reduce the reach in accord with the predicted severity of the violation? Why not gather realtime feedback from the distribution process (including the responses of users) to refine those predictions, so they can course correct the initial predictions and rapidly refine the level of the throttle? That is what I have suggested in many posts, notably In the War on Fake News, All of Us are Soldiers, Already!
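As a purely hypothetical sketch (not anything Facebook has described), the mechanics of such a throttle are simple: map the predicted severity of a violation to a cap on reach, then course-correct that prediction as realtime feedback arrives from early distribution, weighting reports from users whose past reports proved reliable:

```python
def reach_cap(predicted_severity, base_reach=1_000_000):
    """Map a predicted violation severity (0.0 benign .. 1.0 severe)
    to a cap on how many feeds the item may be ranked into."""
    return int(base_reach * (1.0 - predicted_severity) ** 3)

def update_severity(prior, user_reports, reliable_reports, impressions):
    """Course-correct the initial prediction from realtime signals,
    counting reports from historically reliable reporters more heavily."""
    if impressions == 0:
        return prior
    signal = (user_reports + 4 * reliable_reports) / impressions
    return min(1.0, 0.7 * prior + 0.3 * min(1.0, 20 * signal))

# The initial classifier guess is mild; early distribution draws heavy
# flagging, so the throttle tightens hour by hour.
severity = 0.3
for hour, (reports, reliable, seen) in enumerate([(5, 1, 2_000), (400, 60, 8_000)]):
    severity = update_severity(severity, reports, reliable, seen)
    print(f"hour {hour}: severity={severity:.2f}, reach cap={reach_cap(severity):,}")
```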


See the Selected Items tab for more on this theme.

Wednesday, November 06, 2019

2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging

Zuckerberg has rationalized that Facebook should do nothing about lies, and Dorsey has taken Twitter to the other extreme of an indiscriminate ad ban. But a readily actionable Goldilocks solution has emerged in response – and there are reports that Facebook is considering it.*

[This post focuses on stopgap solutions for controversial and urgent concerns leading in to the 2020 election. My prior post, Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us), addresses the deeper abuses related to microtargeting and how everything in our feeds is filtered.]

[Update 5/29/20:]
Two bills to limit microtargeting of political ads have been introduced in Congress, one by Rep. Eshoo, and one by Rep. Cicilline.  Both are along the lines of the proposals described here. (More at end.)

The real problem

While dishonest political ads are a problem, that in itself is nothing new that we cannot deal with.  What is new is microtargeting of dishonest ads, and that has created a crisis that puts the fairness of our elections in serious doubt.  Numerous sophisticated observers – including the chair of the Federal Election Commission and the former head of security at Facebook -- have identified a far better stopgap solution than an outright ban on all political ads (or doing nothing).

Since the real problem is microtargeting, the “just right” quick solution is to limit microtargeting (at least until we have better ways to control it).  Microtargeting provides the new and insidious capability for a political campaign to precisely tailor its messaging to microsegments of voters who are vulnerable to being manipulated in one way, while sending many different, conflicting messages to other microsegments who can be manipulated in other ways – by precision targeting down to designated sets of individual voters (such as with multi-faceted categories or with Facebook Custom Audiences). The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). This new kind of message amplification has been weaponized to incite extreme radicalization and even violent action.

We must be clear that there is a right of speech, but only limited rights to amplification or targeting. We have always had political ads that lie. America was founded on the principle that the best counter to lies is not censorship, but truth. Policing lies is a very slippery slope, but when a lie is out in the open, it can be exposed, debunked, and shamed. Sunlight has proven the best disinfectant. With microtargeting there is no exposure to sunlight and shame.
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
  • These abuses are hidden from others and generally not auditable. That compounds the harm of lies, since they can be targeted to manipulate factions surreptitiously. 
Consensus for a stopgap solution

In the past week or so, limits on microtargeting have been suggested to take a range of forms, all of which seem workable and feasible:
  • Ellen Weintraub, chair of the Federal Election Commission (in the Washington Post), Don’t abolish political ads on social media. Stop microtargeting, suggests “A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed.”
  • Alex Stamos, former Facebook security chief, in an interview with Columbia Journalism Review, suggests “There are a lot of ways you can try to regulate this, but I think the simplest is a requirement that the "segment" somebody can hit has a floor. Maybe 10,000 people for a presidential election, 1,000 for a Congressional.”
  • Siva Vaidhyanathan, in the NY Times, suggests "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race."
  • In my prior post I suggested “allow ads to only be run...in a way that is no more targeted than traditional media…such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.”
There seems to be an emerging consensus that this is the best we can expect to achieve in the short run, in time to protect the 2020 election. This is something that Zuckerberg, Dorsey, and others (such as Google) could just decide to do -- or might be pressured to do. NBC News reported yesterday that Facebook is considering such an action.

We should all focus on avoiding foolish debate over naive framing of this problem as a dichotomy of "free speech" versus "censorship." The real problem is not the right of free speech, but the more nuanced issues of limited rights to be heard versus the right not to be targeted in ways that use our personal data against our interests.

The longer term

In the longer term, dishonest political ads are only a part of this new problem of abuse of microtargeting, which applies to speech of all kinds -- paid or unpaid, political or commercial. Especially notable is the fact that much of what Cambridge Analytica did was to get ordinary people to spread lies created by bots posing as ordinary people. To solve these problems, we need to change not only how the platforms handle identity, but also how they filter content into our feeds. Filtering content into our feeds is a user service that should be designed to provide the value that users, not advertisers, seek.

There are huge opportunities for innovation here. My prior post explains that, shows how much we are missing because the platforms are now driven by advertiser needs for amplification of their voice, not user needs for filtering of all voices, and it points to how we might change that.


See my prior post for more, plus links to related posts.

---
*[Update 11/7:] WSJ reports Google is considering political ad targeting limits as well.
[Update 11/20:] Google has announced it will impose political ad targeting limits -- Zuck, your move.
[Update 11/22:] WSJ reports Facebook is considering similar political ad targeting limits.

The downside of targeting limits. Meanwhile, there are reports, notably in the NY Times, that highlight the downside of limiting targeting precision in this way. That is why it is prudent to view blanket limits not as a final cure, but as a stopgap:

  • Political campaigns rightly point out how these limits harm legitimate campaign goals: “This change won’t curb disinformation...but it will hinder campaigns and (others) who are already working against the tide against bad actors to reach voters with facts.” “Broad targeting kills fund-raising efficiency”
  • That argues that the real solution is to recognize that platforms do have the right and obligation to police ads of all kinds, including paid political ads, so that legitimate campaigns placing non-abusive ads can be granted an appropriate mix of targeting privileges -- directed to those who choose to receive them.
  • But since we are nowhere near a meaningful implementation of such a solution in time for major upcoming elections, we need a stopgap compromise now. That is why I originally advocated this targeting limit, while noting that it was only a stopgap.
[Update 5/29/20:]
Related to the update at the top, about the bills introduced in Congress, a nice statement quoted in the 5/26/20 Eshoo press release explains the problem in a nutshell:
It used to be true that a politician could tell different things to different voters, but journalists would check whether the politician in question was saying different things to different people and write about it if they found conflicting political promises. That is impossible now because the different messages are shown privately on social media, and a given journalist only has his or her own profile. In other words, it's impossible to have oversight. The status quo, in other words, is bad for democracy. This new bill would address this urgent problem. --Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy and CEO of ORCAA, an algorithmic auditing firm.

=========================
[Supplement 11/8:] These 11/5 updates from my prior post seem worth repeating here as added background


(Alex Stamos from CJR)
In a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos, Stamos offers this useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression get stronger toward the bottom, with "the right of individuals to be exposed to information they have explicitly sought out."

Stamos argues that "Tech platforms should absolutely not fact-check candidates' organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."

Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."

---
See the Selected Items tab for more on this theme.

Thursday, October 31, 2019

Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us)

(Image adapted from The Great Hack movie)
Zuckerberg's recent arguments that Facebook should restrict free expression only in the face of imminent, clear, and egregious harm have generated a storm of discussion -- and a very insightful counter from Dorsey (at Twitter).

But most discussion of these issues misses how social media can be managed without sacrificing our constitutionally protected freedom of expression. It oversimplifies how speech works in social media and misdiagnoses the causes of harm and abuse. 

[Update: A newer 11/6 post focuses on stopgap solutions for controversial and urgent concerns leading in to the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. This post focuses on the broader and deeper abuses of microtargeting, and how everything in our feeds is filtered.]

Much of this debate seems like blind men arguing over how to control an elephant when they don't yet understand what an elephant is. That is compounded by an elephant driver who exploits that confusion to do what he likes. (Is he, too, blind? ...or just motivated not to see the harm his elephant does?)

I suggest some simple principles can lead to a more productive solution. Effective regulation -- whether self-regulation by the platforms, or by government -- requires understanding that we are really dealing with a new and powerfully expanded kind of hybrid media -- which is provided by a new and powerfully expanded kind of hybrid platform. That understanding suggests how to find a proper balance that protects free expression without doing great harm.

(This is a preliminary outline that I hope to expand on and refine. In the  meantime, some valuable references are suggested.) 

The essence of the problem

I suggest these three simple principles as overarching:
  1. Clearly, we need to protect "free speech," and a "free press," the First Amendment rights that are essential to our democracy and to our "marketplace of ideas." Zuckerberg is right that we need to be vigilant against overreaching cures -- in the form of censorship -- that may be worse than the disease.
  2. But he and his opponents both seem to misunderstand the nature of these new platforms. The real problem arises from the new services these platforms enable: precision targeted delivery services are neither protected speech, nor the protected press. They are a new kind of add-on service, separate from speech or the press. 
  3. Enabling precision targeted delivery against our interests, based on data extracted from us without informed consent is an abuse of power -- by the platforms -- and by the advertisers who pay them for that microtargeted delivery service. This is not a question of whether our data is private (or even wholly ours) -- it is a question of the legitimate use of data that we have rights in versus uses of that data that we have rights to disallow (both individually and as a society). It is also a question of when manipulative use of targeted ads constitutes deceptive advertising, which is not protected speech, and what constraints should be placed on paid targeting of messages to users. 
By controlling precision targeted delivery of speech, we can limit harmful behavior in the dissemination of speech -- without censorship of that speech.

While finalizing this post, I realized that Renee DiResta made some similar points under the title Free Speech Is Not the Same As Free Reach, her 2018 Wired article that explains this problem using that slightly different but equally pointed turn of phrase. With some helpful background, DiResta observed that:
...in this moment, the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.
...So what can we do about it? The solution isn’t to outlaw algorithmic ranking or make noise about legislating what results Google can return... 
...there is a trust problem, and a lack of understanding of how rankings and feeds work, and that allows bad-faith politicking to gain traction. The best solution to that is to increase transparency and internet literacy, enabling users to have a better understanding of why they see what they see—and to build these powerful curatorial systems with a sense of responsibility for what they return.
In the following sections, I outline novel suggestions for how to go farther to manage this problem of free reach/free targeting -- in a way that drives the platforms to make their algorithms more controllable by their users, for their users. Notice the semantics: targeting and reach are both done to users -- filtering is done for users.

========================================================
Sidebar: The Elements of Social Media

Before continuing -- since even Zuckerberg seems to be confused about the nature of his elephant -- let's review the essential elements of Facebook and other social media.

Posting: This is the simple part. We start with what Facebook calls the Publisher Box that allows you to "write something" to post a Status Update that you wish to be available to others. By itself, that is little more than an easy-to-update personal Web site (a "microblogging" site) that makes short content items available to anyone who seeks them out. Other users can do that by going to your Timeline/Wall (for Friends or the Public, depending on settings that you can control). For abuse and regulatory purposes, this aspect of Facebook is essentially a user-friendly Web hosting provider -- with no new First Amendment harms or issues.

Individually Filtered News Feeds: This is where things get new and very different. Your News Feed is an individually filtered view of what your friends are saying or commenting on (including what you "Liked" as a kind of comment). Facebook's filtering algorithms filter all of that content, based on some secret algorithm, to show you the items Facebook thinks will most likely engage you. This serves as a new kind of automated moderation. Some items are upranked so they will appear in your feed, others are downranked so they will not be shown in your feed. That ranking is weighted based on the social graph that connects you to your friends, and their friends, and so on -- how much positive interest each item draws from those the fewest degrees apart from you in your social graph. That ranking is also adjusted based on all the other information Facebook has about you and our friends (from observing activity anywhere in the vast Facebook ecosystem, and from external sources). It is this new individually filtered dissemination function of social media platforms that creates this new kind of conflict between free expression and newly enabled harms. (A further important  new layer is the enablement of self-forming Groups of like-minded users who can post items to the group -- and so have them filtered into the feeds of other group members, much like a special type of Friend.)

Targeted Ads: Layered on top of the first two elements, ads are a special kind of posting in which advertisers pay Facebook to have their postings selectively filtered into the news feeds of individual users. Importantly, what is new in social media is that an ad is no longer just crudely targeted to some page in a publication or some time-slot in a video channel that goes to all viewers of that page or channel. Instead, it is precision targeted (microtargeted) to a set of users who fit some narrowly defined combination of criteria (or to a Custom Audience based on specific email addresses). Thus individualized messages can be targeted to just those users predicted to be especially receptive or easily manipulated -- and to remain unseen by others. This creates an entirely new category of harm that is both powerful and secretive. (How insidious this can be has already been demonstrated in Cambridge Analytica's abuse of Facebook.)  In this respect it is much like subliminal advertising (which is banned and not afforded First Amendment protection). The questions about the harm of paid political advertising are especially urgent and compelling, as expressed by none other than Jack Dorsey of Twitter, who has just taken an opposite stand from Zuckerberg, saying “This isn’t about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle. It’s worth stepping back in order to address.” (See more in the "Coda: The urgent issue of paid political advertising.")
========================================================

Why these principles?

For an enlightening and well-researched explanation of the legal background behind my three principles, I recommend The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms, by Harold Feld of Public Knowledge. (My apologies if I mis-characterize any of his points here.)

Feld's Chapter V parses these issues nicely, with a detailed primer on First Amendment issues, as evolved in communications and media law and regulation history. It also provides an analysis of how these platforms are a new kind of hybrid of direct one-to-one and one-to-many communications -- and how they add a new level of self-organizing many-to-many communities (fed by the new filtering algorithms). He explains why we should preserve strong freedom of speech for the one-to-one, but judiciously regulate the one-to-many. He also notes how facilitating creation of self-organizing communities introduces a new set of dangerous issues (including the empowerment of terrorist and hate groups who were previously isolated).

I have previously expressed similar ideas, focusing on better ways to do the filtering and precision targeting of content to an individual level that powers the one-to-many communication on these platforms and drives their self-organizing communities. That filtering and targeting is quantum leaps beyond anything ever before enabled at scale. Unfortunately, it is currently optimized for advertiser value, rather than user value.

The insidious new harm in false speech and other disinformation on these platforms is not in the speech, itself -- and not in simple distribution of the speech -- but in the abuse of this new platform service of precision targeting (microtargeting). Further, the essential harm of the platforms is not that they have our personal information, but in what they do with it. As described in the sidebar above, filtering -- based on our social graphs and other personal data -- is the core service of social media, and that can be a very valuable service. This filtering acts as a new, automated, form of moderation -- one that emerges from the platform's algorithms as they both drive and are driven by the ongoing activity of its users in a powerful new kind of feedback loop. The problem we now face with social media arises when that filtering/moderation service is misguided and abused:
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
So that is where some combination of self-regulation and government regulation is most needed. Feld points to many relevant precedents for content moderation that have been held to be consistent with First Amendment rights, and he suggests that this is a fruitful area for regulatory guidance. My perspective on this is:
  • Regulation and platform self-regulation can be applied to limit social media harms, without impermissible limitation of rights of speech or the press
  • Free expression always entails some risk of harm that we accept as a free society.
  • The harm we can best protect against is not the posting of harmful content, but the delivering of that harmful content to those who have not specifically sought it out. 
  • That is where Zuckerberg completely misses the point (whether by greed, malice, or simple naivete -- “It is difficult to get a man to understand something, when his job depends on his not understanding it”).
  • And that is where many of Zuckerberg's opponents waste their energy fighting the wrong battle -- one they cannot and should not win. 
Freedom of speech (posting), not freedom of intrusion on others who have not invited it.

That new kind of intrusion is the essential issue that most discussion seems to be missing.
  • I suggest that users should retain the right to post information with few restrictions (the narrow exceptions that have traditionally been allowed by the courts as appropriate limits to First Amendment rights). 
  • That can be allowed without undue harm, as long as objectionable content is automatically downranked enough in a filtering (moderation) process to largely avoid sending it to users who do not want such content. 
  • This is consistent with the safe-harbor provisions of Section 230 of the Communications Decency Act of 1996. That was created with thought to the limited and largely unmoderated posting functions of early Web aggregators (notably CompuServe and Prodigy, as litigated at the time). That also accepted the freedom of the myriad independent Web sites that one had to actively seek out. 
  • Given the variation in community standards that complicate the handling of First Amendment rights by global platforms, filtering can also be applied to selectively restrict distribution of postings that are objectionable in specific communities or jurisdictions, without restricting posters or other allowable recipients.
As an important complement to this understanding of the problem, I also argue that users should be granted significant ability to customize the filtering process that serves them. That could better limit the exposure of users (and communities of users) to undesired content, without censoring material they do want.
  • Filtering should be a service for users, and thus should be selectable and adjustable by users to meet their individual desires. That customization should be dynamically modifiable, as a user's desires vary from time to time and task to task. (Some similar selectability has been offered to a limited extent for search -- and should apply even more fully to feeds, recognizing that search and feeds are complementary services.) 
  • Issues here relate not only to one-to-one versus one-to-many, but also to the distinction between the user-active "pull" of requested information (such as a Web site access) and the user-passive "push" of unsolicited information in a platform-driven feed. Getting much smarter about that would have huge value to users, as well as limiting abuses. 
Recipient-controlled "censorship": content filtering, user choice, and competitive innovation

I suggest new levels of censorship of social media postings are generally not needed, because filtering enables a new kind of recipient-controlled "censorship" of delivery.

Social media work because they offer a new kind of filtering service for users -- most particularly, filtering a feed based on one's social graph. That has succeeded in spite of the fact that the platforms currently give their users little say over how that filtering is done (beyond specifying the social graph), and largely use it to manipulate their users rather than serve them. I put that forth as a central argument for regulation and antitrust action against the platforms.

Filtering algorithms should give users the kind of content they value, when they value it:
  • to include or exclude what the user considers to be objectionable or of undesired quality generally
  • to be dynamically selectable  (or able to sense the user's mood, task, and flow state) 
  • to filter for challenge, enlightenment, enjoyment, humor, emotion, support, camaraderie, or relaxation at any given time. 
I explain in detail how smart use of "augmented intelligence" that draws on human inputs can enable that in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings and in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News. This kind of hybrid man+machine intelligence can be far more powerful (and dynamically responsive) than either human or machine intelligence alone in determining the relevance, value, and legitimacy of social media postings (and ads). With this kind of smart real-time filtering of our feeds to protect us, censorship of postings can be limited to clearly improper material. Such methods have gotten little attention because Facebook is secretive about its filtering methods, and has had little incentive to develop them to serve users in this way. (But Google's PageRank algorithm has demonstrated the power of such multilevel rate the raters techniques to determine the relevance, value, and legitimacy of content.)

A monolithic platform like Facebook would be hard-pressed to deliver that level of flexibility and innovation for a full range of user desires and skill levels even if it wanted to. Key strategies to meet this complex need are:
  • to enable users to select from an open market in filtering services, each filtering service provider tuning its algorithms to provide value that competes in the marketplace to appeal to specific segments of users 
  • to combine multiple filtering services and algorithms to produce a desired overall effect
  • to allow filtering algorithm parameters to be changed by their users to vary the mix of algorithms and the operation of individual algorithms at will
  • to also factor in whatever "expert" content rating services they want.
(For an example of how such an open market might be shaped, consider the long-successful model of the open market for analytics that are used to filter financial market data to rank investment options. Think of social media as having user interface agents, repositories of posts, repositories of social graphs, and filtering/presentation tools, where the latter correspond to the financial analytics. Each of those elements might be separable and interoperable in an open competitive market.) 
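As a minimal sketch of how that kind of open, combinable filtering market might look to a user -- with hypothetical service names and a toy scoring scheme, not any existing platform API -- a feed ranking could simply be a user-weighted blend of scores from independent filtering services, with the weights under the user's control and changeable at will:

```python
from typing import Callable, Dict

# Each filtering service exposes a scoring function: post -> score in [0, 1].
# These are stand-ins; in an open market each would be a competing provider.
FilterFn = Callable[[dict], float]

def fact_focus(post: dict) -> float:
    return post.get("source_reliability", 0.5)

def serendipity(post: dict) -> float:
    return 1.0 - post.get("similarity_to_my_views", 0.5)

def lighthearted(post: dict) -> float:
    return post.get("humor", 0.0)

def rank_feed(posts, services: Dict[str, FilterFn], user_weights: Dict[str, float]):
    """Blend scores from the user's chosen filtering services, using
    weights the user can change from moment to moment (news mode,
    relax mode, challenge-me mode, ...)."""
    def blended(post):
        total = sum(user_weights.values()) or 1.0
        return sum(w * services[name](post) for name, w in user_weights.items()) / total
    return sorted(posts, key=blended, reverse=True)

services = {"facts": fact_focus, "serendipity": serendipity, "fun": lighthearted}
evening_mode = {"facts": 1.0, "serendipity": 0.5, "fun": 2.0}   # user-set, not platform-set
posts = [{"source_reliability": 0.9, "similarity_to_my_views": 0.9, "humor": 0.1},
         {"source_reliability": 0.6, "similarity_to_my_views": 0.4, "humor": 0.8}]
print(rank_feed(posts, services, evening_mode))
```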

These proposals have huge implications for speech and democracy, as well as for competitive innovation in augmenting the development of human wisdom (or de-augmenting it, as is happening now). That is how Facebook and other platforms could be much better at "bringing people closer together" without being so devilishly effective at driving them apart.

The need for a New Digital Platform Agency 

While adding bureaucracy is always a concern -- especially relating to the dynamic competitive environment of emerging digital technology -- there are strong arguments for that in this context.

The world is coming to realize that the Chicago School of antitrust that dominated the recent era of narrow antitrust enforcement is not enough. Raising "costs" to consumers is not a sufficient measure of harm when direct monetary costs to consumers are "zero." The real costs are not zero. Understanding what social media could do for us provides a reference point that shows how much we are really paying for the low-value platform services we now have. We cannot afford these supposedly "free" services!

Competition for users could change the value proposition, but this space is too complex, dynamic, and dependent on industry and technical expertise to be left to self-regulation, the courts, or legislation.

We need a new, specialized agency. The Feld report (cited above) offers in-depth support for such an agency, as do the three references recommended in the announcement of a conference on The Debate Over a New Digital Platform Agency: Developing Digital Authority and Expertise. (I recently attended that conference, and plan to post more about it in the near future.)

Just touching on this theme, we need a specialist agency that can regulate the platforms with expertise (much as the FCC has regulated communications and mass media) to find the right balance between the First Amendment and the harmful speech that it does not protect -- and to support open, competitive innovation as this continues to evolve. Many are unaware of the important and productive history here. (I observed from within the Bell System how the FCC and the courts regulated and eventually broke it up, and how this empowered the dynamic competition that led to the open Web and the Internet of Things that we now enjoy.) Inspired by those lessons, I offer specific new suggestions for regulation in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm. Creating such an agency will take time, and be challenging -- but the alternative is to put not only the First Amendment, but our democracy and our freedom at risk.

These problems are hard, both for user speech, and for the special problem of paid advertising, which gives the platforms an incentive to serve advertisers, not users. As Dorsey of Twitter put it:
These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings. Trying to fix both means fixing neither well, and harms our credibility. ...For instance, it‘s not credible for us to say: “We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want! 😉”
I have outlined a promising path toward solutions that preserve our freedom of speech while managing proper targeting of that speech, the underlying issue that few seem to recognize. But it will be a long and winding road, one that almost certainly requires a specialized agency to set guidelines, monitor, and adjust, as we find our way in this evolving new world.

Coda: The urgent issue of paid political advertising

The current firestorm regarding paid political advertising highlights one area where my proposals for smarter filtering and expert regulation are especially urgent, and where the case for reasonable controls on speech is especially well founded. My arguments for user control of filtering would have advertising targeting options be clearly subordinate to user filtering preferences. That seems to be sound in terms of First Amendment law, and common sense. Amplifying that are the arguments I have made elsewhere (Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data) that advertising can be done in ways that better serve both users and well-intended advertisers. All parties win when ads are relevant, useful, and non-intrusive to their recipients.

But given the urgency here, for temporary relief until such selective controls can be put into effect, Dorsey's total ban on Twitter seems well worth considering for Facebook as well. Zuckerberg's defensive waving of the flag of free expression seems naive and self-serving.

[See my newer post (11/6) on stopgap solutions for controversial and urgent concerns leading in to the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. It reorganizes and expands on updates that are retained below.]

---
See the Selected Items tab for more on this theme.


==================================================
==================================================
Updates on stopgaps have since been consolidated into an 11/6 post: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging...
That is more complete, but this section is retained as a history of updates.

[Update 11/2/19:]

An excellent analysis of the special case of political speech related to candidates is in Sam Lessin's 2016 Free Speech and Democracy in the Age of Micro-Targeting, which makes a well-reasoned argument that:
The growth of micro-targeting and disappearing messaging on the internet means that politicians can say different things to large numbers of people individually, in a way that can’t be monitored. Requirements to put this discourse on the public record are required to maintain democracy.
Lessin has a point that the secret (and often disappearing) nature of these communications, even when invited, is a threat to democracy. I agree that his remedy of disclosure is powerful, and it is a potentially important complement to my broad remedy of user-controlled targeting filters.

2020 Stopgaps?  

As to the urgent issue of the 2020 election, acting quickly will be hard. My proposal for user-controlled targeting filters is unlikely to be feasible as soon as 2020. So what can we do now?

Perhaps most feasible for 2020 is a simplistic stop-gap solution that might be easy to apply quickly: just enact a temporary ban -- not on placing political ads, but on the individualized targeting of political ads. Do this as a simple and safe compromise between the Zuckerberg and Dorsey policies until we have a regulatory regime to manage micro-targeting properly:
  • Avoid a total ban on political ads on social media, but allow ads to only be run just as they are in traditional media, in a way that is no more targeted than traditional media. 
  • Disallow precision targeting to individuals: allow as many or as few ads as advertisers wish to purchase, but target them to all users, or to whatever random subset of all users fill the paid allotment.
  • A slight extension of this might permit a "traditional" level of targeting, such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.
This is consistent with my point that the harm is not the speech, but the precision targeting of the speech, and would buy time to develop a more nuanced approach. It is something that Zuckerberg, Dorsey, and others could just decide to do on their own (...or be pressured to do).
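To illustrate how mechanically simple such a stopgap would be to enforce in an ad placement system, here is a rough sketch -- with hypothetical field names and thresholds that loosely echo the segment-size and geographic-level floors discussed in this post and its updates; it illustrates the rule, not any platform's actual ad API:

```python
# Hypothetical thresholds, loosely following the kinds of floors proposed here:
# a minimum audience size and a coarsest-allowed level of geographic targeting.
MIN_SEGMENT_SIZE = {"presidential": 10_000, "congressional": 1_000}
ALLOWED_GEO_LEVELS = {"presidential": {"national", "state", "county"},
                      "congressional": {"state", "district"}}

def political_ad_allowed(race: str, audience_size: int, geo_level: str,
                         uses_custom_audience: bool) -> bool:
    """Reject targeting that is more precise than the stopgap rule permits."""
    if uses_custom_audience:           # individual-level lists defeat any floor
        return False
    if audience_size < MIN_SEGMENT_SIZE.get(race, 10_000):
        return False
    return geo_level in ALLOWED_GEO_LEVELS.get(race, {"national"})

print(political_ad_allowed("presidential", 250, "zip+interests", True))    # False
print(political_ad_allowed("presidential", 500_000, "state", False))       # True
```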

[Update 11/3/19:] Siva Vaidhyanathan made a very similar proposal to my stop-gap suggestion: "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race." That seems a good compromise that could stand until we have a better solution (or become a part of a more complete solution). (I am not sure if Vaidhyanathan meant to allow targeting to the level of individual districts in a multi-district election, but it seems to me that would be sufficient to enable reasonable visibility and not much harder to do quickly than the broader bans I had suggested.)

[Update 11/5/19:] Three other experts have argued for much the same kind of limits on targeting as the effective middle-ground solution.

(Alex Stamos from CJR)
One is Alex Stamos, in a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos. Stamos offers a useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression get stronger toward the bottom, with "the right of individuals to be exposed to information they have explicitly sought out."

Stamos argues that "Tech platforms should absolutely not fact-check candidates' organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."

Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."

And -- I just realized there had been another powerful voice on this issue! Ellen Weintraub, chair of the Federal Election Commission (in WaPo), Don’t abolish political ads on social media. Stop microtargeting. She suggests the same kind of limits on targeting of political ads outlined here, in even more specific terms (emphasis added):
A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed. Want to influence the governor’s race in Kansas? Your Internet ads could run across Kansas, or target individual counties, but that’s it. Running at-large for the Houston City Council? You could target the whole city or individual council districts. Presidential ads could likely be safely targeted down two levels, to the state and then to the county or congressional district level.
Maybe this flurried convergence of informed opinion will actually lead to some effective action.

Until we get more key people (including the press) to have some common understanding of what the problem is, it will be very hard to get a solution. For most of us, that is just a matter of making some effort to think clearly. For some it seems to be a matter of motivated reasoning that makes them not want to understand. (Many -- not always the same people -- have suggested that both Zuckerberg and Dorsey suffer from motivated reasoning.)

...And, as addressed in the first sections of this post, maybe that will help move us toward broader action to regain the promise of social media -- to apply smart filtering to make its users smarter, not dumber!

Tuesday, August 20, 2019

The Great Hack - The Most Important and Scariest Film of this Century

++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++

An Unseen Digital Pearl Harbor, Dissected

The Great Hack is a film every person should see!

America and other democracies have been invaded, and subverted in ways and degrees that few appreciate.

This film (on Netflix) uncovers the layers of the Russia / Cambridge Analytica hack of the US 2016 election (and Brexit), and clearly shows how deeply our social media have been subverted as a fifth column aimed at the heart of democracy and enlightened civilization.

The Great Hack provides clarity about the insidious damage being done by our seemingly benign social media -- and still running wild because too few understand or care about our state of peril -- and because those businesses profit from enabling our attackers. It provides an excellent primer for those who have not tuned in to this, and for those who do not understand the nature of the threat.

It is a much needed wake up call that far more need more urgently to heed.

"Prof. Carroll Goes to London"

What makes this a great film, and not just an important documentary, is how it is told as the story of a (not so) common man.

Much like Jimmy Stewart's Mr. Smith Goes to Washington, this is the story of a mild-mannered citizen, Professor David Carroll of The New School in NYC, who sees a problem and seeks to follow a simple quest for truth and justice (to know what data Facebook and Cambridge Analytica have on him). It traces his awakening and journey to the belly of the beast. This time it is real, and the stakes could not be higher.

---
I found this film especially interesting, having met David at an event on the fake news problem in February 2017, and then at a number of subsequent events on this important theme (many under the auspices of NYC Media Lab and its director, Justin Hendrix). It is a problem I have explored and offered some remedies for on this blog.

Wednesday, October 10, 2018

In the War on Fake News, All of Us are Soldiers, Already!

This is intended as a supplement to my posts "A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News" and "The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings." (But hopefully this stands on its own as well.) Maybe this can make a clearer point of why the methods I propose are powerful and badly needed...
---

A NY Times article titled "Soldiers in Facebook’s War on Fake News Are Feeling Overrun" provides a simple context for showing how I propose to use information already available from all of us, on what is valid and what is fake.

The Times article describes a fact checking organization that works with Facebook in the Philippines (emphasis added):
On the front lines in the war over misinformation, Rappler is overmatched and outgunned - and that could be a worrying indicator of Facebook’s effort to curb the global problem by tapping fact-checking organizations around the world.
...it goes on to describe what I suggest is the heart of the issue:
When its fact checkers determine that a story is false, Facebook pushes it down on users’ News Feeds in favor of other material. Facebook does not delete the content, because it does not want to be seen as censoring free speech, and says demoting false content sharply reduces abuse. Still, falsehoods can resurface or become popular again.
The problem is that the fire hose of fake news is too fast and furious, and too diverse, for any specialized team of fact-checkers to keep up with it. Plus, the damage is done by the time they do identify the fakes and begin to demote them.

But we are all fact checking to some degree without even realizing it. We are all citizen-soldiers. Some do it better than others.

The trick is to draw out all of the signals we provide, in real time -- and use our knowledge of which users' signals are reliable -- to get smarter about what gets pushed down and what gets favored in our feeds. That can serve as a systemic cognitive immune system -- one based on rating the raters and weighting the ratings.

We are all rating all of our news, all of the time, whether implicitly or explicitly, without making any special effort:

  • When we read, "like," comment, or share an item, we provide implicit signals of interest, and perhaps approval.
  • When we comment or share an item, we provide explicit comments that may offer supplementary signals of approval or disapproval.
  • When we ignore an item, we provide a signal of disinterest (and perhaps disapproval).
  • When we return to other activity after viewing an item, the time elapsed signals our level of attention and interest.
Individually, inferences from the more implicit signals may be erratic and low in meaning. But when we have signals from thousands of people, the aggregate becomes meaningful. Trends can be seen quickly. (Facebook already uses such signals to target its ads -- that is how it makes so much money.)

But simply adding all these signals can be misleading. 
  • Fake news can quickly spread through groups who are biased (including people or bots who have an ulterior interest in promoting an item) or are simply uncritical and easily inflamed -- making such an item appear to be popular.
  • But our platforms can learn who has which biases, and who is uncritical and easily inflamed.
  • They can learn who is respected within and beyond their narrow factions, and who is not, who is a shill (or a malicious bot) and who is not.
  • They can use this "rating" of the raters to weight their ratings higher or lower.
Done at scale, that can quickly provide probabilistically strong signals that an item is fake or misleading or just low quality. Those signals can enable the platform to demote low quality content and promote high quality content. 
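Here is a minimal sketch of that "rate the raters and weight the ratings" step, continuing in the same illustrative spirit: each user's rating of an item is scaled by a reliability weight learned elsewhere (for example, from past agreement with trusted validators and fact checks). The data shapes and default weight are assumptions for illustration, not a description of any platform's system.

```python
from collections import defaultdict

def weighted_item_scores(ratings, user_reliability, default_reliability=0.1):
    """
    ratings: iterable of (user_id, item_id, rating) tuples, where rating is a
             degree of approval in [-1, 1] derived from signals like those above.
    user_reliability: dict of user_id -> weight in [0, 1], higher for users whose
             past ratings proved trustworthy.
    Returns a reputation-weighted average score per item.
    """
    totals, weights = defaultdict(float), defaultdict(float)
    for user_id, item_id, rating in ratings:
        w = user_reliability.get(user_id, default_reliability)
        totals[item_id] += w * rating
        weights[item_id] += w
    return {item: totals[item] / weights[item] for item in totals}
```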

To expand just a bit:
  • Facebook can use outside fact checkers, and can build AI to automatically signal items that seem questionable as one part of its defense.
  • But even without any information at all about the content and meaning of an item, it can make realtime inferences about its quality based on how users react to it.
  • If most of the amplification is from users known to be malicious, biased, or unreliable, it can downrank items accordingly.
  • It can test that downranking by monitoring further activity.
  • It might even enlist "testers" by promoting a questionable item to users known to be reliable, open, and critical thinkers -- and may even let some generally reliable users self-select as validators (being careful not to overload them); a brief sketch of that testing loop follows below.
  • By being open-ended in this way, such downranking is not censorship -- it is merely a self-regulating learning process that works at Internet scale, on Internet time.
That is how we can augment the wisdom of the crowd -- in real time, with increasing reliability as we learn. That is how we build a cognitive immune system (as my other posts explain further).
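One way to picture the testing loop mentioned in the list above: when an item's weighted score is low but uncertain, it could be shown to a small panel of users known to be reliable, and the score revised with their feedback. Everything here -- the thresholds, panel size, collect_feedback hook, and blending rule -- is a hypothetical illustration, not a proposal for specific parameters.

```python
def review_questionable_item(item_id, score, confidence, validators, collect_feedback,
                             low_score=-0.2, low_confidence=0.5, panel_size=20):
    """If the crowd's verdict is weak and uncertain, ask trusted validators before downranking hard."""
    if score < low_score and confidence < low_confidence:
        panel = validators[:panel_size]               # e.g. reliable, critical-thinking volunteers
        feedback = collect_feedback(item_id, panel)   # hypothetical hook returning ratings in [-1, 1]
        if feedback:
            panel_score = sum(feedback) / len(feedback)
            score = 0.5 * score + 0.5 * panel_score   # blend the crowd estimate with the panel's judgment
    return score
```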

This strategy is not new or unproven. It is the core of Google's wildly successful PageRank algorithm for finding useful search results. And (as I have noted before), it was recently reported that Facebook is now beginning to do a similar, but apparently still primitive, form of rating the trustworthiness of its users to try to identify fake news -- they track who spreads fake news and who reports abuse truthfully or deceitfully.*
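For readers who want to see the parallel, here is a toy version of that PageRank-style idea: reputation flows along a "who endorses or amplifies whom" graph, so a user's weight depends on the weights of those who vouch for them. The graph, damping factor, and iteration count are illustrative; this is a textbook-style sketch, not Google's or any platform's actual algorithm.

```python
def reputation_scores(endorsements, iterations=50, damping=0.85):
    """
    endorsements: dict mapping a user to the list of users they endorse
                  (share approvingly, cite, vouch for).
    Returns a rough reputation weight per user (a simplified PageRank that does
    not redistribute the weight of users who endorse no one).
    """
    users = set(endorsements) | {u for targets in endorsements.values() for u in targets}
    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new_rank = {u: (1.0 - damping) / len(users) for u in users}
        for source, targets in endorsements.items():
            if targets:
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new_rank[target] += share
        rank = new_rank
    return rank
```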

What I propose is that we take this much farther, and move rapidly to make it central to our filtering strategies for social media -- and more broadly. An all out effort to do that quickly may be our last, best hope for enlightened democracy.

----
(*More background from Facebook on their current efforts was cited in the Times article: Hard Questions: What is Facebook Doing to Protect Election Security?)

[Update 10/12:] A subsequent Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and the difficulty in definitively identifying items as fakes that can rightly be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.

Monday, October 08, 2018

A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News

To counter the spread of fake news, it's more important to manage and filter its spread than to try to interdict its creation -- or to try to inoculate people against its influence. 

A recent NY Times article on their inside look at Facebook's election "war room" highlights the problem, quoting cybersecurity expert Priscilla Moriuchi:
If you look at the way that foreign influence operations have changed these last two years, their focus isn’t really on propagating fake news anymore. It’s on augmenting stories already out there which speak to hyperpartisan audiences.
That is why much of the growing effort to respond to the newly recognized crisis of fake news, Russian disinformation, and other forms of disruption in our social media fails to address the core of the problem. We cannot solve the problem by trying to close our systems off from fake news, nor can we expect to radically change people's natural tendency toward cognitive bias. The core problem is that our social media platforms lack an effective "cognitive immune system" that can resist our own tendency to spread the "cognitive pathogens" that are endemic in our social information environment.

Consider how living organisms have evolved to contain infections. We did that not by developing impermeable skins that could be counted on to keep all infections out, nor by making all of our cells so invulnerable that they can resist whatever infectious agents may unpredictably appear.

We have powerfully complemented what we can do in those ways by developing a richly nuanced internal immune system that is deeply embedded throughout our tissues. That immune system uses emergent processes at a system-wide level -- to first learn to identify dangerous agents of disease, and then to learn how to resist their replication and virulence as they try to spread through our system.

The problem is that our social media lack an effective "cognitive immune system" of this kind. 

In fact many of our social media platforms are designed by the businesses that operate them to maximize engagement so they can sell ads. In doing so, they have learned that spreading incendiary disinformation that makes people angry and upset, polarizing them into warring factions, increases their engagement. As a result, these platforms actually learn to spread disease rather than to build immunity. They learn to exploit the fact that people have cognitive biases that make them want to be cocooned in comfortable filter bubbles and feel-good echo-chambers, and to ignore and refute anything that might challenge beliefs that are wrong but comfortable. They work against our human values, not for them.

What are we doing about it? Are we addressing this deep issue of immunity, or are we just putting on band-aids and hoping we can teach people to be smarter? (As a related issue, are we addressing the underlying issue of business model incentives?) Current efforts seem to be focused on measures at the end-points of our social media systems:
  • Stopping disinformation at the source. We certainly should apply band-aids to prevent bad-actors from injecting our media with news, posts, and other items that are intentionally false and dishonest. Of course we should seek to block such items and those who inject them. Band-aids are useful when we find an open wound that germs are gaining entry through. But band-aids are still just band-aids.
  • Making it easier for individuals to recognize when items they receive may be harmful because they are not what they seem. We certainly should provide "immune markers" in the form of consumer-reports-like ratings of items and of the publishers or people who produce them (as many are seeking to do). Making such markers visible to users can help prime them to be more skeptical, and perhaps apply more critical thinking -- much like applying an antiseptic. But that depends on the willingness of users to pay attention to such markers and apply the antiseptic. There is good reason to doubt that will have more than modest effectiveness, given people's natural laziness and instinct for thinking fast rather than slow. (Many social media users "like" items based only on click-bait headlines that are often inflammatory and misleading, without even reading the item -- and that is often enough to cause those items to spread massively.)
These end-point measures are helpful and should be aggressively pursued, but we need to urgently pursue a more systemic strategy of defense. We need to address the problem of dissemination and amplification itself. We need to be much smarter about what gets spread -- from whom, to whom, and why.

Doing that means getting deep into the guts of how our media are filtered and disseminated, step by step, through the "viral" amplification layers of the media systems that connect us. That means integrating a cognitive immune system into the core of our social media platforms. Getting the platform owners to buy in to that will be challenging, but it is the only effective remedy.

Building a cognitive immune system -- the biological parallel

This perspective comes out of work I have been doing for decades, and have written about on this blog (and in a patent filing since released into the public domain). That work centers on ideas for augmenting human intelligence with computer support. More specifically, it centers on augmenting the wisdom of crowds. It is based on the idea that our wisdom is not the simple result of a majority vote -- but results from an emergent process that applies smart filters that rate the raters and weight the ratings. That provides a way to learn which votes should be more equal than others (in a way that is democratic and egalitarian, but also merit-based). This approach is explained in the posts listed below. It extends an approach that has been developing for centuries.

Supportive of those perspectives, I recently turned to some work on biological immunity that uses the term "cognitive immune system." That work highlights the rich informational aspects of actual immune systems, as a model for understanding how these systems work at a systems level. As noted in one paper (see longer extract below*), biological immune systems are "cognitive, adaptive, fault-tolerant, and fuzzy conceptually." I have only begun to think about the parallels here, but it is apparent that the system architecture I have proposed in my other posts is at least broadly parallel, being also "cognitive, adaptive, fault-tolerant, and fuzzy conceptually." (Of course being "fuzzy conceptually" makes it not the easiest thing to explain and build, but when that is the inherent nature of the problem, it may also necessarily be the essential nature of the solution -- just as it is for biological immune systems.)

An important aspect of this being "fuzzy conceptually," is what I call The Tao of Truth. We can't definitively declare good-faith "speech" as "fake" or "false" in the abstract. Validity is "fuzzy" because it depends on context and interpretation. ("Fuzzy logic" recognizes that in the real world, it is often the case that facts are not entirely true or false but, rather, have degrees of truth.)  That is why only the clearest cases of disinformation can be safely cut off at the source. But we can develop a robust system for ranking the probable (fuzzy) value and truthfulness of speech, revising those rankings, and using that to decide how to share it with whom. For practical purposes, truth is a filtering process, and we can get much smarter about how we apply our collective intelligence to do our filtering. It seems the concepts of "danger" and "self/not-self" in our immune systems have a similarly fuzzy Tao -- many denizens of our microbiome that are not "self" are beneficial to us, and our immune systems have learned that we live better with them inside of us.
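One way to make the "degrees of truth" point concrete is a sketch like the following, in which an item's validity is a weighted degree of truth in [0, 1] rather than a true/false verdict. The evidence sources and weights in the example are invented for illustration.

```python
def fuzzy_truth(evidence):
    """
    evidence: iterable of (assessment, weight) pairs, where assessment is a degree
    of truth in [0, 1] (0 = clearly false, 1 = clearly true) and weight reflects
    how much the assessor's judgment is trusted.
    Returns a degree of truth, not a verdict -- and it can be revised as evidence arrives.
    """
    total_weight = sum(weight for _, weight in evidence)
    if total_weight == 0:
        return 0.5  # no usable evidence: maximally uncertain
    return sum(assessment * weight for assessment, weight in evidence) / total_weight

# Two trusted fact checkers rate an item mostly false; a low-weight partisan
# cluster rates it true. The blended degree of truth stays low (about 0.29).
score = fuzzy_truth([(0.2, 0.9), (0.3, 0.8), (1.0, 0.1)])
```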

My proposals

Expansions on the architecture I have proposed for a cognitive immune system -- and the need for it -- are here:
  • The Tao of Fake News – the essential need for fuzziness in our logic: the inherent limits of experts, moderators, and rating agencies – and the need for augmenting the wisdom of the crowd (as essential to maintaining the intellectual openness of our democratic/enlightenment values).
(These works did not explicitly address the parallels with biological cognitive immune systems -- exploring those parallels might well lead to improvements on these strategies.)

To those without a background in the technology of modern information platforms, this brief outline may seem abstract and unclear. But as noted in these more detailed posts, these methods are a generalization of methods used by Google (in its PageRank algorithm) to do highly context-relevant filtering of search results, using a similar rate the raters and weight the ratings strategy. (That is also "cognitive, adaptive, fault-tolerant, and fuzzy conceptually.") These methods are not simple, but they are not a big stretch from the current computational methods of search engines, or from the ad targeting methods already well-developed by Facebook and others. They can be readily applied -- if the platforms can be motivated to do so.

Broader issues of support for our cognitive immune system

The issue of motivation to do this is crucial. For the kind of cognitive immune system I propose to be effective, it must be built deeply into the guts of our social media platforms (whether directly, or via APIs). As noted above, getting incumbent platforms to shift their business models to align their internal incentives with that need will be challenging. But I suggest it need not be as difficult as it might seem.

A related non-technical issue that many have noted is the need for education of citizens 1) in critical thinking, and 2) in the civics of our democracy. Both seem to have been badly neglected in recent decades. Aggressively remedying that is important, to help inoculate users against disinformation and sloppy thinking -- but it will have limited effectiveness unless we also alter the overwhelmingly fast dynamics of our information flows (with the cognitive immune system suggested here) so that they make us smarter, not dumber, in the face of this deluge of information.

---
[Update 10/12:] A subsequent Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and the difficulty in definitively identifying items as fakes that can rightly be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.
-----
*Background on our Immune Systems -- from the introduction to the paper mentioned above, "A Cognitive Computational Model Inspired by the Immune System Response" (emphasis added):
The immune system (IS) is by nature a highly distributed, adaptive, and self-organized system that maintains a memory of past encounters and has the ability to continuously learn about new encounters; the immune system as a whole is being interpreted as an intelligent agent. The immune system, along with the central nervous system, represents the most complex biological system in nature [1]. This paper is an attempt to investigate and analyze the immune system response (ISR) in an effort to build a framework inspired by ISR. This framework maintains the same features as the IS itself; it is cognitive, adaptive, fault-tolerant, and fuzzy conceptually. The paper sets three phases for ISR operating sequentially, namely, “recognition,” “decision making,” and “execution,” in addition to another phase operating in parallel which is “maturation.” This paper approaches these phases in detail as a component based architecture model. Then, we will introduce a proposal for a new hybrid and cognitive architecture inspired by ISR. The framework could be used in interdisciplinary systems as manifested in the ISR simulation. Then we will be moving to a high level architecture for the complex adaptive system. IS, as a first class adaptive system, operates on the body context (antigens, body cells, and immune cells). ISR matured over time and enriched its own knowledge base, while neither the context nor the knowledge base is constant, so the response will not be exactly the same even when the immune system encounters the same antigen. A wide range of disciplines is to be discussed in the paper, including artificial intelligence, computational immunology, artificial immune system, and distributed complex adaptive systems. Immunology is one of the fields in biology where the roles of computational and mathematical modeling and analysis were recognized...
The paper supposes that immune system is a cognitive system; IS has beliefs, knowledge, and view about concrete things in our bodies [created out of an ongoing emergent process], which gives IS the ability to abstract, filter, and classify the information to take the proper decisions.