[This is a stub for a fuller post yet to come. (It is an adaptation of a brief update to my prior post on Regulating the Platforms, but deserves separate treatment.)]
There is little fundamentally new about the supply or the demand for disinformation. What is fundamentally new is how disinformation is distributed. That is what we most urgently need to fix. If disinformation falls in a forest… but appears in no one’s feed, does it disinform?
In social media, a new form of distribution mediates between supply and demand. The platform filters content, upranking or downranking it, and so governs what each user sees. If disinformation is downranked, we will not see it -- even if it is posted and potentially accessible to billions of people. Filtered distribution is what makes social media not just more information, faster, but an entirely new kind of medium. Filtering is a new, automated form of moderation and amplification, and that has implications for both the design and the regulation of social media.
[Update: see comments below on Facebook's 2/17/20 White Paper on Regulation.]
Controlling the choke point
By changing social media filtering algorithms we can dramatically reduce the distribution of disinformation. It is widely recognized that there is a problem of distribution: current social media promote content that angers and polarizes because that increases engagement and thus ad revenue. The services could instead filter for quality and value to users, but they have little incentive to do so, and what little effort they have ever made in that direction has been lost in the quest for ad revenue.
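To make that concrete, here is a minimal sketch in Python -- purely illustrative, with invented fields and weights, not any platform's actual algorithm -- of how the same pool of posts can be ranked for engagement or for quality, changing what users actually see:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float   # hypothetical model output, 0..1
    predicted_quality: float      # hypothetical model output, 0..1

def rank_feed(candidates: list[Post], quality_weight: float = 0.0) -> list[Post]:
    """Order candidate posts for a user's feed.

    With quality_weight = 0 this mimics pure engagement ranking;
    raising it shifts distribution toward quality -- the same posts
    exist either way, but what gets *seen* changes.
    """
    def score(p: Post) -> float:
        return (1 - quality_weight) * p.predicted_engagement + quality_weight * p.predicted_quality
    return sorted(candidates, key=score, reverse=True)

# Example: an enraging-but-dubious post outranks a sober one under
# engagement-only ranking, and drops once quality is weighted in.
posts = [Post("outrage", 0.9, 0.2), Post("sober", 0.5, 0.9)]
print([p.post_id for p in rank_feed(posts, quality_weight=0.0)])  # ['outrage', 'sober']
print([p.post_id for p in rank_feed(posts, quality_weight=0.7)])  # ['sober', 'outrage']
```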
Social media marketers speak of "amplification." It is easy to see the supply and demand for disinformation, but marketing professionals know that it is amplification in distribution that makes all the difference. Distribution is the critical choke point for controlling this newly amplified spread of disinformation. (And as Feld points out, the First Amendment does not protect inappropriate uses of loudspeakers.)
Changing the behavior of demand is clearly desirable, but it would be a long and costly effort, and it is widely recognized that we cannot stop the supply. But we can control distribution: changing filtering algorithms could have significant impact rapidly, and would apply across the board, at Internet scale and speed -- if the social media platforms could be motivated to design better algorithms.
If platforms and regulators focused more on what such distribution algorithms could do, they might take action to make that happen (as addressed in Regulating our Platforms -- A Deeper Vision).
Yes, "the way we think drives disinformation," and social media distribution algorithms drive how we think -- we can drive them for good, not bad!
---
Background note: NiemanLab today pointed to a PNAS paper showing evidence that "... ratings given by our [lay] participants were very strongly correlated with ratings provided by professional fact-checkers. Thus, incorporating the trust ratings of laypeople into social media ranking algorithms may effectively identify low-quality news outlets and could well reduce the amount of misinformation circulating online." The study was based on explicit quality judgments, but using implicit data on quality judgments as I suggest should be similarly correlated, and could apply the imputed judgments of every social media user who interacted with an item with no added user effort.
[Update:]
Comments on Facebook's 2/17/20 White Paper, Charting a Way Forward on Online Content Regulation
This is an interesting document, with some good discussion, but it seems to provide evidence that leads to the point I make here, but totally misses seeing it. Again this seems to be a case in which "It is difficult to get a man to understand something when his job depends on not understanding it."
The report makes the important point that:
See the Selected Items tab for more on this theme.
How can we do that? A quick summary of key points from my prior posts...
We seem to forget what Google's original PageRank algorithm taught us: content quality can be inferred algorithmically from human user behaviors, without any intrinsic understanding of the meaning of the content. Such algorithms can be made far more nuanced. The current upranking is based on likes from everyone in one's social graph -- all treated as equally valid. Instead, we can design algorithms that draw on the user behaviors noted on page 8 to learn which users share responsibly (reading more than headlines and showing discernment for quality), which are promiscuous (sharing reflexively, with minimal dwell time), and which are malicious (repeatedly sharing content determined to be disinformation). Why should those users have more than minimal influence on what other users see?
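As an illustration of how such behavioral signals might be turned into a user-level reputation, here is a rough Python sketch; the event fields, thresholds, and credit values are all invented for the example, not drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class ShareEvent:
    user_id: str
    opened_article: bool      # did the user click through before sharing?
    dwell_seconds: float      # hypothetical reading-time signal
    later_flagged: bool       # item later determined to be disinformation

def sharing_reputation(events: list[ShareEvent]) -> dict[str, float]:
    """Score each user's sharing behavior in [0, 1].

    Illustrative rules only: careful shares (click-through plus real dwell
    time) raise reputation, reflexive shares earn little credit, and
    shares of content later flagged as disinformation earn none.
    """
    credits: dict[str, list[float]] = {}
    for e in events:
        if e.later_flagged:
            credit = 0.0                      # malicious or careless share
        elif e.opened_article and e.dwell_seconds >= 30:
            credit = 1.0                      # responsible share
        else:
            credit = 0.3                      # promiscuous / reflexive share
        credits.setdefault(e.user_id, []).append(credit)
    return {user: sum(c) / len(c) for user, c in credits.items()}
```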
The spread of disinformation could be dramatically reduced by upranking “votes” on what to share from users with good reputations, and downranking votes from those with poor reputations. I explain further in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News and In the War on Fake News, All of Us are Soldiers, Already! More specifics on designing such algorithms are in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings. Social media are now reflecting the wisdom of the mob -- instead we need to seek the wisdom of the smart crowd. That is what society has sought to do for centuries.
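Building on the previous sketch, item ranking could then weight each share or like by the sharer's reputation rather than counting all votes equally -- again a hypothetical sketch, with an assumed default weight for users who have no track record:

```python
def weighted_item_score(votes: list[tuple[str, float]],
                        reputation: dict[str, float],
                        default_reputation: float = 0.05) -> float:
    """Aggregate 'votes' on an item (e.g. share = 1.0, like = 0.5),
    weighting each vote by the voter's sharing reputation instead of
    treating every member of the social graph as equally valid."""
    return sum(strength * reputation.get(user, default_reputation)
               for user, strength in votes)

# Twenty reflexive shares from unknown accounts add less weight (~1.0)
# than two shares from users with strong reputations (~1.7).
reputation = {"careful_a": 0.9, "careful_b": 0.8}
print(weighted_item_score([(f"unknown{i}", 1.0) for i in range(20)], reputation))
print(weighted_item_score([("careful_a", 1.0), ("careful_b", 1.0)], reputation))
```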
Beyond that, better algorithms could combat social media filter bubble effects by applying a kind of judo to the active drivers noted on page 8. Cass Sunstein suggested “surprising validators” in 2012 as one way this might be done, and I built on that to explain how it could be applied in social media algorithms: Filtering for Serendipity -- Extremism, 'Filter Bubbles' and 'Surprising Validators’.
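A hedged sketch of how the “surprising validators” idea might look inside a feed algorithm -- the stance scores, trust sets, and selection rule here are stand-ins invented for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    stance: float              # -1..+1, hypothetical stance score on an issue
    endorsers: set[str]        # accounts or figures who endorsed the item

def surprising_validators(items: list[Item], user_stance: float,
                          trusted_by_user: set[str], k: int = 3) -> list[Item]:
    """Pick a few counter-attitudinal items whose endorsers the user
    already trusts -- 'surprising validators' -- to mix into the feed.

    Purely illustrative: a real system would need far richer models of
    stance, trust, and topic than these single numbers and sets.
    """
    challenging = [it for it in items if it.stance * user_stance < 0]
    def validation(it: Item) -> int:
        return len(it.endorsers & trusted_by_user)
    challenging.sort(key=validation, reverse=True)
    return [it for it in challenging if validation(it) > 0][:k]
```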
Yes, "the way we think drives disinformation," and social media distribution algorithms drive how we think -- we can drive them for good, not bad!
---
Background note: NiemanLab today pointed to a PNAS paper showing evidence that "... ratings given by our [lay] participants were very strongly correlated with ratings provided by professional fact-checkers. Thus, incorporating the trust ratings of laypeople into social media ranking algorithms may effectively identify low-quality news outlets and could well reduce the amount of misinformation circulating online." The study was based on explicit quality judgments, but the implicit data on quality judgments that I suggest using should be similarly correlated, and could apply the imputed judgments of every social media user who interacted with an item, with no added user effort.
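For instance, a simple aggregation like the following (Python, with hypothetical field names) could maintain per-outlet trust scores from lay ratings -- whether gathered explicitly, as in the study, or imputed from ordinary user behavior -- for use as a prior in feed ranking:

```python
from collections import defaultdict

def outlet_trust_scores(ratings: list[tuple[str, str, float]]) -> dict[str, float]:
    """Average lay trust ratings per news outlet.

    ratings: (user_id, outlet, rating in [0, 1]) triples -- either explicit
    survey answers or judgments imputed from ordinary behavior (e.g. what a
    user reads carefully versus dismisses).
    """
    sums: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for _user, outlet, rating in ratings:
        sums[outlet] += rating
        counts[outlet] += 1
    return {outlet: sums[outlet] / counts[outlet] for outlet in sums}

# The per-outlet score can then serve as a ranking prior, discounting items
# from outlets the crowd consistently rates as untrustworthy.
```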
[Update:]
Comments on Facebook's 2/17/20 White Paper, Charting a Way Forward on Online Content Regulation
This is an interesting document, with some good discussion, but while it provides evidence that leads to the point I make here, it completely misses seeing it. Again this seems to be a case in which "It is difficult to get a man to understand something when his job depends on not understanding it."
The report makes the important point that:
"Companies may be able to predict the harmfulness of posts by assessing the likely reach of content (through distribution trends and likely virality), assessing the likelihood that a reported post violates (through review with artificial intelligence), or assessing the likely severity of reported content."
So Facebook understands that they can predict "the likely reach of content" -- why not influence it??? It is their distribution process and filtering algorithms that control "the likely reach of content." Why not throttle distribution to reduce the reach in accord with the predicted severity of the violation? Why not gather realtime feedback from the distribution process (including the responses of users) to refine those predictions, so they can course-correct the initial predictions and rapidly adjust the level of the throttle? That is what I have suggested in many posts, notably In the War on Fake News, All of Us are Soldiers, Already!
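A minimal sketch of that throttle-and-feedback loop, with made-up constants and signals (user report rates standing in for the many responses a real system could draw on):

```python
def distribution_throttle(predicted_severity: float) -> float:
    """Map a predicted-violation severity in [0, 1] to the fraction of
    normal distribution an item is allowed (1.0 = no throttling)."""
    return max(0.0, 1.0 - predicted_severity)

def updated_severity(prior: float, impressions: int, reports: int,
                     report_weight: float = 50.0) -> float:
    """Course-correct the initial prediction from realtime feedback.

    Illustrative only: blend the prior with the observed report rate,
    trusting the feedback more as impressions accumulate.
    """
    if impressions == 0:
        return prior
    observed = min(1.0, report_weight * reports / impressions)
    confidence = impressions / (impressions + 1000)   # assumed ramp-up constant
    return (1 - confidence) * prior + confidence * observed

# Example: an item initially judged mildly suspect (severity 0.3) starts at
# 70% of normal reach; if 2% of early viewers report it, severity rises and
# the throttle tightens on the next ranking pass.
severity = 0.3
print(distribution_throttle(severity))                       # 0.7
severity = updated_severity(severity, impressions=5000, reports=100)
print(round(distribution_throttle(severity), 2))             # ~0.12
```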
While this is a complex area that warrants much study, as the report observes, the arguments cited against the importance of filter bubbles in the box on page 10 are less relevant to social media, where the filters are largely based on the user’s social graph (who promotes items to be fed to them, in the form of posts, likes, comments, and shares), not just active search behavior (what they search for).
See the Selected Items tab for more on this theme.