Renee DiResta's article in The Atlantic, Virus Experts Aren’t Getting the Message Out (5/6/20), is an insightful analysis of how social media have broken society's capacity for "mediating consent" -- for reaching shared agreement on what is fact and what is fiction (or worse).
Here is the letter I wrote in response:
To Renee DiResta’s excellent statement of the problem -- getting quality information on Covid-19 to the public in this time of crisis (which is just part of a much broader problem) -- I would add suggestions for a better solution. She is correct that our public health institutions must adapt to modern modes of communication, and that media should select for authoritative voices, including those who are outside those institutions. She rightly says “Some of the best frameworks for curating good information…involve a hybrid of humans and artificial intelligence…These processes are difficult to scale because they involve human review, but they also recognize the value of factoring authoritativeness—not just pure popularity...the ‘consensus of the most liked.’”
The solution to this critical challenge of scaling is to use algorithms more effectively -- to “augment the wisdom of crowds.” The crowd gets wiser when likes from authoritative voices count for more than likes from foolish or malicious ones. This can be done by building on the huge success of Google’s hybrid PageRank algorithm, which first augmented the wisdom of the Web-linking crowd. PageRank did not rely on machine understanding of content (still very difficult), but only on the raw power of machine tabulation of human understanding (much as IBM began by tabulating the 1890 census).
The genius of PageRank was to rank Web pages neither by purely human curation, as Yahoo did, nor by algorithms alone, as AltaVista did, but by a clever and scalable hybrid of human and machine. It interprets links to a Web page from other sites as the equivalent of likes that signal the judgment of human “Webmasters” or authors. But it then augments those judgments: instead of counting all such links as equal votes of authority, it weights each link by the authority of the page it comes from -- looking at who links to that page (one level removed), and, recursively, at what authority those pages have, based on who links to them (a further level removed).
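To make that recursion concrete, here is a minimal Python sketch of the idea, computing PageRank-style authority over a toy link graph. The toy graph, damping factor, and iteration count are my own illustrative assumptions, not Google's actual implementation.

```python
# A minimal, illustrative sketch of the recursive idea behind PageRank:
# a page's authority is the authority passed on by the pages that link
# to it, shared across their outgoing links.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # start all pages equal
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share        # a link passes on authority
            else:                                    # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Toy Web: A and C receive links (from each other and from B), while nobody
# links to B, so A and C end up with far more authority than B.
toy_web = {"A": ["C"], "B": ["A", "C"], "C": ["A"]}
print(pagerank(toy_web))
```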
Social media and other information-discovery media could apply much the same method. A “RateRank” algorithm that augments human intelligence in this way could determine whose likes to rank as authoritative and whose to rank as noise. It could track the signals that reflect human judgment -- likes, shares, comments, followers, etc. -- and compute reputations for the raters behind those signals, so it knows which ratings to weight heavily, as coming from respected raters, and which to discount, as coming from habitually foolish or malicious raters. Certifications of authority from independent rating institutions could be factored in as well, but the algorithm would also up-rank emerging or non-mainstream voices that deserve to be heard -- including those that are responsibly contrarian.
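As a rough sketch of how such a reputation loop might work, consider a simple fixed point in which items are scored by the reputation-weighted likes they receive, and raters gain reputation when the items they post score well. The function name, update rule, and toy data below are my own illustrative assumptions; nothing in this letter specifies an implementation.

```python
# Illustrative "RateRank"-style loop: item scores and rater reputations
# reinforce each other, so a like from a well-regarded rater counts for
# more than a like from an unknown or habitually low-quality rater.

from collections import defaultdict

def rate_rank(likes, authored_by, iterations=30):
    """
    likes:       dict rater -> set of item ids that rater liked
    authored_by: dict item id -> rater who posted it
    """
    raters = set(likes) | set(authored_by.values())
    reputation = {r: 1.0 for r in raters}             # start everyone equal
    for _ in range(iterations):
        # 1. Score items by the reputation of the people who liked them.
        score = defaultdict(float)
        for rater, liked_items in likes.items():
            for item in liked_items:
                score[item] += reputation[rater]
        # 2. Update each rater's reputation from the scores of their items,
        #    resetting to a base of 1.0 so no one drops to zero.
        reputation = {r: 1.0 for r in raters}
        for item, author in authored_by.items():
            reputation[author] += score[item]
        # 3. Normalize so reputations stay comparable across iterations.
        total = sum(reputation.values())
        reputation = {r: v / total * len(raters) for r, v in reputation.items()}
    return dict(score), reputation

# Toy data: "expert" authored a widely liked post, so the expert's own like
# of post2 comes to carry more weight than the troll's like of post3.
likes = {"expert": {"post2"}, "fan1": {"post1"}, "fan2": {"post1"}, "troll": {"post3"}}
authored_by = {"post1": "expert", "post2": "fan1", "post3": "troll"}
scores, reputations = rate_rank(likes, authored_by)
print(scores, reputations)
```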
Such hybrid algorithms would power a highly adaptive “cognitive immune system” that would help ensure -- at Internet scale and speed -- that the most authoritative and deserving messages get out most widely, and that misinformation and disinformation are suppressed. (This need not limit First Amendment rights: it would limit only how widely dubious content is distributed; the content could still be posted and remain accessible to anyone who specifically seeks it.)
These proposals for up-ranking quality (details at http://bit.ly/AugmWoC) have gotten attention in the technology and policy community, but media businesses have yet to be receptive. The apparent reason is that their advertising-driven business model thrives on “elevating popularity over facts,” as DiResta notes. But if the current algorithmic de-augmentation of human intelligence does not change, humanity may never recover.
---
[This letter was sent to The Atlantic on 5/11/20. I had previously sent a draft to Renee for comment, and she responded that she viewed it as thoughtful and encouraged me to submit it.]
"Everything is deeply intertwingled" – Ted Nelson’s insight that inspired the Web. People can be smarter about dealing with that - in media services, social media, AI, and society and life more broadly. Technology can augment that -- most notably as the Augmented Wisdom of Crowds (see the Selected Items tab below). The former name, “Reisman on User-Centered Media” still applies: open and adaptable to each user's needs and desires – and sharing in the value they create for users.