
Thursday, January 31, 2019

Zucked -- Roger McNamee's Wake Up Call ...And Beyond

Zucked: Waking Up to the Facebook Catastrophe is an authoritative and frightening call to arms -- but I was disappointed that author Roger McNamee did not address some of the suggestions for remedies that I shared with him last June (posted as An Open Letter to Influencers Concerned About Facebook and Other Platforms).

Here are brief comments on this excellent book, and highlights of what I would add. Many recognize the problem with the advertising-based business model, but few seem to be serious about finding creative ways to solve it. It is not yet proven that my suggestions will work quite as I envision, but the deeper need is to get people thinking about finding and testing more win-win solutions. His book makes a powerful case for why this is urgently needed.

McNamee's urgent call to action

McNamee offers the perspective of a powerful Facebook and industry insider. A prominent tech VC, he was an early investor and mentor to Zuckerberg -- the advisor who suggested that he not sell to Yahoo, and who introduced him to Sandberg. He was alarmed in early-mid 2016 by early evidence of manipulation affecting the UK and US elections, but found that Zuckerberg and Sandberg were unwilling to recognize and act on his concerns. As he became more concerned, he joined with others to raise awareness of this issue and work to bring about needed change.

He provides a rich summary of how we got here, most of the issues we now face, and the many prominent voices for remedial action. He addresses the business issues and the broader questions of governance, democracy, and public policy. He tells us: “A dystopian technology future overran our lives before we were ready.” (As also quoted in the sharply favorable NY Times review.)

It's the business model, stupid!

McNamee adds his authoritative voice to the many observers who have concluded that the business model -- in which advertisers pay so that consumers get "free" services -- distorts incentives, causing businesses to optimize for advertisers, not for users:
Without a change in incentives, we should expect the platforms to introduce new technologies that enhance their already-pervasive surveillance capabilities...the financial incentives of advertising business models guarantee that persuasion will always be the default goal of every design.
He goes on to suggest:
The most effective path would be for users to force change. Users have leverage...
The second path is government intervention. Normally I would approach regulation with extreme reluctance, but the ongoing damage to democracy, public health, privacy, and competition justifies extraordinary measures. The first step would be to address the design and business model failures that make internet platforms vulnerable to exploitation. ...Facebook and Google have failed at self-regulation.
My suggestions on the business model, and related regulatory action

This is where I have novel suggestions -- outlined on my FairPayZone blog, and communicated to McNamee last June -- that have not gotten wide attention, and are ignored in Zucked. These are at two levels.

The auto emissions regulatory strategy. This is a simple, proven regulatory approach for forcing Facebook (and similar platforms) to shift from advertising-based revenue to user-based revenue. That would fundamentally shift incentives from user manipulation to user value.

If Facebook or other consumer platforms fail to move to do that voluntarily, this simple regulatory strategy could force it -- in a market-driven way. The government could simply mandate that X% of their revenue must come from their users -- with a timetable for gradually increasing X. This is how auto emissions mandates work -- don't mandate how to fix things, just mandate a measurable result, and let the business figure out how best to achieve it. Since reverse-metered ads (with a specific credit against user fees) would count as a form of user revenue, that would provide an immediate incentive for Facebook to provide such compensation -- and to begin developing other forms of user revenue. This strategy is outlined in Privacy AND Innovation ...NOT Oligopoly -- A Market Solution to a Market Problem.
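To make the emissions-style mandate concrete, here is a minimal sketch (in Python) of how compliance with an "X% of revenue from users" requirement could be checked. The phase-in schedule, dollar figures, and names are purely illustrative assumptions, not proposed regulatory terms; the point is only that the target is a simple, measurable ratio.

```python
# Hypothetical sketch of checking an "X% of revenue from users" mandate.
# The schedule, figures, and names are illustrative assumptions, not regulatory terms.

def user_revenue_share(subscription_revenue, reverse_meter_ad_credits, ad_revenue):
    """Fraction of total revenue attributable to users. Reverse-metered ads
    (credited against a user's fees at agreed rates) count on the user side,
    since there the user, not the advertiser, is the platform's customer."""
    user_side = subscription_revenue + reverse_meter_ad_credits
    total = user_side + ad_revenue
    return user_side / total if total else 0.0

# Illustrative phase-in: the mandated minimum user-revenue share X rises over time.
PHASE_IN = {2020: 0.10, 2022: 0.25, 2025: 0.50}

def compliant(year, subscription_revenue, reverse_meter_ad_credits, ad_revenue):
    required = max(x for y, x in PHASE_IN.items() if y <= year)
    share = user_revenue_share(subscription_revenue, reverse_meter_ad_credits, ad_revenue)
    return share >= required

# Example: $5B subscriptions + $3B ad credits vs. $40B conventional ad revenue
# gives a ~17% user share, short of an assumed 25% requirement for 2022.
print(compliant(2022, 5e9, 3e9, 40e9))  # False
```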

The deeper shift to user revenue models. Creative strategies can enable Facebook (and other businesses) to shift from advertising revenue to become substantially user-funded. Zuckerberg has thrown up his hands at finding a better way: "I don’t think the ad model is going to go away, because I think fundamentally, it’s important to have a service like this that everyone in the world can use, and the only way to do that is to have it be very cheap or free."

Who Should Pay the Piper for Facebook? (& the rest) explains this new business model architecture -- with a focus on how it can be applied to let Facebook be "cheap or free" for those who get limited value and have limited ability to pay, but still be paid, at fair levels, by those who get more value and are willing and able to pay for it. This architecture, called FairPay, has gained recognition for operationalizing a solution that businesses can begin to apply now.
  • A reverse meter for ads and data. This FairPay architecture still allows advertising to continue to defray the cost of service, but on a more selective, opt-in basis -- by applying a "reverse meter" that credits the value of user attention and data against each user's service fees, at agreed-upon terms and rates. That shifts the game from the advertiser being the customer of the platform to the advertiser being the customer of the user (facilitated by the platform). In that way advertising is carried only if done in a manner that is acceptable to the user. That aligns the incentives of the user, the advertiser, and the platform. Others have proposed similar directions, but I take it farther, in ways that Facebook could act on now. (A rough sketch of how such credits might net against a user's fee follows this list.)
  • A consumer-value-first model for user revenue. Reverse metering is a good starting place for re-aligning incentives, but Facebook can go much deeper, to transform how its business operates. The simplest introduction to the transformative twist of the FairPay strategy is in my Techonomy article, Information Wants to be Free; Consumers May Want to Pay. (It has also been outlined in Harvard Business Review, and more recently in the Journal of Revenue and Pricing Management.) The details will depend on context, and will need testing to fully develop and refine over time, but the principles are clear and well supported.

    This involves ways to mass-customize the pricing of Facebook, to be "cheap or free" where appropriate, and to set customized fair prices for each user who obtains real value and can be enticed to pay for it. That is adaptive to individual usage and value -- and eliminates the risk of having to pay when the value actually obtained did not warrant it. That aligns incentives for transparency, trust, and co-creation of real value for each user. Behavioral economics has shown that people are willing to pay, even voluntarily, when they see good reason to help sustain the creation of value that they actually want and receive. We just need business models that understand and build on that.
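To illustrate how these two pieces -- the reverse meter and value-adaptive pricing -- might combine, here is a rough sketch of a net-fee calculation. All rates, tiers, and names are hypothetical assumptions for illustration, not FairPay's or Facebook's actual terms.

```python
# Rough sketch of a reverse-metered, value-adaptive monthly fee.
# All rates, tiers, and names are hypothetical, for illustration only.

def monthly_bill(value_tier_fee, opted_in_ad_views, data_sharing_level,
                 credit_per_ad_view=0.002, data_credits=(0.0, 0.50, 1.50)):
    """Net fee = a fee matched to the value tier the user and platform settle on,
    minus agreed credits for attention (opt-in ad views) and data sharing."""
    credits = opted_in_ad_views * credit_per_ad_view + data_credits[data_sharing_level]
    return max(0.0, value_tier_fee - credits)  # "cheap or free" when credits cover the fee

# A light user who accepts ads and full data sharing pays nothing;
# a heavy user who opts out of ads pays a fee scaled to the value received.
print(monthly_bill(value_tier_fee=2.00, opted_in_ad_views=300, data_sharing_level=2))  # 0.0
print(monthly_bill(value_tier_fee=8.00, opted_in_ad_views=0, data_sharing_level=0))    # 8.0
```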
Bottom line. Whatever the details, unless Facebook shifts direction on its own to aggressively move toward user payments -- which now seems unlikely -- regulatory pressure will be needed to force that (just as with auto emissions). A user revolt might force similar changes as well, but the problem is far too urgent to wait and see.

The broader call -- augmenting the wisdom of crowds

Shifting to a user-revenue-based business model will change incentives and drive significant progress to remedy many of the problems that McNamee and many others have raised. McNamee provides a wide-ranging overview of many of those problems and most of the initiatives that promise to help resolve them, but there, too, I offer suggestions that have not gained attention.

Most fundamental is the power of social media platforms to shape collective intelligence. Many have come to see that, while technology has great power to augment human intelligence, applied badly, it can have the opposite effect of making us more stupid. We need to steer hard for a more positive direction, now that we see how dangerous it is to take good results for granted, and how easily things can go bad. McNamee observes that "We...need to address these problems the old fashioned way, by talking to one another and finding common ground." Effective social media design can help us do that.

Another body of my work relates to how to design social media feeds and filtering algorithms to do just that, as explained in The Augmented Wisdom of Crowds:  Rate the Raters and Weight the Ratings:
  • The core issue is one of trust and authority -- it is hard to get consistent agreement in any broad population on who should be trusted or taken as an authority, no matter what their established credentials or reputation. Who decides what is fake news? What I suggested is that this is the same problem that has been made manageable by getting smarter about the wisdom of crowds -- much as Google's PageRank algorithm beat out Yahoo and AltaVista at making search engines effective at finding content that is relevant and useful.

    As explained further in that post, the essence of the method is to "rate the raters" -- and to weight those ratings accordingly. Working at Web scale, no rater's authority can be relied on without drawing on the judgement of the crowd. Furthermore, simple equal voting does not fully reflect the wisdom of the crowd -- there is deeper wisdom about those votes to be drawn from the crowd.

    Some of the crowd are more equal than others. Deciding who is more equal, and whose vote should be weighted more heavily can be determined by how people rate the raters -- and how those raters are rated -- and so on. Those ratings are not universal, but depend on the context: the domain and the community -- and the current intent or task of the user. Each of us wants to see what is most relevant, useful, appealing, or eye-opening -- for us -- and perhaps with different balances at different times. Computer intelligence can distill those recursive, context-dependent ratings, to augment human wisdom.
  • A major complicating issue is that of biased assimilation. The perverse truth seems to be that "balanced information may actually inflame extreme views." This is all too clear in the mirror worlds of pro-Trump and anti-Trump factions and their media favorites like Fox, CNN, and MSNBC. Each side thinks the other is unhinged or even evil, and layers a vicious cycle of distrust around anything they say. It seems one of the few promising counters to this vicious cycle is what Cass Sunstein referred to as surprising validators: people one usually gives credence to, but who suggest one's view on a particular issue might be wrong. An example of a surprising validator was the "Confession of an Anti-GMO Activist." This item is  readily identifiable as a "turncoat" opinion that might be influential for many, but smart algorithms can find similar items that are more subtle, and tied to less prominent people who may be known and respected by a particular user. There is an opportunity for electronic media services to exploit this insight that "what matters most may be not what is said, but who, exactly, is saying it."
If and when Facebook and other platforms really care about delivering value to their users (and our larger society), they will develop this kind of ability to augment the wisdom of the crowd. (Similar large-scale ranking technology is already proven in uses for advertising and Google search.) Our enlightened, democratic civilization will disintegrate or thrive, depending on whether they do that.

The facts of the facts. One important principle that I think McNamee misunderstands (as do many) is his critique that "To Facebook, facts are not absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement." Yes, the problem is that the drive for engagement distorts our drive for the facts -- but the problem is not that "To Facebook, facts are not absolute." As I explain in The Tao of Fake News, facts are not absolute -- we cannot rely on expert authorities to define absolute truth -- human knowledge emerges from an adaptive process of collective truth-seeking by successive approximation and the application of collective wisdom. It is always contingent on that process, not absolute. That is how scholarship, science, and democratic government work, that is what the psychology of cognition and knowledge demonstrates, and that is what effective social media can help all of us do better.

Other monopoly platform excesses - openness and interoperability

McNamee provides a good survey of many of the problems of monopoly (or oligopoly) power in the platforms, and some of the regulatory and antitrust remedies that are needed to restore the transparency, openness, flexibility, and market-driven incentives needed for healthy innovation. These include user ownership of their data and metadata, portability of users' social graphs to promote competition, and audits and transparency of algorithms.

I have addressed similar issues, and go beyond McNamee's suggestions to emphasize the need for openness and interoperability of competing and complementary services -- see Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm. This draws on my early career experience watching antitrust regulatory actions relating to AT&T (in the Bell System days), IBM (in the mainframe era), and Microsoft (in the early Internet browser wars).

The wake up call

There are many prominent voices shouting wake up calls. See the partial list at the bottom of An Open Letter to Influencers Concerned About Facebook and Other Platforms, and McNamee's Bibliographic Essay at the end of Zucked (excellent, except for the omission that I address here).

All are pointing in much the same direction. We all need to do what we can to focus the powers that be -- and the general population -- to understand and address this problem. The time to turn this rudderless ship around is dangerously short, and effective action to set a better direction and steer for it has barely begun. We have already sailed blithely into killer icebergs, and many more are ahead.

---
This is cross-posted from both of my blogs, FairPayZone.com and Reisman on User-Centered Media, which delve further into these issues.

---
See the Selected Items tab for more on this theme.

Monday, August 27, 2018

The Tao of Fake News / The Tao of Truth

We are smarter than this!

Everyone with any sense sees "fake news" disinformation campaigns as an existential threat to "truth, justice, and the American Way," but we keep looking for a Superman to sort out what is true and what is fake. A moment's reflection shows that, no, Virginia, there is no SuperArbiter of truth. No matter who you choose to check or rate content, there will always be more or less legitimate claims of improper bias.
  • We can't rely on "experts" or "moderators" or any kind of "Consumer Reports" of news. We certainly can't rely on the Likes of the Crowd, a simplistic form of the Wisdom of the Crowd that is too prone to "The Madness of Crowds." 
  • But we can Augment the Wisdom of the Crowd.
  • We can't definitively declare good-faith "speech" as "fake" or "false." 
  • But we can develop a robust system for ranking the probable value and truthfulness of speech, revising those rankings, and using that to decide how to share it with whom.
For practical purposes, truth is a filtering process, and we can get much smarter about how we apply our collective intelligence to do our filtering.

The Tao of Fake News, Truth, and Meaning

Truth is a process. Truth is complex. Truth depends on interpretation and context. Meaning depends on who is saying something, to whom, and why (as Humpty-Dumpty observed). The truth in Rashomon is different for each of the characters. Truth is often very hard for individuals (even "experts") to parse.

Truth is a process, because there is no practical way to ensure that people speak the truth, nor any easy way to determine if they have spoken the truth. Many look to the idea of flagging fake news sources, but who judges, on what basis, and on what aspects? (A recent Nieman Lab assessment of NewsGuard's attempt to do this shows how open to dispute even well-funded, highly professional efforts are.)

Truth is a filtering process: How do we filter true speech from false speech? Over centuries we have come to rely on juries and similar kinds of panels, working in a structured process to draw out and "augment" the collective wisdom of a small crowd. In the sciences, we have a more richly structured process for augmenting the collective wisdom of a large crowd of scientists (and their experiments), informally weighing the authority of each member of the crowd -- and avoiding over-reliance on a few "experts." Our truths are not black and white, absolute, and eternal -- they are contingent, nuanced, and tentative -- but this Tao of truth has served us well.

It is now urgent that our methods for augmenting and filtering our collective wisdom be enhanced. We need to apply computer-mediated collaboration to apply a similar augmented wisdom of the crowd at Internet scale and speed. We can make quick initial assessments, then adapt, grow, and refine our assessments of what is true, in what way, and with regard to what.

Filtering truth -- networks, context, and community

If our goal is to exclude all false and harmful material, we will fail. The nature of truth, and of human values, is too complex. We can exclude the most obviously pernicious frauds -- but for good-faith speech from humans in a free society, we must rely on a more nuanced kind of wisdom.

Our media filter what we see. Now the filters in our dominant social media are controlled by a few corporations motivated to maximize ad revenue by maximizing engagement. They work to serve the advertisers that are their customers, not us, the users (who are now really their product). We need to get them to change how the filters operate, to maximize value to their users.

We need filters to be tuned to the real value of speech as communication from one person to other people. Most people want the "firehose" of items on the Internet to be filtered in some way, but just how may vary. Our filters need to be responsive to the desires of the recipients. Partisans may like the comfort of their distorting filter bubbles, but most people will want at least some level of value, quality, and reality, at least some of the time. We can reinforce that preference by serving it well.

There is also the fact that people live in communities. Standards for what is truthful and valuable vary from community to community -- and communities and people change over time. This is clearer than ever, now that our social networks are global.

Freedom of speech requires that objectionable speech be speak-able, with very narrow exceptions. The issue is who hears that speech, and what control they have over what they hear. A related issue is when third parties have a right to influence those listener choices, and how to keep that intrusive hand as light as possible. Some may think we should never see a swastika or a heresy, but who has the right to draw such lines for everyone in every context?

We cannot shut off objectionable speech, but we can get smarter about managing how it spreads. 

To see this more clearly, consider our human social network as a system of collective intelligence, one that informs an operational definition of truth. Whether at the level of a single social network like Facebook, or all of our information networks, we have three kinds of elements:
  • Sources of information items (publishers, ordinary people, organizations, and even bots) 
  • Recipients of information items  
  • Distribution systems that connect the sources and recipients, using filters and presentation services that determine what we see and how we see it (including optional indicators of likely truthfulness, bias, and quality).
Controlling truth at the source may, at first, seem the simple solution, but requires a level of control of speech that is inconsistent with a free society. Letting misinformation and harmful content enter our networks may seem unacceptable, but (with narrow exceptions) censorship is just not a good solution.

Some question whether it is enough to "downrank" items in our feeds (not deleting them, but making them less likely to be presented to us), but what better option do we have than to do that wisely? The best we can reasonably do is manage the spread of low-quality and harmful information in a way that is respectful of the rights of both sources and recipients, to limit harm and maximize value.*

How can we do that, and who should control it? We, the people, should control it ourselves (with some limited oversight and support).  Here is how.

Getting smarter -- The Augmented Wisdom of Crowds

Neither automation nor human intelligence alone is up to the scale and dynamics of the problem.  We need a computer-augmented approach to managing the wisdom of the crowd -- as embodied in our filters, and controlled by us. That will pull in all of the human intelligence we can access, and apply algorithms and machine learning (with human oversight) to refine and apply it. The good news is that we have the technology to do that. It is just a matter of the will to develop and apply it.

My previous post outlines a practical strategy for doing that -- "The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings." Google has already shown how powerful a parallel form of this strategy can be to filter which search results should be presented to whom -- on Internet scale. My proposal is to broaden these methods to filter what our social media present to us.

The method is one of considering all available "signals" in the network and learning how to use them to inform our filtering process. The core of the information filtering process -- that can be used for all kinds of media, including our social media -- is to use all the data signals that our media systems have about our activity. We can consider activity patterns across these three dimensions:
  • Information items (content of any kind, including news items, personal updates, comments/replies, likes, and shares/retweets).
  • Participants (and communities and sub-communities of participants), who can serve as both sources and recipients of items (and of items about other items)
  • Subject and task domains (and sub-domains) that give important context to information items and participants.
We can apply this data with the understanding that any item or participant can be rated, and any item can contain one or more ratings (implicit or explicit) of other items and/or participants. The trick is to tease out and make sense of all of these interrelated ratings and relationships. To be smart about that, we must recognize that not all ratings are equal, so we "rate the raters, and weight the ratings" (using any data that signals a rating). We take that to multiple levels -- my reputational authority depends not only on the reputational authority of those who rate me, but on those who rate them (and so on).
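As a minimal sketch of the "weight the ratings" step, the fragment below scores a single item by combining explicit and implicit rating signals, each weighted by the rater's reputation in the relevant domain. The signal weights, the default reputation for unknown raters, and the data shapes are illustrative assumptions; how the reputations themselves are derived recursively is sketched in the architecture post further down this page.

```python
# Minimal sketch of "weight the ratings": score one item for presentation by
# combining explicit and implicit rating signals, each weighted by the rater's
# reputation in the relevant domain. All weights and shapes are assumptions.

SIGNAL_WEIGHT = {"explicit_rating": 1.0, "share": 0.6, "like": 0.3}

def item_score(ratings, reputation, domain, default_rep=0.05):
    """ratings: (rater_id, signal, value in [0,1]); reputation: {(rater_id, domain): weight}."""
    num = den = 0.0
    for rater, signal, value in ratings:
        weight = reputation.get((rater, domain), default_rep) * SIGNAL_WEIGHT[signal]
        num += weight * value
        den += weight
    return num / den if den else 0.5  # neutral prior when there are no signals yet

reputation = {("alice", "health"): 0.9, ("bob", "health"): 0.2}   # e.g., from rate-the-raters
ratings = [("alice", "explicit_rating", 0.9), ("bob", "like", 1.0), ("carol", "share", 1.0)]
print(round(item_score(ratings, reputation, "health"), 2))        # alice's judgement dominates
```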

This may seem very complicated (and at scale, it is), but Google proved the power of such algorithms to determine which search results are relevant to a user's query (at mind-boggling scale and speed). Their PageRank algorithm considers what pages link to a given page to assess the imputed reputational authority of that page -- with weightings based on the imputed authority of the pages that link to it (again to multiple levels). Facebook uses similarly sophisticated algorithms to determine what ads should be targeted to whom -- tracking and matching user interests, similarities, and communities and matching that with information on their response to similar ads.

In some encouraging news, it was recently reported that Facebook is now also doing a very primitive form of rating the trustworthiness of its users to try to identify fake news -- they track who spreads fake news and who reports abuse truthfully or deceitfully. What I propose is that we take this much farther, and make it central to our filtering strategies for social media and more broadly.

With this strategy, we can improve our media filters to better meet our needs, as follows:
  • Track explicit and implicit signals to determine authority and truthfulness -- both of the speakers (participants) and of the things they say (items) -- drawing on the wisdom of those who hear and repeat it (or otherwise signal how they value it).
  • Do similar tracking to understand the desires and critical thinking skills of each of the recipients.
  • Rate the raters (all of us!) -- and weight the votes to favor those with better ratings. Do that n-levels deep (much as Google does).
  • Let the users signal what levels and types of filtering they want. Provide defaults and options to accommodate users who desire different balances of ease versus fine control and reporting. Let users change that as they desire, depending on their wish to relax, to do focused critical thinking, or to open up to serendipity.
  • Provide transparency and auditability -- to each user (and to independent auditors) -- as to what is filtered for them and how.**
  • Open the filtering mechanisms to independent providers, to spur innovation in a competitive marketplace in filtering algorithms for users to choose from.
That is the best broad solution that we can apply. As we get good at it we will be amazed at how effective it can be. But given the catastrophic folly of where we have let this get to...

First, do no harm!

Most urgently, we need to change the incentives of our filters to do good, not harm. At present, our filters are pouring gasoline on the fires (even as their corporate owners claim to be trying to put them out). As explained in a recent HBR article, "current digital advertising business models incentivize the spread of false news." That article explains the insidious problem of the ad model for paying for services (others have called it "the original sin of the Web") and offers some sensible remedies.  

I have proposed more innovative approaches to better-aligning business models -- and to using a light-handed, market-driven, regulatory approach to mandate doing that -- in "An Open Letter to Influencers Concerned About Facebook and Other Platforms."

We have learned that the Internet has all the messiness of humanity and its truths. We are facing a Pearl Harbor of a thousand pin-pricks that is rapidly escalating. We must mobilize onto a war footing now, to halt that before it is too late.
  • First we need to understand the nature and urgency of this threat to democracy, 
  • Then we must move on both short and longer time horizons to slow and then reverse the threat. 
The Tao of fake news contains its opposite, the Tao of Augmented Wisdom. If we seek that, the result will be not only to manage fake news, but to be smarter in our collective wisdom than we can now imagine.

---
*Of course some information items will be clearly malicious, coming from fraudulent human accounts or bots -- and shutting some of that off at the source is feasible and desirable. But much of the spread of "fake news" (malicious or not) is from real people acting in good faith, in accord with their understanding and beliefs. We cannot escape that non-binary nature of human reality, and must come to terms with our world in nuanced shades of gray. But we can get very sophisticated at distinguishing when news is spread by participants who are usually reliable from when it is spread by those who have established a reputation for being credulous, biased, or malicious.

**The usual concern with transparency is that if the algorithms are known, then bad-actors will game them. That is a valid concern, and some have suggested that even if the how of the filtering algorithm is secret, we should be able to see and audit the why for a given result.  But to the extent that there is an open market in filtering methods (and in countermeasures to disinformation), and our filters vary from user to user and time to time, there will be so much variability in the algorithms that it will be hard to game them effectively.

---
[Update 8/30/18:]  Giuliani and The Tao of Truth 

To indulge in some timely musing, the Tao of Truth gives a perspective on the widely noted recent public statement that "truth isn't truth." At the level of the Tao, we can say that "truth is/isn't truth," or more precisely, "truth is/isn't Truth" (with one capital T). That is the level at which we understand truth to be a process in which the question "what is truth?" depends on what we mean, at what level, in what context, with what assurance -- and how far we are in that process. We as a society have developed a broadly shared expectation of how that process should work. But as the process does its never-ending work, there are no absolutes -- only more or less strong evidence, reasoning, and consensus about what we believe the relevant truth to be. (That, of course, is an Enlightenment social perspective, and some disagree with this very process, and instead favor a more absolute and authoritarian variation.) Perhaps most fundamentally, we are now in a reactionary time in which our prevailing process for truth is being prominently questioned. The hope here is that continuing development of a free, open, and wise process prevails over return to a closed, authoritarian one -- and prevails over the loss of any consensus at all.

[Update 10/12/18:] A Times article by Sheera Frenkel adds perspective on the scope and pace of the problem -- and the difficulty in definitively identifying items as fakes that can be censored "because of the blurry lines between free speech and disinformation" -- but such questionable items can be down-ranked.

[Update 11/2/20:] A nice article on the importance of understanding the social nature of truth ("epistemic dependence" -- our reliance on others' knowledge -- "knowing vicariously"), and the interplay of evidence, trust, and authority, is in MIT Tech Review. It refers to a much-cited fundamental paper on epistemic dependence from 1985.

---
See the Selected Items tab for more on this theme.

Sunday, July 22, 2018

The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings

How technology can make us all smarter, not dumber

We thought social media and computer-mediated communications technologies would make us smarter, but recent experience with Facebook, Twitter, and others suggests they are now making us much dumber. We face a major and fundamental crisis. Civilization seems to be descending into a battle of increasingly polarized factions who cannot understand or accept one another, fueled by filter bubbles and echo chambers.

Many have begun to focus serious attention on this problem, but it seems we are fighting the last war -- not using tools that match the task.

A recent conference, "Fake News Horror Show," convened people focused on these issues from government, academia, and industry, and one central question was who decides what is "fake news," how, and on what basis. There are many efforts at fact checking, and at certification or rating of reputable vs. disreputable sources -- but also recognition that such efforts can be crippled by circularity: who is credible enough in the eyes of diverse communities of interest to escape the charge of "fake news" themselves?

I raised two points at that conference. This post expands on the first point and shows how it provides a basis for addressing the second:
  • The core issue is one of trust and authority -- it is hard to get consistent agreement in any broad population on who should be trusted or taken as an authority, no matter what their established credentials or reputation. Who decides what is fake news? What I suggested is that this is the same problem that has been made manageable by getting smarter about the wisdom of crowds -- much as Google's PageRank algorithm beat out Yahoo and AltaVista at making search engines effective at finding content that is relevant and useful.

    As explained further below, the essence of the method is to "rate the raters" -- and to weight those ratings accordingly. Working at Web scale, no rater's authority can be relied on without drawing on the judgement of the crowd. Furthermore, simple equal voting does not fully reflect the wisdom of the crowd -- there is deeper wisdom about those votes to be drawn from the crowd.

    Some of the crowd are more equal than others. Deciding who is more equal, and whose vote should be weighted more heavily can be determined by how people rate the raters -- and how those raters are rated -- and so on. Those ratings are not universal, but depend on the context: the domain and the community -- and the current intent or task of the user. Each of us wants to see what is most relevant, useful, appealing, or eye-opening -- for us -- and perhaps with different balances at different times. Computer intelligence can distill those recursive, context-dependent ratings, to augment human wisdom.
  • A major complicating issue is that of biased assimilation. The perverse truth seems to be that "balanced information may actually inflame extreme views." This is all too clear in the mirror worlds of pro-Trump and anti-Trump factions and their media favorites like Fox, CNN, and MSNBC. Each side thinks the other is unhinged or even evil, and layers a vicious cycle of distrust around anything they say. It seems one of the few promising counters to this vicious cycle is what Cass Sunstein referred to as surprising validators: people one usually gives credence to, but who suggest one's view on a particular issue might be wrong. A recent example of a surprising validator was the "Confession of an Anti-GMO Activist." This item is  readily identifiable as a "turncoat" opinion that might be influential for many, but smart algorithms can find similar items that are more subtle, and tied to less prominent people who may be known and respected by a particular user. There is an opportunity for electronic media services to exploit this insight that "what matters most may be not what is said, but who, exactly, is saying it."
These are themes I have been thinking and writing about on and off for decades. This growing crisis, as highlighted by the Fake News Horror Show conference, spurred me to write this outline for a broad architecture (and specific methods) for addressing these issues. Discussions at that event led to my invitation to an upcoming workshop hosted by the Global Engagement Center (a US State Department unit) focused on "technologies for use against foreign propaganda, disinformation, and radicalization to violence." This post is offered to contribute to those efforts.

Beyond that urgent focus, this architecture has relevance to the broader improvement of social media and other collaborative systems. Some key themes:
  • Binary, black or white thinking is easy and natural, but humans are capable of dealing with the fact that reality is nuanced in many shades of gray, in many dimensions. Our electronic media can augment that capability.
  • Instead, our most widely used social media now foster simplistic, binary thinking.
  • Simple strategies (analogous to those proven and continually refined in Google's search engine) enable our social media systems to recognize more of the underlying nuance, and bring it to our attention in far more effective ways.
  • We can apply an architecture that draws on some core structures and methods to enable intelligent systems to better augment human intelligence, and to do that in ways tuned to the needs of a diversity of people -- from different schools of thought and with different levels of intelligence, education, and attention.
  • Doing this can not only better expose truly fake news for what it is, but can make us smarter and more aware and reflective of nuance. 
  • This can not only guide our attention toward quality, but can also make us more open to surprising validators and other forms of serendipity needed to escape our filter bubbles.
Where I am coming from

I was first exposed to early forms of augmented intelligence and hypermedia in 1969 (notably Nelson and Engelbart), and to collaborative systems in 1971 (notably Turoff). That set a broad theme for my work. After varied roles in IT and media technology, I became an inventor, and one of my patent applications outlined a collaborative system for social development of inventions and other ideas (in 2002-3). While my specific business objective proved elusive (as the world of patents changed), what I described was a general architecture for collaborative development of ideas that has very wide applicability ("ideas" include news stories, social media posts, and "likes"). That is obviously more timely now than ever. I had written on this blog about some specific aspects of those ideas in 2012: "Filtering for Serendipity -- Extremism, 'Filter Bubbles' and 'Surprising Validators.'" To encourage use of those ideas, I released that patent filing into the public domain in 2016.

Here, I take a first shot at a broad description of these strategies that is intended to be more readable and relevant to our current crisis than the legalese of the patent application. As a supplement, a copy of that patent document, with highlighting of the portions that remain most relevant, is posted online.*

Of course some of these ideas are more readily applied than others. But the goal of an architecture is to provide a vision and a framework to build on. Considering the broad scope of what might be done over time is the best way to be sure that we do the best that we can do at any point in time. We can then adjust and improve on that to build toward still-better solutions.

Augmenting the wisdom of crowds

Civilization has risen because of our human skills: to cooperate, to learn from one another, and to coalesce on wisdom and resist folly -- difficult as it may often be to distinguish which is which.

Life is complex, and things are rarely black or white. The Tao symbolizes the realization that everything contains its opposite -- Ted Nelson put it that "everything is deeply intertwingled," and conceived of hypertext as a way to reflect that. But throughout human history this nuanced intertwingling has remained challenging for people to grasp.

Behavioral psychology has elucidated the mechanisms behind our difficulty. We are capable of deep and subtle rational thought (Kahneman's System 2, "thinking slow"), but we are pragmatic and lazy, and prefer the more instinctive, quick, and easy path (System 1, "thinking fast"), a mode that offers great survival value when faced with urgent decisions. Only reluctantly do we think more deeply. The fast thinking of System 1 favors biased assimilation, with its reliance on "cognitive ease," quick reactions, and emotional and tribal appeal, rather than rationality.

Augmenting human intellect

For over half a century, a seminal dream of computer technology has been "augmenting human intellect" based on "man-computer symbiosis." The developers of our augmentation tools and our social media believed in their power to enhance community and wisdom -- but we failed to realize how easily our systems can reduce us to the lowest common denominator if we do not apply consistent and coherent measures to better augment the intelligence they automated. A number of early collaborative Web services recognized that some contributors should be more equal than others (for example, Slashdot, with its "karma" reputation system). Simple reputation systems have also proven important for eBay and other market services. However, the social media that came to dominate broader society failed to realize how important that is, and were motivated to "move fast and break things" in a rush to scale and profit.

Now, we are trying to clean up the broken mess of this Frankenberg's monster, to find ways to flag "fake news" in its various harmful forms. But we still seem not to be applying the seminal work in this field. That failure has made our use of the wisdom of crowds stupid to the point of catastrophe. Instead of augmenting our intellect as Engelbart proposed, we are de-augmenting it. People see what is popular, read a headline without reading the full story, jump to conclusions and "like" it, making it more popular, so more people see it. The headlines increasingly become clickbait that distorts the real story. Influence shifts from ideas to memes. This is clearly a vicious cycle -- one that the social media services have little economic incentive to change -- polarization increases engagement, which sells more ads. We urgently need fundamental changes to these systems.

Crowdsourced, domain-specific authorities -- rating the raters -- much like Google

Raw forms of the wisdom of crowds look to "votes" from the crowd, weight them equally, and select the most popular or "liked" items (or a simple average of all votes). This has been done for forecasting, for citation analysis of academic papers, and in early computer searching. But it becomes apparent that this can lead to the lowest common denominator of wisdom, and is easily manipulated with fraudulent votes. Of course we can restrict this to curated "expert" opinion, but then we lose the wisdom of the larger crowd (including its ability to rapidly sense early signs of change).

It was learned that better results can be obtained by weighting votes based on authority, as done in Google's PageRank algorithm, so that votes with higher authority count more heavily (while still using the full crowd to balance the effects of supposed authorities who might be wrong). In academic papers, it was realized that it matters which journal cites an article (now that many low-quality pay-to-publish journals have proliferated).

In Google's search algorithm (dating from 1996, and continuously refined), it was realized that links from a widely-linked-to Web site should be weighted higher in authority than links from another that has few links in to it. The algorithm became recursive: PageRank (used to rank the top search results) depends on how many direct links come in, weighted by a second level factor of how many sites link in to those sites, and weighted in turn by a third level factor of how many of those have many inward links, and so on. Related refinements partitioned these rankings by subject domain, so that authority might be high in one domain, but not in others. The details of how many levels of recursion and how the weighting is done are constantly tuned by Google, but this basic rate the raters strategy is the foundation for Google's continuing success, even as it is now enhanced with many other "signals" in a continually adaptive way. (These include scoring based on analysis of page content and format to weight sites that seem to be legitimate above those that seem to be spam or link farms.)
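For reference, a commonly cited form of that recursion is compact (Google's production ranking layers many further signals on top of it):

```latex
% A commonly cited form of the PageRank recursion:
%   N      -- total number of pages
%   M(p_i) -- the set of pages that link to p_i
%   L(p_j) -- the number of outbound links on p_j
%   d      -- damping factor, commonly around 0.85
PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}
```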

Proposed methods and architecture

My patent disclosure explains much the same rate the raters strategy (call it RateRank?) as applicable to ranking items of nearly any kind, in a richly nuanced, open, social context for augmenting the wisdom of crowds. (It is a strategy that can itself be adapted and refined by augmenting the wisdom of crowds -- another case of "eat your own dog food!")

The core architecture works in terms of three major dimensions that apply to a full range of information systems and services:
  1. Items. These can be any kind of information item, including contribution items (such as news stories, blog posts, or social media posts, or even books or videos, or collections of items), comment/analysis items (including social media comments on other items), and rating/feedback items (including likes and retweets, as well as comments that imply a rating of another item)
  2. Participants (and communities and sub-communities of participants). These are individuals, who may or may not have specific roles (including submitters, commenters, raters, and special roles such as experts, moderators, or administrators). In social media systems, these might include people (with verified IDs or anonymous), collections of people in the form of businesses, commercial advertisers, political advertisers, and other organizations. (Special rules and restrictions might apply to non-human participants, including bots and corporate or state actors.) Communities of participants might be explicit (with controlled membership), such as Facebook groups, and implicit (and fuzzy), based on closeness of social graph relationships and domain interests. These might include communities of interest, practice, geographic locality, or  degree of social graph closeness. 
  3. Domains (and sub-domains). These may be subject-matter domains in various dimensions. Domains may overlap or cross-cut. (For example issues about GMOs might involve cross-cutting scientific, business, governmental/regulatory, and political domains.)
An important aspect of generality in this architecture is that:
  • Any item or participant can be rated (explicitly or implicitly)
  • Any item can contain one or more ratings of other items or participants (and of itself)
It should be understood that Google's algorithm is a specialized instance of such an architecture -- one where all the items are Web pages, and all links between Web pages are implicit ratings of the link destination by the link source. The key element of man-computer symbiosis here is that the decision to place a link is assumed to be a "rating" decision of a human Webmaster or author (a vote for the destination, by the source, from the source context), but the analysis and weighting of those links (votes) is algorithmic. Much as could be applied to fake news, Google has developed finely tuned algorithms for detecting the multitudes of "link farms" that use bots that seek to fraudulently mimic this human intelligence, and downgrades the weighting of such links.
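As a concrete, purely illustrative rendering of these three dimensions -- and of the rule that anything can be rated and any item can carry ratings -- here is a minimal data-model sketch. Field names and types are my assumptions, not a specification from the patent filing.

```python
# Purely illustrative rendering of the three dimensions and rating rules above.
# Field names and types are assumptions, not a specification from the patent filing.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Participant:
    id: str
    verified: bool = False
    is_bot: bool = False                                  # special rules may apply to non-humans
    communities: set = field(default_factory=set)         # explicit groups and inferred (fuzzy) ones

@dataclass
class Rating:
    rater_id: str
    target_id: str                                        # an Item or a Participant: anything can be rated
    value: float                                          # normalized to [0, 1]
    explicit: bool = True                                 # False for implicit signals (likes, dwell time, links)
    domain: Optional[str] = None                          # domains may overlap or cross-cut

@dataclass
class Item:
    id: str
    author_id: str
    kind: str = "contribution"                            # contribution, comment/analysis, or rating/feedback
    domains: set = field(default_factory=set)
    embedded_ratings: list = field(default_factory=list)  # an item can carry ratings of other items/participants
```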

How the augmenting works

The heart of the method is a fully adaptive process that rates the raters recursively, using explicit and implicit ratings of items and raters (and potentially even the algorithms of the system itself). Rate the raters, rate those who rate the raters, and so on. Weight the ratings according to the rater's reputation (in context), so that the wisest members of the crowd, in the current context, as judged by the crowd, have the most say -- "wisest in context" meaning wisest in the domains and communities most relevant to the current usage context. But still, all of the crowd should be considered at some level.
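A minimal, unoptimized sketch of that recursive step, under assumed parameters: reputations start equal, each round is redistributed according to ratings weighted by the current reputation of the rater, and a small baseline keeps even poorly rated participants above zero. A real system would partition this by domain and community, mix explicit and implicit signals, and keep human oversight in the loop.

```python
# Rough sketch of recursive "rate the raters"; parameters are illustrative assumptions.

def rate_the_raters(ratings, participants, damping=0.85, iterations=20):
    """ratings: list of (rater_id, target_id, value in [0,1]) within one context."""
    rep = {p: 1.0 / len(participants) for p in participants}    # start everyone equal
    baseline = (1.0 - damping) / len(participants)               # floor: downgraded, never erased
    for _ in range(iterations):
        nxt = {p: baseline for p in participants}
        for rater, target, value in ratings:
            if rater == target:
                continue                                         # ignore self-promotion
            nxt[target] += damping * rep.get(rater, baseline) * value
        total = sum(nxt.values())
        rep = {p: r / total for p, r in nxt.items()}             # renormalize each round
    return rep                                                   # context-specific rater weights

people = ["alice", "bob", "mallory"]
ratings = [("alice", "bob", 0.9), ("bob", "alice", 0.8),
           ("alice", "mallory", 0.1), ("mallory", "alice", 0.5)]
print(rate_the_raters(ratings, people))   # mallory ends up weighted well below alice and bob
```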

This causes good items and raters (and algorithms) to bubble up into prominence, and less well-rated ones to sink from prominence. This process would rarely be binary black and white. Highly rated items or participants can lose that rating over time, and in other contexts. Poorly rated items or participants might never be removed (except for extreme abuse) but simply downgraded (to contribute what small weight is warranted, especially if many agree on a contrary view) and can remain accessible with digging, when desired. (As noted below, our social media systems have become essential utilities, and exclusion of people or ideas on the fringe is at odds with the value of free speech in our open society.) The rules and algorithms could be continuously learning and adaptive, using a hybrid of machine learning and human oversight. 

Attention management systems can ensure that the best items tend to be made most visible, and the worst least visible, but the system should adjust those rankings to the context of what is known about the user in general, and what is inferred about what the user is seeking at a given time -- with options for explicit overrides (much as Google adjusts its search rankings to the user and their current query patterns). It should be noted that Facebook and others already use some similar methods, but unfortunately these are oriented to maximizing an intensity of "engagement" that optimizes for the company's ad sale opportunities, rather than to a quality of content and engagement for the user. We need sophistication of algorithms, data science, and machine learning applied to quality for users, not just engagement for advertisers and those who would manipulate us.

Participants might be imputed high authority in one domain, or in one community, but lower in others. Movie stars might outrank Nobel prize-winners when considering a topic in the arts or even in social awareness, but not in economic theory. NRA members might outrank gun control advocates for members of an NRA community, but not for non-members of that community.

Openness is a key enabling feature: these algorithms should not be monolithic, opaque, and controlled by any one system, but should be flexible, transparent, and adaptive -- and depend on user task/context/desires/skill at any given time. Some users may choose simple default processes and behaviors, but others could be enabled to mix and match alternative ranking and filtering processes, and to apply deeper levels of analytics to understand why the system is presenting a given view. Users should be able to modify the view they see as they may desire, either by changing parameters or swapping alternative algorithms. Such alternative algorithms could be from a single provider, or alternative sources in an open marketspace, or "roll your own."

Within this framework, key design factors include how these processes are managed to work in concert, and how each of them can be changed for a given user, at a given time, depending on task/context/desires/skill (including the level of effort a user wishes to put in):
  • The core rate the raters process, based on both implicit and explicit ratings, weighted by authority as assessed by other raters (as themselves weighted based on ratings by others), with selective levels of partitioning by community and domain. Consideration of formal and institutional authority can be applied to partially balance crowdsourced authority. Dynamic selection of weighting and balancing methods might depend on user task/context/desires
  • Attention tools that filter irrelevant items and highlight relevant ones, so that different Facebook or Twitter users can get different views of their feeds, and change those views as desired.
  • Consideration with regard to which communities and sub-communities most contribute to rankings for specific items at specific times.  Communities might have graded openness (in the form of selectively permeable boundaries) to avoid groupthink and cross-fertilize effectively. This could be applied by using insider/outsider thresholds to manage separation/openness.
  • Consideration with regard to domains and sub-domains to maximize the quality and relevance of ratings, authority, and attention, and to avoid groupthink and cross-fertilize effectively.
  • Consideration of explicit vs. implicit ratings. While explicit ratings may provide the strongest and most nuanced information, implicit ratings may be far more readily available, thus representing a larger crowd, and so may have the greatest value in augmenting the wisdom of the crowd. Just as with search and ad targeting, implicit ratings can include subtle factors, such as measures of attention, sentiment, emotion, and other behaviors.
  • Consideration of verified vs. unverified vs. anonymous participants. It may be desirable to allow a range of levels, using weighting in which anonymous participants have little, no, or even negative reputation. Bots might be banned, or given very poor reputation.
  • Open creation, selection and use of alternative tools for filtering, discovery, attention/alerting, ranking, and analytics depending on user task/context/desires. This kind of openness can stimulate development and testing of creative alternatives and enable market-based selection of the best-suited tools.
  • Valuation, crowdfunding, recognition, publicity, and other non-monetary incentives can also be used to encourage productive and meaningful participation, to bring out the best of the crowd.
(As expanded on below, all of this should be done with transparency and user control.)

[Update 10/10/18:] This subsequent post: In the War on Fake News, All of Us are Soldiers, Already!, may help make this more concrete and clarify why it is badly needed.

Applying this to social media -- fake news, community standards, polarization, and serendipity

A core objective is to augment the wisdom of crowds -- to benefit from the crowd to filter out the irrelevant or poor quality -- but to have augmented intelligence in determining relevance and quality in a dynamically nuanced way that reduces the de-augmenting effect of echo chambers and filter bubbles.

Using these methods, true fake news, which is clearly dishonest and created by known bad actors, can be readily filtered out, with low risk of blocking good-faith contrarian perspectives from quality sources. Such fake news can readily be distinguished from  legitimate partisan spin (point and counterpoint), from legitimate criticism (a news photo of a Nazi sign) or historically important news items (the Vietnam "terror of war" photo), and from legitimate humor or satire.

A dilemma that has become very apparent in our social media relates to "community standards" for managing people and items that are "objectionable." Since our social media systems have become essential utilities, exclusion of people or ideas on the fringe is at odds with the rights of free speech in our open society. Jessica Lessin recently commented on Facebook's "clumsy" struggles with content moderation, and on the calls of some to ban people and items. She observes that Facebook wants the community to determine the rules, but is also pressed to placate regulators -- and notes that "getting two billion people to write your rules isn’t very practical."

"Getting two billion people to write your rules" is just what the augmented wisdom of crowds does seek to make practical -- and more effective than any other strategy. The rules would rarely ban people (real humans) or items, but simply limit their visibility beyond the participants and communities that choose to accept such people or items. Such "objectionable" people have no right to require they be granted wide exposure, and, at the same time, those who find some people or materials objectionable rarely have a right to insist on an absolute and total ban.

This ties back to the converse issue, the seeking of surprising validators and serendipity described in my 2015 post. By understanding the items and participants, how they are rated by whom, and how they fit into communities, social graphs, and domains, highly personalized attention management tools can minimize exposure to what is truly objectionable, but can find and present just the right surprising validators for each individual user (at times when they might be receptive). Similarly, these tools can custom-choose serendipitous items from other communities and domains that would otherwise be missed.

This is an area where advanced augmentation of crowd wisdom can become uniquely powerful. The mainstream will become more aware and accepting of fringe views and materials (and might set aside specific times for exploring such items), and the extremes will have the freedom to choose (1) to make their case in a way that others can accept as unpleasant but not unreasonable or antisocial, or (2) to be placed beyond the pale of broader society: hard to find, but still short of total exclusion. Again, a high degree of customization can be applied (and varied with changing context). Those who want walled gardens can create them -- with windows and gates that open where and when desired.

Innovation, openness, transparency, and privacy

Of course the key issues are how do we apply quick fixes for our current crisis, how do we evolve toward better media ecosystems, and how do we balance privacy and transparency. I generally advocate for openness and transparency. 

The Internet and the early Web were built on openness and transparency, which fueled a huge burst of innovation. (Just as I refer to my 2002-3 patent filing, one can make a broad argument that many of the most important ideas of digital society emerged around the time of that "dot-com" era or before.) Open, interoperable systems (both Web 1.0 and Web 2.0) enabled a thousand flowers to bloom. There are similar lessons from systems for financial market data (one of the first great data market ecologies), fueled by open access to market data from trading exchanges and by competing, interoperable distribution, analytics, and presentation services. The patent filing I describe here (and others of mine) builds on similar openness and interoperability.

Now that we have veered down a path of closed, monopolistic walled gardens that have gained great power, we face difficult questions of how to manage them for the public good. I suggest we probably need a mix of all five of the following. Determining just how to do that will be challenging. (Some suggestions related to each of these follow.)
  1. Can we motivate monopolies like Facebook to voluntarily shift to better serve us? Ideally, that would be the fastest solution, since they have full power to introduce such methods (and the skills to do so are much the same as the skills they now apply for targeting ads).
  2. Can we independently layer needed functions on top of such services (or in competition with them)? The questions are how to interface to existing services (with or without cooperation) and how to gain critical mass. Even at more limited scale, such secondary systems might provide augmented wisdom that could be fed back into the dominant systems, such as to help flag harmful items.
  3. Should we mandate regulatory controls, accepting these systems as natural monopolies to be regulated as such (much like the early days of regulating the Bell System's monopoly on telephonic media platforms)? There seem to be strong arguments for at least some of this, but being smart about it will be a challenge.
  4. Should we open them up or break portions of them apart (much like the later days of regulating the Bell System)? Here, too, there seem to be strong arguments for at least some of this, but being smart about it will be a challenge.
  5. Can we use regulation to force the monopolies to better serve their users (and society) by forcing changes in their business model (with incentives to serve users rather than advertisers)? I suggest that may be one of the most feasible and effective levers we can apply.
My suggestions about those alternatives follow.
A transparent society?

A central (and increasingly urgent) dilemma relates to privacy. Some of my suggestions for openness and transparency in our social media and similar collaborative systems could potentially conflict with privacy concerns. We may have to choose between strict privacy and smart, effective systems that create immense new value for users and society. We need to think more deeply about which objectives matter, and how to get the best mix. Privacy is an important human issue, but its role in our world of Big Data and AI is changing:
  • As David Brin suggested in The Transparent Society, the question of privacy is not just what is known about us, but who controls that information. Brin suggests the greatest danger is that authoritarian governments will control information and use it to control us (as China is increasingly on track to do).
  • We now face a similar concern with monopolies that have taken on quasi-governmental roles -- they seem to be answerable to no one, and are motivated not to serve their users, but to manipulate us to serve the advertisers from whom they profit. (There are also the advertisers themselves.)
  • Brin suggested our technology will return us to the more transparent human norms of the village -- everyone knew one another's secrets, but that created a balance of power in which all but the most antisocial secrets were largely ignored and accepted. We seem to be well on the way to accepting less privacy, as long as our information is not abused.
  • I suggest we will gain the most by moving in the direction of openness and transparency -- with care to protect the aspects of privacy that really need protection (by managing well-targeted constraints on who has access to what, under what controls). 
That takes us back to the genius of man-computer symbiosis -- AI and machine learning thrive on big data. Locking up or siloing big data can cripple our ability to augment the wisdom of crowds and leave us at the mercy of the governments or businesses that do have our data. We need to find a wise middle ground of openness that fuels augmented intelligence and market forces -- in which service providers are driven by customer demand and desires, and constrained only by the precision-crafted privacy protections that are truly needed.
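
One way to picture such precision-crafted protections (purely illustrative field names and audiences, not a proposed schema) is as field-level rules about who may use which data, and for what purpose -- rather than an all-or-nothing choice between locked-down silos and total exposure.

```python
# A minimal sketch with hypothetical fields and audiences: per-field rules on
# who may use which data, and for what purpose, instead of all-or-nothing.
POLICY = {
    # field             : audience allowed -> purposes allowed
    "public_posts":      {"anyone":      {"ranking", "research"}},
    "interest_profile":  {"my_services": {"ranking"}},
    "location_history":  {},   # shared with no one by default
}

def allowed(field: str, audience: str, purpose: str) -> bool:
    """True only if this audience may use this field for this purpose."""
    return purpose in POLICY.get(field, {}).get(audience, set())

print(allowed("public_posts", "anyone", "research"))           # True
print(allowed("location_history", "my_services", "ranking"))   # False
```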

-----------------------

See the Selected Items tab for related posts 
[Update 12/30/19, 12/14/21: That list replaces the shorter list originally posted here.]

Supportive References for Augmenting the Wisdom of Crowds and The Tao of Truth
------

*Appendix -- My patent disclosure document (now in public domain)

This post draws on the architecture and methods described in detail in my US patent application entitled "Method and Apparatus for an Idea Adoption Marketplace" (10/692,974), which was published 9/17/04. It was filed 10/24/03, formalizing a provisional filing on 10/24/02. I released this material into the public domain on 12/19/16. I retain no patent rights in it, and it is open to all who can benefit from it.

A copy of that application with highlighting of portions most relevant to current needs is now online. While this is written in the hard-to-read legalese style required for patent applications, it is hoped that the highlighted sections are helpful to those with interest. (A duplicate copy is here.)

The highlighted sections present a broad architecture that now seems more timely than ever, and provide an extensible framework for far better social media -- and for important aspects of digital democracy in general.

For those who are curious, there is a brief write-up on the original motivation of this work.

(This patent application was cited by 183 other patent applications (as of 12/21/21), an indicator of its contribution. 21 of those citations were by Facebook.)

Thursday, December 15, 2016

2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect

Silicon Valley, we have a problem!

The 2016 election has made it all too clear that growing concerns some of us had about the failing promise of our new media were far more acute than we had imagined. Stuart Elliott recently observed that "...the only thing easier to find than fake news is discussion of the phenomenon of fake news."

But as many have noted, this is a far bigger problem than just fake news (which is a reasonably tractable problem to solve). It is a problem of echo chambers and filter bubbles, and a still broader problem of critical thinking and responsible human relations. While the vision has been that new media could "augment human intellect," instead, it seems our media are "de-augmenting" our intellect. It is that deeper and more insidious problem that I focus on here.

The most specifically actionable ideas I have about reversing that are well described in my 2012 post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators," which has recently gotten attention from such influential figures as Tim O'Reilly and Eli Pariser. (Some readers may wish to jump directly to that post.)

This post aims at putting that in a broader, and more currently urgent, context. As one who has thought since the 1970s about electronic social media, how to enhance collaborative intelligence, and the "wisdom"/"madness" of crowds, I thought it timely to post on this again, expand on its context, and again offer to collaborate with those seeking remedies.

This post just touches on some issues that I hope to expand on in the future. This is a rich and complex challenge. Even perverse: as noted in my 2012 post, and again below, "balanced information may actually inflame extreme views." But at last there is a critical mass of people who realize this may be the most urgent problem in our Internet media world. Humanity may be on the road to self-destruction -- if we don't find a way to fix this fast.

Some perspectives -- augmenting or de-augmenting?

Around 1970 I was exposed to two seminal early digital media thinkers. Those looking to solve these problems today would do well to look back at this rich body of work. These problems are not new -- only newly critical.
  • Doug Engelbart was a co-inventor of hypertext (the linking medium of the Web) and related tools, with the stated objective of "Augmenting Human Intellect." His classic tech report memorably illustrated the idea of augmenting how we use media, such as writing to help us think, in terms of the opposite -- we can de-augment the task of writing with a pencil by tying the pencil to a brick! While the Web and social media have done much to augment our thinking and discourse, we now see that they are also doing much to de-augment it.
  • Murray Turoff did important early work on social decision support and collaborative problem solving systems. These systems were aimed at consensus-seeking (initially focused on defense and emergency preparedness), and included the Delphi technique, with its specific methods for balancing the loudest and most powerful voices (a minimal sketch of that idea follows this list).
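
For readers unfamiliar with Delphi, here is a toy sketch of its central mechanism, assuming a simple numeric-estimate task (real Delphi studies are far richer, and this is not Turoff's actual system): inputs are anonymous, and the group sees only summary statistics between rounds, so no loud or powerful voice can dominate.

```python
# A toy sketch of the Delphi idea: anonymous estimates are fed back to the
# panel only as summary statistics between rounds.
from statistics import median

def delphi_round(estimates: list) -> dict:
    est = sorted(estimates)
    n = len(est)
    return {
        "median": median(est),
        "lower_quartile": est[n // 4],
        "upper_quartile": est[(3 * n) // 4],
    }

# Panelists revise their anonymous estimates after seeing only this summary,
# and rounds repeat until the spread stabilizes.
print(delphi_round([10, 12, 11, 50, 13, 12, 11, 14]))
```
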
Not so long after that, I visited a lab at what is now Verizon, to see a researcher (Nathan Felde) working with an experimental super-high resolution screen for multimedia (10,000 by 10,000 pixels, as I recall -- that is more than 10 times richer than the 4K video that is just now becoming generally available). He observed that after working with that, going back to a then-conventional screen was like "eating dinner through a straw" -- de-augmentation again.

Now we find ourselves in an increasingly "post-literate" media world, with TV sound bites, 140-character Tweets, and Facebook posts that are not much longer. We increasingly consume our media on small handheld screens -- mobile and hyper-connected, but displaying barely a few sentences -- eating food for our heads through a straw.*

What a fundamental de-augmentation this is, and why it matters, is chillingly described in "Donald Trump, the First President of Our Post-Literate Age," a Bloomberg View piece by Joe Weisenthal:
Before the invention of writing, knowledge existed in the present tense between two or more people; when information was forgotten, it disappeared forever. That state of affairs created a special need for ideas that were easily memorized and repeatable (so, in a way, they could go viral). The immediacy of the oral world did not favor complicated, abstract ideas that need to be thought through. Instead, it elevated individuals who passed along memorable stories, wisdom and good news.
And here we begin to see how the age of social media resembles the pre-literate, oral world. Facebook, Twitter, Snapchat and other platforms are fostering an emerging linguistic economy that places a high premium on ideas that are pithy, clear, memorable and repeatable (that is to say, viral). Complicated, nuanced thoughts that require context don’t play very well on most social platforms, but a resonant hashtag can have extraordinary influence. 
Farhad Manjoo gives further perspective in "Social Media’s Globe-Shaking Power," closing with:
Mr. Trump is just the tip of the iceberg. Prepare for interesting times.
Engelbart and Turoff (and others such as Ted Nelson, the other inventor of hypertext) pointed the way to doing the opposite -- we urgently need to re-focus on that vision, and extend it for this new age.

Current voices for change

One prominent call for change was by Tim O'Reilly, a very influential publisher, widely respected as a thought leader in Internet circles. He posted on "Media in the Age of Algorithms" and triggered much comment (including my comment referring to my 2012 post, which Tim recommended).

Another prominent voice is Eli Pariser, who is known for his TED Talk and book on The Filter Bubble, a term he popularized in 2011. He recently created a collaborative Google Doc which, as reported in Fortune, "has become a hive of collaborative activity, with hundreds of journalists and other contributors brainstorming strategies for pushing back against publishers that peddle falsehoods" (I am one, contributing a section headed "Surprising Validators"). The editable Doc is apparently generating so much traffic that a read-only copy has been posted!

Shelly Palmer did a nice post this summer, "Your Comfort Zone May Destroy The World." We need not just to exhort people to step outside their comfort zones, which few will do unaided, but to make our media smart about enticing us to do so in easy and compelling ways.

The way forward

As I said, this is a rich and complex challenge. Many of the obvious solutions are too simplistic. As my 2012 post begins:
Balanced information may actually inflame extreme views -- that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, "Breaking Up the Echo" (9/17/12). Sunstein is drawing on some very interesting research, and this points toward an important new direction for our media systems.
Please read that post to see why that is so, how Sunstein suggests we might cut through it, and the filtering, rating, and ranking strategies I suggest for doing so. (The idea is to find and highlight what Sunstein called "Surprising Validators" -- people who you already give credence to, who suggest that your ideas might be wrong, at least in part -- enticing you to take a small step outside your comfort zone, and re-think, to see things just a bit more broadly.)

I hope to continue to expand on this, and to work with others on these vital issues in the near future.

=================================
Supporting serious journalism

One other critical aspect of this larger problem is citizen-support of serious journalism -- not chasing clicks or commercial sponsorship, but journalism for citizens. My other blog on FairPay addresses that need, most recently with this companion post: Panic in the Streets! Now People are Ready to Patron-ize Journalism!

=================================

See the Selected Items tab for more on this theme.
---
*Relying on smartphones to feed our heads reminds me of my disappointment with clunky HyperCard on early Macs (the first widely available hypertext system -- nearly 20 years after the early full-screen demos that so impressed me!), with its tiny "cards" instead of pages of unlimited length. How happy I was to see Mosaic and Netscape browsers on full-sized screens finally appear some 5 years later. We are losing such richness as the price of mobility! (I am writing this with a triple-monitor desktop system, which I sorely miss when away from my office, even with a laptop or iPad. And I admit, I am not great at typing with just my thumbs. ...Does anyone have a spare brick?)

[Image:  Thanks to Eli Pariser and Shelly Palmer for the separate images that I mashed up for this post.]