Wednesday, December 16, 2020

Biden Campaign Shows How Social Media Can Reduce Polarization

A compelling report by Kevin Roose in the NY Times explains how the Biden campaign used "surprising validators" like Fox News to penetrate the filter bubbles of Trump supporters. The broader lesson is that social media algorithms can automate this strategy for each of us -- surfacing turncoat and contrarian views from sources we trust that can make us stop and think. That can disarm polarization far more effectively than the “neutral” fact-checking and warning labels that the platforms have been pressured to try.

...So the campaign pivoted…expanding Mr. Biden’s reach by working with social media influencers and “validators,” people who were trusted by the kinds of voters the campaign hoped to reach.

...Perhaps the campaign’s most unlikely validator was Fox News. Headlines from the outlet that reflected well on Mr. Biden were relatively rare, but the campaign’s tests showed that they were more persuasive to on-the-fence voters than headlines from other outlets. So when they appeared — as they did in October when Fox News covered an endorsement that Mr. Biden received from more than 120 Republican former national security and military officials — the campaign paid to promote them on Facebook and other platforms.

“The headlines from the sources that were the most surprising were the ones that had the most impact …When people saw a Fox News headline endorsing Joe Biden, it made them stop scrolling and think.”

“Stop scrolling and think?” Does that happen when a social media user sees an “independent” fact-check warning label? Or an “authoritative” article that presents a contrary view? 

Cass Sunstein introduced the term “surprising validators” in a 2012 Times op-ed, explaining how they could cut through filter-bubble echo-chambers -- while “balanced” critiques were “likely to increase polarization rather than reduce it.” 

That spurred my suggestions that social media should build the surfacing of surprising validators directly into their algorithms, as a way to help combat growing polarization and disinformation. Unlike fact-checks and warning labels, which are not only slow and labor-intensive but fail to convince those already polarized, surprising validators can be identified automatically, at Internet speed, scale, and economy, and can work like Trojan Horses to penetrate closed minds. 
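To illustrate how such an algorithm might work, here is a minimal sketch (with hypothetical names, scores, and weights throughout) of a ranking step that boosts items from sources a user already trusts when they cut against that user's usual leaning -- the automated "surprising validator" effect described above. It is a sketch of the idea under stated assumptions, not a description of any platform's actual code.

```python
# Minimal sketch (not any platform's actual code) of boosting "surprising
# validators" in a feed: items from sources a user already trusts whose
# stance runs counter to that user's usual leaning. All names and scores
# here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Item:
    source: str        # e.g. "fox_news"
    stance: float      # -1.0 (against user's leaning) .. +1.0 (with it)
    base_score: float  # whatever relevance score the feed already computed

def surprise_boost(item: Item, user_trust: dict[str, float]) -> float:
    """Boost items from trusted sources that cut against the user's leaning."""
    trust = user_trust.get(item.source, 0.0)   # 0..1, how much the user trusts the source
    contrarianism = max(0.0, -item.stance)     # only counter-leaning items qualify
    return item.base_score * (1.0 + trust * contrarianism)

# Example: a counter-leaning headline from a trusted source outranks a
# same-scored item that merely confirms the user's views.
user_trust = {"fox_news": 0.9, "unknown_blog": 0.1}
confirming = Item("fox_news", stance=+0.8, base_score=1.0)
surprising = Item("fox_news", stance=-0.8, base_score=1.0)
print(surprise_boost(confirming, user_trust))  # 1.0
print(surprise_boost(surprising, user_trust))  # 1.72
```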

I am seeking to publish a fuller discussion of this, and of why it is the best way to reverse polarization and create a "cognitive immune system" that can help protect our democracy.

Friday, December 11, 2020

Across a Crowded Zoom, But No Enchantment...

 ♫ Some enchanted evening…You may see a stranger…Across a crowded Zoom 

Just published in Techonomy, my "No Enchantment Across a Crowded Zoom" offers some musings on the fundamental reason virtual conferencing is so unsatisfying: it fails to convey the subtle energies and interpersonal mirroring of live interaction. (The article plays off of what many of you know as perhaps the "greatest song ever written for a musical.")

Hopefully that article offers food for thought on what might be improved in Zoom and similar tools. (My thanks to Pip Mothersill, Ph.D. from MIT Media Lab, for some stimulating conversations on serendipity and Zoom.)

Here I add one tangential idea about the subtleties of communication that I was reminded of while writing the article:

GLENDOWER. I can call spirits from the vasty deep. 

HOTSPUR. Why, so can I, or so can any man; But will they come when you do call for them on Zoom?

Many years ago, I heard the famous Gyuto Monks* of Tibet do some of their striking meditative chants --  noted for the deep harmonics that their practiced throat-singing techniques create. The next day they were empaneled in a classroom with Robert Thurman leading an intimate chat about their experiences. 

One of the tidbits was the story of how reluctant they had been to allow their chants to be recorded. The reason for their reluctance was that the chants are part of a meditative process in which fierce demigods are summoned to appear, and then entreated to be beneficent.

The fear was that if the chanting was recorded, on playback, the summoned spirits might hear the call and actually appear. Finding no spiritually adept monks there to greet them, those demigods might become angry. 

Happily, after some thought and meditation, the monks concluded that the spirits would be called only by the live voices of monks in prayer, not the disembodied sounds of a recording!

[*The quality of YouTube audio does not do the chants justice, even for mere mortals -- quality recordings give more sense of the live experience. I can attest that no angry spirits materialized on playing the chants in my home (neither vinyl nor CD).]

Wednesday, December 09, 2020

“How to Save Democracy From Technology" (Fukuyama et al.)

[This is a quick preliminary alert -- a fuller commentary is NOW ONLINE -- see this page for supporting information.]

An important new article in Foreign Affairs by Francis Fukuyama and others makes a compelling case: Few realize that the real harm from Big Tech* platforms is not just bigness or failure to self-regulate, but that they threaten democratic society itself. They go on to suggest a fundamental remedy (emphasis added): 
“Fewer still have considered a practical way forward: taking away the platforms’ role as gatekeepers of content …inviting a new group of competitive ‘middleware’ companies to enable users to choose how information is presented to them. And it would likely be more effective than a quixotic effort to break these companies up.”

The article makes a strong case that the systemic cure for this problem is to give people the power to control the “middleware” that filters our view of the information flowing through the platform in the ways that we each desire. Controlling that at a fine-grained level is beyond the skill or patience of most users, so the best way to do that is to create a diverse open market of interoperable middleware services that users can select from. The authors point out that this middleware could be funded with a revenue share from the platforms – and that, instead of reducing the platform revenue, it might actually increase it by providing better service to bring in more users and more activity. Their article is backed up by an excellent Stanford white paper that provides much more detail.

This resonates with similar proposals I have written over the past two decades. The threat to democracy is not platform control over what is posted, but their unilateral and non-transparent control over what is seen by whom. The platforms control the filters/recommenders of what we each see - and subvert that so they can engage us and sell ads. The only real solution is to delegate that control to users, so that undesirable (in the eye of the receiver) content is not amplified, and bad communities are not proselytized – all without censorship except in extreme cases. An open market is the best way to do that, to ensure the competition that brings us choice, diversity, and innovation -- and to decouple these decisions from the perverse incentives of the platforms to favor advertising revenue over user welfare.

The basic idea of an open market of filtering middleware is described in my Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm (building on work I began in 2002).  Part of the relevant section:

Filtering rules

Filters are central to the function of Facebook, Google, and Twitter. ... there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart...
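To make the architecture in that excerpt concrete, here is a minimal sketch (in Python, with hypothetical names and a deliberately tiny interface) of what user-selectable, pluggable filters interoperating with a platform might look like -- an illustration of the idea, not any platform's actual API.

```python
# Minimal sketch of "pluggable" filtering middleware: the platform exposes a
# stable interface, and users choose which third-party filter ranks their feed.
# The interface and filters here are hypothetical illustrations, not any
# platform's real API.

from typing import Protocol

class FeedFilter(Protocol):
    def rank(self, items: list[dict]) -> list[dict]:
        """Return the items reordered (and possibly pruned) for this user."""
        ...

class ChronologicalFilter:
    def rank(self, items):
        return sorted(items, key=lambda it: it["timestamp"], reverse=True)

class QualityWeightedFilter:
    """Example third-party filter that downranks low-reputation sources."""
    def __init__(self, source_reputation):
        self.rep = source_reputation
    def rank(self, items):
        return sorted(items, key=lambda it: self.rep.get(it["source"], 0.5), reverse=True)

def build_feed(candidate_items, user_filter: FeedFilter, limit=20):
    """The platform keeps control of candidate generation and delivery;
    only the ranking step is delegated to the user's chosen middleware."""
    return user_filter.rank(candidate_items)[:limit]
```

The point of the design is that the platform retains its core services while the ranking decision becomes a replaceable component chosen by the user, which is what creates a competitive market for filters.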

Don't just prevent harm, empower benefit

My deeper proposals explore how changes in algorithms and business models could make such an open market in filtering middleware even more effective. Instead of just preventing platforms from doing harm, this could empower social media to do good, in the ways that each of us choose. That is the essence of democracy and our marketplace of ideas. Good technology can empower us, serving as "bicycles for the mind."

* While Fukuyama's article is entitled “How to Save Democracy From Technology,” this is really not a problem of technology itself, but of badly applied technology -- bad algorithms and architectures, motivated by bad business models.

[Revised 1/23/21]

Friday, November 13, 2020

Wisdom for "A Time to ...Restore the Soul of America"

In defeating Trump after years of ever-worsening division, Biden has his mandate to "restore the soul of America." He cited Ecclesiastes in his victory speech, seeking to set a tone for reuniting our divided society. That mandate is still contested by many, and our democracy remains in mortal peril.

I wrote in Techonomy on Biden’s Opportunity: Look to The Tao to Heal our Divided Country, adding ageless Eastern wisdom on setting the tone for reuniting around common values. I suggest this is a time for Biden -- and all of us -- to: 

  • emulate the Buddha calling on the Spirit of the Earth (the soul of the American People) for a mandate of righteousness, and 
  • look to the universalism of the Tao (that all things are non-binary and contain their opposites).

My points complement, and offer simple guidance consistent with, recent expressions by many others.


Wednesday, August 12, 2020

Reverse the “Nuance Destruction Machine!”

Techonomy has just published my latest article on our social media disaster, Don’t Swim Against the Tide of “Nuance Destruction”:
Social media is “a nuance destruction machine,” as Jeff Bezos concisely put it in his recent widely reported Congressional testimony.
...As long as social media’s financial incentives favor engagement (or “enragement”) over quality, its filtering algorithms will be designed to be favorable to messages of hate and fear. As long as that happens at Internet speed, bolted-on efforts to add back nuance and limit conflict will be futile. We will waste time and resources with little result, and democracy may drown in the undertow.
The article explains a two-part solution:
  1. Change the incentives.
  2. That will motivate platforms to redesign algorithms to filter for nuance — and against incivility and hate speech.
---
To go beyond the brief outline in the article, see this list of Selected Items.

Monday, July 13, 2020

The Fog of Coronavirus: No Bright Lines

An adaptive mind-set for coping with COVID risk (+ adaptive updates at end)

At least six feet apart? More or less? The problem is that no bright lines bound a fog! Generals speak of “the fog of war” – we are at war with a virus that literally is a fog.

People like simple rules.  “Stay six feet apart.”  Bright enough, but do I really have to? Is it really enough? Oh, and that only counts if the contact lasts more than fifteen minutes?  Inside or outside?  Downwind?  Talking or singing?  Loudly?  Emerging research summarized in MIT Tech Review and recently endorsed by the WHO shows this is far more complex and unpredictable than the six-foot rule suggests.  Do masks really help?  When?  Do schools and offices reopen?  How?  Again, the answers seem to be “it depends, and our advice may change.”  Individuals and businesses need to be smart about this at all times.

Humans have been bred to be good at the complex calculus of throwing a spear or jumping a crevice, but we have less inbred ability to think about problems without immediate feedback -- like the spread of a virus.

Epidemiologists offer simple rules with bright lines.  But those are gross simplifications.  Apply them simplistically at your peril – and the peril of those around you.  I am no epidemiologist, but was educated in the science of decision making under risk and uncertainty and of optimizing in the real world.  We must all do that science as best we can.  When hunting for game or playing ball, simple rules about where to look, run, and aim are just the start – we layer on a rich intuitive calculus incorporating all we know about the task and its dynamics. Unfortunately, few of us learn even the basics of the dynamics of contagion (and the science is still emerging). 

Here are some suggestions on structuring our thinking.  But this above all:  use good judgement all the time, every time, and try to make it both informed and thoughtful, with situational awareness to the current context.  This applies to individuals and groups, including businesses, schools, and other organizations.

It floats through the air with the greatest of ease…

It seems the coronavirus is transmitted primarily (but not only) as an aerosol or droplets that can be inhaled or touched -- directly or indirectly, and that even those without symptoms can transmit infection.  The aerosol seems most pervasive and significant, although droplets and indirect contact remain a concern.  So we need to consider not only what we touch and where we cough or sneeze, but how we might inhale the aerosol.  It seems that aerosol can float far more than 6 feet, and longer than 15 minutes.

Also, it seems “the dose makes the poison.” One might expect a single virus particle to be enough to cause contagion, but apparently it takes some quantity over time to overpower our immune system.  The more exposure, the greater the risk.  That is a complex and uncertain function of distance, time, and the number of exposures (and also depends on how healthy your immune system is).  Familiarity breeds viruses.

An analysis in Science (worth careful reading) shows how fuzzy this aerosol is -- significant amounts of coronavirus can travel much more than six feet under common conditions. An aerosol is literally a fog. Try putting a bright line around that! Ideally, we should consider the fluid dynamics of the air we are in. How much aerosol or droplets of what sizes? What level of exhalation? What airflow from a carrier to us?  How much gets through a mask?  Studies of contagion airflow in a Wuhan restaurant and from runners show how tricky this can be.

Risk is complex and uncertain

Even experts find this interplay daunting. Tom Frieden, former CDC head, said (about broader issues), “People keep asking me, ‘What’s the one thing we have to do?’ The one thing we have to do is to understand that there is not one thing.”  It seems we must constantly do our own calculus in each situation as it emerges.  We must consider the nature of the game and the current conditions and intuit the dynamics (and aerodynamics) -- just as when we hunt or play ball.

Furthermore, we need to understand the basics of probability. People argue “going to a bar is no worse than going to a supermarket.”  Wrong on two levels. First, the exposure in a bar is likely closer and longer. More importantly, even if the risk was equal, it is how often you take risks that determines total risk. You can play Russian roulette once or twice and likely survive. Ten or twenty times and you will almost certainly die.  We must weigh level of risk, duration, and frequency.
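The arithmetic behind the frequency point is simple compound probability: if each risky outing carries some chance of infection, the cumulative risk over repeated outings is 1 - (1 - p)^n, not just p. A tiny illustration, with made-up numbers:

```python
# Tiny illustration of why frequency matters: cumulative risk over repeated
# independent exposures is 1 - (1 - p)^n. The per-visit probabilities below
# are made-up numbers for illustration only.

def cumulative_risk(p_per_event: float, n_events: int) -> float:
    return 1 - (1 - p_per_event) ** n_events

print(cumulative_risk(0.01, 1))   # ~0.01 -- one risky visit
print(cumulative_risk(0.01, 50))  # ~0.39 -- the same visit, repeated weekly for a year
```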

Authorities like the CDC should be teaching us these underlying principles, so we can do our own calculus in each context.  But it is on each of us to read widely, apply critical thinking to weed out the junk, and use practical guides like the posts of immunologist Erin Bromage or similar coverage in general or business publications.

“In this together” – we are our brother’s keeper

Equally important, we need to help others to do likewise.  Covid-19 presents a life or death problem of optimizing individual risk and, as social creatures who infect one another, that requires optimizing collective risk.  We have an ongoing collective responsibility for being informed and thoughtful not just about our own risk, but our risk to others. 

We need to think about the social bubbles of those we will take the risk of being close to much of the time.  Within a few small, carefully controlled bubbles we might not wear masks, just as within family units.  But it now seems clear that we need to wear masks whenever anywhere near others not in our bubble.  We need to get everyone to understand that this is not so much for our own safety as for that of others.  This is not a question of individual freedom, but of our responsibility as members of society (just as we do not drive drunk on public streets, even if we are a free-spirited daredevil).

At the macro level of groups, we have similar fuzziness that depends on the current context and dynamics.  We are realizing that we must go beyond simple on/off rules.  “Hybrid” models are emerging.  Not only must such models be carefully phased in depending on current conditions, they must remain adaptive to tightening or loosening in response to changing conditions. 

Agility in risk management

Soldiers navigate the fog of war with a continuous OODA loop (Observe-Orient-Decide-Act).  Individuals and businesses should seek to do the same, and instill this in all of their family, team, or community.  Systematization is an essential tool, but understand its limits.  Every business should have an ongoing program of guidance and education for applying OODA loops as long as the virus is not fully controlled.

We are developing metric scorecards and prediction models to assess the risk of social spreading in each locale, but our understanding of the science is evolving.  Making this even more complex, recent evidence suggests the virus has been mutating faster than first thought, so any rules that make sense today might not be so good next week.

There is huge pressure to relax these annoying constraints when conditions permit – but we must apply a carefully considered gradient -- not just a simple, binary risk-on/risk-off cycle -- in our vigilance level.  To do that all of us need to track the key indicators to know when risk of community spread is high or low.  And because our knowledge and the virus are evolving, we all, at each level, must also apply an OODA loop to the basic principles of our strategy – evolving that as we learn more. OODA applies at all levels -- to our everyday activity, our basic tactics, and our broad strategy. Don't get stuck "fighting the last war."

Good leaders will try their best to help all of us to “be smart.”  Smart businesses will care about their customers, employees, and reputations.  That may warrant empowering a “chief health officer.”  Entrepreneurs will seek opportunities to help in those efforts (and profit from that).  We are bound as humans by a social contract to be careful for one another.  Like it or not, we are truly in this together.  Institutional dysfunction leads to health dysfunction.  We need effective government and institutions that foster social trust.  The US has that in some locales, but overall, it is tragically lacking (as the WSJ reports).

We are all living in an era of unusual personal risk – each of us must be smart and situationally aware about managing the risks we take for ourselves and those we may infect.


-----------------------------------------------
Coda: context matters...and changes

As I was finalizing this post, this sad example underlined the importance of context -- and how it changes. Anthony Fauci is now being criticized (by the White House!) for his statement back on April 3:  "…There’s no reason to be walking around with a mask..."

In context, he also said (emphasis added):  "...Right now in the United States, people should not be walking around with masks. ...when you think masks, you should think of health care providers needing them and people who are ill. ... It could lead to a shortage of masks for the people who really need it." At the time, it was not foolish at all. The times changed, and Fauci's statements changed with the context. 

Pay attention to the situation. Don't be a Covidiot!


[Update 4/18/21:] The Swiss cheese model


A very simple metaphorical model for dealing with the fog, in the spirit outlined here, is the Swiss Cheese Respiratory Pandemic Defense, as nicely explained (with a helpful graphic) in the NY Times.

This illustrates the basic principle that if the probability of each layer of defense (Dn) failing is Pn, then the probability of ten layers of defense (D1, D2, D3, ...D10) failing is P1 x P2 x P3 ... x P10. If each is so poor it fails one half of the time, each P is .5, and the combination of all ten will fail with a probability of .5 to the 10th power. That reduces it to .00098, or 0.098% -- about one in one thousand times. Every layer of cheese has a multiplier effect, even if it has lots of holes!
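For anyone who wants to check that arithmetic, here is the same calculation in a few lines (assuming, as the metaphor does, that the layers fail independently):

```python
# The Swiss-cheese arithmetic from above, assuming the layers fail
# independently: the chance that *every* layer fails is the product of the
# individual failure probabilities.

from math import prod

def all_layers_fail(failure_probs):
    return prod(failure_probs)

# Ten leaky layers, each failing half the time:
print(all_layers_fail([0.5] * 10))  # 0.0009765625, i.e. about 0.098%
```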

[Update 7/25/21:] Not out of the woods yet!

These two reports in the NY Times underline the need for continuing situational awareness, and for guidance on that.

[Update 7/30/21:] Being smart requires better data and guidance

It’s situational awareness, stupid! Excellent NYT essay, If We Must Wear Masks Again, We Need a Smart Approach, underlines the need for better data and guidance to enable situational awareness -- and that the situation is constantly evolving.

All levels of government and institutions need to cooperate to teach each of us how to understand and apply the data and the science as it evolves -- to manage our own OODA loops. That means day by day, zipcode by zipcode, site by site, personal condition by personal condition. The CDC should teach us that its guidance is always contingent on the situation, and thus will change. 

-----------------------------------------------

Sunday, May 31, 2020

The "Weather-VIX" -- A Volatility IndeX for Weather?

A better way to understand climate change and global warming may be to focus less on quantifying the direction of changes and more on quantifying the volatility of weather extremes of all kinds -- temperature, precipitation, humidity, wind, storms, etc.

Many have noted that "global warming" is not just a matter of warming, and that we might better focus on solving the problem with better messaging. Tom Friedman has referred to it as "global weirding," saying, "The frequency, intensity and cost of extreme weather events all increase. The wets get wetter, the hots get hotter, the dry periods get drier, the snows get heavier, the hurricanes get stronger. Weather is too complex to attribute any single event to climate change, but the fact that extreme weather events are becoming more frequent and more expensive — especially in a world of crowded cities like Houston and New Orleans — is indisputable." Brad Plumer made similar points about the need for more understandable messaging.

I have been suggesting that we track and report a “Weather-VIX” (WVIX) -- much as financial markets track a "Volatility IndeX" (VIX). In financial markets, the VIX is often understood as a "fear index." For weather, it might be seen as a "disruption index."

A Weather-VIX volatility index for our weather would be a complementary metric to average temperature trends. By tracking the day-to-day volatility of weather, wouldn't we see significantly increased volatility in temperatures, precipitation, and wind speed? Unlike the small changes in average temperature, volatility trends might be far more dramatic, and much less easily dismissed as just a natural fluctuation. Refocusing on volatility would also remove silly arguments that extremes of cold refute global "warming" -- of course the warming is not always "global," and is not always consistent at any given time. We can better understand that the weather will not be volatile at any given place at every given time, but tracking volatility in each region would give clearer evidence of increasing overall volatility, and how that varies from region to region.
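As a rough illustration of what such an index might look like -- emphatically a sketch, not a proposed official definition -- here is one simple way to compute a WVIX for a single station and variable: the rolling standard deviation of day-to-day changes, optionally normalized against a long-run baseline. All numbers below are made up.

```python
# One hypothetical way to compute a "Weather-VIX" for a single station and
# variable: the rolling standard deviation of day-to-day temperature changes,
# expressed relative to a long-run baseline (so 1.0 means "normal" volatility).
# This is an illustrative sketch, not an official definition.

import statistics

def weather_vix(daily_values, window=30, baseline_sd=None):
    changes = [b - a for a, b in zip(daily_values, daily_values[1:])]
    recent = changes[-window:]
    sd = statistics.stdev(recent)
    if baseline_sd:                 # e.g. the same statistic computed over 1950-1980
        return sd / baseline_sd
    return sd

# Example with made-up daily highs (degrees): a calm stretch vs. a wild one.
calm = [20, 21, 20, 22, 21, 20, 21, 22, 21, 20] * 3
wild = [20, 28, 15, 30, 12, 27, 14, 31, 16, 29] * 3
print(weather_vix(calm, window=20))
print(weather_vix(wild, window=20))
```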

This WVIX could also be tied to the monetary costs of extremes in both directions -- “WVIX-cost.”

Even if only based on data for the last hundred years or so (and only in locations with good data), we might see that violent and erratic weather is already accelerating to increasingly costly levels. Insurance companies will be among the first to see and quantify this as an actuarial cost, but with a simple WVIX index, we will all be able to understand this effect more clearly.

Monday, May 18, 2020

The Pandemic Reminds Us "Everything is Deeply Intertwingled" – We Need Better Logics for That


A bat catches a cold in Wuhan, and weeks later the whole world coughs. America and China battle a trade war, and then there is a shortage of PPE and ventilator parts from China.  Poor neighborhoods suffer high death rates because of poor health, but even celebrities and heads of state go into ICUs.  The economy craters, and we argue over relief to businesses versus workers based on which is more disposed to misuse what they might be given.  Health officials say flatten the curve, financiers say reopen, and corporations say they don’t dare reopen without testing.  The Federal government is too polarized to fix much of anything, and has forgotten its real job of governing by consensus.

Modern technologies of global connection -- both physical and virtual -- make the pandemic emerge in weeks instead of years, and make all the butterfly effects far more complex.  That is our new curse, but also our new blessing.  We have global travel and supply chains, global communications and media networks -- a global village composed of local villages.  Techies moved fast and broke society, and now discourse seems too polarized to fix it.

All of these effects are driven by market forces -- however regulated.  Marketplaces of goods and services and marketplaces of ideas.  These marketplaces are driven by complex interplays of top-down structure and bottom-up emergence from billions of actors -- and systems of actors.

Technology has made these forces more dynamic and turbulent, but technology can enable smarter and better-regulated marketplaces -- if we re-focus.  We cannot undo this onrushing dynamic -- we need to get smarter about how we use technology to help us go forward.

The pandemic may be the kick in the ass we need to reform society over a wide range of domains and levels.  Seeing the commonalities can help us capture a new synergy.  If we rise to that challenge, the future will be bright. If we fail, it will be dark.  Many see that, but few focus on the root causes. 

Peter Drucker said “The greatest danger in times of turbulence is not the turbulence, it is to act with yesterday’s logic.” Two new logics can help us correct the failures of our current logics.

Ever-growing intertwingularity

The problem we now face all too urgently is that our lives are all deeply intertwingled, but we fall back to simplistic “fast” thinking with rigid categories and institutions.  Some leaders rise to the challenge and others flail, and we argue over who does which.  The regulation of our marketplace of ideas that “mediates consent” about facts has broken down, as has our social/economic marketplace.  These problems are difficult and complex – but we can get smarter about solving them, in both their first-order and second-order effects.  (To get a sense of the range of these issues, see this briefing by Tony O’Driscoll [since adapted for publication] and this McKinsey report. To see how this has reopened old questions, and may provide an opening for new thinking, see this NY Times report on the shifting issues for the 2020 election.)
 
The symbolic circle of the Tao reminds us that truth is never entirely black or white, but shades of gray that depend on the light we view it in and the perspective we view it from.  Just how much is subject to argument, discovery, and rediscovery, as reality emerges.  This is age-old, but it is more urgent than ever that we come to grips with it.  2020 will mark a turning point in human history.

For decades our world and our markets have been increasingly stressed, even as we seemed to be progressing.  Tensions of nationality, race, ideology, religion, economics, technology, and governance are raging.  As Yeats wrote, “Things fall apart; the centre cannot hold /…The best lack all conviction, while the worst / Are full of passionate intensity.”  It is now urgent that we re-center more wisely on our better convictions.

The Enlightenment has run aground because those who saw the light and had the benefits did not pay enough attention to sharing that.  Liberals turned away from “the deplorables” instead of caring for and raising them up.  Capitalists extracted short-term profits and enriched themselves with stock buybacks -- exploiting workers instead of empowering them.  Factions and political parties fought zero-sum struggles to control the existing pie instead of engaging in win-win cooperation to create and share a larger pie.

The Chinese ideograph for crisis is composed from the characters for danger plus opportunity.  Many retreat in fear of the danger and seek to throw blame and erect walls, but wiser heads look to the opportunity.  Most see opportunity in narrow domains, but some look to the big picture.  We now face an urgent and historic opportunity to refocus on a more enlightened and productive kind of cooperation across the full range of issues.

Those who see and work on these problems in particular domains of concern and expertise can unite in spirit and vision with those in other domains.  We can forge a new Age of Enlightenment – a Reformation of The Enlightenment.  An awakening of interconnection and cooperative spirit is emerging.  Our challenge is to synergize it.  Some elements:
  • Economic and health insecurity for some leads to insecurity for all.  A safety net is needed.
  • Market systems need slack to respond to black swan events.  “Just-in-time” and “lean” are efficient only when not overstressed.  Global supply chains need resilience and redundancy. Too much slack and safety drain our wealth and will, but too little leads to disaster.
  • Moving fast and breaking things can break things that cannot be fixed.  Experience can blind us, but inexperience can kill.
  • Power among local, state, national, and global government must be properly balanced and adaptable to stress.  Power and wealth must be shared fairly among people, factions, and nations, or those left wanting will throw rocks at the crystal palace.  The resurgence of nationalism, factionalism, and the crisis of disinformation are symptoms of perceived unfairness.  Government that is too small is just as bad as too big.

Our modern, high-tech world is far too complex for purely top-down or bottom-up management and governance -- we need a smart and adaptive blend. That requires openness, transparency, trust, and fairness, so even when there is disagreement, there is a common sense of reasonableness and good spirit.

New Logics for Intertwingularity

My recent work has focused on two new ways to deal better with this growing complexity.  These new logics do not just exhort people to be better and wiser; they better align interests so that virtue is rewarded. 

One relates to failures of our marketplace of ideas – especially our social media and other collaborative systems.  Computer-augmented human collaboration first emerged in the 1960s, and was used for disaster preparedness (natural and nuclear).  It progressed slowly until the Web made it far more powerful and accessible to consumers, but we failed to direct those social media systems to serve us well.  Struggling to find a business model, they hit on advertising. We now recognize that to be “the original sin of the Internet” because it misdirects our platforms to serve advertisers and not users.  Algorithms can help augment human intelligence to make us smarter collectively -- instead of making us stupider, as social media now do.  Systems that elucidate nuance, context, and perspective can empower us to draw out and augment the wisdom of crowds (as explained in detail on this blog) to deal more smartly with our deeply intertwingled world.  That could drive a new Age of Enlightenment in which technology augments the marketplace of ideas in the ways that we have always relied on to mediate consent – an emergent mix of top-down guidance and bottom-up emergence that can lead to new, yet natural, forms of digital democracy.

The other relates to failures of our economic marketplace – how we can shift from the short-term, zero-sum logic of extractive mass-market capitalism to more long-term, win-win forms of market cooperation.  That can restore the emergent, distributed, and human logic of traditional markets that Adam Smith saw as socially beneficial -- before modern mass-marketing alienated producers from consumers and lost sight of broader human values.  Our digital economy now enables new ways to shift from fighting over a current pie to cooperating to co-create a larger pie -- and to share it fairly.  That logic can empower a reformation of market capitalism from within that could actually be more profitable, and thus self-motivating.  We can apply the power of computer-mediated marketplaces to let businesses and consumers negotiate at a human level -- about the values they care about, how to co-create that value, and how to share in the benefits.  We have begun to think in terms of customer journeys, but have been trying to fit customers into segments or personas. Instead, we need to design for segments of one that are custom-fit to each customer, to build relationships with each customer on human terms.

These two logics are interrelated: a flawed economic logic for consumer platform services has been built on advertising revenue (“the Internet’s original sin”). That has warped incentives to favor engagement with junk content that sells ads, rather than the value to users of quality content. An improved logic for value will create incentives for our platforms to facilitate a logic for a better marketplace of ideas.

The brief descriptions of these new logics may sound like just more exhortations, but the posts that they link to provide details of operational mechanisms -- and evidence that their elements have proven effective.  These new combinations of elements can quickly become second nature, because they draw on and re-channel natural behaviors that promise to make them highly self-reinforcing.

Many allied visions for better logics of emergence are finding new relevance in this era of crisis.  We have only to join together and rise to the occasion.  We say that “we are all in this together” – we need to open our minds to really think that way, and to work with new logics and “choice architectures” that make that natural.  With better logics, our instinctive behaviors can once again synergize to flow in increasingly enlightened ways.

---
For more about the new logic for the marketplace of ideas (and intertwingularity in general), see this list of selected items on the SmartlyIntertwingled.com blog.

For more about the new logic for the economic marketplace, see this list of selected items on the FairPayZone.com blog.

Letter to The Atlantic (Renee DiResta) on "Getting the Message Out" (From Experts on Virus)

Renee DiResta's article in The Atlantic, Virus Experts Aren’t Getting the Message Out (5/6/20), is an insightful analysis of how social media have broken society's ability for "mediating consent" as to what is fact and what is fiction (or worse). 

Here is the letter I wrote in response:

To Renee DiResta’s excellent statement of the problem -- getting quality information on Covid-19 to the public in this time of crisis (which is just part of a much broader problem) -- I would add suggestions for a better solution. She is correct that our public health institutions must adapt to modern modes of communication, and that media should select for authoritative voices, including those who are outside those institutions. She rightly says “Some of the best frameworks for curating good information…involve a hybrid of humans and artificial intelligence…These processes are difficult to scale because they involve human review, but they also recognize the value of factoring authoritativeness—not just pure popularity...the ‘consensus of the most liked.’”

The solution to this critical challenge of scaling is to use algorithms more effectively -- to “augment the wisdom of crowds.”  The crowd gets wiser when the human votes of authoritative likes count for more than those of foolish or malicious likes. This can be done by building on the huge success of how Google’s hybrid PageRank algorithm first augmented the wisdom of the Web-linking crowd. PageRank did not rely on machine understanding of content (still very difficult), but only on the raw power of machine tabulation of human understanding (IBM's origins trace to tabulating the 1890 census).

The genius of PageRank is not to rank Web pages by purely human authorities as Yahoo did, nor by pure algorithms as AltaVista did, but by a clever and scalable hybrid of man and machine. It interprets links to a Web page from other sites as equivalent to likes that signal the judgment of human “Webmasters” or authors. But it then augments those judgments: instead of weighting all such links as equal votes of authority, it weights them based on their own authority.  It sees who links to them (one level removed), and recursively, what authority those links should have, based on who links to them (a further level removed).

Social media and other information discovery media could apply much the same method. A “RateRank” algorithm that augments human intelligence in this way could determine whose likes it should rank as authoritative and whose likes it should rank as noise. It could track signals that reflect human judgement – likes, shares, comments, followers, etc. -- and determine reputations for those “ratings” to know which to weight highly as from respected raters, and which to discount as from usually foolish or malicious raters. Certifications of authority from independent rating institutions could also be factored in, but this algorithm would also up-rank emerging or non-mainstream voices that deserve to be heard -- including those that are responsibly contrarian.
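To make the “RateRank” idea concrete, here is a minimal sketch (hypothetical names and data, a deliberately simplified recursion) of how rater authority could be computed PageRank-style from an endorsement graph and then used to weight likes -- an illustration of the approach, not a production algorithm.

```python
# Minimal sketch of the recursive "rate the raters" idea in the spirit of
# PageRank: a rater's authority is the (damped) sum of the authority of those
# who endorse them, iterated toward a fixed point. The graph below is a
# made-up illustration, not real data.

def rater_authority(endorsements, damping=0.85, iterations=50):
    """endorsements[a] = list of raters that a endorses (like/follow/cite)."""
    raters = list(endorsements)
    n = len(raters)
    auth = {r: 1.0 / n for r in raters}
    for _ in range(iterations):
        new = {r: (1 - damping) / n for r in raters}
        for endorser, targets in endorsements.items():
            if targets:
                share = damping * auth[endorser] / len(targets)
                for t in targets:
                    new[t] += share
        auth = new
    return auth

def weighted_rating(item_likes, auth):
    """An item's score is the sum of its likers' authority, not a raw count."""
    return sum(auth.get(user, 0.0) for user in item_likes)

endorsements = {"alice": ["bob"], "bob": ["carol"], "carol": ["bob"], "troll": []}
auth = rater_authority(endorsements)
print(weighted_rating(["bob", "carol"], auth))             # few, authoritative likes
print(weighted_rating(["troll", "troll", "troll"], auth))  # many, low-authority likes
```

The example shows the key property: two likes from respected raters can outweigh a flood of likes from accounts no one credible endorses.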

Such hybrid algorithms would power a highly adaptive “cognitive immune system” that would help ensure -- at Internet scale and speed -- that the most authoritative and deserving messages get out most widely, and that misinformation and disinformation are suppressed. (This need not limit First Amendment rights, since it would limit how dubious content is distributed, but it could still be posted and accessible to anyone who specifically seeks it.)

These proposals for up-ranking quality (details at http://bit.ly/AugmWoC) have gotten attention in the technology and policy community, but media businesses have yet to be receptive. The only apparent reason seems to be that their advertising-driven business model thrives on “elevating popularity over facts” as DiResta notes. But, if the current algorithmic de-augmentation of human intelligence does not change, humanity may never recover.

---
[This letter was sent to The Atlantic on 5/11/20. I had previously sent a draft to Renee for comment, and she responded that she viewed it as thoughtful and encouraged me to submit it.]

Tuesday, March 10, 2020

Remembering James Monaco as Media-Tech Pioneer

I was very saddened by the passing of James Monaco, a prominent film critic, scholar, and author, and visionary pioneer of new media, on 11/25/19.* Jim hired me to work at Baseline Information for the Film Industry in 1990, and then went on to found UNET in 1992, with me as co-founder. He was a valued colleague and friend, and had a significant impact on my career -- enabling me to dive into the bleeding edges of interactive media (which proved invaluable to my later work as a successful inventor in that space).

Others can address Jim's earlier prominent roles in film and publishing. From my perspective in the media-tech space, Jim's genius was in being a sophisticated media visionary who understood enough about technology to envision advances that were both fundamental and feasible. What follows are some personal notes about where Jim pushed the envelope in the days when the Web was a laboratory curiosity and few had ever heard of interactive media.

Two of the most notable epiphanies in my career involved hypermedia -- hyperlinking from one chunk of a web of content to another that is now taken for granted. Jim enabled the second of those epiphanies.
  • In 1969 I clicked a hyperlink and saw the future -- but I had a long time to wait. It was a system envisioned by Ted Nelson and built with a team at Brown, running on an IBM graphics workstation connected to a mainframe. It remained largely an academic curiosity until the late '80s when Apple's HyperCard briefly popularized a sadly clunky and limited variation that helped inspire Tim Berners-Lee to later conceive the World Wide Web protocols that revolutionized the digital world.
  • The first hypermedia system with exciting content that I got to really play with was at Baseline in 1992.** Jim had us turn his 12-volume film encyclopedia, The Motion Picture Guide, into a hyperlinked CD-ROM.*** That led to my second hypermedia epiphany, when we began to test the pre-production version. Find an actor and get full details (including images, as I recall). Click on an entry in that actor's filmography and get full details on the movie. Click on another actor or the director of that movie and get their pages. Then link on to more movies, then more people, and so on. Thrilling engagement on a serendipitous path that emerges out of the experience (what is now anachronistically defined in Wikipedia as a "Wiki rabbit hole"). That was a captivating experience of flow as the future of interactive media (long before I read about psychological flow). And being run locally on a multimedia CD-ROM, its responsiveness was much like the modern multimedia Web, not the primitive online world of those early days.
My relationship with Jim began as Baseline was expanding its role in professional online film and TV information, and also pioneering in consumer online media services. Baseline provided pre-Web online services using French Minitel "videotex" terminals emulated on PCs (as well as some actual Minitels that were popular with film studio execs for their simplicity of use). The flagship offering was a professional-quality service similar to the later IMDB. Baseline also provided online reporting from the Cannes festival, was the first to provide online access to The Hollywood Reporter, and offered an early gateway to the EaasySabre travel reservation system.

Jim also saw the value of consumer online services. At a time when the only games were CompuServe, AOL, GEnie, and bulletin boards, he had the vision to realize that publishers wanted to have online offerings that had their own signature look-and-feel. Some early trials of a consumer version of Baseline called FLIKS led to pilot projects with major publishers to try their own customized services, beginning with TVGuide. I got to pitch our proposal to publisher Anthea Disney and worked with other key people at News Corp, and many others in the nascent world of online and CD-ROM media. "Silicon Alley" became trendy in New York in the late '90s, but back then we were in what was known as "Videotex Alley." Jim had leadership roles in the Videotex Industry Association, including a term as president, bringing me into contact with Steve Case of AOL, Mark Walsh of GENIE, Bob Stein of Voyager, Martin Niesenholz of the NY Times, Martin Pearlstein of the WSJ (just after he left to start new ventures), and other pioneers in the interactive media space. He also revised his classic How To Read a Film book to cover new media technology in addition to traditional film technology (with my help), which later led to his Multimedia Edition on DVD that included many film clips.

Jim moved on to found UNET (with me) in mid-1992, and we picked up the TVGuide project, along with pilot efforts with other major publishers, such as Golf and Running. The TVGuide project became a joint venture with News Corp, which purchased Delphi and got us involved with them. (That project was later bought out from UNET and rolled into the News Corp. - MCI joint venture). We were getting traction with beta versions of these magazine systems, which were very appealing to publishers -- but with limited funding for development and marketing, it was a slow process.

By late 1993, I was beginning work on my own invention, which led to my founding of Teleshuttle. That built on Jim's insights about the importance of publisher-customized look-and-feel, as well as on some casual musing we had done together, wondering why no one had built a hybrid of online and CD-ROM to combine the best of both. Some months later, the answer came to me one night in a flash of inspiration. I realized why building such a hybrid was hard to do, and how a radically different software architecture could make it much easier. (Instead of hard-to-program, synchronously conversational, server-driven interactions, it was a much simpler, decentralized, asynchronous, protocol of file transfers driven by smart clients.) Jim continued with UNET, and I transitioned into my work starting Teleshuttle to develop that invention.

Jim later made a key contribution to Teleshuttle in 1995, by finding and reselling Teleshuttle's SaaS offering to Creative Multimedia, the first (and only) company to pay to use it to actually create and market a hybrid CD-ROM -- Blockbuster Video® Guide To Movies & Videos. That had modest success, and saw a second edition a year later, but the Web was then exploding faster than Jim or I could find the funding needed to adapt our businesses to capitalize on these new opportunities. I pivoted to consulting and inventing, and Jim refocused on both electronic and print publishing.

---
*Note: This post is essentially as drafted 3/10/20 in anticipation of a memorial planned for 3/14 -- but deferred due to Covid, and now being held on 11/20/21. It sat incomplete until now posted as of that date of drafting (with just minor revisions).

**Some of these dates and details are my unverified recollections, and may not be precise.

***Jim also licensed his data to Microsoft for their later Cinemania CD-ROM product.

Monday, January 27, 2020

Make it So, Now! - 10 Ways Tech Platforms Can Safeguard the 2020 Election

"Ten things technology platforms can do to safeguard the 2020 U.S. election" is an urgent and vital statement that we should all read -- and do all we can to make happen -- especially if you have any connection to the platforms, Congress, or regulators (or the press). Hopefully, anyone reading this understands why this is urgent (but the article begins with a brief reminder).

Thirteen prominent thought leaders "met...to discuss immediate steps the major social media companies can take to help safeguard our democratic process and mitigate the weaponization of their platforms in the run-up to the 2020 U.S. elections." They published this as a "living document."

Here is their list of  "What can be done … now" (the article explains each):
  1. Remove and archive fraudulent and automated accounts
  2. Clearly identify paid political posts — even when they’re shared
  3. Use consistent definitions of an ad or paid post
  4. Verify and accurately disclose advertising entities in political ads
  5. Require certification for political ads to receive organic reach
  6. Remove pricing incentives for presidential candidates that reward virality (including a limit on microtargeting)
  7. Provide detailed resources with accurate voting information at top of feeds
  8. Provide a more transparent and consistent set of data in political ad archives
  9. Clarifying where they draw the line on “lying”
  10. Be transparent about the resources they are putting into safety and security
All of these should be do-able in a matter of months.  While many of the signatories "...are working on longer-term ways to create a healthier, safer internet, [they] are proposing more immediate steps that could be implemented before the 2020 election for Facebook and other social media platforms to consider." 

The writers include "a Facebook co-founder, former Facebook, Google and Twitter employees, early Facebook and Twitter investors, academics, non-profit leaders, national security and public policy professionals:" John Borthwick, Sean Eldridge, Yael Eisenstat, Nir Erfat, Tristan Harris, Justin Hendrix, Chris Hughes, Young Mie Kim, Roger McNamee, Adav Noti, Eli Pariser, Trevor Potter and Vivian Schiller.

I, too, am working on longer-term issues, as outlined in this recent summary in the context of some important think tank reports: Regulating our Platforms -- A Deeper Vision. Similarly, I have addressed one of the most urgent stop-gap issues (which is part of their #6) in 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging.

Monday, January 20, 2020

Personalized Nutrition -- Because Everything is Deeply Intertwingled!

Nutrition is hard to get right because everything is deeply intertwingled. Personalized Nutrition is changing that!

This new perspective on nutrition is gaining attention, as an aspect of personalized medicine, and is the subject of a new paper, Toward the Definition of Personalized Nutrition: A Proposal by The American Nutrition Association.  (I saw it as it was finalized, since my wife, Dana Reed, is a co-author, and a board member and part of the nutrition science team at ANA.)

The key idea is:
Personalized nutrition (PN) is rooted in the concept that one size does not fit all; differences in biochemistry, metabolism, genetics, and microbiota contribute to the dramatic inter-individual differences observed in response to nutrition, nutrient status, dietary patterns, timing of eating, and environmental exposures. PN has been described in a variety of ways, and other terms such as “precision nutrition,” “individualized nutrition,” and “nutritional genomics” have similar, sometimes overlapping, meanings in the literature.
I have always been something less than a poster child for following nutrition guidelines, for reasons that this report cites:  "...guidelines have only limited ability to address the myriad inputs that influence the unique manifestation of an individual’s health or disease status."

I frequently cite the conundrum from Woody Allen's Sleeper, when the 1970s protagonist had just been awakened by doctors after 200 years:
Dr. Melik: This morning for breakfast he requested something called "wheat germ, organic honey and tiger's milk."
Dr. Aragon: [chuckling] Oh, yes. Those are the charmed substances that some years ago were thought to contain life-preserving properties.
Dr. Melik: You mean there was no deep fat? No steak or cream pies or... hot fudge?
Dr. Aragon: Those were thought to be unhealthy... precisely the opposite of what we now know to be true.
Overstated to be sure, but the real issue is that "one man's meat is another man's poison." Determining which is which for a given person has been impractical, but now we are not only learning that this is far more intertwingled than was thought, but we are gaining the ability to tease out what applies to a given person.

I come from this not from biology, but from machine learning and predictive analytics. My focus is on getting smarter about how everything is intertwingled.

One of the most intriguing companies I have run across is Nutrino, a startup acquired by Medtronic, that analyzes data from continuous glucose monitors used by diabetics to understand the factors that affect their glucose response over time. They correlate to specific food intakes, activity, sleep, mood, blood tests, genomics, biomics, and more. They call it a FoodPrint, "a digital signature of how our body reacts to different foods. It is contextually driven and provides correlations, insights and predictions that become the underpinning for personal and continually improving nutrition recommendations." This is one of the first successful efforts to tease out how what I eat (and what else I do) really affects me as an individual, in all of its real-world intertwingularity.
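As a toy illustration of the "personal model" idea (not how Nutrino or FoodPrint actually works -- their data and models are far richer), here is a sketch of fitting a simple per-person model of post-meal glucose rise from a few logged features, then predicting the response to a contemplated meal. The data, features, and numbers are invented for illustration.

```python
# Illustrative sketch of a per-person predictive model: fit post-meal glucose
# rise to a few logged features, then predict the response to a new meal.
# All data and features here are made up; a real FoodPrint-style system would
# use far richer data and models.

import numpy as np

# Each row: [grams_carbs, grams_fiber, hours_sleep]; target: glucose rise (mg/dL)
X = np.array([[60, 2, 7], [30, 8, 8], [80, 1, 5], [45, 5, 7], [70, 3, 6]], dtype=float)
y = np.array([55, 20, 75, 35, 60], dtype=float)

# Ordinary least squares with an intercept term
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_rise(carbs, fiber, sleep):
    return coef @ np.array([carbs, fiber, sleep, 1.0])

print(predict_rise(65, 2, 6))  # predicted rise for tonight's hot fudge sundae
print(predict_rise(65, 2, 8))  # the same meal after a better night's sleep
```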

It is time to move beyond the current so-called "gold standard" of intervention-based studies, the randomized, double-blind, placebo-controlled (RDBPC) clinical trial. Reality is far too intertwingled for that to be more than narrowly useful. It is time to embrace big data, correlation, and predictive analytics. Some early recognition of this is that drugmakers are getting the FDA to accept mining of patient data as a way to avoid the need for clinical trials.

We have a long way to go, but I want to know how likely it is that a given amount of deep fat or hot fudge, or wheat germ or kale (in combination with the rest of my diet, behavior and risk factors), will have a significant effect, over a time frame that can motivate whether or not I indulge in my chocolate or eat my spinach.

It is not enough to know that the dose makes the poison -- I want to know if the average man's poison is really just my meat.

Before very long we will know.

Friday, January 10, 2020

The Dis-information Choke Point: Dis-tribution (Not Supply or Demand) [Stub]

Demand for Deceit: How the Way We Think Drives Disinformation, is an excellent report from the National Endowment for Democracy (by Samuel Woolley and Katie Joseff, 1/8/20). It highlights the dual importance of both supply and demand side factors in the problem of disinformation (fake news). That crystallizes in my mind an essential gap in this field -- smarter control of distribution. The importance of this third element that mediates between supply and demand was implicit in my comments on algorithms (in section #2 of the prior post).

[This is a stub for a fuller post yet to come. (It is an adaptation of a brief update to my prior post on Regulating the Platforms, but deserves separate treatment.)]

There is little fundamentally new about the supply or the demand for disinformation.  What is fundamentally new is how disinformation is distributed.  That is what we most urgently need to fix. If disinformation falls in a forest… but appears in no one’s feed, does it disinform?

In social media a new form of distribution mediates between supply and demand.  The media platform does filtering that upranks or downranks content, and so governs what users see.  If disinformation is downranked, we will not see it -- even if it is posted and potentially accessible to billions of people.  Filtered distribution is what makes social media not just more information, faster, but an entirely new kind of medium.  Filtering is a new, automated form of moderation and amplification.  That has implications for both the design and the regulation of social media.

[Update: see comments below on Facebook's 2/17/20 White Paper on Regulation.] 

Controlling the choke point

By changing social media filtering algorithms we can dramatically reduce the distribution of disinformation.  It is widely recognized that there is a problem of distribution: current social media promote content that angers and polarizes because that increases engagement and thus ad revenues.  Instead, the services could filter for quality and value to users, but they have little incentive to do so.  What little effort they have ever made to do that has been lost in their quest for ad revenue.

Social media marketers speak of "amplification." It is easy to see the supply and demand for disinformation, but marketing professionals know that it is amplification in distribution that makes all the difference. Distribution is the critical choke point for controlling this newly amplified spread of disinformation. (And as Feld points out, the First Amendment does not protect inappropriate uses of loudspeakers.)

While this is a complex area that warrants much study, as the report observes, the arguments cited against the importance of filter bubbles in the box on page 10 are less relevant to social media, where the filters are largely based on the user’s social graph (who promotes items to be fed to them, in the form of posts, likes, comments, and shares), not just active search behavior (what they search for). 

Changing the behavior of demand is clearly desirable, but a very long and costly effort. It is recognized that we cannot stop the supply. But we can control distribution -- changing filtering algorithms could have significant impact rapidly, and would apply across the board, at Internet scale and speed -- if the social media platforms could be motivated to design better algorithms.

How can we do that? A quick summary of key points from my prior posts...

We seem to forget what Google’s original PageRank algorithm had taught us.  Content quality can be inferred algorithmically based on human user behaviors, without intrinsic understanding of the meaning of the content.  Algorithms can be enhanced to be far more nuanced.  The current upranking is based on likes from all of one’s social graph -- all treated as equally valid.  Instead, we can design algorithms that learn to recognize the user behaviors on page 8, to learn which users share responsibly (reading more than headlines and showing discernment for quality) and which are promiscuous (sharing reflexively, with minimal dwell time) or malicious (repeatedly sharing content determined to be disinformation).  Why should those users have more than minimal influence on what other users see?

The spread of disinformation could be dramatically reduced by upranking “votes” on what to share from users with good reputations, and downranking votes from those with poor reputations.  I explain further in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News and In the War on Fake News, All of Us are Soldiers, Already!  More specifics on designing such algorithms is in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings.  Social media are now reflecting the wisdom of the mob -- instead we need to seek the wisdom of the smart crowd.  That is what society has sought to do for centuries.
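A minimal sketch of what that reputation weighting might look like follows (the thresholds, field names, and weights are made-up illustrations, not any platform's real signals):

```python
# Hedged sketch of weighting share "votes" by sharer reputation, inferred from
# behavioral signals like those discussed above. Thresholds, field names, and
# weights are made-up illustrations.

def sharer_reputation(user_history):
    """user_history: list of dicts with 'dwell_seconds' and 'later_flagged'."""
    if not user_history:
        return 0.5                                  # unknown users start neutral
    read_before_sharing = sum(h["dwell_seconds"] >= 30 for h in user_history) / len(user_history)
    shared_disinfo = sum(h["later_flagged"] for h in user_history) / len(user_history)
    return max(0.0, min(1.0, 0.5 + 0.5 * read_before_sharing - 0.7 * shared_disinfo))

def share_score(sharers, histories):
    """Amplification signal: reputation-weighted shares instead of a raw count."""
    return sum(sharer_reputation(histories.get(u, [])) for u in sharers)

histories = {
    "careful":   [{"dwell_seconds": 120, "later_flagged": False}] * 10,
    "reflexive": [{"dwell_seconds": 2,   "later_flagged": False}] * 10,
    "malicious": [{"dwell_seconds": 2,   "later_flagged": True}]  * 10,
}
print(share_score(["careful", "careful"], histories))                   # 2.0
print(share_score(["reflexive", "malicious", "malicious"], histories))  # 0.5
```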

Beyond that, better algorithms could combat the social media filter bubble effects by applying judo to the active drivers noted on page 8.  Cass Sunstein suggested “surprising validators” in 2012 as one way this might be done, and I built on that to explain how it could be applied in social media algorithms:  Filtering for Serendipity -- Extremism, 'Filter Bubbles' and 'Surprising Validators’.

If platforms and regulators focused more on what such distribution algorithms could do, they might take action to make that happen (as addressed in Regulating our Platforms -- A Deeper Vision).

Yes, "the way we think drives disinformation," and social media distribution algorithms drive how we think -- we can drive them for good, not bad!

---
Background note: NiemanLab today pointed to a PNAS paper showing evidence that "... ratings given by our [lay] participants were very strongly correlated with ratings provided by professional fact-checkers. Thus, incorporating the trust ratings of laypeople into social media ranking algorithms may effectively identify low-quality news outlets and could well reduce the amount of misinformation circulating online." The study was based on explicit quality judgments, but implicit data on quality judgments, as I suggest, should be similarly correlated -- and could apply the imputed judgments of every social media user who interacted with an item, with no added user effort.

[Update:] 
Comments on Facebook's 2/17/20 White Paper, Charting a Way Forward on Online Content Regulation

This is an interesting document, with some good discussion, but it seems to provide evidence that leads to the point I make here, but totally misses seeing it. Again this seems to be a case in which "It is difficult to get a man to understand something when his job depends on not understanding it."

The report makes the important point that:
Companies may be able to predict the harmfulness of posts by assessing the likely reach of content (through distribution trends and likely virality), assessing the likelihood that a reported post violates (through review with artificial intelligence), or assessing the likely severity of reported content
So Facebook understands that they can predict "the likely reach of content" -- why not influence it??? It is their distribution process and filtering algorithms that control "the likely reach of content." Why not throttle distribution to reduce the reach in accord with the predicted severity of the violation? Why not gather realtime feedback from the distribution process (including the responses of users) so they can course-correct the initial predictions and rapidly adjust the level of the throttle? That is what I have suggested in many posts, notably In the War on Fake News, All of Us are Soldiers, Already!
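To illustrate what such a throttle-with-feedback loop might look like, here is a hedged sketch (made-up numbers and field names, not Facebook's actual mechanism): cap initial reach based on predicted severity, then tighten or relax the cap as user flags come in.

```python
# Hedged sketch of the "throttle the reach" idea: cap an item's distribution
# based on the predicted severity of violation, then tighten or relax the cap
# as real-time feedback (user flags, fact-check hits) arrives. The numbers and
# field names are illustrative only -- not Facebook's actual mechanism.

def reach_cap(predicted_severity, base_reach=100_000):
    """predicted_severity in [0, 1]: 0 = benign, 1 = near-certain violation."""
    return int(base_reach * (1.0 - predicted_severity) ** 2)

def update_severity(prior, flags, impressions, weight=50):
    """Blend the prior prediction with the observed flag rate (simple smoothing)."""
    if impressions == 0:
        return prior
    observed = min(1.0, flags / impressions * 20)   # scale the flag rate into [0, 1]
    return (prior * weight + observed * impressions) / (weight + impressions)

severity = 0.3                # initial model prediction
print(reach_cap(severity))    # initial cap on impressions

severity = update_severity(severity, flags=40, impressions=1_000)
print(reach_cap(severity))    # the cap shrinks as flags come in
```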


See the Selected Items tab for more on this theme.