Saturday, January 13, 2018

"The Square and the Tower" — Augmenting and Modularizing the Algorithm (a Review and Beyond)

Niall Ferguson's new book, The Square and the Tower: Networks and Power from the Freemasons to Facebook is a sweeping historical review of the perennial power struggle between top-down hierarchies and more open forms of networks. It offers a thought-provoking perspective on a wide range of current global issues, as the beautiful techno-utopian theories of free and open networks increasingly face murder by two brutal gangs of facts: repressive hierarchies and anarchistic swarms.

Ferguson examines the ebb and flow of power, order, and revolution, with important parallels between the Gutenberg revolution (which led to 130 years of conflict) and our digital revolution, as well as much in between. There is valuable perspective on the interplay of social networks (old and new), the hierarchies of governments (liberal and illiberal), anarchists/terrorists, and businesses (disruptive and monopolistic). One can disagree with Ferguson's conservative politics yet find his analysis illuminating.

Drawing on a long career as a systems analyst/engineer/designer, manager, entrepreneur and inventor, I have recently come to share much of Ferguson's fear that we are going off the rails. He cites important examples like the 9/11 attacks, counterattacks, and ISIS, the financial meltdown of 2008, and most concerning to me, the 2016 election as swayed by social media and hacking. However -- discouraging as these are -- he seems to take an excessively binary view of network structure, and to discount the ability of open networks to better reorganize and balance excesses and abuse. He argues that traditional hierarchies should reestablish dominance.

In that regard, I think Ferguson fails to see the potential for better ways to design, manage, use, and govern our networks -- and to better balance the best of hierarchy and openness. To be fair, few technologists are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation, and new levels of decentralized and modularized intelligence, can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for the modular engineering of rules, one that blends top-down stability with bottom-up emergence, applying checks and balances much like those of our representative democracy. This is a still-formative exploration of some ideas that I have written about, and plan to expand on in the future. First, some context.

The Square (networks), the Tower (hierarchies) and the Algorithms that make all the difference

Ferguson's title comes from his metaphor of the medieval city of Siena, with a large public square that serves as a marketplace and meeting place, and a high tower of government (as well as a nearby cathedral) that displayed the power of those hierarchies. But as he elaborates, networks have complex architectures and governance rules that are far richer than the binary categories of either "network" (a peer-to-peer network with informal and emergent rules) or "hierarchy" (a constrained network with more formal directional rankings and restrictions on connectivity).

The crucial differences among all kinds of networks are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. While his analysis draws out the rich variety of such structures in many interesting examples, with diagrams, what he seems to miss is any suggestion of a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change) -- and even our current social and information networks are constrained in important ways. For example,
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by and private to Facebook. (They have just announced a major change -- unilaterally -- hopefully for the better for users and society, if not for content publishers.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
As Ferguson points out, our vaunted high-tech networks are controlled by corporate hierarchies (he refers to FANG, Facebook, Amazon, Netflix, and Google, and BAT, Baidu, Alibaba, and Tencent) -- but subject to levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly in the form of advertising and sales of personal data.

A fuller summary of Ferguson's message is in his WSJ preview article, "In Praise of Hierarchy." That headline makes clear which side of the fence he is on.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

My view on this is more positive -- in that the answer to the question of governance is to make our network algorithms not only smarter, but more open to individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found it necessary to regulate rail and telephone networks as common carriers, limiting corporate-owner power to control how they are used and giving marketplace players (competitors and consumers) a large share in that control.

Initially this was rigid and regulated in great detail by the government (very hierarchical), but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget that only AT&T phones could be used (except for special cases of alternative devices, like early faxes -- Xerox "telecopiers" -- that went through cumbersome and often arbitrary AT&T approval processes). That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack"). Similarly, only AT&T long-distance connections could be used, until the antitrust Consent Decree broke the regional "Baby Bells" off from AT&T and its Long Lines division, opening long distance to competition on equal terms from carriers like MCI and Sprint.

In software systems, such plug-like interfaces are known as APIs (Application Programming Interfaces), and are now widely accepted as the standard way to let systems inter-operate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components inter-operate as interchangeable parts -- the great advance of the first Industrial Revolution.
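As a concrete (and purely illustrative) sketch of that modularity, consider how an API acts as a software "jack": any component that honors the contract can be plugged in, with no need for the caller to know or approve its internals. All names here are my own invention:

```python
from typing import Protocol

# The API contract -- the software equivalent of the RJ11 jack. Any
# component that satisfies it can be plugged in, interchangeably.
class SpellChecker(Protocol):
    def check(self, text: str) -> list[str]:
        """Return a list of misspelled words."""
        ...

# One interchangeable part that conforms to the contract.
class SimpleChecker:
    def __init__(self, dictionary: set[str]):
        self.dictionary = dictionary

    def check(self, text: str) -> list[str]:
        return [w for w in text.split() if w.lower() not in self.dictionary]

def proofread(doc: str, checker: SpellChecker) -> int:
    # The caller depends only on the interface, not on who built the
    # checker -- no approval process required, just conformance.
    return len(checker.check(doc))

errors = proofread("teh quick brown fox", SimpleChecker({"quick", "brown", "fox"}))
```

The caller and the component can be built by different parties who never coordinate beyond the published interface -- which is exactly what made the post-Carterfone device market possible.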

What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below, but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge that Ferguson makes clear is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is a new focus on how to modularize critical common core utility services, and on how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government) and government regulation where necessary (and only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age that followed Gutenberg. Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and mobilize to deal with it.

Once that progresses and is found to be effective, similar methods can be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully notes the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, the principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet" -- another tower.)

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms, to be selected by consumer market forces. By moving beyond the restrictive walled gardens of AOL and the early closed app stores (limited to apps created by Apple or Motorola or Verizon), we unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their fears). Why should first movers be allowed to block essential innovation? Why should they have sole control of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control, with just enough hierarchical structure to prevent excessive instability at any given time?

Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow for separate filters that inter-operate. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filters. This would be much like third-party analytics in financial market data systems. Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful.
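A minimal sketch of what such pluggable filtering might look like, with all names and scoring rules invented for illustration (no real platform exposes an API like this today):

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative data model: what a platform might expose to third-party filters.
@dataclass
class Post:
    author: str
    text: str
    engagement: float
    source_reputation: float  # e.g., a third-party credibility score

# Any function with this signature is a pluggable filter.
FilterFn = Callable[[list[Post]], list[Post]]

def engagement_filter(feed: list[Post]) -> list[Post]:
    # The default lens: rank purely by engagement.
    return sorted(feed, key=lambda p: p.engagement, reverse=True)

def reputation_filter(feed: list[Post]) -> list[Post]:
    # A swappable alternative: weight engagement by source reputation.
    return sorted(feed, key=lambda p: p.engagement * p.source_reputation,
                  reverse=True)

def render_feed(feed: list[Post], user_filter: FilterFn) -> list[str]:
    # The platform serves the content; the user chooses the lens.
    return [p.author for p in user_filter(feed)]

feed = [
    Post("tabloid", "Shocking!", engagement=9.0, source_reputation=0.2),
    Post("journal", "Analysis", engagement=4.0, source_reputation=0.9),
]
```

The same feed yields different rankings under different filters, and nothing in the platform's core service needs to change when a user swaps one filter for another.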

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.
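To make the RateRank analogy concrete, here is a toy sketch of PageRank-style iteration over a graph of peer endorsements; it illustrates the concept only, not a production algorithm:

```python
# Toy "RateRank": authority flows through peer endorsements iteratively,
# as in PageRank -- those endorsed by well-endorsed people rank higher.
def rate_rank(endorsements: dict[str, list[str]], damping: float = 0.85,
              iterations: int = 50) -> dict[str, float]:
    people = list(endorsements)
    rank = {p: 1.0 / len(people) for p in people}
    for _ in range(iterations):
        # Everyone keeps a small baseline of authority...
        new_rank = {p: (1 - damping) / len(people) for p in people}
        # ...and passes the rest along to those they endorse.
        for p, endorsed in endorsements.items():
            if endorsed:
                share = damping * rank[p] / len(endorsed)
                for q in endorsed:
                    new_rank[q] += share
        rank = new_rank
    return rank

# A endorses B and C; B and C endorse each other. B and C accumulate
# ascribed authority; A, endorsed by no one, keeps only the baseline.
ranks = rate_rank({"A": ["B", "C"], "B": ["C"], "C": ["B"]})
```

The authority that emerges could then weight whose judgments the "smart crowd" surfaces, in the cyclic, self-improving process described above.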

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but what I propose is to seek the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of that smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage).

Social graphs and user agents: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform's social graph? Does the platform own my social graph, or do I? Does the platform control how that affects my filter, or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

Email systems were modularized long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity. Why not do the same to let any social media user agent inter-operate with any other, using a common social graph? We would then set our user agent to apply filters to let us see whichever communities we want to see at any given time.

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). These could then be clearly marked in feeds, and users could filter out anonymous or unverified users if desired.
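A sketch of how such graduated identity levels might work; the level names and filtering rule are illustrative, not any platform's actual scheme:

```python
from enum import IntEnum

# Identity need not be binary: participants carry a verification level,
# feeds mark it, and each user sets their own floor.
class Verification(IntEnum):
    ANONYMOUS = 0
    PSEUDONYM = 1       # persistent alias, unverified
    VERIFIED_HUMAN = 2  # authenticated as a real individual
    VERIFIED_ORG = 3    # authenticated as a legitimate organization

def visible(feed: list[tuple[str, Verification]],
            minimum: Verification) -> list[str]:
    # Show only authors at or above the user's chosen verification floor.
    return [author for author, level in feed if level >= minimum]

feed = [("bot4711", Verification.ANONYMOUS),
        ("@satirist", Verification.PSEUDONYM),
        ("jane_doe", Verification.VERIFIED_HUMAN)]
```

One user might keep the floor at ANONYMOUS and see everything; another might raise it to screen out unverified accounts -- the choice stays with the user, not the platform.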

Value transfers and extractions: Another important problem, and one that Ferguson cites, is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent. (I have done extensive work on such alternative strategies, as described in my FairPayZone blog and my book.)

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties. (Sensitive data could be restricted to qualified organizations.) [This paragraph added 1/14.]

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. But we could have the choice, and innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that Ferguson (and many others) neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), to when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to pursue hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of mundane uses of computers, that proved eye-opening ("the mother of all demos") at a 1968 conference, and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom-up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic, not authoritarian). Done well, this can keep our systems human-centered.

Not binary:  networks versus hierarchies -- "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was also one of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." Ferguson's exposition illuminates how true that is of history. Unfortunately, his artificially binary dichotomy of hierarchies versus networks tends to mask this, and seems to blind him to how much more intertwingled we can expect our networks to be in the future. As Nelson put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts of the Square and the Tower that Ferguson reveals as so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, Ferguson seems to discount how non-linear the effect of smarter networks with more dynamically augmented and balanced structures can be. But he is right to be very fearful, and to raise the alarm -- we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement.


First published in Reisman on User-Centered Media, 1/13/18.

Monday, June 19, 2017

Bricks and Clicks -- Showrooming, Riggio, and Bezos

Last week brought two notable news items about Amazon and the future of retail. Most noted was the deal to buy Whole Foods, which would greatly accelerate Amazon's move to the center of the still-rudimentary combination of bricks and clicks. Drawing some limited attention was the issuance of an Amazon patent (filed in May 2012) on smart ways to turn showrooming to a store's advantage.

This was especially relevant to me for several reasons:
  • I have been dabbling in bricks and clicks for at least twenty years, since I tried to pitch Steve Riggio of Barnes & Noble on a bricks and clicks strategy to counter Amazon (to no avail).
  • It was gratifying to see that the Amazon patent cited seventeen of my patents or applications as prior art. 
  • My 2013 post, The Joy of Showrooming:  From Profit Drain to Profit Center, outlined promising ideas much along the lines of aspects of Amazon's patent. My ideas were for collaboration of the showroom owner and the Web competitor, to optimize the best value exchange, based on showrooming referral fees that compensate the showroom owner for the showroom service provided.
The IoBC (Internet of Bricks and Clicks)

The Internet of Things has gotten much press about how it connects not just people and businesses, but literally everyThing. Few grasp even the early impacts of this sea change, and even the technically sophisticated can only dimly grasp where it will take us.

Bricks and clicks is just an aspect of that. It is much like Steinberg's famous New Yorker cover, "View of the World from 9th Avenue."
  • Traditional retail businesses view online from their store-centered perspective, adding online services to counter and co-opt the enemy attack.
  • Online businesses view stores from the online-centered perspective, dabbling in stores and depots to expand their beachhead.
  • Only a few, like Bezos, see the big picture of an agnostic, flexible blend of resources and capabilities that most effectively provide what we want, when and how we want it.
To see the larger view we must climb above our attachments to stores and warehouses, or Web sites and apps. We must consider the objectives of the customers and how best to give them whatever they want, with whatever resources can be applied, as costs permit. How we orchestrate those resources to meet those objectives will change rapidly, as our systems and their integration improve. Only the most far-sighted and nimble will see and go more than a few steps down this path.

The WSJ op-ed on the merger and the article on the patent give some hint of the kind of changes we can look forward to (and I expand on that below). Fasten your seat belts, it's going to be a bumpy ride.

The User-Centered view

This blog, User-Centered Media, is focused on my dominant perspective on technology -- as a tool to improve our lives. While many of the most creative people in technology work for businesses that sell technology products (resources) for others to use, for most of my career I worked for companies that wanted to use technology. Vendors want to make what they know and sell what they make. Users want to find whatever resources they need and put them together to help their people do things -- whether the resources are people, computers, Web sites, networks, devices, stores, warehouses, or transport.

So far, bricks and clicks have been developing from the two poles, but that has not taken us very far. It seems we may be at an inflection point:
  • The two articles I cited above give a hint of what we will begin to see form in the middle. Amazon has been in the lead and just took a big step forward. The WSJ op-ed observes: "Mr. Bezos’s ambition is...oriented toward accelerating consumer gratification however possible."
  • Additional perspectives on Bezos' user-centered innovation style were in the Times a few days ago.
  • In a NY Times Upshot article today quoting Erik Brynjolfsson, "The bigger and more profound way that technology affects jobs is by completely reinventing the business model...Amazon didn’t go put a robot into the bookstores and help you check out books faster. It completely reinvented bookstores."
  • More detail on the guts of integration is reviewed in another Times article today.
  • Still another Times article today looks at the symmetry of the Amazon-Whole Foods deal and the Walmart-Bonobos deal. 
  • And still another looks at the similar synergies of the Target investment in Casper. 
Bezos is well on the way to this broad reinvention of retail, but, as suggested in a recent (May!) article by my old friend Gary Arlen on how VR/AR might factor into this (and our online comments back and forth), we are still far from game over.

With that, I indulge in some comments on my own dabblings in this space.

The Barnes & Noble that might have been

Just over twenty years ago I was invited to a B&N focus group when Amazon was just nipping at their heels and they had just built their Web site. Based on my reactions to what I saw, I wrote to Riggio, then COO, on May 8, 1997:
B&N on the Web presents an exciting opportunity to leverage your store-based business with your online business at three levels:
  • In-store uses of the Web as a kiosk to provide a new richness in self-service (using Firefly, book reviews, and other aids). 
  • Hybrid uses of remote Web access as a prelude to a store visit (such as to pre-select a book, find the nearest store that has it, and put it on hold for pickup)
  • Remote uses that you are off to a good start on (featuring collaborative filtering and other recommender and community-building tools)
I can help not only with the new media side of this, but also with the back-end integration. 
Just first steps -- but all in software, with no need for any changes at all to their physical logistics. Where might B&N be now had Riggio responded to my letter?

The Joy of Showrooming

In May 2013 (May seems a good month for this), I did a post, The Joy of Showrooming: From Profit Drain to Profit Center. It addresses new methods much the same as those in the Amazon patent (which was filed a year earlier than my post, so I am glad I chose not to apply for a patent!). After some comments on the emerging concerns about showrooming and price-checking apps, I said:
What I suggest is to take this threat, and view it as an opportunity:
  • What if showrooming activity could be tracked, and e-tailers convinced to pay a "showroom fee" to the provider of a showrooming service, if the sale came from that showroom? 
  • What if the retailer could filter Internet traffic from their store, and trace which URLs are for competitors, track purchase transactions that emanate from the store, and pass through only those that go to retailers that agree to pay the fee? 
There are a number of ways this can be done, and that can lead to a new retail ecology that benefits all.
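As a rough illustration of the referral mechanism described in those bullets (the domains, fee rates, and function names are all hypothetical):

```python
# Hypothetical showrooming-referral sketch: the store's network gateway
# recognizes outbound purchase traffic to participating e-tailers and
# logs a referral-fee claim for the showroom.
PARTNERS = {"megabooks.example": 0.05, "shoesonline.example": 0.04}

def route_purchase(domain: str, amount: float, store_id: str):
    """Return (allowed, referral_fee) for a purchase made via in-store Wi-Fi.

    store_id identifies the showroom claiming the fee; in a real system
    the claim would be reported to the e-tailer for settlement.
    """
    rate = PARTNERS.get(domain)
    if rate is None:
        # Non-participating e-tailer: the store may pass, throttle, or block.
        return False, 0.0
    fee = round(amount * rate, 2)
    return True, fee

allowed, fee = route_purchase("megabooks.example", 30.00, "S42")
```

The point is the value exchange: the e-tailer gets an attributed sale, and the store is compensated for the showroom service it actually provided.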
It is not clear where Amazon wants to go with this, and whether they want to build an ecosystem with other strong players. But as Arlen and I agree, these are still early days -- others may seize the opportunity to build a showrooming referral ecosystem (but now should consider to what extent the Amazon patent might impede them).

(I have not checked which of my patents were cited by Amazon, but in general my inventions relate to a user-centered view of networked resources, and of the IoT. And for those who might care about IP issues, all of my patents have been sold, and I am not actively involved in getting more.)

[Update 6/20: A Forrester blog post made an interesting observation: "Amazon knows that to win at brick and mortar, retail theater is paramount. Whole Foods locations are destinations where the idea of “Retail Theater” still thrives." Showrooming is theater.]

The Future of Retail

In 2014 I co-led the very stimulating MIT Enterprise Forum of NYC: Think Tank Session III: The Future of Retail: Reinventing how people get stuff (one of a series on The Future of X). This Think Tank brought together thirteen "lead participants" with diverse expertise in the field for an open brainstorming with about sixty other technology entrepreneur "participants." While aspects of the video are now dated, those who care to look are likely to find many examples of forward thinking that are still timely.

And now for something not completely different

While most of my work on my current main project, FairPay, is not specifically oriented to bricks, it does bring the same kind of user-centered, customer-first thinking to retail. Much of the focus is on the digital realm, but it suggests some new directions that relate to the physical realm as well. For more ideas on those aspects of the future of retail, check out my other blog, The FairPay Zone.

Friday, June 16, 2017

​How come my AI don't bark when you come around?

I listened to the inimitable Dr. John last night singing his very wry song, "How come my dog don't bark when you come around?"
Now you say you ain't never met my wife, you ain't never seen her befo,'
Say you ain't been hangin' roun' my crib; well here's somethin' I wanna know...
I wanna know what in the worl' is goin' down,
How come my dog don't bark when you come around?     [more]
In a distracting moment of acute nerdiness, it occurred to me that this was a great example of the deep challenges of AI: Multiple levels of meaning and abstraction that require not only common sense understandings beyond the words and phrases, but the layers of conceptual, emotional and literary meanings, such as suspicion and irony.

Just parsing the literal meaning is the easy part.

  • It is not so hard for our AI to understand the literal question.
  • Does it understand the likely answer to that question?
  • Does it know what suspicions the likely answer leads to?
  • ...who the parties are?
  • ...and what actions those suspicions motivate?

Can Alexa or Siri or Watson figure that out? How, and how well? Now, or when? What other levels of understanding does AI face? Can that AI hunt?

(Of course these problems are well-known. This was just a flash of connection that amused me, before I returned to focusing on the performance.)

Thursday, May 25, 2017

Through The Looking Glass: PsyWar Dispatches

A recent NY Times front page article, "The Right Builds an Alternative Narrative About the Crises Around Trump," is an example of a perspective that we need to make a regular feature, to support our defense in the war on truth.

It has been widely recognized that the shocking surprise of the Trump election - and that it happened at all - is partly because reasonable people who live in the world of truth were also living in a filter bubble. This reasonable majority did not realize how many people were living (at least in part) in an Alice in Wonderland world of alt-truth that had been building for years - and that this was enough to sway the election.

Perhaps the simplest countermeasure is to know thine enemy. There is plenty of evidence they will not be converted by frontal attack with facts (that produces a defensive reaction that simply deepens their polarization), but all of us can help, by peeling away the layers - if we know who and what we are dealing with. We are now beginning to realize just how vicious a cycle this is.

PsyWar Dispatches

The Times article illustrates, by example, the need for regular dispatches from PsyWar correspondents. Some of us may try to occasionally watch Fox News (and perhaps sometimes the more extreme outlets) - but that can be psychologically painful and time-consuming. Just as we have always relied on war correspondents to face challenges of danger and horror that few of us would voluntarily endure, we need PsyWar correspondents to do the equivalent, so they can provide dispatches to us.

Armed with better understanding of where the darkness is in our midst, all of us can help shine our lights on it (person to person, in our communities), and can better manage the harm it does.

What I suggest is to publish a regular feature, much like that article, whether daily (for now) or a few times per week, or less, as activity and issues warrant. It could include a regularly running real-time commentary (like that article), plus occasional analysis pieces.

Doing good - a business opportunity?

This might be done by the Times or the Washington Post (or even as a joint effort), or by other existing publications - or as a new startup.

This might be the basis for a nice entrepreneurial news venture - one that might very quickly be profitable! It would not cost much and might attract significant subscription revenue. (Of course it could be done as a non-profit service - one that might quickly become self-sustaining.)

Keep it simple

The primary function of this report would be not to fact-check or debunk or convince - but merely to enable all of us to understand what is being thought and said by those in our midst who have been blinded. It is the most basic layer of truth and sunlight - not to argue, but merely to expose. Argument and countermeasures can then grow organically from a multitude of places and in a multitude of forms. But first, every one of us must be reminded of how pervasive and insidious the threat is - and be informed of just where it is.

Of course those on the other side of any given issue will do likewise. So what? If the report is simply a faithful report of what is being said (without the ad hominem attacks, whether on "deplorables" or on "tinfoil hat conspiracy liberal hysteria"), there is nothing to debunk.

If we all know what lies are spreading (however you define that), the real truth will out.


I have previously written on this problem of polarization in filter bubbles / echo chambers, suggesting some more sophisticated ways to finesse it with technology: Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators".

Thursday, December 15, 2016

2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect

Silicon Valley, we have a problem!

The 2016 election has made it all too clear that growing concerns some of us had about the failing promise of our new media were far more acute than we had imagined. Stuart Elliott recently observed that "...the only thing easier to find than fake news is discussion of the phenomenon of fake news."

But as many have noted, this is a far bigger problem than just fake news (which is a reasonably tractable problem to solve). It is a problem of echo chambers and filter bubbles, and a still broader problem of critical thinking and responsible human relations. While the vision has been that new media could "augment human intellect," instead, it seems our media are "de-augmenting" our intellect. It is that deeper and more insidious problem that I focus on here.

The most specifically actionable ideas I have about reversing that are well described in my 2012 post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators," which has recently gotten attention from such influential figures as Tim O'Reilly and Eli Pariser. (Some readers may wish to jump directly to that post.)

This post aims to put that in a broader and more currently urgent context. As one who has thought about electronic social media, how to enhance collaborative intelligence, and the "wisdom"/"madness" of crowds since the 1970s, I thought it timely to post on this again, expand on its context, and again offer to collaborate with those seeking remedies.

This post just touches on some issues that I hope to expand on in the future. This is a rich and complex challenge. Even perverse: as noted in my 2012 post, and again below, "balanced information may actually inflame extreme views." But at last there is a critical mass of people who realize this may be the most urgent problem in our Internet media world. Humanity may be on the road to self-destruction -- if we don't find a way to fix this fast.

Some perspectives -- augmenting or de-augmenting?

Around 1970 I was exposed to two seminal early digital media thinkers. Those looking to solve these problems today would do well to look back at this rich body of work. These problems are not new -- only newly critical.
  • Doug Engelbart was a co-inventor of hypertext (the linking medium of the Web) and related tools, with the stated objective of "Augmenting Human Intellect."  His classic tech report memorably illustrated the idea of augmenting how we use media, such as writing to help us think, in terms of the opposite -- we can de-augment the task of writing with a pencil by tying the pencil to a brick! While the Web and social media have done much to augment our thinking and discourse, we now see that they are also doing much to de-augment it.
  • Murray Turoff did important early work on social decision support and collaborative problem-solving systems. These systems were aimed at consensus-seeking (initially focused on defense and emergency preparedness), and included the Delphi technique, with its specific methods for balancing the loudest and most powerful voices.
Not so long after that, I visited a lab at what is now Verizon, to see a researcher (Nathan Felde) working with an experimental super-high resolution screen for multimedia (10,000 by 10,000 pixels, as I recall -- that is more than 10 times richer than the 4K video that is just now becoming generally available). He observed that after working with that, going back to a then-conventional screen was like "eating dinner through a straw" -- de-augmentation again.

Now we find ourselves in an increasingly "post-literate" media world, with TV sound bites, 140-character Tweets, and Facebook posts that are not much longer. We increasingly consume our media on small handheld screens -- mobile and hyper-connected, but displaying barely a few sentences -- eating food for our heads through a straw.*

What a fundamental de-augmentation this is, and why it matters, is chillingly described in "Donald Trump, the first President of our Post-Literate Age," a Bloomberg View piece by Joe Weisenthal:
Before the invention of writing, knowledge existed in the present tense between two or more people; when information was forgotten, it disappeared forever. That state of affairs created a special need for ideas that were easily memorized and repeatable (so, in a way, they could go viral). The immediacy of the oral world did not favor complicated, abstract ideas that need to be thought through. Instead, it elevated individuals who passed along memorable stories, wisdom and good news.
And here we begin to see how the age of social media resembles the pre-literate, oral world. Facebook, Twitter, Snapchat and other platforms are fostering an emerging linguistic economy that places a high premium on ideas that are pithy, clear, memorable and repeatable (that is to say, viral). Complicated, nuanced thoughts that require context don’t play very well on most social platforms, but a resonant hashtag can have extraordinary influence. 
Farhad Manjoo gives further perspective in "Social Media’s Globe-Shaking Power," closing with:
Mr. Trump is just the tip of the iceberg. Prepare for interesting times.
Engelbart and Turoff (and others such as Ted Nelson, the other inventor of hypertext) pointed the way to doing the opposite -- we urgently need to re-focus on that vision, and extend it for this new age.

Current voices for change

One prominent call for change was by Tim O'Reilly, a very influential publisher, widely respected as a thought leader in Internet circles. He posted on "Media in the Age of Algorithms" and triggered much comment (including my comment referring to my 2012 post, which Tim recommended).

Another prominent voice is Eli Pariser, who is known for his TED Talk and book on The Filter Bubble, a term he popularized in 2011. He recently created a collaborative Google Doc which, as reported in Fortune, "has become a hive of collaborative activity, with hundreds of journalists and other contributors brainstorming strategies for pushing back against publishers that peddle falsehoods" (I am one, contributing a section headed "Surprising Validators"). The editable Doc is apparently generating so much traffic that a read-only copy has been posted!

Shelly Palmer did a nice post this summer, "Your Comfort Zone May Destroy The World." We need not just to exhort people to step outside their comfort zones, which few will do unaided, but to make our media smart about enticing us to do so in easy and compelling ways.

The way forward

As I said, this is a rich and complex challenge. Many of the obvious solutions are too simplistic. As my 2012 post begins:
Balanced information may actually inflame extreme views -- that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, "Breaking Up the Echo" (9/17/12). Sunstein is drawing on some very interesting research, and this points toward an important new direction for our media systems.
Please read that post to see why that is, how Sunstein suggests we might cut through that, and the filtering, rating, and ranking strategies I suggest for doing that. (The idea is to find and highlight what Sunstein called "Surprising Validators" -- people who you already give credence to, who suggest that your ideas might be wrong, at least in part -- enticing you to take a small step outside your comfort zone, and re-think, to see things just a bit more broadly.)
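To make that ranking idea concrete, here is a minimal sketch, entirely my own illustration: the scoring function, field names, and weights are assumptions, not anything Sunstein or my 2012 post specifies. The heuristic surfaces items whose author the reader already trusts but whose stance diverges from the reader's own:

```python
# Hypothetical "Surprising Validators" ranking heuristic (illustrative only).
# trust:  how much the reader credits the author, in [0, 1]
# stance: position on some issue, in [-1, 1]

def surprising_validator_score(trust: float, reader_stance: float,
                               item_stance: float) -> float:
    """High when the reader trusts the author AND the item disagrees."""
    divergence = abs(item_stance - reader_stance) / 2  # normalize to [0, 1]
    return trust * divergence

def rank_items(items, reader_stance):
    """Rank candidate items so 'surprising validators' float to the top."""
    return sorted(items,
                  key=lambda it: surprising_validator_score(
                      it["trust"], reader_stance, it["stance"]),
                  reverse=True)

items = [
    {"author": "ally",      "trust": 0.9, "stance": +0.8},  # agrees: no surprise
    {"author": "stranger",  "trust": 0.1, "stance": -0.9},  # distrusted: low weight
    {"author": "validator", "trust": 0.9, "stance": -0.6},  # trusted + disagrees
]
ranked = rank_items(items, reader_stance=+0.8)
```

The point is simply that trust and divergence must both be high: a trusted ally who agrees, or a distrusted stranger who disagrees, scores low, while the trusted source who disagrees ranks first -- the small step outside the comfort zone.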

I hope to continue to expand on this, and to work with others on these vital issues in the near future.

Supporting serious journalism

One other critical aspect of this larger problem is citizen-support of serious journalism -- not chasing clicks or commercial sponsorship, but journalism for citizens. My other blog on FairPay addresses that need, most recently with this companion post: Panic in the Streets! Now People are Ready to Patron-ize Journalism!


*Relying on smartphones to feed our heads reminds me of my disappointment with clunky HyperCard on early Macs (the first widely available hypertext system -- nearly 20 years after the early full-screen demos that so impressed me!), with its tiny "cards" instead of pages of unlimited length. How happy I was to see Mosaic and Netscape browsers on full-sized screens finally appear some 5 years later. We are losing such richness as the price of mobility! (I am writing this with a triple-monitor desktop system, which I sorely miss when away from my office, even with a laptop or iPad. And I admit, I am not great at typing with just my thumbs. ...Does anyone have a spare brick?)

[Image:  Thanks to Eli Pariser and Shelly Palmer for the separate images that I mashed up for this post.]

Wednesday, March 23, 2016

Universal Basic Income and a new Economics of Abundance -- "When looms weave by themselves, man's slavery will end"

Universal Basic Income (UBI) has become a hot topic, as noted in a recent NY Times article by Farhad Manjoo.  Confronting the rise of automation and robots, even libertarians and conservatives are warming to the idea of a universal income provided by the government, and some prominent technology VCs have become very interested. What we face is a new economics of abundance, and it will have many pervasive ramifications we can only dimly foresee -- some sooner than we may think.

The roots of this idea go back as far as Aristotle: “When looms weave by themselves, man's slavery will end.” I read this quote in a 1964 NY Times article on automation, and it helped set the path of my "user-centered" career in technology. I wrote a high school essay taking off on it, extrapolating how it enabled utopias that blended Bellamy's "Looking Backward" and Wells' "Men Like Gods." Of course that was over 50 years ago, and my youthful utopian views were less seasoned with experience and pragmatism, but the core idea I expressed then still stands up:
...This raises the question whether the product of human labor is necessarily limited. The answer is that it most certainly is not. In this century a messiah has arisen -- perhaps Messiac* might be a better name for this role of automation. As Aristotle said, "when looms weave by themselves man's slavery will end." The looms are now beginning to weave.
...One can easily conceive a giant automated complex, call it Messiac, that can produce nearly unlimited quantities of goods with only the labor of a few operators and repairmen. Similarly, farms can be improved greatly in efficiency by automation and eventually synthetic, mass-producible foods will be developed. Messiac could thus provide all with everything they needed or desired. It would not only eliminate poverty, but also remove all cause for stealing -- it is easier to push a few buttons for something than to steal it. Any individual who did not want to would not even need to work. The necessary labor would be of sufficient interest and lightness that volunteers could handle it. This remaining work would be of a professional nature and as such would have a high degree of interest... Messiac is thus an economic system that is far more utopian than that of the best traditional utopia.
It is timely that UBI is getting attention just as I have embarked on a much narrower and more immediate quest to develop a step toward that economics of abundance, based on some more sophisticated economics. While we undertake the long conversation about UBI, and the first baby steps on this road, there is a little-recognized opportunity for a related change in that direction.

Digital content and services already weave by themselves -- in the sense that they can be infinitely replicated at almost no cost. This has already caused great turmoil in the content industries -- journalism and music have been in crisis, and TV/video is not far behind. Content can be free, but who will work to create it, and how will they be compensated?

In my other blog, I suggest that the answer is in FairPay, a radically new strategy for pricing that adaptively seeks win-win compensation for creating products and services. Since there is no scarcity with digital, there is no invisible hand to allocate scarcity. Instead we need an invisible handshake, an agreement to set prices fairly to sustain creators, based on allocating "share of wallet" (whether hard-earned dollars, or UBI allowances).  FairPay suggests a simple, pragmatic mechanism for balancing power between consumers and creators/producers to agree on an equitable share of wallet.
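A bit of toy arithmetic (my own numbers, purely illustrative, not from any FairPay material) shows why, with zero marginal cost, a single set price wastes value that tailored prices could capture:

```python
# With zero marginal cost of replication, every consumer with any
# willingness to pay is worth serving. A single set price excludes
# everyone below it; per-consumer pricing serves them all.

willingness = [12.0, 9.0, 6.0, 3.0]   # hypothetical willingness to pay, in $

def revenue_at_set_price(price, wtp):
    """Only consumers willing to pay at least `price` buy, each paying `price`."""
    return sum(price for w in wtp if w >= price)

set_revenue = revenue_at_set_price(9.0, willingness)  # two buyers pay $9 each
tailored_revenue = sum(willingness)                   # everyone pays their value
```

Here the $9 set price earns $18 from two buyers, while prices tailored to each consumer would earn $30 and serve all four; the gap, plus the value lost by the excluded consumers, is the deadweight loss that adaptive pricing aims to recover.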

Check it out. I suggest FairPay will shed light on how we will live soon, and even more so in a future world where all the looms weave by themselves.

*That was the era where computers had names like UNIVAC (UNIVersal Automatic Computer), ENIAC, and EDSAC.

Monday, February 08, 2016

Cuba's Digital Future -- Market Incentives, Resource Allocation, and FairPay as a New Path

My recent one-week People-to-People tour in Cuba presented a fascinating contrast to our market economy and suggests an interesting path forward for Cuba. I was focused mostly on what I saw of the real economy (there is not yet much of a digital economy), but on reflection, realized that Cuba's digital future could be unusually interesting. 

I began to think that the new FairPay strategy for more efficient and more customer-friendly pricing (especially for digital content and services) might fit particularly nicely with Cuba's desire to modernize and privatize its economy without giving up on its strong social values. Maybe Cuba has an opportunity to find a unique path to creating a new kind of market for user-centered media services.

Cuba's evolving real economy

What I saw of the real economy presented a sharp lesson in how essential markets are to providing incentives for productivity and efficient resource allocation. My tour program contacts presented an impressive view of how Cubans have applied socialist ideas resourcefully to develop sustainable production of food and provide strong education and medical care, but the backdrop was one of inefficiency and wasted potential.

  • Most striking was the story of the owner/driver of the beautifully maintained 1950s Ford convertible serving as a tourist taxi that I had a short ride in. He had been an experienced medical doctor with a specialty, who was making $80 per month (actually 80 CUCs, but close enough -- a very good salary for Cuba). Now, as a taxi driver who owns an attractive car, he makes $80 per day! -- 20-30 times as much as he made as a medical specialist! What a waste of education and scarce skills!
  • Similar surprises were apparent under the surface in farms and food markets, the low standard of living of most of the population, and the striking decay of pre-revolution buildings and other infrastructure.
  • The food markets in particular showed the contrast of government ration books and subsidized prices for a very limited selection of basic food items, combined with gradually increased acceptance of a level of black market trade in more scarce and desirable items.
  • Modest efforts at allowing private development of restaurants ("paladares"), some of which were very nice, also presented a striking (but still very limited) contrast in how market incentives fuel productive enterprise.
A Q&A session with Reuters economic correspondent Marc Frank (in Cuba for over 20 years) added an interesting perspective on Raul Castro's ongoing efforts to shift toward more private enterprise and prepare Cuba to more fully participate in the world market -- now likely to accelerate with the thaw in US relations and a probable end to our trade embargo.

(Of course my understanding of the Cuban economy is quite limited -- these are just my impressions from what I saw in my one week there.)

Cuba's future digital economy

While Internet use and literacy in Cuba are low and will take time to grow, this cultural/economic climate raises the interesting idea that the new FairPay strategy for digital services might be especially relevant to the development of user-centered media services for Cuban consumers.
  • A core objective of the Cuban economy has been the socialist/communist ideal of "to each according to his need." The problem has been that the "from each according to his ability" does not work well (incentives are too weak), and the combination fails to provide efficient allocation of resources. That has led toward privatization -- but with conventional pricing practices, privatization does not deliver "to each according to his need."
  • FairPay creates a market solution to this problem -- not by hoping that productivity will be achieved "from each according to his ability," but by providing direct profit incentives for producers to learn what each consumer wants and what they should pay, and to produce the digital services that are desired accordingly. More like payment from each according to the value received (and willingness to pay fairly for that, to the extent able), and profit to the producer as a fair share of the actual value created.
Consider the contrast between FairPay and conventional pricing models for digital:
  • Both conventional models and FairPay seek to enable businesses to price their services in the way that realistically maximizes profits.
  • Conventional models for pricing digital services, like freemium (and soft paywalls), may provide limited free services to all who want them, but support themselves by charging set prices for more advanced "premium" services. That prices the premium services out of reach of many consumers who can't justify that set price, but who would happily pay less. That is a loss to the market because digital services can be replicated at almost no cost, to serve a very wide population of consumers who would gain value from such services. Thus the value these services could bring to the wider market is wasted, as explained in my post: Beyond the Deadweight Loss of "All You Can Eat" Subscriptions.
  • FairPay seeks to maximize profit by finding the right price for each consumer who finds value in a service. It does this by using an adaptive process to set win-win prices tailored to each consumer's needs: based on usage, value obtained, and ability to pay. FairPay exploits the unlimited replication of digital -- a lack of scarcity that makes rationing unnecessary. FairPay seeks to approximate an individually fair (and affordable) price for each consumer -- adapting over the life of the relationship -- to ensure that production is sustainably supported and incentivized.
  • The FairPayZone blog explains in detail how that works in a market-driven, dynamically adaptive way. FairPay is aimed at broad use in the current market-based environment of the US and most of the world. But the wondrous new economics of abundance in digital markets now makes it possible to achieve many of the ideals of socialism out of a profit-driven market-based system, in ways that are not yet widely recognized. 
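As one way to picture that adaptive, win-win process, here is a toy sketch -- my own simplification, not the canonical FairPay mechanism, with all function names, rates, and numbers invented for illustration -- in which the seller tracks each buyer's record of fair payment and extends usage credit accordingly:

```python
# Toy model of an adaptive pricing relationship (illustrative only):
# after each use the buyer names a price; the seller compares it to a
# reference price, updates a running fairness score, and widens or
# narrows the credit extended for future offers.

def update_fairness(fairness: float, paid: float, reference: float,
                    rate: float = 0.3) -> float:
    """Move fairness toward the ratio of price paid vs reference (capped at 1)."""
    ratio = min(paid / reference, 1.0)
    return (1 - rate) * fairness + rate * ratio

def next_offer_credit(fairness: float, base_credit: float = 10.0) -> float:
    """Extend more usage credit to buyers with a record of fair payment."""
    return base_credit * (0.5 + fairness)

f = 0.5                                  # start at a neutral fairness score
for paid in [8.0, 9.0, 10.0]:            # buyer converges on the $10 reference
    f = update_fairness(f, paid, reference=10.0)
```

A buyer whose payments track the reference price builds a fairness score above the neutral starting point and is offered more credit; one who consistently underpays would see the relationship narrow instead -- the adaptive feedback loop doing the work that a fixed price cannot.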
So perhaps, as Cuba expands its Internet infrastructure to enable wide use, FairPay will resonate as a way to achieve its ideals of fairness in a market-driven way. Businesses can seek profits, and do so in a way that adapts to the needs (and resources) of each individual consumer. 
  • For most of the world, FairPay can be viewed as adding a kinder, gentler (and smarter, more efficient) touch to the invisible hand -- what I refer to as an invisible handshake.
  • For Cuba, FairPay may be seen as adding new market drivers to a social handshake, to make it more productive and economically efficient (at least in the digital realm, and perhaps more widely).
For a full introduction to FairPay, see the Overview and the sidebar on How FairPay Works on the FairPayZone blog. There is also a guide to More Details (including links to a video). (While most of my posts on FairPay are on the FairPayZone blog, this one seemed better suited to this blog.)