Tuesday, June 26, 2018

AI = Augmented Intelligence: One More Time: Man + Machine (via HBR and SMR)

In a notable bit of synchronicity, the summer issues of both Harvard Business Review and MIT Sloan Management Review have feature articles advocating a more symbiotic approach to AI:

As Malone encapsulates it, what we need is, "an architecture for general purpose, problem-solving superminds: Computers use their specialized intelligence to solve parts of the problem, people use their general intelligence to do the rest, and computers help engage and coordinate far larger groups of people than has ever been possible."

Why do we keep forgetting how important such a symbiotic approach is?  As I have written multiple times on this blog (most recently in my last post):
Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.
The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.
Both articles are valuable updates and teachings on how and why to pursue this understanding. But why is it so hard to keep in mind that what we seek is not man or machine, but man augmented by machine?

Thursday, April 26, 2018

Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm

We dreamed that our Internet platforms would serve us miraculously, but now see that they have taken a wrong turn in many serious respects. That realization has reached a crescendo in the press and in Congress with regard to Facebook and Google's advertising-driven services, but it reaches far more deeply.

"Titans on Trial: Do internet giants have too much power? Should governments intervene?" -- I had the honor last night of attending this stimulating mock trial, with author Ken Auletta as judge and FTC Commissioner Terrell McSweeny and Rob Atkinson, President of the Information Technology and Innovation Foundation (ITIF), as opposing advocates (hosted by Genesys Partners). My interpretation of the jury verdict (voted by all of the attendees, who were mostly investors or entrepreneurs) was: yes, most agree that regulation is needed, but it must be nuanced and smartly done, not heavy-handed. Just how to do that will be a challenge, but it is a challenge that we must urgently consider.

I have been outlining views on this that go in some novel directions, but are generally consistent with the views of many other observers. This post takes a broad view of those suggestions, drawing from several earlier posts.

One of the issues touched on below is a core business model issue -- the idea that the ad-model of "free" services in exchange for attention to ads is "the original sin of the Internet." It has made users of Facebook and Google (and many others) "the product, not the customer," in a way that distorts incentives and fails to serve the user interest and the public interest. As the Facebook fiasco makes clear, these business model incentives can drive these platforms to provide just enough value to "engage" us to give up our data and attend to the advertiser's messages and manipulation and even to foster dopamine-driven addiction, but not necessarily to offer consumer value (services and data protection) that truly serves our interests.

That issue is specifically addressed in a series of posts in my other blog that focuses on a novel approach to business models (and regulation that centers on that), and those posts remain the most focused presentations on those particular issues:
The rest of this post adapts a broader outline of ideas previously embedded in a book review (of Niall Ferguson's "The Square and the Tower: Networks and Power from the Freemasons to Facebook," a historical review of power in the competing forms of networks and hierarchies). Here I abridge and update that post to concentrate on our digital platforms. (Some complementary points on the need for new thinking on regulation -- and the need for greater tech literacy and nuance -- are in a recent HBR article, "The U.S. Needs a New Paradigm for Data Governance.")

Rethinking our networks -- and the algorithms that make all the difference

Drawing on my long career as a systems analyst/engineer/designer, manager, entrepreneur, inventor, and investor (including early days in the Bell System when it was a regulated monopoly providing "universal service"), I have recently come to share the fear of many that we are going off the rails.

But in spite of the frenzy, it seems we are still failing to refocus on better ways to design, manage, use, and govern our networks -- to better balance the best of hierarchy and openness. Few who understand technology and policy are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control -- and with the innovative efficiency of an open market. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, to apply checks and balances that work much like our representative democracy. This is a still-formative development of ideas that I have written about for years, and plan to continue into the future.

First some context. The crucial differences among all kinds of networks (including hierarchies) are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. We now have the power to create a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change). Even our current social and information networks are constrained in important ways. Some examples:
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by, and private to, Facebook. (In January they announced a major change --  unilaterally -- perhaps for the better for users and society, if not for content publishers, but reports quickly surfaced that it had unintended consequences when tested.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
Our vaunted high-tech networks are controlled by corporate hierarchies (FANG: Facebook, Amazon, Netflix, and Google in much of the world, and BAT: Baidu, Alibaba, and Tencent in China) -- but are subject to limited levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly indirectly, in the form of advertising and sales of personal data.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

The answer to the question of governance is to make our network algorithms not only smarter, but more open to appropriate levels of individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found it necessary to regulate rail and telephone networks as common carriers, limiting the corporate owners' power to control how they are used and giving marketplace players (competitors and consumers) a share in that control.

Initially this was rigid and regulated in great detail by the government, but the Carterfone decision showed how to open the old AT&T Bell System network to devices not tested and approved by AT&T. Many forget that only AT&T phones could be used (except for a few alternative devices, like early fax machines, that went through cumbersome and often arbitrary AT&T approval processes). Remember the acoustic modem coupler, needed because modems could not be directly connected? That changed when the FCC opened the network to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack").

Similarly only AT&T long-distance connections could be used, until the antitrust Consent Decree opened up competition among the "Baby Bells" and broke them off from Long Lines to compete on equal terms with carriers like MCI and Sprint. Manufacturing was also opened to new competitors.

In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems interoperate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components interoperate as interchangeable parts -- extending the great advance of the first Industrial Revolution to software.

What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below (including the very relevant model of sophisticated interoperability in electronic mail networks), but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. (The picture here of the tar pits is from Fred Brooks' classic 1975 book, The Mythical Man-Month, reflecting on IBM's first large software project.) Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud that are composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, advertising, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published, open APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another, more hidden and abstract, kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government) and government regulation where necessary (preferably only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age of religious war that followed Gutenberg (as Ferguson described). Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and to mobilize to deal with it, and to get citizens and policymakers to understand the issues.

Once that progresses and is found to be effective, similar methods may eventually be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully noted the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, even the traditional principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet.")

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms -- which can then be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (initially limited to apps created by Apple), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their early fears). Why should first movers be allowed to block essential innovation? Why should they have sole control and knowledge of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control and understand, with just enough hierarchical structure to prevent excessive instability at any given time?

Consider the following specific areas of opportunity.

Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart.
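To make the pluggable-filter idea concrete, here is a minimal sketch in Python. All the names are hypothetical -- no platform exposes such an API today -- but it shows the shape of the separation: the platform controls the candidate pool, while the user plugs in the selection algorithm.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    topic: str

# A "filter" is any function that selects/ranks posts for one user.
FeedFilter = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    # Assumes the platform supplies posts newest-first; no re-ranking.
    return posts

def topic_allowlist(topics: set) -> FeedFilter:
    # A user-configured filter: only show posts on chosen topics.
    def f(posts: List[Post]) -> List[Post]:
        return [p for p in posts if p.topic in topics]
    return f

def render_feed(candidates: List[Post], user_filter: FeedFilter) -> List[Post]:
    # The platform owns the candidate pool; the user owns the filter.
    return user_filter(candidates)
```

A third party could ship a smarter `FeedFilter` (a serendipity filter, a human-curated one), and the user could swap it in without the platform's per-case blessing -- the essence of the open-API argument above.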

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and want to think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage, and much as science is open to all, but driven by those who gain authority, temporary as that may be).
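As a rough illustration of that "RateRank" notion -- the name, and every detail here, is my hypothetical sketch, simply patterned on PageRank's damped power iteration over an endorsement graph rather than a link graph:

```python
def rate_rank(endorsements, damping=0.85, iters=50):
    """Authority scores from peer endorsements (PageRank-style sketch).

    endorsements: dict mapping each person to the list of people they
    endorse; every participant must appear as a key (possibly with an
    empty list). Purely illustrative -- not a deployed algorithm.
    """
    people = list(endorsements)
    n = len(people)
    rank = {p: 1.0 / n for p in people}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in people}
        for p, targets in endorsements.items():
            if not targets:
                # Endorses no one: spread this person's weight evenly.
                for q in people:
                    new[q] += damping * rank[p] / n
            else:
                share = damping * rank[p] / len(targets)
                for q in targets:
                    new[q] += share
        rank = new
    return rank
```

The cyclic, self-improving character shows up in the iteration itself: authority flows toward those whom already-authoritative people endorse, and scores settle toward a stable ranking.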

Social graphs: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own the data defining my social graph relationships or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

User agents versus network service agents: Email systems were modularized in Internet standards long ago, so that we compose and read mail using user agents (Outlook, Apple Mail, Gmail, and others) that connect with federated remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity.

Why not do much the same, to let any social media user agent interoperate with any other, using a federated social graph and federated message transfer agents? We could then set our user agent to apply filters to let us see whichever communities we want to see at any given time. Some startups have attempted to build stand-alone social networks that focus on sub-communities like family or close friends versus hundreds of more or less remote acquaintances. Why not just make that a flexible and dynamic option, that we can control at will with a single user agent? Why require a startup to build and scale all aspects of a social media service, when they could just focus on a specific innovation? (The social media UX can be made interoperable to a high degree across different user agents, just as email user agents handle HTML, images, attachments, emojis, etc. -- and as do competing Web browsers.)
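A toy sketch of that email-style split, with hypothetical classes standing in for federated servers (the transfer agents) and interchangeable clients (the user agents). The in-memory registry stands in for what DNS/MX lookup does for real mail; everything here is illustrative, not a proposed protocol:

```python
class TransferAgent:
    """Minimal federated server: delivers locally or forwards to a peer."""
    registry = {}  # domain -> TransferAgent (stand-in for DNS/MX lookup)

    def __init__(self, domain):
        self.domain = domain
        self.inboxes = {}
        TransferAgent.registry[domain] = self

    def deliver(self, sender, recipient, post):
        user, _, domain = recipient.partition("@")
        if domain == self.domain:
            self.inboxes.setdefault(user, []).append((sender, post))
        else:
            # Federation: hand off to whichever server owns the domain.
            TransferAgent.registry[domain].deliver(sender, recipient, post)

class UserAgent:
    """Any client implementation can post/read through its home server."""
    def __init__(self, address, home):
        self.address, self.home = address, home

    def post_to(self, recipient, text):
        self.home.deliver(self.address, recipient, text)

    def inbox(self):
        return self.home.inboxes.get(self.address.split("@")[0], [])
```

The point of the design: a startup could innovate on `UserAgent` alone (a better filter, a family-only view) while riding on existing federated infrastructure, just as a new email client need not build a mail network.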

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). First class users would get validated identities, and be given full privileges, while anonymous users might be permitted but clearly flagged as such, with second class privileges. That would allow users to be exposed to anonymous content, when desired, but without confusion as to trust levels. Levels of identity could be clearly marked in feeds, and users could filter out anonymous or unverified users if desired. (We do already see some hints of this, but only to a very limited degree.)
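One way such tiered identity might look in code -- the levels and names are purely illustrative, not any platform's actual scheme:

```python
from enum import IntEnum

class IdentityLevel(IntEnum):
    # Hypothetical tiers, ordered so comparisons express trust.
    ANONYMOUS = 0
    UNVERIFIED = 1       # persistent pseudonym, not authenticated
    VERIFIED_HUMAN = 2   # authenticated as a real individual
    VERIFIED_ORG = 3     # authenticated organization

def visible_posts(posts, min_level=IdentityLevel.ANONYMOUS):
    """Each post carries its author's identity level; the user sets a floor.

    posts: iterable of (identity_level, text) pairs -- illustrative shape.
    """
    return [(lvl, text) for lvl, text in posts if lvl >= min_level]
```

Because the level travels with each post, a feed can flag anonymous content rather than hide it, and a user who wants a verified-only view simply raises the floor.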

Value transfers and extractions: As noted above, another very important problem is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent.

I have done extensive work on alternative strategies that adaptively customize value propositions and prices to markets of one -- a new strategy for a new social contract that can shape our commercial relationships to sustain services in proportion to the value they provide, and our ability to pay, so all can afford service. A key part of the issue is to ensure that users are compensated for the value of the data they provide. That can be done as a credit against user subscription fees (a "reverse meter"), at levels that users accept as fair compensation. That would shift incentives toward satisfying users (effectively making the advertiser their customer, rather than the other way around). This method has been described in the Journal of Revenue and Pricing Management: “A novel architecture to monetize digital offerings,” and very briefly in Harvard Business Review. More detail is in my FairPayZone blog and my book (see especially the posts about the Facebook and Google business models that are listed in the opening section, above, and again at the end.*)
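The "reverse meter" arithmetic itself is simple; here is a hedged sketch with purely hypothetical numbers (no actual pricing from my published work is implied):

```python
def monthly_bill(base_fee, data_value_credit, ad_exposure_credit):
    """'Reverse meter' sketch: credits for data provided and ads viewed
    offset the subscription fee. All figures hypothetical."""
    return max(0.0, base_fee - data_value_credit - ad_exposure_credit)

# E.g., a $12.00 base fee, offset by $3.50 credited for data sharing
# and $2.25 for ad exposure the user opted into.
bill = monthly_bill(12.00, 3.50, 2.25)
```

A user who opts into heavy data sharing might zero out the fee (staying effectively "free"), while a privacy-preferring user pays more cash -- either way, the platform's revenue now depends on satisfying that user.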

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties.

(Sensitive data could be restricted to qualified organizations, with special controls to avoid issues like the Cambridge Analytica misuse. The answer to such abuse is not greater concentration of power in one platform, as Maurice Stucke points out in Harvard Business Review, "Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data." Facebook has already moved toward greater concentration of power.)

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. We do that very successfully now with our choice of Web browsers and email user agents. We could have similar flexibility and choice in our platforms -- innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- in ways more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom-up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.

Reality is not binary:  "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was another of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." As he put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts that are so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, many of us discount how non-linear the effect of smarter networks, with more dynamically augmented and balanced structures, can be. But we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement, with elements of regulation, but primarily reliant on flexible, modular architectural design.

---

Coda:  On becoming more smartly intertwingled

Everything in our world has always been deeply intertwingled. Human intellect augmented with technology enables us to make our world more smartly intertwingled. But we have lost our way, in the manner that Engelbart alluded to in his illustration of de-augmentation -- we are becoming deeply polarized, addicted to self-destructive dopamine-driven engagement without insight or nuance. We are being de-augmented by our own technology run amok.


(I plan to re-brand this blog as "Smartly Intertwingled" -- that is the objective that drives my work. The theme of "User-Centered Media" is just one important aspect of that.)


--------------------------------------------------------------------------------------------

*On business models - FairPay (my other blog):  As noted above, a series of posts in my other blog focus on a novel approach to business models (and regulation that centers on that), and those posts remain my best presentation on those issues:

Saturday, January 13, 2018

"The Square and the Tower" — Augmenting and Modularizing the Algorithm (a Review and Beyond)

[Note: A newer post updates this one and removes much of the book review portion, to concentrate on the forward-looking platform issues: Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm.]

---
Niall Ferguson's new book, The Square and the Tower: Networks and Power from the Freemasons to Facebook is a sweeping historical review of the perennial power struggle between top-down hierarchies and more open forms of networks. It offers a thought-provoking perspective on a wide range of current global issues, as the beautiful techno-utopian theories of free and open networks increasingly face murder by two brutal gangs of facts: repressive hierarchies and anarchistic swarms.

Ferguson examines the ebb and flow of power, order, and revolution, with important parallels between the Gutenberg revolution (which led to 130 years of conflict) and our digital revolution, as well as much in between. There is valuable perspective on the interplay of social networks (old and new), the hierarchies of governments (liberal and illiberal), anarchists/terrorists, and businesses (disruptive and monopolistic). One can disagree with Ferguson's conservative politics yet find his analysis illuminating.

Drawing on a long career as a systems analyst/engineer/designer, manager, entrepreneur and inventor, I have recently come to share much of Ferguson's fear that we are going off the rails. He cites important examples like the 9/11 attacks, counterattacks, and ISIS, the financial meltdown of 2008, and most concerning to me, the 2016 election as swayed by social media and hacking. However -- discouraging as these are -- he seems to take an excessively binary view of network structure, and to discount the ability of open networks to better reorganize and balance excesses and abuse. He argues that traditional hierarchies should reestablish dominance.

In that regard, I think Ferguson fails to see the potential for better ways to design, manage, use, and govern our networks -- and to better balance the best of hierarchy and openness. To be fair, few technologists are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, applying checks and balances that work much like our representative democracy. This is a still-formative exploration of some ideas that I have written about, and plan to expand on in the future. First some context.

The Square (networks), the Tower (hierarchies) and the Algorithms that make all the difference

Ferguson's title comes from his metaphor of the medieval city of Sienna, with a large public square that serves as a marketplace and meeting place, and a high tower of government (as well as a nearby cathedral) that displayed the power of those hierarchies. But as he elaborates, networks have complex architectures and governance rules that are far richer than the binary categories of either "network" (a peer-to-peer network with informal and emergent rules) or "hierarchy" (a constrained network with more formal directional rankings and restrictions on connectivity).

The crucial differences among all kinds of networks are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. While his analysis draws out the rich variety of such structures in many interesting examples (with diagrams), what he seems to miss is any suggestion of a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change) -- and even our current social and information networks are constrained in important ways. For example,
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by and private to Facebook. (They have just announced a major change --  unilaterally -- hopefully for the better for users and society, if not for content publishers.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
As Ferguson points out, our vaunted high-tech networks are controlled by corporate hierarchies (he refers to FANG, Facebook, Amazon, Netflix, and Google, and BAT, Baidu, Alibaba, and Tencent) -- but subject to levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly in the form of advertising and sales of personal data.

A fuller summary of Ferguson's message is in his WSJ preview article, "In Praise of Hierarchy." That headline makes clear which side of the fence he is on.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

My view on this is more positive -- in that the answer to the question of governance is to make our network algorithms not only smarter, but more open to individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found the need to regulate rail and telephone networks as common carriers, with limited corporate-owner power to control how they are used, giving marketplace players (competitors and consumers) a large share in that control.

Initially this was rigid and regulated in great detail by the government (very hierarchical), but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget that only AT&T phones could be used (except for special cases of alternative devices, like early faxes -- Xerox "telecopiers" -- that went through cumbersome and often arbitrary AT&T approval processes). That changed when the FCC's decision opened the network to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack"). Similarly, only AT&T long-distance connections could be used, until the antitrust Consent Decree broke the "Baby Bells" off from AT&T and its Long Lines division, opening long-distance service to competition on equal terms from carriers like MCI and Sprint.

In software systems, such plug-like interfaces are known as APIs (Application Programming Interfaces), and are now widely accepted as the standard way to let systems inter-operate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components inter-operate as interchangeable parts -- the great advance of the first Industrial Revolution.
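To make the "jack" analogy concrete, here is a minimal sketch in Python (the names and the spell-checking example are hypothetical, invented purely for illustration): the host system depends only on a narrow interface, so any conforming component can plug in, just as any phone meeting the RJ11 standard could connect.

```python
from typing import Protocol

class SpellChecker(Protocol):
    """The narrow 'jack': any component exposing check() can plug in."""
    def check(self, text: str) -> list[str]: ...

class SimpleChecker:
    """One interchangeable implementation among many possible vendors."""
    def __init__(self, dictionary: set[str]):
        self.dictionary = dictionary

    def check(self, text: str) -> list[str]:
        # Return the words not found in the dictionary.
        return [w for w in text.lower().split() if w not in self.dictionary]

class Editor:
    """The host system depends only on the interface, not the vendor."""
    def __init__(self, checker: SpellChecker):
        self.checker = checker

    def proofread(self, text: str) -> list[str]:
        return self.checker.check(text)

editor = Editor(SimpleChecker({"the", "cat", "sat"}))
print(editor.proofread("the cat zat"))  # → ['zat']
```

The point of the sketch is the separation: `Editor` never needs to know which checker is plugged in, so implementations can be swapped without touching the host.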

What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below, but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge that Ferguson makes clear is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government), but by government where necessary (and only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age that followed Gutenberg. Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and mobilize to deal with it.

Once that progresses and is found to be effective, similar methods can be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully notes the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, the principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet" -- another tower.)

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms, to be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (limited to apps created by Apple or Motorola or Verizon), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their fears). Why should first movers be allowed to block essential innovation? Why should they have sole control of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control, with just enough hierarchical structure to prevent excessive instability at any given time?

Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow for separate filters that inter-operate. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filters. This would be much like third-party analytics in financial market data systems. Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful.
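As a rough illustration of what "pluggable" filters could look like, here is a hypothetical sketch (the names, the `Item` fields, and the two toy ranking rules are my own inventions, not any platform's actual API): the platform defines a common filter signature, and the user chooses which implementation runs on their feed.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Item:
    author_verified: bool
    topic: str
    engagement: float

# The common interface: any filter takes a list of items, returns a ranked list.
FeedFilter = Callable[[list[Item]], list[Item]]

def engagement_filter(items: list[Item]) -> list[Item]:
    """A platform-style default: rank purely by engagement."""
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def verified_only_filter(items: list[Item]) -> list[Item]:
    """A third-party alternative: drop unverified authors, then rank."""
    return engagement_filter([i for i in items if i.author_verified])

def build_feed(items: list[Item], chosen: FeedFilter) -> list[Item]:
    # The user (not the platform) chooses which filter runs.
    return chosen(items)

feed = [Item(True, "news", 0.2), Item(False, "gossip", 0.9)]
print(build_feed(feed, verified_only_filter))
```

The same feed produces different results depending on which filter the user plugs in -- which is the whole point: competition happens at the filter, not the platform.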

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but what I propose seeks the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage).
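A toy sketch of the RateRank idea follows (hypothetical: it simply reuses the standard PageRank power-iteration scheme, but on a graph of who endorses whom rather than which pages link where):

```python
def rate_rank(endorsements, damping=0.85, iterations=50):
    """
    Hypothetical 'RateRank' sketch: people who are rated highly pass
    more weight to those they endorse, just as highly-ranked pages
    pass more weight to the pages they link to in PageRank.
    `endorsements` maps each person to the people they vouch for.
    """
    people = list(endorsements)
    n = len(people)
    rank = {p: 1.0 / n for p in people}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in people}
        for p, endorsed in endorsements.items():
            if endorsed:
                share = damping * rank[p] / len(endorsed)
                for q in endorsed:
                    new[q] += share
            else:
                # Someone who endorses no one: spread their weight evenly.
                for q in people:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

graph = {"ann": ["bob"], "bob": ["cal"], "cal": ["bob"], "dee": ["bob"]}
scores = rate_rank(graph)
print(max(scores, key=scores.get))  # → bob (endorsed by ann, cal, and dee)
```

The cyclic, self-improving character is visible in the iteration: each pass re-weights everyone's authority by the authority of their endorsers, so the "smart sub-crowd" emerges from the crowd's own ratings.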

Social graphs and user agents: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own my social graph, or do I? Does the platform control how that affects my filter, or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

Email systems were modularized long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity. Why not do the same to let any social media user agent inter-operate with any other, using a common social graph? We would then set our user agent to apply filters to let us see whichever communities we want to see at any given time.
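Sketching that email analogy in code (hypothetical classes and community names; real interoperation would of course need a standardized protocol between organizations, not a shared in-memory object): one common, user-owned graph with multiple sub-communities, queried by whichever user agent the person prefers.

```python
class SocialGraph:
    """A shared, user-owned multi-graph that any user agent may query."""
    def __init__(self):
        self.edges = {}  # person -> {community: set of contacts}

    def connect(self, person, contact, community):
        self.edges.setdefault(person, {}).setdefault(community, set()).add(contact)

    def contacts(self, person, community):
        return self.edges.get(person, {}).get(community, set())

class WorkAgent:
    """One of many interchangeable user agents over the same graph."""
    def __init__(self, graph):
        self.graph = graph

    def feed_sources(self, me):
        return self.graph.contacts(me, "professional")

class FamilyAgent:
    """A different vendor's agent -- same graph, different view."""
    def __init__(self, graph):
        self.graph = graph

    def feed_sources(self, me):
        return self.graph.contacts(me, "personal")

graph = SocialGraph()
graph.connect("me", "boss", "professional")
graph.connect("me", "mom", "personal")
print(WorkAgent(graph).feed_sources("me"))    # → {'boss'}
print(FamilyAgent(graph).feed_sources("me"))  # → {'mom'}
```

As with email's user agents and mail transfer agents, the agents compete on features while the graph itself stays common and portable.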

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). These could then be clearly marked in feeds, and users could filter out anonymous or unverified users if desired.
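Such graduated identity levels might look like this in a feed system (a hypothetical sketch; the level names and threshold scheme are invented, not any platform's actual design):

```python
from enum import IntEnum

class IdentityLevel(IntEnum):
    ANONYMOUS = 0
    PSEUDONYMOUS = 1    # stable alias, but unverified
    VERIFIED_HUMAN = 2  # authenticated as a real individual
    VERIFIED_ORG = 3    # authenticated as a legitimate organization

def visible_posts(posts, minimum=IdentityLevel.ANONYMOUS):
    """Each user sets their own floor -- not a binary ban on anonymity."""
    return [p for p in posts if p["level"] >= minimum]

posts = [
    {"text": "hi", "level": IdentityLevel.ANONYMOUS},
    {"text": "hello", "level": IdentityLevel.VERIFIED_HUMAN},
]
# A cautious user raises their floor; a permissive user keeps the default.
print(visible_posts(posts, IdentityLevel.VERIFIED_HUMAN))
```

Because the level travels with each post, feeds can also simply label it, letting users filter, deprioritize, or just discount unverified voices as they see fit.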

Value transfers and extractions: Another important problem, one that Ferguson cites, is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent. (I have done extensive work on such alternative strategies, as described in my FairPayZone blog and my book.)

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties. (Sensitive data could be restricted to qualified organizations.) [This paragraph added 1/14.]

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. But we could have the choice, and innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that Ferguson (and many others) neglects is the augmentation of human intelligence. This idea dates back some 60 years (and more), to when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of mundane uses of computers, that proved eye-opening ("the mother of all demos") at a 1968 conference, and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.

Not binary:  networks versus hierarchies -- "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was also one of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." Ferguson's exposition illuminates how true that is of history. Unfortunately, his artificially binary dichotomy of hierarchies versus networks tends to mask this, and seems to blind him to how much more intertwingled we can expect our networks to be in the future. As Nelson put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts of the Square and the Tower that Ferguson reveals as so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, Ferguson seems to discount how non-linear the effect of smarter networks with more dynamically augmented and balanced structures can be. But he is right to be very fearful, and to raise the alarm -- we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement.

---

First published in Reisman on User-Centered Media, 1/13/18.

Monday, June 19, 2017

Bricks and Clicks -- Showrooming, Riggio, and Bezos

Last week brought two notable news items about Amazon and the future of retail. Most noted was the deal to buy Whole Foods, which would greatly accelerate Amazon's move to the center of the still-rudimentary combination of bricks and clicks. Drawing some limited attention was the issuance of an Amazon patent (filed in May 2012) on smart ways to turn showrooming to a store's advantage.

This was especially relevant to me for several reasons:
  • I have been dabbling in bricks and clicks for at least twenty years, since I tried to pitch Steve Riggio of Barnes & Noble on a bricks and clicks strategy to counter Amazon (to no avail).
  • It was gratifying to see that the Amazon patent cited seventeen of my patents or applications as prior art. 
  • My 2013 post, The Joy of Showrooming:  From Profit Drain to Profit Center, outlined promising ideas much along the lines of aspects of Amazon's patent. My ideas were for collaboration of the showroom owner and the Web competitor, to optimize the best value exchange, based on showrooming referral fees that compensate the showroom owner for the showroom service provided.
The IoBC (Internet of Bricks and Clicks)

The Internet of Things has gotten much press about how it connects not just people and businesses, but literally everyThing. Few grasp even the early impacts of this sea change, and even the technically sophisticated can only dimly grasp where it will take us.

Bricks and clicks is just an aspect of that. It is much like Saul Steinberg's famous New Yorker "View of the World" magazine cover.
  • Traditional retail businesses view online from their store-centered perspective, adding online services to counter and co-opt the enemy attack.
  • Online businesses view stores from the online-centered perspective, dabbling in stores and depots to expand their beachhead.
  • Only a few, like Bezos, see the big picture of an agnostic, flexible blend of resources and capabilities that most effectively provide what we want, when and how we want it.
To see the larger view we must climb above our attachments to stores and warehouses, or Web sites and apps. We must consider the objectives of the customers and how best to give them whatever they want, with whatever resources can be applied, as costs permit. How we orchestrate those resources to meet those objectives will change rapidly, as our systems and their integration improves. Only the most far-sighted and nimble will see and go more than a few steps down this path.

The WSJ op-ed on the merger and article on patent give some hint of the kind of changes we can look forward to (and I expand on that below). Fasten your seat belts, it's going to be a bumpy ride.

The User-Centered view

This blog, User-Centered Media, is focused on my dominant perspective on technology -- as a tool to improve our lives. While many of the most creative people in technology work for businesses that sell technology products (resources) for others to use, for most of my career I worked for companies that wanted to use technology. Vendors want to make what they know and sell what they make. Users want to find whatever resources they need and put them together to help their people do things -- whether the resources are people, computers, Web sites, networks, devices, stores, warehouses, or transport.

So far bricks and clicks have been developing from the two poles, but that has not taken us very far. But it seems we may be at an inflection point:
  • The two articles I cited above give a hint of what we will begin to see form in the middle. Amazon has been in the lead and just took a big step forward. The WSJ op-ed observes: "Mr. Bezos’s ambition is...oriented toward accelerating consumer gratification however possible."
  • Additional perspectives on Bezos' user-centered innovation style were in the Times a few days ago
  • In a NY Times Upshot article today quoting Erik Brynjolfsson, "The bigger and more profound way that technology affects jobs is by completely reinventing the business model...Amazon didn’t go put a robot into the bookstores and help you check out books faster. It completely reinvented bookstores."
  • More detail on the guts of integration is reviewed in another Times article today
  • Still another Times article today looks at the symmetry of the Amazon-Whole Foods deal and the Walmart-Bonobos deal. 
  • And still another looks at the similar synergies of the Target investment in Casper. 
Bezos is well on the way to this broad reinvention of retail, but, as suggested in a recent (May!) article by my old friend Gary Arlen on how VR/AR might factor into this (and our online comments back and forth), we are still far from game over.

With that, I indulge in some comments on my own dabblings in this space.

The Barnes & Noble that might have been

Just over twenty years ago I was invited to a B&N focus group when Amazon was just nipping at their heels and they had just built their Web site. Based on my reactions to what I saw, I wrote to Riggio, then COO, on May 8, 1997:
B&N on the Web presents an exciting opportunity to leverage your store-based business with your online business at three levels:
  • In-store uses of the Web as a kiosk to provide a new richness in self-service (using Firefly, book reviews, and other aids). 
  • Hybrid uses of remote Web access as a prelude to a store visit (such as to pre-select a book, find the nearest store that has it, and put it on hold for pickup)
  • Remote uses that you are off to a good start on (featuring collaborative filtering and other recommender and community-building tools)
I can help not only with the new media side of this, but also with the back-end integration. 
Just first steps -- but all in software, with no need for any changes at all to their physical logistics. Where might B&N be now if Riggio had responded to my letter?

The Joy of Showrooming

In May 2013 (May seems a good month for this), I did a post, The Joy of Showrooming: From Profit Drain to Profit Center. It addresses new methods much the same as those in the Amazon patent (which was filed a year earlier than my post, so I am glad I chose not to apply for a patent!). After some comments on the emerging concerns about showrooming and price-checking apps, I said:
What I suggest is to take this threat, and view it as an opportunity:
  • What if showrooming activity could be tracked, and e-tailers convinced to pay a "showroom fee" to the provider of a showrooming service, if the sale came from that showroom? 
  • What if the retailer could filter Internet traffic from their store, and trace which URLs are for competitors, track purchase transactions that emanate from the store, and pass through only those that go to retailers that agree to pay the fee? 
There are a number of ways this can be done, and that can lead to a new retail ecology that benefits all.
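One simple way such a referral-fee arrangement might work is sketched below (hypothetical: the retailer hostnames, fee rates, and function names are invented for illustration). The showroom's network passes purchase traffic through only for retailers that have agreed to pay a fee on sales that originate in the store.

```python
# Retailers that have agreed to pay the showroom a referral fee (rate per sale).
PARTICIPATING = {"shop-a.example": 0.05, "shop-b.example": 0.03}

def route_purchase(url_host, order_total, ledger):
    """
    Sketch of the showroom-fee idea: a purchase completed over the
    store's network is passed through only if the retailer participates,
    in which case the referral fee is recorded for the showroom owner.
    """
    rate = PARTICIPATING.get(url_host)
    if rate is None:
        return False  # non-participating retailer: traffic not passed through
    ledger.append({"host": url_host, "fee": round(order_total * rate, 2)})
    return True

ledger = []
route_purchase("shop-a.example", 100.0, ledger)     # participating: $5.00 fee
route_purchase("freeloader.example", 100.0, ledger)  # not participating
print(ledger)
```

The design turns showrooming from a drain into a metered service: the store is compensated in proportion to the sales it demonstrably helped produce.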
It is not clear where Amazon wants to go with this, and whether they want to build an ecosystem with other strong players. But as Arlen and I agree, these are still early days -- others may seize the opportunity to build a showrooming referral ecosystem (but now should consider to what extent the Amazon patent might impede them).

(I have not checked which of my patents were cited by Amazon, but in general my inventions relate to a user-centered view of networked resources, and of the IoT. And for those who might care about IP issues, all of my patents have been sold, and I am not actively involved in getting more.)

[Update 6/20: A Forrester blog post made an interesting observation: "Amazon knows that to win at brick and mortar, retail theater is paramount. Whole Foods locations are destinations where the idea of “Retail Theater” still thrives." Showrooming is theater.]

The Future of Retail

In 2014 I co-led the very stimulating MIT Enterprise Forum of NYC: Think Tank Session III: The Future of Retail: Reinventing how people get stuff (one of a series on The Future of X). This Think Tank brought together thirteen "lead participants" with diverse expertise in the field for an open brainstorming with about sixty other technology entrepreneur "participants." While aspects of the video are now dated, those who care to look are likely to find many examples of forward thinking that are still timely.

And now for something not completely different

While most of my work on my current main project, FairPay, is not specifically oriented to bricks, it does bring the same kind of user-centered, customer-first thinking to retail. Much of the focus is on the digital realm, but it suggests some new directions that relate to the physical realm as well. For more ideas on those aspects of the future of retail, check out my other blog, The FairPay Zone.

---
[Update 3/31/18: Sloan Management Review published an excellent article on this theme: The Store Is Dead — Long Live the Store.]



Friday, June 16, 2017

How come my AI don't bark when you come around?

I listened to the inimitable Dr. John last night singing his very wry song, "How come my dog don't bark when you come around?"
Now you say you ain't never met my wife, you ain't never seen her befo,'
Say you ain't been hangin' roun' my crib; well here's somethin' I wanna know...
I wanna know what in the worl' is goin' down,
How come my dog don't bark when you come around?     [more]
In a distracting moment of acute nerdiness, it occurred to me that this was a great example of the deep challenges of AI: Multiple levels of meaning and abstraction that require not only common sense understandings beyond the words and phrases, but the layers of conceptual, emotional and literary meanings, such as suspicion and irony.

Just parsing the literal meaning is the easy part.

  • It is not so hard for our AI to understand the literal question.
  • Does it understand the likely answer to that question?
  • Does it know what suspicions the likely answer leads to?
  • ...who the parties are?
  • ...and what actions those suspicions motivate?

Can Alexa or Siri or Watson figure that out? How, and how well? Now, or when? What other levels of understanding does AI face? Can that AI hunt?

(Of course these problems are well-known. This was just a flash of connection that amused me, before I returned to focusing on the performance.)

Thursday, May 25, 2017

Through The Looking Glass: PsyWar Dispatches

A recent NY Times front page article, "The Right Builds an Alternative Narrative About the Crises Around Trump," is an example of a perspective that we need to make a regular feature, to support our defense in the war on truth.

It has been widely recognized that the shocking surprise of the Trump election - and that it happened at all - is partly because reasonable people who live in the world of truth were also living in a filter bubble. This reasonable majority did not realize how many people were living (at least in part) in an Alice in Wonderland world of alt-truth that had been building for years - and that that was enough to sway the election.

Perhaps the simplest countermeasure is to know thine enemy. There is plenty of evidence they will not be converted by frontal attack with facts (that produces a defensive reaction that simply deepens their polarization), but all of us can help, by peeling away the layers - if we know who and what we are dealing with. We are now beginning to realize just how vicious a cycle this is.

PsyWar Dispatches

The Times article illustrates, by example, the need for regular dispatches from PsyWar correspondents. Some of us may try to occasionally watch Fox News (and perhaps sometimes the more extreme outlets) - but that can be psychologically painful and time-consuming. Just as we have always relied on war correspondents to face challenges of danger and horror that few of us would voluntarily endure, we need PsyWar correspondents to do the equivalent, so they can provide dispatches to us.

Armed with better understanding of where the darkness is in our midst, all of us can help shine our lights on it (person to person, in our communities), and can better manage the harm it does.

What I suggest is to publish a regular feature, much like that article, whether daily (for now) or a few times per week, or less, as activity and issues warrant. It could include a regularly running real-time commentary (like that article), plus occasional analysis pieces.

Doing good - a business opportunity?

This might be done by the Times or the Washington Post (or even as a joint effort), or by other existing publications - or as a new startup.

This might be the basis for a nice entrepreneurial news venture - one that might very quickly be profitable! It would not cost much and might attract significant subscription revenue. (Of course it could be done as a non-profit service - one that might quickly become self-sustaining.)

Keep it simple

The primary function of this report would be not to fact-check or debunk or convince - but merely to enable all of us to understand what is being thought and said by those in our midst who have been blinded. It is the most basic layer of truth and sunlight - not to argue, but merely to expose. Argument and countermeasures can then grow organically from a multitude of places and in a multitude of forms. But first, every one of us must be reminded of how pervasive and insidious the threat is - and be informed of just where it is.

Of course those on the other side of any given issue will do likewise. So what? If the report is simply a faithful report of what is being said (without the ad hominem attacks, whether on "deplorables" or on "tinfoil hat conspiracy liberal hysteria"), there is nothing to debunk.

If we all know what lies are spreading (however you define that), the real truth will out.

---

I have previously written on this problem of polarization in filter bubbles / echo chambers, suggesting some more sophisticated ways to finesse it with technology: Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators".

Thursday, December 15, 2016

2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect

Silicon Valley, we have a problem!

The 2016 election has made it all too clear that growing concerns some of us had about the failing promise of our new media were far more acute than we had imagined. Stuart Elliott recently observed that "...the only thing easier to find than fake news is discussion of the phenomenon of fake news."

But as many have noted, this is a far bigger problem than just fake news (which is a reasonably tractable problem to solve). It is a problem of echo chambers and filter bubbles, and a still broader problem of critical thinking and responsible human relations. While the vision has been that new media could "augment human intellect," instead, it seems our media are "de-augmenting" our intellect. It is that deeper and more insidious problem that I focus on here.

The most specifically actionable ideas I have about reversing that are well described in my 2012 post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators," which has recently gotten attention from such influential figures as Tim O'Reilly and Eli Pariser. (Some readers may wish to jump directly to that post.)

This post aims to put that in a broader, and more currently urgent, context. As one who has thought about electronic social media, how to enhance collaborative intelligence, and the "wisdom"/"madness" of crowds since the 1970s, I thought it timely to post on this again, expand on its context, and again offer to collaborate with those seeking remedies.

This post just touches on some issues that I hope to expand on in the future. This is a rich and complex challenge -- even a perverse one: as noted in my 2012 post, and again below, "balanced information may actually inflame extreme views." But at last there is a critical mass of people who realize this may be the most urgent problem in our Internet media world. Humanity may be on the road to self-destruction -- if we don't find a way to fix this fast.

Some perspectives -- augmenting or de-augmenting?

Around 1970 I was exposed to two seminal early digital media thinkers. Those looking to solve these problems today would do well to look back at this rich body of work. These problems are not new -- only newly critical.
  • Doug Engelbart was a co-inventor of hypertext (the linking medium of the Web) and related tools, with the stated objective of "Augmenting Human Intellect."  His classic tech report memorably illustrated the idea of augmenting how we use media, such as writing to help us think, in terms of the opposite -- we can de-augment the task of writing with a pencil by tying the pencil to a brick! While the Web and social media have done much to augment our thinking and discourse, we now see that they are also doing much to de-augment it.
  • Murray Turoff did important early work on social decision support and collaborative problem solving systems. These systems were aimed at consensus-seeking (initially focused on defense and emergency preparedness), and included the Delphi technique, with its specific methods for balancing the loudest and most powerful voices.
Not so long after that, I visited a lab at what is now Verizon, to see a researcher (Nathan Felde) working with an experimental super-high resolution screen for multimedia (10,000 by 10,000 pixels, as I recall -- that is more than 10 times richer than the 4K video that is just now becoming generally available). He observed that after working with that, going back to a then-conventional screen was like "eating dinner through a straw" -- de-augmentation again.
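The arithmetic behind that comparison is easy to check. Assuming the common UHD definition of "4K" (3840 x 2160 -- an assumption on my part; the original post does not specify which 4K standard), a minimal sketch:

```python
# Back-of-the-envelope check of the resolution comparison above.
experimental = 10_000 * 10_000   # the lab display, as recalled: 100 megapixels
uhd_4k = 3840 * 2160             # UHD "4K": ~8.3 megapixels

ratio = experimental / uhd_4k
print(round(ratio, 1))           # ~12.1 -- indeed "more than 10 times richer"
```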

Now we find ourselves in an increasingly "post-literate" media world, with TV sound bites, 140-character Tweets, and Facebook posts that are not much longer. We increasingly consume our media on small handheld screens -- mobile and hyper-connected, but displaying barely a few sentences -- eating food for our heads through a straw.*

What a fundamental de-augmentation this is, and why it matters, is chillingly described in "Donald Trump, the First President of Our Post-Literate Age," a Bloomberg View piece by Joe Weisenthal:
Before the invention of writing, knowledge existed in the present tense between two or more people; when information was forgotten, it disappeared forever. That state of affairs created a special need for ideas that were easily memorized and repeatable (so, in a way, they could go viral). The immediacy of the oral world did not favor complicated, abstract ideas that need to be thought through. Instead, it elevated individuals who passed along memorable stories, wisdom and good news.
And here we begin to see how the age of social media resembles the pre-literate, oral world. Facebook, Twitter, Snapchat and other platforms are fostering an emerging linguistic economy that places a high premium on ideas that are pithy, clear, memorable and repeatable (that is to say, viral). Complicated, nuanced thoughts that require context don’t play very well on most social platforms, but a resonant hashtag can have extraordinary influence. 
Farhad Manjoo gives further perspective in "Social Media’s Globe-Shaking Power," closing with:
Mr. Trump is just the tip of the iceberg. Prepare for interesting times.
Engelbart and Turoff (and others such as Ted Nelson, the other inventor of hypertext) pointed the way to doing the opposite -- we urgently need to re-focus on that vision, and extend it for this new age.

Current voices for change

One prominent call for change was by Tim O'Reilly, a very influential publisher, widely respected as a thought leader in Internet circles. He posted on "Media in the Age of Algorithms" and triggered much comment (including my comment referring to my 2012 post, which Tim recommended).

Another prominent voice is Eli Pariser, who is known for his TED Talk and book on The Filter Bubble, a term he popularized in 2011. He recently created a collaborative Google Doc, which, as reported in Fortune, "has become a hive of collaborative activity, with hundreds of journalists and other contributors brainstorming strategies for pushing back against publishers that peddle falsehoods" (I am one, contributing a section headed "Surprising Validators"). The editable Doc is apparently generating so much traffic that a read-only copy has been posted!

Shelly Palmer did a nice post this summer, "Your Comfort Zone May Destroy The World." We need not just to exhort people to step outside their comfort zones -- which few will do unaided -- but to make our media smart about enticing us to do that in easy and compelling ways.

The way forward

As I said, this is a rich and complex challenge. Many of the obvious solutions are too simplistic. As my 2012 post begins:
Balanced information may actually inflame extreme views -- that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, "Breaking Up the Echo" (9/17/12). Sunstein is drawing on some very interesting research, and this points toward an important new direction for our media systems.
Please read that post to see why that is, how Sunstein suggests we might cut through that, and the filtering, rating, and ranking strategies I suggest for doing that. (The idea is to find and highlight what Sunstein called "Surprising Validators" -- people who you already give credence to, who suggest that your ideas might be wrong, at least in part -- enticing you to take a small step outside your comfort zone, and re-think, to see things just a bit more broadly.)
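To make the idea concrete, here is a minimal sketch of how a feed might rank items to surface Surprising Validators. This is purely my illustration of the concept, not an implementation described in the 2012 post; the scoring function, stance scale, and names are all hypothetical assumptions. The intuition: an item scores highest when it comes from a source the reader already trusts and its stance diverges from the reader's own.

```python
def surprise_score(reader_trust: float, item_stance: float, reader_stance: float) -> float:
    """Hypothetical scoring: trust in [0, 1], stances in [-1, 1].
    High score = trusted source taking a position the reader disagrees with."""
    disagreement = abs(item_stance - reader_stance) / 2  # normalize to [0, 1]
    return reader_trust * disagreement

# A trusted source that disagrees outranks both a trusted source that agrees
# (no surprise) and a distrusted source that disagrees (no validation).
items = [
    ("trusted source, agrees",    surprise_score(0.9,  0.8,  0.8)),
    ("trusted source, disagrees", surprise_score(0.9, -0.6,  0.8)),
    ("distrusted source, disagrees", surprise_score(0.1, -0.6,  0.8)),
]
ranked = sorted(items, key=lambda item: item[1], reverse=True)
print(ranked[0][0])  # "trusted source, disagrees"
```

The design point is that neither factor alone is enough: pure "balance" (disagreement without trust) is what Sunstein warns can inflame extreme views, while trust without disagreement is just the echo chamber.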

I hope to continue to expand on this, and to work with others on these vital issues in the near future.

=================================
Supporting serious journalism

One other critical aspect of this larger problem is citizen-support of serious journalism -- not chasing clicks or commercial sponsorship, but journalism for citizens. My other blog on FairPay addresses that need, most recently with this companion post: Panic in the Streets! Now People are Ready to Patron-ize Journalism!

=================================

---
*Relying on smartphones to feed our heads reminds me of my disappointment with clunky HyperCard on early Macs (the first widely available hypertext system -- nearly 20 years after the early full-screen demos that so impressed me!), with its tiny "cards" instead of pages of unlimited length. How happy I was to see Mosaic and Netscape browsers on full-sized screens finally appear some 5 years later. We are losing such richness as the price of mobility! (I am writing this with a triple-monitor desktop system, which I sorely miss when away from my office, even with a laptop or iPad. And I admit, I am not great at typing with just my thumbs. ...Does anyone have a spare brick?)

[Image:  Thanks to Eli Pariser and Shelly Palmer for the separate images that I mashed up for this post.]