Sunday, July 22, 2018

The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings

How technology can make us all smarter, not dumber

We thought social media and computer-mediated communications technologies would make us smarter, but recent experience with Facebook, Twitter, and others suggests they are now making us much dumber. We face a major and fundamental crisis. Civilization seems to be descending into a battle of increasingly polarized factions who cannot understand or accept one another, fueled by filter bubbles and echo chambers.

Many have begun to focus serious attention on this problem, but it seems we are fighting the last war -- not using tools that match the task.

A recent conference, "Fake News Horror Show," convened people focused on these issues from government, academia, and industry. One of the questions raised was who decides what is "fake news," how, and on what basis. There are many efforts at fact checking, and at certification or rating of reputable vs. disreputable sources -- but also recognition that such efforts can be crippled by circularity: who is credible enough in the eyes of diverse communities of interest to escape the charge of "fake news" themselves?

I raised two points at that conference. This post expands on the first point and shows how it provides a basis for addressing the second:
  • The core issue is one of trust and authority -- it is hard to get consistent agreement in any broad population on who should be trusted or taken as an authority, no matter what their established credentials or reputation. Who decides what is fake news? What I suggested is that this is the same problem that has been made manageable by getting smarter about the wisdom of crowds -- much as Google's PageRank algorithm beat out Yahoo and AltaVista at making search engines effective at finding content that is relevant and useful.

    As explained further below, the essence of the method is to "rate the raters" -- and to weight those ratings accordingly. Working at Web scale, no rater's authority can be relied on without drawing on the judgement of the crowd. Furthermore, simple equal voting does not fully reflect the wisdom of the crowd -- there is deeper wisdom about those votes to be drawn from the crowd.

    Some of the crowd are more equal than others. Deciding who is more equal, and whose vote should be weighted more heavily can be determined by how people rate the raters -- and how those raters are rated -- and so on. Those ratings are not universal, but depend on the context: the domain and the community -- and the current intent or task of the user. Each of us wants to see what is most relevant, useful, appealing, or eye-opening -- for us -- and perhaps with different balances at different times. Computer intelligence can distill those recursive, context-dependent ratings, to augment human wisdom.
  • A major complicating issue is that of biased assimilation. The perverse truth seems to be that "balanced information may actually inflame extreme views." This is all too clear in the mirror worlds of pro-Trump and anti-Trump factions and their media favorites like Fox, CNN, and MSNBC. Each side thinks the other is unhinged or even evil, and wraps anything the other says in a vicious cycle of distrust. It seems one of the few promising counters to this vicious cycle is what Cass Sunstein referred to as surprising validators: people one usually gives credence to, but who suggest one's view on a particular issue might be wrong. A recent example of a surprising validator was the "Confession of an Anti-GMO Activist." This item is readily identifiable as a "turncoat" opinion that might be influential for many, but smart algorithms can find similar items that are more subtle, and tied to less prominent people who may be known and respected by a particular user. There is an opportunity for electronic media services to exploit this insight that "what matters most may be not what is said, but who, exactly, is saying it."
These are themes I have been thinking and writing about on and off for decades. This growing crisis, as highlighted by the Fake News Horror Show conference, spurred me to write this outline for a broad architecture (and specific methods) for addressing these issues. Discussions at that event led to my invitation to an upcoming workshop hosted by the Global Engagement Center (a US State Department unit) focused on "technologies for use against foreign propaganda, disinformation, and radicalization to violence." This post is offered to contribute to those efforts.

Beyond that urgent focus, this architecture has relevance to the broader improvement of social media and other collaborative systems. Some key themes:
  • Binary, black or white thinking is easy and natural, but humans are capable of dealing with the fact that reality is nuanced in many shades of gray, in many dimensions. Our electronic media can augment that capability.
  • Instead, our most widely used social media now foster simplistic, binary thinking.
  • Simple strategies (analogous to those proven and continually refined in Google's search engine) enable our social media systems to recognize more of the underlying nuance, and bring it to our attention in far more effective ways.
  • We can apply an architecture that draws on some core structures and methods to enable intelligent systems to better augment human intelligence, and to do that in ways tuned to the needs of a diversity of people -- from different schools of thought and with different levels of intelligence, education, and attention.
  • Doing this can not only better expose truly fake news for what it is, but can make us smarter and more aware and reflective of nuance. 
  • This can not only guide our attention toward quality, but can also open us to surprising validators and other forms of serendipity needed to escape our filter bubbles.
Where I am coming from

I was first exposed to early forms of augmented intelligence and hypermedia in 1969 (notably Nelson and Engelbart), and to collaborative systems in 1971 (notably Turoff). That set a broad theme for my work. After varied roles in IT and media technology, I became an inventor, and one of my patent applications outlined a collaborative system for social development of inventions and other ideas (in 2002). While my specific business objective proved elusive (as the world of patents changed), what I described was a general architecture for collaborative development of ideas that has very wide applicability ("ideas" include news stories, social media posts, and "likes"). That is obviously more timely now than ever. I had written on this blog about some specific aspects of those ideas in 2012: "Filtering for Serendipity -- Extremism, 'Filter Bubbles' and 'Surprising Validators.'" To encourage use of those ideas, I released that patent filing into the public domain in 2016.

Here, I take a first shot at a broad description of these strategies that is intended to be more readable and relevant to our current crisis than the legalese of the patent application. As a supplement, a copy of that patent document, with highlighting of the portions that remain most relevant, is posted online.*

Of course some of these ideas are more readily applied than others. But the goal of an architecture is to provide a vision and a framework to build on. Considering the broad scope of what might be done over time is the best way to be sure that we do the best we can at any point in time. We can then adjust and improve on that to build toward still-better solutions.

Augmenting the wisdom of crowds

Civilization has risen because of our human skills: to cooperate, to learn from one another, and to coalesce on wisdom and resist folly -- difficult as it may often be to distinguish which is which.

Life is complex, and things are rarely black or white. The Tao symbolizes the realization that everything contains its opposite -- Ted Nelson put it that "everything is deeply intertwingled," and conceived of hypertext (a precursor of the Web) as a way to reflect that. But throughout human history this nuanced intertwingling has remained challenging for people to grasp.

Behavioral psychology has elucidated the mechanisms behind our difficulty. We are capable of deep and subtle rational thought (Kahneman's System 2, "thinking slow"), but we are pragmatic and lazy, and prefer the more instinctive, quick, and easy path (System 1, "thinking fast" -- a mode that offers great survival value when faced with urgent decisions). Only reluctantly do we think more deeply. The thinking fast of System 1 favors biased assimilation, with its reliance on "cognitive ease," quick reactions, and emotional and tribal appeal, rather than rationality.

Augmenting human intellect

For over half a century, a seminal dream of computer technology has been "augmenting human intellect" based on "man-computer symbiosis." The developers of our augmentation tools and our social media believed in their power to enhance community and wisdom -- but we failed to realize how easily our systems can reduce us to the lowest common denominator if we do not apply consistent and coherent measures to better augment the intelligence they automated. A number of early collaborative Web services recognized that some contributors should be more equal than others (for example, Slashdot, with its "karma" reputation system). Simple reputation systems have also proven important for eBay and other market services. However, the social media that came to dominate broader society failed to realize how important that is, and were motivated to "move fast and break things" in a rush to scale and profit.

Now, we are trying to clean up the broken mess of this Frankenberg's monster, to find ways to flag "fake news" in its various harmful forms. But we still seem not to be applying the seminal work in this field. That failure has made our use of the wisdom of crowds stupid to the point of catastrophe. Instead of augmenting our intellect as Engelbart proposed, we are de-augmenting it. People see what is popular, read a headline without reading the full story, jump to conclusions and "like" it, making it more popular, so more people see it. The headlines increasingly become clickbait that distorts the real story. Influence shifts from ideas to memes. This is clearly a vicious cycle -- one that the social media services have little economic incentive to change -- polarization increases engagement, which sells more ads. We urgently need fundamental changes to these systems.

Crowdsourced, domain-specific, authorities -- rating the raters -- much like Google

Raw forms of the wisdom of crowds look to "votes" from the crowd, weight them equally, and select the most popular or "liked" items (or a simple average of all votes). This has been done for forecasting, for citation analysis of academic papers, and in early computer searching. But it becomes apparent that this can lead to the lowest common denominator of wisdom, and is easily manipulated with fraudulent votes. Of course we can restrict this to curated "expert" opinion, but then we lose the wisdom of the larger crowd (including its ability to rapidly sense early signs of change).

It was learned that better results can be obtained by weighting votes based on authority, as done in Google's PageRank algorithm, so that votes with higher authority count more heavily (while still using the full crowd to balance the effects of supposed authorities who might be wrong). In academic papers, it was realized that it matters which journal cites an article (now that many low-quality pay-to-publish journals have proliferated).

In Google's search algorithm (dating from 1996, and continuously refined), it was realized that links from a widely-linked-to Web site should be weighted higher in authority than links from another that has few links in to it. The algorithm became recursive: PageRank (used to rank the top search results) depends on how many direct links come in, weighted by a second level factor of how many sites link in to those sites, and weighted in turn by a third level factor of how many of those have many inward links, and so on. Related refinements partitioned these rankings by subject domain, so that authority might be high in one domain, but not in others. The details of how many levels of recursion and how the weighting is done are constantly tuned by Google, but this basic rate the raters strategy is the foundation for Google's continuing success, even as it is now enhanced with many other "signals" in a continually adaptive way. (These include scoring based on analysis of page content and format to weight sites that seem to be legitimate above those that seem to be spam or link farms.)
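The recursive weighting just described can be sketched in a few lines. The following is a minimal illustration of how authority propagates through links, not Google's actual algorithm (which adds many more signals and refinements):

```python
# Toy PageRank: a page's authority is the weighted sum of the authority
# of the pages linking to it, iterated until the scores stabilize.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = set(links) | {dst for dsts in links.values() for dst in dsts}
    # Start with equal authority for every page.
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for src, dsts in links.items():
            if not dsts:
                continue
            # Each outbound link is a "vote," weighted by the linker's
            # own (recursively computed) authority.
            for dst in dsts:
                new_rank[dst] += damping * rank[src] / len(dsts)
        rank = new_rank
    return rank

# "c" is widely linked to, so it accumulates the most authority --
# and its own outbound link then carries more weight than a's or b's.
links = {"a": ["c"], "b": ["c"], "c": ["d"], "d": ["c"]}
ranks = pagerank(links)
```

Note how "d", linked to only by the high-authority "c", outranks "a" and "b" despite having the same number of inbound links: that is the rate-the-raters effect in miniature.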

Proposed methods and architecture

My patent disclosure explains much the same rate the raters strategy (call it RateRank?) as applicable to ranking items of nearly any kind, in a richly nuanced, open, social context for augmenting the wisdom of crowds. (It is a strategy that can itself be adapted and refined by augmenting the wisdom of crowds -- another case of "eat your own dog food!")

The core architecture works in terms of three major dimensions that apply to a full range of information systems and services:
  1. Items. These can be any kind of information item, including contribution items (such as news stories, blog posts, or social media posts, or even books or videos, or collections of items), comment/analysis items (including social media comments on other items), and rating/feedback items (including likes and retweets, as well as comments that imply a rating of another item)
  2. Participants (and communities and sub-communities of participants). These are individuals, who may or may not have specific roles (including submitters, commenters, raters, and special roles such as experts, moderators, or administrators). In social media systems, these might include people (with verified IDs or anonymous), collections of people in the form of businesses, commercial advertisers, political advertisers, and other organizations. (Special rules and restrictions might apply to non-human participants, including bots and corporate or state actors.) Communities of participants might be explicit (with controlled membership), such as Facebook groups, or implicit (and fuzzy), based on closeness of social graph relationships and domain interests. These might include communities of interest, practice, geographic locality, or degree of social graph closeness.
  3. Domains (and sub-domains). These may be subject-matter domains in various dimensions. Domains may overlap or cross-cut. (For example, issues about GMOs might involve cross-cutting scientific, business, governmental/regulatory, and political domains.)
An important aspect of generality in this architecture is that:
  • Any item or participant can be rated (explicitly or implicitly)
  • Any item can contain one or more ratings of other items or participants (and of itself)
It should be understood that Google's algorithm is a specialized instance of such an architecture -- one where all the items are Web pages, and all links between Web pages are implicit ratings of the link destination by the link source. The key element of man-computer symbiosis here is that the decision to place a link is assumed to be a "rating" decision of a human Webmaster or author (a vote for the destination, by the source, from the source context), but the analysis and weighting of those links (votes) is algorithmic. Much as could be applied to fake news, Google has developed finely tuned algorithms for detecting the multitudes of "link farms" that use bots that seek to fraudulently mimic this human intelligence, and downgrades the weighting of such links.
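As a concrete sketch of these three dimensions, consider a minimal data model. The names and fields here are purely illustrative (my own, not from the patent disclosure):

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    pid: str
    communities: set = field(default_factory=set)  # explicit or implicit

@dataclass
class Rating:
    rater: str              # id of the rating participant
    target: str             # id of the item or participant being rated
    value: float            # e.g. -1.0 (strong down) to +1.0 (strong up)
    domain: str = ""        # subject domain the rating applies to, if any
    implicit: bool = False  # True for likes, links, retweets, etc.

@dataclass
class Item:
    iid: str
    author: str
    domains: set = field(default_factory=set)
    ratings: list = field(default_factory=list)  # ratings this item contains

# A comment item that both discusses and explicitly rates a news story,
# illustrating that any item can contain ratings of other items:
story = Item("story-1", author="alice", domains={"science"})
comment = Item("comment-1", author="bob",
               ratings=[Rating("bob", "story-1", 0.8, domain="science")])
```

The point of the generality is visible even in this toy: a `Rating` can target an item or a participant, can be explicit or implicit, and carries the domain context needed for the partitioned authority discussed below.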

How the augmenting works

The heart of the method is a fully adaptive process that rates the raters recursively, using explicit and implicit ratings of items and raters (and potentially even the algorithms of the system itself). Rate the raters, rate those who rate the raters, and so on. Weight the ratings according to the rater's reputation (in context), so the wisest members of the crowd, in the current context, as judged by the crowd, have the most say. "Wisest in context" means wisest in the domains and communities that are most relevant to the current usage context. But still, all of the crowd should be considered at some level.

This causes good items and raters (and algorithms) to bubble up into prominence, and less well-rated ones to sink from prominence. This process would rarely be binary black and white. Highly rated items or participants can lose that rating over time, and in other contexts. Poorly rated items or participants might never be removed (except for extreme abuse) but simply downgraded (to contribute what small weight is warranted, especially if many agree on a contrary view) and can remain accessible with digging, when desired. (As noted below, our social media systems have become essential utilities, and exclusion of people or ideas on the fringe is at odds with the value of free speech in our open society.) The rules and algorithms could be continuously learning and adaptive, using a hybrid of machine learning and human oversight. 
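To make the recursion concrete, here is a toy sketch of one possible rate-the-raters loop, under heavy simplifying assumptions: a single domain and community, explicit numeric ratings in [0, 1], and a simple agreement-based reputation update. A real system would draw on far richer signals:

```python
def rate_the_raters(ratings, iterations=20):
    """ratings: list of (rater, item, value) tuples, value in [0, 1]."""
    raters = {r for r, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    reputation = {r: 1.0 for r in raters}  # start with equal say
    score = {i: 0.5 for i in items}
    for _ in range(iterations):
        # Item scores: reputation-weighted average of the ratings received.
        for i in items:
            votes = [(reputation[r], v) for r, it, v in ratings if it == i]
            total = sum(w for w, _ in votes)
            score[i] = sum(w * v for w, v in votes) / total
        # Rater reputations: closeness of their ratings to the weighted
        # consensus -- floored so no one is ever silenced entirely.
        for r in raters:
            errors = [abs(v - score[it]) for rr, it, v in ratings if rr == r]
            reputation[r] = max(0.05, 1.0 - sum(errors) / len(errors))
    return score, reputation

ratings = [("ann", "x", 0.9), ("ben", "x", 0.8),
           ("cal", "x", 0.1),                     # an outlier (or a bot)
           ("ann", "y", 0.7), ("ben", "y", 0.6), ("cal", "y", 0.9)]
scores, reps = rate_the_raters(ratings)
```

Run on this tiny example, cal's reputation sinks because his ratings consistently disagree with the weighted consensus, so his vote counts for less -- but it never drops to zero, reflecting the principle above that poorly rated participants are downgraded rather than removed.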

Attention management systems can ensure that the best items tend to be made most visible, and the worst least visible, but the system should adjust those rankings to the context of what is known about the user in general, and what is inferred about what the user is seeking at a given time -- with options for explicit overrides (much as Google adjusts its search rankings to the user and their current query patterns). It should be noted that Facebook and others already use some similar methods, but unfortunately these are oriented to maximizing an intensity of "engagement" that optimizes for the company's ad sale opportunities, rather than to a quality of content and engagement for the user. We need sophistication of algorithms, data science, and machine learning applied to quality for users, not just engagement for advertisers and those who would manipulate us.

Participants might be imputed high authority in one domain, or in one community, but lower in others. Movie stars might outrank Nobel prize-winners when considering a topic in the arts or even in social awareness, but not in economic theory. NRA members might outrank gun control opponents for members of an NRA community, but not for non-members of that community.
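That kind of context dependence can be sketched very simply: look up a participant's authority per domain, with a modest default so that every voice still counts at some level. The participants and weights here are hypothetical:

```python
DEFAULT_WEIGHT = 0.1  # everyone gets at least a small say

# Hypothetical per-domain authority, as might emerge from crowd ratings.
authority = {
    "movie_star": {"arts": 0.9, "economics": 0.1},
    "nobel_econ": {"arts": 0.2, "economics": 0.95},
}

def weight_for(participant, domain):
    return authority.get(participant, {}).get(domain, DEFAULT_WEIGHT)

def weighted_rating(votes, domain):
    """votes: list of (participant, value); returns context-weighted mean."""
    total = sum(weight_for(p, domain) for p, _ in votes)
    return sum(weight_for(p, domain) * v for p, v in votes) / total

# The same two votes yield different consensus depending on the domain.
votes = [("movie_star", 0.9), ("nobel_econ", 0.2)]
```

In an arts context the movie star's enthusiasm dominates; in an economics context the same votes produce a low consensus, because the weighting flips.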

Openness is a key enabling feature: these algorithms should not be monolithic, opaque, and controlled by any one system, but should be flexible, transparent, and adaptive -- and depend on user task/context/desires/skill at any given time. Some users may choose simple default processes and behaviors, but others could be enabled to mix and match alternative ranking and filtering processes, and to apply deeper levels of analytics to understand why the system is presenting a given view. Users should be able to modify the view they see as they may desire, either by changing parameters or swapping alternative algorithms. Such alternative algorithms could be from a single provider, or alternative sources in an open marketspace, or "roll your own."
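One way to realize such mix-and-match openness is to treat ranking algorithms as pluggable components behind a common interface. A hypothetical sketch (all names are illustrative):

```python
from typing import Callable, Dict

# A ranker maps (item, context) to a score; higher scores are shown first.
RANKERS: Dict[str, Callable] = {}

def register(name, ranker):
    """Providers (or users) register alternative ranking algorithms."""
    RANKERS[name] = ranker

def rank_feed(items, context, ranker_name="default"):
    ranker = RANKERS[ranker_name]
    return sorted(items, key=lambda item: ranker(item, context), reverse=True)

# Two alternative rankers a user could swap between:
register("default", lambda item, ctx: item["quality"])
register("serendipity",  # boost items from outside the user's own community
         lambda item, ctx: item["quality"]
         + (0.5 if item["community"] != ctx["community"] else 0.0))

items = [{"id": 1, "quality": 0.9, "community": "mine"},
         {"id": 2, "quality": 0.6, "community": "other"}]
```

With the "default" ranker this user sees the high-quality in-community item first; switching to the "serendipity" ranker surfaces the outside item -- the same feed, viewed through a different, user-chosen lens.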

Within this framework, key design factors include how these key processes are managed to work in concert, and to change how each of these behaves, for a given user, at a given time, depending on task/context/desires/skill (including the level of effort a user wishes to put in):
  • The core rate the raters process, based on both implicit and explicit ratings, weighted by authority as assessed by other raters (as themselves weighted based on ratings by others), with selective levels of partitioning by community and domain. Consideration of formal and institutional authority can be applied to partially balance crowdsourced authority. Dynamic selection of weighting and balancing methods might depend on user task/context/desires
  • Attention tools that filter irrelevant items and highlight relevant ones. Thus different Facebook or Twitter users might be able to get different views of their feed, and change those views as desired.
  • Consideration with regard to which communities and sub-communities most contribute to rankings for specific items at specific times. Communities might have graded openness (in the form of selectively permeable boundaries) to avoid groupthink and cross-fertilize effectively. This could be applied by using insider/outsider thresholds to manage separation/openness.
  • Consideration with regard to domains and sub-domains to maximize the quality and relevance of ratings, authority, and attention, and to avoid groupthink and cross-fertilize effectively.
  • Consideration of explicit vs. implicit ratings. While explicit ratings may provide the strongest and most nuanced information, implicit ratings may be far more readily available, thus representing a larger crowd, and so may have the greatest value in augmenting the wisdom of the crowd. Just as with search and ad targeting, implicit ratings can include subtle factors, such as measures of attention, sentiment, emotion, and other behaviors.
  • Consideration of verified vs. unverified vs. anonymous participants. It may be desirable to allow a range of levels, with weighting such that anonymous participants have no reputation or a negative reputation. Bots might be banned, or given very poor reputation.
  • Open creation, selection and use of alternative tools for filtering, discovery, attention/alerting, ranking, and analytics depending on user task/context/desires. This kind of openness can stimulate development and testing of creative alternatives and enable market-based selection of the best-suited tools.
  • Valuation, crowdfunding, recognition, publicity, and other non-monetary incentives can also be used to encourage productive and meaningful participation, to bring out the best of the crowd.
(As expanded on below, all of this should be done with transparency and user control.)

Applying this to social media -- fake news, community standards, polarization, and serendipity

A core objective is to augment the wisdom of crowds -- to benefit from the crowd to filter out the irrelevant or poor quality -- but to have augmented intelligence in determining relevance and quality in a dynamically nuanced way that reduces the de-augmenting effect of echo chambers and filter bubbles.

Using these methods, true fake news, which is clearly dishonest and created by known bad actors, can be readily filtered out, with low risk of blocking good-faith contrarian perspectives from quality sources. Such fake news can readily be distinguished from legitimate partisan spin (point and counterpoint), from legitimate criticism (a news photo of a Nazi sign) or historically important news items (the Vietnam "terror of war" photo), and from legitimate humor or satire.

A dilemma that has become very apparent in our social media relates to "community standards" for managing people and items that are "objectionable." Since our social media systems have become essential utilities, exclusion of people or ideas on the fringe is at odds with the rights of free speech in our open society. Jessica Lessin recently commented on Facebook's "clumsy" struggles with content moderation, and on the calls of some to ban people and items. She observes that Facebook wants the community to determine the rules, but also is pressed to placate regulators -- and observes that "getting two billion people to write your rules isn’t very practical."

"Getting two billion people to write your rules" is just what the augmented wisdom of crowds does seek to make practical -- and more effective than any other strategy. The rules would rarely ban people (real humans) or items, but simply limit their visibility beyond the participants and communities that choose to accept such people or items. Such "objectionable" people have no right to require they be granted wide exposure, and, at the same time, those who find some people or materials objectionable rarely have a right to insist on an absolute and total ban.

This ties back to the converse issue, the seeking of surprising validators and serendipity described in my 2015 post. By understanding the items and participants, how they are rated by whom, and how they fit into communities, social graphs, and domains, highly personalized attention management tools can minimize exposure to what is truly objectionable, but can find and present just the right surprising validators for each individual user (at times when they might be receptive). Similarly, these tools can custom-choose serendipitous items from other communities and domains that would otherwise be missed.

This is an area where advanced augmentation of crowd wisdom can become uniquely powerful. The mainstream will become more aware and accepting of fringe views and materials (and might set aside specific times for exploring such items), and the extremes will have the freedom to choose (1) whether they wish to make their case in a way that others can accept as unpleasant but not unreasonable and antisocial, or (2) to be placed beyond the pale of broader society: hard to find, but still short of total exclusion. Again, a high degree of customization can be applied (and varied with changing context). Those who want walled gardens can create them -- with windows and gates that open where and when desired.

Innovation, openness, transparency, and privacy

Of course the key issues are how do we apply quick fixes for our current crisis, how do we evolve toward better media ecosystems, and how do we balance privacy and transparency. I generally advocate for openness and transparency. 

The Internet and the early Web were built on openness and transparency, which fueled a huge burst of innovation.  (Just as I refer to my 2002 patent filing, one can make a broad argument that many of the most important ideas of digital society emerged around the time of that "dot-com" era or before.) Open, interoperable systems (both Web 1.0 and Web 2.0) enabled a thousand flowers to bloom. There are also similar lessons from systems for financial market data (one of the first great data market ecologies) fueled by open access to market data from trading exchanges, and to competing, interoperable distribution, analytics, and presentation services. The patent filing I describe here (and others of mine) build on similar openness and interoperability. 

Now that we have veered down a path of closed, monopolistic walled gardens that have gained great power, we face difficult questions of how to manage them for the public good. I suggest we probably need a mix of all five of the following. Determining just how to do that will be challenging. (Some suggestions related to each of these follow.)
  1. Can we motivate monopolies like Facebook to voluntarily shift to better serve us? Ideally, that would be the fastest solution, since they have full power to introduce such methods (and the skills to do so are much the same as the skills they now apply for targeting ads).
  2. Can we independently layer needed functions on top of such services (or in competition with them)? The questions are how to interface to existing services (with or without cooperation) and how to gain critical mass. Even at more limited scale, such secondary systems might provide augmented wisdom that could be fed back into the dominant systems, such as to help flag harmful items.
  3. Should we mandate regulatory controls, accepting these systems as natural monopolies to be regulated as such (much like early days of regulating the Bell System monopoly on telephonic media platforms)? There seem to be strong arguments for at least some of this, but being smart about it will be a challenge.
  4. Should we open them up or break portions of them apart (much like the later days of regulating the Bell System)? Here, too, there seem to be strong arguments for at least some of this, but being smart about it will be a challenge.
  5. Can we use regulation to force the monopolies to better serve their users (and society) by forcing changes in their business model (with incentives to serve users rather than advertisers)? I suggest that may be one of the most feasible and effective levers we can apply.
My suggestions about those alternatives:
A transparent society?

A central (and increasingly urgent) dilemma relates to privacy. Some of my suggestions for openness and transparency in our social media and similar collaborative systems could potentially conflict with privacy concerns. We may have to choose between strict privacy and smart, effective systems that create immense new value for users and society. We need to think more deeply about which objectives matter, and how to get the best mix. Privacy is an important human issue, but its role in our world of Big Data and AI is changing:
  • As David Brin suggested in The Transparent Society, the question of privacy is not just what is known about us, but who controls that information. Brin suggests the greatest danger is that authoritarian governments will control information and use it to control us (as China is increasingly on track to do).
  • We now face a similar concern with monopolies that have taken on quasi-governmental roles -- they seem to be answerable to no one, and are motivated not to serve their users, but to manipulate us to serve the advertisers from whom they profit. (There are also the advertisers themselves.)
  • Brin suggested our technology will return us to the more transparent human norms of the village -- everyone knew one another's secrets, but that created a balance of power where all but the most antisocial secrets were largely ignored and accepted. We seem to be well on the way to accepting less privacy, as long as our information is not abused.
  • I suggest we will gain the most by moving in the direction of openness and transparency -- with care to protect the aspects of privacy that really need protection (by managing well-targeted constraints on who has access to what, under what controls). 
That takes us back to the genius of man-computer symbiosis -- AI and machine learning thrive on big data. Locking up or siloing big data can cripple our ability to augment the wisdom of crowds and leave us at the mercy of the governments or businesses that do have our data. We need to find a wise middle ground of openness that fuels augmented intelligence and market forces -- in which service providers are driven by customer demand and desires, and constrained only by the precision-crafted privacy protections that are truly needed.

------

*Appendix -- My patent disclosure document (now in public domain)

This post draws on the architecture and methods described in detail in my US patent application entitled "Method and Apparatus for an Idea Adoption Marketplace" (10/692,974), which was published 9/17/04. It was filed 10/24/03, formalizing a provisional filing on 10/24/02. I released this material into the public domain on 12/19/16. I retain no patent rights in it, and it is open to all who can benefit from it.

A copy of that application, with highlighting of portions that remain most relevant to current needs, is now online. While it is written in the legalese style required for patent applications and is not very readable, it is hoped that the highlighted sections are accessible to those with interest.

The highlighted sections present a broad architecture that now seems more timely than ever, and provides an extensible framework for far better social media -- and important aspects of digital democracy in general.

Tuesday, June 26, 2018

AI = Augmented Intelligence: One More Time: Man + Machine (via HBR and SMR)

In a notable bit of synchronicity, the summer issues of both Harvard Business Review and MIT Sloan Management Review have feature articles advocating a more symbiotic approach to AI.

As Malone encapsulates it, what we need is, "an architecture for general purpose, problem-solving superminds: Computers use their specialized intelligence to solve parts of the problem, people use their general intelligence to do the rest, and computers help engage and coordinate far larger groups of people than has ever been possible."

Why do we keep forgetting how important such a symbiotic approach is?  As I have written multiple times on this blog (most recently in my last post):
Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.
The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.
Both articles are valuable updates and teachings on how and why to pursue this understanding. But why is it so hard to keep in mind that what we seek is not man or machine, but man augmented by machine?

Thursday, April 26, 2018

Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm

We dreamed that our Internet platforms would serve us miraculously, but now see that they have taken a wrong turn in many serious respects. That realization has reached a crescendo in the press and in Congress with regard to Facebook and Google's advertising-driven services, but it reaches far more deeply.

"Titans on Trial: Do internet giants have too much power? Should governments intervene?" -- I had the honor last night of attending this stimulating mock trial, with author Ken Auletta as judge and FTC Commissioner Terrell McSweeny and Rob Atkinson, President of the Information Technology and Innovation Foundation (ITIF), as opposing advocates (hosted by Genesys Partners). My interpretation of the jury verdict (voted by all of the attendees, who were mostly investors or entrepreneurs) was: yes, most agree that regulation is needed, but it must be nuanced and smartly done, not heavy-handed. Just how to do that will be a challenge, but it is a challenge that we must urgently consider.

I have been outlining views on this that go in some novel directions, but are generally consistent with the views of many other observers. This post takes a broad view of those suggestions, drawing from several earlier posts.

One of the issues touched on below is a core business model issue -- the idea that the ad-model of "free" services in exchange for attention to ads is "the original sin of the Internet." It has made users of Facebook and Google (and many others) "the product, not the customer," in a way that distorts incentives and fails to serve the user interest and the public interest. As the Facebook fiasco makes clear, these business model incentives can drive these platforms to provide just enough value to "engage" us to give up our data and attend to the advertiser's messages and manipulation and even to foster dopamine-driven addiction, but not necessarily to offer consumer value (services and data protection) that truly serves our interests.

That issue is specifically addressed in a series of posts in my other blog that focuses on a novel approach to business models (and regulation that centers on that), and those posts remain the most focused presentations on those particular issues.
The rest of this post adapts a broader outline of ideas previously embedded in a book review (of Niall Ferguson's "The Square and the Tower: Networks and Power from the Freemasons to Facebook," a historical review of power in the competing forms of networks and hierarchies). Here I abridge and update that post to concentrate on our digital platforms. (Some complementary points on the need for new thinking on regulation -- and the need for greater tech literacy and nuance -- are in a recent HBR article, "The U.S. Needs a New Paradigm for Data Governance.")

Rethinking our networks -- and the algorithms that make all the difference

Drawing on my long career as a systems analyst/engineer/designer, manager, entrepreneur, inventor, and investor (including early days in the Bell System when it was a regulated monopoly providing "universal service"), I have recently come to share the fear of many that we are going off the rails.

But in spite of the frenzy, it seems we are still failing to refocus on better ways to design, manage, use, and govern our networks -- to better balance the best of hierarchy and openness. Few who understand technology and policy are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control -- and with the innovative efficiency of an open market. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, to apply checks and balances that work much like our representative democracy. This is a still-formative development of ideas that I have written about for years, and plan to continue into the future.

First some context. The crucial differences among all kinds of networks (including hierarchies) are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. We now have the power to create a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change). Even our current social and information networks are constrained in important ways. Some examples:
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by, and private to, Facebook. (In January they announced a major change --  unilaterally -- perhaps for the better for users and society, if not for content publishers, but reports quickly surfaced that it had unintended consequences when tested.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
Our vaunted high-tech networks are controlled by corporate hierarchies (FANG: Facebook, Amazon, Netflix, and Google in much of the world, and BAT: Baidu, Alibaba, and Tencent in China) -- but are subject to limited levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly indirectly, in the form of advertising and sales of personal data.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

The answer to the question of governance is to make our network algorithms not only smarter, but more open to appropriate levels of individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century governments found need to regulate rail and telephone networks as common carriers, with limited corporate-owner power to control how they are used, giving marketplace players (competitors and consumers) a share in that control. 

Initially this was rigid and regulated in great detail by the government, but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget how only AT&T phones could be used (except for a few cases of alternative devices like early fax machines that went through cumbersome and often arbitrary AT&T approval processes). Remember the acoustic modem coupler, needed because modems could not be directly connected? That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack").

Similarly only AT&T long-distance connections could be used, until the antitrust Consent Decree opened up competition among the "Baby Bells" and broke them off from Long Lines to compete on equal terms with carriers like MCI and Sprint. Manufacturing was also opened to new competitors.

In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems interoperate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components interoperate as interchangeable parts -- extending the great advance of the first Industrial Revolution to software.

What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below (including the very relevant model of sophisticated interoperability in electronic mail networks), but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. (The picture here of the tar pits is from Fred Brooks' classic 1975 book, The Mythical Man-Month, drawn from IBM's first large software project, OS/360.) Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud that are composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, advertising, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published, open APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another, more hidden and abstract, kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government), but by government where necessary (preferably only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age of religious war that followed Gutenberg (as Ferguson described). Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and to mobilize to deal with it, and to get citizens and policymakers to understand the issues.

Once that progresses and is found to be effective, similar methods may eventually be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully noted the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, even the traditional principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet.")

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms -- which can then be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (initially limited to apps created by Apple), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their early fears). Why should first movers be allowed to block essential innovation? Why should they have sole control and knowledge of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control and understand, with just enough hierarchical structure to prevent excessive instability at any given time?

Consider the following specific areas of opportunity.

Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart.
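To make the pluggable-filter idea concrete, here is a minimal sketch (in Python, with purely hypothetical names -- no platform publishes this interface today) of how an open filter contract could let users swap in third-party feed algorithms:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    topic: str

# A filter is any function from a candidate feed to a ranked/reduced feed.
# The platform would publish this interface; third parties implement it;
# each user chooses which implementation their own feed runs through.
FeedFilter = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    # A trivial baseline filter: no reordering or removal.
    return posts

def topic_allowlist(topics: List[str]) -> FeedFilter:
    # A parameterized third-party filter: keep only chosen topics.
    def filter_fn(posts: List[Post]) -> List[Post]:
        return [p for p in posts if p.topic in topics]
    return filter_fn

def render_feed(posts: List[Post], user_filter: FeedFilter) -> List[Post]:
    # The platform applies whichever filter the user plugged in.
    return user_filter(posts)
```

The point of the sketch is the separation of roles: the platform controls the candidate feed, but the ranking logic becomes an interchangeable part, exactly as described above for financial-data analytics.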

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and want to think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage, and much as science is open to all, but driven by those who gain authority, temporary as that may be).
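As a toy illustration of that cyclic process -- a simple PageRank-style damped iteration, offered as a sketch rather than the specific method in my patent filing -- "rate the raters and weight the ratings" can be computed by repeatedly feeding each person's crowd-derived authority back into the weight of that person's own ratings:

```python
def rate_rank(ratings, damping=0.85, iterations=50):
    """Iteratively weight each rater's ratings by the rater's own
    crowd-derived authority, in the spirit of PageRank.

    ratings[i][j] is the rating (0 or higher) that person i gives
    person j; zero means "no rating given".
    """
    n = len(ratings)
    # Normalize each rater's outgoing ratings so they sum to 1:
    # a rater's influence is divided among those they endorse.
    norm = []
    for row in ratings:
        total = sum(row)
        norm.append([r / total if total else 0.0 for r in row])
    # Start from equal authority -- the "one person, one vote" baseline.
    authority = [1.0 / n] * n
    for _ in range(iterations):
        # Each person's new authority is the authority-weighted sum of
        # the ratings they receive, damped toward the uniform baseline.
        authority = [
            (1 - damping) / n
            + damping * sum(norm[i][j] * authority[i] for i in range(n))
            for j in range(n)
        ]
    # Normalize so the authorities sum to 1.
    total = sum(authority)
    return [a / total for a in authority]
```

Someone rated highly by other high-authority people ends up with more authority than someone rated only by low-authority people, and the cycle converges to a stable weighting -- the deeper wisdom about the votes, drawn from the crowd itself.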

Social graphs: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own the data defining my social graph relationships or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

User agents versus network service agents: Email systems were modularized in Internet standards long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with federated remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity.

Why not do much the same, to let any social media user agent interoperate with any other, using a federated social graph and federated message transfer agents? We could then set our user agent to apply filters to let us see whichever communities we want to see at any given time. Some startups have attempted to build stand-alone social networks that focus on sub-communities like family or close friends versus hundreds of more or less remote acquaintances. Why not just make that a flexible and dynamic option, that we can control at will with a single user agent? Why require a startup to build and scale all aspects of a social media service, when they could just focus on a specific innovation? (The social media UX can be made interoperable to a high degree across different user agents, just as email user agents handle HTML, images, attachments, emojis, etc. -- and as do competing Web browsers.)
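A bare-bones sketch of that separation of roles (the class names are illustrative, and an in-memory registry stands in for real federation protocols such as SMTP routing):

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str      # e.g. "alice@serviceA.example"
    recipient: str   # e.g. "bob@serviceB.example"
    body: str

class TransferAgent:
    """Stands in for a federated server: like an email mail transfer
    agent, it only routes messages between domains; it renders nothing."""
    def __init__(self, registry):
        self.registry = registry  # maps domain -> inbox (list of Messages)

    def deliver(self, msg: Message):
        domain = msg.recipient.split("@")[1]
        self.registry[domain].append(msg)

class UserAgent:
    """Stands in for the client a user chooses: like an email user agent
    or a browser, it composes and displays, but does not own the network."""
    def __init__(self, address: str, agent: TransferAgent):
        self.address, self.agent = address, agent

    def send(self, recipient: str, body: str):
        self.agent.deliver(Message(self.address, recipient, body))
```

Because the interface between the two layers is small and published, any user agent can innovate on presentation and filtering while any transfer agent handles delivery -- which is exactly why a startup could focus on one layer instead of rebuilding an entire social network.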

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). First class users would get validated identities, and be given full privileges, while anonymous users might be permitted but clearly flagged as such, with second class privileges. That would allow users to be exposed to anonymous content, when desired, but without confusion as to trust levels. Levels of identity could be clearly marked in feeds, and users could filter out anonymous or unverified users if desired. (We do already see some hints of this, but only to a very limited degree.)
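A sketch of how such graded identity might look in practice (the levels and names here are illustrative, not a proposed standard):

```python
from enum import IntEnum

class IdentityLevel(IntEnum):
    ANONYMOUS = 0       # permitted, but clearly flagged as unverified
    PSEUDONYMOUS = 1    # persistent handle, no real-world check
    VERIFIED_HUMAN = 2  # authenticated as a real human individual
    VERIFIED_ORG = 3    # authenticated as a legitimate organization

def visible(posts, minimum=IdentityLevel.ANONYMOUS):
    """Let each user set the lowest identity level they want to see.

    Each post is a dict carrying an 'identity' level; levels compare
    numerically, so filtering is a simple threshold.
    """
    return [p for p in posts if p["identity"] >= minimum]
```

The point is that identity need not be a binary choice: anonymous content can remain available to those who want it, clearly marked, while other users filter it out entirely.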

Value transfers and extractions: As noted above, another very important problem is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent.

I have done extensive work on alternative strategies that adaptively customize value propositions and prices to markets of one -- a new strategy for a new social contract that can shape our commercial relationships to sustain services in proportion to the value they provide, and our ability to pay, so all can afford service. A key part of the issue is to ensure that users are compensated for the value of the data they provide. That can be done as a credit against user subscription fees (a "reverse meter"), at levels that users accept as fair compensation. That would shift incentives toward satisfying users (effectively making the advertiser their customer, rather than the other way around). This method has been described in the Journal of Revenue and Pricing Management: “A novel architecture to monetize digital offerings,” and very briefly in Harvard Business Review. More detail is in my FairPayZone blog and my book (see especially the posts about the Facebook and Google business models that are listed in the opening section, above, and again at the end.*)
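A minimal sketch of the "reverse meter" arithmetic (all amounts are hypothetical, for illustration only):

```python
def monthly_bill(base_fee, data_value_credit, opted_out_of_ads=True):
    """'Reverse meter' sketch: credit the user for the value of the data
    (and attention) they contribute, offsetting a subscription fee.

    base_fee          -- hypothetical monthly subscription price
    data_value_credit -- agreed fair value of the user's contributed data
    """
    if not opted_out_of_ads:
        # The ad-supported "free" tier: no fee, but the user is the product.
        return 0.0
    # Opted-out users pay the fee net of their data credit, never below zero.
    return max(0.0, base_fee - data_value_credit)
```

Under this arrangement the platform's revenue from an opted-out user grows with the fee the user accepts, and shrinks as the user's data credit grows -- aligning the platform's incentive with delivering value the user will pay for.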

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties.

Sensitive data could be restricted to qualified organizations, with special controls to avoid issues like the Cambridge Analytica misuse. The answer to such abuse is not greater concentration of power in one platform, as Maurice Stucke points out in Harvard Business Review, "Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data." (Facebook has already moved toward greater concentration of power.)

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. We do that very successfully now with our choice of Web browsers and email user agents. We could have similar flexibility and choice in our platforms -- innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- in ways more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.

Reality is not binary:  "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was another of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." As he put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts that are so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, many of us discount how non-linear the effect of smarter networks, with more dynamically augmented and balanced structures, can be. But we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement, with elements of regulation, but primarily reliant on flexible, modular architectural design.

---

Coda:  On becoming more smartly intertwingled

Everything in our world has always been deeply intertwingled. Human intellect augmented with technology enables us to make our world more smartly intertwingled. But we have lost our way, in the manner that Engelbart alluded to in his illustration of de-augmentation -- we are becoming deeply polarized, addicted to self-destructive dopamine-driven engagement without insight or nuance. We are being de-augmented by our own technology run amok.


(I plan to re-brand this blog as "Smartly Intertwingled" -- that is the objective that drives my work. The theme of "User-Centered Media" is just one important aspect of that.)


--------------------------------------------------------------------------------------------

*On business models - FairPay (my other blog): As noted above, a series of posts in my other blog focus on a novel approach to business models (and regulation that centers on that), and those posts remain my best presentation on those issues.

Saturday, January 13, 2018

"The Square and the Tower" — Augmenting and Modularizing the Algorithm (a Review and Beyond)

[Note: A newer post updates this one and removes much of the book review portion, to concentrate on the forward-looking platform issues: Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm.]

---
Niall Ferguson's new book, The Square and the Tower: Networks and Power from the Freemasons to Facebook is a sweeping historical review of the perennial power struggle between top-down hierarchies and more open forms of networks. It offers a thought-provoking perspective on a wide range of current global issues, as the beautiful techno-utopian theories of free and open networks increasingly face murder by two brutal gangs of facts: repressive hierarchies and anarchistic swarms.

Ferguson examines the ebb and flow of power, order, and revolution, with important parallels between the Gutenberg revolution (which led to 130 years of conflict) and our digital revolution, as well as much in between. There is valuable perspective on the interplay of social networks (old and new), the hierarchies of governments (liberal and illiberal), anarchists/terrorists, and businesses (disruptive and monopolistic). One can disagree with Ferguson's conservative politics yet find his analysis illuminating.

Drawing on a long career as a systems analyst/engineer/designer, manager, entrepreneur and inventor, I have recently come to share much of Ferguson's fear that we are going off the rails. He cites important examples like the 9/11 attacks, counterattacks, and ISIS, the financial meltdown of 2008, and most concerning to me, the 2016 election as swayed by social media and hacking. However -- discouraging as these are -- he seems to take an excessively binary view of network structure, and to discount the ability of open networks to better reorganize and balance excesses and abuse. He argues that traditional hierarchies should reestablish dominance.

In that regard, I think Ferguson fails to see the potential for better ways to design, manage, use, and govern our networks -- and to better balance the best of hierarchy and openness. To be fair, few technologists are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, applying checks and balances that work much like our representative democracy. This is a still-formative exploration of some ideas that I have written about, and plan to expand on in the future. First some context.

The Square (networks), the Tower (hierarchies) and the Algorithms that make all the difference

Ferguson's title comes from his metaphor of the medieval city of Siena, with a large public square that serves as a marketplace and meeting place, and a high tower of government (as well as a nearby cathedral) that displayed the power of those hierarchies. But as he elaborates, networks have complex architectures and governance rules that are far richer than the binary categories of either "network" (a peer-to-peer network with informal and emergent rules) or "hierarchy" (a constrained network with more formal directional rankings and restrictions on connectivity).

The crucial differences among all kinds of networks are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. While his analysis draws out the rich variety of such structures in many interesting examples, with diagrams, what he seems to miss is any suggestion of a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable -- far more emergent in both structure and policy, while still subject to the basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change) or constrained by laws and customs (and thus resistant to change) -- and even our current social and information networks are constrained in important ways. For example:
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by and private to Facebook. (They have just announced a major change -- unilaterally -- hopefully for the better for users and society, if not for content publishers.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
As Ferguson points out, our vaunted high-tech networks are controlled by corporate hierarchies (he refers to FANG, Facebook, Amazon, Netflix, and Google, and BAT, Baidu, Alibaba, and Tencent) -- but subject to levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly in the form of advertising and sales of personal data.

A fuller summary of Ferguson's message is in his WSJ preview article, "In Praise of Hierarchy." That headline signals which side of the fence he is on.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

My view on this is more positive -- the answer to the question of governance is to make our network algorithms not only smarter, but more open to individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found it necessary to regulate rail and telephone networks as common carriers, limiting the corporate owners' power to control how those networks were used and giving marketplace players (competitors and consumers) a large share in that control.

Initially this was rigid and regulated in great detail by the government (very hierarchical), but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget that only AT&T phones could be used -- except for special cases of alternative devices, like early faxes (Xerox "telecopiers"), that went through cumbersome and often arbitrary AT&T approval processes. That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack"). Similarly, only AT&T long-distance connections could be used, until the antitrust Consent Decree broke the regional "Baby Bells" off from AT&T Long Lines and opened long-distance service to competition on equal terms from carriers like MCI and Sprint.

In software systems, such plug-like interfaces are known as APIs (Application Programming Interfaces), and are now widely accepted as the standard way to let systems inter-operate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components inter-operate as interchangeable parts -- the great advance of the first Industrial Revolution.
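To make the jack analogy concrete, here is a minimal sketch (in Python, with hypothetical names -- not any vendor's actual API) of how a software "jack" works: the caller depends only on a published interface, so any conforming implementation can be plugged in, just as any standards-compliant device could plug into an RJ11 jack.

```python
from typing import Protocol

class MessageTransport(Protocol):
    """The 'jack': any transport satisfying this interface can be plugged in."""
    def send(self, recipient: str, body: str) -> None: ...

class LoggingTransport:
    """One interchangeable implementation; a competitor could supply another."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, recipient: str, body: str) -> None:
        self.sent.append((recipient, body))

def notify(transport: MessageTransport, user: str) -> None:
    # The caller depends only on the interface, not on who built the transport.
    transport.send(user, "hello")

t = LoggingTransport()
notify(t, "alice")
print(t.sent)  # [('alice', 'hello')]
```

The point of the sketch is that `notify` never names a vendor: anything satisfying the interface is substitutable, which is exactly the property a common-carrier API mandate would protect.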

What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below, but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge that Ferguson makes clear is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services, and how to govern the policies and algorithms that are applied at multiple levels in the design of these systems (another kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to act, probably faster and more effectively than government) and government regulation where necessary (and only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age that followed Gutenberg. Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and mobilize to deal with it.

Once that progresses and is found to be effective, similar methods can be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully notes the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, the principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet" -- another tower.)

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms, to be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (limited to apps created by Apple or Motorola or Verizon), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their fears). Why should first movers be allowed to block essential innovation? Why should they have sole control of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control, with just enough hierarchical structure to prevent excessive instability at any given time?

Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow for separate filters that inter-operate. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filters. This would be much like third-party analytics in financial market data systems. Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful.
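As a rough sketch of what pluggable filtering could look like (all names and signals here are hypothetical, not any platform's actual API): the platform exposes posts with their signals through a fixed interface, and the user chooses -- or swaps in -- the scoring function that ranks their feed.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    engagement: float   # platform-supplied popularity signal (hypothetical)
    diversity: float    # e.g., distance from the user's usual sources (hypothetical)

# A filter is just a scoring function; higher score = shown earlier.
FilterFn = Callable[[Post], float]

def engagement_filter(p: Post) -> float:
    """The default most platforms optimize for today."""
    return p.engagement

def serendipity_filter(p: Post) -> float:
    """A user-chosen alternative: blend popularity with out-of-bubble exposure."""
    return 0.5 * p.engagement + 0.5 * p.diversity

def render_feed(posts: list[Post], score: FilterFn) -> list[str]:
    return [p.author for p in sorted(posts, key=score, reverse=True)]

posts = [Post("a", "...", 0.9, 0.1), Post("b", "...", 0.4, 0.9)]
print(render_feed(posts, engagement_filter))   # ['a', 'b']
print(render_feed(posts, serendipity_filter))  # ['b', 'a']
```

The design point: `render_feed` belongs to the platform, but the scoring function is a plug-in, so a third party (or the user) can substitute it without touching the core service.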

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but what I propose seeks the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage).
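A minimal sketch of this rate-the-raters cycle (a hypothetical "RateRank," loosely analogous to PageRank -- not Google's actual algorithm, and deliberately simplified): each person's ratings are weighted by their reputation, and reputations are in turn recomputed from the weighted ratings others give them, iterated toward a fixed point.

```python
def rate_rank(ratings: dict[str, dict[str, float]],
              iterations: int = 50) -> dict[str, float]:
    """ratings[rater][ratee] = a score in [0, 1]. Returns reputation per person."""
    people = set(ratings) | {p for rated in ratings.values() for p in rated}
    rep = {p: 1.0 for p in people}  # start with equal authority for everyone
    for _ in range(iterations):
        new = {}
        for p in people:
            # Reputation = ratings received, weighted by each rater's reputation.
            received = [(rep[rater], score)
                        for rater, rated in ratings.items()
                        for ratee, score in rated.items() if ratee == p]
            total_w = sum(w for w, _ in received)
            new[p] = (sum(w * s for w, s in received) / total_w
                      if total_w else rep[p])   # unrated people keep their score
        rep = new
    return rep

# Two equally reputable raters disagree about "carol": her standing averages out.
rep = rate_rank({"alice": {"carol": 1.0}, "bob": {"carol": 0.0}})
print(round(rep["carol"], 2))  # 0.5
```

In a real deployment the interesting behavior comes from cycles -- raters rating each other -- where the iteration lets well-regarded raters' judgments count for more, which is the "deeper wisdom about the votes" the crowd can supply.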

Social graphs and user agents: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform's social graph? Does the platform own my social graph, or do I? Does the platform control how that affects my filter, or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

Email systems were modularized long ago: we compose and read mail using user agents (Outlook, Apple Mail, Gmail, and others) that connect with remote mail transfer agent servers (which we may barely be aware of), and those interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, enabling universal connectivity. Why not do the same to let any social media user agent inter-operate with any other, using a common social graph? We would then set our user agent to apply filters to let us see whichever communities we want to see at any given time.

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). These could then be clearly marked in feeds, and users could filter out anonymous or unverified users if desired.
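The graded-identity idea above can be sketched simply (illustrative names only, not any platform's actual scheme): accounts carry a verification level, feeds mark it, and each user chooses the minimum level they are willing to see.

```python
from dataclasses import dataclass
from enum import IntEnum

class IdentityLevel(IntEnum):
    ANONYMOUS = 0        # no identity claim at all
    PSEUDONYMOUS = 1     # persistent handle, unverified
    VERIFIED_HUMAN = 2   # authenticated as a real individual
    VERIFIED_ORG = 3     # authenticated as a legitimate organization

@dataclass
class Account:
    handle: str
    level: IdentityLevel

def visible(feed: list[Account], minimum: IdentityLevel) -> list[str]:
    """Each user sets their own floor; levels would be clearly marked in feeds."""
    return [a.handle for a in feed if a.level >= minimum]

feed = [Account("bot123", IdentityLevel.ANONYMOUS),
        Account("jsmith", IdentityLevel.VERIFIED_HUMAN)]
print(visible(feed, IdentityLevel.VERIFIED_HUMAN))  # ['jsmith']
print(visible(feed, IdentityLevel.ANONYMOUS))       # ['bot123', 'jsmith']
```

Because the choice is a per-user filter parameter rather than a platform-wide policy, anonymity can survive for those who value it without being imposed on those who do not.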

Value transfers and extractions: Another important problem, one that Ferguson cites, is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (ending advertising and making the consumer the customer), those platforms could be required to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out by paying compensatory fees -- thus becoming the customer. If significant numbers of users opted to become the customer, the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent. (I have done extensive work on such alternative strategies, as described in my FairPayZone blog and my book.)

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties. (Sensitive data could be restricted to qualified organizations.) [This paragraph added 1/14.]

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. But we could have the choice, and innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that Ferguson (and many others) neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), to when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of mundane uses of computers, that proved eye-opening ("the mother of all demos") at a 1968 conference, and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.

Not binary:  networks versus hierarchies -- "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was also one of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." Ferguson's exposition illuminates how true that is of history. Unfortunately, his artificially binary dichotomy of hierarchies versus networks tends to mask this, and seems to blind him to how much more intertwingled we can expect our networks to be in the future. As Nelson put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts of the Square and the Tower that Ferguson reveals as so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, Ferguson seems to discount how non-linear the effect of smarter networks with more dynamically augmented and balanced structures can be. But he is right to be very fearful, and to raise the alarm -- we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement.

---

First published in Reisman on User-Centered Media, 1/13/18.