Thursday, April 26, 2018

Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm

We dreamed that our Internet platforms would serve us miraculously, but now see that they have taken a wrong turn in many serious respects. That realization has reached a crescendo in the press and in Congress with regard to Facebook and Google's advertising-driven services, but it reaches far more deeply.

"Titans on Trial: Do internet giants have too much power? Should governments intervene?" -- I had the honor last night of attending this stimulating mock trial, with author Ken Auletta as judge and FTC Commissioner Terrell McSweeny and Rob Atkinson, President of the Information Technology and Innovation Foundation (ITIF) as opposing advocates (hosted by Genesys Partners). My interpretation of the the jury verdict (voted by all of the attendees, who were mostly investors or entrepreneurs) was: yes, most agree that regulation is needed, but it must be nuanced and smartly done, not heavy handed. Just how to do that will be a challenge, but it is a challenge that we must urgently consider.

I have been outlining views on this that go in some novel directions, but are generally consistent with the views of many other observers. This post takes a broad view of those suggestions, drawing from several earlier posts.

One of the issues touched on below is a core business model issue -- the idea that the ad model of "free" services in exchange for attention to ads is "the original sin of the Internet." It has made users of Facebook and Google (and many others) "the product, not the customer," in a way that distorts incentives and fails to serve the user interest or the public interest. As the Facebook fiasco makes clear, these business model incentives can drive platforms to provide just enough value to "engage" us -- so that we give up our data, attend to advertisers' messages and manipulation, and even fall into dopamine-driven addiction -- but not necessarily to offer consumer value (services and data protection) that truly serves our interests.

That issue is specifically addressed in a series of posts in my other blog that focuses on a novel approach to business models (and regulation that centers on that), and those posts remain the most focused presentations on those particular issues (see the list at the end of this post).
The rest of this post adapts a broader outline of ideas previously embedded in a book review (of Niall Ferguson's "The Square and the Tower: Networks and Power from the Freemasons to Facebook," a historical review of power in the competing forms of networks and hierarchies). Here I abridge and update that post to concentrate on our digital platforms. (Some complementary points on the need for new thinking on regulation -- and the need for greater tech literacy and nuance -- are in a recent HBR article, "The U.S. Needs a New Paradigm for Data Governance.")

Rethinking our networks -- and the algorithms that make all the difference

Drawing on my long career as a systems analyst/engineer/designer, manager, entrepreneur, inventor, and investor (including early days in the Bell System when it was a regulated monopoly providing "universal service"), I have recently come to share the fear of many that we are going off the rails.

But in spite of the frenzy, it seems we are still failing to refocus on better ways to design, manage, use, and govern our networks -- to better balance the best of hierarchy and openness. Few who understand technology and policy are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control -- and with the innovative efficiency of an open market. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules, one that blends top-down stability with bottom-up emergence, applying checks and balances that work much like our representative democracy. This is a still-formative development of ideas that I have written about for years, and plan to continue into the future.

First some context. The crucial differences among all kinds of networks (including hierarchies) are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. We now have the power to create a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change). Even our current social and information networks are constrained in important ways. Some examples:
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by, and private to, Facebook. (In January they announced a major change --  unilaterally -- perhaps for the better for users and society, if not for content publishers, but reports quickly surfaced that it had unintended consequences when tested.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
Our vaunted high-tech networks are controlled by corporate hierarchies (FANG: Facebook, Amazon, Netflix, and Google in much of the world, and BAT: Baidu, Alibaba, and Tencent in China) -- but are subject to limited levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly indirectly, in the form of advertising and sales of personal data.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

The answer to the question of governance is to make our network algorithms not only smarter, but more open to appropriate levels of individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found it necessary to regulate rail and telephone networks as common carriers, limiting corporate-owner power to control how those networks are used and giving marketplace players (competitors and consumers) a share in that control.

Initially this was rigid and regulated in great detail by the government, but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget that only AT&T phones could be used (except for a few alternative devices, like early fax machines, that went through cumbersome and often arbitrary AT&T approval processes). Remember the acoustic coupler, needed because modems could not be directly connected? That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack").

Similarly, only AT&T long-distance connections could be used, until the antitrust Consent Decree broke the "Baby Bells" off from AT&T Long Lines and opened long-distance service to competition on equal terms from carriers like MCI and Sprint. Manufacturing was also opened to new competitors.

In software systems, such plug-like interfaces are known as APIs (Application Programming Interfaces), and are now widely accepted as the standard way to let systems interoperate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components interoperate as interchangeable parts -- extending the great advance of the first Industrial Revolution to software.
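To make the jack analogy concrete, here is a minimal sketch (in Python, with invented names -- not any real system's API) of how an API-as-contract lets components from different makers plug in as interchangeable parts:

```python
from typing import Protocol

# Hypothetical example: the API is a contract, like a standardized jack.
# Any component that satisfies the interface can be plugged in,
# regardless of who built it.

class SpellChecker(Protocol):
    def check(self, text: str) -> list[str]:
        """Return a list of misspelled words."""
        ...

class SimpleChecker:
    """One vendor's implementation; a competitor could supply another."""
    def __init__(self, dictionary: set[str]):
        self.dictionary = dictionary

    def check(self, text: str) -> list[str]:
        return [w for w in text.lower().split() if w not in self.dictionary]

def edit_document(text: str, checker: SpellChecker) -> list[str]:
    # The editor depends only on the interface, not the implementation --
    # components interoperate "just enough, but no more."
    return checker.check(text)

print(edit_document("helo world", SimpleChecker({"hello", "world"})))  # ['helo']
```

The calling code never needs to know who built the checker or how it works -- exactly the property that lets an open market of competing components form around a stable interface.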

What I suggest as the next step in the evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Antitrust protections would then mandate open APIs to allow substitution of key components by customers -- enabling them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below (including the very relevant model of sophisticated interoperability in electronic mail networks), but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. (The picture here of the tar pits is from Fred Brooks' classic 1975 book, The Mythical Man-Month, based on IBM's first large software project.) Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud that are composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, advertising, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published, open APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services, and on how to govern the policies and algorithms that are applied at multiple levels in the design of these systems (another, more hidden and abstract, kind of hierarchy). That can be done through industry self-regulation where a few major players have the capability to act (probably faster and more effectively than government), and by government where necessary (preferably only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential if we are not to endure a new age of disorder, revolution, and war, much like the age of religious war that followed Gutenberg (as Ferguson describes). Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, to mobilize to deal with it, and to get citizens and policymakers to understand the issues.

Once that progresses and is found to be effective, similar methods may eventually be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully noted the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, even the traditional principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet.")

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms -- improvements that can then be selected by consumer market forces. When we moved beyond the restrictive walled gardens of AOL and the early closed app stores (initially limited to apps created by Apple), we unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their early fears). Why should first movers be allowed to block essential innovation? Why should they have sole control and knowledge of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control and understand, with just enough hierarchical structure to prevent excessive instability at any given time?

Consider the following specific areas of opportunity.

Filtering rules: Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, issues of homophily, filter bubbles, echo chambers, fake news, and spoofing are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters or by substituting pluggable alternative filter algorithms. (This would work much like third-party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would also stimulate innovation to create new kinds of filters that might be far more useful and smart.
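As a rough illustration of what user-selectable filtering could look like, here is a minimal Python sketch (all names are invented for illustration, not any platform's actual API) in which the platform serves the candidate posts and the user plugs in the ranking algorithm:

```python
from typing import Callable

# Hypothetical sketch: the platform exposes the raw candidate feed through
# an open API, and the user chooses which filter algorithm orders it.

Post = dict  # e.g. {"topic": ..., "engagement": ..., "timestamp": ...}
FilterAlgorithm = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    """One pluggable filter: newest first."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def engagement_ranked(posts: list[Post]) -> list[Post]:
    """Another pluggable filter: most engaging first."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

class Platform:
    """The platform controls the core service; the user controls the filter."""
    def __init__(self, candidate_posts: list[Post]):
        self.candidate_posts = candidate_posts

    def feed(self, user_filter: FilterAlgorithm) -> list[Post]:
        return user_filter(self.candidate_posts)

posts = [
    {"topic": "news", "engagement": 900, "timestamp": 1},
    {"topic": "family", "engagement": 5, "timestamp": 2},
]
platform = Platform(posts)
# Swap filters freely -- a third party could supply either one.
assert platform.feed(chronological)[0]["topic"] == "family"
assert platform.feed(engagement_ranked)[0]["topic"] == "news"
```

The point of the sketch is the separation of concerns: the platform need only publish the feed interface, and an open market of filters (or human curators behind the same interface) can compete on the rest.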

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and want to think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but,  I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage, and much as science is open to all, but driven by those who gain authority, temporary as that may be).

Social graphs: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own the data defining my social graph relationships, or do I? Does the platform control how that affects my filter, or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

User agents versus network service agents: Email systems were modularized in Internet standards long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with federated remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity.

Why not do much the same, to let any social media user agent interoperate with any other, using a federated social graph and federated message transfer agents? We could then set our user agent to apply filters to let us see whichever communities we want to see at any given time. Some startups have attempted to build stand-alone social networks that focus on sub-communities like family or close friends versus hundreds of more or less remote acquaintances. Why not just make that a flexible and dynamic option, that we can control at will with a single user agent? Why require a startup to build and scale all aspects of a social media service, when they could just focus on a specific innovation? (The social media UX can be made interoperable to a high degree across different user agents, just as email user agents handle HTML, images, attachments, emojis, etc. -- and as do competing Web browsers.)
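Here is a toy Python sketch of that email-style federation applied to social media (all classes, domains, and addresses are invented for illustration): user agents talk to their home transfer agent, transfer agents relay to one another, and filtering happens in the user agent, under user control:

```python
# Hypothetical federation sketch, modeled on the email architecture above.

class TransferAgent:
    """Like a mail transfer agent: stores and forwards for its own users."""
    directory: dict[str, "TransferAgent"] = {}  # federated routing table

    def __init__(self, domain: str):
        self.domain = domain
        self.mailboxes: dict[str, list[dict]] = {}
        TransferAgent.directory[domain] = self

    def register(self, user: str):
        self.mailboxes[user] = []

    def relay(self, message: dict):
        # Route to whichever transfer agent serves the recipient's domain.
        user, domain = message["to"].split("@")
        TransferAgent.directory[domain].mailboxes[user].append(message)

class UserAgent:
    """Like a mail client: any user agent works with any transfer agent."""
    def __init__(self, address: str, home: TransferAgent):
        self.address = address
        self.home = home
        home.register(address.split("@")[0])

    def post(self, to: str, body: str):
        self.home.relay({"from": self.address, "to": to, "body": body})

    def feed(self, filter_fn=lambda msgs: msgs):
        # The user, not the platform, chooses the filter applied here.
        return filter_fn(self.home.mailboxes[self.address.split("@")[0]])

a = TransferAgent("bigplatform.example")
b = TransferAgent("indie.example")
alice = UserAgent("alice@bigplatform.example", a)
bob = UserAgent("bob@indie.example", b)
alice.post("bob@indie.example", "hello across platforms")
assert bob.feed()[0]["body"] == "hello across platforms"
```

A startup could then ship just a better user agent (or just a better filter) and still reach everyone, instead of having to build and scale an entire social network.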

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). First-class users would have validated identities and full privileges, while anonymous users might be permitted but clearly flagged as such, with second-class privileges. That would allow users to be exposed to anonymous content, when desired, but without confusion as to trust levels. Levels of identity could be clearly marked in feeds, and users could filter out anonymous or unverified users if desired. (We do already see some hints of this, but only to a very limited degree.)
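A minimal sketch of how tiered identity might work in practice (the levels and labels are my own illustrative assumptions, not a proposed standard):

```python
from enum import IntEnum

# Hypothetical identity tiers: carried with every post, so user agents can
# label content by trust level or filter it out entirely.

class IdentityLevel(IntEnum):
    ANONYMOUS = 0
    PSEUDONYMOUS = 1    # persistent alias, unverified
    VERIFIED_HUMAN = 2  # authenticated as a real individual
    VERIFIED_ORG = 3    # authenticated legitimate organization

def render_feed(posts: list[dict], minimum: IdentityLevel) -> list[str]:
    """Show only posts at or above the user's chosen trust level,
    clearly flagging each with its identity level."""
    return [f'[{p["level"].name}] {p["text"]}'
            for p in posts if p["level"] >= minimum]

posts = [
    {"level": IdentityLevel.ANONYMOUS, "text": "unattributed claim"},
    {"level": IdentityLevel.VERIFIED_HUMAN, "text": "signed opinion"},
]
# One user allows anonymous content (clearly flagged); another filters it out.
assert len(render_feed(posts, IdentityLevel.ANONYMOUS)) == 2
assert render_feed(posts, IdentityLevel.VERIFIED_HUMAN) == ["[VERIFIED_HUMAN] signed opinion"]
```

Because the level travels with the post rather than gating the whole network, anonymity remains possible without confusion about trust.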

Value transfers and extractions: As noted above, another very important problem is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (ending advertising and making the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out by paying compensatory fees -- thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent.

I have done extensive work on alternative strategies that adaptively customize value propositions and prices to markets of one -- a new strategy for a new social contract that can shape our commercial relationships to sustain services in proportion to the value they provide and our ability to pay, so all can afford service. A key part of the issue is to ensure that users are compensated for the value of the data they provide. That can be done as a credit against user subscription fees (a "reverse meter"), at levels that users accept as fair compensation. That would shift incentives toward satisfying users (effectively making the advertiser their customer, rather than the other way around). This method has been described in the Journal of Revenue and Pricing Management: “A novel architecture to monetize digital offerings,” and very briefly in Harvard Business Review. More detail is in my FairPayZone blog and my book (see especially the posts about the Facebook and Google business models that are listed in the opening section, above, and again at the end.*)
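The "reverse meter" arithmetic is simple to sketch (all rates and amounts here are invented for illustration, not drawn from the published method):

```python
# Hypothetical reverse-meter sketch: the value of the attention and data a
# user contributes is credited against their subscription fee, so those who
# opt in to ads and data sharing may pay little or nothing.

def monthly_bill(base_fee: float, ad_views: int, data_events: int,
                 rate_per_ad: float = 0.02, rate_per_event: float = 0.01) -> float:
    """Fee minus credits for attention and data, floored at zero."""
    credit = ad_views * rate_per_ad + data_events * rate_per_event
    return max(0.0, base_fee - credit)

# A user who opts out of ads and data sharing pays the full fee...
assert monthly_bill(10.0, ad_views=0, data_events=0) == 10.0
# ...while one who opts in earns credits that offset most of it.
assert monthly_bill(10.0, ad_views=200, data_events=300) == 3.0
```

The incentive shift is the point: once credits must be high enough that users accept them as fair compensation, the platform is bidding for the user's data and attention rather than quietly extracting it.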

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties.

(Sensitive data could be restricted to qualified organizations, with special controls to avoid issues like the Cambridge Analytica misuse. The answer to such abuse is not greater concentration of power in one platform, as Maurice Stucke points out in Harvard Business Review: "Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data." Facebook has already moved toward greater concentration of power.)

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. We do that very successfully now with our choice of Web browsers and email user agents. We could have similar flexibility and choice in our platforms -- innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- in ways more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), to when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to pursue hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom-up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.

Reality is not binary:  "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was another of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." As he put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts that are so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, many of us discount how non-linear the effect of smarter networks, with more dynamically augmented and balanced structures, can be. But we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement, with elements of regulation, but primarily reliant on flexible, modular architectural design.

[Update 12/14/20] A specific proposal - Stanford Working Group on Platform Scale

An important proposal that gets at the core of the problems in media platforms was published in Foreign Affairs: "How to Save Democracy From Technology," by Francis Fukuyama and others. See also the report of the Stanford Working Group. The idea is to let users control their social media feeds with open-market interoperable filters. That is something I proposed here (in the "Filtering rules" section, above). Other regulatory proposals that include some of the suggestions made here are summarized in Regulating our Platforms -- A Deeper Vision.

See the Selected Items tab for more on this theme.


Coda:  On becoming more smartly intertwingled

Everything in our world has always been deeply intertwingled. Human intellect augmented with technology enables us to make our world more smartly intertwingled. But we have lost our way, in the manner that Engelbart alluded to in his illustration of de-augmentation -- we are becoming deeply polarized, addicted to self-destructive dopamine-driven engagement without insight or nuance. We are being de-augmented by our own technology run amok.

(I plan to re-brand this blog as "Smartly Intertwingled" -- that is the objective that drives my work. The theme of "User-Centered Media" is just one important aspect of that.)


*On business models - FairPay (my other blog):  As noted above, a series of posts in my other blog focus on a novel approach to business models (and regulation that centers on that), and those posts remain my best presentation on those issues: