Sunday, October 20, 2019

Our Digital Platforms -- What We Want to Regulate For (Not Just Against) [Preliminary Draft]

This is now superseded by an updated and expanded post:
Regulating our Platforms -- A Deeper Vision (Working Draft)

Preliminary Draft:  This post is an initial summary of my comments at this event, as sent on 10/20/19 to some of the speakers and attendees. (I am writing an expanded post to supersede this.)
---

Summary and Expansion of Dick Reisman’s comments on attending GMF 10/17/19 event:

I very much support the proposals for a New Digital Platform Authority (as detailed in the excellent background items[*] cited on the event page) and offer some innovative perspectives.  I welcome dialog and opportunities to participate in and support related efforts. 

(My background is complementary to that of most of the attendees -- diverse roles in media-tech as a manager, entrepreneur, inventor, and angel investor.  I became interested in hypermedia and collaborative social decision support systems around 1970, and observed the regulation of the Bell System, IBM, Microsoft, the Internet, and cable TV from within the industry.  As a successful inventor with over 50 software patents that have been widely licensed to serve billions of users, I have a proven talent for seeing what technology can do for people.  Extreme disappointment about the harmful misdirection of recent developments in platforms and media has spurred me to continue work on this theme on a pro bono basis.)  

My general comment is that for tech to serve democracy, we not only need to regulate to limit monopolies and other abuses, but also need to regulate with a vision of what tech should do for us -- to better enable regulation to facilitate that, and to recognize the harms of failing to do so.  If we don’t know what we should expect our systems to do, it is hard to know when or how to fix them.  The harm Facebook does becomes far clearer when we understand what it could do -- in what ways it could be “bringing people closer together,” not just that it is actually driving them apart.  That takes a continuing process of thinking about the technical architectures we desire, so competitive innovation can realize and evolve that vision in the face of rapid technology and market development.

More specifically, I see architectural designs for complex systems as being most effective when built on adaptive feedback control loops that are extensible to enable emergent solutions as contexts, needs, technologies, and market environments change.  That is applicable in various forms to all the strategies I am suggesting (and to technology regulation in general).

I cited the Bell System regulation as a case in point that introduced well-architected modularity in the Carterfone Decision (open connections via a universal jack, much like modern APIs), followed by the breakup into local, long-distance, and manufacturing units, and the later introduction of number portability.  This resonated as reflecting not only the wisdom of regulators, but also expert vision of the technical architecture needed, specifically which points of modularity (interoperability) would enable innovation.  (Of course the Bell System emerged as a natural monopoly growing out of an earlier era of competing phone systems that did not interoperate.)  The modular architecture of email is another very relevant case in point (one that did not require regulation).

I noted three areas where my work suggests how to add more of that dimension to the excellent work in those reports.  One is a fundamental problem of structure, and the other two are problems of values that reinforce one another.  (The last one applies not only to the platforms, but to the fundamental challenge of sustaining news services in a digital world.)  All of these are intended not as point solutions, but as ongoing processes that involve continuing adaptation and feedback, so that the solutions are emergent as technology and competitive developments rapidly advance.

1.  System and business structure -- Modular architecture for flexibility and extensibility.  The heart of systems architecture is well-designed modularity, the separation of elements that can interoperate yet be changed at will -- that seems central to regulation as well, especially to identify and manage exclusionary bottlenecks/gateways.  At a high level, the email example is very relevant: different “user agents” such as the Outlook, Apple Mail, and Gmail clients can all interoperate to interconnect all users through “message transfer agents” (the mesh of mail servers on the Internet).  A similar decoupling should be done for social media and search (for both information and shopping).

Similar modularity could usefully separate elements such as the following (a rough interface sketch follows this list):
  • Filtering algorithms – to be user selectable and adjustable, and to compete in an open market much as third-party financial analytics can plug in to work with market data feeds and user interfaces.
  • Social graphs – to enable different social media user interfaces to share a user’s social graph (much like email user agent / transfer agent).
  • Identity – verified / aliased / anonymous / bots could interoperate with clearly distinct levels of privilege and reputation.
  • Value transfer/extraction systems – this could address data, attention, and user-generated-content and the pricing that relates to that.
  • Analytics/metrics – controlled, transparent monitoring of activity for users and regulators.
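
To make this separation more concrete, here is a minimal sketch in Python of what such decoupled interfaces might look like.  (This is purely illustrative and hypothetical -- the module names, methods, and data shapes are my own, not any existing platform's API; the point is only that filtering, the social graph, identity, and metrics can live behind stable, interchangeable interfaces.)

    from typing import Iterable, Protocol

    class SocialGraph(Protocol):
        """A user's relationships, portable across competing user interfaces."""
        def connections(self, user_id: str) -> Iterable[str]: ...

    class IdentityProvider(Protocol):
        """Maps an account to a privilege/reputation level: verified, aliased, anonymous, bot."""
        def identity_level(self, user_id: str) -> str: ...

    class FilteringAlgorithm(Protocol):
        """A user-selectable ranking module, pluggable like third-party financial analytics."""
        def rank(self, items: list[dict], user_id: str, graph: SocialGraph) -> list[dict]: ...

    class Metrics(Protocol):
        """Controlled, transparent monitoring of activity for users and regulators."""
        def record(self, event: dict) -> None: ...

    def build_feed(items: list[dict], user_id: str, graph: SocialGraph,
                   ident: IdentityProvider, filt: FilteringAlgorithm,
                   metrics: Metrics) -> list[dict]:
        """The host service composes whichever modules the user has chosen,
        rather than bundling its own proprietary versions of all of them."""
        # For example, a user or community could choose to exclude unverified bot accounts.
        items = [i for i in items if ident.identity_level(i.get("author", "")) != "bot"]
        ranked = filt.rank(items, user_id, graph)
        metrics.record({"user": user_id, "items_shown": len(ranked)})
        return ranked

Any conforming filtering module or social-graph store from a competing vendor could then be swapped in by the user, without needing the platform's cooperation for each change.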

2.  User-value objectives -- filtering algorithms controlled by and for users.  This is the true promise of information technology – not artificial intelligence, but the augmentation of human intelligence.
  • User value is complex and nuanced, but Google’s original PageRank algorithm for filtering search results demonstrates how sophisticated algorithms can optimize for user value by augmenting the human wisdom of crowds -- they can understand user intent, and weigh implicit signals of authority and reputation derived from human inputs at multiple levels, to find relevance in varying contexts. 
  • In search, the original PageRank signal was inward links to a Web page, taken as expressions of the value judgements of individual human webmasters regarding that page.  That has been enriched to weed out fraudulent “link farms” and other distortions, and expanded in many other ways.
  • For the broader challenge of social media, I outline a generalization of the same recursive, multi-level weighting strategy in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings.  The algorithm ranks items (of all kinds) based on implicit and explicit feedback from users (in all available forms), partitioned to reflect communities of interest and subject domains, so that desired items bubble up and undesired items are downranked (a toy sketch follows this list).  This can also combat filter bubbles -- augmenting serendipity and identifying “surprising validators” that might cut through biased assimilation.
  • That proposed architecture also provides for deeper levels of modularity:  to enable user control of filtering criteria, and flexible use of filtering tools from competing sources -- which users could combine and change at will, depending on the specific task at hand.  That enables continuous adaptation, emergence, and evolution in an open, competitive market ecosystem of information and tools.
  • Filtering for user and societal value:  The objective is to allow for smart filtering that applies all the feedback signals available to provide what is valued by that user at that time.  By allowing user selection of filtering parameters and algorithms, the filters can become increasingly well-tuned to the value systems of each user, in each community of interest, and in each subject domain.
  • First Amendment, Section 230, prohibited content issues, and community standards:  When done well, this filtering might largely address those concerns, greatly reducing the need for the blunt instrument of regulatory controls or censorship, and working in real time, at Internet speed, with minimal need for manual intervention on specific items.  As I understand the legal issues, users could retain the right to post information without restriction (with narrow exceptions) -- if objectionable content is automatically downranked enough in whatever filtering process a service provides (an automated form of moderation) to avoid sending it to users who do not want such content, or who reside in jurisdictions that do not permit it.  Freedom of speech (posting), not freedom of delivery to others who have not invited it.  Thus Section 230 might be applied to posting, just as seemed acceptable when information was pulled from the open Web, while the added service of dissemination filtering might apply both user and governmental restrictions (as well as restrictions specific to user communities that desire such filtering) when information is pushed from social media. 
(Such methods might evolve to become a broad architectural base for richly nuanced forms of digital democracy.)
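
To make the recursive “rate the raters and weight the ratings” idea concrete, here is a deliberately toy sketch.  (This is my own simplification for illustration only -- it assumes explicit numeric ratings on a 0-1 scale, and omits the partitioning by community and subject domain, the many kinds of implicit signals, and the deeper levels of recursion that the full approach calls for.)

    def augmented_ranking(ratings, iterations=20):
        """Toy recursive weighting: item scores are rater-weighted averages of ratings,
        and rater weights in turn reflect agreement with the emerging consensus.

        ratings: dict mapping rater -> {item: rating in [0, 1]}
        Returns (item_scores, rater_weights).
        """
        raters = list(ratings)
        items = {item for r in raters for item in ratings[r]}
        weights = {r: 1.0 for r in raters}  # start by trusting every rater equally

        for _ in range(iterations):
            # Weight the ratings: consensus score for each item
            scores = {}
            for item in items:
                num = sum(weights[r] * ratings[r][item] for r in raters if item in ratings[r])
                den = sum(weights[r] for r in raters if item in ratings[r])
                scores[item] = num / den if den else 0.0

            # Rate the raters: weight falls with average distance from consensus
            for r in raters:
                errors = [abs(ratings[r][item] - scores[item]) for item in ratings[r]]
                weights[r] = (1.0 - sum(errors) / len(errors)) if errors else 0.0

        return scores, weights

    # Example: a rater who consistently disagrees with the others ends up with lower weight,
    # so the items they push get less of a boost.
    scores, weights = augmented_ranking({
        "alice": {"post1": 0.9, "post2": 0.8},
        "bob":   {"post1": 0.85, "post2": 0.75},
        "troll": {"post1": 0.0, "post2": 0.1},
    })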

3.  Business model value objectives -- who does the platform serve?  This is widely asserted to be the “original sin” of the Internet that prevents better solutions in the above two areas.  Without solving this problem, it will be very difficult to solve the others.  As Upton Sinclair observed, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”  Funding services with the ad model makes them seem free and affordable, but it drives platforms to optimize for engagement, to sell ads, instead of optimizing for value to users and society.  Users are the product, not the customer, and value is extracted from users to serve the platforms and the advertisers.  This is totally unlike prior forms of advertising, because unprecedented detail in user data and precision targeting enables messaging and behavioral manipulation at an individual level.  That has driven algorithm design and use of the services in harmful directions instead of beneficial ones.  Many have recognized this business model problem, but few see any workable solution.  I suggest a novel path forward at two levels:  an incentive ratchet to force the platforms to seek solutions, and some suggested solution mechanisms indicating that the ratchet would bear fruit in ways that are both profitable and desirable.

Ratchet the desired business model shift with a simple dial, based on a simple metric.  A very simple and powerful regulatory strategy could be to impose taxes or mandates that gradually ratchet toward the desired state. This leverages market forces and business innovation in the same way as the very successful model of the CAFE standards for auto fuel efficiency -- it leaves the details of how to meet the standard to each company. 
  • The ratchet here is to provide compelling incentives for dominant services to ensure that X% of revenue comes from users.  Such taxes or mandates might be restricted to distribution services with ad revenues above some threshold level.  (Any tax or penalty revenue might be applied to ameliorate the harms.)  A toy accounting sketch of this metric follows this list.
  • That X% could still include advertising revenue if it is quantified as a credit back to the user (a “reverse meter,” much as for co-generation of electricity).  Advertising can be valuable, non-intrusive, and respectful of data -- explicitly putting a price on the value transfer from the consumer incentivizes the market to achieve that. 
  • This incentivizes individual companies to shift their behavior on their own, without need for the kind of new data intermediaries (“infomediaries”) that others have proposed without success.  It could also create more favorable conditions for such intermediaries to arise.
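
Here is that toy accounting sketch of how such a metric and schedule might work.  (The percentages, the start year, and the accounting itself are purely illustrative assumptions of mine, not part of any actual proposal.)

    def user_revenue_share(user_payments, ad_revenue, ad_credited_to_users):
        """Fraction of revenue attributable to users under the proposed metric.
        Ad revenue counts toward the user side only to the extent it is credited
        back to users as a "reverse meter" (hypothetical accounting, for illustration)."""
        total = user_payments + ad_revenue
        user_side = user_payments + ad_credited_to_users
        return user_side / total if total else 0.0

    def meets_ratchet(share, year, start_year=2021, start_share=0.10, step=0.05):
        """Illustrative schedule: the required user-revenue share starts at 10%
        and ratchets up 5 percentage points per year, capped at 100%."""
        required = min(1.0, start_share + step * max(0, year - start_year))
        return share >= required

    # Made-up example: $1B from users, $9B from ads, $2B of which is credited back to users.
    share = user_revenue_share(1e9, 9e9, 2e9)    # = 0.30
    print(share, meets_ratchet(share, 2024))     # 0.3, True (required share is 25% in 2024)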

Digital services business model issues -- for news services as well as platforms.  (Not addressed at the event.)  Many (most prominently Zuckerberg) throw up their hands at finding business models for search or social media that are not ad-funded, primarily because of affordability issues.  The path to success here is uncertain (just as the path to fuel-efficient autos is uncertain).  But many innovations emerging at the margins offer reasons to believe that better solutions can be found. 
  • One central thread is the recognition that the old economics of the invisible hand fails because there is no digital scarcity for the invisible hand to ration.  We need a new way to settle on value and price.
  • The related central thread is the idea of a social contract for digital services, emerging most prominently with regard to journalism (especially investigative and local).  We must pay now, not for what has been created already, but to fund continuing creation for the future.  Behavioral economics has shown that people are not homo economicus but homo reciprocans -- they want to be fair and do right, when the situation is managed to encourage win-win behaviors. 
  • Pricing for digital services can shift from one-size-fits-all to mass-customization of pricing that is fair to each user with respect to the value they get, the services they want to sustain, and their ability to pay.  Current all-you-can-eat subscriptions or pay-per-item models track poorly to actual value.  And, unlike imposing secretive price discrimination, this value discrimination can be done cooperatively (or even voluntarily).  Important cases in point are The Guardian’s voluntary payment model and recurring crowdfunding models like Patreon. 
  • Synergizing with this, and breaking from norms we have become habituated to, the other important impact of digital is the shift toward a Relationship Economy -- shifting focus from one-shot zero-sum transactions to ongoing win-win relationships such as subscriptions and membership.  This builds cooperation and provides new leverage for beneficial application of behavioral economic nudges to support this creative social contract, in an invisible handshake.  My own work on FairPay explains this and provides methods for applying it to make these services sustainable by user payments.  See this Overview with links, including journal articles with prominent marketing scholars, brief articles in HBR and Techonomy, and many blog posts.  
  • Vouchers.  The Stigler Committee proposal for vouchers might be enhanced by integration with the above methods.  Voucher credits might be integrated with subscription/membership payments to directly subsidize individual payments, and to nudge users to donate above the voucher amounts.
  • Affordability.  To see how this deeper focus on value changes our thinking, consider the economics of reverse-meter credits for advertising, as suggested for the ratchet strategy above.  As an attendee noted at the event, reverse metering would seem to unfairly favor the rich, since they can better afford to pay to avoid ads.  But the platforms actually earn much more from affluent users (targeted ad rates are much higher).  If prices map to the value surplus, that will tend to balance things out -- if the less affluent want service to be ad-free, it should be less costly for them than for the affluent.  (The toy numbers below illustrate the direction of the effect.)
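
A few made-up numbers show how this might play out under a reverse-meter scheme.  (These figures are purely hypothetical illustrations, not estimates of real ad revenue or prices.)

    def net_price(base_subscription, ads_accepted_share, ad_value_of_user):
        """Reverse-meter sketch: the user's bill is a value-based subscription price
        minus a credit for the advertising value they choose to accept."""
        return max(0.0, base_subscription - ads_accepted_share * ad_value_of_user)

    # If prices map to value surplus, the affluent user's ad-free option costs more,
    # while the less affluent user can reach a low or zero bill with less advertising.
    print(net_price(60.0, 0.0, 150.0))   # affluent user, fully ad-free    -> pays 60.0/yr
    print(net_price(60.0, 1.0, 150.0))   # affluent user, accepts all ads  -> pays 0.0/yr
    print(net_price(20.0, 0.0, 30.0))    # less affluent, fully ad-free    -> pays 20.0/yr
    print(net_price(20.0, 0.5, 30.0))    # less affluent, accepts some ads -> pays 5.0/yr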

AI as a platform regulatory issue.  Discussion after the session raised the issue of regulating AI.  There is growing concern about concentrations of power and other abuses, including concentrations of data, bias in inference and in natural language understanding, and lack of transparency, controls, and explainability.  That suggests a similar need for a regulator that can apply specialized technical expertise, overlapping with the issues addressed here.  AI is fundamental to the workings of social media, search, and e-commerce platforms, and it also has many broader applications for which pro-active regulation may be needed.

---

[*Update:] Here are the cited items, plus two other excellent reports:
---
See the Selected Items tab for more on this theme.
