Monday, December 30, 2019

Regulating our Platforms -- A Deeper Vision (Working Draft)


Redirection and regulation of our Internet platforms are badly needed. There are numerous powerful statements on why, and many smart people working to make that happen.

But it is hard to agree on how, and with what objectives. Most people have little clue of where to start, or why it matters -- and many of those who do are divided about what to do, and whether proposed actions are too small or too large. This is a richly complex problem -- and much of what we read is oversimplified.

I recently immersed myself in some of the best analyses from respected think tanks -- and have some innovative perspectives of my own. This post begins with pointers to some of the best thinking, and then explains what I see as missing. That is largely a question of what we are designing for.

Update 4/26/21: An important strategy for a surgical restructuring (published in Tech Policy Press) — to an open market strategy that shifts control over our feeds to the users they serve — complements the discussion here.

==================================================================
The Ideas in Brief

Our Internet platforms have gone seriously wrong, and fixing that is more complex than most observers seem to realize. The good news is that there are well-conceived proposals for creating an expert regulatory agency that can oversee significant corrections. 

At a complementary level, we should be looking ahead to what these platforms should and could be doing to better serve us. That kind of vision should inform both how we regulate and how we design.
  • One critical need is to change how the algorithms work, so they serve users and society -- to make us smarter and happier, instead of dumber and angrier.
  • Another critical need is to shift the business models so that users, not advertisers, become the customers, to better align the incentives of Internet service platforms to serve their users (and actually benefit the advertisers as well).
==================================================================

Broad issues, deep thinking, and vision

Many calls for regulatory action focus on just one or a few of the following diverse categories of abuse. Some of these conflict with one another and are advocated by different parties:
  1. privacy and controls on use of personal data 
  2. moderation of disinformation, false political ads and news, and other objectionable content versus freedom of speech and the "marketplace of ideas"
  3. economic sustainability of news media and quality journalism
  4. antitrust, competition, and stifling of innovation
  5. failures of artificial intelligence (AI) and machine learning (ML) -- including hidden bias
The works recommended below stand out for their broad consideration of most or all of these issues and for their informed consideration of the legal background and history of related areas of regulation -- including nuanced First Amendment, antitrust, and media technology issues. They make a strong case that, given the complexity of these issues, compounded by the rapid (and reliably surprising) dynamics of business and technology development, neither the market nor legislators can provide the necessary understanding and foresight. Instead they unanimously see the need for an expert agency with an ongoing charter, much like the Federal Trade Commission or the Federal Communications Commission, but with a new mix of expertise relevant to the Internet platforms.

Particularly edifying are the analyses of issues and regulation related to the safe harbor provisions of Section 230 of the Communications Decency Act, which protect "interactive services" from liability for bad content provided by others. Many have called for repealing those safe harbor protections, seeing them as a license to wantonly distribute harmful content, but these deeper analyses suggest a more nuanced interpretation -- the safe harbor should continue to apply to posting of content, but should not apply to filtered distribution in social media feeds. That is one of the themes I build on with my own suggestions below.

Beyond these excellent works, the gap -- and opportunity -- that I see is to refocus our objectives. We should look beyond just limiting the harms of over-concentration of power as we see them today and in hindsight, and look ahead to where we could be going.
  • Where we should be going is a question for public policy, not tech oligarchs who move fast and break things, and are driven by their private interests. 
  • But to understand that question of where we should be going, we need to understand where we could be going (both good and bad).
We need multidisciplinary efforts to set realistic but visionary objectives that serve all stakeholders as our platforms and technology continue to evolve, and we need to explore alternative scenarios to protect against abuse by some stakeholders against others.

"The Debate Over a New Digital Platform Agency" -- some essential resources

On October 17, I was invited to attend The Debate Over a New Digital Platform Agency: Developing Digital Authority and Expertise, at the Digital Innovation & Democracy Initiative of the German Marshall Fund of the US in Washington, DC. Three reports that resulted from the work of the panelists were suggested as background reading:
The event generated excellent discussion. I made some well-received comments on the further opportunities I saw, and had discussions afterwards with several of the speakers. That led to an introduction to the author of another excellent report on this theme (and a discussion with him):
I highly recommend that anyone with a serious interest in these vital challenges read these reports. The Stigler and Feld reports provide thorough treatments from a US regulatory perspective, and complement one another in important areas. The Furman report is a valuable complement from a UK perspective (and Furman reported that the UK will move ahead to establish such a regulatory body and he has been asked to advise on that). The Kornbluth and Goodman report provides a shorter overview of many of the same issues.

More recently, I found another excellent report that is more focused on the technological and business issues and points toward some of what I have proposed:
Also worth noting, Ben Thompson's Stratechery newsletter provides excellent insights into the business structure issues of our dominant platforms.

[Update: See the updates at the end for additional valuable resources, and my comments on them.]

My own suggestions on where our platforms could and should be going

The following is an updated rework of comments I sent on 10/20/19 to some of the speakers and attendees at the GMF meeting, followed by some added comments from my later discussion with Harold Feld, and other updates: 

Summary and Expansion of Dick Reisman’s comments on attending GMF 10/17/19 event:

I very much support the proposals for a New Digital Platform Authority (as detailed in the excellent background items cited on the event page) and offer some innovative perspectives.  I welcome dialog and opportunities to participate in and support related efforts. 

(My background is complementary to most of the attendees -- diverse roles in media-tech, as a manager, entrepreneur, inventor, and angel investor.  I became interested in hypermedia and collaborative social decision support systems around 1970, and observed the regulation of The Bell System, IBM, Microsoft, the Internet, and cable TV from within the industry.  As a successful inventor with over 50 software patents that have been widely licensed to serve billions of users, I have proven talent for seeing what technology can do for people.  Extreme disappointment about the harmful misdirection of recent developments in platforms and media has spurred me to continue work on this theme on a pro-bono basis.)  

My general comment is that for tech to serve democracy, we not only need to regulate to limit monopolies and other abuses, but also need to regulate with a vision of what tech should do for us -- to better enable regulation to facilitate that, and to recognize the harms of failing to do so.  If we don’t know what we should expect our systems to do, it is hard to know when or how to fix them.  The harm Facebook does becomes far more clear when we understand what it could do – in what ways it could be “bringing people closer together,” not just that it is actually driving them apart.  That takes a continuing process of thinking about the technical architectures we desire, so competitive innovation can realize and evolve that vision in the face of rapid technology and market developments.

More specifically, I see architectural designs for complex systems as being most effective when built on adaptive feedback control loops that are extensible to enable emergent solutions, as contexts, needs, technologies, and market environments change.  That is applicable to all the strategies I am suggesting (and to technology regulation in general).
  • I cited the Bell System regulation as a case in point that introduced well-architected modularity in the Carterfone Decision (open connections via a universal jack, much like modern API’s), followed by the breakup into local and long-distance and manufacturing, and the later introduction of number portability.  This resonated as reflecting not only the wisdom of regulators, but expert vision of the technical architecture needed, specifically what points of modularity (interoperability) would enable innovation.  (Of course the Bell System emerged as a natural monopoly growing out of an earlier era of competing phone systems that did not interoperate.)  
  • The modular architecture of email is another very relevant case in point (one that did not require regulation). 
  • The original Web and Web 2.0 were built on similar modularity and APIs that facilitated openness, interoperability, and extensibility.
But the platforms have increasingly returned us to proprietary walled gardens that lock in users and lock out competitive innovation.

I noted three areas where my work suggests how to add a more visionary dimension to the excellent work in the cited reports.  One is a fundamental problem of structure, and the other two are problems of values that reinforce one another.  (The last one applies not only to the platforms, but to the fundamental challenge of sustaining news services in a digital world.)  All of these are intended not as definitive point solutions, but as ongoing processes that involve continuing adaptation and feedback, so that the solutions are continuously emergent as technology and competitive developments advance.

1.  System and business structure -- Modular architecture for flexibility and extensibility.  The heart of systems architecture is well-designed modularity, the separation of elements that can interoperate yet be changed at will -- that seems central to regulation as well – especially to identify and manage exclusionary bottlenecks/gateways.  At a high level, the e-mail example is very relevant to how different “user agents” such as Outlook, Apple mail, and Gmail clients can all interoperate to interconnect all users through “message transfer agents” (through the mesh of mail servers on the Internet).  A similar decoupling should be done for social media and search (for both information and shopping).

Similar modularity could usefully separate such elements as (a minimal interface sketch follows this list):
  • Filtering algorithms – to be user selectable and adjustable, and to compete in an open market much as third-party financial analytics can plug in to work with market data feeds and user interfaces.
  • Social graphs – to enable different social media user interfaces to share a user’s social graph (much like email user agent / transfer agent).
  • Identity – verified / aliased / anonymous / bots could interoperate with clearly distinct levels of privilege and reputation.
  • Value transfer/extraction systems – this could address data, attention, and user-generated-content and the pricing that relates to that.
  • Analytics/metrics – controlled, transparent monitoring of activity for users and regulators.
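To make that modularity concrete, here is a minimal sketch (purely illustrative -- the interfaces and names are my own assumptions, not any platform's actual API) of how user-selectable filtering algorithms could plug into a shared social graph and candidate item stream, much as third-party financial analytics plug into market data feeds:

```python
# Hypothetical sketch of modular social media components (illustrative, not a real API).
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class Item:
    item_id: str
    author_id: str
    text: str

class SocialGraph(Protocol):
    """Shared social-graph service, usable by any user agent or any filter."""
    def followees(self, user_id: str) -> Iterable[str]: ...

class FeedFilter(Protocol):
    """A pluggable, user-selectable filtering algorithm from an open market of providers."""
    def rank(self, user_id: str, candidates: Iterable[Item], graph: SocialGraph) -> list[Item]: ...

def build_feed(user_id: str, candidates: Iterable[Item], graph: SocialGraph,
               chosen_filter: FeedFilter, limit: int = 50) -> list[Item]:
    """The platform supplies the candidates and the graph; the user chooses the filter."""
    return chosen_filter.rank(user_id, candidates, graph)[:limit]
```

Identity, value transfer, and analytics services could sit behind similar interfaces, so each element can be swapped, competed over, or regulated independently.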

2.  User-value objectives -- filtering algorithms controlled by and for users.  This is the true promise of information technology – not artificial intelligence, but the augmentation of human intelligence.
·       User value is complex and nuanced, but Google’s original PageRank algorithm for search results filtering demonstrates how sophisticated algorithms can optimize for user value by augmenting the human wisdom of crowds – the algorithm can infer user intent, and weigh implicit signals of authority and reputation derived from human activity at multiple levels, to find relevance in varying contexts. 
·       In search, the original PageRank signal was inward links to a Web page, taken as expressions of the value judgements of individual human webmasters regarding that page.  That has been enriched to weed out fraudulent “link farms” and other distortions and expanded in many other ways.
·       For the broader challenge of social media, I outline a generalization of the same recursive, multi-level weighting strategy in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings (a simplified sketch follows this list).  The algorithm ranks items (of all kinds) based on implicit and explicit feedback from users (in all available forms), partitioned to reflect communities of interest and subject domains, so that desired items bubble up, and undesired items are downranked.  This can also combat filter bubbles -- to augment serendipity and to identify “surprising validators” that might cut through biased assimilation.
·       That proposed architecture also provides for deeper levels of modularity:  to enable user control of filtering criteria, and flexible use of filtering tools from competing sources -- which users could combine and change at will, depending on the specific task at hand.  That enables continuous adaptation, emergence, and evolution, in an open, competitive market ecosystem of information and tools. (As noted below, the Masnick paper makes a nice case for this.) 
·       Filtering for user and societal value:  The objective is to allow for smart filtering that applies all the feedback signals available to provide what is valued by that user at that time.  By allowing user selection of filtering parameters and algorithms, the filters can become increasingly well-tuned to each user's value system, as it applies within each community of interest, and each subject domain.
·       First amendment, Section 230, prohibited content issues, and community standards:  When done well, this filtering might largely address those concerns about bad content, greatly reducing the need for the blunt instrument of regulatory controls or censorship, and working in real time, at Internet-speed, with minimal need for manual intervention regarding specific items.  As I understand it, this finesses most of the legal issues:  users could retain the right to post information with very little restriction -- if objectionable content is automatically downranked enough in any filtering process that a service provides (an automated form of moderation) to avoid sending it to users who do not want such content -- or who reside in jurisdictions that do not permit it.  Freedom of speech (posting), not freedom of reach (delivery) to others who have not invited it.
-- Thus Section 230 might be applied to posting, just as seemed acceptable when information was pulled from the open Web.
-- But the Section 230 safe harbor protections against liability might not apply to the added service of selective dissemination, when information is pushed through social media (and when ads are targeted into social media). The filtering that determines what users see might apply both user- and government-defined restrictions (as well as restrictions at the level of specific user communities that desire those restrictions). [See 2/4/20 update below on related Section 230 issues.]
(Such methods might evolve to become a broad architectural base for richly nuanced forms of digital democracy.)
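As a rough illustration of that "rate the raters and weight the ratings" idea, here is a much-simplified sketch -- my own toy reduction, not the full algorithm, which also partitions by community of interest and subject domain. Item scores are reputation-weighted averages of user ratings, and each rater's reputation is in turn raised or lowered by how well their ratings track the weighted consensus, iterating until both stabilize.

```python
# Simplified sketch of "rate the raters, weight the ratings" (illustrative only).
# ratings[item][user] is an explicit or inferred rating in [0, 1].

def rank_items(ratings: dict[str, dict[str, float]], iterations: int = 20):
    users = {u for per_item in ratings.values() for u in per_item}
    reputation = {u: 1.0 for u in users}  # start everyone with equal reputation
    scores: dict[str, float] = {}
    for _ in range(iterations):
        # Weight each item's ratings by the current reputation of its raters.
        for item, per_user in ratings.items():
            total_weight = sum(reputation[u] for u in per_user) or 1.0
            scores[item] = sum(reputation[u] * r for u, r in per_user.items()) / total_weight
        # Raters whose ratings track the weighted consensus gain reputation; outliers lose it.
        for u in users:
            rated = [(item, r) for item, per_user in ratings.items()
                     for rater, r in per_user.items() if rater == u]
            error = sum(abs(r - scores[item]) for item, r in rated) / len(rated)
            reputation[u] = max(0.05, 1.0 - error)
    ranked = sorted(scores, key=scores.get, reverse=True)  # desired items bubble up
    return ranked, reputation
```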

[See 1/10/20 Update below on distribution filtering as the choke point for disinformation. It is here that we can reverse the wrong direction of social media that is so destructively making people dumber instead of smarter. This is now expanded slightly as a free-standing post, The Dis-information Choke Point: Dis-tribution (Not Supply or Demand).]

3.  Business model value objectives – who does the platform serve?  This is widely observed to be the “original sin” of the Internet, one that prevents the emergence of better solutions in the above two areas.  Without solving this problem, it will be very difficult to solve the other problems.  “It is difficult to get a man to understand something when his job depends on not understanding it.”  We call them services, but they do not serve us. Funding of services with the ad model makes those services seem free and affordable, but drives platform services businesses to optimize for engagement (to sell ads), instead of optimizing for the value to users and society.  Users are the product, not the customer, and value (attention) is extracted from the users to serve the platforms and the advertisers.

Also, modern online advertising is totally unlike prior forms of advertising because unprecedented detail in user data and precision targeting enables messaging and behavioral manipulations at an individual level.  That has driven algorithm design and use of the services in catastrophically harmful directions, instead of beneficial ones.

Many have recognized this business model problem, but few see any workable solution. I suggest a novel path forward at two levels:  an incentive ratchet to force the platforms to seek solutions, and some suggested mechanisms that show how that ratchet could bear fruit -- in ways that are both profitable and desirable, and that few now imagine.

Ratchet the desired business model shift with a simple dial, based on a simple metric.  A very simple and powerful regulatory strategy could be to impose taxes or mandates that gradually ratchet toward the desired state. This leverages market forces and business innovation in the same way as the very successful model of the CAFE standards for auto fuel efficiency -- it leaves the details of how to meet the standard to each company. (A toy calculation follows the list below.)
·       The ratchet here is to provide compelling incentives for dominant services to ensure that X% of revenue must come from users.  Such compelling taxes or mandates might be restricted to distribution services with ad revenues above some threshold level.  (Any tax or penalty revenue might be applied to ameliorate the harms.)
·       That X% might be permitted to still include advertising revenue if it is quantified as a credit back to the user (a “reverse meter” much as for co-generation of electricity).  Advertising can be valuable and non-intrusive and respectful of data -- explicitly putting a price on the value transfer from the consumer would incentivize the advertising market toward user value. 
·       This incentivizes individual companies to shift their behavior on their own, without need for the kind of new data intermediaries (“infomediaries” or fiduciaries) that others have proposed without success.  It could also create more favorable conditions for such intermediaries to arise.
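A toy calculation of how that ratchet metric might be applied (the numbers, revenue categories, and ratchet schedule are all illustrative assumptions, not the specifics of any proposal): count direct user payments plus any ad revenue explicitly credited back to users on a reverse meter, and compare that share of total revenue to a minimum that steps up each year, CAFE-style.

```python
# Toy model of the user-revenue ratchet; all names and numbers are illustrative assumptions.

def user_revenue_share(user_payments: float, reverse_metered_ads: float,
                       other_ad_revenue: float) -> float:
    """Share of total revenue that comes from (or is credited back to) users."""
    total = user_payments + reverse_metered_ads + other_ad_revenue
    return (user_payments + reverse_metered_ads) / total if total else 1.0

def required_share(year: int, start_year: int = 2021, initial: float = 0.05,
                   step: float = 0.05, cap: float = 0.50) -> float:
    """The mandated minimum share ratchets up a few points per year, like CAFE standards."""
    return min(cap, initial + step * max(0, year - start_year))

share = user_revenue_share(user_payments=2.0, reverse_metered_ads=1.0, other_ad_revenue=17.0)
print(f"user share {share:.0%} vs. 2025 requirement {required_share(2025):.0%}")
# -> user share 15% vs. 2025 requirement 25%: this platform would owe the tax or face the mandate.
```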

Digital services business model issues -- for news services as well as platforms.  (Not addressed at the event, but included in some of the reports.)  Many (most prominently Zuckerberg) throw up their hands at finding business models for social media or search that are not ad-funded, primarily because of affordability issues.  The path to success here is uncertain (just as the path to fuel efficient autos is uncertain). But many innovations emerging at the margins offer reasons to believe that better solutions can be found.
·       One central thread is the recognition that the old economics of the invisible hand fails because there is no digital scarcity for the invisible hand to ration.  We need a new way to settle on value and price.
·       The related central thread is the idea of a social contract for digital services, emerging most prominently with regard to journalism (especially investigative and local).  We must pay now, not for what has been created already, but to fund continuing creation for the future. Behavioral economics has shown that people are not homo economicus but homo reciprocans – they want to be fair and do right, when the situation is managed to encourage win-win behaviors. 
·       Pricing for digital services can shift from one-size-fits-all, to mass-customization of pricing that is fair to each user with respect to the value they get, the services they want to sustain, and their ability to pay.  Current all-you-can-eat subscriptions or pay-per-item models track poorly to actual value.  And, unlike imposing secretive price discrimination, this value discrimination can be done cooperatively (or even voluntarily).  Important cases in point are The Guardian’s voluntary payment model, and recurring crowdfunding models like Patreon. Journalism is recognized to be a public good, and that can be an especially strong motivator for sustaining payments.
·       Synergizing with this, and breaking from norms we have become habituated to, the other important impact of digital is the shift toward a Relationship Economy – shifting focus from one-shot zero-sum transactions to ongoing win-win relationships such as subscriptions and membership.  This builds cooperation and provides new leverage for beneficial application of behavioral economic nudges to support this creative social contract, in an invisible handshake.  My own work on FairPay explains this and provides methods for applying it to make these services sustainable by user payments. (See this Overview with links, including journal articles with prominent marketing scholars, brief articles in HBR and Techonomy, and many blog posts, such as one specific to journalism.) 
·       Vouchers.  The Stigler Committee proposal for vouchers might be enhanced by integration with the above methods.  Voucher credits could be integrated with subscription/membership payments to directly subsidize individual payments, and to nudge users to donate above the voucher amounts.
·       Affordability. To see how this deeper focus on value changes our thinking, consider the economics of reverse meter credits for advertising, as suggested for the ratchet strategy above.  As an attendee noted at the event, reverse metering would seem to unfairly favor the rich, since they can better afford to pay to avoid ads.  But the platforms actually earn much more for affluent users (their targeted ad rates are much higher).  If prices map to the value surplus, that will tend to balance things out – if the less affluent want service to be ad-free, it should be less costly for them than for the affluent. And when ads become less intrusive and more relevant, even the affluent may be happy to accept them (how about the ads in Vogue?).
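A short worked example of that affordability point, with made-up numbers: if a platform earns roughly $15 a month in targeted ad revenue from an affluent user but only $3 from a less affluent one, then an ad-free price pegged to the foregone ad value would be about $15 and $3 per month respectively -- and a user who chooses to keep the ads would see the same amount credited back on the reverse meter, so neither group subsidizes the other.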

AI as a platform regulatory issue.  Discussion after the session raised the issue of regulating AI.  There is growing concern relating to concentrations of power and other abuses, including concentrations of data, bias in inference and in natural language understanding, and lack of transparency, controls, and explainability. That suggests a similar need for a regulator that can apply specialized technical expertise that overlaps and coordinates with the issues addressed here.  AI is fundamental to the workings of social media, search, and e-commerce platforms, and also has many broader applications for which pro-active regulation may be needed.

Some further reflections

From reviewing Harold Feld's book and discussing it with him:
  • He notes the growing calls for antitrust regulation to consider harms beyond price increases (which ignores the true costs of "free" services) and suggests "cost of exclusion" (COE) as a useful metric of harm to manage for. 
  • I suggest that similar logic argues for more attention to what platforms could and should be doing as a metric of harm. The idea is not to mandate what they should do, but to avoid blocking it -- and to estimate the cost of not providing valuable services that a more competitive market that is incentivized to serve end-users would provide in some form.
  • Feld also suggests that it is a proper objective of regulation to support promotion of good content and discourage bad content (just as was done for broadcast media). Further to that objective, my Augmented Wisdom of Crowds methods show how that can become nuanced, dynamic, reflective of user desires, domains of expertise, and communities of interest, and selectively matched to the standards of many overlapping communities.  A related post highlights how this can serve as A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News.
  • On Section 230-related issues, an interesting question I have not seen well addressed is how the targeting of advertising interplays with filtering feeds for content of all kinds. 
    -- I advocate that filtering of content feeds should be controlled by and for the end-users of the feeds, and economic incentives should align to that.
    -- Targeting of ads (political or commercial) is currently a countervailing force that directs ads to users in ways that do not align with their wishes (and motivates filtering to inflame rather than enlighten).
    -- Reverse metering of attention and data could provide a basis to negotiate -- in this two-sided market -- over just how targeting meshes with the prioritization and presentation of items in feeds.  (A valuable new resource on the design of multi-sided platforms is The Platform Canvas.)
  • Push vs. pull: also related to managing harmful content, Feld draws useful distinctions of Broadcast/Many-to-Many vs. Common Carrier/One-to-One and Passive Listening vs. Active Participation. I suggest that the distinction between Push and Pull distribution/access is also very important to First Amendment issues:
    -- Pull is on-demand requests for specific items, such as by actively searching, or direct access to a Web service.  In a free society there should presumably be very limited restrictions on what content users may pull.
    -- Push is a continuing feed, such as a social media news feed.  This can be a firehose of everything (subject to privacy constraints) or a filtered feed (as typical in current social media).  I think Feld's analysis supports the case that there is no First Amendment right of a speaker to have their speech pushed to others in a filtered feed (no free reach or free targeting, as in my posts below).  Note that filtering items in a feed uses much the same discrimination technology as filtering (ranking) of search results (for example, Google Alerts are “a standing search” that is applied to create a feed of newly posted items that match the standing search; a minimal sketch of this follows the list).  (I have fundamental patents from 1994, now expired, on a widely used class of push.)
  • Feld addresses the issues of filter bubbles and serendipity and proposes “wobbly algorithms” that introduce more variety (and I found recent support for that in this new CACM article). I have outlined methods for seeking Surprising Validators and serendipity in ways that are more purposeful in going far beyond just random variation.  
  • Regarding the quality of news, he addresses the widely supported idea of “tools for reliable sources.” I suggest that human rating services (like NewsGuard) are far too limited in scope and timeliness, and too open to dispute, to be more than a very partial solution.  The algorithmic methods I propose can include such expert rating services as just one high-reputation component of a broader weighting of authority and relevance, in which everyone with a reputation for sound judgement in a subject domain contributes, with a weighting that is based on their reputation.  The augmented crowd will often be smarter than the experts -- and can work far faster to flag problematic content at Internet scale.
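Here is the minimal sketch promised in the push-vs-pull item above: a filtered push feed treated as a "standing search" that newly posted items are scored against. Everything in it -- the names, the matching rule, the quality threshold -- is an illustrative simplification, not how any real alerting system works.

```python
# Minimal sketch of push as a "standing search" (Google Alert-like); names are illustrative.
from dataclasses import dataclass

@dataclass
class StandingSearch:
    user_id: str
    terms: set[str]           # what the user asked to follow
    min_quality: float = 0.5  # downranking threshold chosen by the user or their filter

def should_push(search: StandingSearch, text: str, quality_score: float) -> bool:
    """A newly posted item enters the feed only if it matches the standing query and clears
    the user's quality threshold -- the same discrimination used to rank search results."""
    words = set(text.lower().split())
    return bool(search.terms & words) and quality_score >= search.min_quality

alert = StandingSearch(user_id="u1", terms={"platform", "regulation"})
print(should_push(alert, "New platform regulation proposal published", quality_score=0.8))  # True
print(should_push(alert, "New platform regulation proposal published", quality_score=0.2))  # False
```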
Updating my comments from October: many observers have weighed in on the recent controversy over how the platforms deal with false political ads. Many fail to understand the critical difference between speech and distribution (nicely put in "Free Speech is Not the Same as Free Reach"*). I explained those issues in Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us), and note the emerging agreement (including Feld) that limiting the microtargeting of political ads is a reasonable stopgap, until we can provide a more nuanced solution.

Technical architecture issues

I was very pleased to happen on the Masnick article, Protocols, Not Platforms: A Technological Approach to Free Speech (a couple weeks ago), as the nearest thing to the vision I have been developing that I have yet seen. It is not aimed at regulation, apparently in hopes that the market can correct itself (a hope I have shared, but no longer put much faith in). Our works are both overlapping and complementary – reinforcing and expanding in different ways on very similar visions for user-controlled, open filtering of social media and the marketplace of ideas. I recommend his paper to anyone who wants to understand how this technology can be far more supportive of user value by enabling users to mold their social media to their individual values, and as a foundation for better understanding my more specific proposals.

As background, in developing these ideas for an open market of user-controlled filtering tools, I drew on my experience in financial technology from around 1990. There was a growing open market ecosystem for transaction level financial market data (generated by the stock exchanges and other markets -- ticker feeds and the like), which was then gathered and redistributed to brokers and analysts by redistributors like Dow Jones, Telerate, and Bloomberg. An open market for analytic tools that could analyze this data and provide a rich variety of financial metrics was developing -- one that could interoperate, so that brokers and analysts could apply those analytics, or create their own custom variations (as an early form of mashup). That was an inspiration for work I did in 2002 to design a system for open collaboration on finding and developing innovations, in the days when "open innovation" was an emerging trend. That design provided very rich functions for flexible, mass collaboration that I later adapted to apply to social media (as described on my blog, starting in 2012, when I saw that current systems were not going in the direction I thought they should). 

Personal privacy versus openness, and interoperability

Privacy has emerged as a critical issue in Internet services, and one that is often in conflict with the objectives of openness and interoperability that are essential to the marketplace of services and to the broader marketplace of ideas (and also to making AI/ML as beneficial as possible). Here again there is a need for nuance and expertise to sensibly balance the issues, and there is reason to fear that current privacy legislation initiatives may fail to provide a proper balance. I believe there are more nuanced ways to meet these conflicting objectives, but leave more specific exploration of that for another time.

Moving forward

We have learned that our Web services are far more complex and have far more dangerous impacts on society than we realized. We need to move forward with more deliberation, and need a business and regulatory environment capable of guiding that. We have seen how dangerous it can be to "move fast and break things."

I am working independently on a pro-bono basis on these issues, and welcome opportunities to collaborate with others to move in the directions outlined here. (These ideas draw on two detailed patent filings from 2002 and 2010 that I have placed into the public domain.)

---
[*Update 1/2/20:] Mediating consent by augmenting the wisdom of crowds

Renee DiResta (who wrote the Free Speech is Not the Same as Free Reach post I cited above) recently wrote an excellent article, Mediating Consent, which I commented on today. Her article is an excellent statement of how we are now at a turning point in the evolution of how human society achieves consensus – or breaks down in strife. She says “The future that realizes this promise still remains to be invented.” As outlined above, I believe the core of that future has already been invented — the task is to decide to build out on that core, to validate and adjust it as needed, and to continuously evolve it as society evolves.

[Update 1/10/20:] The disinformation choke point:  distribution (not supply or demand) --

[This is now expanded slightly to be a free-standing post]

An excellent 1/8/20 report from the National Endowment for Democracy, “Demand for Deceit: How the Way We Think Drives Disinformation,” by Samuel Woolley and Katie Joseff, highlights the dual importance of both supply and demand side factors in the problem of disinformation.  That crystallizes in my mind an essential gap in this field -- smarter control of distribution -- that was implicit in my comments on algorithms (section #2 above).

There is little fundamentally new about the supply or the demand for disinformation.  What is fundamentally new is how disinformation is distributed.  That is what we most urgently need to fix. If disinformation falls in a forest… but appears in no one’s feed, does it disinform?

In social media a new form of distribution mediates between supply and demand.  The media platform does filtering that upranks or downranks content, and so governs what users see.  If disinformation is downranked, we will not see it -- even if it is posted and potentially accessible to billions of people.  Filtered distribution is what makes social media not just more information, faster, but an entirely new kind of medium.  Filtering is a new, automated form of moderation and amplification.  That has implications for both the design and the regulation of social media. 

By changing social media filtering algorithms we can dramatically reduce the distribution of disinformation.  It is widely recognized that there is a problem of distribution: current social media promote content that angers and polarizes because that increases engagement and thus ad revenues.  Instead the services could filter for quality and value to users, but they have little incentive to do so.  What little effort they have ever made to do that has been lost in their quest for ad revenue.
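A toy example of what changing the objective does (the items and scores are invented, for illustration only): the same candidates rank very differently when the filter optimizes for engagement than when it optimizes for the quality and value signals discussed in section #2 above.

```python
# Toy comparison of an engagement-optimized ranker vs. a value-optimized one (made-up data).
items = [
    {"id": "outrage-bait", "engagement": 0.9, "quality": 0.2},
    {"id": "local-reporting", "engagement": 0.4, "quality": 0.9},
    {"id": "disinformation", "engagement": 0.8, "quality": 0.1},
]

by_engagement = sorted(items, key=lambda i: i["engagement"], reverse=True)
by_value = sorted(items, key=lambda i: i["quality"], reverse=True)

print([i["id"] for i in by_engagement])  # ['outrage-bait', 'disinformation', 'local-reporting']
print([i["id"] for i in by_value])       # ['local-reporting', 'outrage-bait', 'disinformation']
```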

Social media marketers speak of "amplification." It is easy to see the supply and demand for disinformation, but marketing professionals know that it is amplification in distribution that makes all the difference. Distribution is the critical choke point for controlling this newly amplified spread of disinformation. (And as Feld points out, the First Amendment does not protect inappropriate uses of loudspeakers.)

While this is a complex area that warrants much study, as the report observes, the arguments cited against the importance of filter bubbles in the box on page 10 are less relevant to social media, where the filters are largely based on the user’s social graph (who promotes items to be fed to them, in the form of posts, likes, comments, and shares), not just active search behavior (what they search for). 

Changing the behavior of demand is clearly desirable, but a very long and costly effort. It is recognized that we cannot stop the supply. But we can control distribution -- changing filtering algorithms could have significant impact rapidly, and would apply across the board, at Internet scale and speed -- if the social media platforms could be motivated to design better algorithms. I explain further in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News and In the War on Fake News, All of Us are Soldiers, Already! That is what I am advocating in my section #2.

Yes, "the way we think drives disinformation," and social media distribution algorithms drive how we think -- we can drive them for good, not bad!

[Update 2/4/20] Related Section 230 issues.

The discussion above related to posting versus distribution did not clearly address other issues that have driven lobbying against Section 230. These include concerns about illegal postings on Airbnb, about copyright infringement, and about other improper content. My initial take on this is that the distinction of posting versus filtered distribution outlined above should also distinguish posting from other forms of selective distribution, such as search, in which a selection or moderation function is present.

For example, Airbnb is a marketplace in which Airbnb may not offer a filtered feed, but offers search services. The essential point is that Airbnb filters searches by selection criteria -- and by its own listing standards. Thus there is an expectation of quality control. As long as Airbnb provides a quality control service, it is moderated, and thus should not have safe harbor under Section 230. If it did not do moderation, then posting on Airbnb should properly have safe harbor protections, but selective (filtered) search functions might not have safe harbor to include illegal postings. Access to such uncontrolled postings might be limited to explicit searches for a specific property identifier (essentially a URL) to retain safe harbor protection.

So here as well, it seems the proper and tractable understanding of the problem is not in the posting, but in the distribution.

[Update 9/9/20] A killer TED Talk and another excellent analysis

Yaël Eisenstat's TED Talk, "How Facebook profits from polarization," is very important, right on target, and well said! If you don’t understand why Facebook and other social media are the gravest threat to society (as they currently operate), this will be the most informative 14 minutes you can spend. (From a former CIA analyst, diplomat…and Facebook staffer.) (9/8/20)

New Digital Realities; New Oversight Solutions from the Harvard Shorenstein Center, by Tom Wheeler, Phil Verveer, and Gene Kimmelman, is another excellent think tank proposal that is right on target. (8/20/20)

[Update 12/14/20] A specific proposal - Stanford Working Group on Platform Scale

An important proposal that gets at the core of the problems in media platforms was published in Foreign Affairs: How to Save Democracy From Technology, by Francis Fukuyama and others. See also the report of the Stanford Working Group. The idea is to let users control their social media feeds with open market interoperable filters. That is something I have proposed, and provided details on how and why to do it.

[Update 2/12/21] Growing support for open market filtering services - Twitter too

More proposals for this have surfaced, including in Senate testimony, plus indications of interest from Twitter. This suggests that it may be the best path for action. See this newer post and this update, and stay tuned for more.

[Important Update 4/26/21] 

This important strategy for a surgical restructuring was published in Tech Policy Press. An open market strategy that shifts control over our feeds to the users they serve complements the actions discussed here. This new article summarizes and expands on proposals from notable sources (including Twitter CEO Jack Dorsey) that get at the core of the problems in media platforms. 

---
See the Selected Items tab for more on this theme.

Thursday, December 26, 2019

2020 Vision -- The Restoration of the Customer

The Age of the Customer: You Ain't Seen Nothin' Yet

Nearly a decade has passed since Forrester said we were entering The Age of the Customer. That is apparent and has obvious implications. But as the decade of the 2020's dawns, I call out a deeper vision -- The Restoration of the Customer -- that could bring far more fundamental changes in the coming decade.

There are two surprising turns that may be taken in this decade to restore power to customers -- one that can fundamentally change how we conduct business, and one that can fundamentally change how we collaborate.

Those turns might just begin to undo many of the ills of the industrial revolution and of the computer revolution.  Both turns center on a return to enlightened human values:
  • The customer is not just a persona with a bundle of attributes that a business can learn how to manipulate, but a unique human being that has been bred since pre-history to thrive on a cooperative effort to create value and share it.
  • The user is not just a source of attention that can be engaged to be sold to advertisers, but a customer to be served what they value -- again, a cooperative effort to create value and share it.
What those paying attention see

Forrester put the basic drivers nicely (emphasis added):
In this era, digitally-savvy customers would change the rules of business, creating extraordinary opportunity for companies that could adapt, and creating existential threat to those that could not. ...It requires leaders to think and act differently – in ways that feel foreign, unfamiliar, and counter-intuitive. And honestly, it is simply hard to do. ...These dynamics will endure as new technologies like artificial intelligence and robotics emerge to challenge core notions of what it means to be a company, what it means to build human capital, and what it means to compete and win.
...And, a deeper vision

Here I point to some little recognized ideas on how re-centering on value can change not only the dynamic of commerce, but also a parallel dynamic of customer value that is equally important.
  • First, the commercial dynamic that Forrester describes is just the foundation for reversing how the "progress" of technology cost us the human dimension in commerce -- a dimension that we had when commerce was just the way villagers did business with one another -- with human beings on both sides of an ongoing relationship. 
  • Second, we humans, as "customers" of Web services, have lost control of our experience of the world.  Our central experience of human interaction has been hijacked by platforms who "engage" us in order to profit from bombarding us with advertising and paid propaganda.
First: Back to the future of commerce

Consumers are increasingly alienated from the companies they do business with. Instead of neighbors or shopkeepers, we deal with soulless institutions that we distrust and feel abused by. That has been, increasingly, the price of productivity and material riches. But now technology has advanced far enough to restore the dimension of human values -- if we apply ourselves to do so. That does not require that we abandon the miracle of capitalism, but only that we bring it back to the marketplace of human value. Technology now makes it possible for even large faceless institutions to build human interfaces that behave with human values. That will drive institutions to interact with humans in ways that are more truly human.

FairPay is a framework for centering on why and how to do that. The key is to recenter on relationships and the creation and sharing of value in ways that are tailored to each individual. Specifics on how to do that are in my FairPayZone blog, some articles written with prominent marketing scholars, and my 2016 book. Some of the best places to begin to understand this are:
Second: Who does it serve? - a course correction in how we experience the world

Social media and other online content services have changed how we experience the world, including how we interact with other people. Computer-mediation began with great hopes, but now it seems we have built a Frankenstein's monster.  As growing calls for change are beginning to focus on new levels of regulation, it is not enough to regulate against specific harms. Instead we must refocus on what we want to regulate for -- who these "services" serve, and what we want these platforms to facilitate. They were supposed to make us happy and smart -- instead they are making us angry and stupid. But technology can reverse that, if we incentivize that.

We can design new architectures for our interactive media that create value for us.  The key is to recognize that each of us is an individual, and we should be able to individualize our services, mixing and matching offerings to make just the service we want for what we are doing now. The most urgent part of that is to shape our media services to give each of us what we value. The Web started out seeking to do that, and we can return to that vision. It won't be free, but it can be affordable. And we have seen that "free" is not really affordable (because it is not really free). If we do not change direction, our democracies and our civilization will collapse. Some starting points for seeing how:

(Cross-posted with my other blog, FairPayZone.)

Wednesday, November 06, 2019

2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging

Zuckerberg has rationalized that Facebook should do nothing about lies, and Dorsey has Twitter going to the other extreme of an indiscriminate ad ban. But a readily actionable Goldilocks solution has emerged in response – and there are reports that Facebook is considering it.*

[This post focuses on stopgap solutions for controversial and urgent concerns leading in to the 2020 election. My prior post, Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us), addresses the deeper abuses related to microtargeting and how everything in our feeds is filtered.]

[Update 5/29/20:]
Two bills to limit microtargeting of political ads have been introduced in Congress, one by Rep. Eshoo, and one by Rep. Cicilline.  Both are along the lines of the proposals described here. (More at end.)

The real problem

While dishonest political ads are a problem, that in itself is nothing new that we cannot deal with.  What is new is microtargeting of dishonest ads, and that has created a crisis that puts the fairness of our elections in serious doubt.  Numerous sophisticated observers – including the chair of the Federal Election Commission and the former head of security at Facebook -- have identified a far better stopgap solution than an outright ban on all political ads (or doing nothing).

Since the real problem is microtargeting, the “just right” quick solution is to limit microtargeting (at least until we have better ways to control it).  Microtargeting provides the new and insidious capability for a political campaign to precisely tailor its messaging to microsegments of voters who are vulnerable to being manipulated in one way, while sending many different, conflicting messages to other microsegments who can be manipulated in other ways – by precision targeting down to designated sets of individual voters (such as with multi-faceted categories or with Facebook Custom Audiences). This new kind of precision-targeted message amplification has been weaponized to incite extreme radicalization and even violent action (as the second bullet below explains).

We must be clear that there is a right of speech, but only limited rights to amplification or targeting. We have always had political ads that lie. America was founded on the principle that the best counter to lies is not censorship, but truth. Policing lies is a very slippery slope, but when a lie is out in the open, it can be exposed, debunked, and shamed. Sunlight has proven the best disinfectant. With microtargeting there is no exposure to sunlight and shame.
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
  • These abuses are hidden from others and generally not auditable. That compounds the harm of lies, since they can be targeted to manipulate factions surreptitiously.
Consensus for a stopgap solution

In the past week or so, limits on microtargeting have been suggested to take a range of forms, all of which seem workable and feasible:
  • Ellen Weintraub, chair of the Federal Election Commission (in the Washington Post), Don’t abolish political ads on social media. Stop microtargeting, suggests “A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed.”
  • Alex Stamos, former Facebook security chief, in an interview with Columbia Journalism Review, suggests “There are a lot of ways you can try to regulate this, but I think the simplest is a requirement that the "segment" somebody can hit has a floor. Maybe 10,000 people for a presidential election, 1,000 for a Congressional.”
  • Siva Vaidhyanathan, in The New York Times, suggests "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race."
  • In my prior post, I suggested “allow ads to only be run...in a way that is no more targeted than traditional media…such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.”
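A minimal sketch of how such a floor could be enforced at ad-serving time (the two thresholds echo Stamos's illustrative numbers above; the function and its names are hypothetical simplifications):

```python
# Hypothetical enforcement of a minimum audience size for political ads (thresholds illustrative).
FLOORS = {"presidential": 10_000, "congressional": 1_000}

def may_serve_political_ad(race_level: str, targeted_audience_size: int) -> bool:
    """Reject any political ad whose targeted segment is smaller than the floor for its race."""
    return targeted_audience_size >= FLOORS.get(race_level, 1_000)

print(may_serve_political_ad("presidential", 250))     # False: microtargeted segment, blocked
print(may_serve_political_ad("presidential", 50_000))  # True: broad enough to stay in public view
```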
There seems to be an emerging consensus that this is the best we can expect to achieve in the short run, in time to protect the 2020 election. This is something that Zuckerberg, Dorsey, and others (such as Google) could just decide to do -- or might be pressured to do. NBC News reported yesterday that Facebook is considering such an action.

We should all focus on avoiding foolish debate over naive framing of this problem as a dichotomy of "free speech" versus "censorship." The real problem is not the right of free speech, but the more nuanced issues of limited rights to be heard versus the right not to be targeted in ways that use our personal data against our interests.

The longer term

In the longer term, dishonest political ads are only a part of this new problem of abuse of microtargeting, which applies to speech of all kinds -- paid or not, political or commercial, or not. Especially notable is the fact that much of what Cambridge Analytica did was to get ordinary people to spread lies created by bots posing as ordinary people. To solve these problems, we need to change not only how the platforms handle identity, but also how they filter content into our feeds. Filtering content into our feeds is a user service that should be designed to provide the value that users, not advertisers, seek.

There are huge opportunities for innovation here. My prior post explains that, shows how much we are missing because the platforms are now driven by advertisers' need for amplification of their voices rather than users' need for filtering of all voices, and points to how we might change that.


See my prior post for more, plus links to related posts.

---
*[Update 11/7:] WSJ reports Google is considering political ad targeting limits as well.
[Update 11/20:] Google has announced it will impose political ad targeting limits -- Zuck, your move.
[Update 11/22:] WSJ reports Facebook is considering similar political ad targeting limits.

The downside of targeting limits. Meanwhile, there are reports, notably in NYTimes, that highlight the downside of limiting targeting precision in this way. That is why it is prudent to view blanket limits not as a final cure, but as a stopgap:

  • Political campaigns rightly point out how these limits harm legitimate campaign goals: “This change won’t curb disinformation...but it will hinder campaigns and (others) who are already working against the tide against bad actors to reach voters with facts.” “Broad targeting kills fund-raising efficiency”
  • That argues that the real solution is to recognize that platforms have both the right and the obligation to police ads of all kinds, including paid political ads, so that an appropriate mix of targeting privileges can be granted to legitimate campaigns -- when placing non-abusive ads -- directed to those who choose to receive them.
  • But since we are nowhere near a meaningful implementation of such a solution in time for major upcoming elections, we need a stopgap compromise now. That is why I originally advocated this targeting limit, while noting that it was only a stopgap.
[Update 5/29/20:]
Related to the update at the top, about the bills introduced in Congress, a nice statement quoted in the 5/26/20 Eshoo press release explains the problem in a nutshell:
It used to be true that a politician could tell different things to different voters, but journalists would check whether the politician in question was saying different things to different people and write about it if they found conflicting political promises. That is impossible now because the different messages are shown privately on social media, and a given journalist only has his or her own profile. In other words, it's impossible to have oversight. The status quo, in other words, is bad for democracy. This new bill would address this urgent problem. --Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy and CEO of ORCAA, an algorithmic auditing firm.

=========================
[Supplement 11/8:] These 11/5 updates from my prior post seem worth repeating here as added background


(Alex Stamos from CJR)
In a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos, Stamos offers this useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression get stronger toward the bottom, "the right of individuals to be exposed to information they have explicitly sought out."

Stamos argues that "Tech platforms should absolutely not fact-check candidates organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."

Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."

---
See the Selected Items tab for more on this theme.

Thursday, October 31, 2019

Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us)

(Image adapted from The Great Hack movie)
Zuckerberg's recent arguments that Facebook should restrict free expression only in the face of imminent, clear, and egregious harm have generated a storm of discussion -- and a very insightful counter from Dorsey (at Twitter).

But most discussion of these issues misses how social media can be managed without sacrificing our constitutionally protected freedom of expression. It oversimplifies how speech works in social media and misdiagnoses the causes of harm and abuse. 

[Update: A newer 11/6 post focuses on stopgap solutions for controversial and urgent concerns leading in to the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. This post focuses on the broader and deeper abuses of microtargeting, and how everything in our feeds is filtered.]

Much of this debate seems like blind men arguing over how to control an elephant when they don't yet understand what an elephant is. That is compounded by an elephant driver who exploits that confusion to do what he likes. (Is he, too, blind? ...or just motivated not to see the harm his elephant does?)

I suggest some simple principles can lead to a more productive solution. Effective regulation -- whether self-regulation by the platforms, or by government -- requires understanding that we are really dealing with a new and powerfully expanded kind of hybrid media -- which is provided by a new and powerfully expanded kind of hybrid platform. That understanding suggests how to find a proper balance that protects free expression without doing great harm.

(This is a preliminary outline that I hope to expand on and refine. In the  meantime, some valuable references are suggested.) 

The essence of the problem

I suggest these three simple principles as overarching:
  1. Clearly, we need to protect "free speech," and a "free press," the First Amendment rights that are essential to our democracy and to our "marketplace of ideas." Zuckerberg is right that we need to be vigilant against overreaching cures -- in the form of censorship -- that may be worse than the disease.
  2. But he and his opponents both seem to misunderstand the nature of these new platforms. The real problem arises from the new services these platforms enable: precision targeted delivery services are neither protected speech, nor the protected press. They are a new kind of add-on service, separate from speech or the press. 
  3. Enabling precision targeted delivery against our interests, based on data extracted from us without informed consent is an abuse of power -- by the platforms -- and by the advertisers who pay them for that microtargeted delivery service. This is not a question of whether our data is private (or even wholly ours) -- it is a question of the legitimate use of data that we have rights in versus uses of that data that we have rights to disallow (both individually and as a society). It is also a question of when manipulative use of targeted ads constitutes deceptive advertising, which is not protected speech, and what constraints should be placed on paid targeting of messages to users. 
By controlling precision targeted delivery of speech, we can limit harmful behavior in the dissemination of speech -- without censorship of that speech.

While finalizing this post, I realized that Renee DiResta made some similar points under the title Free Speech Is Not the Same As Free Reach, her 2018 Wired article that explains this problem using that slightly different but equally pointed turn of phrase. With some helpful background, DiResta observed that:
...in this moment, the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.
...So what can we do about it? The solution isn’t to outlaw algorithmic ranking or make noise about legislating what results Google can return... 
...there is a trust problem, and a lack of understanding of how rankings and feeds work, and that allows bad-faith politicking to gain traction. The best solution to that is to increase transparency and internet literacy, enabling users to have a better understanding of why they see what they see—and to build these powerful curatorial systems with a sense of responsibility for what they return.
In the following sections, I outline novel suggestions for how to go farther to manage this problem of free reach/free targeting -- in a way that drives the platforms to make their algorithms more controllable by their users, for their users. Notice the semantics: targeting and reach are both done to users -- filtering is done for users.

========================================================
Sidebar: The Elements of Social Media

Before continuing -- since even Zuckerberg seems to be confused about the nature of his elephant -- let's review the essential elements of Facebook and other social media.

Posting: This is the simple part. We start with what Facebook calls the Publisher Box, which allows you to "write something" and post a Status Update that you wish to make available to others. By itself, that is little more than an easy-to-update personal Web site (a "microblogging" site) that makes short content items available to anyone who seeks them out. Other users can do that by going to your Timeline/Wall (for Friends or the Public, depending on settings that you can control). For abuse and regulatory purposes, this aspect of Facebook is essentially a user-friendly Web hosting provider -- with no new First Amendment harms or issues.

Individually Filtered News Feeds: This is where things get new and very different. Your News Feed is an individually filtered view of what your friends are saying or commenting on (including what you "Liked" as a kind of comment). Facebook filters all of that content, based on a secret algorithm, to show you the items it thinks will most likely engage you. This serves as a new kind of automated moderation. Some items are upranked so they will appear in your feed; others are downranked so they will not be shown. That ranking is weighted based on the social graph that connects you to your friends, and their friends, and so on -- how much positive interest each item draws from those the fewest degrees apart from you in your social graph. The ranking is also adjusted based on all the other information Facebook has about you and your friends (from observing activity anywhere in the vast Facebook ecosystem, and from external sources). It is this new individually filtered dissemination function of social media platforms that creates the new kind of conflict between free expression and newly enabled harms. (A further important new layer is the enablement of self-forming Groups of like-minded users who can post items to the group -- and so have them filtered into the feeds of other group members, much like a special type of Friend.)
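To make that mechanism concrete, here is a minimal, hypothetical sketch of such a filtered feed -- emphatically not Facebook's actual (secret) algorithm, just an illustration of the principle that each candidate item is scored by social-graph proximity and predicted engagement, and only the top-scoring items are shown:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # the platform's guess at how likely you are to react

def rank_feed(posts, graph_distance, feed_size=10):
    """Hypothetical feed filter: weight each post by how close its author is to you
    in the social graph and by predicted engagement, then keep only the top items."""
    def score(post):
        distance = graph_distance.get(post.author, 6)  # degrees of separation (6 = stranger)
        proximity = 1.0 / (1.0 + distance)             # closer friends count for more
        return proximity * post.predicted_engagement

    return sorted(posts, key=score, reverse=True)[:feed_size]

# Note how a distant acquaintance's post can outrank a close friend's
# if the platform predicts it will provoke more engagement.
```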

Targeted Ads: Layered on top of the first two elements, ads are a special kind of posting in which advertisers pay Facebook to have their postings selectively filtered into the news feeds of individual users. Importantly, what is new in social media is that an ad is no longer just crudely targeted to some page in a publication or some time-slot on a video channel that goes to all viewers of that page or channel. Instead, it is precision targeted (microtargeted) to a set of users who fit some narrowly defined combination of criteria (or to a Custom Audience based on specific email addresses). Thus individualized messages can be targeted to just those users predicted to be especially receptive or easily manipulated -- and remain unseen by others. This creates an entirely new category of harm that is both powerful and secretive. (How insidious this can be has already been demonstrated by Cambridge Analytica's abuse of Facebook.) In this respect it is much like subliminal advertising (which is banned and not afforded First Amendment protection). The questions about the harm of paid political advertising are especially urgent and compelling, as expressed by none other than Jack Dorsey of Twitter, who has just taken an opposite stand from Zuckerberg, saying “This isn’t about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle. It’s worth stepping back in order to address.” (See more in the "Coda: The urgent issue of paid political advertising.")
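To see how this differs in kind from buying a page in a newspaper or a time-slot on TV, here is a hypothetical sketch of audience assembly (the attribute names are illustrative, not any platform's real targeting options): the advertiser specifies a narrow conjunction of criteria, or uploads a list of email addresses, and only the matching users ever see the message.

```python
def select_audience(users, criteria=None, custom_emails=None):
    """Hypothetical microtargeting: return only those users who match every
    advertiser-specified attribute, or whose email is on an uploaded list."""
    if custom_emails is not None:  # "Custom Audience"-style targeting
        return [u for u in users if u.get("email") in custom_emails]
    return [u for u in users
            if all(u.get(attr) == value for attr, value in criteria.items())]

# e.g. criteria = {"county": "X", "age_band": "50-64",
#                  "inferred_trait": "distrustful of institutions"}
# Everyone outside that slice never sees the ad -- or knows it ran.
```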
========================================================

Why these principles?

For an enlightening and well-researched explanation of the legal background behind my three principles, I recommend The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms, by Harold Feld of Public Knowledge. (My apologies if I mis-characterize any of his points here.)

Feld's Chapter V parses these issues nicely, with a detailed primer on First Amendment issues, as evolved in communications and media law and regulation history. It also provides an analysis of how these platforms are a new kind of hybrid of direct one-to-one and one-to-many communications -- and how they add a new level of self-organizing many-to-many communities (fed by the new filtering algorithms). He explains why we should preserve strong freedom of speech for the one-to-one, but judiciously regulate the one-to-many. He also notes how facilitating creation of self-organizing communities introduces a new set of dangerous issues (including the empowerment of terrorist and hate groups who were previously isolated).

I have previously expressed similar ideas, focusing on better ways to do the filtering and precision targeting of content to an individual level that powers the one-to-many communication on these platforms and drives their self-organizing communities. That filtering and targeting is a quantum leap beyond anything ever before enabled at scale. Unfortunately, it is currently optimized for advertiser value, rather than user value.

The insidious new harm in false speech and other disinformation on these platforms is not in the speech, itself -- and not in simple distribution of the speech -- but in the abuse of this new platform service of precision targeting (microtargeting). Further, the essential harm of the platforms is not that they have our personal information, but in what they do with it. As described in the sidebar above, filtering -- based on our social graphs and other personal data -- is the core service of social media, and that can be a very valuable service. This filtering acts as a new, automated, form of moderation -- one that emerges from the platform's algorithms as they both drive and are driven by the ongoing activity of its users in a powerful new kind of feedback loop. The problem we now face with social media arises when that filtering/moderation service is misguided and abused:
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
So that is where some combination of self-regulation and government regulation is most needed. Feld points to many relevant precedents for content moderation that have been held to be consistent with First Amendment rights, and he suggests that this is a fruitful area for regulatory guidance. My perspective on this is:
  • Regulation and platform self-regulation can be applied to limit social media harms, without impermissible limitation of rights of speech or the press
  • Free expression always entails some risk of harm that we accept as a free society.
  • The harm we can best protect against is not the posting of harmful content, but the delivering of that harmful content to those who have not specifically sought it out. 
  • That is where Zuckerberg completely misses the point (whether by greed, malice, or simple naivete -- “It is difficult to get a man to understand something, when his job depends on his not understanding it”).
  • And that is where many of Zuckerberg's opponents waste their energy fighting the wrong battle -- one they cannot and should not win. 
Freedom of speech (posting), not freedom of intrusion on others who have not invited it.

That new kind of intrusion is the essential issue that most discussion seems to be missing.
  • I suggest that users should retain the right to post information with few restrictions (the narrow exceptions that have traditionally been allowed by the courts as appropriate limits to First Amendment rights). 
  • That can be allowed without undue harm, as long as objectionable content is automatically downranked enough in the filtering (moderation) process to largely avoid sending it to users who do not want such content.
  • This is consistent with the safe-harbor provisions of Section 230 of the Communications Decency Act of 1996. That was created with thought to the limited and largely unmoderated posting functions of early Web aggregators (notably CompuServe and Prodigy, as litigated at the time). It also accepted the freedom of the myriad independent Web sites that one had to actively seek out. 
  • Given the variation in community standards that complicate the handling of First Amendment rights by global platforms, filtering can also be applied to selectively restrict distribution of postings that are objectionable in specific communities or jurisdictions, without restricting posters or other allowable recipients.
As an important complement to this understanding of the problem, I also argue that users should be granted significant ability to customize the filtering process that serves them. That could better limit the exposure of users (and communities of users) to undesired content, without censoring material they do want.
  • Filtering should be a service for users, and thus should be selectable and adjustable by users to meet their individual desires. That customization should be dynamically modifiable, as a user's desires vary from time to time and task to task. (Some similar selectability has been offered to a limited extent for search -- and should apply even more fully to feeds, recognizing that search and feeds are complementary services.) A rough sketch of what such user-held settings might look like follows this list.
  • Issues here relate not only to one-to-one versus one-to-many, but also to distinguishing the user-active "pull" of requested information (such as a Web site access) from the user-passive "push" of unsolicited information in a platform-driven feed. Getting much smarter about that would have huge value to users, as well as limiting abuses. 
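Here is that rough sketch of user-held filter settings -- purely illustrative names and defaults, assumed for the example, meant only to show that the knobs would belong to the user and could change from task to task:

```python
from dataclasses import dataclass, field

@dataclass
class FilterPreferences:
    """Hypothetical user-owned filter settings, consulted before anything
    is pushed into the feed -- and adjustable by the user at any time."""
    mode: str = "relaxation"              # e.g. "challenge", "news", "humor"
    blocked_topics: set = field(default_factory=set)
    min_source_quality: float = 0.5       # 0..1, the user's own quality bar
    allow_unsolicited_push: bool = False  # pull-only unless the user opts in

    def switch_mode(self, new_mode: str):
        """Desires vary from task to task and time to time."""
        self.mode = new_mode

prefs = FilterPreferences(blocked_topics={"outrage-bait"})
prefs.switch_mode("challenge")            # ask to be stretched, not soothed
```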
Recipient-controlled "censorship": content filtering, user choice, and competitive innovation

I suggest new levels of censorship of social media postings are generally not needed, because filtering enables a new kind of recipient-controlled "censorship" of delivery.

Social media work because they offer a new kind of filtering service for users -- most particularly, filtering a feed based on one's social graph. That has succeeded in spite of the fact that the platforms currently give their users little say over how that filtering is done (beyond specifying the social graph), and largely use it to manipulate their users rather than serve them. I put that forth as a central argument for regulation and antitrust action against the platforms.

Filtering algorithms should give users the kind of content they value, when they value it:
  • to include or exclude what the user considers to be objectionable or of undesired quality generally
  • to be dynamically selectable (or able to sense the user's mood, task, and flow state) 
  • to filter for challenge, enlightenment, enjoyment, humor, emotion, support, camaraderie, or relaxation at any given time. 
I explain in detail how smart use of "augmented intelligence" that draws on human inputs can enable that in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings and in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News. This kind of hybrid man+machine intelligence can be far more powerful (and dynamically responsive) than either human or machine intelligence alone in determining the relevance, value, and legitimacy of social media postings (and ads). With this kind of smart real-time filtering of our feeds to protect us, censorship of postings can be limited to clearly improper material. Such methods have gotten little attention because Facebook is secretive about its filtering methods, and has had little incentive to develop them to serve users in this way. (But Google's PageRank algorithm has demonstrated the power of such multilevel rate the raters techniques to determine the relevance, value, and legitimacy of content.)
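As a rough illustration of how "rate the raters and weight the ratings" can work (a simplified sketch of the idea in those posts, not a production algorithm), each rater's influence on an item's score is weighted by a reputation that is itself earned by how well their past ratings have held up:

```python
def item_score(ratings, reputation):
    """Weight each rating of an item by the rater's current reputation.
    `ratings` is a list of (user, value) pairs with values in 0..1."""
    total_weight = sum(reputation[user] for user, _ in ratings)
    if total_weight == 0:
        return 0.0
    return sum(reputation[user] * value for user, value in ratings) / total_weight

def update_reputations(reputation, ratings, consensus, learning_rate=0.1):
    """Raters who agreed with the weighted consensus gain reputation;
    raters far from it lose some -- a crude 'rate the raters' feedback loop."""
    for user, value in ratings:
        error = abs(value - consensus)  # 0 = agreed fully, 1 = opposite
        reputation[user] = max(0.0, reputation[user] + learning_rate * (0.5 - error))
    return reputation

# Iterated over many items, trusted raters emerge from the crowd --
# much as PageRank lets authoritative pages emerge from the link graph.
```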

A monolithic platform like Facebook would be hard-pressed to deliver that level of flexibility and innovation for a full range of user desires and skill levels even if it wanted to. Key strategies to meet this complex need are:
  • to enable users to select from an open market in filtering services, each filtering service provider tuning its algorithms to provide value that competes in the marketplace to appeal to specific segments of users 
  • to combine multiple filtering services and algorithms to produce a desired overall effect
  • to allow filtering algorithm parameters to be changed by their users to vary the mix of algorithms and the operation of individual algorithms at will
  • to also factor in whatever "expert" content rating services they want.
(For an example of how such an open market might be shaped, consider the long-successful model of the open market for analytics that are used to filter financial market data to rank investment options. Think of social media as having user interface agents, repositories of posts, repositories of social graphs, and filtering/presentation tools, where the latter correspond to the financial analytics. Each of those elements might be separable and interoperable in an open competitive market.) 
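A minimal sketch of how that composition might work -- assuming hypothetical, interchangeable services that each score a post from 0 to 1, with weights the user sets and can change at will:

```python
def combined_score(post, services, weights):
    """Blend the scores of several independent filtering services,
    weighted by the user's own preferences."""
    return sum(weights[name] * service(post) for name, service in services.items())

# Hypothetical interoperable services competing in an open market:
services = {
    "my_social_graph":  lambda post: post.get("friend_affinity", 0.0),
    "fact_check_coop":  lambda post: post.get("accuracy_rating", 0.5),
    "local_news_guild": lambda post: post.get("local_relevance", 0.0),
}
weights = {"my_social_graph": 0.3, "fact_check_coop": 0.5, "local_news_guild": 0.2}

candidate_posts = [
    {"friend_affinity": 0.9, "accuracy_rating": 0.2, "local_relevance": 0.1},
    {"friend_affinity": 0.2, "accuracy_rating": 0.9, "local_relevance": 0.8},
]
feed = sorted(candidate_posts,
              key=lambda p: combined_score(p, services, weights), reverse=True)
```

Swapping in a different mix of services, or re-weighting them, changes the feed without anyone censoring a single post.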

These proposals have huge implications for speech and democracy, as well as for competitive innovation in augmenting the development of human wisdom (or de-augmenting it, as is happening now). That is how Facebook and other platforms could be much better at "bringing people closer together" without being so devilishly effective at driving them apart.

The need for a New Digital Platform Agency 

While adding bureaucracy is always a concern -- especially relating to the dynamic competitive environment of emerging digital technology -- there are strong arguments for that in this context.

The world is coming to realize that the Chicago School of antitrust that dominated the recent era of narrow antitrust enforcement is not enough. Raising "costs" to consumers is not a sufficient measure of harm when direct monetary costs to consumers are "zero." The real costs are not zero. Understanding what social media could do for us provides a reference point that shows how much we are really paying for the low-value platform services we now have. We cannot afford these supposedly "free" services!

Competition for users could change the value proposition, but this space is too complex, dynamic, and dependent on industry and technical expertise to be left to self-regulation, the courts, or legislation.

We need a new, specialized agency. The Feld report (cited above) offers in-depth support for such an agency, as do the three references recommended in the announcement of a conference on The Debate Over a New Digital Platform Agency: Developing Digital Authority and Expertise. (I recently attended that conference, and plan to post more about it in the near future).

Just touching on this theme, we need a specialist agency that can regulate the platforms with expertise (much as the FCC has regulated communications and mass media) to find the right balance between the First Amendment and the harmful speech that it does not protect -- and to support open, competitive innovation as this continues to evolve. Many are unaware of the important and productive history here. (I observed from within the Bell System how the FCC and the courts regulated and eventually broke it up, and how this empowered the dynamic competition that led to the open Web and the Internet of Things that we now enjoy.) Inspired by those lessons, I offer specific new suggestions for regulation in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm. Creating such an agency will take time, and be challenging -- but the alternative is to put not only the First Amendment, but our democracy and our freedom at risk.

These problems are hard, both for user speech, and for the special problem of paid advertising, which gives the platforms an incentive to serve advertisers, not users. As Dorsey of Twitter put it:
These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings. Trying to fix both means fixing neither well, and harms our credibility. ...For instance, it‘s not credible for us to say: “We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want! 😉”
I have outlined a promising path toward solutions that preserve our freedom of speech while managing proper targeting of that speech, the underlying issue that few seem to recognize. But it will be a long and winding road, one that almost certainly requires a specialized agency to set guidelines, monitor, and adjust, as we find our way in this evolving new world.

Coda: The urgent issue of paid political advertising

The current firestorm regarding paid political advertising highlights one area where my proposals for smarter filtering and expert regulation are especially urgent, and where the case for reasonable controls on speech is especially well founded. My arguments for user control of filtering would have advertising targeting options be clearly subordinate to user filtering preferences. That seems to be sound in terms of First Amendment law, and common sense. Amplifying that are the arguments I have made elsewhere (Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data) that advertising can be done in ways that better serve both users and well-intended advertisers. All parties win when ads are relevant, useful, and non-intrusive to their recipients.

But given the urgency here, for temporary relief until such selective controls can be put into effect, Dorsey's total ban on Twitter seems well worth considering for Facebook as well. Zuckerberg's defensive waving of the flag of free expression seems naive and self-serving.

[See my newer post (11/6) on stopgap solutions for controversial and urgent concerns leading into the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. It reorganizes and expands on updates that are retained below.]

---
See the Selected Items tab for more on this theme.


==================================================
==================================================
Updates on stopgaps have since been consolidated into an 11/6 post: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging...
That is more complete, but this section is retained as a history of updates.

[Update 11/2/19:]

An excellent analysis of the special case of political speech related to candidates is in Sam Lessin's 2016 Free Speech and Democracy in the Age of Micro-Targeting, which makes a well-reasoned argument that:
The growth of micro-targeting and disappearing messaging on the internet means that politicians can say different things to large numbers of people individually, in a way that can’t be monitored. Requirements to put this discourse on the public record are required to maintain democracy.
Lessin has a point that the secret (and often disappearing) nature of these communications, even when invited, is a threat to democracy. I agree that his remedy of disclosure is powerful, and it is a potentially important complement to my broad remedy of user-controlled targeting filters.

2020 Stopgaps?  

As to the urgent issue of the 2020 election, acting quickly will be hard. My proposal for user-controlled targeting filters is unlikely to be feasible as soon as 2020. So what can we do now?

Perhaps most feasible for 2020 is a simplistic stop-gap solution that might be easy to apply quickly: just enact a temporary ban -- not on placing political ads, but on the individualized targeting of political ads. Do this as a simple and safe compromise between the Zuckerberg and Dorsey policies until we have a regulatory regime to manage micro-targeting properly:
  • Avoid a total ban on political ads on social media, but allow ads to run only as they do in traditional media -- no more narrowly targeted than traditional placements. 
  • Disallow precision targeting to individuals: allow as many or as few ads as advertisers wish to purchase, but target them to all users, or to whatever random subset of all users fills the paid allotment.
  • A slight extension of this might permit a "traditional" level of targeting, such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.
This is consistent with my point that the harm is not the speech, but the precision targeting of the speech, and would buy time to develop a more nuanced approach. It is something that Zuckerberg, Dorsey, and others could just decide to do on their own (...or be pressured to do).
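A minimal sketch of how a platform could enforce such a stopgap mechanically (the category names and the size floor are illustrative assumptions, not any platform's actual policy or Stamos' specific proposal):

```python
ALLOWED_BROAD_CATEGORIES = {"all_users", "state", "congressional_district"}
MINIMUM_AUDIENCE_SIZE = 100_000  # an illustrative "legal floor" on segment size

def political_ad_allowed(targeting):
    """Hypothetical stopgap rule: political ads may use at most one broad
    geographic/affinity category and no individual-level criteria."""
    if targeting.get("custom_audience"):        # no uploaded email lists
        return False
    if targeting.get("individual_attributes"):  # no behavioral or psychographic slicing
        return False
    if targeting.get("category") not in ALLOWED_BROAD_CATEGORIES:
        return False
    return targeting.get("estimated_audience_size", 0) >= MINIMUM_AUDIENCE_SIZE

print(political_ad_allowed({"category": "state", "estimated_audience_size": 2_500_000}))  # True
```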

[Update 11/3/19:] Siva Vaidhyanathan made a very similar proposal to my stop-gap suggestion: "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race." That seems a good compromise that could stand until we have a better solution (or become a part of a more complete solution). (I am not sure if Vaidhyanathan meant to allow targeting to the level of individual districts in a multi-district election, but it seems to me that would be sufficient to enable reasonable visibility and not much harder to do quickly than the broader bans I had suggested.)

[Update 11/5/19:] Three other experts have argued for much the same kind of limits on targeting as the effective middle-ground solution.

(Alex Stamos from CJR)
One is Alex Stamos, in a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos. Stamos offers a useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He lays out the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression get stronger at the bottom: "the right of individuals to be exposed to information they have explicitly sought out."

Stamos argues that "Tech platforms should absolutely not fact-check candidates' organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."

Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."

And -- I just realized there had been another powerful voice on this issue! Ellen Weintraub, chair of the Federal Election Commission, writing in The Washington Post: Don’t abolish political ads on social media. Stop microtargeting. She suggests the same kind of limits on targeting of political ads outlined here, in even more specific terms (emphasis added):
A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed. Want to influence the governor’s race in Kansas? Your Internet ads could run across Kansas, or target individual counties, but that’s it. Running at-large for the Houston City Council? You could target the whole city or individual council districts. Presidential ads could likely be safely targeted down two levels, to the state and then to the county or congressional district level.
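Weintraub's rule of thumb is concrete enough to express mechanically. A sketch, using my own simplified ordering of political levels (illustrative only):

```python
# Political levels from broadest to narrowest -- a simplification for illustration.
LEVELS = ["national", "state", "county_or_district", "city", "council_district"]

def allowed_targeting_levels(election_level, levels_below=1):
    """Allow targeting no more specific than `levels_below` political levels
    beneath the office being contested (Weintraub suggests one, or two for president)."""
    i = LEVELS.index(election_level)
    return LEVELS[: i + levels_below + 1]

print(allowed_targeting_levels("state"))        # governor's race: statewide or by county
print(allowed_targeting_levels("national", 2))  # president: down to county/district
```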
Maybe this flurried convergence of informed opinion will actually lead to some effective action.

Until we get more key people (including the press) to have some common understanding of what the problem is, it will be very hard to get a solution. For most of us, that is just a matter of making some effort to think clearly. For some it seems to be a matter of motivated reasoning that makes them not want to understand. (Many -- not always the same people -- have suggested that both Zuckerberg and Dorsey suffer from motivated reasoning.)

...And, as addressed in the first sections of this post, maybe that will help move us toward broader action to regain the promise of social media -- to apply smart filtering to make its users smarter, not dumber!