Wednesday, November 06, 2019

2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging

Zuckerberg has rationalized that Facebook should do nothing about lies, while Dorsey has taken Twitter to the other extreme of an indiscriminate ad ban. But a readily actionable Goldilocks solution has emerged in response -- and there are reports that Facebook is considering it.*

[This post focuses on stopgap solutions for controversial and urgent concerns leading into the 2020 election. My prior post, Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us), addresses the deeper abuses related to microtargeting and how everything in our feeds is filtered.]

The real problem

While dishonest political ads are a problem, they are nothing new, and we have long known how to deal with them. What is new is the microtargeting of dishonest ads, and that has created a crisis that puts the fairness of our elections in serious doubt. Numerous sophisticated observers -- including the chair of the Federal Election Commission and the former head of security at Facebook -- have identified a far better stopgap solution than an outright ban on all political ads (or doing nothing).

Since the real problem is microtargeting, the “just right” quick solution is to limit microtargeting (at least until we have better ways to control it).  Microtargeting provides the new and insidious capability for a political campaign to precisely tailor its messaging to microsegments of voters who are vulnerable to being manipulated in one way, while sending many different, conflicting messages to other microsegments who can be manipulated in other ways -- with precision targeting down to designated sets of individual voters (such as with multifaceted categories or with Facebook Custom Audiences). The social media feedback cycle can further enlist those manipulated users as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). This new kind of message amplification has been weaponized to incite extreme radicalization and even violent action.

We must be clear that there is a right of speech, but only limited rights to amplification or targeting. We have always had political ads that lie. America was founded on the principle that the best counter to lies is not censorship, but truth. Policing lies is a very slippery slope, but when a lie is out in the open, it can be exposed, debunked, and shamed. Sunlight has proven the best disinfectant. With microtargeting there is no exposure to sunlight and shame.
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
  • These abuses are hidden from others and generally not auditable. That compounds the harm of lies, since they can be targeted to manipulate factions surreptitiously. 
Consensus for a stopgap solution

In the past week or so, limits on microtargeting have been suggested to take a range of forms, all of which seem workable and feasible:
  • Ellen Weintraub, chair of the Federal Election Commission (in the Washington Post), Don’t abolish political ads on social media. Stop microtargeting, suggests “A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed.”
  • Alex Stamos, former Facebook security chief, in an interview with Columbia Journalism Review, suggests “There are a lot of ways you can try to regulate this, but I think the simplest is a requirement that the "segment" somebody can hit has a floor. Maybe 10,000 people for a presidential election, 1,000 for a Congressional.”
  • Siva Vaidhyanathan, in the NY Times, suggests "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race."
  • In my prior post, I suggested “allow ads to only be run...in a way that is no more targeted than traditional media…such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.”
There seems to be an emerging consensus that this is the best we can expect to achieve in the short run, in time to protect the 2020 election. This is something that Zuckerberg, Dorsey, and others (such as Google) could just decide to do -- or might be pressured to do. NBC News reported yesterday that Facebook is considering such an action.

We should all focus on avoiding foolish debate over naive framing of this problem as a dichotomy of "free speech" versus "censorship." The real problem is not the right of free speech, but the more nuanced issues of limited rights to be heard versus the right not to be targeted in ways that use our personal data against our interests.

The longer term

In the longer term, dishonest political ads are only one part of this new problem of abuse of microtargeting, which applies to speech of all kinds -- paid or unpaid, political or commercial. Especially notable is the fact that much of what Cambridge Analytica did was to get ordinary people to spread lies created by bots posing as ordinary people. To solve these problems, we need to change not only how the platforms handle identity, but also how they filter content into our feeds. Filtering content into our feeds is a user service that should be designed to provide the value that users, not advertisers, seek.

There are huge opportunities for innovation here. My prior post explains why, shows how much we are missing because the platforms are now driven by advertisers' needs for amplification of their voices rather than users' needs for filtering of all voices, and points to how we might change that.


See my prior post for more, plus links to related posts.

---
*[Update 11/7:] The WSJ reports Google is considering political ad targeting limits as well.
[Update 11/20:] Google has announced it will impose political ad targeting limits -- Zuck, your move.

[Update 11/8:] It seems worth repeating from my prior post this bit and helpful diagram: 


(Alex Stamos from CJR)
In a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos, Stamos offers this useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression grow stronger toward the bottom, culminating in "the right of individuals to be exposed to information they have explicitly sought out."

Thursday, October 31, 2019

Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us)

(Image adapted from The Great Hack movie)
Zuckerberg's recent arguments that Facebook should restrict free expression only in the face of imminent, clear, and egregious harm have generated a storm of discussion -- and a very insightful counter from Dorsey (at Twitter).

But most discussion of these issues misses how social media can be managed without sacrificing our constitutionally protected freedom of expression. It oversimplifies how speech works in social media and misdiagnoses the causes of harm and abuse. 

[A newer 11/6 post focuses on stopgap solutions for controversial and urgent concerns leading into the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. This post addresses the deeper abuses related to microtargeting and how everything in our feeds is filtered.]

Much of this debate seems like blind men arguing over how to control an elephant when they don't yet understand what an elephant is. That is compounded by an elephant driver who exploits that confusion to do what he likes. (Is he, too, blind? ...or just motivated not to see the harm his elephant does?)

I suggest some simple principles that can lead to a more productive solution. Effective regulation -- whether self-regulation by the platforms, or by government -- requires understanding that we are really dealing with a new and powerfully expanded kind of hybrid media -- which is provided by a new and powerfully expanded kind of hybrid platform. That understanding suggests how to find a proper balance that protects free expression without doing great harm.

(This is a preliminary outline that I hope to expand on and refine. In the  meantime, some valuable references are suggested.) 

[See updates at the end -- especially on urgent issues relating to the 2020 election.]

The essence of the problem

I suggest these three simple principles as overarching:
  1. Clearly, we need to protect "free speech," and a "free press," the First Amendment rights that are essential to our democracy and to our "marketplace of ideas." Zuckerberg is right that we need to be vigilant against overreaching cures -- in the form of censorship -- that may be worse than the disease.
  2. But he and his opponents both seem to misunderstand the nature of these new platforms. The real problem arises from the new services these platforms enable: precision targeted delivery services are neither protected speech, nor the protected press. They are a new kind of add-on service, separate from speech or the press. 
  3. Enabling precision targeted delivery against our interests, based on data extracted from us without informed consent is an abuse of power -- by the platforms -- and by the advertisers who pay them for that microtargeted delivery service. This is not a question of whether our data is private (or even wholly ours) -- it is a question of the legitimate use of data that we have rights in versus uses of that data that we have rights to disallow (both individually and as a society). It is also a question of when manipulative use of targeted ads constitutes deceptive advertising, which is not protected speech, and what constraints should be placed on paid targeting of messages to users. 
By controlling precision targeted delivery of speech, we can limit harmful behavior in the dissemination of speech -- without censorship of that speech.

While finalizing this post, I realized that Renee DiResta made some similar points under the title Free Speech Is Not the Same As Free Reach, her 2018 Wired article that explains this problem using that slightly different but equally pointed turn of phrase. With some helpful background, DiResta observed that:
...in this moment, the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.
...So what can we do about it? The solution isn’t to outlaw algorithmic ranking or make noise about legislating what results Google can return... 
...there is a trust problem, and a lack of understanding of how rankings and feeds work, and that allows bad-faith politicking to gain traction. The best solution to that is to increase transparency and internet literacy, enabling users to have a better understanding of why they see what they see—and to build these powerful curatorial systems with a sense of responsibility for what they return.
In the following sections, I outline novel suggestions for how to go farther to manage this problem of free reach/free targeting -- in a way that drives the platforms to make their algorithms more controllable by their users, for their users. Notice the semantics: targeting and reach are both done to users -- filtering is done for users.

========================================================
Sidebar: The Elements of Social Media

Before continuing -- since even Zuckerberg seems to be confused about the nature of his elephant -- let's review the essential elements of Facebook and other social media.

Posting: This is the simple part. We start with what Facebook calls the Publisher Box that allows you to "write something" to post a Status Update that you wish to be available to others. By itself, that is little more than an easy-to-update personal Web site (a "microblogging" site) that makes short content items available to anyone who seeks them out. Other users can do that by going to your Timeline/Wall (for Friends or the Public, depending on settings that you can control). For abuse and regulatory purposes, this aspect of Facebook is essentially a user-friendly Web hosting provider -- with no new First Amendment harms or issues.

Individually Filtered News Feeds: This is where things get new and very different. Your News Feed is an individually filtered view of what your friends are saying or commenting on (including what you "Liked" as a kind of comment). Facebook's filtering algorithms filter all of that content, based on some secret algorithm, to show you the items Facebook thinks will most likely engage you. This serves as a new kind of automated moderation. Some items are upranked so they will appear in your feed, others are downranked so they will not be shown in your feed. That ranking is weighted based on the social graph that connects you to your friends, and their friends, and so on -- how much positive interest each item draws from those the fewest degrees apart from you in your social graph. That ranking is also adjusted based on all the other information Facebook has about you and your friends (from observing activity anywhere in the vast Facebook ecosystem, and from external sources). It is this new individually filtered dissemination function of social media platforms that creates this new kind of conflict between free expression and newly enabled harms. (A further important new layer is the enablement of self-forming Groups of like-minded users who can post items to the group -- and so have them filtered into the feeds of other group members, much like a special type of Friend.)
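To make that concrete, here is a minimal, purely illustrative sketch of that kind of ranking (the field names and weighting are my own assumptions for illustration -- Facebook's actual algorithm is secret and vastly more complex):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float          # reactions/comments/shares the item has drawn so far
    degrees_from_user: int     # 1 = friend, 2 = friend-of-friend, ...
    predicted_interest: float  # model's guess (0..1) that this user will engage

def feed_score(post: Post) -> float:
    """Toy ranking: items from closer social-graph ties, with more engagement
    and higher predicted interest, are upranked; everything else sinks."""
    graph_weight = 1.0 / post.degrees_from_user   # nearer connections count more
    return graph_weight * post.engagement * post.predicted_interest

def build_feed(candidates: list[Post], limit: int = 20) -> list[Post]:
    # Items that rank below the cutoff simply never appear in the feed.
    return sorted(candidates, key=feed_score, reverse=True)[:limit]
```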

Targeted Ads: Layered on top of the first two elements, ads are a special kind of posting in which advertisers pay Facebook to have their postings selectively filtered into the news feeds of individual users. Importantly, what is new in social media is that ads are no longer just crudely targeted to some page in a publication or some time-slot in a video channel that goes to all viewers of that page or channel. Instead, an ad is precision targeted (microtargeted) to a set of users who fit some narrowly defined combination of criteria (or to a Custom Audience based on specific email addresses). Thus individualized messages can be targeted to just those users predicted to be especially receptive or easily manipulated -- and to remain unseen by others. This creates an entirely new category of harm that is both powerful and secretive. (How insidious this can be has already been demonstrated in Cambridge Analytica's abuse of Facebook.)  In this respect it is much like subliminal advertising (which is banned and not afforded First Amendment protection). The questions about the harm of paid political advertising are especially urgent and compelling, as expressed by none other than Jack Dorsey of Twitter, who has just taken an opposite stand from Zuckerberg, saying “This isn’t about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle. It’s worth stepping back in order to address.” (See more in the "Coda: The urgent issue of paid political advertising.")
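For contrast, here is an equally simplified sketch of precision targeting (the attribute names and criteria are hypothetical, not Facebook's actual ad interface). Stacking just a few criteria carves out a microsegment that sees the ad while everyone else never knows it ran:

```python
def select_audience(all_users: list[dict], criteria: dict) -> list[dict]:
    """An ad reaches only the users who match every targeting criterion."""
    return [u for u in all_users
            if all(u.get(key) == value for key, value in criteria.items())]

# Hypothetical criteria: each added attribute shrinks the audience further,
# down to a narrow segment invisible to outside scrutiny.
criteria = {
    "county": "Kent",
    "age_band": "55-64",
    "inferred_interest": "local_crime_news",
    "inferred_persuadability": "high",
}
```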
========================================================

Why these principles?

For an enlightening and well-researched explanation of the legal background behind my three principles, I recommend The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms, by Harold Feld of Public Knowledge. (My apologies if I mis-characterize any of his points here.)

Feld's Chapter V parses these issues nicely, with a detailed primer on First Amendment issues, as evolved in communications and media law and regulation history. It also provides an analysis of how these platforms are a new kind of hybrid of direct one-to-one and one-to-many communications -- and how they add a new level of self-organizing many-to-many communities (fed by the new filtering algorithms). He explains why we should preserve strong freedom of speech for the one-to-one, but judiciously regulate the one-to-many. He also notes how facilitating creation of self-organizing communities introduces a new set of dangerous issues (including the empowerment of terrorist and hate groups who were previously isolated).

I have previously expressed similar ideas, focusing on better ways to do the filtering and precision targeting of content to an individual level that powers the one-to-many communication on these platforms and drives their self-organizing communities. That filtering and targeting is quantum leaps beyond anything ever before enabled at scale. Unfortunately, it is currently optimized for advertiser value, rather than user value.

The insidious new harm in false speech and other disinformation on these platforms is not in the speech, itself -- and not in simple distribution of the speech -- but in the abuse of this new platform service of precision targeting (microtargeting). Further, the essential harm of the platforms is not that they have our personal information, but in what they do with it. As described in the sidebar above, filtering -- based on our social graphs and other personal data -- is the core service of social media, and that can be a very valuable service. This filtering acts as a new, automated, form of moderation -- one that emerges from the platform's algorithms as they both drive and are driven by the ongoing activity of its users in a powerful new kind of feedback loop. The problem we now face with social media arises when that filtering/moderation service is misguided and abused:
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
So that is where some combination of self-regulation and government regulation is most needed. Feld points to many relevant precedents for content moderation that have been held to be consistent with First Amendment rights, and he suggests that this is a fruitful area for regulatory guidance. My perspective on this is:
  • Regulation and platform self-regulation can be applied to limit social media harms, without impermissible limitation of rights of speech or the press
  • Free expression always entails some risk of harm that we accept as a free society.
  • The harm we can best protect against is not the posting of harmful content, but the delivering of that harmful content to those who have not specifically sought it out. 
  • That is where Zuckerberg completely misses the point (whether by greed, malice, or simple naivete -- “It is difficult to get a man to understand something, when his job depends on his not understanding it”).
  • And that is where many of Zuckerberg's opponents waste their energy fighting the wrong battle -- one they cannot and should not win. 
Freedom of speech (posting), not freedom of intrusion on others who have not invited it.

That new kind of intrusion is the essential issue that most discussion seems to be missing.
  • I suggest that users should retain the right to post information with few restrictions (only the narrow exceptions that the courts have traditionally allowed as appropriate limits to First Amendment rights). 
  • That can be allowed without undue harm, as long as objectionable content is automatically downranked enough in the filtering (moderation) process to largely avoid sending it to users who do not want such content.
  • This is consistent with the safe-harbor provisions of Section 230 of the Communications Decency Act of 1996. That was created with thought to the limited and largely unmoderated posting functions of early Web aggregators (notably CompuServe and Prodigy, as litigated at the time). That also accepted the freedom of the myriad independent Web sites that one had to actively seek out. 
  • Given the variation in community standards that complicate the handling of First Amendment rights by global platforms, filtering can also be applied to selectively restrict distribution of postings that are objectionable in specific communities or jurisdictions, without restricting posters or other allowable recipients.
As an important complement to this understanding of the problem, I also argue that users should be granted significant ability to customize the filtering process that serves them. That could better limit the exposure of users (and communities of users) to undesired content, without censoring material they do want.
  • Filtering should be a service for users, and thus should be selectable and adjustable by users to meet their individual desires. That customization should be dynamically modifiable, as a user's desires vary from time to time and task to task. (Some similar selectability has been offered to a limited extent for search -- and should apply even more fully to feeds, recognizing that search and feeds are complementary services.) 
  • Issues here relate not only to one-to-one versus one-to-many, but also to the distinction between the user-active "pull" of requested information (such as a Web site access) and the user-passive "push" of unsolicited information in a platform-driven feed. Getting much smarter about that would have huge value to users, as well as limiting abuses. 
Recipient-controlled "censorship": content filtering, user choice, and competitive innovation

I suggest new levels of censorship of social media postings are generally not needed, because filtering enables a new kind of recipient-controlled "censorship" of delivery.

Social media work because they offer a new kind of filtering service for users -- most particularly, filtering a feed based on one's social graph. That has succeeded in spite of the fact that the platforms currently give their users little say over how that filtering is done (beyond specifying the social graph), and largely use it to manipulate their users rather than serve them. I put that forth as a central argument for regulation and antitrust action against the platforms.

Filtering algorithms should give users the kind of content they value, when they value it:
  • to include or exclude what the user considers to be objectionable or of undesired quality generally
  • to be dynamically selectable  (or able to sense the user's mood, task, and flow state) 
  • to filter for challenge, enlightenment, enjoyment, humor, emotion, support, camaraderie, or relaxation at any given time. 
I explain in detail how smart use of "augmented intelligence" that draws on human inputs can enable that in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings and in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News. This kind of hybrid man+machine intelligence can be far more powerful (and dynamically responsive) than either human or machine intelligence alone in determining the relevance, value, and legitimacy of social media postings (and ads). With this kind of smart real-time filtering of our feeds to protect us, censorship of postings can be limited to clearly improper material. Such methods have gotten little attention because Facebook is secretive about its filtering methods, and has had little incentive to develop them to serve users in this way. (But Google's PageRank algorithm has demonstrated the power of such multilevel rate the raters techniques to determine the relevance, value, and legitimacy of content.)
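As a minimal sketch of that "rate the raters and weight the ratings" idea (my own toy formulation for illustration, not a production algorithm): item scores and rater reputations are computed together, each rating weighted by its rater's reputation, and each rater's reputation earned by how well their ratings track the emerging consensus -- iterated to a fixed point, loosely analogous to how PageRank iterates over links.

```python
def rate_the_raters(ratings: dict, iterations: int = 20):
    """ratings maps (rater, item) -> a rating in [0, 1].
    Returns (item_scores, rater_reputations)."""
    raters = {r for r, _ in ratings}
    items = {i for _, i in ratings}
    reputation = {r: 1.0 for r in raters}
    score = {i: 0.5 for i in items}

    for _ in range(iterations):
        # An item's score is the reputation-weighted mean of its ratings.
        for i in items:
            votes = [(reputation[r], v) for (r, it), v in ratings.items() if it == i]
            total = sum(w for w, _ in votes)
            score[i] = sum(w * v for w, v in votes) / total if total else 0.5
        # A rater earns reputation by agreeing with the weighted consensus.
        for r in raters:
            errors = [abs(v - score[it]) for (rt, it), v in ratings.items() if rt == r]
            reputation[r] = 1.0 - sum(errors) / len(errors) if errors else 1.0
    return score, reputation
```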

A monolithic platform like Facebook would be hard-pressed to deliver that level of flexibility and innovation for a full range of user desires and skill levels even if it wanted to. Key strategies to meet this complex need are:
  • to enable users to select from an open market in filtering services, each filtering service provider tuning its algorithms to provide value that competes in the marketplace to appeal to specific segments of users 
  • to combine multiple filtering services and algorithms to produce a desired overall effect
  • to allow filtering algorithm parameters to be changed by their users to vary the mix of algorithms and the operation of individual algorithms at will
  • to also factor in whatever "expert" content rating services they want.
(For an example of how such an open market might be shaped, consider the long-successful model of the open market for analytics that are used to filter financial market data to rank investment options. Think of social media as having user interface agents, repositories of posts, repositories of social graphs, and filtering/presentation tools, where the latter correspond to the financial analytics. Each of those elements might be separable and interoperable in an open competitive market.) 
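Here is a bare-bones sketch of what that separation could look like in practice (the service names and weights are hypothetical): each competing filtering service is just a scoring function over posts, and the user's client blends them with weights the user can change at any time.

```python
from typing import Callable

FilterService = Callable[[dict], float]   # post -> relevance score in [0, 1]

def blended_score(post: dict, services: dict[str, FilterService],
                  user_weights: dict[str, float]) -> float:
    """Combine scores from the filtering services this user has chosen,
    weighted by preferences the user can adjust from moment to moment."""
    total_weight = sum(user_weights.values()) or 1.0
    return sum(user_weights[name] * service(post)
               for name, service in services.items()) / total_weight

# A hypothetical mix: a fact-checking service, a local-news service, and a
# "challenge my views" service, weighted as this user prefers today.
services = {
    "factcheck": lambda p: p.get("credibility", 0.5),
    "local":     lambda p: 1.0 if p.get("region") == "my_region" else 0.2,
    "challenge": lambda p: 1.0 - p.get("agreement_with_me", 0.5),
}
user_weights = {"factcheck": 0.5, "challenge": 0.3, "local": 0.2}
```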

These proposals have huge implications for speech and democracy, as well as for competitive innovation in augmenting the development of human wisdom (or de-augmenting it, as is happening now). That is how Facebook and other platforms could be much better at "bringing people closer together" without being so devilishly effective at driving them apart.

The need for a New Digital Platform Agency 

While adding bureaucracy is always a concern -- especially relating to the dynamic competitive environment of emerging digital technology -- there are strong arguments for that in this context.

The world is coming to realize that the Chicago School of antitrust that dominated the recent era of narrow antitrust enforcement is not enough. Raising "costs" to consumers is not a sufficient measure of harm when direct monetary costs to consumers are "zero." The real costs are not zero. Understanding what social media could do for us provides a reference point that shows how much we are really paying for the low-value platform services we now have. We cannot afford these supposedly "free" services!

Competition for users could change the value proposition, but this space is too complex, dynamic, and dependent on industry and technical expertise to be left to self-regulation, the courts, or legislation.

We need a new, specialized agency. The Feld report (cited above) offers in-depth support for such an agency, as do the three references recommended in the announcement of a conference on The Debate Over a New Digital Platform Agency: Developing Digital Authority and Expertise. (I recently attended that conference, and plan to post more about it in the near future.)

Just touching on this theme, we need a specialist agency that can regulate the platforms with expertise (much as the FCC has regulated communications and mass media) to find the right balance between the First Amendment and the harmful speech that it does not protect -- and to support open, competitive innovation as this continues to evolve. Many are unaware of the important and productive history here. (I observed from within the Bell System how the FCC and the courts regulated and eventually broke it up, and how this empowered the dynamic competition that led to the open Web and the Internet of Things that we now enjoy.) Inspired by those lessons, I offer specific new suggestions for regulation in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm. Creating such an agency will take time, and be challenging -- but the alternative is to put not only the First Amendment, but our democracy and our freedom at risk.

These problems are hard, both for user speech, and for the special problem of paid advertising, which gives the platforms an incentive to serve advertisers, not users. As Dorsey of Twitter put it:
These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings. Trying to fix both means fixing neither well, and harms our credibility. ...For instance, it‘s not credible for us to say: “We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want! 😉”
I have outlined a promising path toward solutions that preserve our freedom of speech while managing proper targeting of that speech, the underlying issue that few seem to recognize. But it will be a long and winding road, one that almost certainly requires a specialized agency to set guidelines, monitor, and adjust, as we find our way in this evolving new world.

Coda: The urgent issue of paid political advertising

The current firestorm regarding paid political advertising highlights one area where my proposals for smarter filtering and expert regulation are especially urgent, and where the case for reasonable controls on speech is especially well founded. My arguments for user control of filtering would have advertising targeting options be clearly subordinate to user filtering preferences. That seems to be sound in terms of First Amendment law, and common sense. Amplifying that are the arguments I have made elsewhere (Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data) that advertising can be done in ways that better serve both users and well-intended advertisers. All parties win when ads are relevant, useful, and non-intrusive to their recipients.

But given the urgency here, for temporary relief until such selective controls can be put into effect, Dorsey's total ban on Twitter seems well worth considering for Facebook as well. Zuckerberg's defensive waving of the flag of free expression seems naive and self-serving.

---

[A newer 11/6 post focuses on stopgap solutions for controversial and urgent concerns leading into the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. It reorganizes and further updates some of the information below, but some added breadth and details are here, including some good insights and a diagram by Alex Stamos.]

[Update 11/2/19:]

An excellent analysis of the special case of political speech related to candidates is in Sam Lessin's 2016 Free Speech and Democracy in the Age of Micro-Targeting, which makes a well-reasoned argument that:
The growth of micro-targeting and disappearing messaging on the internet means that politicians can say different things to large numbers of people individually, in a way that can’t be monitored. Requirements to put this discourse on the public record are required to maintain democracy.
Lessin has a point that the secret (and often disappearing) nature of these communications, even when invited, is a threat to democracy. I agree that his remedy of disclosure is powerful, and it is a potentially important complement to my broad remedy of user-controlled targeting filters.

2020 Stopgaps?  

As to the urgent issue of the 2020 election, acting quickly will be hard. My proposal for user-controlled targeting filters is unlikely to be feasible as soon as 2020. So what can we do now?

Perhaps most feasible for 2020 is a simple stopgap solution that could be applied quickly: just enact a temporary ban -- not on placing political ads, but on the individualized targeting of political ads. Do this as a simple and safe compromise between the Zuckerberg and Dorsey policies until we have a regulatory regime to manage micro-targeting properly:
  • Avoid a total ban on political ads on social media, but allow ads to be run only as they are in traditional media, in a way that is no more targeted than traditional media allows. 
  • Disallow precision targeting to individuals: allow as many or as few ads as advertisers wish to purchase, but target them to all users, or to whatever random subset of all users fill the paid allotment.
  • A slight extension of this might permit a "traditional" level of targeting, such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.
This is consistent with my point that the harm is not the speech, but the precision targeting of the speech, and would buy time to develop a more nuanced approach. It is something that Zuckerberg, Dorsey, and others could just decide to do on their own (...or be pressured to do).
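A minimal sketch of how a platform could enforce such a stopgap on its own (the floors and geographic levels are illustrative assumptions, loosely following the Stamos and Weintraub suggestions discussed below, not anyone's actual policy):

```python
MIN_SEGMENT = {"presidential": 10_000, "congressional": 1_000}   # Stamos-style floors

ALLOWED_GEO_LEVEL = {   # Weintraub-style "one level below the election" rule
    "presidential": "county",
    "statewide": "county",
    "city_council": "council_district",
}

GEO_HIERARCHY = ["national", "state", "county", "council_district", "individual"]

def political_ad_allowed(race: str, geo_level: str, audience_size: int) -> bool:
    """Reject political ads targeted more narrowly than the stopgap permits."""
    if audience_size < MIN_SEGMENT.get(race, 1_000):
        return False   # segment too small to be publicly visible and auditable
    allowed = ALLOWED_GEO_LEVEL.get(race, "state")
    return GEO_HIERARCHY.index(geo_level) <= GEO_HIERARCHY.index(allowed)
```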

[Update 11/3/19:] Siva Vaidhyanathan made a very similar proposal to my stop-gap suggestion: "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race." That seems a good compromise that could stand until we have a better solution (or become a part of a more complete solution). (I am not sure if Vaidhyanathan meant to allow targeting to the level of individual districts in a multi-district election, but it seems to me that would be sufficient to enable reasonable visibility and not much harder to do quickly than the broader bans I had suggested.)

[Update 11/5/19:] Three other experts have argued for much the same kind of limits on targeting as the effective middle-ground solution.

(Alex Stamos from CJR)
One is Alex Stamos, in a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos. Stamos offers a useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression grow stronger toward the bottom, culminating in "the right of individuals to be exposed to information they have explicitly sought out."

Stamos argues that "Tech platforms should absolutely not fact-check candidates organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."

Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."

And -- I just realized there had been another powerful voice on this issue! Ellen Weintraub, chair of the Federal Election Commission (in WaPo), Don’t abolish political ads on social media. Stop microtargeting. She suggests the same kind of limits on targeting of political ads outlined here, in even more specific terms (emphasis added):
A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed. Want to influence the governor’s race in Kansas? Your Internet ads could run across Kansas, or target individual counties, but that’s it. Running at-large for the Houston City Council? You could target the whole city or individual council districts. Presidential ads could likely be safely targeted down two levels, to the state and then to the county or congressional district level.
Maybe this flurried convergence of informed opinion will actually lead to some effective action.

Until we get more key people (including the press) to have some common understanding of what the problem is, it will be very hard to get a solution. For most of us, that is just a matter of making some effort to think clearly. For some it seems to be a matter of motivated reasoning that makes them not want to understand. (Many -- not always the same people -- have suggested that both Zuckerberg and Dorsey suffer from motivated reasoning.)

...And, as addressed in the first sections of this post, maybe that will help move us toward broader action to regain the promise of social media -- to apply smart filtering to make its users smarter, not dumber!


---
Further information -- Other posts on these themes are listed in the sidebar of this blog, headed "On the Augmented Wisdom of Crowds: Social Media, Fake News/Disinformation, and Oligopoly" 



Tuesday, August 20, 2019

The Great Hack - The Most Important and Scariest Film of this Century


An Unseen Digital Pearl Harbor, Dissected

The Great Hack is a film every person should see!

America and other democracies have been invaded, and subverted in ways and degrees that few appreciate.

This film (on Netflix) uncovers the layers of the Russia / Cambridge Analytica hack of the US 2016 election (and Brexit), and clearly shows how deeply our social media have been subverted as a fifth column aimed at the heart of democracy and enlightened civilization.

The Great Hack provides clarity about the insidious damage being done by our seemingly benign social media -- and still running wild because too few understand or care about our state of peril -- and because those businesses profit from enabling our attackers. It provides an excellent primer for those who have not tuned in to this, and for those who do not understand the nature of the threat.

It is a much-needed wake-up call that far more of us urgently need to heed.

"Prof. Carroll Goes to London"

What makes this a great film, and not just an important documentary, is how it is told as the story of a (not so) common man.

Much like Jimmy Stewart's Mr. Smith Goes to Washington, this is the story of a mild-mannered citizen, Professor David Carroll of The New School in NYC, who sees a problem and seeks to follow a simple quest for truth and justice (to know what data Facebook and Cambridge Analytica have on him). It traces his awakening and journey to the belly of the beast. This time it is real, and the stakes could not be higher.

---
I found this film especially interesting, having met David at an event on the fake news problem in February 2017, and then at a number of subsequent events on this important theme (many under the auspices of NYC Media Lab and its director, Justin Hendrix). It is a problem I have explored and offered some remedies for on this blog.

Wednesday, July 24, 2019

To Regulate Facebook and Google, Turn Users Into Customers


First published in Techonomy, 2/26/19 -- and more timely than ever...

There is a growing consensus that we need to regulate Facebook, Google, and other large internet platforms that harm the public in large part because they are driven by targeted advertising.  The seductive idea that we can enjoy free internet services — if we just view ads and turn over our data — has been recognized to be “the original sin” of the internet.  These companies favor the interests of the advertisers they profit from more than the interests of their billions of users.  They are powerful tools for mass-customized mind-control. Selling their capabilities to the highest bidder threatens not just consumer welfare, but society and democracy.

There is a robust debate emerging about how these companies should be regulated. Many argue for controls on data use and objectionable content on these platforms.  But poorly targeted regulation risks many adverse side-effects – for example abridging legitimate speech, and further entrenching these dominant platforms and impeding innovation by making it too costly for others to compete.

But I believe we need to treat the disease, not just play whack-a-mole with the symptoms. It’s the business model, stupid! It is widely recognized that the root cause of the problem is the extractive, ad-funded, business model that motivates manipulation and surveillance.  The answer is to require these companies to shift to revenue streams that come from their users.  Of course, shifting cold-turkey to a predominantly user-revenue-based model is hard.  But in reality, we have a simple, market-driven, regulatory method that has already proven its success in addressing a similarly challenging problem – forcing automakers to increase the fuel efficiency of the cars they make. Government has for years required staged multi-year increases in Corporate Average Fuel Economy (CAFE) standards. A similar strategy can be applied here.

This market-driven strategy does not mandate how to fix things. It instead mandates a measurable limit on the systems that have been shown to cause harm.  Each service provider can determine on their own how best to achieve that.  Require that X% of the revenue of any consumer data service come from its users rather than advertisers.  Government can monitor their progress, and create a timetable for steadily ratcheting up the percentage.  (This might apply only above some amount of revenues, to limit constraints on small, innovative competitors.)

It is often said of our internet platforms that “if you are not the customer, you are the product.”  This concept may oversimplify, but it is deeply powerful.  With or without detailed regulations on privacy and data use, we need to shift platform incentives by making the user become the customer, increasingly over time.

Realigning incentives for ads and data.  Advertising can provide value to users – if it is targeted and executed in a way that is non-intrusive, relevant, and useful.  The best way to make advertising less extractive of user value is by quantifying a “reverse meter” that gives users credit for their attention and data.  Some services already offer users the option to pay in order to avoid or reduce ads (Spotify is one example).  That makes the user the customer. Both advertisers and the platforms benefit by managing user attention to maximize value, rather than optimizing for exploitive “engagement.”
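A toy illustration of the reverse meter (the figures are entirely invented): credits for the attention and data a user chooses to share are netted against a user fee, keeping the user -- not the advertiser -- the paying customer.

```python
def monthly_bill(base_fee: float, ads_viewed: int, data_categories_shared: int,
                 credit_per_ad: float = 0.02, credit_per_category: float = 0.50) -> float:
    """Reverse meter: attention and data earn credits against the user's fee."""
    credits = ads_viewed * credit_per_ad + data_categories_shared * credit_per_category
    return max(0.0, base_fee - credits)

# e.g., a $12 base fee, 200 ads viewed, 4 data categories shared -> $6 due
print(monthly_bill(12.00, ads_viewed=200, data_categories_shared=4))
```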

What if the mandated user revenue level is not met?  Government could tax away enough ad revenue to meet the target percentage.  That would provide a powerful incentive to address the problem.  In addition, that taxed excess ad revenue could fund mechanisms for oversight and transparency, for developing better solutions, and for remediating disinformation.
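As a back-of-the-envelope sketch of that enforcement idea (my own formulation, with invented numbers): if user revenue U must be at least a target fraction p of counted revenue, then ad revenue above U(1-p)/p would be taxed away.

```python
def required_ad_revenue_tax(user_rev: float, ad_rev: float, target_pct: float) -> float:
    """Tax needed so user revenue is at least target_pct of (user + remaining ad) revenue."""
    allowed_ad_rev = user_rev * (1 - target_pct) / target_pct
    return max(0.0, ad_rev - allowed_ad_rev)

# e.g., $2B from users, $18B from ads, 20% target:
# allowed ad revenue = 2B * 0.8 / 0.2 = $8B, so $10B would be taxed away.
print(required_ad_revenue_tax(2e9, 18e9, 0.20))
```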

Can the platforms really shift to user revenue?  Zuckerberg has been a skeptic, but none of the big platforms has tried seriously.  When the platforms realize they must make this change, they will figure out how, even if it trims their exorbitant margins.
Users increasingly recognize that they must pay for digital services.  A system of reverse metering of ads and data use would be a powerful start.  Existing efforts that hint at the ultimate potential of better models include crowdfunding, membership models, and cooperatives. Other emerging variations promise to be adaptive to large populations of users with diverse value perceptions and abilities to pay.

A growing focus on customer value would move us back towards leveraging a proven great strength of humanity — the deeply cooperative behavior of traditional markets.

A simple mandate requiring internet platforms to generate a growing percentage of revenue from users will not cure all ills. But it is the simplest way to drive a fundamental shift toward better corporate behavior.

---
Coda, 7/24/19:

Since the original publication of this article, this issue has become even more timely, as the FTC and Justice Department begin deep investigation into the Internet giants. 

  • There is growing consensus that there is a fundamental problem with the ad- and data-based business model
  • There is also growing consensus that we must move beyond the narrow theory of antitrust that says there can be no "harm" in a free service that does not raise direct costs to consumers (but does raise indirect costs to them and limits competition). 
  • But the targeted strategies for forcing a fundamental shift in business models outlined here are still not widely known or considered
  • It primarily focuses on these business model issues and regulatory strategies (including the auto emissions model described here), and how FairPay offers an innovative strategy that has gained recognition for how it can generate user revenue in equitable ways that do not prevent a service like Facebook or Google from being affordable by all, even those with limited ability to pay.
  • It also links to a body of work "On the deeper issues of social media and digital democracy." That includes Google-like algorithms for getting smarter about the wisdom of crowds, and structural strategies for regulation based on the specific architecture of the platforms and how power should be modularized (much as smart modularization was applied to regulating the Bell System and enabling the decades of robust innovation we now enjoy.)

Tuesday, May 21, 2019

Reisman in Techonomy: Business Growth is Actually Good for People. Here’s Why.

My 4th piece in Techonomy was published today:
Business Growth is Actually Good for People. Here’s Why.

Blurb:
We cannot—and should not—stop growing. Sensible people know the real task is to channel growth to serve human ends.
My opening and closing:
Douglas Rushkoff gave a characteristically provocative talk last week at Techonomy NYC – which provoked me to disagree strongly...
...Rushkoff delivers a powerful message on the need to re-center on human values. But his message would be more effective if it acknowledged the power of technology and growth instead of indiscriminately railing against it. We need a reawakening of human capitalism — and a Manhattan Project to get tech back on track. That will make us a better team human.

Friday, April 26, 2019

"Non-Binary" means "Non-Binary"...Mostly...Right?

A "gender non-binary female?"

Seeing the interview of Asia Kate Dillon on Late Night with Seth Meyers, I was struck by one statement -- one that suggests an insidious problem of binary thinking that pervades many of the current ills in our society. Dillon (who prefers the pronoun "they") reported gaining insight into their gender identity from the character description for their role in Billions as "a gender non-binary female," saying: “I just didn’t understand how those words could exist next to each other.”

What struck me was the questioning of how these words could be sensibly put together. Why would anyone ask that question? As I thought about it more, I saw this as a perfect example of the much broader problem.

The curse of binary thinking

The question I ask is at a semantic level: how could that not be obvious? (regardless of one's views on gender identity). Doesn't the issue arise only if one interprets "female" in a binary way? I would have thought that one who identifies as "non-binary" would see beyond this conceptual trap of simplistic duality. Wouldn't a non-binary person be more non-binary in their thinking? Wouldn't it be obvious to a non-binary thinker that this is a matter of being non-binary and female, not of being non-binary or female?

It seems that binary thinking is so ingrained in our culture that we default to black and white readings when it is clear that most of life (outside of pure mathematics) is painted in shades of gray. It is common to think of some "females" as masculine, and some "males" as effeminate. Some view such terms as pejorative, but what is the reality? Why wouldn't a person presumed at birth to be female (for the usual blend of biological reasons) be able to be non-binary in a multitude of ways? Even biologically "female" has a multitude of aspects, which generally align, but sometimes diverge. Clearly, as to behavior in general and as to sexual orientation, there seems to be a spectrum, with many degrees in each of many dimensions (some barely noticed, some hard to miss).

So I write about this as an object lesson of how deeply the binary, black or white thinking of our culture distorts our view of the more deeply nuanced reality. Even one who sees themself as non-binary has a hard time escaping binary thinking. Why can the word "female" not be appropriate for a non-binary person (as we all are to some degree) -- one who has birth attributes that were ostensibly female? Isn't it just a fallacy of binary thinking to think it is not OK for a non-binary person to also be female? That a female cannot be non-binary?

I write about this because I have long taken issue with binary thinking. This is not meant to criticize this actor in any way, but to sympathize broadly with the prevalence of this kind of blindness and absolutism in our culture. It is to empathize with those who suffer from being thought of in binary ways that fail to recognize the non-binary richness of life -- and those who suffer from thinking of themselves in a binary way. That is a harm that occurs to most of us at one time or another. As Whitman said:
Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)
The bigger picture

Gender is just one of the host of current crises of binary thinking that lead to extreme polarization of all kinds. Political divides. The more irreconcilable divide over whether leadership must serve all of their constituency, or just those who support the leader, right or wrong. Fake news. Free speech and truth on campus vs. censorship for some zone of safety for binary thinkers. Trickle-down versus progressive economics. Capitalism versus socialism. Immigrant versus native. One race or religion versus another. Isn't the recent focus of some on "intersectionality" just an extension of binary thinking to multiple binary dimensions? Thinking in terms of binary categories (rather than category spectrums) distances and demonizes the other, blinding us to how much common ground there is.

The Tao symbol (which appears elsewhere in this blog) is a perfect illustration of my point, and an age-old symbol of the non-dualistic thinking central to some Asian traditions (I just noticed the irony of the actor's first name as I wrote this sentence!). We have black and white intertwined, and the dot of contrast indicates that each contains its opposite. That suggests that all females have some male in them (however large or small, and in whatever aspect) and all males have some female in them (however much some males would consider that a blood libel).

Things are not black or white, but black and white. And even if nearly black or white in a single dimension, single dimensions rarely matter to the larger picture of any issue. I think we should all make a real effort to remind ourselves that that is the case for almost every issue of importance.

---

(I do not profess to be "woke," but do very much try to be "awakened" and accepting of the wondrous richness of our world. My focus here is on binary and non-binary thinking, itself. I use gender identity as the example only because of this statement that struck me. If I misunderstand or express my ideas inartfully in this fraught domain, that is not my intent. I hope it is taken in the spirit of finding greater understanding that is intended.)

(In that vein, I accept that there may be complex issues specific to gender and identity that go counter to my semantic argument in some respects. But my non-binary view is that that broader truth of non-duality still over-arches. And in an awakened non-binary world, the current last word can never be known to be the future last word.)

(See also the short post just below on the theme of this blog.)

A Note on the Theme of this Blog: Everything is Deeply Intertwingled -- and, Hopefully, Becoming Smartly Intertwingled

The next post (to appear just above) is the first to indulge my desire to comment more broadly on the theme that "everything is deeply intertwingled" (as Ted Nelson put it). That has always been a core of my worldview and has been increasingly weaving into my posts -- especially on the problems of how we deal with "truth" in our social media, which I say should move toward being more smartly intertwingled.

That post, and some that will follow, move far out of my professional expertise, but I see all of my ideas as deeply intertwingled. (I have always been intrigued by epistemology, the theory of knowledge: what can we know and how do we know it). This current  topic provided the impetus to act on my latent intent to broaden the scope of this blog to these larger issues that are now creating so much dysfunction in our society.

Beyond Ted Nelson's classic statement and his diagram (above, from Computer Lib/Dream Machines) the symbol that most elegantly conveys this perspective is the Tao symbol, which appears in many of my posts. It shows the yin and yang of female and male as intertwingling symbols of those elemental opposites — and the version with the dots in each intertwingled portion, suggests that each element also contains its opposite (a further level of intertwingling).

[Update 6/13/19, on changing the blog header:]

This blog was formerly known as “Reisman on User-Centered Media,” with the description:
On developing media platforms that are user-centered – open and adaptable to the user's needs and desires – and that earn profit from the value they create for users ...and as tools for augmenting human intellect and enlightened democracy.
That continues to be a major theme.

Tuesday, April 09, 2019

A Regulatory Framework for the Internet (with Thanks to Ben Thompson)

Summarizing Ben Thompson of Stratechery, plus my own targeted proposals

"A Regulatory Framework for the Internet," Ben Thompson's masterly framework, should be required reading for all regulators, as well as anyone concerned about tech and society. (Stratechery is one of the best tech newsletters, well worth the subscription price, but this article is freely accessible.)

I hope you will read Ben's full article, but here are some points that I find especially important, followed by the suggestions I posted on his forum (which is not publicly accessible).

Part I -- Highlights from Ben's Framework (emphasis added)

Opening with the UK government White Paper calling for increased regulation of tech companies, Ben quotes MIT Tech Review about the alarm it raised among privacy campaigners, who "fear that the way it is implemented could easily lead to censorship for users of social networks rather than curbing the excesses of the networks themselves."

Ben identifies three clear questions that make regulation problematic:
First, what content should be regulated, if any, and by whom?
Second, what is a viable way to monitor the content generated on these platforms?
Third, how can privacy, competition, and free expression be preserved?

Exploring the viral spread of the Christchurch hate crime video, he gets to a key issue:
What is critical to note, though, is that it is not a direct leap from “pre-Internet” to the Internet as we experience it today. The terrorist in Christchurch didn’t set up a server to livestream video from his phone; rather, he used Facebook’s built-in functionality. And, when it came to the video’s spread, the culprit was not email or message boards, but social media generally. To put it another way, to have spread that video on the Internet would be possible but difficult; to spread it on social media was trivial.
The core issue is business models: to set up a live video streaming server is somewhat challenging, particularly if you are not technically inclined, and it costs money. More expensive still are the bandwidth costs of actually reaching a significant number of people. Large social media sites like Facebook or YouTube, though, are happy to bear those costs in service of a larger goal: building their advertising businesses.

Expanding on business models, he describes the ad-based platforms as "Super Aggregators:"
The key differentiator of Super Aggregators is that they have three-sided markets: users, content providers (which may include users!), and advertisers. Both content providers and advertisers want the user’s attention, and the latter are willing to pay for it. This leads to a beautiful business model from the perspective of a Super Aggregator:
Content providers provide content for free, facilitated by the Super Aggregator
Users view that content, and provide their own content, facilitated by the Super Aggregator
Advertisers can reach the exact users they want, paying the Super Aggregator 
...Moreover, this arrangement allows Super Aggregators to be relatively unconcerned with what exactly flows across their network: advertisers simply want eyeballs, and the revenue from serving them pays for the infrastructure to not only accommodate users but also give content suppliers the tools to provide whatever sort of content those users may want.
...while they would surely like to avoid PR black-eyes, what they like even more is the limitless supply of attention and content that comes from making it easier for anyone anywhere to upload and view content of any type.
...Note how much different this is than a traditional customer-supplier relationship, even one mediated by a market-maker... When users pay they have power; when users and those who pay are distinct, as is the case with these advertising-supported Super Aggregators, the power of persuasion — that is, the power of the market — is absent.
He then distinguishes the three types of "free" relevant to the Internet, and how they differ:
“Free as in speech” means the freedom or right to do something
“Free as in beer” means that you get something for free without any additional responsibility
“Free as in puppy” means that you get something for free, but the longterm costs are substantial
...The question that should be asked, though, is if preserving “free as in speech” should also mean preserving “free as in beer.”
Platforms that are paid for by their users are "regulated" by the operation of market forces, but those that are ad-supported are not, and so need external regulation.

Ben concludes that:
...platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious.
This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects.
...from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.
That leads to Ben's figure that brings these ideas together, and delineates critical distinctions:

[Ben's framework diagram from the Stratechery article appears here.]
I agree completely, and build on that with my two proposals for highly targeted regulation...

Part II -- My proposals, as commented on in the Stratechery Forum 
(with some minor edits; portions were abridged to meet character limits):

Elegant model, beautifully explained! Should be required reading for all regulators.

FIRST:  Fix the business model! I suggest taking this model farther, and mandating that the "free beer" ad-based model be ratcheted away once a service reaches some critical level of scale. That would solve the problem -- and address your concerns about competition.

Why don't we regulate to fix the root cause? The root cause of Facebook's abuse of trust is its business model, and until we change that, its motivations will always be opposed to consumer and public trust.

Here is a simple way to force change, without over-engineering the details of the remedy. Requiring a growing percentage of revenue to come from users is the simplest way to drive a fundamental shift toward better corporate behavior. Others have suggested paying users for their data, and I suggest this is most readily done in the form of credits against a user service fee. Mandating that a target share of revenue (above a certain size threshold) come from users could drive Facebook to offer such data credits as a way to meet its user revenue target (even if most users pay nothing beyond that credit). We will not motivate trust until the user becomes the customer, and not the product.

There is a regulatory method that has already proven its success with a similarly challenging problem – forcing automakers to increase the fuel efficiency of the cars they make. The US government has for years mandated staged, multi-year increases in Corporate Average Fuel Economy (CAFE). That does not dictate how to fix things; it sets a limit on the systems that have been shown to cause harm, and leaves it to the automakers to determine how best to achieve it. Here, the analog is to require that X% of revenue come from users rather than advertisers, with Facebook and YouTube determining how best to get there. Government can monitor progress, with a timetable for ratcheting up the percentage. (This should apply only above some level of revenue, to facilitate competition.)
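To make the mechanics concrete, here is a minimal sketch (in Python) of how such a ratcheted mandate might be checked. The schedule, the revenue floor, the specific numbers, and the treatment of redeemed data/attention credits as user-side revenue are all illustrative assumptions on my part, not any actual regulation or concrete proposal:

```python
# A minimal, purely illustrative sketch of checking a ratcheted
# user-revenue mandate. The schedule, the revenue floor, and the idea of
# counting redeemed data/attention credits as user-side revenue are
# assumptions for illustration, not any actual regulation or proposal.

RATCHET_SCHEDULE = {2020: 0.10, 2022: 0.25, 2024: 0.50}  # hypothetical targets
REVENUE_FLOOR = 1_000_000_000  # hypothetical: only platforms above this size


def required_user_share(year: int) -> float:
    """Minimum share of revenue that must come from users in a given year."""
    applicable = [share for y, share in RATCHET_SCHEDULE.items() if y <= year]
    return max(applicable, default=0.0)


def is_compliant(year: int, ad_revenue: float, user_fees: float,
                 data_credits_redeemed: float) -> bool:
    """Credits that users earn for data/attention and apply against their
    service fee are counted here as user-side revenue."""
    user_revenue = user_fees + data_credits_redeemed
    total = ad_revenue + user_revenue
    if total <= REVENUE_FLOOR:
        return True  # below the floor, the mandate does not apply
    return user_revenue / total >= required_user_share(year)


# Example: $70B from ads, $10B from users (fees plus redeemed credits) in 2022
print(is_compliant(2022, ad_revenue=70e9, user_fees=4e9,
                   data_credits_redeemed=6e9))  # False: 12.5% is below 25%
```

The point of the sketch is only that, as with CAFE, the regulator sets and ratchets a measurable target, and the platform decides how to meet it.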

With that motivation, Facebook and YouTube can be driven to shift from advertising revenue to customer revenue. That may seem difficult, but only for lack of trying. Credits for attention and data are just a start. If we move in that direction, we can be less dependent on other, more problematic, kinds of regulation.

This regulatory strategy is outlined in To Regulate Facebook and Google, Turn Users Into Customers (in Techonomy). There is more on why that is important in Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data, and some suggestions on more effective ways to obtain user revenue in Information Wants to be Free; Consumers May Want to Pay (also in Techonomy).

SECOND: Downrank dissemination, don't censor speech! Your points about the dangers of limiting user expression, and that the real issue is harmful spread on social media, are also vitally important.

I say the real issue is:
  1.  Not: rules for what can and cannot be said – speech is a protected right
  2.  But rather: rules for what statements are seen by whom – distribution (how feeds are filtered and presented) is not a protected right.
The value of a social media service should be to disseminate the good, not the bad. (That is why we talk about “filter bubbles” – failures of value-based filtering.)

I suggest Facebook and YouTube should have little role in deciding what can be said (other than to enforce government standards on free speech and clearly prohibited speech, to whatever extent practical). What matters is to whom that speech is distributed, and the network has full control of that. Strong downranking is a sensible and practical alternative to removal -- far more effective and nuanced, and far less problematic.

I have written about new ways to use PageRank-like algorithms to determine what to downrank or uprank – “rate the raters and weight the ratings” (a rough sketch of that idea follows the list below).
  • Facebook can have a fairly free hand in downranking objectionable speech
  • They can apply community standards to what they promote -- to any number of communities, each with varying standards.
  • They could also enable open filtering, so users/communities can choose someone else's algorithm (or set their preferences in any algorithm).
  • With smart filtering, the spread of harmful speech can be throttled before it does much harm.
  • The “augmented wisdom of the crowd” can do that very effectively, on Internet scale, in real time.
  • No pre-emptive, exclusionary, censorship technique is as effective at scale -- nor as protective of free speech rights or community standards.
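As a rough illustration of "rate the raters and weight the ratings," here is a minimal sketch (in Python) of a reputation-weighted scoring loop. The example data, the scoring rule, and the iteration count are assumptions for illustration only, not the actual algorithms described in my posts:

```python
# A minimal, purely illustrative sketch of "rate the raters and weight the
# ratings." The data, the scoring rule, and the iteration count are
# assumptions for illustration, not the actual algorithm proposed.

# ratings[item][rater] = rating in [0, 1] (1 = valuable, 0 = objectionable)
ratings = {
    "post_a": {"alice": 0.9, "bob": 0.8, "troll": 0.1},
    "post_b": {"alice": 0.2, "bob": 0.3, "troll": 0.9},
}

raters = {r for item in ratings.values() for r in item}
weights = {r: 1.0 for r in raters}  # start with equal trust in every rater

# Iterate (in the spirit of PageRank): score items with the current rater
# weights, then re-weight raters by how well they agree with the consensus.
for _ in range(10):
    scores = {}
    for item, rs in ratings.items():
        total_w = sum(weights[r] for r in rs)
        scores[item] = sum(weights[r] * v for r, v in rs.items()) / total_w

    for r in raters:
        errors = [abs(rs[r] - scores[item])
                  for item, rs in ratings.items() if r in rs]
        weights[r] = 1.0 - sum(errors) / len(errors)  # trust = agreement

print(scores)   # post_a ends up scoring high, post_b low
print(weights)  # the "troll" rater ends up with the least influence

# Downrank, don't censor: low-scoring items stay up but are shown less.
feed = sorted(ratings, key=scores.get, reverse=True)
print(feed)
```

This is only a toy, but it conveys the shape of the approach: reputation and ratings reinforce each other, the result drives ranking rather than removal, and different communities (or open filtering services) could apply their own rater weightings to the same underlying ratings.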
That approach is addressed at some length in these posts (where “fake news” is meant to include anything objectionable to some community):
…and some further discussion on that:
---
More of my thinking on these issues is summarized in this Open Letter to Influencers Concerned About Facebook and Other Platforms

Friday, March 15, 2019

My Latest Articles in Techonomy

Here is the growing list of my articles published in Techonomy on FairPay, business, media, and society:


Despite his supposedly "Privacy-Focused Vision," it seems clear that Zuckerberg will not voluntarily go where he must. So we must force him to make needed changes in the core Facebook business model, one way or another.   MORE
The seductive idea that we can enjoy free internet services -- if we just view ads and turn over our data -- has been recognized to be “the original sin” of the Internet. Requiring internet platforms to generate revenue from users could drive better corporate behavior.  MORE
Current approaches to dynamic pricing are consumer-hostile. The author argues that there's a better way to build win-win relationships in the digital space that use cooperation, trust, and transparency to nurture customer lifetime value.  MORE