Thursday, October 31, 2019

Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us)

(Image adapted from The Great Hack movie)
Zuckerberg's recent arguments that Facebook should restrict free expression only in the face of imminent, clear, and egregious harm have generated a storm of discussion -- and a very insightful counter from Dorsey (at Twitter).

But most discussion of these issues misses how social media can be managed without sacrificing our constitutionally protected freedom of expression. It oversimplifies how speech works in social media and misdiagnoses the causes of harm and abuse. 

[Update: A newer 11/6 post focuses on stopgap solutions for controversial and urgent concerns leading in to the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. This post focuses on the broader and deeper abuses of microtargeting, and how everything in our feeds is filtered.]

Much of this debate seems like blind men arguing over how to control an elephant when they don't yet understand what an elephant is. That is compounded by an elephant driver who exploits that confusion to do what he likes. (Is he, too, blind? ...or just motivated not to see the harm his elephant does?)

I suggest some simple principles can lead to a more productive solution. Effective regulation -- whether self-regulation by the platforms, or by government -- requires understanding that we are really dealing with a new and powerfully expanded kind of hybrid media -- which is provided by a new and powerfully expanded kind of hybrid platform. That understanding suggests how to find a proper balance that protects free expression without doing great harm.

(This is a preliminary outline that I hope to expand on and refine. In the meantime, some valuable references are suggested.)

The essence of the problem

I suggest these three simple principles as overarching:
  1. Clearly, we need to protect "free speech," and a "free press," the First Amendment rights that are essential to our democracy and to our "marketplace of ideas." Zuckerberg is right that we need to be vigilant against overreaching cures -- in the form of censorship -- that may be worse than the disease.
  2. But he and his opponents both seem to misunderstand the nature of these new platforms. The real problem arises from the new services these platforms enable: precision targeted delivery services are neither protected speech nor protected press. They are a new kind of add-on service, separate from speech and from the press. 
  3. Enabling precision targeted delivery against our interests, based on data extracted from us without informed consent, is an abuse of power -- by the platforms, and by the advertisers who pay them for that microtargeted delivery service. This is not a question of whether our data is private (or even wholly ours) -- it is a question of legitimate uses of data that we have rights in versus uses of that data that we have rights to disallow (both individually and as a society). It is also a question of when manipulative use of targeted ads constitutes deceptive advertising, which is not protected speech, and of what constraints should be placed on paid targeting of messages to users. 
By controlling precision targeted delivery of speech, we can limit harmful behavior in the dissemination of speech -- without censorship of that speech.

While finalizing this post, I realized that Renee DiResta made some similar points in Free Speech Is Not the Same As Free Reach, her 2018 Wired article, which explains this problem with a slightly different but equally pointed turn of phrase. With some helpful background, DiResta observed that:
...in this moment, the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.
...So what can we do about it? The solution isn’t to outlaw algorithmic ranking or make noise about legislating what results Google can return... 
...there is a trust problem, and a lack of understanding of how rankings and feeds work, and that allows bad-faith politicking to gain traction. The best solution to that is to increase transparency and internet literacy, enabling users to have a better understanding of why they see what they see—and to build these powerful curatorial systems with a sense of responsibility for what they return.
In the following sections, I outline novel suggestions for how to go farther to manage this problem of free reach/free targeting -- in a way that drives the platforms to make their algorithms more controllable by their users, for their users. Notice the semantics: targeting and reach are both done to users -- filtering is done for users.

========================================================
Sidebar: The Elements of Social Media

Before continuing -- since even Zuckerberg seems to be confused about the nature of his elephant -- let's review the essential elements of Facebook and other social media.

Posting: This is the simple part. We start with what Facebook calls the Publisher Box, which allows you to "write something" and post a Status Update that you wish to be available to others. By itself, that is little more than an easy-to-update personal Web site (a "microblogging" site) that makes short content items available to anyone who seeks them out. Other users can do that by going to your Timeline/Wall (for Friends or the Public, depending on settings that you can control). For abuse and regulatory purposes, this aspect of Facebook is essentially a user-friendly Web hosting provider -- with no new First Amendment harms or issues.

Individually Filtered News Feeds: This is where things get new and very different. Your News Feed is an individually filtered view of what your friends are saying or commenting on (including what you "Liked," as a kind of comment). Facebook's algorithms filter all of that content, based on secret criteria, to show you the items Facebook thinks will most likely engage you. This serves as a new kind of automated moderation. Some items are upranked so they will appear in your feed; others are downranked so they will not be shown. That ranking is weighted based on the social graph that connects you to your friends, and their friends, and so on -- how much positive interest each item draws from those the fewest degrees apart from you in your social graph. That ranking is also adjusted based on all the other information Facebook has about you and your friends (from observing activity anywhere in the vast Facebook ecosystem, and from external sources). It is this new individually filtered dissemination function of social media platforms that creates this new kind of conflict between free expression and newly enabled harms. (A further important new layer is the enablement of self-forming Groups of like-minded users who can post items to the group -- and so have them filtered into the feeds of other group members, much like a special type of Friend.)
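To make this concrete, here is a minimal sketch (in Python) of what graph-weighted feed ranking could look like. Everything here -- the engagement weights, the distance discount, the class names -- is an illustrative assumption; Facebook's actual algorithm is secret and far more elaborate.

```python
# Toy feed ranking: engagement score, discounted by degrees of
# separation in the social graph (breadth-first search).

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int

def graph_distance(user, author, friends):
    """Degrees of separation between user and author; huge if unreachable."""
    if user == author:
        return 0
    seen, frontier, depth = {user}, set(friends.get(user, [])), 1
    while frontier:
        if author in frontier:
            return depth
        seen |= frontier
        frontier = {f for u in frontier for f in friends.get(u, [])} - seen
        depth += 1
    return 10**6  # unreachable authors rank lowest

def rank_feed(user, posts, friends):
    def score(post):
        engagement = post.likes + 2 * post.comments   # assumed weighting
        return engagement / (1 + graph_distance(user, post.author, friends))
    return sorted(posts, key=score, reverse=True)

# A close friend's modest post can outrank a distant stranger's viral one:
friends = {"me": ["ann"], "ann": ["bob"]}
posts = [Post("ann", likes=3, comments=1), Post("zed", likes=500, comments=90)]
print([p.author for p in rank_feed("me", posts, friends)])  # ['ann', 'zed']
```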

Targeted Ads: Layered on top of the first two elements, ads are a special kind of posting, in which advertisers pay Facebook to have their postings selectively filtered into the news feeds of individual users. Importantly, what is new in social media is that ads are no longer just crudely targeted to some page in a publication or some time-slot in a video channel that goes to all viewers of that page or channel. Instead, each ad is precision targeted (microtargeted) to a set of users who fit some narrowly defined combination of criteria (or to a Custom Audience based on specific email addresses). Thus individualized messages can be targeted to just those users predicted to be especially receptive or easily manipulated -- and remain unseen by others. This creates an entirely new category of harm that is both powerful and secretive. (How insidious this can be has already been demonstrated in Cambridge Analytica's abuse of Facebook.) In this respect it is much like subliminal advertising (which is banned, and not afforded First Amendment protection). The questions about the harm of paid political advertising are especially urgent and compelling, as expressed by none other than Jack Dorsey of Twitter, who has just taken an opposite stand from Zuckerberg, saying “This isn’t about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle. It’s worth stepping back in order to address.” (See more in the "Coda: The urgent issue of paid political advertising," below.)
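For illustration, here is a tiny sketch of microtargeted audience selection in principle: the advertiser's criteria pick out exactly the users predicted to be receptive, and no one else ever sees the ad. The field names and criteria are invented for the example, not Facebook's actual targeting schema.

```python
# Hypothetical microtargeting: only users matching every criterion are
# ever shown the ad; to everyone else it remains invisible.

users = [
    {"id": 1, "age": 58, "state": "FL", "interests": {"guns", "fishing"}},
    {"id": 2, "age": 24, "state": "CA", "interests": {"climate", "cycling"}},
]
ad_criteria = {"min_age": 50, "state": "FL", "any_interest": {"guns"}}

def matches(user, c):
    return (user["age"] >= c["min_age"]
            and user["state"] == c["state"]
            and bool(user["interests"] & c["any_interest"]))

audience = [u["id"] for u in users if matches(u, ad_criteria)]
print(audience)  # [1] -- only the targeted user ever sees this ad
```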
========================================================

Why these principles?

For an enlightening and well-researched explanation of the legal background behind my three principles, I recommend The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms, by Harold Feld of Public Knowledge. (My apologies if I mis-characterize any of his points here.)

Feld's Chapter V parses these issues nicely, with a detailed primer on First Amendment issues as they have evolved through the history of communications and media law and regulation. It also provides an analysis of how these platforms are a new kind of hybrid of direct one-to-one and one-to-many communications -- and how they add a new level of self-organizing many-to-many communities (fed by the new filtering algorithms). He explains why we should preserve strong freedom of speech for the one-to-one, but judiciously regulate the one-to-many. He also notes how facilitating creation of self-organizing communities introduces a new set of dangerous issues (including the empowerment of terrorist and hate groups who were previously isolated).

I have previously expressed similar ideas, focusing on better ways to do the filtering and precision targeting of content to an individual level that powers the one-to-many communication on these platforms and drives their self-organizing communities. That filtering and targeting is quantum leaps beyond anything ever before enabled at scale. Unfortunately, it is currently optimized for advertiser value, rather than user value.

The insidious new harm in false speech and other disinformation on these platforms is not in the speech itself -- and not in simple distribution of the speech -- but in the abuse of this new platform service of precision targeting (microtargeting). Further, the essential harm of the platforms is not that they have our personal information, but in what they do with it. As described in the sidebar above, filtering -- based on our social graphs and other personal data -- is the core service of social media, and that can be a very valuable service. This filtering acts as a new, automated form of moderation -- one that emerges from the platform's algorithms as they both drive and are driven by the ongoing activity of its users in a powerful new kind of feedback loop. The problem we now face with social media arises when that filtering/moderation service is misguided and abused:
  • This new microtargeted filtering service can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
  • The social media feedback cycle can further enlist those manipulated users to be used as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). 
So that is where some combination of self-regulation and government regulation is most needed. Feld points to many relevant precedents for content moderation that have been held to be consistent with First Amendment rights, and he suggests that this is a fruitful area for regulatory guidance. My perspective on this is:
  • Regulation and platform self-regulation can be applied to limit social media harms, without impermissible limitation of rights of speech or the press.
  • Free expression always entails some risk of harm that we accept as a free society.
  • The harm we can best protect against is not the posting of harmful content, but the delivering of that harmful content to those who have not specifically sought it out. 
  • That is where Zuckerberg completely misses the point (whether by greed, malice, or simple naivete -- “It is difficult to get a man to understand something, when his job depends on his not understanding it”).
  • And that is where many of Zuckerberg's opponents waste their energy fighting the wrong battle -- one they cannot and should not win. 
Freedom of speech (posting), not freedom of intrusion on others who have not invited it.

That new kind of intrusion is the essential issue that most discussion seems to be missing.
  • I suggest that users should retain the right to post information with few restrictions (the narrow exceptions that have traditionally been allowed by the courts as appropriate limits to First Amendment rights). 
  • That can be allowed without undue harm, as long as objectionable content is automatically downranked enough in a filtering (moderation) process to largely avoid sending it to users who do not want such content.
  • This is consistent with the safe-harbor provisions of Section 230 of the Communications Decency Act of 1996. That was created with thought to the limited and largely unmoderated posting functions of early Web aggregators (notably CompuServe and Prodigy, as litigated at the time). That also accepted the freedom of the myriad independent Web sites that one had to actively seek out. 
  • Given the variation in community standards that complicate the handling of First Amendment rights by global platforms, filtering can also be applied to selectively restrict distribution of postings that are objectionable in specific communities or jurisdictions, without restricting posters or other allowable recipients.
As an important complement to this understanding of the problem, I also argue that users should be granted significant ability to customize the filtering process that serves them. That could better limit the exposure of users (and communities of users) to undesired content, without censoring material they do want.
  • Filtering should be a service for users, and thus should be selectable and adjustable by users to meet their individual desires. That customization should be dynamically modifiable, as a user's desires vary from time to time and task to task. (Some similar selectability has been offered to a limited extent for search -- and should apply even more fully to feeds, recognizing that search and feeds are complementary services.) 
  • Issues here relate not only to one-to-one versus one-to-many, but also to the distinction between the user-active "pull" of requested information (such as a Web site access) and the user-passive "push" of unsolicited information in a platform-driven feed. Getting much smarter about that would have huge value to users, as well as limiting abuses. 
Recipient-controlled "censorship": content filtering, user choice, and competitive innovation

I suggest new levels of censorship of social media postings are generally not needed, because filtering enables a new kind of recipient-controlled "censorship" of delivery.

Social media work because they offer a new kind of filtering service for users -- most particularly, filtering a feed based on one's social graph. That has succeeded in spite of the fact that the platforms currently give their users little say over how that filtering is done (beyond specifying the social graph), and largely use it to manipulate their users rather than serve them. I put that forth as a central argument for regulation and antitrust action against the platforms.

Filtering algorithms should give users the kind of content they value, when they value it:
  • to include or exclude what the user considers to be objectionable or of undesired quality generally
  • to be dynamically selectable (or able to sense the user's mood, task, and flow state) 
  • to filter for challenge, enlightenment, enjoyment, humor, emotion, support, camaraderie, or relaxation at any given time. 
I explain in detail how smart use of "augmented intelligence" that draws on human inputs can enable that in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings and in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News. This kind of hybrid man+machine intelligence can be far more powerful (and dynamically responsive) than either human or machine intelligence alone in determining the relevance, value, and legitimacy of social media postings (and ads). With this kind of smart real-time filtering of our feeds to protect us, censorship of postings can be limited to clearly improper material. Such methods have gotten little attention because Facebook is secretive about its filtering methods, and has had little incentive to develop them to serve users in this way. (But Google's PageRank algorithm has demonstrated the power of such multilevel rate-the-raters techniques to determine the relevance, value, and legitimacy of content.)
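As a rough illustration of the rate-the-raters idea, here is a toy sketch in which each rater's influence is iteratively re-weighted by how closely their ratings track the weighted consensus, in the recursive spirit of PageRank. Every name and formula here is an illustrative assumption, not any platform's actual method.

```python
# Toy "rate the raters" loop: trust in each rater is re-derived from
# agreement with the trust-weighted consensus, iterated to a fixed point.

ratings = {  # rater -> {item: score in [0, 1]}
    "alice": {"post1": 0.9, "post2": 0.1},
    "bob":   {"post1": 0.8, "post2": 0.2},
    "troll": {"post1": 0.1, "post2": 0.9},
}
items = {"post1", "post2"}

weights = {rater: 1.0 for rater in ratings}      # start with equal trust
for _ in range(20):                              # iterate toward convergence
    consensus = {
        item: sum(weights[r] * s[item] for r, s in ratings.items())
              / sum(weights.values())
        for item in items
    }
    for rater, s in ratings.items():             # trust = closeness to consensus
        err = sum(abs(s[i] - consensus[i]) for i in items) / len(items)
        weights[rater] = max(0.01, 1.0 - err)

print(consensus)  # the troll's divergent ratings end up heavily discounted
print(weights)
```

Even this crude fixed-point loop progressively discounts the divergent rater; a production system would layer on far richer signals and defenses against coordinated gaming.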

A monolithic platform like Facebook would be hard-pressed to deliver that level of flexibility and innovation for a full range of user desires and skill levels even if it wanted to. Key strategies to meet this complex need are:
  • to enable users to select from an open market in filtering services, each filtering service provider tuning its algorithms to provide value that competes in the marketplace to appeal to specific segments of users 
  • to combine multiple filtering services and algorithms to produce a desired overall effect
  • to allow filtering algorithm parameters to be changed by their users to vary the mix of algorithms and the operation of individual algorithms at will
  • to also factor in whatever "expert" content rating services they want.
(For an example of how such an open market might be shaped, consider the long-successful model of the open market for analytics that are used to filter financial market data to rank investment options. Think of social media as having user interface agents, repositories of posts, repositories of social graphs, and filtering/presentation tools, where the latter correspond to the financial analytics. Each of those elements might be separable and interoperable in an open competitive market.) 
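To suggest what such an open market might look like in code, here is a hedged sketch: independent filter services each score a post, and the user mixes them with weights they can change at will. The service names, scoring rules, and weights are all invented for illustration.

```python
# Hypothetical open filtering market: each service scores a post for a
# user, and the user composes them -- analogous to composing financial
# analytics over shared market data.

def friends_filter(post, user):          # one vendor's algorithm
    return 1.0 if post["author"] in user["friends"] else 0.2

def quality_filter(post, user):          # another vendor's algorithm
    return post.get("fact_check_score", 0.5)

def humor_filter(post, user):            # a niche vendor
    return 1.0 if "humor" in post["tags"] else 0.0

def combined_score(post, user, mix):
    """mix: user-chosen (service, weight) pairs, adjustable at any time."""
    return sum(weight * service(post, user) for service, weight in mix)

# A user dials humor up for the evening:
evening_mix = [(friends_filter, 0.3), (quality_filter, 0.3), (humor_filter, 0.4)]
post = {"author": "ann", "tags": ["humor"], "fact_check_score": 0.9}
user = {"friends": {"ann"}}
print(combined_score(post, user, evening_mix))   # 0.3 + 0.27 + 0.4 = 0.97
```

The point is not these particular filters, but that the composition layer is open: any vendor's algorithm that honors the interface can compete for a slot in the user's mix.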

These proposals have huge implications for speech and democracy, as well as for competitive innovation in augmenting the development of human wisdom (or de-augmenting it, as is happening now). That is how Facebook and other platforms could be much better at "bringing people closer together" without being so devilishly effective at driving them apart.

The need for a New Digital Platform Agency 

While adding bureaucracy is always a concern -- especially relating to the dynamic competitive environment of emerging digital technology -- there are strong arguments for that in this context.

The world is coming to realize that the Chicago School of antitrust that dominated the recent era of narrow antitrust enforcement is not enough. Raising "costs" to consumers is not a sufficient measure of harm when direct monetary costs to consumers are "zero." The real costs are not zero. Understanding what social media could do for us provides a reference point that shows how much we are really paying for the low-value platform services we now have. We cannot afford these supposedly "free" services!

Competition for users could change the value proposition, but this space is too complex, dynamic, and dependent on industry and technical expertise to be left to self-regulation, the courts, or legislation.

We need a new, specialized agency. The Feld report (cited above) offers in-depth support for such an agency, as do the three references recommended in the announcement of a conference on The Debate Over a New Digital Platform Agency: Developing Digital Authority and Expertise. (I recently attended that conference, and plan to post more about it in the near future).

Just touching on this theme, we need a specialist agency that can regulate the platforms with expertise (much as the FCC has regulated communications and mass media) to find the right balance between the First Amendment and the harmful speech that it does not protect -- and to support open, competitive innovation as this continues to evolve. Many are unaware of the important and productive history here. (I observed from within the Bell System how the FCC and the courts regulated and eventually broke it up, and how this empowered the dynamic competition that led to the open Web and the Internet of Things that we now enjoy.) Inspired by those lessons, I offer specific new suggestions for regulation in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm. Creating such an agency will take time, and be challenging -- but the alternative is to put not only the First Amendment, but our democracy and our freedom at risk.

These problems are hard, both for user speech, and for the special problem of paid advertising, which gives the platforms an incentive to serve advertisers, not users. As Dorsey of Twitter put it:
These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings. Trying to fix both means fixing neither well, and harms our credibility. ...For instance, it‘s not credible for us to say: “We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want! 😉”
I have outlined a promising path toward solutions that preserve our freedom of speech while managing proper targeting of that speech, the underlying issue that few seem to recognize. But it will be a long and winding road, one that almost certainly requires a specialized agency to set guidelines, monitor, and adjust, as we find our way in this evolving new world.

Coda: The urgent issue of paid political advertising

The current firestorm regarding paid political advertising highlights one area where my proposals for smarter filtering and expert regulation are especially urgent, and where the case for reasonable controls on speech is especially well founded. My arguments for user control of filtering would have advertising targeting options be clearly subordinate to user filtering preferences. That seems to be sound in terms of First Amendment law, and common sense. Amplifying that are the arguments I have made elsewhere (Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data) that advertising can be done in ways that better serve both users and well-intended advertisers. All parties win when ads are relevant, useful, and non-intrusive to their recipients.

But given the urgency here, for temporary relief until such selective controls can be put into effect, Dorsey's total ban on Twitter seems well worth considering for Facebook as well. Zuckerberg's defensive waving of the flag of free expression seems naive and self-serving.

[See my newer post (11/6) on stopgap solutions for controversial and urgent concerns leading in to the 2020 election: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging. It reorganizes and expands on updates that are retained below.]

---
See the Selected Items tab for more on this theme.


==================================================
Updates on stopgaps have since been consolidated into an 11/6 post: 2020: A Goldilocks Solution for False Political Ads on Social Media is Emerging...
That is more complete, but this section is retained as a history of updates.

[Update 11/2/19:]

An excellent analysis of the special case of political speech related to candidates is in Sam Lessin's 2016 article Free Speech and Democracy in the Age of Micro-Targeting, which makes a well-reasoned argument that:
The growth of micro-targeting and disappearing messaging on the internet means that politicians can say different things to large numbers of people individually, in a way that can’t be monitored. Requirements to put this discourse on the public record are required to maintain democracy.
Lessin has a point that the secret (and often disappearing) nature of these communications, even when invited, is a threat to democracy. I agree that his remedy of disclosure is powerful, and it is a potentially important complement to my broad remedy of user-controlled targeting filters.

2020 Stopgaps?  

As to the urgent issue of the 2020 election, acting quickly will be hard. My proposal for user-controlled targeting filters is unlikely to be feasible as soon as 2020. So what can we do now?

Perhaps most feasible for 2020 is a simplistic stopgap solution that might be easy to apply quickly: just enact a temporary ban -- not on placing political ads, but on the individualized targeting of political ads. Do this as a simple and safe compromise between the Zuckerberg and Dorsey policies, until we have a regulatory regime to manage micro-targeting properly:
  • Avoid a total ban on political ads on social media, but allow ads to run only as they do in traditional media -- no more targeted than traditional media allows. 
  • Disallow precision targeting to individuals: allow as many or as few ads as advertisers wish to purchase, but target them to all users, or to whatever random subset of all users fills the paid allotment.
  • A slight extension of this might permit a "traditional" level of targeting, such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.
This is consistent with my point that the harm is not the speech, but the precision targeting of the speech, and would buy time to develop a more nuanced approach. It is something that Zuckerberg, Dorsey, and others could just decide to do on their own (...or be pressured to do).
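In code, this stopgap is almost trivially simple -- which is part of its appeal. A hypothetical sketch (the function and its parameters are invented for illustration):

```python
# Stopgap delivery rule: political ads may buy reach, but delivery goes
# to all users or to a random sample -- never to a profiled audience.

import random

def deliver_political_ad(all_users, paid_impressions):
    if paid_impressions >= len(all_users):
        return list(all_users)                         # everyone sees it
    return random.sample(all_users, paid_impressions)  # random, not profiled
```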

[Update 11/3/19:] Siva Vaidhyanathan made a very similar proposal to my stop-gap suggestion: "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race." That seems a good compromise that could stand until we have a better solution (or become a part of a more complete solution). (I am not sure if Vaidhyanathan meant to allow targeting to the level of individual districts in a multi-district election, but it seems to me that would be sufficient to enable reasonable visibility and not much harder to do quickly than the broader bans I had suggested.)

[Update 11/5/19:] Three other experts have argued for much the same kind of limits on targeting as the effective middle-ground solution.

(Diagram: Alex Stamos, from CJR)
One is Alex Stamos, in a 10/28 CJR interview by Mathew Ingram, Talking with former Facebook security chief Alex Stamos. Stamos offers a useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification," and that the rights of unfettered free expression get stronger at the bottom: "the right of individuals to be exposed to information they have explicitly sought out."

Stamos argues that "Tech platforms should absolutely not fact-check candidates' organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."

Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."

And -- I just realized there had been another powerful voice on this issue! Ellen Weintraub, chair of the Federal Election Commission, argued in WaPo: Don’t abolish political ads on social media. Stop microtargeting. She suggests the same kind of limits on targeting of political ads outlined here, in even more specific terms:
A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed. Want to influence the governor’s race in Kansas? Your Internet ads could run across Kansas, or target individual counties, but that’s it. Running at-large for the Houston City Council? You could target the whole city or individual council districts. Presidential ads could likely be safely targeted down two levels, to the state and then to the county or congressional district level.
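Weintraub's rule of thumb is concrete enough to sketch in code. Here is a hypothetical encoding of her examples, with the level hierarchy and the two-level presidential exception; the data structure itself is my illustrative assumption.

```python
# Allowed targeting = the office's own level plus one level below
# (two below for presidential races, per Weintraub's examples).

LEVELS = ["nation", "state", "county_or_district", "city", "council_district"]

def allowed_targeting(office_level):
    i = LEVELS.index(office_level)
    depth = 2 if office_level == "nation" else 1   # presidential exception
    return LEVELS[i : i + depth + 1]

print(allowed_targeting("state"))  # governor: statewide or county ads only
print(allowed_targeting("city"))   # at-large council: city or council districts
```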
Maybe this flurried convergence of informed opinion will actually lead to some effective action.

Until we get more key people (including the press) to have some common understanding of what the problem is, it will be very hard to get a solution. For most of us, that is just a matter of making some effort to think clearly. For some it seems to be a matter of motivated reasoning that makes them not want to understand. (Many -- not always the same people -- have suggested that both Zuckerberg and Dorsey suffer from motivated reasoning.)

...And, as addressed in the first sections of this post, maybe that will help move us toward broader action to regain the promise of social media -- to apply smart filtering to make its users smarter, not dumber!
