Zuckerberg has rationalized that
Facebook should do nothing about lies, and Dorsey has taken Twitter to the
other extreme of an indiscriminate ad ban. But a readily actionable Goldilocks
solution has emerged in response – and there are reports
that Facebook is considering it.*
[This post focuses on stopgap solutions for controversial and urgent concerns leading into the 2020 election. My prior post, Free Speech, Not Free Targeting! (Using Our Own Data to Manipulate Us), addresses the deeper abuses related to microtargeting and how everything in our feeds is filtered.]
[Update 5/29/20:]
Two bills to limit microtargeting of political ads have been introduced in Congress, one by Rep. Eshoo, and one by Rep. Cicilline. Both are along the lines of the proposals described here. (More at end.)
The real problem
While dishonest political ads are a problem, that in itself is nothing new that we cannot deal with. What is new is microtargeting of dishonest ads, and that has created a crisis that puts the fairness of our elections in serious doubt. Numerous sophisticated observers – including the chair of the Federal Election Commission and the former head of security at Facebook – have identified a far better stopgap solution than an outright ban on all political ads (or doing nothing).
Since the real problem is
microtargeting, the “just right” quick solution is to limit microtargeting (at least until we have better ways to control it). Microtargeting provides the new and
insidious capability for a political campaign to precisely tailor its messaging
to microsegments of voters who are vulnerable to being manipulated in one way, while sending many different,
conflicting messages to other microsegments who can be manipulated in other ways – with precision targeting down to designated sets of
individual voters (such as with multifaceted categories or with Facebook Custom Audiences). The social media feedback cycle can further enlist those manipulated users as conduits ("useful idiots") to amplify that harm throughout their social graphs (much like the familiar screech of audio feedback that is not properly damped). This new kind of message amplification
has been weaponized to incite extreme radicalization and even violent action.
We must be clear that there is a right of speech, but only limited rights to
amplification or targeting. We have always had political ads that lie. America
was founded on the principle that the best counter to lies is not censorship,
but truth. Policing lies is a very slippery slope, but when a lie is out in the
open, it can be exposed, debunked, and shamed. Sunlight has proven the
best disinfectant. With microtargeting there is no exposure to sunlight
and shame.
- This new kind of microtargeted filtering can direct user posts or paid advertising to those most vulnerable to being manipulated, without their informed permission or awareness.
- These abuses are hidden from others and generally not auditable. That compounds the harm of lies, since they can be targeted to manipulate factions surreptitiously.
Consensus for a stopgap solution
In the past week or so, limits on microtargeting have been suggested to take a range of forms, all of which seem workable and feasible:
- Ellen Weintraub, chair of the Federal Election Commission, in the Washington Post (“Don’t abolish political ads on social media. Stop microtargeting”), suggests “A good rule of thumb could be for Internet advertisers to allow targeting no more specific than one political level below the election at which the ad is directed.”
- Alex Stamos, former Facebook security chief, in an interview with Columbia Journalism Review, suggests “There are a lot of ways you can try to regulate this, but I think the simplest is a requirement that the "segment" somebody can hit has a floor. Maybe 10,000 people for a presidential election, 1,000 for a Congressional.”
- Siva Vaidhyanathan, in the NY Times, suggests "here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race."
- In my prior post, I suggested “allow ads to only be run...in a way that is no more targeted than traditional media…such as to users within broad geographic areas, or to a single affinity category that is not more precise or personalized than traditional print or TV slotting options.”
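As a concrete illustration, the segment-floor idea in Stamos' proposal could be expressed as a simple platform-side check. This is only a sketch with hypothetical names – the function and the rejection of unknown race levels are my assumptions; only the floor values (10,000 presidential, 1,000 congressional) come from his quote:

```python
# Hypothetical sketch of a "segment floor" rule for political ad targeting.
# Floor values are from Stamos' suggestion; everything else is illustrative.

MIN_AUDIENCE = {
    "presidential": 10_000,   # floor for presidential-race ads
    "congressional": 1_000,   # floor for congressional-race ads
}

def is_targeting_allowed(race_level: str, audience_size: int) -> bool:
    """Allow a political ad only if its target segment meets the minimum
    audience size for the race it addresses. Race levels without a defined
    floor are rejected (a conservative assumption for this sketch)."""
    floor = MIN_AUDIENCE.get(race_level)
    if floor is None:
        return False
    return audience_size >= floor
```

Under such a rule, a presidential ad aimed at a 500-person Custom Audience would simply be refused, while the same creative aimed at a broad region would run.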
There seems to be an emerging consensus that this is the best we can expect to achieve in the short run, in time to
protect the 2020 election. This is something that Zuckerberg, Dorsey, and others (such as Google) could just decide to do -- or might be pressured to do. NBC News reported yesterday that Facebook is considering such an action.
We should all focus on avoiding foolish debate over naive framing of this problem as a dichotomy of "free speech" versus "censorship." The real problem is not the right of free speech, but the more nuanced issues of limited rights to be heard versus the right not to be targeted in ways that use our personal data against our interests.
The longer term
In the longer
term, dishonest political ads are only a part of this new problem of abuse of microtargeting,
which applies to speech of all kinds -- paid or unpaid, political or commercial. Especially notable is the fact
that much of what Cambridge Analytica did was to get ordinary people to spread
lies created by bots posing as ordinary people. To solve these problems, we need to change not only how the platforms establish identity, but also how content is filtered into
our feeds. Filtering content into our feeds is a user service that should be designed to provide
the value that users, not advertisers, seek.
There are huge opportunities for innovation here. My prior post explains that, shows how much we are missing because the platforms are now driven by advertiser needs for amplification of their voice, not user needs for filtering of all voices, and it points to how we might change that.
See my prior post for more, plus links to related posts.
---
*[Update 11/7:] WSJ reports Google is considering political ad targeting limits as well.
[Update 11/20:] Google has announced it will impose political ad targeting limits -- Zuck, your move.
[Update 11/22:] WSJ reports Facebook is considering similar political ad targeting limits.
=========================
[Supplement 11/8:] These 11/5 updates from my prior post seem worth repeating here as added background:
In a 10/28 CJR interview by Mathew Ingram, “Talking with former Facebook security chief Alex Stamos,” Stamos offers this useful diagram to clarify key elements of Facebook and other social media that are often blurred together. He clarifies the hierarchy of amplification by advertising and recommendation engines (filtering of feeds) at the top, and free expression in various forms of private messaging at the bottom. This shows how the risks of abuse that need control are primarily related to paid targeting and to filtering. Stamos points out that "the type of abuse a lot of people are talking about, political disinformation, is absolutely tied to amplification" and that the rights of unfettered free expression get stronger at the bottom – "the right of individuals to be exposed to information they have explicitly sought out."
Stamos argues that "Tech platforms should absolutely not fact-check candidates' organic (unpaid) speech," but, in support of the kind of targeting limit suggested here, he says "I recommended, along with my partners here at Stanford, for there to be a legal floor on the advertising segment size for ads of a political nature."
Ben Thompson, in Tech and Liberty, supports Stamos' arguments and distinguishes rights of speech from "the right to be heard." He notes that "Targeting... both grants a right to be heard that is something distinct from a right to speech, as well as limits our shared understanding of what there is to debate."
The downside of targeting limits. Meanwhile, there are reports, notably in NYTimes, that highlight the downside of limiting targeting precision in this way. That is why it is prudent to view blanket limits not as a final cure, but a stopgap:
- Political campaigns rightly point out how these limits harm legitimate campaign goals: “This change won’t curb disinformation...but it will hinder campaigns and (others) who are already working against the tide against bad actors to reach voters with facts.” “Broad targeting kills fund-raising efficiency”
- This argues that the real solution is to recognize that platforms do have the right and obligation to police ads of all kinds, including paid political ads, in order to grant an appropriate mix of targeting privileges to legitimate campaigns -- when placing non-abusive ads -- to reach those who choose to receive them.
- But since we are nowhere near a meaningful implementation of such a solution in time for major upcoming elections, we need a stopgap compromise now. That is why I originally advocated this targeting limit, while noting that it was only a stopgap.
[Update 5/29/20:]
Related to the update at the top, about the bills introduced in Congress, a nice statement quoted in the 5/26/20 Eshoo press release explains the problem in a nutshell:
It used to be true that a politician could tell different things to different voters, but journalists would check whether the politician in question was saying different things to different people and write about it if they found conflicting political promises. That is impossible now because the different messages are shown privately on social media, and a given journalist only has his or her own profile. In other words, it's impossible to have oversight. The status quo, in other words, is bad for democracy. This new bill would address this urgent problem. --Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy and CEO of ORCAA, an algorithmic auditing firm.
---
See the Selected Items tab for more on this theme.