But it is hard to agree on how, and with what objectives. Most people have little clue of where to start, or why it matters -- and many of those who do are divided about what to do, and whether proposed actions are too small or too large. This is a richly complex problem -- and much of what we read is oversimplified.
I recently immersed myself in some of the best analyses from respected think tanks -- and have some innovative perspectives of my own. This post begins with pointers to some of the best thinking, and then explains what I see as missing. That is largely a question of what we are designing for.
Update 4/26/21: An important strategy for a surgical restructuring (published in Tech Policy Press) — to an open market strategy that shifts control over our feeds to the users they serve — complements the discussion here.
==================================================================
AI as a platform regulatory issue. Discussion after the session raised the issue of regulating AI. There is growing concern relating to concentrations of power and other abuses, including concentrations of data, bias in inference and in natural language understanding, and lack of transparency, controls, and explainability. That suggests a similar need for a regulator that can apply specialized technical expertise that overlaps and coordinates with the issues addressed here. AI is fundamental to the workings of social media, search, and e-commerce platforms, and also has many broader applications for which pro-active regulation may be needed.
Technical architecture issues
I was very pleased to happen on the Masnick article, Protocols, Not Platforms: A Technological Approach to Free Speech, a couple of weeks ago, as the nearest thing to the vision I have been developing that I have yet seen. It is not aimed at regulation, apparently in hopes that the market can correct itself (a hope I have shared, but no longer put much faith in). Our works are both overlapping and complementary -- reinforcing and expanding in different ways on very similar visions for user-controlled, open filtering of social media and the marketplace of ideas. I recommend his paper to anyone who wants to understand how this technology can be far more supportive of user value by enabling users to mold their social media to their individual values, and as a foundation for better understanding my more specific proposals.
As background, in developing these ideas for an open market of user-controlled filtering tools, I drew on my experience in financial technology from around 1990. There was a growing open market ecosystem for transaction level financial market data (generated by the stock exchanges and other markets -- ticker feeds and the like), which was then gathered and redistributed to brokers and analysts by redistributors like Dow Jones, Telerate, and Bloomberg. An open market for analytic tools that could analyze this data and provide a rich variety of financial metrics was developing -- one that could interoperate, so that brokers and analysts could apply those analytics, or create their own custom variations (as an early form of mashup). That was an inspiration for work I did in 2002 to design a system for open collaboration on finding and developing innovations, in the days when "open innovation" was an emerging trend. That design provided very rich functions for flexible, mass collaboration that I later adapted to apply to social media (as described on my blog, starting in 2012, when I saw that current systems were not going in the direction I thought they should).
Personal privacy versus openness, and interoperability
Privacy has emerged as a critical issue in Internet services, and one that is often in conflict with the objectives of openness and interoperability that are essential to the marketplace of services and to the broader marketplace of ideas (and also to making AI/ML as beneficial as possible). Here again there is a need for nuance and expertise to sensibly balance the issues, and there is reason to fear that current privacy legislation initiatives may fail to provide a proper balance. I believe there are more nuanced ways to meet these conflicting objectives, but leave more specific exploration of that for another time.
We have learned that our Web services are far more complex and have far more dangerous impacts on society than we realized. We need to move forward with more deliberation, and need a business and regulatory environment capable of guiding that. We have seen how dangerous it can be to "move fast and break things."
I am working independently on a pro-bono basis on these issues, and welcome opportunities to collaborate with others to move in the directions outlined here. (These ideas draw on two detailed patent filings from 2002 and 2010 that I have placed into the public domain.)
---
[*Update 1/2/20:] Mediating consent by augmenting the wisdom of crowds
Renee DiResta (who wrote the Free Speech is Not the Same as Free Reach post I cited above) recently wrote an excellent article, Mediating Consent, which I commented on today. Her article is an excellent statement of how we are now at a turning point in the evolution of how human society achieves consensus – or breaks down in strife. She says “The future that realizes this promise still remains to be invented.” As outlined above, I believe the core of that future has already been invented — the task is to decide to build out on that core, to validate and adjust it as needed, and to continuously evolve it as society evolves.
[Update 1/10/20:] The disinformation choke point: distribution (not supply or demand) --
[This is now expanded slightly to be a free-standing post]
An excellent 1/8/20 report from the National Endowment for Democracy, “Demand for Deceit: How the Way We Think Drives Disinformation,” by Samuel Woolley and Katie Joseff, highlights the dual importance of both supply and demand side factors in the problem of disinformation. That crystallizes in my mind an essential gap in this field -- smarter control of distribution -- that was implicit in my comments on algorithms (section #2 above).
==================================================================
The Ideas in Brief
Broad issues, deep thinking, and vision
Our Internet platforms have gone seriously wrong, and fixing that is more complex than most observers seem to realize. The good news is that there are well-conceived proposals for creating an expert regulatory agency that can oversee significant corrections.
At a complementary level, we should be looking ahead to what these platforms should and could be doing to better serve us. That kind of vision should inform both how we regulate and how we design.
- One critical need is to change how the algorithms work, so they serve users and society -- to make us smarter and happier, instead of dumber and angrier.
- Another critical need is to shift the business models so that users, not advertisers, become the customers, to better align the incentives of Internet service platforms to serve their users (and actually benefit the advertisers as well).
==================================================================
Many calls for regulatory action focus on just one or a few of the following diverse categories of abuse. Some of these conflict with one another and are advocated by different parties:
- privacy and controls on use of personal data
- moderation of disinformation and false political ads and news and other objectionable content versus freedom of speech and the "marketplace of ideas"
- economic sustainability of news media and quality journalism
- antitrust, competition, and stifling of innovation
- failures of artificial intelligence (AI) and machine learning (ML) -- including hidden bias
Particularly edifying are the analyses of issues and regulation related to the safe harbor provisions of Section 230 of the Communications Decency Act that protect "interactive services" from liability for bad content provided by others. Many have called for repealing those safe harbor protections, seeing them as a license to wantonly distribute harmful content, but these deeper analyses suggest a more nuanced interpretation -- the safe harbor should continue to apply to posting of content, but should not apply to filtered distribution in social media feeds. That is one of the themes I build on with my own suggestions below.
Beyond these excellent works, the gap -- and opportunity -- that I see is to refocus our objectives. We should look beyond just limiting the harms of over-concentration of power as we see them today and in hindsight, but look ahead to where we could be going.
- Where we should be going is a question for public policy, not tech oligarchs who move fast and break things, and are driven by their private interests.
- But to understand that question of where we should be going, we need to understand where we could be going (both good and bad).
"The Debate Over a New Digital Platform Agency" -- some essential resources
On October 17, I was invited to attend The Debate Over a New Digital Platform Agency: Developing Digital Authority and Expertise, at the Digital Innovation & Democracy Initiative of the German Marshall Fund of the US in Washington, DC. Three reports that resulted from the work of the panelists were suggested as background reading:
- Stigler Committee on Digital Platforms. Final Report, George J. Stigler Center for the Study of the Economy and the State, The University of Chicago Booth School of Business, 2019.
- Unlocking Digital Competition: Report of the Digital Competition Expert Panel. Jason Furman, et al., HM Treasury, United Kingdom, 2019.
- Bringing Truth to The Internet. Karen Kornbluh and Ellen P. Goodman, Democracy, no. 53, 2019.
- The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms. Harold Feld, Roosevelt Institute, 2019.
More recently, I found another excellent report that is more focused on the technological and business issues and points toward some of what I have proposed:
- Protocols, Not Platforms: A Technological Approach to Free Speech, Mike Masnick, Knight First Amendment Institute at Columbia University, 2019.
[Update: See the updates at the end for additional valuable resources, and my comments on them.]
My own suggestions on where our platforms could and should be going
The following is an updated rework of comments I sent on 10/20/19 to some of the speakers and attendees at the GMF meeting, followed by some added comments from my later discussion with Harold Feld, and other updates:
Summary and Expansion of Dick Reisman’s comments on attending GMF 10/17/19 event:
I very much support the proposals for a New Digital Platform Authority (as detailed in the excellent background items cited on the event page) and offer some innovative perspectives. I welcome dialog and opportunities to participate in and support related efforts.
(My background is complementary to most of the attendees -- diverse roles in media-tech, as a manager, entrepreneur, inventor, and angel investor. I became interested in hypermedia and collaborative social decision support systems around 1970, and observed the regulation of The Bell System, IBM, Microsoft, the Internet, and cable TV from within the industry. As a successful inventor with over 50 software patents that have been widely licensed to serve billions of users, I have a proven talent for seeing what technology can do for people. Extreme disappointment about the harmful misdirection of recent developments in platforms and media has spurred me to continue work on this theme on a pro-bono basis.)
My general comment is that for tech to serve democracy, we not only need to regulate to limit monopolies and other abuses, but also need to regulate with a vision of what tech should do for us -- to better enable regulation to facilitate that, and to recognize the harms of failing to do so. If we don't know what we should expect our systems to do, it is hard to know when or how to fix them. The harm Facebook does becomes far more clear when we understand what it could do -- in what ways it could be "bringing people closer together," not just that it is actually driving them apart. That takes a continuing process of thinking about the technical architectures we desire, so competitive innovation can realize and evolve that vision in the face of rapid technology and market developments.
More specifically, I see architectural designs for complex systems as being most effective when built on adaptive feedback control loops that are extensible to enable emergent solutions, as contexts, needs, technologies, and market environments change. That is applicable to all the strategies I am suggesting (and to technology regulation in general).
- I cited the Bell System regulation as a case in point that introduced well-architected modularity in the Carterfone Decision (open connections via a universal jack, much like modern APIs), followed by the breakup into local and long-distance and manufacturing, and the later introduction of number portability. This resonated as reflecting not only the wisdom of regulators, but expert vision of the technical architecture needed, specifically what points of modularity (interoperability) would enable innovation. (Of course the Bell System emerged as a natural monopoly growing out of an earlier era of competing phone systems that did not interoperate.)
- The modular architecture of email is another very relevant case in point (one that did not require regulation).
- The original Web and Web 2.0 were built on similar modularity and APIs that facilitated openness, interoperability, and extensibility.
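The email example can be made concrete in code. Below is a minimal, hypothetical sketch (the class names and interfaces are my own, not from any standard) of the split between interchangeable "user agents" and a shared "transfer agent" interoperability point -- any competing client can plug into any conforming network layer:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    recipient: str
    body: str


class TransferAgent(ABC):
    """The shared interoperability point (like SMTP between mail servers)."""
    @abstractmethod
    def deliver(self, msg: Message) -> None: ...


class InMemoryTransferAgent(TransferAgent):
    """A toy network: real ones would route across servers."""
    def __init__(self):
        self.mailboxes = {}  # recipient address -> list of Messages

    def deliver(self, msg: Message) -> None:
        self.mailboxes.setdefault(msg.recipient, []).append(msg)


class UserAgent:
    """Any competing client (Outlook, Apple Mail, Gmail...) can plug into
    any TransferAgent, because the interface is the point of modularity."""
    def __init__(self, address: str, network: TransferAgent):
        self.address = address
        self.network = network

    def send(self, to: str, body: str) -> None:
        self.network.deliver(Message(self.address, to, body))


net = InMemoryTransferAgent()
alice = UserAgent("alice@example.com", net)
alice.send("bob@example.com", "hello")
```

The regulatory analogue: mandating (or protecting) the interface, not any particular implementation, is what lets innovation happen on both sides of the boundary.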
I noted three areas where my work suggests how to add a more visionary dimension to the excellent work in the cited reports. One is a fundamental problem of structure, and the other two are problems of values that reinforce one another. (The last one applies not only to the platforms, but to the fundamental challenge of sustaining news services in a digital world.) All of these are intended not as definitive point solutions, but as ongoing processes that involve continuing adaptation and feedback, so that the solutions are continuously emergent as technology and competitive developments advance.
1. System and business structure -- Modular architecture for flexibility and extensibility. The heart of systems architecture is well-designed modularity: the separation of elements that can interoperate yet be changed at will. That seems central to regulation as well -- especially to identify and manage exclusionary bottlenecks/gateways. At a high level, the e-mail example is very relevant to how different "user agents" such as Outlook, Apple Mail, and Gmail clients can all interoperate to interconnect all users through "message transfer agents" (the mesh of mail servers on the Internet). A similar decoupling should be done for social media and search (for both information and shopping).
Similar modularity could usefully separate such elements as:
- Filtering algorithms -- to be user selectable and adjustable, and to compete in an open market, much as third-party financial analytics can plug in to work with market data feeds and user interfaces.
- Social graphs -- to enable different social media user interfaces to share a user's social graph (much like the email user agent / transfer agent split).
- Identity -- verified / aliased / anonymous / bots could interoperate with clearly distinct levels of privilege and reputation.
- Value transfer/extraction systems -- this could address data, attention, and user-generated content and the pricing that relates to that.
- Analytics/metrics – controlled, transparent monitoring of activity for users and regulators.
This is explained in Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm.
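To illustrate the separation these elements imply, here is a hypothetical sketch (the decomposition and names are illustrative assumptions, not a proposed standard) of a feed service where the social graph and the filtering algorithm are independently replaceable -- so a user could swap in a third-party filter without touching the graph layer:

```python
from typing import Protocol


class SocialGraph(Protocol):
    def followees(self, user_id: str) -> list: ...


class FilterAlgorithm(Protocol):
    def rank(self, user_id: str, items: list) -> list: ...


class DictGraph:
    """A trivially simple graph store; any conforming store is interchangeable."""
    def __init__(self, edges):
        self.edges = edges

    def followees(self, user_id):
        return self.edges.get(user_id, [])


class RecencyFilter:
    """One of many competing filters a user might select in an open market."""
    def rank(self, user_id, items):
        return sorted(items, key=lambda i: i["timestamp"], reverse=True)


class FeedService:
    """The platform composes whichever modules the user has chosen."""
    def __init__(self, graph, filter_algo):
        self.graph = graph
        self.filter_algo = filter_algo

    def feed(self, user_id, all_items):
        followed = set(self.graph.followees(user_id))
        candidates = [i for i in all_items if i["author"] in followed]
        return self.filter_algo.rank(user_id, candidates)


svc = FeedService(DictGraph({"me": ["ann"]}), RecencyFilter())
items = [{"author": "ann", "timestamp": 1},
         {"author": "bob", "timestamp": 2},
         {"author": "ann", "timestamp": 3}]
result = svc.feed("me", items)
```

Replacing RecencyFilter with any other object exposing `rank` changes the user's experience without any change to the graph or identity layers -- the interface is the regulatory-relevant boundary.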
2. User-value objectives -- filtering algorithms controlled by and for users. This is the true promise of information technology -- not artificial intelligence, but the augmentation of human intelligence.
- User value is complex and nuanced, but Google's original PageRank algorithm for search results filtering demonstrates how sophisticated algorithms can optimize for user value by augmenting the human wisdom of crowds -- the algorithm can infer user intent, and weigh implicit signals of authority and reputation derived from human activity at multiple levels, to find relevance in varying contexts.
- In search, the original PageRank signal was inward links to a Web page, taken as expressions of the value judgements of individual human webmasters regarding that page. That has been enriched to weed out fraudulent "link farms" and other distortions, and expanded in many other ways.
- For the broader challenge of social media, I outline a generalization of the same recursive, multi-level weighting strategy in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings. The algorithm ranks items (of all kinds) based on implicit and explicit feedback from users (in all available forms), partitioned to reflect communities of interest and subject domains, so that desired items bubble up, and undesired items are downranked. This can also combat filter bubbles -- to augment serendipity and to identify "surprising validators" that might cut through biased assimilation.
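The core "rate the raters and weight the ratings" idea can be sketched as a simple fixed-point iteration. This is my own minimal illustration of the recursive weighting principle, not the full partitioned-by-community algorithm described in the cited post: item scores are reputation-weighted averages of ratings, and rater reputations are then updated by agreement with the emerging consensus:

```python
def rate_the_raters(ratings, iterations=20):
    """ratings: dict mapping (rater, item) -> score in [0, 1].
    Returns (item_score, reputation) after iterating the two
    mutually recursive updates to an approximate fixed point."""
    raters = {r for (r, _) in ratings}
    items = {i for (_, i) in ratings}
    reputation = {r: 1.0 for r in raters}
    item_score = {}
    for _ in range(iterations):
        # Weight the ratings: each item's score is a reputation-weighted mean.
        for i in items:
            num = sum(reputation[r] * s for (r, it), s in ratings.items() if it == i)
            den = sum(reputation[r] for (r, it) in ratings if it == i)
            item_score[i] = num / den
        # Rate the raters: reputation reflects agreement with consensus.
        for r in raters:
            errs = [abs(s - item_score[it]) for (rr, it), s in ratings.items() if rr == r]
            reputation[r] = max(1e-6, 1.0 - sum(errs) / len(errs))
    return item_score, reputation
```

With two raters who agree and one contrarian, the contrarian's influence decays over iterations -- the recursive structure is what lets the system weigh signals "at multiple levels" rather than counting every rating equally.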
- That proposed architecture also provides for deeper levels of modularity: to enable user control of filtering criteria, and flexible use of filtering tools from competing sources -- which users could combine and change at will, depending on the specific task at hand. That enables continuous adaptation, emergence, and evolution, in an open, competitive market ecosystem of information and tools. (As noted below, the Masnick paper makes a nice case for this.)
- Filtering for user and societal value: The objective is to allow for smart filtering that applies all the feedback signals available to provide what is valued by that user at that time. By allowing user selection of filtering parameters and algorithms, the filters can become increasingly well-tuned to each user's value system, as it applies within each community of interest, and each subject domain.
- First Amendment, Section 230, prohibited content issues, and community standards: When done well, this filtering might largely address those concerns about bad content, greatly reducing the need for the blunt instrument of regulatory controls or censorship, and working in real time, at Internet speed, with minimal need for manual intervention regarding specific items. As I understand it, this finesses most of the legal issues: users could retain the right to post information with very little restriction -- if objectionable content is automatically downranked enough in any filtering process that a service provides (an automated form of moderation) to avoid sending it to users who do not want such content -- or who reside in jurisdictions that do not permit it. Freedom of speech (posting), not freedom of reach (delivery) to others who have not invited it.
-- Thus Section 230 might be applied to posting, just as seemed acceptable when information was pulled from the open Web.
-- But the Section 230 safe harbor protections against liability might not apply to the added service of selective dissemination, when information is pushed through social media (and when ads are targeted into social media). The filtering that determines what users see might apply both user- and government-defined restrictions (as well as restrictions at the level of specific user communities that desire those restrictions). [See 2/4/20 update below on related Section 230 issues.]
(Such methods might evolve to become a broad architectural base for richly nuanced forms of digital democracy.)
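The posting-versus-delivery distinction can be shown in a few lines. This is a hypothetical sketch (the label and rule schema are invented for illustration): every post remains posted, and only delivery into a given user's feed is filtered, combining the user's own exclusions with their jurisdiction's prohibitions:

```python
def filter_for_delivery(posts, user, jurisdiction_rules=None):
    """Posting is unrestricted; only selective delivery is filtered.
    posts: list of dicts with a 'labels' list (illustrative schema).
    user: dict with 'opt_out' (labels the user declines to receive)
    and 'jurisdiction' (key into jurisdiction_rules)."""
    jurisdiction_rules = jurisdiction_rules or {}
    blocked = set(user["opt_out"]) | set(jurisdiction_rules.get(user["jurisdiction"], ()))
    # A post is withheld from this feed only if it carries a blocked label;
    # it remains posted and reachable by users who choose to pull it.
    return [p for p in posts if not (set(p["labels"]) & blocked)]
```

The legal mapping suggested above: the safe harbor covers the full `posts` store (posting), while the output of `filter_for_delivery` -- the pushed feed -- is where responsibility for selective dissemination would attach.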
[See 1/10/20 Update below on distribution filtering as the choke point for disinformation. It is here that we can reverse the wrong direction of social media that is so destructively making people dumber instead of smarter. This is now expanded slightly as a free standing post, The Dis-information Choke Point: Dis-tribution (Not Supply or Demand)]
3. Business model value objectives -- who does the platform serve? This is widely observed to be the "original sin" of the Internet, one that prevents the emergence of better solutions in the above two areas. Without solving this problem, it will be very difficult to solve the other problems. As Upton Sinclair observed, "It is difficult to get a man to understand something when his job depends on not understanding it." We call them services, but they do not serve us. Funding of services with the ad model makes those services seem free and affordable, but drives platform services businesses to optimize for engagement (to sell ads), instead of optimizing for the value to users and society. Users are the product, not the customer, and value (attention) is extracted from the users to serve the platforms and the advertisers.
Also, modern online advertising is totally unlike prior forms of advertising because unprecedented detail in user data and precision targeting enables messaging and behavioral manipulations at an individual level. That has driven algorithm design and use of the services in catastrophically harmful directions, instead of beneficial ones.
Many have recognized this business model problem, but few see any workable solution. I suggest a novel path forward at two levels: an incentive ratchet to force the platforms to seek solutions, and some solution mechanisms that show how that ratchet could bear fruit in ways that are both profitable and desirable -- in ways that few now imagine.
Ratchet the desired business model shift with a simple dial, based on a simple metric. A very simple and powerful regulatory strategy could be to impose taxes or mandates that gradually ratchet toward the desired state. This leverages market forces and business innovation in the same way as the very successful model of the CAFE standards for auto fuel efficiency -- it leaves the details of how to meet the standard to each company.
- The ratchet here is to provide compelling incentives for dominant services to ensure that X% of revenue must come from users. Such compelling taxes or mandates might be restricted to distribution services with ad revenues above some threshold level. (Any tax or penalty revenue might be applied to ameliorate the harms.)
- That X% might be permitted to still include advertising revenue if it is quantified as a credit back to the user (a "reverse meter," much as for co-generation of electricity). Advertising can be valuable and non-intrusive and respectful of data -- explicitly putting a price on the value transfer from the consumer would incentivize the advertising market toward user value.
- This incentivizes individual companies to shift their behavior on their own, without need for the kind of new data intermediaries ("infomediaries" or fiduciaries) that others have proposed without success. It could also create more favorable conditions for such intermediaries to arise.
This is explained in To Regulate Facebook and Google, Turn Users Into Customers; Who Should Pay the Piper for Facebook? (& the rest); Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data; and A Regulatory Framework for the Internet (with Thanks to Ben Thompson).
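The ratchet mechanics reduce to simple arithmetic. Here is an illustrative sketch (the parameter names and all numeric values are assumptions for illustration, not proposed policy settings) of how a CAFE-style penalty might be computed, with reverse-meter credits counting toward the user-revenue share:

```python
def ratchet_penalty(ad_revenue, user_revenue, reverse_meter_credits,
                    required_user_share, tax_rate):
    """Toy model of the proposed ratchet. Ad value explicitly credited
    back to users (the 'reverse meter') counts toward the user share.
    Returns the tax owed for any shortfall against the required share."""
    total = ad_revenue + user_revenue
    effective_user = user_revenue + reverse_meter_credits
    share = effective_user / total
    shortfall = max(0.0, required_user_share - share)
    return shortfall * total * tax_rate
```

Regulators would turn the dial by raising `required_user_share` over time, leaving each company free to decide whether to meet it with subscriptions, memberships, or reverse-meter ad credits.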
Digital services business model issues -- for news services as well as platforms. (Not addressed at the event, but included in some of the reports.) Many (most prominently Zuckerberg) throw up their hands at finding business models for social media or search that are not ad-funded, primarily because of affordability issues. The path to success here is uncertain (just as the path to fuel-efficient autos is uncertain). But many innovations emerging at the margins offer reasons to believe that better solutions can be found.
- One central thread is the recognition that the old economics of the invisible hand fails because there is no digital scarcity for the invisible hand to ration. We need a new way to settle on value and price.
- The related central thread is the idea of a social contract for digital services, emerging most prominently with regard to journalism (especially investigative and local). We must pay now, not for what has been created already, but to fund continuing creation for the future. Behavioral economics has shown that people are not homo economicus but homo reciprocans -- they want to be fair and do right, when the situation is managed to encourage win-win behaviors.
- Pricing for digital services can shift from one-size-fits-all to mass-customization of pricing that is fair to each user with respect to the value they get, the services they want to sustain, and their ability to pay. Current all-you-can-eat subscriptions or pay-per-item models track poorly to actual value. And, unlike imposing secretive price discrimination, this value discrimination can be done cooperatively (or even voluntarily). Important cases in point are The Guardian's voluntary payment model, and recurring crowdfunding models like Patreon. Journalism is recognized to be a public good, and that can be an especially strong motivator for sustaining payments.
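Cooperative value discrimination can be sketched as a pricing function. This is a toy model of my own for illustration only -- the factors, weights, and their combination are invented assumptions, not the actual FairPay process: price scales with the value received (usage) and ability to pay, tempered by the user's own track record of fair payment:

```python
def suggested_price(base_price, usage_ratio, income_factor, past_fairness):
    """Toy sketch of cooperative value discrimination.
    usage_ratio: 0..1, share of the service's value this user consumed.
    income_factor: scales with ability to pay (1.0 = baseline).
    past_fairness: 0..1, the user's history of paying fairly,
    which is what keeps a voluntary scheme sustainable."""
    price = base_price * (0.5 + usage_ratio) * income_factor
    return round(price * past_fairness, 2)
```

The point of the sketch is the shape, not the numbers: heavy users with greater means see higher suggested prices, light or less affluent users see lower ones, and the fairness term turns pricing into an ongoing relationship rather than a one-shot transaction.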
- Synergizing with this, and breaking from norms we have become habituated to, the other important impact of digital is the shift toward a Relationship Economy -- shifting focus from one-shot zero-sum transactions to ongoing win-win relationships such as subscriptions and membership. This builds cooperation and provides new leverage for beneficial application of behavioral economic nudges to support this creative social contract, in an invisible handshake. My own work on FairPay explains this and provides methods for applying it to make these services sustainable by user payments. (See this Overview with links, including journal articles with prominent marketing scholars, brief articles in HBR and Techonomy, and many blog posts, such as one specific to journalism.)
- Vouchers. The Stigler Committee proposal for vouchers might be enhanced by integration with the above methods. Voucher credits could be integrated with subscription/membership payments to directly subsidize individual payments, and to nudge users to donate above the voucher amounts.
- Affordability. To see how this deeper focus on value changes our thinking, consider the economics of reverse meter credits for advertising, as suggested for the ratchet strategy above. As an attendee noted at the event, reverse metering would seem to unfairly favor the rich, since they can better afford to pay to avoid ads. But the platforms actually earn much more for affluent users (their targeted ad rates are much higher). If prices map to the value surplus, that will tend to balance things out -- if the less affluent want service to be ad-free, it should be less costly for them than for the affluent. And when ads become less intrusive and more relevant, even the affluent may be happy to accept them (how about the ads in Vogue?).
Some further reflections
From reviewing Harold Feld's book and discussing it with him:
- He notes the growing calls for antitrust regulation to consider harms beyond price increases (which ignores the true costs of "free" services) and suggests "cost of exclusion" (COE) as a useful metric of harm to manage for.
- I suggest that similar logic argues for more attention to what platforms could and should be doing as a metric of harm. The idea is not to mandate what they should do, but to avoid blocking it -- and to estimate the cost of not providing valuable services that a more competitive market, incentivized to serve end-users, would provide in some form.
- Feld also suggests that it is a proper objective of regulation to support promotion of good content and discourage bad content (just as was done for broadcast media). Further to that objective, my Augmented Wisdom of Crowds methods show how that can become nuanced, dynamic, reflective of user desires, domains of expertise, and communities of interest, and selectively matched to the standards of many overlapping communities. A related post highlights how this can serve as A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News.
- On Section 230-related issues, an interesting question I have not seen well addressed is how the targeting of advertising interplays with filtering feeds for content of all kinds.
-- I advocate that filtering of content feeds should be controlled by and for the end-users of the feeds, and economic incentives should align to that.
-- Targeting of ads (political or commercial) is currently a countervailing force that directs ads to users in ways that do not align with their wishes (and motivates filtering to inflame rather than enlighten).
-- Reverse metering of attention and data could provide a basis to negotiate -- in this two-sided market -- over just how targeting meshes with the prioritization and presentation of items in feeds. (A valuable new resource on the design of multi-sided platforms is The Platform Canvas.)
- Push vs. pull: Also related to managing harmful content, Feld draws useful distinctions of Broadcast/Many-to-Many vs. Common Carrier/One-to-One and Passive Listening vs. Active Participation. I suggest the distinction between push and pull distribution/access is also very important to First Amendment issues:
-- Pull is an on-demand request for a specific item, such as an active search or direct access to a Web service. In a free society there should presumably be very limited restrictions on what content users may pull.
-- Push is a continuing feed, such as a social media news feed. This can be a firehose of everything (subject to privacy constraints) or a filtered feed (as is typical in current social media). I think Feld's analysis supports the case that there is no First Amendment right of a speaker to have their speech pushed to others in a filtered feed (no free reach or free targeting, as in my posts below). Note that filtering items in a feed uses much the same discrimination technology as filtering (ranking) of search results -- for example, Google Alerts are "a standing search" applied to create a feed of newly posted items that match. (I have fundamental patents from 1994, now expired, on a widely used class of push.)
- Feld addresses the issues of filter bubbles and serendipity and proposes "wobbly algorithms" that introduce more variety (and I found recent support for that in this new CACM article). I have outlined methods for seeking Surprising Validators and serendipity that are more purposeful, going far beyond just random variation.
- Regarding the quality of news, he addresses the widely supported idea of "tools for reliable sources." I suggest that human rating services (like NewsGuard) are far too limited in scope and timeliness, and too open to dispute, to be more than a very partial solution. The algorithmic methods I propose can include such expert rating services as just one high-reputation component of a broader weighting of authority and relevance -- one in which everyone with a reputation for sound judgment in a subject domain contributes, weighted by that reputation. The augmented crowd will often be smarter than the experts -- and can work far faster to flag problematic content at Internet scale.
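As a minimal sketch of that weighting idea -- with invented names, scores, and reputations, not the actual method -- reputation-weighted aggregation might look like this: each rater's judgment counts in proportion to their earned reputation in the relevant domain, and an expert rating service is just one high-reputation participant among many.

```python
# Hypothetical sketch: aggregate item ratings (-1 bad .. +1 good), weighting
# each rater by their reputation for sound judgment in the subject domain.

def weighted_rating(ratings, reputation):
    """Reputation-weighted average of ratings; unknown raters get zero weight."""
    total_weight = sum(reputation.get(rater, 0.0) for rater in ratings)
    if total_weight == 0:
        return 0.0
    return sum(score * reputation.get(rater, 0.0)
               for rater, score in ratings.items()) / total_weight

# An expert rating service is one high-reputation voice among many (all invented).
reputation = {"NewsGuard": 5.0, "alice": 2.0, "bob": 0.5}
ratings = {"NewsGuard": -1.0, "alice": -0.5, "bob": 1.0}  # bob is an outlier

score = weighted_rating(ratings, reputation)
assert score < 0  # the weighted crowd flags the item despite bob's upvote
```

The same machinery can run continuously at feed speed, which is what lets the augmented crowd react faster than any human rating service alone.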
Technical architecture issues
I was very pleased to happen upon the Masnick article, Protocols, Not Platforms: A Technological Approach to Free Speech, a couple of weeks ago -- the nearest thing to the vision I have been developing that I have yet seen. It is not aimed at regulation, apparently in hopes that the market can correct itself (a hope I have shared, but no longer put much faith in). Our works are both overlapping and complementary, reinforcing and expanding in different ways on very similar visions for user-controlled, open filtering of social media and the marketplace of ideas. I recommend his paper to anyone who wants to understand how this technology can be far more supportive of user value by enabling users to mold their social media to their individual values, and as a foundation for better understanding my more specific proposals.
As background, in developing these ideas for an open market of user-controlled filtering tools, I drew on my experience in financial technology from around 1990. There was a growing open market ecosystem for transaction level financial market data (generated by the stock exchanges and other markets -- ticker feeds and the like), which was then gathered and redistributed to brokers and analysts by redistributors like Dow Jones, Telerate, and Bloomberg. An open market for analytic tools that could analyze this data and provide a rich variety of financial metrics was developing -- one that could interoperate, so that brokers and analysts could apply those analytics, or create their own custom variations (as an early form of mashup). That was an inspiration for work I did in 2002 to design a system for open collaboration on finding and developing innovations, in the days when "open innovation" was an emerging trend. That design provided very rich functions for flexible, mass collaboration that I later adapted to apply to social media (as described on my blog, starting in 2012, when I saw that current systems were not going in the direction I thought they should).
Personal privacy versus openness, and interoperability
Privacy has emerged as a critical issue in Internet services, and one that is often in conflict with the objectives of openness and interoperability that are essential to the marketplace of services and to the broader marketplace of ideas (and also to making AI/ML as beneficial as possible). Here again there is a need for nuance and expertise to sensibly balance the issues, and there is reason to fear that current privacy legislation initiatives may fail to provide a proper balance. I believe there are more nuanced ways to meet these conflicting objectives, but leave more specific exploration of that for another time.
Moving forward
We have learned that our Web services are far more complex and have far more dangerous impacts on society than we realized. We need to move forward with more deliberation, and need a business and regulatory environment capable of guiding that. We have seen how dangerous it can be to "move fast and break things."
I am working independently on a pro-bono basis on these issues, and welcome opportunities to collaborate with others to move in the directions outlined here. (These ideas draw on two detailed patent filings from 2002 and 2010 that I have placed into the public domain.)
---
[Update 1/2/20:] Mediating consent by augmenting the wisdom of crowds
Renee DiResta (who wrote the Free Speech is Not the Same as Free Reach post I cited above) recently wrote an excellent article, Mediating Consent, which I commented on today. Her article is an excellent statement of how we are now at a turning point in the evolution of how human society achieves consensus – or breaks down in strife. She says “The future that realizes this promise still remains to be invented.” As outlined above, I believe the core of that future has already been invented — the task is to decide to build out on that core, to validate and adjust it as needed, and to continuously evolve it as society evolves.
[Update 1/10/20:] The disinformation choke point: distribution (not supply or demand) --
[This is now expanded slightly to be a free-standing post]
An excellent 1/8/20 report from the National Endowment for Democracy, “Demand for Deceit: How the Way We Think Drives Disinformation,” by Samuel Woolley and Katie Joseff, highlights the dual importance of both supply and demand side factors in the problem of disinformation. That crystallizes in my mind an essential gap in this field -- smarter control of distribution -- that was implicit in my comments on algorithms (section #2 above).
There is little fundamentally new about the supply or the demand for disinformation. What is fundamentally new is how disinformation is distributed. That is what we most urgently need to fix. If disinformation falls in a forest… but appears in no one’s feed, does it disinform?
In social media a new form of distribution mediates between supply and demand. The media platform does filtering that upranks or downranks content, and so governs what users see. If disinformation is downranked, we will not see it -- even if it is posted and potentially accessible to billions of people. Filtered distribution is what makes social media not just more information, faster, but an entirely new kind of medium. Filtering is a new, automated form of moderation and amplification. That has implications for both the design and the regulation of social media.
By changing social media filtering algorithms we can dramatically reduce the distribution of disinformation. It is widely recognized that there is a problem of distribution: current social media promote content that angers and polarizes because that increases engagement and thus ad revenues. Instead the services could filter for quality and value to users, but they have little incentive to do so. What little effort they ever have made to do that has been lost in their quest for ad revenue.
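To illustrate the contrast in the simplest possible terms -- with invented item names and weights, purely as a sketch -- the same items reorder completely depending on whether the filter optimizes for engagement or for quality and value to the user:

```python
# Illustrative sketch (invented data): the same feed ranked two ways.
# Engagement-driven filtering amplifies outrage; quality-driven filtering
# downranks it -- same items, opposite ordering.

items = [
    {"id": "outrage-bait", "engagement": 0.9, "quality": 0.2},
    {"id": "solid-report", "engagement": 0.4, "quality": 0.9},
]

by_engagement = sorted(items, key=lambda i: i["engagement"], reverse=True)
by_quality = sorted(items, key=lambda i: i["quality"], reverse=True)

assert by_engagement[0]["id"] == "outrage-bait"  # ad-revenue incentive wins
assert by_quality[0]["id"] == "solid-report"     # user-value incentive wins
```

Nothing about the content changes between the two rankings; only the objective function does -- which is why distribution, not supply or demand, is the tractable choke point.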
Social media marketers speak of "amplification." It is easy to see the supply and demand for disinformation, but marketing professionals know that it is amplification in distribution that makes all the difference. Distribution is the critical choke point for controlling this newly amplified spread of disinformation. (And as Feld points out, the First Amendment does not protect inappropriate uses of loudspeakers.)
While this is a complex area that warrants much study, as the report observes, the arguments cited against the importance of filter bubbles in the box on page 10 are less relevant to social media, where the filters are largely based on the user’s social graph (who promotes items to be fed to them, in the form of posts, likes, comments, and shares), not just active search behavior (what they search for).
Changing the behavior of demand is clearly desirable, but a very long and costly effort. It is recognized that we cannot stop the supply. But we can control distribution -- changing filtering algorithms could have significant impact rapidly, and would apply across the board, at Internet scale and speed -- if the social media platforms could be motivated to design better algorithms. I explain further in A Cognitive Immune System for Social Media -- Developing Systemic Resistance to Fake News and In the War on Fake News, All of Us are Soldiers, Already! That is what I am advocating in my section #2.
Yes, "the way we think drives disinformation," and social media distribution algorithms drive how we think -- we can drive them for good, not bad!
[Update 2/4/20] Related Section 230 issues.
The discussion above related to posting versus distribution did not clearly address other issues that have driven lobbying against Section 230. These include companies concerned about illegal postings on Airbnb, and about copyright infringement, and other improper content. My initial take on this is that the distinction of posting versus filtered distribution outlined above should also distinguish posting from other forms of selective distribution, such as by search in which a selection or moderation function is present.
For example, Airbnb is a marketplace in which Airbnb may not offer a filtered feed, but it does offer search services. The essential point is that Airbnb filters searches by selection criteria -- and by its own listing standards -- so there is an expectation of quality control. As long as Airbnb provides that quality control service, it is doing moderation, and thus should not have safe harbor under Section 230 for its search results. If it did no moderation, then posting on Airbnb should properly have safe harbor protection, but selective (filtered) search functions might not have safe harbor to include illegal postings. Access to such uncontrolled postings might be limited to explicit searches for a specific property identifier (essentially a URL) to retain safe harbor protection.
So here as well, it seems the proper and tractable understanding of the problem is not in the posting, but in the distribution.
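The operational distinction can be sketched in a few lines -- with invented postings and criteria, purely illustrative: direct retrieval of a specific posting by identifier is pure "pull" in which the host selects nothing, while a filtered search applies the host's own selection criteria, which is the activity that looks like moderation.

```python
# Hypothetical sketch of posting vs. selective distribution on a marketplace.

postings = {
    "prop-123": {"beds": 1, "meets_listing_standards": False},
    "prop-456": {"beds": 2, "meets_listing_standards": True},
}

def fetch_by_id(posting_id):
    """Pull: an explicit request for one known item; the host selects nothing."""
    return postings.get(posting_id)

def filtered_search(min_beds):
    """Selective distribution: the host's own criteria govern what users see."""
    return [pid for pid, p in postings.items()
            if p["beds"] >= min_beds and p["meets_listing_standards"]]

# The unvetted posting remains reachable by explicit identifier, but never
# appears in the host's curated search results.
assert fetch_by_id("prop-123") is not None
assert "prop-123" not in filtered_search(min_beds=1)
```

Safe harbor could then plausibly attach to `fetch_by_id`-style access while the curated `filtered_search` path carries responsibility, mirroring the posting/distribution split argued above.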
[Update 9/9/20] A killer TED Talk and another excellent analysis
YaĆ«l Eisenstat's TED Talk, "How Facebook profits from polarization," is very important, right on target, and well said! If you don’t understand why Facebook and other social media are the gravest threat to society (as they currently operate), this will be the most informative 14 minutes you can spend. (From a former CIA analyst, diplomat…and Facebook staffer.) (9/8/20)
New Digital Realities; New Oversight Solutions from the Harvard Shorenstein Center, by Tom Wheeler, Phil Verveer, and Gene Kimmelman, is another excellent think tank proposal that is right on target. (8/20/20)
[Update 12/14/20] A specific proposal - Stanford Working Group on Platform Scale
An important proposal that gets at the core of the problems in media platforms was published in Foreign Affairs: How to Save Democracy From Technology, by Francis Fukuyama and others. See also the report of the Stanford Working Group. The idea is to let users control their social media feeds with open market interoperable filters. That is something I have proposed, and I have provided details on how and why to do it.
[Update 2/12/21] Growing support for open market filtering services - Twitter too
More proposals for this have surfaced, including in Senate testimony, plus indications of interest from Twitter. That suggests this may be the best path for action. See this newer post and this update, and stay tuned for more.
[Important Update 4/26/21]
This important strategy for a surgical restructuring was published in Tech Policy Press. An open market strategy that shifts control over our feeds to the users they serve complements the actions discussed here. This new article summarizes and expands on proposals from notable sources (including Twitter CEO Jack Dorsey) that get at the core of the problems in media platforms.