Tuesday, August 20, 2019

The Great Hack - The Most Important and Scariest Film of this Century


An Unseen Digital Pearl Harbor, Dissected

The Great Hack is a film every person should see!

America and other democracies have been invaded, and subverted in ways and degrees that few appreciate.

This film (on Netflix) uncovers the layers of the Russia / Cambridge Analytica hack of the US 2016 election (and Brexit), and clearly shows how deeply our social media have been subverted as a fifth column aimed at the heart of democracy and enlightened civilization.

The Great Hack provides clarity about the insidious damage being done by our seemingly benign social media -- and still running wild because too few understand or care about our state of peril -- and because those businesses profit from enabling our attackers. It provides an excellent primer for those who have not tuned in to this, and for those who do not understand the nature of the threat.

It is a much-needed wake-up call that far more people urgently need to heed.

"Prof. Carroll Goes to London"

What makes this a great film, and not just an important documentary, is how it is told as the story of a (not so) common man.

Much like Jimmy Stewart's Mr. Smith Goes to Washington, this is the story of a mild-mannered citizen, Professor David Carroll of The New School in NYC, who sees a problem and seeks to follow a simple quest for truth and justice (to know what data Facebook and Cambridge Analytica have on him). It traces his awakening and journey to the belly of the beast. This time it is real, and the stakes could not be higher.

I found this film especially interesting, having met David at an event on the fake news problem in February 2017, and then at a number of subsequent events on this important theme (many under the auspices of NYC Media Lab and its director, Justin Hendrix). It is a problem I have explored and offered some remedies for on this blog.

Wednesday, July 24, 2019

To Regulate Facebook and Google, Turn Users Into Customers

First published in Techonomy, 2/26/19 -- and more timely than ever...

There is a growing consensus that we need to regulate Facebook, Google, and other large internet platforms that harm the public in large part because they are driven by targeted advertising.  The seductive idea that we can enjoy free internet services — if we just view ads and turn over our data — has been recognized to be “the original sin” of the internet.  These companies favor the interests of the advertisers they profit from more than the interests of their billions of users.  They are powerful tools for mass-customized mind-control. Selling their capabilities to the highest bidder threatens not just consumer welfare, but society and democracy.

There is a robust debate emerging about how these companies should be regulated. Many argue for controls on data use and objectionable content on these platforms.  But poorly targeted regulation risks many adverse side-effects – for example abridging legitimate speech, and further entrenching these dominant platforms and impeding innovation by making it too costly for others to compete.

But I believe we need to treat the disease, not just play whack-a-mole with the symptoms. It’s the business model, stupid! It is widely recognized that the root cause of the problem is the extractive, ad-funded business model that motivates manipulation and surveillance.  The answer is to require these companies to shift to revenue streams that come from their users.  Of course, shifting cold-turkey to a predominantly user-revenue-based model is hard.  But in reality, we have a simple, market-driven, regulatory method that has already proven its success in addressing a similarly challenging problem – forcing automakers to increase the fuel efficiency of the cars they make. Government has for years required staged multi-year increases in Corporate Average Fuel Economy (CAFE). A similar strategy can be applied here.

This market-driven strategy does not mandate how to fix things. It instead mandates a measurable limit on the systems that have been shown to cause harm.  Each service provider can determine on their own how best to achieve that.  Require that X% of the revenue of any consumer data service come from its users rather than advertisers.  Government can monitor their progress, and create a timetable for steadily ratcheting up the percentage.  (This might apply only above some amount of revenues, to limit constraints on small, innovative competitors.)
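To make the mechanics concrete, here is a minimal sketch of how such a ratcheting mandate could be checked. The revenue floor, the schedule, and all percentages are purely hypothetical illustrations of the idea, not proposals:

```python
# Sketch of a ratcheting user-revenue mandate. All numbers are hypothetical.

REVENUE_FLOOR = 1_000_000_000  # assumed: mandate applies only above $1B total revenue

# Hypothetical timetable: minimum fraction of revenue that must come from users.
SCHEDULE = {2021: 0.05, 2022: 0.10, 2023: 0.20, 2024: 0.35}

def required_user_share(year: int) -> float:
    """Return the mandated minimum user-revenue fraction in effect for a year."""
    applicable = [share for y, share in SCHEDULE.items() if y <= year]
    return max(applicable, default=0.0)

def is_compliant(year: int, user_revenue: float, ad_revenue: float) -> bool:
    """Check whether a platform meets the year's user-revenue target."""
    total = user_revenue + ad_revenue
    if total < REVENUE_FLOOR:  # small, innovative competitors are exempt
        return True
    return user_revenue / total >= required_user_share(year)
```

The point of the sketch is that government only monitors a single measurable ratio; how a platform gets there is left to the platform.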

It is often said of our internet platforms that “if you are not the customer, you are the product.”  This concept may oversimplify, but it is deeply powerful.  With or without detailed regulations on privacy and data use, we need to shift platform incentives by making the user become the customer, increasingly over time.

Realigning incentives for ads and data.  Advertising can provide value to users – if it is targeted and executed in a way that is non-intrusive, relevant, and useful.  The best way to make advertising less extractive of user value is to quantify a “reverse meter” that gives users credit for their attention and data.  Some services already offer users the option to pay in order to avoid or reduce ads (Spotify is one example).  That makes the user the customer. Both advertisers and the platforms benefit by managing user attention to maximize user value, rather than optimizing for exploitive “engagement.”
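As a toy illustration of the reverse-meter idea, here is a minimal billing sketch. The fee and credit rates are invented for illustration only; a real system would negotiate these terms with each user:

```python
# Hypothetical "reverse meter": opted-in ads viewed and data shared earn
# credits against a monthly service fee. All rates are illustrative assumptions.

BASE_FEE = 10.00              # assumed monthly fee for an ad-free tier
CREDIT_PER_AD = 0.02          # assumed credit per opted-in ad impression
CREDIT_PER_DATA_ITEM = 0.05   # assumed credit per shared data category

def monthly_bill(ads_viewed: int, data_items_shared: int) -> float:
    """Net amount the user owes after attention/data credits (never below zero)."""
    credits = ads_viewed * CREDIT_PER_AD + data_items_shared * CREDIT_PER_DATA_ITEM
    return max(0.0, BASE_FEE - credits)
```

Under this sketch, a user who accepts enough ads pays nothing out of pocket, yet still counts as user revenue, since the advertiser is effectively paying the user's fee through the platform.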

What if the mandated user revenue level is not met?  Government could tax away enough ad revenue to meet the target percentage.  That would provide a powerful incentive to address the problem.  In addition, that taxed excess ad revenue could fund mechanisms for oversight and transparency, for developing better solutions, and for remediating disinformation.
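The tax needed to bring a platform into line follows directly from the target ratio: if user revenue is U and the target fraction is x, the maximum allowed ad revenue is U(1 - x)/x, and the excess above that is taxed away. A minimal sketch, with the formula worked out in the comment:

```python
def excess_ad_revenue_tax(user_rev: float, ad_rev: float, target: float) -> float:
    """Tax needed so that, post-tax, user_rev / (user_rev + ad_rev) >= target.
    Solving user_rev / (user_rev + ad_allowed) >= target for ad_allowed gives
    ad_allowed = user_rev * (1 - target) / target."""
    ad_allowed = user_rev * (1 - target) / target
    return max(0.0, ad_rev - ad_allowed)
```

For example, with a 25% target, $200 of user revenue, and $800 of ad revenue, $200 of ad revenue would be taxed away, leaving a $600 ad / $200 user split that exactly meets the target.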

Can the platforms really shift to user revenue?  Zuckerberg has been a skeptic, but none of the big platforms has tried seriously.  When the platforms realize they must make this change, they will figure out how, even if it trims their exorbitant margins.
Users increasingly recognize that they must pay for digital services.  A system of reverse metering of ads and data use would be a powerful start.  Existing efforts that hint at the ultimate potential of better models include crowdfunding, membership models, and cooperatives. Other emerging variations promise to be adaptive to large populations of users with diverse value perceptions and abilities to pay.

A growing focus on customer value would move us back towards leveraging a proven great strength of humanity — the deeply cooperative behavior of traditional markets.

A simple mandate requiring internet platforms to generate a growing percentage of revenue from users will not cure all ills. But it is the simplest way to drive a fundamental shift toward better corporate behavior.

Coda, 7/24/19:

Since the original publication of this article, this issue has become even more timely, as the FTC and Justice Department begin deep investigation into the Internet giants. 

  • There is growing consensus that there is a fundamental problem with the ad- and data-based business model
  • There is also growing consensus that we must move beyond the narrow theory of antitrust that says there can be no "harm" in a free service that does not raise direct costs to consumers (but does raise indirect costs to them and limits competition). 
  • But the targeted strategies for forcing a fundamental shift in business models outlined here are still not widely known or considered
  • It primarily focuses on these business model issues and regulatory strategies (including the auto emissions model described here), and how FairPay offers an innovative strategy that has gained recognition for how it can generate user revenue in equitable ways that do not prevent a service like Facebook or Google from being affordable by all, even those with limited ability to pay.
  • It also links to a body of work "On the deeper issues of social media and digital democracy." That includes Google-like algorithms for getting smarter about the wisdom of crowds, and structural strategies for regulation based on the specific architecture of the platforms and how power should be modularized (much as smart modularization was applied to regulating the Bell System and enabling the decades of robust innovation we now enjoy.)

Tuesday, May 21, 2019

Reisman in Techonomy: Business Growth is Actually Good for People. Here’s Why.

My 4th piece in Techonomy was published today:
Business Growth is Actually Good for People. Here’s Why.

We cannot—and should not—stop growing. Sensible people know the real task is to channel growth to serve human ends.
My opening and closing:
Douglas Rushkoff gave a characteristically provocative talk last week at Techonomy NYC – which provoked me to disagree strongly...
...Rushkoff delivers a powerful message on the need to re-center on human values. But his message would be more effective if it acknowledged the power of technology and growth instead of indiscriminately railing against it. We need a reawakening of human capitalism — and a Manhattan Project to get tech back on track. That will make us a better team human.

Friday, April 26, 2019

"Non-Binary" means "Non-Binary"...Mostly...Right?

A "gender non-binary female?"

Seeing the interview of Asia Kate Dillon on Late Night with Seth Meyers, I was struck by one statement -- one that suggests an insidious problem of binary thinking that pervades many of the current ills in our society. Dillon (who prefers the pronoun "they") reported gaining insight into their gender identity from the character description for their role in Billions as "a gender non-binary female," saying: “I just didn’t understand how those words could exist next to each other.”

What struck me was the questioning of how these words could be sensibly put together. Why would anyone ask that question? As I thought more, I saw this as a perfect example of a much broader problem.

The curse of binary thinking

The question I ask is at a semantic level: how could that not be obvious? (regardless of one's views on gender identity). Doesn't the issue arise only if one interprets "female" in a binary way? I would have thought that one who identifies as "non-binary" would see beyond this conceptual trap of simplistic duality. Wouldn't a non-binary person be more non-binary in their thinking? Wouldn't it be obvious to a non-binary thinker that this is a matter of being non-binary and female, not of being non-binary or female?

It seems that binary thinking is so ingrained in our culture that we default to black and white readings when it is clear that most of life (outside of pure mathematics) is painted in shades of gray. It is common to think of some "females" as masculine, and some "males" as effeminate. Some view such terms as pejorative, but what is the reality? Why wouldn't a person presumed at birth to be female (for the usual blend of biological reasons) be able to be non-binary in a multitude of ways? Even biologically, "female" has a multitude of aspects, which usually align, but sometimes diverge. Clearly, as to behavior in general and as to sexual orientation, there seems to be a spectrum, with many degrees in each of many dimensions (some barely noticed, some hard to miss).

So I write about this as an object lesson in how deeply the binary, black-or-white thinking of our culture distorts our view of a more deeply nuanced reality. Even one who sees themself as non-binary has a hard time escaping binary thinking. Why can the word "female" not be appropriate for a non-binary person (as we all are to some degree) -- one who has birth attributes that were ostensibly female? Isn't it just a fallacy of binary thinking to think it is not OK for a non-binary person to also be female? That a female cannot be non-binary?

I write about this because I have long taken issue with binary thinking. This is not meant to criticize this actor in any way, but to comment broadly on the prevalence of this kind of blindness and absolutism in our culture. It is to empathize with those who suffer from being thought of in binary ways that fail to recognize the non-binary richness of life -- and those who suffer from thinking of themselves in a binary way. That is a harm that occurs to most of us at one time or another. As Whitman said:
Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)
The bigger picture

Gender is just one of a host of current crises of binary thinking that lead to extreme polarization of all kinds. Political divides. The more irreconcilable divide over whether leadership must serve all of their constituency, or just those who support the leader, right or wrong. Fake news. Free speech and truth on campus vs. censorship for some zone of safety for binary thinkers. Trickle-down versus progressive economics. Capitalism versus socialism. Immigrant versus native. One race or religion versus another. Isn't the recent focus of some on "intersectionality" just an extension of binary thinking to multiple binary dimensions? Thinking in terms of binary categories (rather than category spectrums) distances and demonizes the other, blinding us to how much common ground there is.

The Tao symbol (which appears elsewhere in this blog) is a perfect illustration of my point, and an age-old symbol of the non-dualistic thinking central to some Asian traditions (I just noticed the irony of the actor's first name as I wrote this sentence!). We have black and white intertwined, and the dot of contrast indicates that each contains its opposite. That suggests that all females have some male in them (however large or small, and in whatever aspect) and all males have some female in them (much as some males would think that a blood libel).

Things are not black or white, but black and white. And even if nearly black or white in a single dimension, single dimensions rarely matter to the larger picture of any issue. I think we should all make a real effort to remind ourselves that that is the case for almost every issue of importance.


(I do not profess to be "woke," but do very much try to be "awakened" and accepting of the wondrous richness of our world. My focus here is on binary and non-binary thinking, itself. I use gender identity as the example only because of this statement that struck me. If I misunderstand or express my ideas inartfully in this fraught domain, that is not my intent. I hope it is taken in the spirit of finding greater understanding that is intended.)

(In that vein, I accept that there may be complex issues specific to gender and identity that go counter to my semantic argument in some respects. But my non-binary view is that that broader truth of non-duality still over-arches. And in an awakened non-binary world, the current last word can never be known to be the future last word.)

(See also the short post just below on the theme of this blog.)

A Note on the Theme of this Blog: Everything is Deeply Intertwingled -- and, Hopefully, Becoming Smartly Intertwingled

The next post (to appear just above) is the first to indulge my desire to comment more broadly on the theme that "everything is deeply intertwingled" (as Ted Nelson put it). That has always been a core of my worldview and has been increasingly weaving into my posts -- especially on the problems of how we deal with "truth" in our social media, which I say should move toward being more smartly intertwingled.

That post, and some that will follow, move far out of my professional expertise, but I see all of my ideas as deeply intertwingled. (I have always been intrigued by epistemology, the theory of knowledge: what can we know and how do we know it?) This topic provided the impetus to act on my latent intent to broaden the scope of this blog to these larger issues that are now creating so much dysfunction in our society.

Beyond Ted Nelson's classic statement and his diagram (above, from Computer Lib/Dream Machines) the symbol that most elegantly conveys this perspective is the Tao symbol, which appears in many of my posts. It shows the yin and yang of female and male as intertwingling symbols of those elemental opposites — and the version with the dots in each intertwingled portion, suggests that each element also contains its opposite (a further level of intertwingling).

[Update 6/13/19, on changing the blog header:]

This blog was formerly known as “Reisman on User-Centered Media,” with the description:
On developing media platforms that are user-centered – open and adaptable to the user's needs and desires – and that earn profit from the value they create for users ...and as tools for augmenting human intellect and enlightened democracy.
That continues to be a major theme.

Tuesday, April 09, 2019

A Regulatory Framework for the Internet (with Thanks to Ben Thompson)

Summarizing Ben Thompson of Stratechery, plus my own targeted proposals

"A Regulatory Framework for the Internet," Ben Thompson's masterly framework, should be required reading for all regulators, as well as anyone concerned about tech and society. (Stratechery is one of the best tech newsletters, well worth the subscription price, but this article is freely accessible.)

I hope you will read Ben's full article, but here are some points that I find especially important, followed by the suggestions I posted on his forum (which is not publicly accessible).

Part I -- Highlights from Ben's Framework (emphasis added)

Opening with the UK government White Paper calling for increased regulation of tech companies, Ben quotes MIT Tech Review about the alarm it raised among privacy campaigners, who "fear that the way it is implemented could easily lead to censorship for users of social networks rather than curbing the excesses of the networks themselves."

Ben identifies three clear questions that make regulation problematic:
First, what content should be regulated, if any, and by whom?
Second, what is a viable way to monitor the content generated on these platforms?
Third, how can privacy, competition, and free expression be preserved?

Exploring the viral spread of the Christchurch hate crime video, he gets to a key issue:
What is critical to note, though, is that it is not a direct leap from “pre-Internet” to the Internet as we experience it today. The terrorist in Christchurch didn’t set up a server to livestream video from his phone; rather, he used Facebook’s built-in functionality. And, when it came to the video’s spread, the culprit was not email or message boards, but social media generally. To put it another way, to have spread that video on the Internet would be possible but difficult; to spread it on social media was trivial.
The core issue is business models: to set up a live video streaming server is somewhat challenging, particularly if you are not technically inclined, and it costs money. More expensive still are the bandwidth costs of actually reaching a significant number of people. Large social media sites like Facebook or YouTube, though, are happy to bear those costs in service of a larger goal: building their advertising businesses.

Expanding on business models, he describes the ad-based platforms as "Super Aggregators:"
The key differentiator of Super Aggregators is that they have three-sided markets: users, content providers (which may include users!), and advertisers. Both content providers and advertisers want the user’s attention, and the latter are willing to pay for it. This leads to a beautiful business model from the perspective of a Super Aggregator:
Content providers provide content for free, facilitated by the Super Aggregator
Users view that content, and provide their own content, facilitated by the Super Aggregator
Advertisers can reach the exact users they want, paying the Super Aggregator 
...Moreover, this arrangement allows Super Aggregators to be relatively unconcerned with what exactly flows across their network: advertisers simply want eyeballs, and the revenue from serving them pays for the infrastructure to not only accommodate users but also give content suppliers the tools to provide whatever sort of content those users may want.
...while they would surely like to avoid PR black-eyes, what they like even more is the limitless supply of attention and content that comes from making it easier for anyone anywhere to upload and view content of any type.
...Note how much different this is than a traditional customer-supplier relationship, even one mediated by a market-maker... When users pay they have power; when users and those who pay are distinct, as is the case with these advertising-supported Super Aggregators, the power of persuasion — that is, the power of the market — is absent.
He then distinguishes the three types of "free" relevant to the Internet, and how they differ:
“Free as in speech” means the freedom or right to do something
“Free as in beer” means that you get something for free without any additional responsibility
“Free as in puppy” means that you get something for free, but the longterm costs are substantial
...The question that should be asked, though, is if preserving “free as in speech” should also mean preserving “free as in beer.”
Platforms that are paid for by their users are "regulated" by the operation of market forces, but those that are ad-supported are not, and so need external regulation.

Ben concludes that:
...platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious.
 This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects
...from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.
That leads to Ben's figure that brings these ideas together, and delineates critical distinctions:

I agree completely, and build on that with my two proposals for highly targeted regulation...

Part II -- My proposals, as commented on in the Stratechery Forum
(including some minor edits and portions that were abridged to meet character limits):

Elegant model, beautifully explained! Should be required reading for all regulators.

FIRST:  Fix the business model! I suggest taking this model farther, and mandating that the "free beer" ad-based model be ratcheted away once a service reaches some critical level of scale. That would solve the problem -- and address your concerns about competition.

Why don't we regulate to fix the root cause? The root cause of Facebook's abuse of trust is its business model, and until we change that, its motivations will always be opposed to consumer and public trust.

Here is a simple way to force change, without over-engineering the details of the remedy. Requiring a growing percentage of revenue from users is the simplest way to drive a fundamental shift toward better corporate behavior. Others have suggested paying for data, and I suggest this is most readily done in the form of credits against a user service fee. Mandating that a target level of revenue (above a certain level) come from users could drive Facebook to offer such data credits, as a way to meet their user revenue target (even if most users pay nothing beyond that credit). We will not motivate trust until the user becomes the customer, and not the product.

There is a regulatory method that has already proven its success with a similarly challenging problem – forcing automakers to increase the fuel efficiency of the cars they make. The US government has for years mandated staged multi-year increases in Corporate Average Fuel Economy (CAFE). This does not mandate how to fix things. It mandates a limit on the systems that have been shown to cause harm. Facebook and YouTube can determine how best to achieve that. Require that X% of the revenue come from users rather than advertisers. Government can monitor progress, with a timetable for ratcheting up the percentage. (This should apply only above some amount of revenues, to facilitate competition.)

With that motivation, Facebook and YouTube can be driven to shift from advertising revenue to customer revenue. That may seem difficult, but only for lack of trying. Credits for attention and data are just a start. If we move in that direction, we can be less dependent on other, more problematic, kinds of regulation.

This regulatory strategy is outlined in To Regulate Facebook and Google, Turn Users Into Customers (in Techonomy). More on why that is important in Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data. (And some suggestions on more effective ways to obtain user revenue: Information Wants to be Free; Consumers May Want to Pay, also in Techonomy.)

SECOND: Downrank dissemination, don't censor speech! Your points about limiting user expression, and that the real issue is harmful spreading on social media, are also vitally important.

I say the real issue is:
  1.  Not: rules for what can and cannot be said – speech is a protected right
  2.  But rather: rules for what statements are seen by whom – distribution (how feeds are filtered and presented) is not a protected right.
The value of a social media service should be to disseminate the good, not the bad. (That is why we talk about “filter bubbles” – failures of value-based filtering.)

I suggest Facebook and YouTube should have little role in deciding what can be said (other than to enforce government standards of free speech and clearly prohibited speech to whatever extent practical).  What matters is who that speech is distributed to, and the network has full control of that.  Strong downranking is a sensible and practical alternative to removal -- far more effective and nuanced, and far less problematic.

I have written about new ways to use PageRank-like algorithms to determine what to downrank or uprank – “rate the raters and weight the ratings.”
  • Facebook can have a fairly free hand in downranking objectionable speech
  • They can apply community standards to what they promote -- to any number of communities, each with varying standards.
  • They could also enable open filtering, so users/communities can choose someone else’s algorithm (or set their preferences in any algorithm).
  • With smart filtering, the spread of harmful speech can be throttled before it does much harm.
  • The “augmented wisdom of the crowd” can do that very effectively, on Internet scale, in real time.
  • No pre-emptive, exclusionary, censorship technique is as effective at scale -- nor as protective of free speech rights or community standards.
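The "rate the raters and weight the ratings" idea can be sketched as a simple fixed-point computation: item scores are reputation-weighted averages of user ratings, and each rater's reputation is in turn set by how well they agree with the emerging consensus. This toy (function name and update rule are my own illustration, not any platform's actual algorithm) shows the core loop:

```python
# Toy sketch of "rate the raters and weight the ratings."
# Item scores are reputation-weighted averages of ratings; each rater's
# reputation is then updated by agreement with the consensus. Illustrative only.

def rank(ratings, iterations=20):
    """ratings: dict mapping rater -> {item: rating in [0, 1]}.
    Returns (item_scores, rater_reputations)."""
    reputation = {rater: 1.0 for rater in ratings}
    scores = {}
    for _ in range(iterations):
        totals, weights = {}, {}
        for rater, items in ratings.items():
            for item, r in items.items():
                totals[item] = totals.get(item, 0.0) + reputation[rater] * r
                weights[item] = weights.get(item, 0.0) + reputation[rater]
        scores = {item: totals[item] / weights[item] for item in totals}
        # "Rate the raters": closer agreement with consensus earns more weight.
        for rater, items in ratings.items():
            err = sum(abs(r - scores[i]) for i, r in items.items()) / len(items)
            reputation[rater] = max(0.01, 1.0 - err)
    return scores, reputation
```

In this sketch, a rater who consistently disagrees with the consensus (a troll or bot farm, say) sees their influence on rankings shrink toward zero, without anyone's speech being removed.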
That approach is addressed at some length, with further discussion, in other posts on this blog (where “fake news” is meant to include anything objectionable to some community).
More of my thinking on these issues is summarized in this Open Letter to Influencers Concerned About Facebook and Other Platforms

Friday, March 15, 2019

My Latest Articles in Techonomy

Here is the growing list of my articles published in Techonomy on FairPay, business, media, and society:

Despite his supposedly "Privacy-Focused Vision," it seems clear that Zuckerberg will not voluntarily go where he must. So we must force him to make needed changes in the core Facebook business model, one way or another.   MORE
The seductive idea that we can enjoy free internet services -- if we just view ads and turn over our data -- has been recognized to be “the original sin” of the Internet. Requiring internet platforms to generate revenue from users could drive better corporate behavior.  MORE
Current approaches to dynamic pricing are consumer-hostile. The author argues that there's a better way to build win-win relationships in the digital space that use cooperation, trust, and transparency to nurture customer lifetime value.  MORE

Thursday, January 31, 2019

Zucked -- Roger McNamee's Wake Up Call ...And Beyond

Zucked: Waking Up to the Facebook Catastrophe is an authoritative and frightening call to arms -- but I was disappointed that author Roger McNamee did not address some of the suggestions for remedies that I shared with him last June (posted as An Open Letter to Influencers Concerned About Facebook and Other Platforms).

Here are brief comments on this excellent book, and highlights of what I would add. Many recognize the problem with the advertising-based business model, but few seem to be serious about finding creative ways to solve it. It is not yet proven that my suggestions will work quite as I envision, but the deeper need is to get people thinking about finding and testing more win-win solutions. His book makes a powerful case for why this is urgently needed.

McNamee's urgent call to action

McNamee offers the perspective of a powerful Facebook and industry insider. A prominent tech VC, he was an early investor and mentor to Zuckerberg -- the advisor who suggested that he not sell to Yahoo, and who introduced him to Sandberg. He was alarmed in early-mid 2016 by early evidence of manipulation affecting the UK and US elections, but found that Zuckerberg and Sandberg were unwilling to recognize and act on his concerns. As he became more concerned, he joined with others to raise awareness of this issue and work to bring about needed change.

He provides a rich summary of how we got here, most of the issues we now face, and the many prominent voices for remedial action. He addresses the business issues and the broader questions of governance, democracy, and public policy. He tells us: “A dystopian technology future overran our lives before we were ready.” (As also quoted in the sharply favorable NY Times review.)

It's the business model, stupid!

McNamee adds his authoritative voice to the many observers who have concluded that the business model that serves advertisers to enable consumers to obtain "free" services distorts incentives, causing businesses to optimize for advertisers, not for users:
Without a change in incentives, we should expect the platforms to introduce new technologies that enhance their already-pervasive surveillance capabilities...the financial incentives of advertising business models guarantee that persuasion will always be the default goal of every design.
He goes on to suggest:
The most effective path would be for users to force change. Users have leverage...
The second path is government intervention. Normally I would approach regulation with extreme reluctance, but the ongoing damage to democracy, public health, privacy, and competition justifies extraordinary measures. The first step would be to address the design and business model failures that make internet platforms vulnerable to exploitation. ...Facebook and Google have failed at self-regulation.
My suggestions on the business model, and related regulatory action

This is where I have novel suggestions -- outlined on my FairPayZone blog, and communicated to McNamee last June -- that have not gotten wide attention, and are ignored in Zucked. These are at two levels.

The auto emissions regulatory strategy. This is a simple, proven regulatory approach for forcing Facebook (and similar platforms) to shift from advertising-based revenue to user-based revenue. That would fundamentally shift incentives from user manipulation to user value.

If Facebook or other consumer platforms fail to do that voluntarily, this simple regulatory strategy could force it -- in a market-driven way. The government could simply mandate that X% of their revenue must come from their users -- with a timetable for gradually increasing X.  This is how auto emissions mandates work -- don't mandate how to fix things, just mandate a measurable result, and let the business figure out how best to achieve that. Since reverse-metered ads (with a specific credit against user fees) would count as a form of user revenue, that would provide an immediate incentive for Facebook to provide such compensation -- and to begin developing other forms of user revenue. This strategy is outlined in Privacy AND Innovation ...NOT Oligopoly -- A Market Solution to a Market Problem.

The deeper shift to user revenue models. Creative strategies can enable Facebook (and other businesses) to shift from advertising revenue to become substantially user-funded. Zuckerberg has
thrown up his hands at finding a better way: "I don’t think the ad model is going to go away, because I think fundamentally, it’s important to have a service like this that everyone in the world can use, and the only way to do that is to have it be very cheap or free."

Who Should Pay the Piper for Facebook? (& the rest) explains this new business model architecture -- with a focus on how it can be applied to let Facebook be "cheap or free" for those who get limited value and have limited ability to pay, yet still be paid for, at fair levels, by those who get more value and are willing and able to pay for it. This architecture, called FairPay, has gained recognition for operationalizing a solution that businesses can begin to apply now.
  • A reverse meter for ads and data. This FairPay architecture still allows for advertising to continue to defray the cost of service, but on a more selective, opt-in basis -- by applying a "reverse meter" that credits the value of user attention and data against each user's service fees -- at agreed upon terms and rates. That shifts the game from the advertiser being the customer of the platform, to the advertiser being the customer of the user (facilitated by the platform). In that way advertising is carried only if done in a manner that is acceptable to the user. That aligns the incentives of the user, the advertiser, and the platform. Others have proposed similar directions, but I take it farther, in ways that Facebook could act on now.
  • A consumer-value-first model for user-revenue. Reverse metering is a good starting place for re-aligning incentives, but Facebook can go much deeper, to transform how its business operates. The simplest introduction to the transformative twist of the FairPay strategy is in my Techonomy article, Information Wants to be Free; Consumers May Want to Pay. (It has also been outlined in Harvard Business Review, and more recently in the Journal of Revenue and Pricing Management.) The details will depend on context, and will need testing to fully develop and refine over time, but the principles are clear and well supported.

    This involves ways to mass-customize pricing of Facebook, to be "cheap or free" where appropriate, and to set customized fair prices for each user who obtains real value and can be enticed to pay for that. That is adaptive to individual usage and value -- and eliminates the risk of having to pay when the value actually obtained did not warrant that. That aligns incentives for transparency, trust, and co-creation of real value for each user. Behavioral economics has shown that people are willing to pay -- even voluntarily -- when they see good reason to help sustain the creation of value that they actually want and receive. We just need business models that understand and build on that.
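To make the reverse-meter idea above concrete, here is a minimal sketch of how a monthly bill might be computed under it. This is my own illustration, not FairPay's or Facebook's actual mechanics; the fee and credit rates are invented for the example. Each user's service fee is offset by credits for the ads and data uses they opted into, at agreed rates, never going below zero -- so light users stay "cheap or free" while ad-free heavy users pay in money instead of attention.

```python
# Illustrative "reverse meter": the advertiser is effectively the customer
# of the user, paying down the user's service fee at agreed rates.
# All rates and fees below are hypothetical.

def monthly_bill(base_fee: float, ads_viewed: int, credit_per_ad: float,
                 data_uses: int, credit_per_data_use: float) -> float:
    """Service fee after crediting the agreed value of the user's
    attention (ads viewed) and data (permitted uses); never negative."""
    credits = ads_viewed * credit_per_ad + data_uses * credit_per_data_use
    return max(0.0, base_fee - credits)

# A user who opts in to plenty of ads ends up paying nothing...
print(monthly_bill(base_fee=5.00, ads_viewed=600, credit_per_ad=0.01,
                   data_uses=0, credit_per_data_use=0.02))
# ...while a user who opts out of all ads and data use pays the full fee.
print(monthly_bill(base_fee=5.00, ads_viewed=0, credit_per_ad=0.01,
                   data_uses=0, credit_per_data_use=0.02))
```

The alignment of incentives falls out of the arithmetic: the platform only earns ad credits the user has agreed to, so intrusive, unwanted advertising no longer pays.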
Bottom line. Whatever the details, unless Facebook shifts direction on its own to aggressively move toward user payments -- which now seems unlikely -- regulatory pressure will be needed to force that (just as with auto emissions). A user revolt might force similar changes as well, but the problem is far too urgent to wait and see.

The broader call -- augmenting the wisdom of crowds

Shifting to a user-revenue-based business model will change incentives and drive significant progress to remedy many of the problems that McNamee and many others have raised. McNamee provides a wide-ranging overview of many of those problems and most of the initiatives that promise to help resolve them, but there, too, I offer suggestions that have not gained attention.

Most fundamental is the power of social media platforms to shape collective intelligence. Many have come to see that, while technology has great power to augment human intelligence, applied badly, it can have the opposite effect of making us more stupid. We need to steer hard for a more positive direction, now that we see how dangerous it is to take good results for granted, and how easily things can go bad. McNamee observes that "We...need to address these problems the old fashioned way, by talking to one another and finding common ground." Effective social media design can help us do that.

Another body of my work relates to how to design social media feeds and filtering algorithms to do just that, as explained in The Augmented Wisdom of Crowds:  Rate the Raters and Weight the Ratings:
  • The core issue is one of trust and authority -- it is hard to get consistent agreement in any broad population on who should be trusted or taken as an authority, no matter what their established credentials or reputation. Who decides what is fake news? What I suggested is that this is the same problem that has been made manageable by getting smarter about the wisdom of crowds -- much as Google's PageRank algorithm beat out Yahoo and AltaVista at making search engines effective at finding content that is relevant and useful.

    As explained further in that post, the essence of the method is to "rate the raters" -- and to weight those ratings accordingly. Working at Web scale, no rater's authority can be relied on without drawing on the judgement of the crowd. Furthermore, simple equal voting does not fully reflect the wisdom of the crowd -- there is deeper wisdom about those votes to be drawn from the crowd.

    Some of the crowd are more equal than others. Deciding who is more equal, and whose vote should be weighted more heavily can be determined by how people rate the raters -- and how those raters are rated -- and so on. Those ratings are not universal, but depend on the context: the domain and the community -- and the current intent or task of the user. Each of us wants to see what is most relevant, useful, appealing, or eye-opening -- for us -- and perhaps with different balances at different times. Computer intelligence can distill those recursive, context-dependent ratings, to augment human wisdom.
  • A major complicating issue is that of biased assimilation. The perverse truth seems to be that "balanced information may actually inflame extreme views." This is all too clear in the mirror worlds of pro-Trump and anti-Trump factions and their media favorites like Fox, CNN, and MSNBC. Each side thinks the other is unhinged or even evil, and layers a vicious cycle of distrust around anything they say. It seems one of the few promising counters to this vicious cycle is what Cass Sunstein referred to as surprising validators: people one usually gives credence to, but who suggest one's view on a particular issue might be wrong. An example of a surprising validator was the "Confession of an Anti-GMO Activist." This item is readily identifiable as a "turncoat" opinion that might be influential for many, but smart algorithms can find similar items that are more subtle, and tied to less prominent people who may be known and respected by a particular user. There is an opportunity for electronic media services to exploit this insight that "what matters most may be not what is said, but who, exactly, is saying it."
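The recursive "rate the raters" idea lends itself to a toy sketch. The following is purely my illustration of the principle (the trust matrix and function names are invented, and this is not any deployed algorithm): rater-to-rater trust scores are iterated, PageRank-style, toward a fixed point, and the resulting authority weights are then used to weight the raters' votes on an item.

```python
# Toy sketch of "rate the raters and weight the ratings": authority is
# derived recursively from who trusts whom, then used to weight votes.

def rater_weights(trust, damping=0.85, iters=50):
    """trust[i][j] = how much rater i trusts rater j.
    Returns PageRank-style authority weights summing to 1."""
    n = len(trust)
    # Normalize each rater's outgoing trust to sum to 1.
    norm = []
    for row in trust:
        s = sum(row)
        norm.append([t / s for t in row] if s else [1.0 / n] * n)
    w = [1.0 / n] * n
    for _ in range(iters):  # power iteration to a fixed point
        w = [(1 - damping) / n +
             damping * sum(w[i] * norm[i][j] for i in range(n))
             for j in range(n)]
    return w

def weighted_score(ratings, weights):
    """Score an item by raters' votes, weighted by derived authority.
    `ratings` maps rater index -> that rater's score for the item."""
    total = sum(weights[i] for i in ratings)
    return sum(weights[i] * r for i, r in ratings.items()) / total

# Three raters: 0 and 1 trust each other; rater 2 trusts rater 0 but
# receives no trust from anyone.
trust = [[0, 1, 0],
         [1, 0, 0],
         [1, 0, 0]]
w = rater_weights(trust)
# Rater 2's dissenting vote counts for little, so the score stays near 5.
print(weighted_score({0: 5, 1: 5, 2: 1}, w))
```

As in the post, the same machinery generalizes to context: one could maintain separate trust matrices per domain or community, so that authority in, say, medicine does not transfer wholesale to politics.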
If and when Facebook and other platforms really care about delivering value to their users (and our larger society), they will develop this kind of ability to augment the wisdom of the crowd. (Similar large-scale ranking technology is already proven in uses for advertising and Google search.) Our enlightened, democratic civilization will disintegrate or thrive, depending on whether they do that.

The facts of the facts. One important principle which I think McNamee misunderstands (as do many), is his critique that "To Facebook, facts are not absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement." Yes, the problem is that the drive for engagement distorts our drive for the facts -- but the problem is not that "To Facebook, facts are not absolute." As I explain in The Tao of Fake News, facts are not absolute -- we cannot rely on expert authorities to define absolute truth. Human knowledge emerges from an adaptive process of collective truth-seeking by successive approximation and the application of collective wisdom. It is always contingent on that, not absolute. That is how scholarship and science and democratic government work, that is what the psychology of cognition and knowledge demonstrates, and that is what effective social media can help all of us do better.

Other monopoly platform excesses - openness and interoperability

McNamee provides a good survey of many of the problems of monopoly (or oligopoly) power in the platforms, and some of the regulatory and antitrust remedies that are needed to restore the transparency, openness, and flexibility and market-driven incentives needed for healthy innovation. These include user ownership of their data and metadata, portability of the users' social graphs to promote competition, and audits and transparency of algorithms.

I have addressed similar issues, and go beyond McNamee's suggestions to emphasize the need for openness and interoperability of competing and complementary services -- see Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm. This draws on my early career experience watching antitrust regulatory actions relating to AT&T (in the Bell System days), IBM (in the mainframe era), and Microsoft (in the early Internet browser wars).

The wake up call

There are many prominent voices shouting wake up calls. See the partial list at the bottom of An Open Letter to Influencers Concerned About Facebook and Other Platforms, and McNamee's Bibliographic Essay at the end of Zucked (excellent, except for the omission that I address here).

All are pointing in much the same direction. We all need to do what we can to focus the powers that be -- and the general population -- to understand and address this problem. The time to turn this rudderless ship around is dangerously short, and effective action to set a better direction and steer for it has barely begun. We have already sailed blithely into killer icebergs, and many more are ahead.

This is cross-posted from both of my blogs, FairPayZone.com and Reisman on User-Centered Media, which delve further into these issues.

Monday, January 14, 2019

The Real Crisis: The War to Save Democracy in 2020 Has Begun - Journalism Needs to Mobilize

This is one of the more complete of many prescriptions for journalists to manage this real crisis (and deflate the fake ones) -- but it is just another cry against the storm with no concrete plan for action.

The imminent crisis

The 2016 election in the US, similar problems around the world, "fake news," and disinformation have surfaced as crucial problems. Many are at work on solutions, but most will take time to be effective. We do not have that time.

In the US, and for the rest of the world, the most imminent threat is that Trump will use the press as he did in 2016 -- and still does. He still orchestrates a Trump-centric media circus. As Bruni points out, we need to restore meaningful conversation on the issues, whatever the policies at issue. 

Even most of those who support many of Trump's policies are dismayed at the dysfunction of this media circus (entertaining as it may be) -- this is not a partisan issue, but one all reasonable people can support.

Journalism needs to rally around best practices for containing this real and present danger now. Define them, follow them, and call out those who do not. To do that, leading journalists, publishers and J-schools should organize a Manhattan Project to unify and act now! If you do not do it right now, you may never have another chance.

Such a project should be inclusive, drawing in all who share the core values of intelligent discourse. 

Are you journalists, or cheerleaders (and profiteers) in a flame war?

Bruni's starter list

It is a long op-ed, well worth reading, and no doubt there are other important practices and tactics, but let's begin with some extracts from Bruni's op-ed (see the original for attribution of quotes):
“Pocahontas” won’t be lonely for long. …how much heed will we in the media pay to this stupidity? …That’s a specific question but also an overarching one — about the degree to which we’ll let him set the terms of the 2020 presidential campaign, about our appetite for antics versus substance, and about whether we’ll repeat the mistakes that we made in 2016 
Trump tortures us. Deliberately, yes, but I’m referring to the ways in which he keeps yanking our gaze his way.
“When you cover this as spectacle…what’s lost is context, perspective and depth. And when you cover this as spectacle, he is the star.” 
Trump was and is a perverse gift to the mainstream, establishment media, a magnet for eyeballs at a juncture when we were struggling economically and desperately needed one. Just present him as the high-wire act and car crash that he is; the audience gorges on it. But readers…[are] starved of information about the fraudulence of his supposed populism and the toll of his incompetence. And he wins. He doesn’t hate the media...He uses us.
Regarding their fitness for office, they [Trump and Clinton] were treated identically? In retrospect, that’s madness. It should have been in real time, too.
We need to do something else, too, which is to recognize that Trump now has an actual record in office and to discuss that with as much energy as we do his damned Twitter feed.
“Instead of covering Trump’s tweets on a live, breaking basis, just cover them in the last five minutes of a news show. They’re presidential statements, but we can balance them.”
We can also allow his challengers to talk about themselves as much as they do about him. …“It was deeply unfair… the question was always, ‘What’s your reaction to what Trump just said?,’ there’s no way to drive your own message.”
“It got to the point where it was one outrage after another, and we just moved on each time” …Instead, we should hold on to the most outrageous, unconscionable moments. We should pause there…. We can’t privilege the incremental over …the enduring. It lets Trump off the hook.
"…if you put enough experts on arguing… people will watch. And that’s what we’re doing with our politics. The media is not using their strength, their franchise, to elevate and illuminate the conversation. They’re just getting you all jazzed up about the game.”
But the lure of less demanding labors …is always there, especially because readers and viewers…reward it. What they lap up …is Trump the Baby…the Buffoon…the Bully… And that’s on them.
The real story of Trump isn’t his amorality and outrageousness. It’s Americans’ receptiveness to that. 
“Trump basically ran on blowing the whole thing up.…It’s critically important that we find ways to get at what it is people imagine government should be doing and…really look at what kind of leadership we need.”
A Manhattan Project for Journalism - the war to extinguish the flame war

When America became the "arsenal for democracy" in the battle against fascism, we mobilized for conventional warfare -- and with a massive Manhattan Project to change the game with an A-bomb. The best minds were assembled, tested many alternative strategies, and then focused the best resources in the world on what worked.

Trump has conquered the presidency with an artful flame war. Many have written very intelligently about the issues and strategies that Bruni raises. There is no silver bullet (or A-bomb), but there is a suite of strategies that promises to contain the nonsense -- but only if widely understood and practiced. No one person or organization has the knowledge or ability to do this alone. Bruni's points (and similar suggestions from many others) can be distilled, formalized, and supplemented to provide a guide to best practices, both at a high level and in the guts of how journalism is practiced. Our best minds in journalism must come together and quickly define these best practices, and then we must see to it that all understand them and work to enforce them.

If we have clear guidelines, we can call out and marginalize those who fan the flames - whether Trump and his supporters, or others.

Fair process is not partisan - the real challenge for "mainstream media"

Such a focus on process is not partisan, but simply a matter of a fairness to all citizens, and to the spirit of enlightened democracy that made America great. To the extent Trump or others (on either side of any issue) make responsible policy proposals and argue them responsibly, this would treat them fairly. To the extent they do not, it marginalizes them fairly.

Obviously our current government will not make this happen - no new "fairness doctrine" can be expected now. Journalists are uniquely positioned to step up to their responsibility. It must be a voluntary effort. Some prominent pundits and outlets will not cooperate, for political or business reasons. But a truly responsible "mainstream media" can work together to become a powerful force for reason. If we do not all hang together to fight the flame war, we will all hang separately.

Real Journalists of the World, Unite!

(If any broad effort to do this already exists, please let me know.)

(I am not a journalist, but one focus of my career has been on how technology can augment our collaborative intelligence. Journalism in this age is a form of such augmentation -- or more lately, de-augmentation.  I am ready to contribute to this effort as I can.)

Originally posted on my User-Centered Media blog.

Thursday, January 03, 2019

2019 New Year's Resolution: Let's Work Together to Invent a Better 2020!

My forecast for 2019: The best way to predict the future is to invent it -- let's work together on inventing a better 2020!

We face two over-arching and related challenges, one in the world of technology, and one in the larger world of enlightened democratic society.

At the broadest level, 2019 promises to be perhaps the worst and most traumatic year in recent American history. My point is not one of politics or policy (I bite my tongue), but of our basic processes of democratic society -- how we all work together to understand the world and make decisions. We now see all too well how much harm technology has done to that -- not by itself, but as an amplifier of the worst in us.

Within that world of technology, many have come to realize that we have taken a wrong turn in building vast and deeply influential infrastructures that are sustained by advertising. That perverts the profit incentive from creating value for we the people, to exploiting us to profit advertisers. That drive for engagement and targeting inherently conflicts with the creation of real value for users and society. We seem to not even be looking very hard for any solution beyond band-aids that barely alter 1) the perverse incentives of advertising, and 2) the failing zero-sum economics of artificial scarcity.

We seem to be at a loss for how to solve these problems at either level. I suggest that is simply a failure of will, imagination, and experimentation that we can all help rectify. Many prominent thought leaders have said much the same. I list some of them, and offer creative suggestions in An Open Letter to Influencers Concerned About Facebook and Other Platforms. I hope you will read it, as well as the related material it links to.

My suggestions are more specific, actionable, and practical. That letter summarizes and links to ideas I have been developing for many years, but which have taken on new urgency. They are well-supported, but as yet unproven in their full form. I can't be sure that my solutions will work, but there seems to be growing consensus that the problems are real, even if few have suggested any actionable path to solving them. (I have been a successful inventor and futurist for many decades. I have often been wrong about details of my answers, but have rarely been far wrong about problems and issues. Very smart and well-informed people think I am on the right track here.)

But whether or not I am right about the solutions, we all have to make it a priority to find, test, and refine the best solutions we can to confront these critical problems.

Still, few in technology, business, or government have turned from business as usual to rise to the urgent challenges we now face -- and even those who alert us to these problems seem to have few concrete strategies for effective action.

Please consider the urgency and importance of these issues at both levels, see if my suggestions or those of others resonate -- and add your voice in those directions -- or work to suggest better directions.

If we do not begin to make real progress in 2019, we may face a very dark 2020 and beyond.

If we do begin to turn this ship around, we can recharge the great promise of technology to augment our intellect and our society, and to create a new economics of abundance.

This is cross posted from both of my blogs, FairPayZone.com and Reisman on User-Centered Media, which delve further into these issues.

Friday, November 23, 2018

"As We Will Think" -- The Legacy of Ted Nelson, Original Visionary of the Web

From Nelson, As We Will Think (1972 version)
The vision of the Web in 1945

From a recent email from the Editor of The Atlantic:
In July 1945, Vannevar Bush, then the director of the U.S. Office of Scientific Research and Development—the military’s R&D lab, the predecessor to DARPA—published an essay in The Atlantic that would become one of the seminal pieces of technology literature of the 20th century.
Entitled “As We May Think,” the essay laid out a vision for a new kind of relationship between human and machine. Bush introduced an idea he called the memex: a sprawling, shared store of humankind’s knowledge that could be used for social good, not destruction. In the following years, preeminent technologists—including Doug Engelbart, whose work eventually led to the invention of the mouse, the word processor, and hyperlinks; and Sir Tim Berners-Lee, inventor of the World Wide Web—cited “As We May Think” as inspiration for their work.
The seminal re-visioning in the 1960's

It seems ironic that even The Atlantic is neglecting Ted Nelson's role as an equally seminal visionary of the Web -- especially given that one of his early works was an explicit call to re-center on and realize Bush's vision, a work that plays off Bush's title as "As We Will Think."

I tweeted back to the Atlantic:
Some further tweets raised the question whether that was online somewhere, and it seems to be only in The Wayback Machine (archive.org), as a 1972 version -- and a poor copy at that, missing the original figures.

It happens that I have a better copy of the 1972 version, as well as another version that is labelled as being from 1968. So I am posting scans of both versions online (links below). I include some comments on provenance and on my recent email interchange with Nelson below (both of which lead me to believe the 1968 date is correct). But first...

Why Nelson matters 

A fuller explanation of why Nelson matters is in my post from a few years ago, Digital Camelot - The Once and Future Web of Engelbart and Nelson, but here I caption its core message:

If you care about modern culture and how technology is shaping it, this is worth thinking about -- A powerful eulogy for where the Web might have gone, and still may someday, and the friendship of the two people most responsible for envisioning the Web*  --  Ted Nelson's eulogy for his friend Doug Engelbart, as reported by John Markoff in The Times -- with Nelson's inimitable flair.

As Markoff says:
Theodor Holm Nelson, who coined the term hypertext, has been a thorn in the side of the computing establishment for more than a half century. Last week, in an encomium to his friend Douglas Engelbart, he took his critique to Shakespearean levels. It deserves a wider audience. 
Dr. Engelbart and Ted Nelson became acquaintances at the dawn of the modern computing era. They had envisioned and invented the computing that we have come to take for granted.
I first encountered both of them in 1969, and what I saw set the direction for my life's work. Engelbart gave "The Mother of All Demos" in 1968 (and I first saw him give a follow-up the next year, and read most of his work). Nelson dreamed of hypertext and hypermedia, wrote papers on what he called "hypertext" in the mid-'60s, and published the highly influential, Whole Earth Catalog-style "Computer Lib / Dream Machines" in 1974.

As Nelson laments, both received a degree of recognition, but both were marginalized. Powerful as it may be, expediency took the Web in more limiting directions.

Their ideas remain profound and forward looking. Anyone who really cares about the future of media, intellect, and culture, and how information technology can augment that, should consider their work. Just because the Web took a turn to expediency in the past does not mean it will not realize its richer potential in the future. (One hint of that is noted in the next section [of that post].) ...

Nelson's insight

Ted's iconoclastic and visionary style is apparent from the opening of his "As We Will Think" (1968 version):
Bush was right. His famous article is, however, generally misinterpreted, for it has little to do with "information retrieval" as prosecuted today. Bush rejected indexing and discussed instead new forms of interwoven documents. 
It is possible that Bush's vision will be fulfilled substantially as he saw it, and that information retrieval systems of the kinds now popular will see less use than anticipated.
As the technological base has changed, we must recast his thesis slightly, and regard Bush's "memex" as three things: the personal presentation, editing and file console; a digital feeder network for the delivery of documents in full-text digital form; and new types of documents, or hypertexts, which are especially worth receiving and sending in this manner.
In addition, we also consider a likely design for specialist hypertexts, and discuss problems of their publication. 
Twenty-three years ago, in a widely acclaimed article, Vannevar Bush made certain predictions about the way we of the future would handle written information (1). We are not yet doing so. Yet the Bush article is often cited as the historical beginning, or as a technological watershed, of the field of information retrieval. It is frequently cited without interpretation (2,3). Although some commentators have said its predictions were improbable (4), in general its precepts have been ignored by acclamation...
In hindsight, it is obvious that Ted was right about Bush's vision. The memex presaged wonders far beyond the mundane notion of "information retrieval" as generally understood in the 1960s (even if not all of Bush, Engelbart, and Nelson's visions have been embodied in the Web).

For an interesting update on that theme, see this 2016 Quartz article and its reference to Werner Herzog's interview of Ted for his film Lo and Behold, and a short video of Ted expanding on what he spoke of. Both provide nice live demos of the "parallel textface," much as shown in the above image from Ted's 1972 article. This also explains Ted's ideas of "transclusion" of elements from one work into another, as a rich kind of mashup that retains the identity of the original elements. He explains how that can support creator rights to what is linked in, and micropayment-based payment/compensation models.* I have often heard people speculate about some of these exciting ideas, thinking they were new (and sometimes that they invented them). Few realize that Ted described all this in the '60s and '70s.

Those trying to invent this "deeply intertwingled" future might want to stand on Ted's shoulders. Ted may not have had the entrepreneurial genius of Steve Jobs, but his inventive vision is second to none.

The Atlantic might want to talk to him...

1968? really? -- it seems yes

Nelson actually wrote previously about his ideas for hypertext (in the mid-1960s), so the exact date of this particular paper may not be of great importance, but its earliest provenance is a bit of a puzzle.

I recently corresponded with Ted by email, and he was intrigued by these finds -- happy to have the full 1972 version and puzzled by the "1968" version. He said he did not recall a formal publication from that date, but that he might have provided a version at the ACS Annual Meeting then.

Both papers are from my hard copy file, just as they appear in the scans now posted (with the hand annotations apparently being mine, from when I first read them). I believe I had ordered them from my company library and that the label "ACS Annual Meeting 1968" was the citation information with which I ordered that copy. (I presumed that referred to the American Chemical Society, which seemed a bit far afield, but Ted did have a wide range.) So it seemed to remain a puzzle.

However as I was drafting this post, I noticed that the 1968 version says "Twenty-three years ago..." while the 1972 version says "Twenty-seven years ago..." That would seem to be compelling evidence that my "1968" version actually was from that year.

To add more personal history, I had the pleasure of meeting with Ted in 1970 to explore assisting in an experimental hypertext implementation under Claude Kagan's direction, as part of my masters degree fellowship work at the AT&T Western Electric Research Center. That project did not materialize, but chatting with Ted about hypertext was one of the most memorable hours of my career.

Early works by Ted Nelson from my collection

The following are items by Ted that may not be generally available online. I collected these from 1969 onward, and plan to post scans of all of them as I get time (after checking whether comparable copies are already accessible elsewhere).
  • As We Will Think ("ACS Annual Meeting 1968" version)
    (unable to confirm citation and date)
  • As We Will Think ("Online 72 Conference Proceedings" version)
    (fuller than the scan at archive.org, includes original figures/photos)
  • “Hypertext Editing System,” published by Brown University on 5/6/69 for the Spring Joint Computer Conference, 5/14-16/69
    (My first exposure to hypertext. I clicked a link and saw the future. It was at the IBM booth, running on a "mid-sized" IBM 360/50 mainframe with a 2250 vector graphics workstation equipped with a light-pen. Coincidentally, I knew Andy van Dam and some of the developers from my time at Brown in the years just before.)
  • Short Computer Lib “$5 First Edition” ©’73, on typewriter paper hand-duplexed, 12 pages including Dream Machines flip-side.
  • A File Structure for the Changing and the Indeterminate, ACM National Conference 1965
  • Xanadu Draft Brochure, 27 November 1969
  • Computer Decisions 9/70  -- No More Teacher’s Dirty Looks
  • Hypertext Note 0-9, various dates in ‘67
  • Decision/Creativity Systems dated 19 July 1970
  • Hypertexts 20 Mar 70
  • Getting it out of our system, in Schechter, ‘67
  • A 14 December 1970 PDP10 teletype printout of Ted's “final report” for Claude Kagan of Western Electric (maybe incomplete with related fragments) -- as noted above, I met with Ted around that time to discuss assisting in this project while in my master's degree fellowship program. (I suspect this was not distributed beyond Ted, Claude, and me.)
(Copies will also be placed on Google Drive.)

*An innovation of my own is relevant to this use of micropayments. Micropayments have a long history of enthusiasm and failure. The problem is that micropayments add up to macropayments, resulting in the shock of a nasty surprise when the bill is presented, or the fear of such a surprise. My short answer to how to fix that is to make the micropayments variable, including some form of volume discounts and price caps, and to provide forgiveness when the value received is not satisfactory. Details of how to do that are in this recent post on my other blog: "The Case Against Micropayments" -- From Fear and Surprise to The Comfy Chair.
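The volume-discount-plus-cap part of that idea is simple arithmetic, sketched below. This is my own illustration with invented prices and thresholds (the fuller proposal, including forgiveness, is in the post linked above): per-item charges shrink once usage passes a threshold, and stop entirely at a monthly cap, so micro-charges can never add up to a surprise macro-bill.

```python
# Illustrative variable micropayments: full price up to a threshold,
# discounted price beyond it, and a hard monthly cap. All numbers are
# hypothetical; the "forgiveness" element of the proposal is not shown.

def monthly_charge(items_used: int, base_price: float = 0.10,
                   discount_threshold: int = 50,
                   discounted_price: float = 0.05,
                   monthly_cap: float = 10.00) -> float:
    """Total monthly charge for `items_used` micro-purchases."""
    full = min(items_used, discount_threshold) * base_price
    extra = max(0, items_used - discount_threshold) * discounted_price
    return min(monthly_cap, full + extra)

print(monthly_charge(10))    # light use: 10 items at full price
print(monthly_charge(80))    # 50 at full price + 30 discounted
print(monthly_charge(500))   # heavy use: the cap kicks in
```

The cap is what removes the "fear of surprise": the worst case is known in advance, which converts an open-ended meter into something closer to a bounded subscription.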

Historical Clarification: I should note that my original tweet, above, was imprecise in saying "bringing Bush to the attention of those you name." I gather that Engelbart arrived at the basic idea of hypertext independently, and only later became aware of Ted's work. However, my understanding is that Tim Berners-Lee was influenced by Ted, as referenced in his proposal for the WWW.