Friday, April 26, 2019

"Non-Binary" means "Non-Binary"...Mostly...Right?

A "gender non-binary female?"

Seeing the interview of Asia Kate Dillon on Late Night with Seth Meyers, I was struck by one statement -- one that suggests an insidious problem of binary thinking that pervades many of the current ills in our society. Dillon (who prefers the pronoun "they") reported gaining insight into their gender identity from the character description for their role in Billions as "a gender non-binary female," saying: “I just didn’t understand how those words could exist next to each other.”

What struck me was the questioning of how these words could be sensibly put together. Why would anyone ask that question? As I thought about it more, I saw this as a perfect example of a much broader problem.

The curse of binary thinking

The question I ask is at a semantic level, regardless of one's views on gender identity: how could that not be obvious? Doesn't the issue arise only if one interprets "female" in a binary way? I would have thought that one who identifies as "non-binary" would see beyond this conceptual trap of simplistic duality. Wouldn't a non-binary person be more non-binary in their thinking? Wouldn't it be obvious to a non-binary thinker that this is a matter of being non-binary and female, not of being non-binary or female?

It seems that binary thinking is so ingrained in our culture that we default to black and white readings when it is clear that most of life (outside of pure mathematics) is painted in shades of gray. It is common to think of some "females" as masculine, and some "males" as effeminate. Some view such terms as pejorative, but what is the reality? Why wouldn't a person presumed at birth to be female (for the usual blend of biological reasons) be able to be non-binary in a multitude of ways? Even biologically, "female" has a multitude of aspects, which usually align but sometimes diverge. Clearly, as to behavior in general and as to sexual orientation, there seems to be a spectrum, with many degrees in each of many dimensions (some barely noticed, some hard to miss).

So I write about this as an object lesson in how deeply the binary, black-or-white thinking of our culture distorts our view of a more deeply nuanced reality. Even one who sees themself as non-binary has a hard time escaping binary thinking. Why can the word "female" not be appropriate for a non-binary person (as we all are to some degree) -- one who has birth attributes that were ostensibly female? Isn't it just a fallacy of binary thinking to think it is not OK for a non-binary person to also be female? That a female cannot be non-binary?

I write about this because I have long taken issue with binary thinking. This is not meant to criticize this actor in any way, but to point to the broad prevalence of this kind of blindness and absolutism in our culture. It is to empathize with those who suffer from being thought of in binary ways that fail to recognize the non-binary richness of life -- and with those who suffer from thinking of themselves in a binary way. That is a harm that occurs to most of us at one time or another. As Whitman said:
Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)
The bigger picture

Gender is just one of a host of current crises of binary thinking that lead to extreme polarization of all kinds. Political divides. The even more irreconcilable divide over whether leadership must serve all of its constituency, or just those who support the leader, right or wrong. Fake news. Free speech and truth on campus versus censorship for some zone of safety for binary thinkers. Trickle-down versus progressive economics. Capitalism versus socialism. Immigrant versus native. One race or religion versus another. Isn't the recent focus of some on "intersectionality" just an extension of binary thinking to multiple binary dimensions? Thinking in terms of binary categories (rather than category spectrums) distances and demonizes the other, blinding us to how much common ground there is.

The Tao symbol (which appears elsewhere in this blog) is a perfect illustration of my point, and an age-old symbol of the non-dualistic thinking central to some Asian traditions (I just noticed the irony of the actor's first name as I wrote this sentence!). We have black and white intertwined, and the dot of contrast indicates that each contains its opposite. That suggests that all females have some male in them (however large or small, and in whatever aspect) and all males have some female in them (much as some males would consider that a blood libel).

Things are not black or white, but black and white. And even if nearly black or white in a single dimension, single dimensions rarely matter to the larger picture of any issue. I think we should all make a real effort to remind ourselves that that is the case for almost every issue of importance.

---

(I do not profess to be "woke," but do very much try to be "awakened" and accepting of the wondrous richness of our world. My focus here is on binary and non-binary thinking itself. I use gender identity as the example only because of this statement that struck me. If I misunderstand or express my ideas inartfully in this fraught domain, that is not my intent. I hope it is taken in the intended spirit of seeking greater understanding.)

(In that vein, I accept that there may be complex issues specific to gender and identity that run counter to my semantic argument in some respects. But my non-binary view is that the broader truth of non-duality still overarches. And in an awakened non-binary world, the current last word can never be known to be the future last word.)

(See also the short post just below on the theme of this blog.)

A Note on the Theme of this Blog: Everything is Deeply Intertwingled -- and, Hopefully, Becoming Smartly Intertwingled

The next post (to appear just above) is the first to indulge my desire to comment more broadly on the theme that "everything is deeply intertwingled" (as Ted Nelson put it). That has always been a core of my worldview and has been increasingly woven into my posts -- especially on the problems of how we deal with "truth" in our social media. I say we should move toward making things more smartly intertwingled.

That post, and some that will follow, move far out of my professional expertise, but I see all of my ideas as deeply intertwingled. (I have always been intrigued by epistemology, the theory of knowledge: what can we know, and how do we know it?) This current topic provided the impetus to act on my latent intent to broaden the scope of this blog to these larger issues that are now creating so much dysfunction in our society.

Beyond Ted Nelson's classic statement and his diagram (above, from Computer Lib/Dream Machines), the symbol that most elegantly conveys this perspective is the Tao symbol, which appears in many of my posts. It shows the yin and yang of female and male as intertwingling symbols of those elemental opposites -- and the version with the dots in each intertwingled portion suggests that each element also contains its opposite (a further level of intertwingling).

[Update 6/13/19, on changing the blog header:]

This blog was formerly known as “Reisman on User-Centered Media,” with the description:
On developing media platforms that are user-centered – open and adaptable to the user's needs and desires – and that earn profit from the value they create for users ...and as tools for augmenting human intellect and enlightened democracy.
That continues to be a major theme.

Tuesday, April 09, 2019

A Regulatory Framework for the Internet (with Thanks to Ben Thompson)

Summarizing Ben Thompson of Stratechery, plus my own targeted proposals

"A Regulatory Framework for the Internet," Ben Thompson's masterly framework, should be required reading for all regulators, as well as anyone concerned about tech and society. (Stratechery is one of the best tech newsletters, well worth the subscription price, but this article is freely accessible.)

I hope you will read Ben's full article, but here are some points that I find especially important, followed by the suggestions I posted on his forum (which is not publicly accessible).

Part I -- Highlights from Ben's Framework (emphasis added)

Opening with the UK government White Paper calling for increased regulation of tech companies, Ben quotes MIT Tech Review about the alarm it raised among privacy campaigners, who "fear that the way it is implemented could easily lead to censorship for users of social networks rather than curbing the excesses of the networks themselves."

Ben identifies three clear questions that make regulation problematic:
First, what content should be regulated, if any, and by whom?
Second, what is a viable way to monitor the content generated on these platforms?
Third, how can privacy, competition, and free expression be preserved?

Exploring the viral spread of the Christchurch hate crime video, he gets to a key issue:
What is critical to note, though, is that it is not a direct leap from “pre-Internet” to the Internet as we experience it today. The terrorist in Christchurch didn’t set up a server to livestream video from his phone; rather, he used Facebook’s built-in functionality. And, when it came to the video’s spread, the culprit was not email or message boards, but social media generally. To put it another way, to have spread that video on the Internet would be possible but difficult; to spread it on social media was trivial.
The core issue is business models: to set up a live video streaming server is somewhat challenging, particularly if you are not technically inclined, and it costs money. More expensive still are the bandwidth costs of actually reaching a significant number of people. Large social media sites like Facebook or YouTube, though, are happy to bear those costs in service of a larger goal: building their advertising businesses.

Expanding on business models, he describes the ad-based platforms as "Super Aggregators:"
The key differentiator of Super Aggregators is that they have three-sided markets: users, content providers (which may include users!), and advertisers. Both content providers and advertisers want the user’s attention, and the latter are willing to pay for it. This leads to a beautiful business model from the perspective of a Super Aggregator:
Content providers provide content for free, facilitated by the Super Aggregator
Users view that content, and provide their own content, facilitated by the Super Aggregator
Advertisers can reach the exact users they want, paying the Super Aggregator 
...Moreover, this arrangement allows Super Aggregators to be relatively unconcerned with what exactly flows across their network: advertisers simply want eyeballs, and the revenue from serving them pays for the infrastructure to not only accommodate users but also give content suppliers the tools to provide whatever sort of content those users may want.
...while they would surely like to avoid PR black-eyes, what they like even more is the limitless supply of attention and content that comes from making it easier for anyone anywhere to upload and view content of any type.
...Note how much different this is than a traditional customer-supplier relationship, even one mediated by a market-maker... When users pay they have power; when users and those who pay are distinct, as is the case with these advertising-supported Super Aggregators, the power of persuasion — that is, the power of the market — is absent.
He then distinguishes the three types of "free" relevant to the Internet, and how they differ:
“Free as in speech” means the freedom or right to do something
“Free as in beer” means that you get something for free without any additional responsibility
“Free as in puppy” means that you get something for free, but the longterm costs are substantial
...The question that should be asked, though, is if preserving “free as in speech” should also mean preserving “free as in beer.”
Platforms that are paid for by their users are "regulated" by the operation of market forces, but those that are ad-supported are not, and so need external regulation.

Ben concludes that:
...platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious.
This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects.
...from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.
That leads to Ben's figure that brings these ideas together, and delineates critical distinctions:


I agree completely, and build on that with my two proposals for highly targeted regulation...

Part II -- My proposals, as commented on in the Stratechery Forum 
(including some minor edits and portions that were abridged to meet character limits):

Elegant model, beautifully explained! Should be required reading for all regulators.

FIRST: Fix the business model! I suggest taking this model further, and mandating that the "free beer" ad-based model be ratcheted away once a service reaches some critical level of scale. That would solve the problem -- and address your concerns about competition.

Why don't we regulate to fix the root cause? The root cause of Facebook's abuse of trust is its business model, and until we change that, its motivations will always be opposed to consumer and public trust.

Here is a simple way to force change, without over-engineering the details of the remedy. Requiring a growing percentage of revenue from users is the simplest way to drive a fundamental shift toward better corporate behavior. Others have suggested paying for data, and I suggest this is most readily done in the form of credits against a user service fee. Mandating that a target level of revenue (above a certain level) come from users could drive Facebook to offer such data credits, as a way to meet their user revenue target (even if most users pay nothing beyond that credit). We will not motivate trust until the user becomes the customer, and not the product.

There is a regulatory method that has already proven its success with a similarly challenging problem – forcing automakers to increase the fuel efficiency of the cars they make. The US government has for years mandated staged, multi-year increases in Corporate Average Fuel Economy (CAFE). This does not mandate how to fix things. It mandates a limit on the systems that have been shown to cause harm. Facebook and YouTube can determine how best to achieve that. Require that X% of revenue come from users rather than advertisers. Government can monitor progress, with a timetable for ratcheting up the percentage. (This should apply only above some amount of revenue, to facilitate competition.)

With that motivation, Facebook and YouTube can be driven to shift from advertising revenue to customer revenue. That may seem difficult, but only for lack of trying. Credits for attention and data are just a start. If we move in that direction, we can be less dependent on other, more problematic, kinds of regulation.
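To make the mechanics concrete, here is a minimal Python sketch of how such a revenue ratchet might be checked. It is purely illustrative: the schedule, the revenue floor, and the treatment of data/attention credits as user revenue are my own assumptions for the example, not part of any proposed standard.

```python
# Illustrative sketch only (not from the original post or forum comment).
# It models the CAFE-style ratchet described above: above a revenue floor,
# a growing share of revenue must come from users (fees plus data credits).

from dataclasses import dataclass

# Hypothetical ratchet schedule: minimum user-revenue share in force from each year.
RATCHET_SCHEDULE = {2021: 0.10, 2023: 0.25, 2025: 0.50}

# The mandate would apply only above some revenue floor, to avoid burdening
# smaller competitors (the figure is arbitrary, for illustration).
REVENUE_FLOOR = 1_000_000_000  # USD


@dataclass
class PlatformRevenue:
    ad_revenue: float     # revenue paid by advertisers
    user_fees: float      # revenue paid directly by users
    data_credits: float   # credits granted to users for data/attention,
                          # counted here toward the user-revenue target

    @property
    def total(self) -> float:
        return self.ad_revenue + self.user_fees + self.data_credits

    @property
    def user_share(self) -> float:
        user_revenue = self.user_fees + self.data_credits
        return user_revenue / self.total if self.total else 0.0


def required_user_share(year: int) -> float:
    """Return the minimum user-revenue share in force for a given year."""
    applicable = [pct for y, pct in RATCHET_SCHEDULE.items() if y <= year]
    return max(applicable, default=0.0)


def is_compliant(rev: PlatformRevenue, year: int) -> bool:
    """Platforms below the revenue floor are exempt; others must meet the target."""
    if rev.total < REVENUE_FLOOR:
        return True
    return rev.user_share >= required_user_share(year)


if __name__ == "__main__":
    platform = PlatformRevenue(ad_revenue=60e9, user_fees=2e9, data_credits=5e9)
    print(f"user share: {platform.user_share:.1%}, "
          f"target: {required_user_share(2023):.1%}, "
          f"compliant: {is_compliant(platform, 2023)}")
```

The point of the sketch is the shape of the rule, not the numbers: regulators set the target and the timetable, and the platform decides how to meet them (data credits, subscriptions, or anything else).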

This regulatory strategy is outlined in To Regulate Facebook and Google, Turn Users Into Customers (in Techonomy). More on why that is important is in Reverse the Biz Model! -- Undo the Faustian Bargain for Ads and Data. (And for some suggestions on more effective ways to obtain user revenue, see Information Wants to be Free; Consumers May Want to Pay, also in Techonomy.)

SECOND: Downrank dissemination, don't censor speech! Your points about limiting user expression, and that the real issue is harmful spreading on social media, are also vitally important.

I say the real issue is:
  1.  Not: rules for what can and cannot be said – speech is a protected right
  2.  But rather: rules for what statements are seen by whom – distribution (how feeds are filtered and presented) is not a protected right.
The value of a social media service should be to disseminate the good, not the bad. (That is why we talk about “filter bubbles” – failures of value-based filtering.)

I suggest Facebook and YouTube should have little role in deciding what can be said (other than to enforce government standards of free speech and clearly prohibited speech to whatever extent practical).  What matters is who that speech is distributed to, and the network has full control of that.  Strong downranking is a sensible and practical alternative to removal -- far more effective and nuanced, and far less problematic.

I have written about new ways to use PageRank-like algorithms to determine what to downrank or uprank – "rate the raters and weight the ratings" (a minimal sketch of the idea follows the list below).
  • Facebook can have a fairly free hand in downranking objectionable speech
  • They can apply community standards to what they promote -- to any number of communities, each with varying standards.
  • They could also enable open filtering, so users/communities can choose someone else's algorithm (or set their preferences in any algorithm).
  • With smart filtering, the spread of harmful speech can be throttled before it does much harm.
  • The “augmented wisdom of the crowd” can do that very effectively, on Internet scale, in real time.
  • No pre-emptive, exclusionary, censorship technique is as effective at scale -- nor as protective of free speech rights or community standards.
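For illustration only, here is a toy Python sketch of the "rate the raters and weight the ratings" idea mentioned above: rater reputations and item scores are estimated together, iteratively, and low-scoring items are downranked rather than removed. The data, the agreement-based update rule, and the threshold are assumptions for this example, not the specific algorithm described in the posts referenced below.

```python
# Toy sketch of reputation-weighted rating, in the spirit of "rate the raters
# and weight the ratings." All data and parameters are illustrative.

from collections import defaultdict

# ratings[item] = list of (rater, rating in [0, 1]); hypothetical sample data
ratings = {
    "post_a": [("alice", 0.9), ("bob", 0.8), ("troll", 0.1)],
    "post_b": [("alice", 0.2), ("bob", 0.3), ("troll", 0.9)],
}

def rate_the_raters(ratings, iterations=10):
    raters = {r for votes in ratings.values() for r, _ in votes}
    weight = {r: 1.0 for r in raters}   # start with equal reputations
    scores = {}
    for _ in range(iterations):
        # Weight the ratings: item score = reputation-weighted mean rating.
        for item, votes in ratings.items():
            total_w = sum(weight[r] for r, _ in votes)
            scores[item] = sum(weight[r] * v for r, v in votes) / total_w
        # Rate the raters: reputation rises with agreement with the consensus.
        agreement = defaultdict(list)
        for item, votes in ratings.items():
            for r, v in votes:
                agreement[r].append(1.0 - abs(v - scores[item]))
        weight = {r: sum(a) / len(a) for r, a in agreement.items()}
    return scores, weight

scores, weights = rate_the_raters(ratings)
DOWNRANK_THRESHOLD = 0.4   # arbitrary cutoff for the example
for item, score in scores.items():
    action = "downrank" if score < DOWNRANK_THRESHOLD else "promote"
    print(f"{item}: score={score:.2f} -> {action}")
print("rater weights:", {r: round(w, 2) for r, w in weights.items()})
```

A real system would also need to defend against collusion and cold-start problems, but the sketch shows how ranking, rather than removal, can carry community standards.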
That approach is addressed at some length in these posts (where “fake news” is meant to include anything objectionable to some community):
…and some further discussion on that:
---
More of my thinking on these issues is summarized in this Open Letter to Influencers Concerned About Facebook and Other Platforms

---
See the Selected Items tab for more on this theme.