A better way to understand climate change and global warming may be to focus less on quantifying the direction of changes, and more on quantifying the volatility of weather extremes of all kinds -- temperature, precipitation, humidity, wind, storms, etc.
Many have noted that "global warming" is not just a matter of warming, and that better messaging might help us focus on solving the problem. Tom Friedman has referred to it as "global weirding," saying, "The frequency, intensity and cost of extreme weather events all increase. The wets get wetter, the hots get hotter, the dry periods get drier, the snows get heavier, the hurricanes get stronger. Weather is too complex to attribute any single event to climate change, but the fact that extreme weather events are becoming more frequent and more expensive — especially in a world of crowded cities like Houston and New Orleans — is indisputable." Brad Plumer made similar points about the need for more understandable messaging.
I have been suggesting that we track and report a “Weather-VIX” (WVIX) -- much as financial markets track a "Volatility IndeX" (VIX). In financial markets, the VIX is often understood as a "fear index." For weather, it might be seen as a "disruption index."
A Weather-VIX volatility index would be a complementary metric to average temperature trends. By tracking the volatility of weather from day to day, wouldn't we see a very significant increase in the volatility of temperatures, precipitation, and wind speed? Unlike the small changes in average temperature, volatility trends might be far more dramatic, and much less easily dismissed as just a natural fluctuation. Refocusing on volatility would also defuse the silly argument that extremes of cold refute global "warming" -- of course the warming is not uniformly "global," and is not consistent at any given time. Weather will not be volatile at every place at every time, but tracking volatility in each region would give clearer evidence of increasing overall volatility, and of how that varies from region to region.
This WVIX could also be tied to the monetary costs of extremes in both directions -- “WVIX-cost.”
Even if only based on data for the last hundred years or so (and only in locations with good data), we might see that violent and erratic weather is already accelerating to increasingly costly levels. Insurance companies will be among the first to see and quantify this as an actuarial cost, but with a simple WVIX index, we will all be able to understand this effect more clearly.
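As a sketch of what such an index might compute, a minimal WVIX could be the rolling standard deviation of day-to-day changes in a weather reading, much as financial volatility is computed from daily returns. The function name, the window length, and the sample data below are my illustrative choices, not an established definition:

```python
import statistics

def weather_vix(daily_values, window=30):
    """Toy 'WVIX': rolling standard deviation of day-to-day changes.

    daily_values: a sequence of daily readings (e.g. temperatures in C).
    Returns one volatility value per full window of changes.
    """
    # Day-to-day changes, analogous to daily returns in the financial VIX.
    changes = [b - a for a, b in zip(daily_values, daily_values[1:])]
    # Rolling standard deviation over the chosen window.
    return [statistics.stdev(changes[i:i + window])
            for i in range(len(changes) - window + 1)]

# Two 60-day series with the same average temperature but very
# different day-to-day volatility:
calm = [20 + 0.1 * (i % 2) for i in range(60)]  # tiny daily wobble
wild = [20 + 5.0 * (i % 2) for i in range(60)]  # large daily swings
print(weather_vix(calm)[-1] < weather_vix(wild)[-1])  # prints True
```

The point of the toy example is that the two series have identical averages, so an average-temperature trend would not distinguish them at all, while even this crude volatility index separates them immediately. A real index would aggregate several variables (temperature, precipitation, wind) per region, and a "WVIX-cost" variant would weight each excursion by its monetary damage.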
"Everything is deeply intertwingled" – Ted Nelson’s insight that inspired the Web. People can be smarter about dealing with that - in media services, social media, AI, and society and life more broadly. Technology can augment that -- most notably as the Augmented Wisdom of Crowds (see the Selected Items tab below). The former name, “Reisman on User-Centered Media” still applies: open and adaptable to each user's needs and desires – and sharing in the value they create for users.
Sunday, May 31, 2020
The "Weather-VIX" -- A Volatility IndeX for Weather?
Posted by Richard Reisman (Sociotechnical network/systems thinker, visionary, inventor, pioneer | Author: @TechPolicyPress, FairPay | Nonresident Senior Fellow, Foundation for American Innovation) at 3:01 PM | 10 comments


Labels:
climate,
climate change,
global warming,
intertwingled,
storms,
VIX,
volatility,
volatility index,
weather,
WVIX
Monday, May 18, 2020
The Pandemic Reminds Us "Everything is Deeply Intertwingled" – We Need Better Logics for That
A bat catches a cold in Wuhan, and weeks later the whole world coughs. America and China battle a trade war, and then there is a shortage of PPE and ventilator parts from China. Poor neighborhoods suffer high death rates because of poor health, but even celebrities and heads of state go into ICUs. The economy craters, and we argue over relief to businesses versus workers based on which is more disposed to misuse what they might be given. Health officials say flatten the curve, financiers say reopen, and corporations say they don’t dare reopen without testing. The Federal government is too polarized to fix much of anything, and has forgotten its real job of governing by consensus.
Modern technologies of global connection -- both physical and virtual -- make the pandemic emerge in weeks instead of years, and make all the butterfly effects far more complex. That is our new curse, but also our new blessing. We have global travel and supply chains, global communications and media networks -- a global village composed of local villages. Techies moved fast and broke society, and now discourse seems too polarized to fix it.
All of these effects are driven by market forces -- however regulated. Marketplaces of goods and services and marketplaces of ideas. These marketplaces are driven by complex interplays of top-down structure and bottom-up emergence from billions of actors -- and systems of actors.
Technology has made these forces more dynamic and turbulent, but technology can enable smarter and better-regulated marketplaces -- if we re-focus. We cannot undo this onrushing dynamic -- we need to get smarter about how we use technology to help us go forward.
The pandemic may be the kick in the ass we need to reform society over a wide range of domains and levels. Seeing the commonalities can help us capture a new synergy. If we rise to that challenge, the future will be bright. If we fail, it will be dark. Many see that, but few focus on the root causes.
Peter Drucker said “The greatest danger in times of turbulence is not the turbulence, it is to act with yesterday’s logic.” Two new logics can help us correct the failures of our current logics.
Ever-growing intertwingularity

The symbolic circle of the Tao reminds us that truth is never entirely black or white, but shades of gray that depend on the light we view it in and the perspective we view it from. Just how much is subject to argument, discovery, and rediscovery, as reality emerges. This is age-old, but it is more urgent than ever that we come to grips with it. 2020 will mark a turning point in human history.
For decades our world and our markets have been increasingly stressed, even as we seemed to be progressing. Tensions of nationality, race, ideology, religion, economics, technology, and governance are raging. “Things fall apart; the centre cannot hold /…The best lack all conviction, while the worst / Are full of passionate intensity.” It is now urgent that we re-center more wisely on our better convictions.
The Enlightenment has run aground because those who saw the light and had the benefits did not pay enough attention to sharing that. Liberals turned away from “the deplorables” instead of caring for and raising them up. Capitalists extracted short-term profits and enriched themselves with stock buybacks -- exploiting workers instead of empowering them. Factions and political parties fought zero-sum struggles to control the existing pie instead of engaging in win-win cooperation to create and share a larger pie.
The Chinese ideograph for crisis is composed from the characters for danger plus opportunity. Many retreat in fear of the danger and seek to throw blame and erect walls, but wiser heads look to the opportunity. Most see opportunity in narrow domains, but some look to the big picture. We now face an urgent and historic opportunity to refocus on a more enlightened and productive kind of cooperation across the full range of issues.
Those who see and work on these problems in particular domains of concern and expertise can unite in spirit and vision with those in other domains. We can forge a new Age of Enlightenment – a Reformation of The Enlightenment. An awakening of interconnection and cooperative spirit is emerging. Our challenge is to synergize it. Some elements:
- Economic and health insecurity for some leads to insecurity for all. A safety net is needed.
- Market systems need slack to respond to black swan events. “Just-in-time” and “lean” are efficient only when not overstressed. Global supply chains need resilience and redundancy. Too much slack and safety drain our wealth and will, but too little leads to disaster.
- Moving fast and breaking things can break things that cannot be fixed. Experience can blind us, but inexperience can kill.
- Power among local, state, national, and global government must be properly balanced and adaptable to stress. Power and wealth must be shared fairly among people, factions, and nations, or those left wanting will throw rocks at the crystal palace. The resurgence of nationalism, factionalism, and the crisis of disinformation are symptoms of perceived unfairness. Government that is too small is just as bad as too big.
Our modern, high-tech world is far too complex for purely top-down or bottom-up management and governance -- we need a smart and adaptive blend. That requires openness, transparency, trust, and fairness, so even when there is disagreement, there is a common sense of reasonableness and good spirit.
New Logics for Intertwingularity
My recent work has focused on two new ways to deal better with this growing complexity. These new logics do not just exhort people to be better and wiser; they better align interests so that virtue is rewarded.
One relates to failures of our marketplace of ideas – especially our social media and other collaborative systems. Computer-augmented human collaboration first emerged in the 1960s, and was used for disaster preparedness (natural and nuclear). It progressed slowly until the Web made it far more powerful and accessible to consumers, but we failed to direct those social media systems to serve us well. Struggling to find a business model, they hit on advertising. We now recognize that to be “the original sin of the Internet” because it misdirects our platforms to serve advertisers and not users. Algorithms can help augment human intelligence to make us smarter collectively -- instead of making us stupider, as social media now do. Systems that elucidate nuance, context, and perspective can empower us to draw out and augment the wisdom of crowds (as explained in detail on this blog) to deal more smartly with our deeply intertwingled world. That could drive a new Age of Enlightenment in which technology augments the marketplace of ideas in the ways that we have always relied on to mediate consent – an emergent mix of top-down guidance and bottom-up emergence that can lead to new, yet natural, forms of digital democracy.
The other relates to failures of our economic marketplace – how we can shift from the short-term, zero-sum logic of extractive mass-market capitalism to more long-term, win-win forms of market cooperation. That can restore the emergent, distributed, and human logic of traditional markets that Adam Smith saw as socially beneficial -- before modern mass-marketing alienated producers from consumers and lost sight of broader human values. Our digital economy now enables new ways to shift from fighting over a current pie to cooperating to co-create a larger pie -- and to share it fairly. That logic can empower a reformation of market capitalism from within that could actually be more profitable, and thus self-motivating. We can apply the power of computer-mediated marketplaces to let businesses and consumers negotiate at a human level -- about the values they care about, how to co-create that value, and how to share in the benefits. We have begun to think in terms of customer journeys, but have been trying to fit customers into segments or personas. Instead, we need to design for segments of one that are custom-fit to each customer, to build relationships with each customer on human terms.
These two logics are interrelated: a flawed economic logic for consumer platform services has been built on advertising revenue (“the Internet’s original sin”). That has warped incentives to favor engagement with junk content that sells ads, rather than the value to users of quality content. An improved logic for value will create incentives for our platforms to facilitate a logic for a better marketplace of ideas.
The brief descriptions of these new logics may sound like just more exhortations, but the posts that they link to provide details of operational mechanisms -- and evidence that their elements have proven effective. These new combinations of elements can quickly become second nature, because they draw on and re-channel natural behaviors that promise to make them highly self-reinforcing.
Many allied visions for better logics of emergence are finding new relevance in this era of crisis. We have only to join together and rise to the occasion. We say that “we are all in this together” – we need to open our minds to really think that way, and to work with new logics and “choice architectures” that make that natural. With better logics, our instinctive behaviors can once again synergize to flow in increasingly enlightened ways.
---
For more about the new logic for the marketplace of ideas (and intertwingularity in general), see this list of selected items on the SmartlyIntertwingled.com blog.
For more about the new logic for the economic marketplace, see this list of selected items on the FairPayZone.com blog.
Posted by Richard Reisman at 5:04 PM | No comments


Labels:
augmentation,
automation,
business models,
coronavirus,
COVID,
enlightenment,
FairPay,
intertwingled,
marketplace,
Media,
platforms,
politics,
regulation,
revenue models,
social media,
technology,
wisdom of crowds
Monday, January 20, 2020
Personalized Nutrition -- Because Everything is Deeply Intertwingled!
Nutrition is hard to get right because everything is deeply intertwingled. Personalized Nutrition is changing that!
This new perspective on nutrition is gaining attention, as an aspect of personalized medicine, and is the subject of a new paper, Toward the Definition of Personalized Nutrition: A Proposal by The American Nutrition Association. (I saw it as it was finalized, since my wife, Dana Reed, is a co-author, and a board member and part of the nutrition science team at ANA.)
The key idea is:

Personalized nutrition (PN) is rooted in the concept that one size does not fit all; differences in biochemistry, metabolism, genetics, and microbiota contribute to the dramatic inter-individual differences observed in response to nutrition, nutrient status, dietary patterns, timing of eating, and environmental exposures. PN has been described in a variety of ways, and other terms such as “precision nutrition,” “individualized nutrition,” and “nutritional genomics” have similar, sometimes overlapping, meanings in the literature.

I have always been something less than a poster child for following nutrition guidelines, for reasons that this report cites: "...guidelines have only limited ability to address the myriad inputs that influence the unique manifestation of an individual’s health or disease status."
I frequently cite the conundrum from Woody Allen's Sleeper, when the 1970s protagonist had just been awakened by doctors after 200 years:
Dr. Melik: This morning for breakfast he requested something called "wheat germ, organic honey and tiger's milk."
Dr. Aragon: [chuckling] Oh, yes. Those are the charmed substances that some years ago were thought to contain life-preserving properties.
Dr. Melik: You mean there was no deep fat? No steak or cream pies or... hot fudge?
Dr. Aragon: Those were thought to be unhealthy... precisely the opposite of what we now know to be true.

Overstated to be sure, but the real issue is that "one man's meat is another man's poison." Determining which is which for a given person has been impractical, but now we are not only learning that this is far more intertwingled than was thought, but we are gaining the ability to tease out what applies to a given person.
I come from this not from biology, but from machine learning and predictive analytics. My focus is on getting smarter about how everything is intertwingled.
One of the most intriguing companies I have run across is Nutrino, a startup acquired by Medtronic, that analyzes data from continuous glucose monitors used by diabetics to understand the factors that affect their glucose response over time. They correlate to specific food intakes, activity, sleep, mood, blood tests, genomics, biomics, and more. They call it a FoodPrint, "a digital signature of how our body reacts to different foods. It is contextually driven and provides correlations, insights and predictions that become the underpinning for personal and continually improving nutrition recommendations." This is one of the first successful efforts to tease out how what I eat (and what else I do) really affects me as an individual, in all of its real-world intertwingularity.
It is time to move beyond the current so-called "gold standard" of intervention-based studies, the randomized, double-blind, placebo-controlled (RDBPC) clinical trial. Reality is far too intertwingled for that to be more than narrowly useful. It is time to embrace big data, correlation, and predictive analytics. Some early recognition of this is that drugmakers are getting the FDA to accept mining of patient data as a way to avoid the need for clinical trials.
We have a long way to go, but I want to know how likely it is that a given amount of deep fat or hot fudge, or wheat germ or kale (in combination with the rest of my diet, behavior and risk factors), will have a significant effect, over a time frame that can motivate whether or not I indulge in my chocolate or eat my spinach.
It is not enough to know that the dose makes the poison -- I want to know if the average man's poison is really just my meat.
Before very long we will know.
Posted by Richard Reisman at 4:31 PM | 6 comments


Labels:
Big Data,
correlation,
healthcare,
intertwingled,
personalized medicine,
personalized nutrition,
predictive analytics
Tuesday, May 21, 2019
Reisman in Techonomy: Business Growth is Actually Good for People. Here’s Why.
My 4th piece in Techonomy was published today:
Business Growth is Actually Good for People. Here’s Why.
Blurb:
We cannot—and should not—stop growing. Sensible people know the real task is to channel growth to serve human ends.

My opening and closing:
Douglas Rushkoff gave a characteristically provocative talk last week at Techonomy NYC – which provoked me to disagree strongly...
...Rushkoff delivers a powerful message on the need to re-center on human values. But his message would be more effective if it acknowledged the power of technology and growth instead of indiscriminately railing against it. We need a reawakening of human capitalism — and a Manhattan Project to get tech back on track. That will make us a better team human.
Posted by Richard Reisman at 3:33 PM | 2 comments


Labels:
augmentation,
Augmented Intelligence,
augmented wisdom of crowds,
automation,
business models,
democracy,
disinformation,
hypermedia,
hypertext,
innovation,
intertwingled,
social media,
wisdom of crowds
Friday, April 26, 2019
"Non-Binary" means "Non-Binary"...Mostly...Right?
A "gender non-binary female?"
Seeing the interview of Asia Kate Dillon on Late Night with Seth Meyers, I was struck by one statement -- one that suggests an insidious problem of binary thinking that pervades many of the current ills in our society. Dillon (who prefers the pronoun "they") reported gaining insight into their gender identity from the character description for their role in Billions as "a gender non-binary female," saying: “I just didn’t understand how those words could exist next to each other.”
What struck me was the questioning of how these words could be sensibly put together. Why would anyone ask that question? As I thought more, I saw this as a perfect example of the much broader problem.
The curse of binary thinking
The question I ask is at a semantic level: how could that not be obvious? (regardless of one's views on gender identity). Doesn't the issue arise only if one interprets "female" in a binary way? I would have thought that one who identifies as "non-binary" would see beyond this conceptual trap of simplistic duality. Wouldn't a non-binary person be more non-binary in their thinking? Wouldn't it be obvious to a non-binary thinker that this is a matter of being non-binary and female, not of being non-binary or female?
It seems that binary thinking is so ingrained in our culture that we default to black and white readings when it is clear that most of life (outside of pure mathematics) is painted in shades of gray. It is common to think of some "females" as masculine, and some "males" as effeminate. Some view such terms as pejorative, but what is the reality? Why wouldn't a person presumed at birth to be female (for the usual blend of biological reasons) be able to be non-binary in a multitude of ways? Even biologically "female" has a multitude of aspects, which generally align, but sometimes diverge. Clearly, as to behavior in general and as to sexual orientation, there seems to be a spectrum, with many degrees in each of many dimensions (some barely noticed, some hard to miss).
So I write about this as an object lesson of how deeply the binary, black or white thinking of our culture distorts our view of the more deeply nuanced reality. Even one who sees themself as non-binary has a hard time escaping binary thinking. Why can the word "female" not be appropriate for a non-binary person (as we all are to some degree) -- one who has birth attributes that were ostensibly female? Isn't it just a fallacy of binary thinking to think it is not OK for a non-binary person to also be female? That a female cannot be non-binary?
I write about this because I have long taken issue with binary thinking. This is not meant to criticize this actor in any way, but to sympathize broadly with the prevalence of this kind of blindness and absolutism in our culture. It is to empathize with those who suffer from being thought of in binary ways that fail to recognize the non-binary richness of life -- and those who suffer from thinking of themselves in a binary way. That is a harm that occurs to most of us at one time or another. As Whitman said:

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

The bigger picture

Gender is just one of the host of current crises of binary thinking that lead to extreme polarization of all kinds. Political divides. The more irreconcilable divide over whether leadership must serve all of their constituency, or just those who support the leader, right or wrong. Fake news. Free speech and truth on campus vs. censorship for some zone of safety for binary thinkers. Trickle-down versus progressive economics. Capitalism versus socialism. Immigrant versus native. One race or religion versus another. Isn't the recent focus of some on "intersectionality" just an extension of binary thinking to multiple binary dimensions? Thinking in terms of binary categories (rather than category spectrums) distances and demonizes the other, blinding us to how much common ground there is.
The Tao symbol (which appears elsewhere in this blog) is a perfect illustration of my point, and an age-old symbol of the non-dualistic thinking central to some Asian traditions (I just noticed the irony of the actor's first name as I wrote this sentence!). We have black and white intertwined, and the dot of contrast indicates that each contains its opposite. That suggests that all females have some male in them (however large or small, and in whatever aspect) and all males have some female in them (much as some males would think that a blood libel).
Things are not black or white, but black and white. And even if nearly black or white in a single dimension, single dimensions rarely matter to the larger picture of any issue. I think we should all make a real effort to remind ourselves that that is the case for almost every issue of importance.
---
(I do not profess to be "woke," but do very much try to be "awakened" and accepting of the wondrous richness of our world. My focus here is on binary and non-binary thinking, itself. I use gender identity as the example only because of this statement that struck me. If I misunderstand or express my ideas inartfully in this fraught domain, that is not my intent. I hope it is taken in the spirit of finding greater understanding that is intended.)
(In that vein, I accept that there may be complex issues specific to gender and identity that go counter to my semantic argument in some respects. But my non-binary view is that that broader truth of non-duality still over-arches. And in an awakened non-binary world, the current last word can never be known to be the future last word.)
(See also the short post just below on the theme of this blog.)
Posted by Richard Reisman - Sociotechnical network/systems thinker, visionary, inventor, pioneer | Author: @TechPolicyPress, FairPay | Nonresident Senior Fellow, Foundation for American Innovation, at 1:33 PM
Labels: binary, echo chamber, fake news, intertwingled, Nelson, news, non-binary, non-duality, polarization, politics, social media, surprising validators, Tao, truth
A Note on the Theme of this Blog: Everything is Deeply Intertwingled -- and, Hopefully, Becoming Smartly Intertwingled
The next post (to appear just above) is the first to indulge my desire to comment more broadly on the theme that "everything is deeply intertwingled" (as Ted Nelson put it). That has always been a core of my worldview and has been increasingly weaving into my posts -- especially on the problems of how we deal with "truth" in our social media. I say we should move toward making things more smartly intertwingled.
That post, and some that will follow, move far out of my professional expertise, but I see all of my ideas as deeply intertwingled. (I have always been intrigued by epistemology, the theory of knowledge: what can we know and how do we know it). This current topic provided the impetus to act on my latent intent to broaden the scope of this blog to these larger issues that are now creating so much dysfunction in our society.
Beyond Ted Nelson's classic statement and his diagram (above, from Computer Lib/Dream Machines), the symbol that most elegantly conveys this perspective is the Tao symbol, which appears in many of my posts. It shows the yin and yang of female and male as intertwingling symbols of those elemental opposites — and the version with the dots in each intertwingled portion suggests that each element also contains its opposite (a further level of intertwingling).
[Update 6/13/19, on changing the blog header:]
This blog was formerly known as “Reisman on User-Centered Media,” with the description:
On developing media platforms that are user-centered – open and adaptable to the user's needs and desires – and that earn profit from the value they create for users ...and as tools for augmenting human intellect and enlightened democracy.

That continues to be a major theme.
Thursday, April 26, 2018
Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm
We dreamed that our Internet platforms would serve us miraculously, but now see that they have taken a wrong turn in many serious respects. That realization has reached a crescendo in the press and in Congress with regard to Facebook and Google's advertising-driven services, but it reaches far more deeply.
"Titans on Trial: Do internet giants have too much power? Should governments intervene?" -- I had the honor last night of attending this stimulating mock trial, with author Ken Auletta as judge and FTC Commissioner Terrell McSweeny and Rob Atkinson, President of the Information Technology and Innovation Foundation (ITIF), as opposing advocates (hosted by Genesys Partners). My interpretation of the jury verdict (voted by all of the attendees, who were mostly investors or entrepreneurs) was: yes, most agree that regulation is needed, but it must be nuanced and smartly done, not heavy handed. Just how to do that will be a challenge, but it is a challenge that we must urgently consider.
I have been outlining views on this that go in some novel directions, but are generally consistent with the views of many other observers. This post takes a broad view of those suggestions, drawing from several earlier posts.
One of the issues touched on below is a core business model issue -- the idea that the ad-model of "free" services in exchange for attention to ads is "the original sin of the Internet." It has made users of Facebook and Google (and many others) "the product, not the customer," in a way that distorts incentives and fails to serve the user interest and the public interest. As the Facebook fiasco makes clear, these business model incentives can drive these platforms to provide just enough value to "engage" us to give up our data and attend to the advertiser's messages and manipulation and even to foster dopamine-driven addiction, but not necessarily to offer consumer value (services and data protection) that truly serves our interests.
That issue is specifically addressed in a series of posts in my other blog that focuses on a novel approach to business models (and regulation that centers on that), and those posts remain the most focused presentations on those particular issues:
- Who Should Pay the Piper for Facebook? (& the rest) -- the core solution I propose
- Privacy AND Innovation ...NOT Oligopoly -- A Market Solution to a Market Problem -- a simple regulatory approach to change incentives without overreach.
Rethinking our networks -- and the algorithms that make all the difference
Drawing on my long career as a systems analyst/engineer/designer, manager, entrepreneur, inventor, and investor (including early days in the Bell System when it was a regulated monopoly providing "universal service"), I have recently come to share the fear of many that we are going off the rails.
But in spite of the frenzy, it seems we are still failing to refocus on better ways to design, manage, use, and govern our networks -- to better balance the best of hierarchy and openness. Few who understand technology and policy are yet focused on the opportunities that I see as reachable, and now urgently needed.
New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control -- and with the innovative efficiency of an open market. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, to apply checks and balances that work much like our representative democracy. This is a still-formative development of ideas that I have written about for years, and plan to continue into the future.
First some context. The crucial differences among all kinds of networks (including hierarchies) are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. We now have the power to create a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.
Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change). Even our current social and information networks are constrained in important ways. Some examples:
The AT&T telephone network: Initially it was rigid and regulated in great detail by the government, but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget that only AT&T phones could be used (except for a few alternative devices, like early fax machines, that went through cumbersome and often arbitrary AT&T approval processes). Remember the acoustic modem coupler, needed because modems could not be directly connected? That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack").
Similarly, only AT&T long-distance connections could be used, until the antitrust Consent Decree broke the "Baby Bells" off from AT&T Long Lines, opening long-distance service to competition on equal terms with carriers like MCI and Sprint. Manufacturing was also opened to new competitors.
In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems interoperate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components interoperate as interchangeable parts -- extending the great advance of the first Industrial Revolution to software.
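To make the "software jack" idea concrete, here is a minimal sketch (all names hypothetical) of an API as an interchangeable part: the caller depends only on a small published contract, so any conforming implementation can be plugged in without changing the caller.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """A software 'Registered Jack': a small published contract.
    Any backend implementing it can be swapped in freely."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class MemoryStorage(Storage):
    """One interchangeable implementation (a cloud or disk backend
    could be substituted without touching the caller)."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

def save_profile(store: Storage) -> str:
    # The caller sees only the interface -- just enough, but no more.
    store.put("name", "alice")
    return store.get("name")
```

Swapping `MemoryStorage` for any other `Storage` subclass changes nothing for `save_profile` -- the modularity that made interchangeable parts work in hardware.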
What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below (including the very relevant model of sophisticated interoperability in electronic mail networks), but first, a bit more on the motivations.
Modularity, emergence, markets, transparency, and democracy
Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. (The picture here of the tar pits is from Fred Brooks' classic 1975 book on IBM's first large software project.) Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud that are composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, advertising, and tracking services from many companies).
The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published, open APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.
The challenge is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another, more hidden and abstract, kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government), but by government where necessary (preferably only to the extent and duration necessary).
That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age of religious war that followed Gutenberg (as Ferguson described). Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and to mobilize to deal with it, and to get citizens and policymakers to understand the issues.
Once that progresses and is found to be effective, similar methods may eventually be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully noted the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, even the traditional principles of history, civics, and governance that are prerequisite to a working democracy.)
Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)
This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet.")
The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms -- which can then be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (initially limited to apps created by Apple), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their early fears). Why should first movers be allowed to block essential innovation? Why should they have sole control and knowledge of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control and understand, with just enough hierarchical structure to prevent excessive instability at any given time?
Consider the following specific areas of opportunity.
Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third-party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart.
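A minimal sketch of what user-selectable filtering might look like -- all names are hypothetical, since no real platform exposes such an API today. The platform hands the raw feed to whichever ranking algorithm the user has plugged in:

```python
from typing import Callable

# A post is just a dict here, e.g. {"text": ..., "likes": ..., "source_trust": ...}
Post = dict
FilterAlgo = Callable[[list], list]

def popularity_filter(posts: list) -> list:
    # Engagement-driven ranking, roughly what platforms do by default.
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def trust_filter(posts: list) -> list:
    # A swappable third-party alternative: rank by source trustworthiness.
    return sorted(posts, key=lambda p: p["source_trust"], reverse=True)

def build_feed(posts: list, algo: FilterAlgo) -> list:
    # The platform controls the core service but runs whichever
    # filter the user selected -- the pluggable component.
    return algo(posts)
```

The same feed yields different orderings depending on which filter the user chose, which is the point: the fitness function moves under user control.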
For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and want to think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.
The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but, I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage, and much as science is open to all, but driven by those who gain authority, temporary as that may be).
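The RateRank idea can be sketched as a PageRank-style power iteration over a who-trusts-whom graph. This is my illustrative reconstruction of the analogy, not a published algorithm; the damping factor and edge weights are assumptions.

```python
def rate_rank(trust: dict, iters: int = 50, d: float = 0.85) -> dict:
    """Authority flows along trust edges, weighted by how strongly each
    rater rates each target -- analogous to PageRank over a page graph.
    `trust` maps rater -> {target: weight}."""
    people = set(trust) | {t for edges in trust.values() for t in edges}
    rank = {p: 1.0 / len(people) for p in people}
    for _ in range(iters):
        # Everyone keeps a small base level of authority (damping),
        # and receives shares of the authority of those who rate them.
        new = {p: (1 - d) / len(people) for p in people}
        for rater, edges in trust.items():
            total = sum(edges.values())
            if total == 0:
                continue
            for target, weight in edges.items():
                new[target] += d * rank[rater] * (weight / total)
        rank = new
    return rank
```

Authority ascribed by already-authoritative people counts for more, and the scores refine cyclically with each iteration -- the self-improving "smart crowd" loop described above.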
Social graphs: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own the data defining my social graph relationships or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.
User agents versus network service agents: Email systems were modularized in Internet standards long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with federated remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity.
Why not do much the same, to let any social media user agent interoperate with any other, using a federated social graph and federated message transfer agents? We could then set our user agent to apply filters to let us see whichever communities we want to see at any given time. Some startups have attempted to build stand-alone social networks that focus on sub-communities like family or close friends versus hundreds of more or less remote acquaintances. Why not just make that a flexible and dynamic option, that we can control at will with a single user agent? Why require a startup to build and scale all aspects of a social media service, when they could just focus on a specific innovation? (The social media UX can be made interoperable to a high degree across different user agents, just as email user agents handle HTML, images, attachments, emojis, etc. -- and as do competing Web browsers.)
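A toy sketch of that email-style federation applied to social posts. The class names are hypothetical, and a global dictionary stands in for DNS-style server lookup; the point is that any user agent can reach any user through federated transfer agents.

```python
# domain -> transfer agent, standing in for DNS/MX-style lookup
SERVERS = {}

class TransferAgent:
    """Federated server: delivers locally, or forwards to the
    recipient's home server (like an email mail transfer agent)."""
    def __init__(self, domain: str):
        self.domain = domain
        self.inboxes = {}
        SERVERS[domain] = self

    def deliver(self, address: str, post: str) -> None:
        user, _, domain = address.partition("@")
        if domain == self.domain:
            self.inboxes.setdefault(user, []).append(post)
        else:
            SERVERS[domain].deliver(address, post)  # federate outward

class UserAgent:
    """Any client UX (with any filter) can sit on top of the same
    federation protocol, as email clients do with SMTP."""
    def __init__(self, address: str, home: TransferAgent):
        self.address, self.home = address, home

    def post_to(self, address: str, text: str) -> None:
        self.home.deliver(address, text)
```

A startup could then innovate on the user agent alone (a family-only view, a professional view) without having to build and scale the whole network.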
Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). First class users would get validated identities, and be given full privileges, while anonymous users might be permitted but clearly flagged as such, with second class privileges. That would allow users to be exposed to anonymous content, when desired, but without confusion as to trust levels. Levels of identity could be clearly marked in feeds, and users could filter out anonymous or unverified users if desired. (We do already see some hints of this, but only to a very limited degree.)
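The multiple-levels-of-identity idea might be sketched like this (tier names are illustrative): each user chooses the minimum identity level they want to see, rather than the platform making one binary allow/ban choice for everyone.

```python
from enum import IntEnum

class IdentityTier(IntEnum):
    ANONYMOUS = 0        # permitted, but clearly flagged as such
    PSEUDONYMOUS = 1     # persistent alias, not verified
    VERIFIED_HUMAN = 2   # authenticated as a real individual
    VERIFIED_ORG = 3     # authenticated legitimate organization

def visible_posts(posts: list, min_tier: IdentityTier) -> list:
    # Filtering is per-user: lower tiers are hidden for this user,
    # not banned platform-wide, so trust levels stay unambiguous.
    return [p for p in posts if p["tier"] >= min_tier]
```

A feed could also display each post's tier as a badge, so users who opt to see anonymous content are never confused about its trust level.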
Value transfers and extractions: As noted above, another very important problem is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent.
I have done extensive work on alternative strategies that adaptively customize value propositions and prices to markets of one -- a new strategy for a new social contract that can shape our commercial relationships to sustain services in proportion to the value they provide, and our ability to pay, so all can afford service. A key part of the issue is to ensure that users are compensated for the value of the data they provide. That can be done as a credit against user subscription fees (a "reverse meter"), at levels that users accept as fair compensation. That would shift incentives toward satisfying users (effectively making the advertiser their customer, rather than the other way around). This method has been described in the Journal of Revenue and Pricing Management: “A novel architecture to monetize digital offerings,” and very briefly in Harvard Business Review. More detail is in my FairPayZone blog and my book (see especially the posts about the Facebook and Google business models that are listed in the opening section, above, and again at the end.*)
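The "reverse meter" reduces to simple arithmetic. A minimal sketch with illustrative numbers (the fee and credit amounts are my assumptions, not from the source):

```python
def monthly_bill(base_fee: float, data_credit: float, ad_credit: float) -> float:
    """Reverse-meter sketch: the subscription fee is offset by credits
    for the data and ad attention the user chooses to contribute.
    Clamped at zero -- a heavy contributor rides free."""
    return max(0.0, base_fee - data_credit - ad_credit)
```

For example, a hypothetical $12.00 fee with a $3.50 data credit and a $4.00 advertising credit nets to $4.50; a user who opts out of all data sharing and ads simply pays the full fee, and is unambiguously the customer.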
Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties.
(Sensitive data could be restricted to qualified organizations, with special controls to avoid issues like the Cambridge Analytica misuse. The answer to such abuse is not greater concentration of power in one platform, as Maurice Stucke points out in Harvard Business Review, "Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data." Facebook has already moved toward greater concentration of power.)
If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. We do that very successfully now with our choice of Web browsers and email user agents. We could have similar flexibility and choice in our platforms -- innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- in ways more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).
Augmentation of humans (and their networks)
Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.
The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.
Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.
Reality is not binary: "Everything is deeply intertwingled"
Ted Nelson (who coined the term "hypertext" and was another of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." As he put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."
It's a race: augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster
If we pull together to realize this potential, we can transcend the dichotomies and conflicts that are so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, many of us discount how non-linear the effect of smarter networks, with more dynamically augmented and balanced structures, can be. But we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement, with elements of regulation, but primarily reliant on flexible, modular architectural design.
---
See the Selected Items tab for more on this theme.
---
Coda: On becoming more smartly intertwingled
Everything in our world has always been deeply intertwingled. Human intellect augmented with technology enables us to make our world more smartly intertwingled. But we have lost our way, in the manner that Engelbart alluded to in his illustration of de-augmentation -- we are becoming deeply polarized, addicted to self-destructive dopamine-driven engagement without insight or nuance. We are being de-augmented by our own technology run amok.
(I plan to re-brand this blog as "Smartly Intertwingled" -- that is the objective that drives my work. The theme of "User-Centered Media" is just one important aspect of that.)
--------------------------------------------------------------------------------------------
*On business models - FairPay (my other blog): As noted above, a series of posts in my other blog focus on a novel approach to business models (and regulation that centers on that), and those posts remain my best presentation on those issues:
- Who Should Pay the Piper for Facebook? (& the rest) -- the core solution I propose
- Privacy AND Innovation ...NOT Oligopoly -- A Market Solution to a Market Problem -- a simple regulatory approach to change incentives without overreach.
Some examples of the ways network rules are constrained, from rigid to emergent:
- The US Constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances.
- Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
- Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by, and private to, Facebook. (In January they announced a major change -- unilaterally -- perhaps for the better for users and society, if not for content publishers, but reports quickly surfaced that it had unintended consequences when tested.)
- Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
Our vaunted high-tech networks are controlled by corporate hierarchies (FANG: Facebook, Amazon, Netflix, and Google in much of the world, and BAT: Baidu, Alibaba, and Tencent in China) -- but are subject to limited levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly indirectly, in the form of advertising and sales of personal data.
Smarter, more open and emergent algorithms -- APIs and a common carrier governance model
The answer to the question of governance is to make our network algorithms not only smarter, but more open to appropriate levels of individual and multi-party control. Business monopolies or oligopolies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found it necessary to regulate rail and telephone networks as common carriers, limiting the corporate owners' power to control how those networks are used and giving marketplace players (competitors and consumers) a share in that control.

Similarly, only AT&T long-distance connections could be used until the antitrust Consent Decree broke the "Baby Bells" off from AT&T's Long Lines, which then had to compete on equal terms with carriers like MCI and Sprint. Manufacturing was also opened to new competitors.
In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems interoperate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components interoperate as interchangeable parts -- extending the great advance of the first Industrial Revolution to software.
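The interchangeable-parts idea can be sketched in a few lines of Python (the names here are illustrative, not any real platform's API): a published interface plays the role of the jack, and any component matching its shape can be plugged in without the host system changing.

```python
from typing import Protocol

class TextScorer(Protocol):
    """The published interface -- the software equivalent of a standard jack."""
    def score(self, text: str) -> float:
        ...

class WordCountScorer:
    """One interchangeable part: scores by length."""
    def score(self, text: str) -> float:
        return float(len(text.split()))

class KeywordScorer:
    """A different vendor's part, same interface: scores by keyword hits."""
    def __init__(self, keywords: set[str]):
        self.keywords = keywords
    def score(self, text: str) -> float:
        return float(sum(1 for w in text.lower().split() if w in self.keywords))

def rank(items: list[str], scorer: TextScorer) -> list[str]:
    # The host system depends only on the interface, not any implementation.
    return sorted(items, key=scorer.score, reverse=True)
```

Either scorer can be swapped in by the user of `rank` -- just enough interoperation, but no more.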
What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below (including the very relevant model of sophisticated interoperability in electronic mail networks), but first, a bit more on the motivations.
Modularity, emergence, markets, transparency, and democracy
Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. (The picture here of the tar pits is from Fred Brooks's classic book, The Mythical Man-Month, drawn from his experience managing IBM's first large software project.) Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud that are composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, advertising, and tracking services from many companies).
The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published, open APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.
The challenge is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another, more hidden and abstract, kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government) and government regulation where necessary (preferably only to the extent and duration necessary).
That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age of religious war that followed Gutenberg (as Ferguson described). Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and to mobilize to deal with it, and to get citizens and policymakers to understand the issues.
Once that progresses and is found to be effective, similar methods may eventually be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully noted the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, even the traditional principles of history, civics, and governance that are prerequisite to a working democracy.)
Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)
This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet.")
The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms -- which can then be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (initially limited to apps created by Apple), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their early fears). Why should first movers be allowed to block essential innovation? Why should they have sole control and knowledge of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control and understand, with just enough hierarchical structure to prevent excessive instability at any given time?
Consider the following specific areas of opportunity.
Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third-party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart.
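A minimal sketch of what a pluggable, user-selectable filter might look like (hypothetical names, not any platform's real API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str

# A user-selectable filter is just a function from candidate posts to a feed.
FeedFilter = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    return posts  # pass-through: show everything the platform offers

def mute_keywords(muted: set[str]) -> FeedFilter:
    """A pluggable alternative: drop posts containing muted terms."""
    def apply(posts: list[Post]) -> list[Post]:
        return [p for p in posts
                if not any(m in p.text.lower() for m in muted)]
    return apply

def build_feed(candidates: list[Post], user_filter: FeedFilter) -> list[Post]:
    # The platform controls the candidate pool; the user chooses the filter.
    return user_filter(candidates)
```

The platform still controls its core services and the candidate pool; the user swaps the filter function, much as one swaps browsers or mail clients.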
For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and want to think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.
The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but, I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage, and much as science is open to all, but driven by those who gain authority, temporary as that may be).
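A speculative sketch of that cyclic, self-improving process, in the spirit of the PageRank power iteration (the function, parameters, and damping value here are illustrative assumptions, not a published algorithm):

```python
def rate_rank(ratings: dict[str, dict[str, float]],
              iterations: int = 50, damping: float = 0.85) -> dict[str, float]:
    """Sketch of 'RateRank': authority ascribed to people by other people,
    weighted by the raters' own authority, iterated toward a fixed point."""
    people = set(ratings) | {p for given in ratings.values() for p in given}
    n = len(people)
    authority = {p: 1.0 / n for p in people}
    for _ in range(iterations):
        # Small base authority for everyone (the damping/teleport term).
        new = {p: (1.0 - damping) / n for p in people}
        for rater, given in ratings.items():
            total = sum(given.values())
            if total <= 0:
                continue
            for target, weight in given.items():
                # A rating counts in proportion to the rater's own authority.
                new[target] += damping * authority[rater] * (weight / total)
        authority = new
    return authority
```

Each person's ratings of others count in proportion to the authority the crowd has already ascribed to the rater, so the "smart sub-crowd" emerges over repeated passes rather than being decreed top-down.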
Social graphs: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own the data defining my social graph relationships or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.
User agents versus network service agents: Email systems were modularized in Internet standards long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with federated remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity.
Why not do much the same, to let any social media user agent interoperate with any other, using a federated social graph and federated message transfer agents? We could then set our user agent to apply filters to let us see whichever communities we want to see at any given time. Some startups have attempted to build stand-alone social networks that focus on sub-communities like family or close friends versus hundreds of more or less remote acquaintances. Why not just make that a flexible and dynamic option, that we can control at will with a single user agent? Why require a startup to build and scale all aspects of a social media service, when they could just focus on a specific innovation? (The social media UX can be made interoperable to a high degree across different user agents, just as email user agents handle HTML, images, attachments, emojis, etc. -- and as do competing Web browsers.)
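Following the stated email analogy, a minimal sketch of federated delivery might look like this (the domains, class names, and in-memory registry are hypothetical simplifications of what real transfer agents do over the wire):

```python
from collections import defaultdict

class TransferAgent:
    """Sketch of a federated 'social transfer agent', by analogy with email
    MTAs. Addresses are 'user@domain'; each domain runs its own server."""
    registry: dict[str, "TransferAgent"] = {}  # stand-in for DNS lookup

    def __init__(self, domain: str):
        self.domain = domain
        self.inboxes: dict[str, list[str]] = defaultdict(list)
        TransferAgent.registry[domain] = self

    def deliver(self, recipient: str, message: str) -> None:
        user, _, domain = recipient.partition("@")
        if domain == self.domain:
            self.inboxes[user].append(message)  # local delivery
        else:
            # Relay to the recipient's home server, wherever it runs.
            TransferAgent.registry[domain].deliver(recipient, message)

class UserAgent:
    """Any user agent can talk to any server -- the user picks the client."""
    def __init__(self, address: str, home: TransferAgent):
        self.address, self.home = address, home

    def post(self, to: str, message: str) -> None:
        self.home.deliver(to, message)

    def read(self) -> list[str]:
        return self.home.inboxes[self.address.split("@")[0]]
```

A startup could then innovate on the user agent alone (a new filter or UX) while delivery and the social graph remain federated and interoperable.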
Value transfers and extractions: As noted above, another very important problem is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent.
I have done extensive work on alternative strategies that adaptively customize value propositions and prices to markets of one -- a new strategy for a new social contract that can shape our commercial relationships to sustain services in proportion to the value they provide, and our ability to pay, so all can afford service. A key part of the issue is to ensure that users are compensated for the value of the data they provide. That can be done as a credit against user subscription fees (a "reverse meter"), at levels that users accept as fair compensation. That would shift incentives toward satisfying users (effectively making the advertiser their customer, rather than the other way around). This method has been described in the Journal of Revenue and Pricing Management: “A novel architecture to monetize digital offerings,” and very briefly in Harvard Business Review. More detail is in my FairPayZone blog and my book (see especially the posts about the Facebook and Google business models that are listed in the opening section, above, and again at the end.*)
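As a toy illustration of the "reverse meter" arithmetic (the function and numbers are invented for this sketch, not taken from the published method):

```python
def monthly_bill(subscription_fee: float, data_value_credit: float,
                 opted_into_ads_and_data: bool) -> float:
    """Illustrative 'reverse meter': users who opt in to ads and data use
    earn a credit against their subscription fee; opt-outs pay in full."""
    if not opted_into_ads_and_data:
        return subscription_fee  # the user pays, and so becomes the customer
    return max(0.0, subscription_fee - data_value_credit)
```

If the credit fully offsets the fee, service remains effectively free; if users opt out, the platform's revenue comes from them directly, shifting its incentives toward serving them.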
Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties.
(Sensitive data could be restricted to qualified organizations, with special controls to avoid issues like the Cambridge Analytica misuse. The answer to such abuse is not greater concentration of power in one platform, as Maurice Stucke points out in Harvard Business Review: "Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data." Facebook has already moved toward greater concentration of power.)
If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. We do that very successfully now with our choice of Web browsers and email user agents. We could have similar flexibility and choice in our platforms -- innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- in ways more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).
Augmentation of humans (and their networks)
Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.
The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.
Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom-up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.
Reality is not binary: "Everything is deeply intertwingled"

It's a race: augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

[Update 12/14/20] A specific proposal - Stanford Working Group on Platform Scale
An important proposal that gets at the core of the problems in media platforms was published in Foreign Affairs, How to Save Democracy From Technology, by Francis Fukuyama and others. See also the report of the Stanford Working Group. The idea is to let users control their social media feeds with open market interoperable filters. That is something I proposed here (in the "Filtering rules" section, above). Other regulatory proposals that include some of the suggestions made here are summarized in Regulating our Platforms -- A Deeper Vision.
---
See the Selected Items tab for more on this theme.
---
Coda: On becoming more smartly intertwingled
Everything in our world has always been deeply intertwingled. Human intellect augmented with technology enables us to make our world more smartly intertwingled. But we have lost our way, in the manner that Engelbart alluded to in his illustration of de-augmentation -- we are becoming deeply polarized, addicted to self-destructive dopamine-driven engagement without insight or nuance. We are being de-augmented by our own technology run amok.
(I plan to re-brand this blog as "Smartly Intertwingled" -- that is the objective that drives my work. The theme of "User-Centered Media" is just one important aspect of that.)
--------------------------------------------------------------------------------------------
*On business models - FairPay (my other blog): As noted above, a series of posts in my other blog focus on a novel approach to business models (and regulation that centers on that), and those posts remain my best presentation on those issues:
- Who Should Pay the Piper for Facebook? (& the rest) -- the core solution I propose
- Privacy AND Innovation ...NOT Oligopoly -- A Market Solution to a Market Problem -- a simple regulatory approach to change incentives without overreach.
Posted by Richard Reisman (Sociotechnical network/systems thinker, visionary, inventor, pioneer | Author: @TechPolicyPress, FairPay | Nonresident Senior Fellow, Foundation for American Innovation) at 1:44 PM


Labels: democracy, Facebook, fake news, filter bubble, Google, hierarchies, incentives, innovation, Internet, intertwingled, Media, social media, social network, Twitter, wisdom of crowds
Saturday, January 13, 2018
"The Square and the Tower" — Augmenting and Modularizing the Algorithm (a Review and Beyond)
[Note: A newer post updates this one and removes much of the book review portion, to concentrate on the forward-looking platform issues: Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm.]
---
Niall Ferguson's new book, The Square and the Tower: Networks and Power from the Freemasons to Facebook is a sweeping historical review of the perennial power struggle between top-down hierarchies and more open forms of networks. It offers a thought-provoking perspective on a wide range of current global issues, as the beautiful techno-utopian theories of free and open networks increasingly face murder by two brutal gangs of facts: repressive hierarchies and anarchistic swarms.
Ferguson examines the ebb and flow of power, order, and revolution, with important parallels between the Gutenberg revolution (which led to 130 years of conflict) and our digital revolution, as well as much in between. There is valuable perspective on the interplay of social networks (old and new), the hierarchies of governments (liberal and illiberal), anarchists/terrorists, and businesses (disruptive and monopolistic). One can disagree with Ferguson's conservative politics yet find his analysis illuminating.
Drawing on a long career as a systems analyst/engineer/designer, manager, entrepreneur and inventor, I have recently come to share much of Ferguson's fear that we are going off the rails. He cites important examples like the 9/11 attacks, counterattacks, and ISIS, the financial meltdown of 2008, and most concerning to me, the 2016 election as swayed by social media and hacking. However -- discouraging as these are -- he seems to take an excessively binary view of network structure, and to discount the ability of open networks to better reorganize and balance excesses and abuse. He argues that traditional hierarchies should reestablish dominance.
In that regard, I think Ferguson fails to see the potential for better ways to design, manage, use, and govern our networks -- and to better balance the best of hierarchy and openness. To be fair, few technologists are yet focused on the opportunities that I see as reachable, and now urgently needed.
New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, to apply checks and balances that work much like our representative democracy. This is a still-formative exploration of some ideas that I have written about, and plan to expand on in the future. First some context.
The Square (networks), the Tower (hierarchies) and the Algorithms that make all the difference
Ferguson's title comes from his metaphor of the medieval city of Sienna, with a large public square that serves as a marketplace and meeting place, and a high tower of government (as well as a nearby cathedral) that displayed the power of those hierarchies. But as he elaborates, networks have complex architectures and governance rules that are far richer than the binary categories of either "network" (a peer-to-peer network with informal and emergent rules) or "hierarchy" (a constrained network with more formal directional rankings and restrictions on connectivity).
The crucial differences among all kinds of networks are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. While his analysis draws out the rich variety of such structures, in many interesting examples, with diagrams, what he seems to miss is any suggestion of a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.
Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change) -- and even our current social and information networks are constrained in important ways. Consider, for example, the old AT&T telephone network.
Initially this was rigid and regulated in great detail by the government (very hierarchical), but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget how only AT&T phones could be used (except for special cases of alternative devices like early faxes (Xerox "telecopiers") that went through cumbersome and often arbitrary AT&T approval processes). That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack"). Similarly only AT&T long-distance connections could be used, until the antitrust Consent Decree opened up competition among the "Baby Bells" and broke them off from Long Lines to compete on equal terms with carriers like MCI and Sprint.
In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems inter-operate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components inter-operate as interchangeable parts -- the great advance of the first Industrial Revolution.
What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below, but first, a bit more on the motivations.
Modularity, emergence, markets, transparency, and democracy
Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, and tracking services from many companies).
The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.
The challenge that Ferguson makes clear is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government) and government regulation where necessary (and only to the extent and duration necessary).
That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age that followed Gutenberg. Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and mobilize to deal with it.
Once that progresses and is found to be effective, similar methods can be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully notes the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, the principles of history, civics, and governance that are prerequisite to a working democracy.)
Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)
This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet" -- another tower.)
The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms, to be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (limited to apps created by Apple or Motorola or Verizon), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their fears). Why should first movers be allowed to block essential innovation? Why should they have sole control of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control, with just enough hierarchical structure to prevent excessive instability at any given time?
Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, and fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow for separate filters that inter-operate. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filters. This would be much like third party analytics in financial market data systems. Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful.
For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.
The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but, what I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage).
Social graphs and user agents: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph. Why not let other user agents interoperate with a given platform’s social graph? Does the platform own my social graph or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.
Email systems were modularized long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity. Why not do the same to let any social media user agent inter-operate with any other, using a common social graph? We would then set our user agent to apply filters to let us see whichever communities we want to see at any given time.
Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). These could then be clearly marked in feeds, and users could filter out anonymous or unverified users if desired.
Value transfers and extractions: Another important problem, and one that Ferguson cites is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent. (I have done extensive work on such alternative strategies, as described in my FairPayZone blog and my book.)
Analytics and metrics: we need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties. (Sensitive data could be restricted to qualified organizations.) [This paragraph added 1/14.]
If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. But we could have the choice, and innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).
Augmentation of humans (and their networks)
Another very powerful aspect of networks and algorithms that Ferguson (and many others) neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis, and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of mundane uses of computers, that proved eye-opening ("the mother of all demos") at a 1968 conference, and was one of the key inspirations for modern user interfaces, hypertext, and the Web.
The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.
Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.
Not binary: networks versus hierarchies -- "Everything is deeply intertwingled"
Ted Nelson (who coined the term "hypertext" and was also one of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." Ferguson's exposition illuminates how true that is of history. Unfortunately, his artificially binary dichotomy of hierarchies versus networks tends to mask this, and seems to blind him to how much more intertwingled we can expect our networks to be in the future. As Nelson put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."
It's a race: augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster
If we pull together to realize this potential, we can transcend the dichotomies and conflicts of the Square and the Tower that Ferguson reveals as so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, Ferguson seems to discount how non-linear the effect of smarter networks with more dynamically augmented and balanced structures can be. But he is right to be very fearful, and to raise the alarm -- we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement.
---
First published in Reisman on User-Centered Media, 1/13/18.
---
See the Selected Items tab for more on this theme.
---

Ferguson examines the ebb and flow of power, order, and revolution, with important parallels between the Gutenberg revolution (which led to 130 years of conflict) and our digital revolution, as well as much in between. There is valuable perspective on the interplay of social networks (old and new), the hierarchies of governments (liberal and illiberal), anarchists/terrorists, and businesses (disruptive and monopolistic). One can disagree with Ferguson's conservative politics yet find his analysis illuminating.
Drawing on a long career as a systems analyst/engineer/designer, manager, entrepreneur and inventor, I have recently come to share much of Ferguson's fear that we are going off the rails. He cites important examples like the 9/11 attacks, counterattacks, and ISIS, the financial meltdown of 2008, and most concerning to me, the 2016 election as swayed by social media and hacking. However -- discouraging as these are -- he seems to take an excessively binary view of network structure, and to discount the ability of open networks to better reorganize and balance excesses and abuse. He argues that traditional hierarchies should reestablish dominance.
In that regard, I think Ferguson fails to see the potential for better ways to design, manage, use, and govern our networks -- and to better balance the best of hierarchy and openness. To be fair, few technologists are yet focused on the opportunities that I see as reachable, and now urgently needed.
New levels of man-machine augmentation, and new levels of decentralized and modularized intelligence, can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for the modular engineering of rules -- one that blends top-down stability with bottom-up emergence, applying checks and balances much like those of our representative democracy. This is a still-formative exploration of some ideas that I have written about, and plan to expand on in the future. First, some context.
The Square (networks), the Tower (hierarchies) and the Algorithms that make all the difference
Ferguson's title comes from his metaphor of the medieval city of Siena, with a large public square that serves as a marketplace and meeting place, and a high tower of government (as well as a nearby cathedral) that displayed the power of those hierarchies. But as he elaborates, networks have complex architectures and governance rules that are far richer than the binary categories of either "network" (a peer-to-peer network with informal and emergent rules) or "hierarchy" (a constrained network with more formal directional rankings and restrictions on connectivity).
The crucial differences among all kinds of networks are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. While his analysis draws out the rich variety of such structures through many interesting examples (with diagrams), what he seems to miss is any suggestion of a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They can become far more emergent in both structure and policy, while still subject to the basic constraints needed for stability and fairness.
Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change) -- and even our current social and information networks are constrained in important ways. For example:
- The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances.
- Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
- Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by and private to Facebook. (They have just announced a major change -- unilaterally -- hopefully for the better for users and society, if not for content publishers.)
- Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
As Ferguson points out, our vaunted high-tech networks are controlled by corporate hierarchies (he refers to FANG, Facebook, Amazon, Netflix, and Google, and BAT, Baidu, Alibaba, and Tencent) -- but subject to levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly in the form of advertising and sales of personal data.
A fuller summary of Ferguson's message is in his WSJ preview article, "In Praise of Hierarchy." The headline signals which side of the fence he is on.
Smarter, more open and emergent algorithms -- APIs and a common carrier governance model
My view on this is more positive -- the answer to the question of governance is to make our network algorithms not only smarter, but more open to individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century, governments found it necessary to regulate rail and telephone networks as common carriers, limiting the corporate owners' power to control how the networks are used, and giving marketplace players (competitors and consumers) a large share in that control.
A key step in opening the telephone network was the requirement that third-party devices be allowed to connect through a standardized jack -- a simple, well-defined interface between the network and equipment it does not control.
In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems inter-operate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components inter-operate as interchangeable parts -- the great advance of the first Industrial Revolution.
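To make the "interchangeable parts" point concrete, here is a minimal sketch in Python. All names here (`RankingAPI`, `render_feed`, the two rankers) are hypothetical illustrations, not any real platform's API: the host system depends only on a published interface, so any conforming component can plug in like a jack.

```python
from typing import Protocol


class RankingAPI(Protocol):
    """A hypothetical API 'jack': any component with this shape can plug in."""

    def rank(self, items: list[str]) -> list[str]:
        ...


class ChronologicalRanker:
    """Keeps the platform's incoming order untouched."""

    def rank(self, items: list[str]) -> list[str]:
        return list(items)


class AlphabeticalRanker:
    """A drop-in alternative with different behavior behind the same interface."""

    def rank(self, items: list[str]) -> list[str]:
        return sorted(items)


def render_feed(ranker: RankingAPI, items: list[str]) -> list[str]:
    # The host calls only the interface; implementations are interchangeable.
    return ranker.rank(items)
```

Swapping `AlphabeticalRanker()` for `ChronologicalRanker()` changes behavior without any change to `render_feed` -- the modularity that an open, mandated API would give third parties.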
What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below, but first, a bit more on the motivations.
Modularity, emergence, markets, transparency, and democracy
Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, and tracking services from many companies).
The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.
The challenge that Ferguson makes clear is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government) and government regulation where necessary (and only to the extent and duration necessary).
That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age that followed Gutenberg. Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and mobilize to deal with it.
Once that progresses and is found to be effective, similar methods can be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully notes the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, the principles of history, civics, and governance that are prerequisite to a working democracy.)
Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)
This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet" -- another tower.)
The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms, to be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (limited to apps created by Apple or Motorola or Verizon), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their fears). Why should first movers be allowed to block essential innovation? Why should they have sole control of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control, with just enough hierarchical structure to prevent excessive instability at any given time?
Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow for separate filters that inter-operate. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filters. This would be much like third-party analytics in financial market data systems. Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful.
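One way user-selectable filtering could work is a registry of pluggable filters that the platform applies on the user's behalf. This is a toy sketch under assumed names (`FILTERS`, `register_filter`, `build_feed`, and the sample post fields are all invented for illustration):

```python
# Registry of pluggable feed filters; third parties could register new ones.
FILTERS = {}


def register_filter(name):
    """Decorator that makes a filter selectable by name."""
    def wrap(fn):
        FILTERS[name] = fn
        return fn
    return wrap


@register_filter("keyword_mute")
def mute_politics(feed):
    # Drop posts mentioning a muted keyword (hard-coded here for simplicity).
    return [post for post in feed if "politics" not in post["text"].lower()]


@register_filter("friends_only")
def friends_only(feed):
    # Keep only posts from accounts the user follows reciprocally.
    return [post for post in feed if post["author_is_friend"]]


def build_feed(feed, user_choice):
    # Unknown or unset choice falls back to the platform's default (no-op here).
    fn = FILTERS.get(user_choice, lambda f: f)
    return fn(feed)
```

A user who wants a different experience just selects a different name -- or plugs in a filter the platform's owner never wrote.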
For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.
The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but what I propose seeks the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage).
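The cyclic, self-improving process described above can be sketched as a PageRank-style power iteration over an endorsement graph. "RateRank" is the post's coinage; this particular implementation is my own assumption of how such a toy version might look, not the author's algorithm:

```python
def rate_rank(endorsements, damping=0.85, iters=50):
    """Toy reputation scores: each person's authority flows to the people
    they endorse, weighted by the endorser's own (evolving) authority.
    `endorsements` maps every person to the list of people they endorse."""
    people = list(endorsements)
    n = len(people)
    rank = {p: 1.0 / n for p in people}  # start everyone equal
    for _ in range(iters):
        # Baseline share, as in PageRank's damping term.
        new = {p: (1 - damping) / n for p in people}
        for p, targets in endorsements.items():
            if not targets:
                continue  # dangling node: its mass simply leaks (toy version)
            share = damping * rank[p] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank
```

With `{"a": ["c"], "b": ["c"], "c": ["a"]}`, person "c" (endorsed by two others) ends up with the highest authority and "b" (endorsed by no one) the lowest -- authority is earned from the crowd, then recursively reweighted by it.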
Social graphs and user agents: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform's social graph? Does the platform own my social graph or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.
Email systems were modularized long ago, so that we compose and read mail using user agents (Outlook, Apple Mail, Gmail, and others) that connect with remote mail transfer agent servers (that we may barely be aware of), which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity. Why not do the same to let any social media user agent inter-operate with any other, using a common social graph? We would then set our user agent to apply filters to let us see whichever communities we want to see at any given time.
Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). These could then be clearly marked in feeds, and users could filter out anonymous or unverified users if desired.
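The multiple levels of identity described above could be as simple as a verification level stored with each account, which feeds surface and users filter on. A minimal sketch, with an assumed schema (the level names and post fields are illustrative, not any platform's actual data model):

```python
# Assumed verification levels, ordered from least to most authenticated.
ANON, PSEUDONYMOUS, VERIFIED_HUMAN, VERIFIED_ORG = 0, 1, 2, 3

posts = [
    {"author": "bot4096", "level": ANON, "text": "Win free coins!"},
    {"author": "jdoe", "level": VERIFIED_HUMAN, "text": "Thoughtful reply."},
]


def visible(posts, min_level):
    """A user who opts out of anonymous content simply raises min_level;
    a user who wants everything sets it to ANON."""
    return [p for p in posts if p["level"] >= min_level]
```

Feeds could also mark each level visually, so even users who keep everything visible can see at a glance which posts come from verified humans or organizations.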
Value transfers and extractions: Another important problem, and one that Ferguson cites, is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent. (I have done extensive work on such alternative strategies, as described in my FairPayZone blog and my book.)
Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties. (Sensitive data could be restricted to qualified organizations.) [This paragraph added 1/14.]
If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. But we could have the choice, and innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).
Augmentation of humans (and their networks)
Another very powerful aspect of networks and algorithms that Ferguson (and many others) neglects is the augmentation of human intelligence. This idea dates back some 60 years (and more), to when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to pursue hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of mundane uses of computers, that proved eye-opening ("the mother of all demos") at a 1968 conference, and was one of the key inspirations for modern user interfaces, hypertext, and the Web.
The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.
Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom-up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.
Not binary: networks versus hierarchies -- "Everything is deeply intertwingled"
Ted Nelson (who coined the term "hypertext" and was also one of the foundational visionaries of the Web) wrote in 1974 that "everything is deeply intertwingled." Ferguson's exposition illuminates how true that is of history. Unfortunately, his artificially binary dichotomy of hierarchies versus networks tends to mask this, and seems to blind him to how much more intertwingled we can expect our networks to be in the future. As Nelson put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."
It's a race: augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster
If we pull together to realize this potential, we can transcend the dichotomies and conflicts of the Square and the Tower that Ferguson reveals as so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, Ferguson seems to discount how non-linear the effect of smarter networks with more dynamically augmented and balanced structures can be. But he is right to be very fearful, and to raise the alarm -- we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement.
---
First published in Reisman on User-Centered Media, 1/13/18.
---
See the Selected Items tab for more on this theme.
Posted by Richard Reisman -- Sociotechnical network/systems thinker, visionary, inventor, pioneer | Author: @TechPolicyPress, FairPay | Nonresident Senior Fellow, Foundation for American Innovation