
Wednesday, July 24, 2019

To Regulate Facebook and Google, Turn Users Into Customers

First published in Techonomy, 2/26/19 -- and more timely than ever...

There is a growing consensus that we need to regulate Facebook, Google, and other large internet platforms that harm the public in large part because they are driven by targeted advertising.  The seductive idea that we can enjoy free internet services — if we just view ads and turn over our data — has been recognized to be “the original sin” of the internet.  These companies favor the interests of the advertisers they profit from more than the interests of their billions of users.  They are powerful tools for mass-customized mind-control. Selling their capabilities to the highest bidder threatens not just consumer welfare, but society and democracy.

There is a robust debate emerging about how these companies should be regulated. Many argue for controls on data use and objectionable content on these platforms. But poorly targeted regulation risks many adverse side effects -- for example, abridging legitimate speech, further entrenching the dominant platforms, and impeding innovation by making it too costly for others to compete.

But I believe we need to treat the disease, not just play whack-a-mole with the symptoms. It’s the business model, stupid! It is widely recognized that the root cause of the problem is the extractive, ad-funded business model that motivates manipulation and surveillance. The answer is to require these companies to shift to revenue streams that come from their users. Of course, shifting cold-turkey to a predominantly user-revenue-based model is hard. But we already have a simple, market-driven regulatory method that has proven its success on a similarly challenging problem -- forcing automakers to increase the fuel efficiency of the cars they make. Government has for years required staged, multi-year increases in Corporate Average Fuel Economy (CAFE). A similar strategy can be applied here.

This market-driven strategy does not mandate how to fix things. It instead mandates a measurable limit on the systems that have been shown to cause harm, and each service provider can determine on its own how best to achieve it. Require that X% of the revenue of any consumer data service come from its users rather than advertisers. Government can monitor progress and create a timetable for steadily ratcheting up the percentage. (This might apply only above some level of revenue, to limit constraints on small, innovative competitors.)
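To make the mechanism concrete, here is a minimal sketch of how such a staged mandate could be checked. The schedule years and percentages, and the small-firm revenue exemption, are purely illustrative assumptions of mine, not proposed values:

```python
# Illustrative sketch of the staged user-revenue mandate described above.
# All numbers are hypothetical, for exposition only.

SCHEDULE = {2026: 0.10, 2027: 0.20, 2028: 0.35, 2029: 0.50}  # required user-revenue share by year
SMALL_FIRM_EXEMPTION = 500_000_000  # annual revenue below this is exempt (illustrative)

def required_share(year: int) -> float:
    """Required fraction of total revenue that must come from users in a given year."""
    applicable = [share for y, share in SCHEDULE.items() if y <= year]
    return max(applicable, default=0.0)

def is_compliant(year: int, user_revenue: float, ad_revenue: float) -> bool:
    """Check a platform's revenue mix against the ratcheting mandate."""
    total = user_revenue + ad_revenue
    if total < SMALL_FIRM_EXEMPTION:  # don't constrain small, innovative competitors
        return True
    return user_revenue / total >= required_share(year)
```

The ratchet works like the staged CAFE targets: the required share only ever steps upward, and each platform is free to choose how it gets there.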

It is often said of our internet platforms that “if you are not the customer, you are the product.”  This concept may oversimplify, but it is deeply powerful.  With or without detailed regulations on privacy and data use, we need to shift platform incentives by making the user become the customer, increasingly over time.

Realigning incentives for ads and data.  Advertising can provide value to users – if it is targeted and executed in a way that is non-intrusive, relevant, and useful.  The best way to make advertising less extractive of user value is to quantify a “reverse meter” that gives users credit for their attention and data.  Some services already offer users the option to pay in order to avoid or reduce ads (Spotify is one example).  That makes the user the customer. Both advertisers and the platforms then benefit by managing user attention to maximize user value, rather than optimizing for exploitative “engagement.”

What if the mandated user revenue level is not met?  Government could tax away enough ad revenue to meet the target percentage.  That would provide a powerful incentive to address the problem.  In addition, that taxed excess ad revenue could fund mechanisms for oversight and transparency, for developing better solutions, and for remediating disinformation.
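The corrective tax described above can be computed directly: take away just enough ad revenue that the remaining mix meets the target share. A hypothetical sketch (the function name and formula framing are mine, not part of any formal proposal):

```python
def corrective_tax(user_revenue: float, ad_revenue: float, target_share: float) -> float:
    """Smallest ad-revenue tax that lifts the user-revenue share to target_share.

    Solves user / (user + ad - tax) >= target for the smallest non-negative tax.
    """
    if target_share <= 0:
        return 0.0
    # Ad revenue the platform may keep while still meeting the target share.
    allowed_ad = user_revenue * (1 - target_share) / target_share
    return max(0.0, ad_revenue - allowed_ad)
```

For example, a platform with $100M in user revenue and $900M in ad revenue, facing a 50% target, would owe $800M in tax, leaving a 50/50 revenue mix -- a strong incentive to shift the mix voluntarily instead.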

Can the platforms really shift to user revenue?  Zuckerberg has been a skeptic, but none of the big platforms has tried seriously.  When the platforms realize they must make this change, they will figure out how, even if it trims their exorbitant margins.
Users increasingly recognize that they must pay for digital services.  A system of reverse metering of ads and data use would be a powerful start.  Existing efforts that hint at the ultimate potential of better models include crowdfunding, membership models, and cooperatives. Other emerging variations promise to be adaptive to large populations of users with diverse value perceptions and abilities to pay.

A growing focus on customer value would move us back towards leveraging a proven great strength of humanity — the deeply cooperative behavior of traditional markets.

A simple mandate requiring internet platforms to generate a growing percentage of revenue from users will not cure all ills. But it is the simplest way to drive a fundamental shift toward better corporate behavior.

---
Coda, 7/24/19:

Since the original publication of this article, this issue has become even more timely, as the FTC and Justice Department begin deep investigation into the Internet giants. 

  • There is growing consensus that there is a fundamental problem with the ad- and data-based business model
  • There is also growing consensus that we must move beyond the narrow theory of antitrust that says there can be no "harm" in a free service that does not raise direct costs to consumers (but does raise indirect costs to them and limits competition). 
  • But the targeted strategies for forcing a fundamental shift in business models outlined here are still not widely known or considered
  • This article primarily focuses on these business model issues and regulatory strategies (including the auto fuel-economy model described here), and on how FairPay offers an innovative strategy that has gained recognition for generating user revenue in equitable ways -- ways that do not prevent a service like Facebook or Google from being affordable by all, even those with limited ability to pay.
  • It also links to a body of work "On the deeper issues of social media and digital democracy." That includes Google-like algorithms for getting smarter about the wisdom of crowds, and structural strategies for regulation based on the specific architecture of the platforms and how power should be modularized (much as smart modularization was applied to regulating the Bell System and enabling the decades of robust innovation we now enjoy.)

Tuesday, May 21, 2019

Reisman in Techonomy: Business Growth is Actually Good for People. Here’s Why.

My 4th piece in Techonomy was published today:
Business Growth is Actually Good for People. Here’s Why.

Blurb:
We cannot—and should not—stop growing. Sensible people know the real task is to channel growth to serve human ends.
My opening and closing:
Douglas Rushkoff gave a characteristically provocative talk last week at Techonomy NYC – which provoked me to disagree strongly...
...Rushkoff delivers a powerful message on the need to re-center on human values. But his message would be more effective if it acknowledged the power of technology and growth instead of indiscriminately railing against it. We need a reawakening of human capitalism — and a Manhattan Project to get tech back on track. That will make us a better team human.

Thursday, April 26, 2018

Architecting Our Platforms to Better Serve Us -- Augmenting and Modularizing the Algorithm

We dreamed that our Internet platforms would serve us miraculously, but now see that they have taken a wrong turn in many serious respects. That realization has reached a crescendo in the press and in Congress with regard to Facebook and Google's advertising-driven services, but it reaches far more deeply.

"Titans on Trial: Do internet giants have too much power? Should governments intervene?" -- I had the honor last night of attending this stimulating mock trial, with author Ken Auletta as judge and FTC Commissioner Terrell McSweeny and Rob Atkinson, President of the Information Technology and Innovation Foundation (ITIF), as opposing advocates (hosted by Genesys Partners). My interpretation of the jury verdict (voted by all of the attendees, who were mostly investors or entrepreneurs) was: yes, most agree that regulation is needed, but it must be nuanced and smartly done, not heavy-handed. Just how to do that will be a challenge, but it is a challenge that we must urgently consider.

I have been outlining views on this that go in some novel directions, but are generally consistent with the views of many other observers. This post takes a broad view of those suggestions, drawing from several earlier posts.

One of the issues touched on below is a core business model issue -- the idea that the ad-model of "free" services in exchange for attention to ads is "the original sin of the Internet." It has made users of Facebook and Google (and many others) "the product, not the customer," in a way that distorts incentives and fails to serve the user interest and the public interest. As the Facebook fiasco makes clear, these business model incentives can drive these platforms to provide just enough value to "engage" us to give up our data and attend to the advertiser's messages and manipulation and even to foster dopamine-driven addiction, but not necessarily to offer consumer value (services and data protection) that truly serves our interests.

That issue is specifically addressed in a series of posts on my other blog, which focuses on a novel approach to business models (and regulation that centers on that); those posts remain the most focused presentations on those particular issues.
The rest of this post adapts a broader outline of ideas previously embedded in a book review (of Niall Ferguson's "The Square and the Tower: Networks and Power from the Freemasons to Facebook," a historical review of power in the competing forms of networks and hierarchies). Here I abridge and update that post to concentrate on our digital platforms. (Some complementary points on the need for new thinking on regulation -- and the need for greater tech literacy and nuance -- are in a recent HBR article, "The U.S. Needs a New Paradigm for Data Governance.")

Rethinking our networks -- and the algorithms that make all the difference

Drawing on my long career as a systems analyst/engineer/designer, manager, entrepreneur, inventor, and investor (including early days in the Bell System when it was a regulated monopoly providing "universal service"), I have recently come to share the fear of many that we are going off the rails.

But in spite of the frenzy, it seems we are still failing to refocus on better ways to design, manage, use, and govern our networks -- to better balance the best of hierarchy and openness. Few who understand technology and policy are yet focused on the opportunities that I see as reachable, and now urgently needed.

New levels of man-machine augmentation and new levels of decentralizing and modularizing intelligence can make these networks smarter and more continuously adaptable to our wishes, while maintaining sensible and flexible levels of control -- and with the innovative efficiency of an open market. We can build on distributed intelligence in our networks to find more nuanced ways to balance openness and stability (without relying on unchecked levels of machine intelligence). Think of it as a new kind of systems architecture for modular engineering of rules that blends top-down stability with bottom-up emergence, to apply checks and balances that work much like our representative democracy. This is a still-formative development of ideas that I have written about for years, and plan to continue into the future.

First some context. The crucial differences among all kinds of networks (including hierarchies) are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. We now have the power to create a new synthesis. Modern computer-based networks enable our algorithms to be far more nuanced and dynamically variable. They become far more emergent in both structure and policy, while still subject to basic constraints needed for stability and fairness.

Traditional networks have rules that are either relatively open (but somewhat slow to change), or constrained by laws and customs (and thus resistant to change). Even our current social and information networks are constrained in important ways. Some examples:
  • The US constitution defines the powers and the structures for the governing hierarchy, and processes for legislation and execution, made resilient by its provisions for self-amendable checks and balances. 
  • Real-world social hierarchies have structures based on empowered people that tend to shift more or less slowly.
  • Facebook has a social graph that is emergent, but the algorithms for filtering who sees what are strictly controlled by, and private to, Facebook. (In January they announced a major change --  unilaterally -- perhaps for the better for users and society, if not for content publishers, but reports quickly surfaced that it had unintended consequences when tested.)
  • Google has a page graph that is given dynamic weight by the PageRank algorithm, but the management of that algorithm is strictly controlled by Google. It has been continuously evolving in important respects, but the details are kept secret to make it harder to game.
Our vaunted high-tech networks are controlled by corporate hierarchies (FANG: Facebook, Amazon, Netflix, and Google in much of the world, and BAT: Baidu, Alibaba, and Tencent in China) -- but are subject to limited levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change -- and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network -- meaning us, the users -- mostly indirectly, in the form of advertising and sales of personal data.

Smarter, more open and emergent algorithms -- APIs and a common carrier governance model

The answer to the question of governance is to make our network algorithms not only smarter, but more open to appropriate levels of individual and multi-party control. Business monopolies or oligarchies (or governments) may own and control essential infrastructure, but we can place limits on what they control and what is open. In the antitrust efforts of the past century governments found need to regulate rail and telephone networks as common carriers, with limited corporate-owner power to control how they are used, giving marketplace players (competitors and consumers) a share in that control. 

Initially this was rigid and regulated in great detail by the government, but the Carterfone decision showed how to open the old AT&T Bell System network to allow connection of devices not tested and approved by AT&T. Many forget how only AT&T phones could be used (except for a few cases of alternative devices like early fax machines that went through cumbersome and often arbitrary AT&T approval processes). Remember the acoustic modem coupler, needed because modems could not be directly connected? That changed when the FCC's decision opened the network up to any device that met defined electrical interface standards (using the still-familiar RJ11, a "Registered Jack").

Similarly only AT&T long-distance connections could be used, until the antitrust Consent Decree opened up competition among the "Baby Bells" and broke them off from Long Lines to compete on equal terms with carriers like MCI and Sprint. Manufacturing was also opened to new competitors.

In software systems, such plug-like interfaces are known as APIs (Application Program Interfaces), and are now widely accepted as the standard way to let systems interoperate with one another -- just enough, but no more -- much like a hardware jack does. This creates a level of modularity in architecture that lets multiple systems, subsystems, and components  interoperate as interchangeable parts -- extending the great advance of the first Industrial Revolution to software.

What I suggest as the next step in evolution of our networks is a new kind of common carrier model that recognizes networks like Facebook, Google, and Twitter as common utilities once they reach some level of market dominance. Then antitrust protections would mandate open APIs to allow substitution of key components by customers -- to enable them to choose from an open market of alternatives that offer different features and different algorithms. Some specific suggestions are below (including the very relevant model of sophisticated interoperability in electronic mail networks), but first, a bit more on the motivations.

Modularity, emergence, markets, transparency, and democracy

Systems architects have long recognized that modularity is essential to making complex systems feasible and manageable. Software developers saw from the early days that monolithic systems did not scale -- they were hard to build, maintain, or modify. (The tar pit picture here is from Fred Brooks's classic 1975 book, The Mythical Man-Month, which drew on IBM's first large software project.) Web 2.0 extended that modularity to our network services, using network APIs that could be opened to the marketplace. Now we see wonderful examples of rich applications in the cloud that are composed of elements of logic, data, and analytics from a vast array of companies (such as travel services that seamlessly combine air, car rental, hotel, local attractions, loyalty programs, advertising, and tracking services from many companies).

The beauty of this kind of modularity is that systems can be highly emergent, based on the transparency and stability of published, open APIs, to quickly adapt to meet needs that were not anticipated. Some of this can be at the consumer's discretion, and some is enabled by nimble entrepreneurs. The full dynamics of the market can be applied, yet basic levels of control can be retained by the various players to ensure resilience and minimize abuse or failures.

The challenge is how to apply hierarchical control in the form of regulation in a way that limits risks, while enabling emergence driven by market forces. What we need is new focus on how to modularize critical common core utility services and how to govern the policies and algorithms that are applied, at multiple levels in the design of these systems (another, more hidden and abstract, kind of hierarchy). That can be done through some combination of industry self-regulation (where a few major players have the capability to do that, probably faster and more effectively than government), but by government where necessary (preferably only to the extent and duration necessary).

That obviously will be difficult and contentious, but it is now essential, if we are not to endure a new age of disorder, revolution, and war much like the age of religious war that followed Gutenberg (as Ferguson described). Silicon Valley and the rest of the tech world need to take responsibility for the genie they have let out of the bottle, and to mobilize to deal with it, and to get citizens and policymakers to understand the issues.

Once that progresses and is found to be effective, similar methods may eventually be applied to make government itself more modular, emergent, transparent, and democratic -- moving carefully toward "Democracy 2.0." (The carefully part is important -- Ferguson rightfully noted the dangers we face, and we have done a poor job of teaching our citizens, and our technologists, even the traditional principles of history, civics, and governance that are prerequisite to a working democracy.)

Opening the FANG walled gardens (with emphasis on Facebook and Google, plus Twitter)

This section outlines some rough ideas. (Some were posted in comments on an article in The Information by Sam Lessin, titled, "The Tower of Babel: Five Challenges of the Modern Internet.")

The fundamental principle is that entrepreneurs should be free to innovate improvements to these "essential" platforms -- which can then be selected by consumer market forces. Just as we moved beyond the restrictive walled gardens of AOL, and the early closed app stores (initially limited to apps created by Apple), we have unleashed a cornucopia of innovative Web services and apps that have made our services far more effective (and far more valuable to the platform owners as well, in spite of their early fears). Why should first movers be allowed to block essential innovation? Why should they have sole control and knowledge of the essential algorithms that are coming to govern major aspects of our lives? Why shouldn't our systems evolve toward fitness functions that we control and understand, with just enough hierarchical structure to prevent excessive instability at any given time?

Consider the following specific areas of opportunity.

Filtering rules. Filters are central to the function of Facebook, Google, and Twitter. As Ferguson observes, there are issues of homophily, filter bubbles, echo chambers, fake news, and spoofing that are core to whether these networks make us smart or stupid, and whether we are easily manipulated to think in certain ways. Why do we not mandate that platforms be opened to user-selectable filtering algorithms (and/or human curators)? The major platforms can control their core services, but could allow users to select separate filters that interoperate with the platform. Let users control their filters, whether just by setting key parameters, or by substituting pluggable alternative filter algorithms. (This would work much like third-party analytics in financial market data systems.) Greater competition and transparency would allow users to compare alternative filters and decide what kinds of content they do or do not want. It would stimulate innovation to create new kinds of filters that might be far more useful and smart.
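As a rough illustration of what "pluggable" filters could look like, consider a platform API in which the feed-ranking step is simply a function the user can swap out. This is my own minimal sketch with hypothetical names, not any platform's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    topic: str

# A feed filter is any function from candidate posts to the subset (and order) shown.
FeedFilter = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    """A default filter: show everything, in the order received."""
    return posts

def topic_filter(topic: str) -> FeedFilter:
    """A third-party filter a user might plug in instead of the default."""
    def fn(posts: List[Post]) -> List[Post]:
        return [p for p in posts if p.topic == topic]
    return fn

def render_feed(posts: List[Post], user_filter: FeedFilter) -> List[str]:
    # The platform exposes candidate posts through the open API; the user's
    # chosen filter decides what is actually displayed.
    return [f"{p.author}: {p.text}" for p in user_filter(posts)]
```

The point of the sketch is the interface boundary: the platform owns the candidate pool, while an open market of interchangeable filters (here, any `FeedFilter`) competes on what reaches the user.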

For example, I have proposed strategies for filters that can help counter filter bubble effects by being much smarter about how people are exposed to views that may be outside of their bubble, doing it in ways that they welcome and want to think about. My post, Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators" explains the need, and how that might be done. The key idea is to assign levels of authority to people based on the reputational authority that other people ascribe to them (think of it as RateRank, analogous to Google's PageRank algorithm). This approach also suggests ways to create smart serendipity, something that could be very valuable as well.

The "wisdom of the crowd" may be a misnomer when the crowd is an undifferentiated mob, but I propose seeking the wisdom of the smart crowd -- first using the crowd to evaluate who is smart, and then letting the wisdom of the smart sub-crowd emerge, in a cyclic, self-improving process (much as Google's algorithm improves with usage, and much as science is open to all, but driven by those who gain authority, temporary as that may be).
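"RateRank" is my hypothetical name for such an algorithm; mechanically it could work like PageRank's power iteration, but over a graph of peer endorsements rather than hyperlinks. A toy sketch under that assumption, not a specification of the proposal:

```python
def rate_rank(endorsements: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Iteratively assign authority from peer endorsements (PageRank-style sketch).

    endorsements maps each person to the list of people they vouch for.
    Authority flows toward people whom well-regarded people endorse.
    """
    people = set(endorsements) | {v for vs in endorsements.values() for v in vs}
    n = len(people)
    rank = {p: 1.0 / n for p in people}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in people}  # baseline "teleport" share
        for person, vouched in endorsements.items():
            if vouched:
                share = damping * rank[person] / len(vouched)
                for v in vouched:
                    new[v] += share
            else:
                # Someone who endorses no one: spread their weight evenly.
                for p in people:
                    new[p] += damping * rank[person] / n
        rank = new
    return rank
```

As with PageRank, the scores form a probability distribution, and the cycle is self-improving: each round re-weights endorsements by the current standing of the endorsers.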

Social graphs: Why do Facebook, Twitter, LinkedIn, and others own separate, private forms of our social graph? Why not let other user agents interoperate with a given platform’s social graph? Does the platform own the data defining my social graph relationships or do I? Does the platform control how that affects my filter or do I? Yes, we may have different flavors of social graph, such as personal for Facebook and professional for LinkedIn, but we could still have distinct sub-communities that we select when we use an integrated multi-graph, and those could offer greater nuance and flexibility with more direct user control.

User agents versus network service agents: Email systems were modularized in Internet standards long ago, so that we compose and read mail using user agents (Outlook, Apple mail, Gmail, and others) that connect with federated remote mail transfer agent servers (that we may barely be aware of) which interchange mail with any other mail transfer agent to reach anyone using any kind of user agent, thus enabling universal connectivity.

Why not do much the same, to let any social media user agent interoperate with any other, using a federated social graph and federated message transfer agents? We could then set our user agent to apply filters to let us see whichever communities we want to see at any given time. Some startups have attempted to build stand-alone social networks that focus on sub-communities like family or close friends versus hundreds of more or less remote acquaintances. Why not just make that a flexible and dynamic option, that we can control at will with a single user agent? Why require a startup to build and scale all aspects of a social media service, when they could just focus on a specific innovation? (The social media UX can be made interoperable to a high degree across different user agents, just as email user agents handle HTML, images, attachments, emojis, etc. -- and as do competing Web browsers.)

Identity: A recurring problem with many social networks is abuse by anonymous users (often people with many aliases, or even just bots). Once again, this need not be a simple binary choice. It would not be hard to have multiple levels of participant, some anonymous and some with one or more levels of authentication as real human individuals (or legitimate organizations). First class users would get validated identities, and be given full privileges, while anonymous users might be permitted but clearly flagged as such, with second class privileges. That would allow users to be exposed to anonymous content, when desired, but without confusion as to trust levels. Levels of identity could be clearly marked in feeds, and users could filter out anonymous or unverified users if desired. (We do already see some hints of this, but only to a very limited degree.)
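The graded, non-binary identity scheme described above amounts to an ordered set of trust levels that feeds into user-controlled filtering. A minimal sketch; the level names and structure are my illustrative assumptions:

```python
from enum import IntEnum

class IdentityLevel(IntEnum):
    """Ordered trust levels; higher values mean stronger authentication."""
    ANONYMOUS = 0        # permitted, but clearly flagged as such
    EMAIL_VERIFIED = 1   # weak authentication
    HUMAN_VERIFIED = 2   # validated as a real individual or organization

def visible(posts: list, min_level: IdentityLevel) -> list:
    """Each user sets the minimum identity level they want in their own feed."""
    return [p for p in posts if p["level"] >= min_level]
```

Because the levels are ordered, "filter out anonymous users" is just a threshold the user sets, while others can keep the threshold at zero and simply see the flags.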

Value transfers and extractions: As noted above, another very important problem is that the new platform businesses are driven by advertising and data sales, which means the consumer is not the customer but the product. Short of simply ending that practice (to end advertising and make the consumer the customer), those platforms could be driven to allow customer choice about such intrusions and extractions of value. Some users may be willing to opt in to such practices, to continue to get "free" service, and some could opt out, by paying compensatory fees -- and thus becoming the customer. If significant numbers of users opted to become the customer, then the platforms would necessarily become far more customer-first -- for consumer customers, not the business customers who now pay the rent.

I have done extensive work on alternative strategies that adaptively customize value propositions and prices to markets of one -- a new strategy for a new social contract that can shape our commercial relationships to sustain services in proportion to the value they provide, and our ability to pay, so all can afford service. A key part of the issue is to ensure that users are compensated for the value of the data they provide. That can be done as a credit against user subscription fees (a "reverse meter"), at levels that users accept as fair compensation. That would shift incentives toward satisfying users (effectively making the advertiser their customer, rather than the other way around). This method has been described in the Journal of Revenue and Pricing Management: “A novel architecture to monetize digital offerings,” and very briefly in Harvard Business Review. More detail is in my FairPayZone blog and my book (see especially the posts about the Facebook and Google business models that are listed in the opening section, above, and again at the end.*)
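At its simplest, the "reverse meter" reduces to crediting the user's attention and data contributions against a subscription fee. A minimal sketch with hypothetical rates (the cited papers develop much richer, adaptive pricing models):

```python
def monthly_bill(base_fee: float, ad_views: int, data_items_shared: int,
                 per_ad_credit: float, per_item_credit: float) -> float:
    """Reverse meter: credit attention and data against the subscription fee.

    All rates are illustrative placeholders, not values from the cited work.
    """
    credit = ad_views * per_ad_credit + data_items_shared * per_item_credit
    return max(0.0, base_fee - credit)  # a heavy ad viewer may owe nothing
```

A user who accepts enough ads and data use pays little or nothing (much like today's "free" service, but now explicitly priced), while a user who opts out pays the full fee and becomes the customer outright.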

Analytics and metrics: We need access to relevant usage data and performance metrics to help test and assess alternatives, especially when independent components interact in our systems. Both developers and users will need guidance on alternatives. The Netflix Prize contests for improved recommender algorithms provided anonymized test data from Netflix to participant teams. Concerns about Facebook's algorithm, and the recent change that some testing suggests may do more harm than good, point to the need for independent review. Open alternatives will increase the need for transparency and validation by third parties.

(Sensitive data could be restricted to qualified organizations, with special controls to avoid issues like the Cambridge Analytica misuse.) The answer to such abuse is not greater concentration of power in one platform, as Maurice Stucke points out in Harvard Business Review: "Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data." (Facebook has already moved toward greater concentration of power.)

If such richness sounds overly complex, remember that complexity can be hidden by well-designed user agents and default rules. Those who are happy with a platform's defaults need not be affected by the options that other users might enable (or swap in) to customize their experience. We do that very successfully now with our choice of Web browsers and email user agents. We could have similar flexibility and choice in our platforms -- innovations that are valuable can emerge for use by early adopters, and then spread into the mainstream if success fuels demand. That is the genius of our market economy -- a spontaneous, emergent process for adaptively finding what works and has value -- in ways more effective than any hierarchy (as Ferguson extols, with reference to Smith, Hayek, and Levitt).

Augmentation of humans (and their networks)

Another very powerful aspect of networks and algorithms that many neglect is the augmentation of human intelligence. This idea dates back some 60 years (and more), when "artificial intelligence" went through its first hype cycle -- Licklider and Engelbart observed that the smarter strategy is not to seek totally artificial intelligence, but to seek hybrid strategies that draw on and augment human intelligence. Licklider called it "man-computer symbiosis," and used ARPA funding to support the work of Engelbart on "augmenting human intellect." In an age of arcane and limited uses of computers, that proved eye-opening at a 1968 conference ("the mother of all demos"), and was one of the key inspirations for modern user interfaces, hypertext, and the Web.

The term augmentation is resurfacing in the artificial intelligence field, as we are once again realizing how limited machine intelligence still is, and that (especially where broad and flexible intelligence is needed) it is often far more effective to seek to apply augmented intelligence that works symbiotically with humans, retaining human visibility and guidance over how machine intelligence is used.

Why not apply this kind of emergent, reconfigurable augmented intelligence to drive a bottom up way to dynamically assign (and re-assign) authority in our networks, much like the way representative democracy assigns (and re-assigns) authority from the citizen up? Think of it as dynamically adaptive policy engineering (and consider that a strong bottom-up component will keep such "engineering" democratic and not authoritarian). Done well, this can keep our systems human-centered.

Reality is not binary:  "Everything is deeply intertwingled"

Ted Nelson (who coined the term "hypertext" and was another of the foundational visionaries of the Web), wrote in 1974 that "everything is deeply intertwingled." As he put it, "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."

It's a race:  augmented network hierarchies that are emergently smart, balanced, and dynamically adaptable -- or disaster

If we pull together to realize this potential, we can transcend the dichotomies and conflicts that are so wickedly complex and dangerous. Just as Malthus failed to account for the emergent genius of civilization, and the non-linear improvements it produces, many of us discount how non-linear the effect of smarter networks, with more dynamically augmented and balanced structures, can be. But we are racing along a very dangerous path, and are not being nearly smart or proactive enough about what we need to do to avert disaster. What we need now is not a top-down command and control Manhattan Project, but a multi-faceted, broadly-based movement, with elements of regulation, but primarily reliant on flexible, modular architectural design.


[Update 12/14/20] A specific proposal - Stanford Working Group on Platform Scale

An important proposal that gets at the core of the problems in media platforms was published in Foreign Affairs: How to Save Democracy From Technology, by Francis Fukuyama and others. See also the report of the Stanford Working Group. The idea is to let users control their social media feeds with an open market of interoperable filters. That is something I proposed here (in the "Filtering rules" section, above). Other regulatory proposals that include some of the suggestions made here are summarized in Regulating our Platforms -- A Deeper Vision.
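As a rough sketch of how such interoperable filters might plug in (purely illustrative -- the names and interface here are my assumptions, not the Working Group's design), the key idea is that the platform supplies candidate posts and the user chooses a third-party ranking function:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float  # platform's raw engagement prediction

# A "filter" is any third-party ranking function the user selects --
# the platform supplies candidate posts, the chosen filter orders them.
FeedFilter = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    return list(posts)  # assume the platform supplies posts in time order

def low_outrage(posts: List[Post]) -> List[Post]:
    # hypothetical third-party filter: demote posts optimized for
    # raw engagement by sorting lowest-engagement first
    return sorted(posts, key=lambda p: p.engagement_score)

def render_feed(posts: List[Post], chosen: FeedFilter) -> List[str]:
    return [p.text for p in chosen(posts)]
```

The point of the interoperability proposal is that `chosen` could come from any competing vendor the user trusts, not from the platform itself.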

---
See the Selected Items tab for more on this theme.

---

Coda:  On becoming more smartly intertwingled

Everything in our world has always been deeply intertwingled. Human intellect augmented with technology enables us to make our world more smartly intertwingled. But we have lost our way, in the manner that Engelbart alluded to in his illustration of de-augmentation -- we are becoming deeply polarized, addicted to self-destructive dopamine-driven engagement without insight or nuance. We are being de-augmented by our own technology run amok.


(I plan to re-brand this blog as "Smartly Intertwingled" -- that is the objective that drives my work. The theme of "User-Centered Media" is just one important aspect of that.)


--------------------------------------------------------------------------------------------

*On business models - FairPay (my other blog):  As noted above, a series of posts in my other blog focus on a novel approach to business models (and regulation that centers on that), and those posts remain my best presentation on those issues:

Monday, June 19, 2017

Bricks and Clicks -- Showrooming, Riggio, and Bezos

Last week brought two notable news items about Amazon and the future of retail. Most noted was the deal to buy Whole Foods, which would greatly accelerate Amazon's move to the center of the still-rudimentary combination of bricks and clicks. Drawing more limited attention was the issuance of an Amazon patent (filed in May 2012) on smart ways to turn showrooming to a store's advantage.

This was especially relevant to me for several reasons:
  • I have been dabbling in bricks and clicks for at least twenty years, since I tried (to no avail) to pitch Steve Riggio of Barnes & Noble on a bricks-and-clicks strategy to counter Amazon.
  • It was gratifying to see that the Amazon patent cited seventeen of my patents or applications as prior art. 
  • My 2013 post, The Joy of Showrooming:  From Profit Drain to Profit Center, outlined promising ideas much along the lines of aspects of Amazon's patent. My ideas were for collaboration of the showroom owner and the Web competitor, to optimize the best value exchange, based on showrooming referral fees that compensate the showroom owner for the showroom service provided.
The IoBC (Internet of Bricks and Clicks)

The Internet of Things has gotten much press about how it connects not just people and businesses, but literally everyThing. Few grasp even the early impacts of this sea change, and even the technically sophisticated can only dimly grasp where it will take us.

Bricks and clicks is just an aspect of that. It is much like Saul Steinberg's famous New Yorker cover, "View of the World from 9th Avenue."
  • Traditional retail businesses view online from their store-centered perspective, adding online services to counter and co-opt the enemy attack.
  • Online businesses view stores from the online-centered perspective, dabbling in stores and depots to expand their beachhead.
  • Only a few, like Bezos, see the big picture of an agnostic, flexible blend of resources and capabilities that most effectively provide what we want, when and how we want it.
To see the larger view we must climb above our attachments to stores and warehouses, or Web sites and apps. We must consider the objectives of the customers and how best to give them whatever they want, with whatever resources can be applied, as costs permit. How we orchestrate those resources to meet those objectives will change rapidly as our systems and their integration improve. Only the most far-sighted and nimble will see and go more than a few steps down this path.

The WSJ op-ed on the merger and the article on the patent give some hint of the kind of changes we can look forward to (I expand on that below). Fasten your seat belts; it's going to be a bumpy ride.

The User-Centered view

This blog, User-Centered Media, is focused on my dominant perspective on technology -- as a tool to improve our lives. While many of the most creative people in technology work for businesses that sell technology products (resources) for others to use, for most of my career I worked for companies that wanted to use technology. Vendors want to make what they know and sell what they make. Users want to find whatever resources they need and put them together to help their people do things -- whether the resources are people, computers, Web sites, networks, devices, stores, warehouses, or transport.

So far, bricks and clicks have been developing from the two poles, and that has not taken us very far. But we may be at an inflection point:
  • The two articles I cited above give a hint of what we will begin to see form in the middle. Amazon has been in the lead and just took a big step forward. The WSJ op-ed observes: "Mr. Bezos’s ambition is...oriented toward accelerating consumer gratification however possible."
  • Additional perspectives on Bezos' user-centered innovation style were in the Times a few days ago
  • In a NY Times Upshot article today quoting Erik Brynjolfsson, "The bigger and more profound way that technology affects jobs is by completely reinventing the business model...Amazon didn’t go put a robot into the bookstores and help you check out books faster. It completely reinvented bookstores."
  • More detail on the guts of integration is reviewed in another Times article today
  • Still another Times article today looks at the symmetry of the Amazon-Whole Foods deal and the Walmart-Bonobos deal. 
  • And still another looks at the similar synergies of the Target investment in Casper. 
Bezos is well on the way to this broad reinvention of retail, but, as suggested in a recent (May!) article by my old friend Gary Arlen on how VR/AR might factor into this (and our online comments back and forth), we are still far from game over.

With that, I indulge in some comments on my own dabblings in this space.

The Barnes & Noble that might have been

Just over twenty years ago I was invited to a B&N focus group when Amazon was just nipping at their heels and they had just built their Web site. Based on my reactions to what I saw, I wrote to Riggio, then COO, on May 8, 1997:
B&N on the Web presents an exciting opportunity to leverage your store-based business with your online business at three levels:
  • In-store uses of the Web as a kiosk to provide a new richness in self-service (using Firefly, book reviews, and other aids). 
  • Hybrid uses of remote Web access as a prelude to a store visit (such as to pre-select a book, find the nearest store that has it, and put it on hold for pickup)
  • Remote uses that you are off to a good start on (featuring collaborative filtering and other recommender and community-building tools)
I can help not only with the new media side of this, but also with the back-end integration. 
Just first steps -- but all in software, with no need for any changes at all to their physical logistics. Where might B&N be now had Riggio responded to my letter?

The Joy of Showrooming

In May 2013 (May seems a good month for this), I did a post, The Joy of Showrooming: From Profit Drain to Profit Center. It addresses new methods much the same as those in the Amazon patent (which was filed a year earlier than my post, so I am glad I chose not to apply for a patent!). After some comments on the emerging concerns about showrooming and price-checking apps, I said:
What I suggest is to take this threat, and view it as an opportunity:
  • What if showrooming activity could be tracked, and e-tailers convinced to pay a "showroom fee" to the provider of a showrooming service, if the sale came from that showroom? 
  • What if the retailer could filter Internet traffic from their store, and trace which URLs are for competitors, track purchase transactions that emanate from the store, and pass through only those that go to retailers that agree to pay the fee? 
There are a number of ways this can be done, and that can lead to a new retail ecology that benefits all.
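To make those two bullets concrete, here is a minimal sketch of how a store's network might tag and filter such traffic. The domain names, fee rates, and interface are hypothetical illustrations of the general idea, not the specific mechanism in my post or in Amazon's patent:

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical registry of e-tailers that have agreed to pay a
# showroom referral fee (domain -> fee rate).
PARTICIPATING_ETAILERS = {"example-etailer.com": 0.05}

def route_request(url: str, store_id: str) -> Optional[dict]:
    """Tag an outbound purchase request from the store's network with
    the referring showroom, or return None to block competitors that
    have not agreed to pay the fee."""
    rate = PARTICIPATING_ETAILERS.get(urlparse(url).netloc)
    if rate is None:
        return None  # not a fee-paying partner: don't relay
    return {"url": url, "referrer_store": store_id, "fee_rate": rate}

def referral_fee(tagged: dict, sale_amount: float) -> float:
    # Fee owed to the showroom when the tracked sale completes.
    return round(sale_amount * tagged["fee_rate"], 2)
```

The design choice here is that the showroom is compensated per completed sale rather than per visit, which aligns the store's and the e-tailer's incentives.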
It is not clear where Amazon wants to go with this, and whether they want to build an ecosystem with other strong players. But as Arlen and I agree, these are still early days -- others may seize the opportunity to build a showrooming referral ecosystem (but now should consider to what extent the Amazon patent might impede them).

(I have not checked which of my patents were cited by Amazon, but in general my inventions relate to a user-centered view of networked resources, and of the IoT. And for those who might care about IP issues, all of my patents have been sold, and I am not actively involved in getting more.)

[Update 6/20: A Forrester blog post made an interesting observation: "Amazon knows that to win at brick and mortar, retail theater is paramount. Whole Foods locations are destinations where the idea of “Retail Theater” still thrives." Showrooming is theater.]

The Future of Retail

In 2014 I co-led the very stimulating MIT Enterprise Forum of NYC: Think Tank Session III: The Future of Retail: Reinventing how people get stuff (one of a series on The Future of X). This Think Tank brought together thirteen "lead participants" with diverse expertise in the field for an open brainstorming with about sixty other technology entrepreneur "participants." While aspects of the video are now dated, those who care to look are likely to find many examples of forward thinking that are still timely.

And now for something not completely different

While most of my work on my current main project, FairPay, is not specifically oriented to bricks, it does bring the same kind of user-centered, customer-first thinking to retail. Much of the focus is on the digital realm, but it suggests some new directions that relate to the physical realm as well. For more ideas on those aspects of the future of retail, check out my other blog, The FairPay Zone.

---
[Update 3/31/18: Sloan Management Review published an excellent article on this theme: The Store Is Dead — Long Live the Store.]



Thursday, October 15, 2015

Patents for Entrepreneurs – Crown Jewels or Shiny Objects? -- MITEF-NYC Panel NYC 11/19

“If you don’t have a patent, you don’t have a prayer on Shark Tank,” as John Oliver began his diatribe on the problems with patents.  Black humor with questionable substance, but never has there been such widespread and deep confusion about patents, from the man on the street, to the press, the courts, Congress, the Supreme Court, and President Obama.  Joking aside, how should entrepreneurs view patents?
That is the subject of this MIT Enterprise Forum of NYC panel session on 11/19 that I am co-organizing:  Patents for Entrepreneurs – Crown Jewels or Shiny Objects?
We assemble a panel of entrepreneurs who have successfully navigated these issues and shepherded companies through the life-cycle of seeking and using patents -- working with investors and licensees.  We bolster that with patent lawyers who can update us on the fundamental legal turmoil that bears on this.
This is not Patents 101 -- it is aimed at a strategic perspective for entrepreneurs and those investing in their companies.

The John Oliver bit is very funny, but does a disservice to the real issues of why the patent system is valuable. For a perspective on the harder reality, check out this post on a respected IP blog, A toxic concoction of myth, media and money is killing the patent system.

But this is also not a debate on IP policy -- the focus will be on understanding the current landscape, directions, and uncertainties for good or bad, to address the strategic questions of whether and how young companies should seek patents.

(My personal view is that while there should be an important place for patents, those trying to fix the system have broken it so badly that the value of patents for many kinds of innovation is now highly doubtful -- at least until the pendulum swings back a bit. I did well as an inventor with patents in the past, but am no longer spending much time on that now.)



Thursday, December 12, 2013

"The Future of TV" - MITEF-NYC Think Tank Session 1/14 (First in a Series)

Given our success at thought leadership in MITEF-NYC events, we are trying a new kind of Think Tank session that builds on the inventiveness of our MITEF community.  The Future of TV is the first of a series of Think Tank sessions on different industries, "The Future of X."
MITEF Think Tank Session: The Future of TV - January 14, 2014, NYC
Propositions for an audience that has seized (the remote) control

The first of a series of MITEF Think Tank Sessions on The Future of X
Tech-based opportunities for changing industries and changing audiences


...It is up to the brightest minds in technology to find solutions to these exciting challenges -- and how to profit from them. MITEF-NYC is therefore introducing a new event format. In a highly engaged and interactive setting, a Think Tank session will gather ideas and concrete answers on how technology and innovation can shape the future of television.
Some prominent industry participants (including seasoned executives and consultants, respected columnists, and successful entrepreneurs and inventors) are already registered, and space is filling up. Now is the time to register if you have not already.

We look forward to a stimulating session with strong audience participation. We hope not only to generate lots of good ideas for innovators, but also to learn how this format works and can be extended.

Future events might target other industries in disruptive transitions where technology is both a challenge and an opportunity, such as publishing, music, retail, transportation, and education.

Some background on our Think Tank format...

As a long-time co-chair of the programming committee, and co-organizer of this event, I am enjoying the process of trying this new format. I have been involved in applying several successful formats at MITEF since the late '90s, notably panel sessions, and we have done many ground-breaking events that established our reputation for thought-leading coverage of important developments, panels of top-level innovators, and sophisticated audiences of entrepreneurs and those who work with them.

The Think Tank idea arose at a member/volunteer brainstorming session we did in August to generate new program ideas for the coming year, and emerged as the brainchild of one of our recently joined members, now co-organizer of this event, Katja Bartholmess. She is an energetic champion of new ideas with an eclectic background in e-commerce and branding. She pointed out that we have such smart audiences that maybe we don't need formal speakers and panelists, just a little catalyzing. We began to work together, settled on TV as the place to start the series, and are working to find a format that is manageable, brings out the best in our base of attendees, and lets them create. We thought of calling it a salon, brainstorming, or a round table, but went with think tank. We see our roles as facilitators, to help frame and herd the discussion, but with the real energy coming from our participants. We also hope to find ways to give the session continuing life, to build on ideas and human connections made that evening.

We are identifying a small number of "lead" participants with experience innovating in the industry to help stimulate the discussion and shape it with their knowledge. But we view them as "first among equals," with the idea that all of our audience will participate actively, and that creative outsiders can often bring new, outside-the-box thinking, and sometimes see opportunities that insiders ignore.

Comments and suggestions are welcome.

Saturday, October 06, 2012

i[Carter]Phone? -- Apple and Anti-Competitive Tying

Apple is pushing the limits of the laws prohibiting anti-competitive behavior, as noted in an interesting article by James Stewart in today's NYTimes, with reference to Maps and the iTunes Store. It considers how Apple's efforts at total control of their ecosystem may be both harmful and illegal -- at some point, if not yet.

For some time I have had similar concerns, and have been wondering how long until we see an "iCarterPhone Decision." What do I mean by that? Followers of communications history will remember the Carterfone Decision (1968) as a landmark step toward the breakup of the Bell System monopoly. Until then it was illegal to attach a phone not approved by AT&T to the US telephone network. This was based on the AT&T argument that attaching any device not fully tested and approved by them to the network might introduce voltages or other electrical effects that would run through the wires and harm their central office equipment, potentially causing widespread harm. The only permissible way to add a specialized device like the one sold by Carterfone was to use a Rube Goldberg-like acoustic coupler, with rubber cups that relayed sound into and out of a standard Bell telephone handset earpiece and mouthpiece, with no direct electrical connection (and with attendant issues of signal quality). Some of you remember early modems that connected to computers that way. The Carterfone Decision changed all that, and opened the way for the vibrant market in phones, answering machines, faxes, modems, etc. that we now take for granted.

The iPhone/iTunes ecology smacks of much the same kind of anticompetitive control, with restrictions that limit consumer rights, raise consumer costs, and limit competitive innovation. The Times addresses the current flap over Apple's inferior maps app, as well as Department of Justice price-fixing charges against Apple relating to e-books sold through the iTunes Store. Similar issues apply to control of apps in general that Apple does not like for one reason or another -- such as has been the case with Skype, Google, Flash, and many others. Contrast this with Microsoft PCs that allow you to run any software from any source, with no involvement of Microsoft whatsoever. Of course we are free to migrate to the Android ecosystem to get greater openness, and many have chosen to do just that.

As the Times article notes, Apple is not dominant the way Microsoft was (or AT&T), and thus its tying sales in the App Store may not reach a level actionable under antitrust laws. (Its alleged price fixing is another story.) But at an ecosystem level, given its disproportionate number of apps, it does already have a level of dominance that might warrant correction.

Other areas in which Apple is riding roughshod over the market (and consumers) relate to other kinds of proprietary behavior. Apple champions open standards like HTML5 over proprietary standards like Flash when the proprietary standards belong to a competitor and it suits Apple's interests to smash them, but insists on proprietary standards of its own, such as its iPhone connectors and its AirPlay protocol, for which it charges exorbitant prices (adapter retail $29?) or licensing fees (AirPlay speaker retail price bump $100?).

It will be very interesting to see how this develops -- whether the market rebels, the government finds cause to draw a line, or Apple simply fails to maintain its edge. From the market perspective, Apple is walking a very fine line, balancing the positive perception of product quality against the negative perception of arrogance and rapaciousness. Jobs was able to ride that balance for a very profitable run, but the maps fiasco and the increasing success of Android (and maybe Microsoft, or someone yet to appear) suggest that this is a precarious and anti-consumer position, and that Apple's days of dictating to consumers and its ecosystem partners may be numbered.


Monday, January 23, 2012

A New Age in Patent Liquidity -- NYC 2/15 -- MIT Enterprise Forum Panel Session

This is a panel that should be very relevant to all entrepreneurs who have an interest in getting and monetizing patents, as well as those who work with them. "A New Age in Patent Liquidity: New Opportunities for Entrepreneurs," is presented by MIT Enterprise Forum of NYC.

I will be on the panel to present the perspective of an entrepreneur/inventor who has successfully navigated the Kafkaesque world of patents, which can be rewarding, but also hugely frustrating, costly, and risky. I described some of the twists and turns of my adventures in a 2008 blog post, "'The Six Phases of a Technology Flop' ...Patents, and Plan B." The theme was how I started out seeking to build a software/services business, but also sought patents as a hedge to protect my investment -- a "Plan B." When the business failed to keep up with better-connected competitors with deeper pockets, I turned to the patents to try to capture value for my innovations. Working with partners who brought the expertise and funding needed to do that, and eventually to undertake a patent suit, I went partway through infringement cases against Microsoft and Apple. Some additional background on that is in last year's post that tells how Intellectual Ventures changed the game with a very creative, win-win deal.

I also expect to touch on my 2008 sale of another portfolio of patents to another very innovative company, RPX, as well as my ongoing work developing other patents.  I am pleased that Kevin Barhydt, VP, Head of Acquisitions for RPX (and formerly at IV) will also be on the panel.

From my perspective, IV, RPX, and others are making a real difference in offering inventors and other patent owners a way to monetize their IP for reasonable compensation -- in a market that is rational, and has a middle ground between "take a hike" and the nuclear option of litigation, with its huge costs in money, time, and disruption.

It is a pleasure to be a panelist and organizer for this event, especially given that I was the moderator and an organizer of MITEF's well-received 2000 panel session "Patents for Dot-coms," which had an equally distinguished panel.

Monday, October 10, 2011

The Necessity of Steve Jobs: ...Inventor? ...or Necessitor?

The recent comparisons of Steve Jobs to Edison and Ford brought me back to an important point: Invention is the mother of necessity. We don't realize we need something until an "inventor" shows us what it can be, and what it can do for us.

Which came first? Is necessity the mother of invention (as the saying goes)? ...or is invention the mother of necessity? Is inventing unrecognized necessities the real heart of inventing? As Jobs famously said, "It's not the consumers' job to know what they want."

Jobs was more important as a necessitor than as an inventor. It struck me that the point some have raised -- that Jobs did not invent the technologies he popularized -- has some validity, but fails to balance the picture with this important point. It is true that the mouse, the "drag-and-drop" graphical user interface, hypertext, music downloads, MP3 players, smartphones, tablets, touchscreens, computer animation, and many more key "inventions" applied by Jobs were not invented by him. It seems widely recognized that Jobs' key contribution was that he saw how such things could be put to use in new configurations, and to serve needs that others did not see or saw less clearly (and also that he had the drive and resources to realize his visions...)

This resonated with me, because I have often felt that my own history as an inventor has a similar focus (even if hardly on the scale of Jobs').  The contribution is not so much in solving a recognized technical problem, but in seeing what technical problems should be solved, and why, and what else that would mean.  (That is why the theme of this blog is "user-centered media" -- that is pretty much the theme of much of my work.)

In a sense, this relates to innovation at the level of "systems thinking." The necessitor does not just solve a problem, but creates a whole new system, within the larger system of people, technology, economics, and culture. Jobs saw that what was missing in the music business was a new model for aggregated, simplified sales of music, and integration of an e-commerce system (the iTunes Store) with a user agent (iTunes) and a device (iPod). Once people saw that, they needed it. No one created the holistic vision that enabled that necessity to be recognized and acted on until Jobs did.

Similarly, some argue that Edison's real impact was not the light bulb, but the electric distribution system and related infrastructure that he recognized as needed to make the light bulb broadly useful.  It is perhaps more apparent that Ford was not so much an inventor of cars and mass production, but a necessitor, who realized that we needed simple black cars, and lots of them.  Often such cases are not simple inventions, but whole systems of invention.  One necessity/invention leads to other necessities/inventions, to whole ecologies of inventions.

So which came first, the necessity or the invention? I suggest, as in most things, the answer is a non-dualistic "yes, both." It is hard to separate the two. Our patent system seems to treat inventions as the thing that matters. The patent statute defines patentable subject matter as "any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvements thereof." This has always seemed to me a limited view of what inventors do.

I suggest an equal form of "invention" is what Robert Kennedy spoke of: "I dream of things that never were, and ask why not?" Once we take that step, we may need to invent some technology, but often what we need to do is take the vision, understand all that it entails, and assemble a whole system from technologies that may have previously existed, but not been combined and adapted in the right way. This kind of systems thinking is on a much different level than the more commonly recognized engineering tasks of solving the technical problems to meet a previously recognized need.

...This also has led me to questions about the place for such contributions in the patent system.  It seems to me that such contributions may be equally deserving of some kind of patent protection, to reward the creative thinking that advances our "useful arts" and our civilization in general.  Just as with more narrow senses of technical invention, this takes not just inspiration, but perspiration (to paraphrase Edison).  But just how this kind of invention of necessity fits (or could be fit) with our current patent system seems a bit unclear.

------
[Should anyone know of any good thinking by others on this theme, I would welcome references.]

Wednesday, April 27, 2011

My Intellectual Ventures Inventor Profile

Recently I had the pleasure of being interviewed by Intellectual Ventures for a story about my work as an inventor. I have been looking forward to seeing it posted in their new Inventor Spotlight area. Unfortunately, I still have to wait a bit. My story was one of the first to be written, but my deal was fairly complex, and they want to work up to that. So, while I wait for them to present the story, here is a teaser.

For those of you who have not been paying attention, Intellectual Ventures is remaking the patent business. They have gradually become less secretive -- having raised $5 billion to acquire over 30,000 patents since 2000, they are having a huge effect, much of it yet to be seen, and are still viewed with awe by some, and fear by others. Their story has been covered extensively in the press.

As an inventor, and a believer in what technology can enable, I think they are changing things very much for the better.

Some of my history as an inventor -- my twelve-year struggle from conception to monetization of my first patented invention -- was outlined in a 2008 blog post. That did not get into how I partnered with others to develop my patents, leading to a sale for $35 million. I faced most of the challenges of the lone inventor, unable to get large companies to a reasonable deal without litigation, even with professional partners to lead and fund the effort. I always viewed litigation as a very unpleasant and wasteful prospect, and two years into a hugely expensive and draining case (even with other people's money), I was eager to end it as soon as possible.

That is where the market came to the rescue. The IV case study will give more details, but, in brief, I saw them change the game from a brutal, zero-sum battle (attractive only to lawyers) to a win-win business proposition that was beneficial to all. They brought unique insight into the market forces, great cleverness in structuring deals that I understand to have been first of their kind, and mastery in moving the warring sides to a deal quickly, overcoming many stumbling blocks.

The deal provided my company, Teleshuttle, with the resources to let me focus on my work as an inventor, which is the work I love and do best.*

I look forward to seeing the story of this landmark deal on IV's Web site, and to IV's contribution to developing the market becoming more widely known and understood. IV deserves credit for leading the way toward a world in which invention is more sensibly valued, rewarded, and stimulated -- to make life better for all of us.

________

*For example, there is my current work on the FairPay pricing process, described extensively on [the FairPayZone**] blog. I have patent filings related to this, but they may or may not ever have any value. Nevertheless, because some of my patents have brought in funds, I can develop FairPay essentially as a pro-bono project, just because I think it is an idea the world will benefit from.

There is a parallel here: Just as IV found a way to arrange a fair value exchange between me as innovator and those who benefit from my ideas, I put forth FairPay as a way to arrange a fair value exchange between those who create content/services and those who benefit from that.

________

[**This post was originally posted on the FairPayZone blog on 4/27/11, but has been moved here as more fitting. 

Comments:  a few comments can be found on the original posting at FairPayZone.com]