Tuesday, December 10, 2013

Twitter Ad Targeting + Comcast See It -- Crossing the Chasm?

Two very interesting moves toward mass market Interactive TV and T-commerce were made recently, and I suggest putting them together will lead to a major step beyond.

Twitter announced that its ad targeting now allows advertisers to send you promoted tweets if you tweet about a program during which their ad appeared.  A one-two punch, following up on the TV ad impression. A crude but clever way to synchronize the Internet with TV, but potentially on a massive scale, and thus a big step.  You tweet; that tells them what you are watching; they know what ads air with it; and they let the advertisers tweet you. There are better ways (such as ACR and triggers), but those are not yet in wide deployment (that will come).

Comcast announced that it is working with Twitter and NBC to enable tweets to include a See It button that, when clicked, tunes your TV to the program that was tweeted about (or sets your DVR to record it).  That is, if you have Comcast's new X1 set-top box. In any case, you can also view the video on your Twitter device (second screen on phone or tablet). Comcast says it hopes this will become a standard, used by other networks and distribution partners.

Given the scale of Twitter and Comcast, this could finally get Interactive TV across the chasm. Their reach can bring it to the masses, show the economic imperative, and lead to far richer versions. If the money is shown to be there, along with the mass market to deliver it, that will launch a major build-out.

[***Update 12/12:  A nice analyst commentary by Joel Espelien that agrees on the importance of these moves (in spite of little press attention) was posted today. (In the I-almost-told-you-so department, I had this post largely complete in mid-October, but let it sit as a draft until this week.)]

Soon:  Once these first steps take hold, other advanced features will be easy to add.

Some of this is suggested by Comcast: expansion to Facebook, and presumably any other Web service or app. This enables the idea of "the Web as program guide" that I described in my CoTV work,
...where all media types can be fully interlinked, in a manner that is fully consistent. Hotspots can serve as link anchors whether in text, image or video, and targets can be of any media type -- rich combinations of hypermedia browsing and navigation across devices.
It also seems that one valuable enhancement in TV advertising will be easy: telescoping ads synchronized across two screens.  From an ad on your TV to a video follow-up on your phone or tablet. This has been done for years on some TV platforms, but not at a scale that gets much recognition or much interest from advertisers. But there is money there, if it reaches scale.

Telescoping to a second screen could be easy with Twitter Ad Targeting combined with See It, with just one small twist.
  • As it is now, See It is described as linking to a program, in the sense of content you could select by channel, or from a program guide. Nice, but what about advertising?
  • As it could be, it might also enable links to advertising video-on-demand.  Comcast may have enabled this already, but if not, it should not be hard to add.
That enables telescoping across two screens. Ad Targeting flags a viewer who is seeing the ad on TV, and sends a link to his companion device that links (via See It) to a commercial video that picks up from there. A 15 or 30 second spot can link to a longer form video that drives home the message -- and can also include an online call to action, with all of the ease of interaction via a phone or tablet. All with widely deployed and widely used technology.
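To make the flow concrete, the tweet-to-telescope sequence described above can be sketched in a few lines. This is purely illustrative: the program name, ad ID, and VOD link are hypothetical placeholders, and none of this reflects an actual Twitter or Comcast API.

```python
# Hypothetical sketch of the two-screen telescoping flow.
# All names and data structures below are illustrative assumptions.

AD_SCHEDULE = {
    # program -> TV ads that aired during it
    "The Blacklist": ["acme_suv_15s"],
}

LONG_FORM_VOD = {
    # short TV spot -> longer on-demand follow-up video
    "acme_suv_15s": "vod://acme/suv_2min_tour",
}

def promoted_followup(tweeted_program: str):
    """A viewer tweets about a program; infer which TV ads they likely
    saw, and build promoted-tweet payloads linking (See It style) to a
    longer-form VOD spot on the second screen."""
    followups = []
    for ad in AD_SCHEDULE.get(tweeted_program, []):
        vod = LONG_FORM_VOD.get(ad)
        if vod:
            followups.append({"ad": ad, "see_it_link": vod})
    return followups
```

Here the tweet itself is the synchronization signal: it tells the service what the viewer is watching, and the ad schedule does the rest.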

Thursday, May 23, 2013

The Joy of Showrooming: From Profit Drain to Profit Center

Showrooming has become a major scourge of bricks-and-mortar retail, but maybe it is the way to a new golden age.  I suggest that showrooming can become a major source of profit, and can enable retail businesses to fully exploit the value of showrooms in our experience economy.

Brick-and-mortar stores like Best Buy and Walmart are struggling as e-tailers like Amazon and many others are stealing their lunch with more efficient channels and lower prices.  Adding insult to injury, they are losing increasing portions of business to the practice of "showrooming," where a customer comes into a store to view merchandise, checks for better prices online, and then buys from Amazon or others.  Services like Amazon's Price Check app have become popular to facilitate just that.  It is reported that 60% of Best Buy customers use their smartphones to comparison shop.

Faced with this serious challenge, the reaction of retailers has been to try to impede it.  The main response has been to increasingly customize products so that they cannot be found and compared online, having manufacturers create SKUs that exist only for them (whether name brand or private label).  Other counters are more dynamic pricing and price-match offers.

What I suggest is to take this threat, and view it as an opportunity:
  • What if showrooming activity could be tracked, and e-tailers convinced to pay a "showroom fee" to the provider of a showrooming service, if the sale came from that showroom? 
  • What if the retailer could filter Internet traffic from their store, and trace which URLs are for competitors, track purchase transactions that emanate from the store, and pass through only those that go to retailers that agree to pay the fee? 
There are a number of ways this can be done, and that can lead to a new retail ecology that benefits all.

Showrooming has great economic value.  It enables customers to see, feel, and try products.  That has been a serious limitation of e-commerce, and has been exploited by those who recognize the value of the showroom, like Apple.  Now Samsung is working with Best Buy to do that as well.  But this has been on a closed basis, for single brands.

How could showrooming fees be obtained? 
  • One method that seems promising is for retailers to place microcells in their stores to carry cellular traffic, work with the carriers to track and monitor the traffic, and seek agreements with electronic competitors to pay a fee in exchange for the showrooming service.  This might be done through independent parties that ensure privacy, aid in the negotiations, and address any legal issues that may be involved (such as the wireless carriers, or specialized third parties).  Sellers that refuse to pay such fees might be blocked. (This can be done using some combination of blacklists and whitelists.  A microcell alone might not eliminate all uncontrolled mobile access, but, if necessary, stores might use shielding to prevent that.)  While blocking traffic may have legal ramifications, if it is done for such valid economic reasons, and in a way that ensures passage of non-commercial messaging, waivers might be sought as necessary.  Mobile carriers might find this a desirable way to obtain new revenues.
  • Offering free WiFi service would provide a similar way to filter and track such traffic--and to introduce related in-store messaging services as well.  Even without any limits to cellular access, this might provide a value-added service channel, and do the job well enough, in simpler manner.
  • Offering comparative shopping tools that are showroom-fee-compliant and given favored treatment might be a way to get customers to facilitate such a process without need to filter all mobile traffic.
  • This could also be facilitated via loaner mobile devices or in-store kiosks.
  • An even higher touch service might be achieved with personal shopper programs.  The personal shopper might facilitate the online sale with partners that cooperate. 
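The filtering idea in the first two bullets reduces to a whitelist check at the store's microcell or WiFi gateway. A minimal sketch follows, assuming hypothetical domain names; a real deployment would also need the settlement and privacy machinery discussed above.

```python
from urllib.parse import urlparse

# Hypothetical domain lists; in practice these would be maintained
# through fee agreements with e-tailers and third-party intermediaries.
FEE_PAYING_ETAILERS = {"megashop.example", "fastcart.example"}
KNOWN_COMPETITORS = {"megashop.example", "fastcart.example", "cheapco.example"}

def allow_request(url: str) -> bool:
    """Pass non-commercial traffic untouched; for known competitor
    domains, pass only those that have agreed to pay the showroom fee."""
    host = urlparse(url).hostname or ""
    if host not in KNOWN_COMPETITORS:
        return True  # never block non-commercial or unknown traffic
    return host in FEE_PAYING_ETAILERS
```

The key design point is that only recognized competitor domains are ever candidates for blocking; everything else passes, preserving ordinary messaging and browsing.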
If we open our eyes to the positive side of showrooming, this can lead to a virtuous value cycle for all.

On the e-tailer side, working with showroomers can provide numerous valuable benefits that are worth a reasonable fee:
  • Customers can experience the items, know exactly what they are getting, and order with less likelihood of returns, reducing costs to the electronic retailer.
  • Cooperating retailers can get featured access and be more likely to be found.
  • Correct identification of products can be facilitated (reducing return costs and increasing satisfaction).
On the retail showroom side, working with electronic retailers in return for showrooming fees can provide a whole new range of possibilities for profit:
  • Instead of spending money creating inefficiencies just to prevent showrooming, those costs can be eliminated.
  • Fees can be negotiated based on the level of showrooming services offered, ranging from a boxed item on a shelf, to hands-on play with the item, to guided demonstrations and advice by sales staff and full personal shopper services.
  • With greater probability of compensation from both direct sales and showroomed sales, physical retailers can spend more to make their showrooms into rich and valuable customer experiences.
  • Showrooms might even emerge as free-standing businesses--sort of an experiential shopping Disneyland--with all sales and fulfillment done by an open market of electronic retailers.
  • By passing some sales to electronic counterparties, the physical retailer finds real offsetting savings by eliminating need for much of their inventory and eliminating returns.  By cutting these costs, showrooming fees need not be equal to the full in-house sales margin.
This creates opportunities for more complex cooperation:
  • Physical retailers can serve as pickup and return centers for electronic retailers, whether for products they do carry or for those they don't. Amazon, ThankYou Rewards, and others already are seeking just such physical services in non-competing venues, but much richer possibilities might be enabled in a more open ecosystem. (A complementary step in this direction is the ShopRunner service.)
  • The various elements, costs, and benefits of the retail supply chain and service ecology can be deconstructed, broken into elements that can be costed and charged for individually, and handled by whatever party is most capable of doing it in a way that is efficient and serves all. This can cover the range from marketing and selection guidance to demos, sales transactions, fulfillment, service, and support.  Best Buy might even swap inventory for pickups of Amazon sales, with bulk replenishment by Amazon.  It might also facilitate blurring the boundaries between sales and subscription services.
There is great economic value to a rich showroom experience.  We see that in many department and specialty stores, in Apple stores, and in The Samsung Experience stores.  People like to kick the tires, see the alternatives in real space, talk to sales people face to face, and have a fun family or social experience.  What is the sense of a business ecology that pushes bricks-and-mortar retail toward commoditization and increasing friction?  Why not find a way for complementary players to work together?

---
Ideas in bottles.  This is one of what may become a series, publishing some of my inventions without any effort to seek a patent.  Just putting the idea in a bottle, throwing it into the Internet sea, and seeing where it floats, for whoever wants to apply it.  Of course this post just skims the surface, and patent opportunities remain in the details (such as filtering and payment/settlement systems) for those with the inclination to develop them (and at some point I might decide to jump in myself).  Creating such an ecosystem will not be easy, but I think there is very great potential here.  I would be happy to collaborate with those who might pursue this.

---
[Update 6/16/17: The press made much of an Amazon patent on much the same ideas, including ideas for doing counteroffers, that was issued this week (and was filed a year prior to this post). (I was pleased to see that the Amazon patent cited 17 of my earlier patents/applications as prior art, and also happy that I chose not to file for a patent on these particular ideas!) For those interested in applying the ideas outlined here, the newly issued Amazon patent may add a complication.]

Friday, October 26, 2012

SmartGlass: A Big Step toward Convergence of TV and the Web Across Screens

Microsoft Xbox SmartGlass is CoTV 1.0 (maybe) -- on to CoTV 2.0!

With the release of Xbox SmartGlass, I am gratified to see many of the concepts I described as "Coactive TV" in 2002 finally being realized. I had been seeing increasing progress in recent years, as noted in my January post, but those have been very fragmented, partial steps, and I was being optimistic to refer to it then as CoTV 1.0. SmartGlass might be considered a major complementary step toward what (when integrated with those other pieces) will become representative of what I had in mind as CoTV 1.0.

The basic concept of CoTV is that we have multiple screens and input devices, and multiple content sources that have a Web of interconnections.  What we really want (even if most do not realize it yet) is to use the right combination of screens and input devices, at the right time, in the right way -- to work with whatever content we want at a given time. What connects them is the cloud, and our devices should use the cloud to support our media use seamlessly, not constrain it.

As noted in that January post, and more fully in my January "CoTV Now" summary, we are getting there, but there is still much more to come -- what might be looked to as CoTV 2.0 and beyond.  Now we seem to be at a significant milestone.  That makes this a good time to review where we are now, and to look to what will follow.  Based on the announcement materials, it seems as follows.

Now/emerging (CoTV 1.0):

  • Numerous iPad, iPhone, Android (and soon Surface) companion apps
  • Social TV
  • Producer and third party enhancements on the second screen
  • AirPlay (and Miracast) screen-shifting 
  • and now a much richer any-screen experience with SmartGlass that includes rich remote control and enhancements, and steps toward full multi-screen hypermedia browsing.

Still to come (CoTV 2.0):


  • Selectable, Alternative "Enhancement Channels" 
  • Screen targeting 
  • Flexible session-shifting
  • Link-and-pause (and sync bookmarks)
  • Full multi-screen hypermedia browsing  
  • TV Context parameter/API
  • Full Coactive Internet commerce and advertising
  • Third-party linking rights/fees
Some links expanding on this are listed below.

-------------

I want my CoTV!  ...SmartGlass promises to be a reasonable start!

(Apple, your move. AirPlay was nice, but SmartGlass goes much farther.  Google?  Others?)

-------------

On SmartGlass:


A very nice video overview:
Xbox SmartGlass and Internet Explorer for Xbox - E3 2012 HD

Some descriptions:
Introducing the New Entertainment Experience from Xbox
Xbox SmartGlass goes beyond the second screen
Introducing Xbox SmartGlass

More video:
E3 2012: Xbox Media Briefing Smartglass Highlights
E3 2012: Xbox SmartGlass
Xbox SmartGlass Walkthrough

On CoTV:

CoTV Now



Monday, October 08, 2012

Filtering for Serendipity -- Extremism, "Filter Bubbles" and "Surprising Validators"

[The Augmented Wisdom of Crowds:  Rate the Raters and Weight the Ratings, (2018) puts this in a much broader framework and outlines an architecture for augmenting social media and other collaborative systems.]

[A post-2016-election update on this theme:
2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect]


Balanced information may actually inflame extreme views -- that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, "Breaking Up the Echo" (9/17/12).   Sunstein is drawing on some very interesting research,* and this points toward an important new direction for our media systems.

I suggest this is especially important to digital media, in that we can counter this problem with more intelligent filters for managing our supply of information.  This could be one of the most important ways for technology to enhance modern society. Technology has made us more foolish in some respects, but the right technology can make us much smarter.

Sunstein's suggestion is that what we need are what he calls "surprising validators," people one gives credence to who suggest one's view might be wrong.  While all media and public discourse can try to leverage this insight, an even greater opportunity is for electronic media services to exploit this insight that "what matters most may be not what is said, but who, exactly, is saying it."

Much attention has been given to the growing lack of balance in our discourse, and there have been efforts to seek to address that.
  • It has been widely lamented that the mass media are creating an "echo chamber" -- such as Fox News on the right vs. MSNBC on the left.  
  • It has also been noted that Internet media bring a further vicious cycle of polarization, as nicely described in the 2011 TED talk (and related book) by Eli Pariser, "Beware online 'filter bubbles'" -- about services that filter out things not to one's taste.
  • Similarly, extremist views that were once muted in communities that provided balance are now finding kindred spirits in global niches, and feeding upon their own lunacy.
This is increasingly damaging to society, as we see the nasty polarization of our political discourse, the gridlock in Washington, and growing extremism around the world. The "global village" that promises to bring us together is often doing the opposite.

It would seem that the remedy is to try to bring greater balance into our media. There have been laudable efforts to build systems that recognize disagreement and suggest balance, such as services like SettleIt, FactCheck, and Snopes, and, a particularly interesting effort, the Intel Dispute Finder (no longer active).
  • The notable problem with this is Sunstein's warning that even if we can expose people to greater balance, that may not be enough to reduce such polarization, and that balancing corrections can even be counter-productive, because "biased assimilation" causes people to dismiss the opposing view and become even more strident. 
  • Thus it is not enough to simply make our filter bubbles more permeable, to let in more balanced information.  What we need is an even smarter kind of filter and presentation system.  We have begun to exploit the "wisdom of crowds," but we have done little to refine that wisdom by applying tools to shape it intelligently.
From that perspective, consider Sunstein's suggestions:
People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”
Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.
It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.
Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it. 
This struck a chord with me, as something to build on.  Applying the idea of "surprising validators"  (people who can make us think again):
  • The media and social network systems that are personalized to serve each of us can understand who says what, who I identify and agree with in a given domain, and when a person I respect holds views that are different from views that I have expressed that I might be wrong about.  Such people may be "friends" in my social network, or distant figures that I am known to consider wise.  (Of course it is the friends I consider wise, not those I like but view as misguided, that need to be identified and leveraged.)
  • By alerting me that people I identify and agree with think differently on a given point, such systems can make me think again -- if not to change my mind, at least to consider the idea that reasonable people can differ on this point. 
  • Such an approach could build on the related efforts for systems that recognize disagreement and suggest balance noted above.  ...But as Sunstein suggests, the trick is to focus on the surprising validators.
  • Surprising validators can be identified in terms of a variety of dimensions of values, beliefs, tastes, and stature that can be sensed and algorithmically categorized (both overall and by subject domain).  In this way the voices for balance who are most likely to be given credence by each individual can be selectively raised to their attention.  
  • Such surprising validations (or reasons to re-think) might be flagged as such, to further aid people in being alert to the blinders of biased assimilation and to counter foolish polarization.
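As a thought experiment, the core selection step in the bullets above might look like the sketch below. The affinity scores and statement data are stand-ins for whatever signals a real social or filtering service could derive; this is a hypothetical illustration, not a description of any deployed system.

```python
def surprising_validators(user_view, statements, affinity, threshold=0.7):
    """Among people who disagree with user_view on a given point,
    surface only those the user strongly identifies with (high
    affinity) -- the voices most likely to make the user think again."""
    return sorted(
        (person for person, view in statements.items()
         if view != user_view and affinity.get(person, 0.0) >= threshold),
        key=lambda p: affinity[p],
        reverse=True,
    )
```

A respected friend who holds the opposing view ranks first; a stranger holding the same opposing view is filtered out, since their disagreement carries no surprise.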
This provides a specific, practical method for directly countering the worst aspects of the echo chambers and filter bubbles.

More broadly, what we need to counter the filter bubble are ways to engineer serendipity into our information filters -- we need methods for exposing us to the things we don't realize we should know, and don't know how to set filters for.  Identifying surprising validators is just one aspect of this, but this might be one of the easiest to engineer (since it builds directly on the relationship of what we know and who we know, a relationship that is increasingly accessible to technology), and one of the most urgently needed.

Of course the reason that engineering serendipity is hard is because it is something of an oxymoron--how can we define a filter for the accident of desirable surprise?  But with surprising validators we have a model that may be extended more broadly--focused not on disputes, but on crossing other kinds of boundaries--based on who else has made a similar crossing--still in terms of what we know and who we know, and other predictors of what is likely to resonate as desirable surprise. Perhaps we might think of these as "surprising combinators."

[Update 10/21/19:] Serendipity and flow. Some specific hints on how to engineer serendipity can be drawn from a recent article, "Why Aren't We Curious about the Things We Want to Be Curious About?"  This reinforces my suggestion (in the paragraph just above) that it be "in terms of what we know and who we know" adding the insight that "We’re maximally curious when we sense that the environment offers new information in the right proportion to complement what we already know" and suggesting that it has to do with finding "the just-right match to your current knowledge that will maintain your curiosity." This seems to be another case of seeking a "flow state," the energized and enjoyable happy medium between not so challenging or alien as to be too frustrating, yet not so easy and familiar as to be boring. I suggest that smart filtering technology will help us find flow, and do it in ways that adapt in real time to our moods and our ongoing development.

This offers a way to more intelligently shape the "wisdom of crowds," a process that could become a powerful force for moderation, balance, and mutual understanding. We need not just to make our "filter bubbles" more permeable, but much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.

Applying this kind of strategy to conventional discourse would be complex and difficult to do without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm. Just as Google took old academic ideas about hubs and authority, and applied these seemingly subtle and insignificant signals to make search engines significantly more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.

That may be society's most urgent need in information and media services.  Only when we can bring a new level of collaboration, a more intelligently shaped wisdom of crowds, will we benefit from the full potential of the Internet.  We need our technology to be more a part of the solution, and less a part of the problem.  If we can't learn to understand one another better, and reverse the current slide into extremism, nothing else will matter very much.

[Update:]  Note that the kind of filtering suggested here would ideally be personalized to each individual user, fully reflecting the "everything is deeply interwingled" and non-binary nuance of their overlapping Venn diagram of values, beliefs, tastes, communities of interest, and domains of expertise. However, in use-cases where that level of individual data analysis is impractical or impermissible, it could be done at a less granular level, based on simple categories, personas, or the like. For example, a news service that lacks detailed user data might categorize readers based on just a current session to identify who might be a surprising validator, or what might be serendipitous.

[Update 12/7/20:] Biden wins in 2020 with Surprising Validators!
A compelling report by Kevin Roose in the NY Times shows how Surprising Validators enabled Biden's "Rebel Alliance" to cut a hole in Trump's "Death Star" -- "…the sources that were most surprising were the ones who had the most impact." "Perhaps the campaign's most unlikely validator was Fox News." This was done by the campaign, external to the platforms' algorithms, but think how much more powerful this could be when fully integrated.

---
See the Selected Items tab for more on this theme.

[See also my earlier post on this theme:
Full Frontal Reality: how to combat the growing lunatic fringe.]

-------------
*The work Sunstein apparently refers to can be found by searching for "Biased Assimilation and Attitude Polarization," the title of a much-cited 1979 paper. I found some very interesting research and plan to review this further, seeking methods suited to algorithmic use. One interesting current center of study is the Yale Law School Cultural Cognition Project.

(On a personal note, this is an effort I have seen as having huge benefit to society since my first exposure to early work on computer-aided conferencing and decision support systems in the early 1970's.  I continue to see this as a vital challenge to pursue, and I welcome dialog and collaboration with others who share that mission.)  

Saturday, October 06, 2012

i[Carter]Phone? -- Apple and Anti-Competitive Tying

Apple is pushing the laws prohibiting anti-competitive behavior, as noted in an interesting article by James Stewart in today's NYTimes, with reference to Maps and the iTunes Store.  It considers how Apple's efforts at total control of their ecosystem may be both harmful and illegal--at some point, if not yet.

For some time I have had similar concerns, and have been wondering how long until we see an "iCarterPhone Decision."  What do I mean by that?  Followers of communications history will remember the Carterfone Decision (1968) as a landmark step toward the breakup of the Bell System monopoly. Until then it was illegal to attach a phone not approved by AT&T to the US telephone network.  This was based on the AT&T argument that attaching any device not fully tested and approved by them to the network might introduce voltages or other electrical effects that would run through the wires and harm their central office equipment, potentially causing widespread harm.  The only permissible way to add a specialized device like the one sold by Carterfone was to use a Rube Goldberg-like acoustic coupler, with rubber cups that relayed sound in or out of a standard Bell telephone handset's earpiece and mouthpiece, with no direct electrical connection (and with issues of signal quality).  Some of you remember early modems that connected to computers that way. The Carterfone Decision changed all that, and opened the way for the vibrant market in phones, answering machines, faxes, modems, etc. that we now take for granted.

The iPhone/iTunes ecology smacks of much the same kind of anticompetitive control, with restrictions that limit consumer rights, raise consumer costs, and limit competitive innovation.  The Times addresses the current flap over Apple's inferior maps app, as well as Department of Justice price fixing charges against Apple relating to e-books sold through the iTunes Store.  Similar issues apply to control of apps in general that Apple does not like for one reason or another -- such as has been the case with Skype, Google, Flash, and many others.  Contrast this with Microsoft PCs that allow you to run any software from any source, with no involvement of Microsoft whatsoever.  Of course we are free to migrate to the Android ecosystem to get greater openness, and many have chosen to do just that.

As the Times article notes, Apple is not dominant the way Microsoft was (or AT&T), and thus its tying sales in the App Store may not reach a level actionable under antitrust laws. (Its alleged price fixing is another story.) But at an ecosystem level, given its disproportionate number of apps, it does already have a level of dominance that might warrant correction.

Other areas in which Apple is riding roughshod over the market (and consumers) relate to other kinds of proprietary behavior.  Apple champions open standards like HTML5 over proprietary standards like Flash when the proprietary standards belong to a competitor and it suits Apple's interests to smash them, but it insists on proprietary standards of its own, such as its iPhone connectors and its AirPlay protocol, for which it charges exorbitant prices (adapter retail $29?) or licensing fees (AirPlay speaker retail price bump $100?).

It will be very interesting to see how this develops -- whether the market rebels or the government finds cause to draw a line, or they just fail to maintain their edge.  From the market perspective, Apple is walking a very fine line, balancing the positive perception of product quality against the negative perception of arrogance and rapaciousness. Jobs was able to ride that balance for a very profitable run, but the maps fiasco, and the increasing success of Android (and maybe Microsoft, or someone yet to appear) suggests that this is a precarious and anti-consumer position, and that Apple's days of dictating to consumers and its ecosystem partners may be numbered.


Monday, January 23, 2012

A New Age in Patent Liquidity -- NYC 2/15 -- MIT Enterprise Forum Panel Session

This is a panel that should be very relevant to all entrepreneurs who have an interest in getting and monetizing patents, as well as those who work with them. "A New Age in Patent Liquidity: New Opportunities for Entrepreneurs," is presented by MIT Enterprise Forum of NYC.

I will be on the panel to present the perspective of an entrepreneur/inventor who has successfully navigated the Kafkaesque world of patents, which can be rewarding, but also hugely frustrating, costly, and risky.  I described some of the twists and turns of my adventures in a 2008 blog post "'The Six Phases of a Technology Flop' ...Patents, and Plan B." The theme was how I started seeking to build a software/services business, but also sought patents as a hedge to protect my investment -- a "Plan B." When the business failed to keep up with better-connected competitors with deeper pockets, I turned to the patents to try to capture value for my innovations.  Working with partners who brought the expertise and funding needed to do that, and eventually to undertake a patent suit, I went part way through infringement cases against Microsoft and Apple.  Some additional background on that is in last year's post that tells how Intellectual Ventures changed the game with a very creative, win-win deal.

I also expect to touch on my 2008 sale of another portfolio of patents to another very innovative company, RPX, as well as my ongoing work developing other patents.  I am pleased that Kevin Barhydt, VP, Head of Acquisitions for RPX (and formerly at IV) will also be on the panel.

From my perspective, IV, RPX, and others are making a real difference in offering inventors and other patent owners a way to monetize their IP for reasonable compensation -- in a market that is rational, and has a middle ground between "take a hike" and the nuclear option of litigation, with its huge costs in money, time, and disruption.

It is a pleasure to be a panelist and organizer for this event, especially given that I was the moderator and an organizer of MITEF's well-received 2000 panel session  "Patents for Dot-coms," which had an equally distinguished panel.

Wednesday, January 18, 2012

Coactive TV -- The World of TV is getting there, and more is yet to come...

The kind of advanced "coactive" TV that I have been promoting since 2002 is finally reaching the mainstream, but there is still much more to come.

As noted in a new page on the CoTV Web site, "Coactive TV: User-centered Convergence Today and Tomorrow:" 
The increasing prevalence of "media multitasking" (simultaneous use of TV and the Web) on laptops and smartphones began to change perceptions, and 2-screen ITV began to be seen as desirable in itself. Users were creating their own manual ITV experiences by finding relevant Web services on their own.  That set the stage for the emergence of CoTV 1.0, which was then kick-started by the iPad.  One indication of CoTV crossing the chasm into mainstream attention was the survey by Katherine Boehret of the influential Mossberg/Wall Street Journal/All Things D team on 12/20/11.

Another indication this is getting real was the number of announcements at CES. As reported by Bill Niemeyer in the 1/13 OTT Monitor from The Diffusion Group:
One key takeaway from CES that has floated above the noise pertains to Automated Content Recognition (ACR) for TV and video platforms. CES saw announcements from a number of ACR vendors including Audible Magic, Civolution, Gracenote, and Zeitera.
What is ACR? It's a variety of technologies that allow a device or service to recognize automatically a specific piece of content and synchronize to it within seconds. ACR can be based on audio/video watermarking or fingerprinting (i.e., cloud-based pattern matching used by mobile music app services like Shazam). Let your cell phone hear a brief bit of a song and Shazam will tell you what it is and even provide synchronized lyrics.
How can ACR be used in OTT [Over The Top]? It can synchronize interactive experiences for programs - whether viewed live or time-shifted - as well as advertising or e-commerce apps. Distinct from watermarking, which requires insertion in the content, fingerprinting can be done completely outside the realm of content providers, networks, and PayTV operators. That said, developing third-party synced apps without infringing on copyrights could be tricky.
With ACR, literally "the possibilities are endless" (to use a trite phrase). It's a powerful tool that needs to be put in the hands of creatives to realize fully its artistic potential, as well as clever business-side types to see how much "extended revenue" it can create.
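The fingerprinting approach Niemeyer describes can be sketched in miniature. This is a toy Python illustration, not any vendor's actual system: real ACR hashes spectral features of audio rather than raw sample windows, and all the names and the windowing scheme here are illustrative assumptions. It shows the essential mechanic, though: an inverted index from fingerprint hashes to (program, offset), so a brief captured clip identifies both the content and the point within it to sync to.

```python
# Toy sketch of ACR-style fingerprinting (illustrative only, not a real ACR API).
# A reference "broadcast" is indexed by hashing short windows of its signal;
# a brief captured clip is hashed the same way and looked up, yielding the
# program and the offset within it -- enough to synchronize a companion app.

def fingerprints(samples, window=4):
    """Yield (hash, offset) pairs for each window of the signal."""
    for i in range(len(samples) - window + 1):
        yield hash(tuple(samples[i:i + window])), i

def build_index(programs):
    """Map fingerprint hash -> list of (program_id, offset) for known content."""
    index = {}
    for pid, samples in programs.items():
        for h, off in fingerprints(samples):
            index.setdefault(h, []).append((pid, off))
    return index

def recognize(index, clip):
    """Return (program_id, clip_start_offset) for a matching clip, or None."""
    for h, off_in_clip in fingerprints(clip):
        for pid, off in index.get(h, []):
            return pid, off - off_in_clip  # where the clip begins in the program
    return None

programs = {"show_A": [3, 1, 4, 1, 5, 9, 2, 6], "show_B": [2, 7, 1, 8, 2, 8, 1, 8]}
index = build_index(programs)
print(recognize(index, [5, 9, 2, 6]))  # -> ('show_A', 4)
```

A production system (Shazam-like) hashes constellations of spectral peaks and votes across many candidate matches for robustness to noise, but the lookup structure is broadly similar to this inverted index.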
But this is just the start. To look further into the future of advanced TV and video-based hypermedia, check out the section on "CoTV Tomorrow -- CoTV 2.0" on that new CoTV page.  A partial list of advanced features:


  • Selectable, Alternative "Enhancement Channels" 
  • Screen targeting
  • Flexible session-shifting
  • Link-and-pause (and sync bookmarks)
  • Full hypermedia browsing  
  • TV Context parameter/API
  • Full Coactive Internet commerce and advertising
  • Third-party linking rights/fees

Monday, October 10, 2011

The Necessity of Steve Jobs: ...Inventor? ...or Necessitor?

The recent comparisons of Steve Jobs to Edison and Ford brought me back to an important point: Invention is the mother of necessity. We don't realize we need something until an "inventor" shows us what it can be, and what it can do for us.

Which came first? Is necessity the mother of invention (as the saying goes) ...or is invention the mother of necessity? Is inventing unrecognized necessities the real heart of inventing? As Jobs famously said: “It’s not the consumers’ job to know what they want.”

Jobs was more important as a necessitor than as an inventor.  It struck me that the point some have raised -- that Jobs did not invent the technologies he popularized -- has some validity, but fails to balance the picture with this important point.  It is true that the mouse, the "drag-and-drop" graphical user interface, hypertext, music downloads, MP3 players, smartphones, tablets, touchscreens, computer animation, and many more key "inventions" applied by Jobs were not invented by him.  It seems widely recognized that Jobs' key contribution was that he saw how such things could be put to use in new configurations, and to serve needs that others did not see or saw less clearly (and also that he had the drive and resources to realize his visions).

This resonated with me, because I have often felt that my own history as an inventor has a similar focus (even if hardly on the scale of Jobs').  The contribution is not so much in solving a recognized technical problem, but in seeing what technical problems should be solved, and why, and what else that would mean.  (That is why the theme of this blog is "user-centered media" -- that is pretty much the theme of much of my work.)

In a sense, this relates to innovation at the level of "systems thinking."  The necessitor does not just solve a problem, but creates a whole new system, within the larger system of people, technology, economics, and culture.  Jobs saw that what was missing in the music business was a new model for aggregated, simplified sales of music, and integration of an e-commerce system (the iTunes store) with a user agent (iTunes) and a device (iPod).  Once people saw that, they needed it.  No one created the holistic vision that enabled that necessity to be recognized and acted on until Jobs did.

Similarly, some argue that Edison's real impact was not the light bulb, but the electric distribution system and related infrastructure that he recognized as needed to make the light bulb broadly useful.  It is perhaps more apparent that Ford was not so much an inventor of cars and mass production, but a necessitor, who realized that we needed simple black cars, and lots of them.  Often such cases are not simple inventions, but whole systems of invention.  One necessity/invention leads to other necessities/inventions, to whole ecologies of inventions.

So which came first? the necessity or the invention?  I suggest, as in most things, the answer is a non-dualistic "yes, both."  It is hard to separate the two.  Our patent system seems to think of inventions as the thing that matters.  The patent statute (not the Constitution, which speaks more broadly of promoting "the Progress of Science and useful Arts") defines patentable subject matter as "any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof."  This has always seemed to me a limited view of what inventors do.

I suggest an equal form of "invention" is what Robert Kennedy spoke of:  "I dream of things that never were, and ask why not?"  Once we take that step, we may need to invent some technology, but often what we need to do is take the vision, understand all that it entails, and assemble a whole system from technologies that may have previously existed, but not been combined and adapted in the right way.  This kind of systems thinking is on a much different level than the more commonly recognized engineering tasks of solving the technical problems to meet a previously recognized need.

...This also has led me to questions about the place for such contributions in the patent system.  It seems to me that such contributions may be equally deserving of some kind of patent protection, to reward the creative thinking that advances our "useful arts" and our civilization in general.  Just as with more narrow senses of technical invention, this takes not just inspiration, but perspiration (to paraphrase Edison).  But just how this kind of invention of necessity fits (or could be fit) with our current patent system seems a bit unclear.

------
[Should anyone know of any good thinking by others on this theme, I would welcome references.]

Tuesday, August 23, 2011

Social TV -- The "Killer App" for Coactive TV -- Ready for Ubiquity

Social TV promises to be the killer app for coactive TV (CoTV).  (A "killer application" is an application that is so desirable to users that it drives the adoption of a larger technology.  The concept emerged when spreadsheets and word processors drove the adoption of PCs, which have obviously broadened to far wider importance.)

There are a number of signs that Social TV is emerging as such a killer app (some mentioned in previous posts).
  • IntoNow launched in January 2011 and was quickly acquired by Yahoo on 4/25/11, and Spot411 re-launched 7/18/11 as TVplus.  Both have gotten prominent press and both do fully automatic syncing to any program, without need for any involvement by the TV distributor. 
  • The Wikipedia article on Social Television was created in 5/07 with 3,244 bytes, grew to 5,528 by the end of 2009, then grew to 10,469 by the end of 2010, and to 16,851 by 8/23/11.  It now includes a list of 32 such systems (not all of which involve two screens).
  • One of the most popular FIOS TV apps was the Twitter app.
Being a killer app does not mean it will ultimately dominate the use of the platform, but only that it drives early adoption.  I suggest there are other killer apps for coactive TV as well, and that the long term value will span a wide range of apps.
  • From a user viewpoint, EPGs (electronic program guides) are another important killer app, not least because it is one that the MSOs (multi-system operators, TV distributors) are embracing along with users.  EPGs showcase the value of the companion device to allow interaction with a nice UI, and without interfering with current viewing.   The irresistible power of the iPad UI and relatively open ecosystem has finally convinced the MSOs that they must go outside the box (at least as to the set-top box and the TV screen).  Comcast and Time Warner Cable have moved quickly to offer tablet-based EPGs and DVR programming.  The coactive EPG will evolve into the full "Media Concierge" service that I have been blogging about since 2005.
  • The real money to drive all of this is in advertising.  Obviously this will drive the service providers and advertisers, but I submit that users too will recognize and increasingly demand the value of well targeted ads that exploit the flexibility of coactive UIs to be unobtrusive.  Well targeted ads can be a valuable service, as long as they are no more intrusive than the viewer wants them to be (which may vary from time to time, and from ad to ad).  Coactive ads--driving from a short spot to a companion microsite (whether linked to live, or deferred using a bookmarking feature)--can be far less intrusive and far more useful than a longer TV ad with no coactive companion element. A good UI can give the user control over when and how such ads appear.
All of these promising killer apps have synergy with one another.  Coactive TV is at heart hypermedia, and thus "everything is deeply intertwingled." (Quoting Ted Nelson, who also coined the terms hypertext and hypermedia.)
  • Social TV apps can work both as program enhancements and to provide program guide/media concierge services.  
  • Social TV can also be about ads, such as during the Superbowl, or when any ad of interest to my social circle appears.
  • All of these will drive usage of enhancement content (such as IMDB pages), which will create further synergies.
But there is one more thing that is essential, and that is ubiquity. While full, ubiquitous coactivity is not central to all Social TV, I suggest it is essential to enabling it to reach scale.
  • Synchronizing Web browsing to TV can be done manually, and has been for decades.  Viewers have created their own Social TV ever since the first two people sat with a laptop in front of a TV, and ever since the first online chat about a TV program.  It can also be automated with program-specific apps.  ABC did it a decade ago with Enhanced TV for the Oscars and other shows, and now on the iPad for Grey's Anatomy, but program and network apps cannot create massive synergy.
  • What is essential to enabling Social TV (and most other CoTV apps) to cross the chasm is ubiquity.  Siloing companion apps into a separate app for each network or program or advertiser is hugely self-defeating.  How many users will load more than a few apps, and how many will bother to open those apps more than once?  Just as the Web eliminated the need for separate apps for every content service, a ubiquitous CoTV service will require only a single context-linking app to connect every program to every Web service. There will be all kinds of mashups driven by that context, but an effective context-linking service must be essentially universal.
A truly ubiquitous coactive TV service will be always on, and always aware of a viewer's TV context (except when disabled).  Such a ubiquitous service can activate any Web service and any application, in a rich ecology much like that of the Web.  That way a user can set up the coactive companion context service just once, and get synchronized for any program or ad, to any social networking service, content service, or whatever -- whether directly, or via mashups.  (Just how such services can be structured to enable flexibility and user control was described in my published patent disclosures, and will be a subject of future posts.)
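The ubiquitous context service just described can be pictured as a simple publish/subscribe hub. Everything in this Python sketch is hypothetical (the class name, method names, and the shape of the context object are my assumptions, not a real API); it only illustrates how one always-on context feed could drive many independent services, with no per-network silo apps.

```python
# Hypothetical sketch of a ubiquitous TV-context service: one always-on
# service tracks what the viewer is watching and publishes that context to
# every subscribed Web service or app.  All names here are illustrative.

class TVContextService:
    def __init__(self):
        self.subscribers = []   # any app/service interested in TV context
        self.context = None     # current program/ad context

    def subscribe(self, callback):
        """Register any service to be notified of context changes."""
        self.subscribers.append(callback)

    def update_context(self, context):
        """Called when ACR or the set-top box reports a new program/ad."""
        self.context = context
        for notify in self.subscribers:
            notify(context)

# Two unrelated services share the same single context feed:
events = []
service = TVContextService()
service.subscribe(lambda ctx: events.append(("social", ctx["program"])))
service.subscribe(lambda ctx: events.append(("enhancement", ctx["program"])))
service.update_context({"program": "Grey's Anatomy", "offset_s": 1234})
print(events)  # both services synchronized from one context update
```

The point of the design is that a viewer configures this hub once; each new program, ad, or Web mashup then rides the same context feed rather than demanding its own app.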

It now appears that Social TV is the next big thing in TV, and will drive full coactivity -- but a whole lot of other functions will ride its coattails.

Wednesday, April 27, 2011

My Intellectual Ventures Inventor Profile

Recently I had the pleasure of being interviewed by Intellectual Ventures for a story about my work as an inventor. I have been looking forward to seeing it posted in their new Inventor Spotlight area. Unfortunately, I still have to wait a bit. My story was one of the first to be written, but my deal was fairly complex, and they want to work up to that. So, while I wait for them to present the story, here is a teaser.

For those of you who have not been paying attention, Intellectual Ventures is remaking the patent business. They have gradually become less secretive -- having raised $5 billion to acquire over 30,000 patents since 2000, they are having a huge effect, much of it yet to be seen, and are still viewed with awe by some, and fear by others. Their story has been covered extensively in the press.

As an inventor, and a believer in what technology can enable, I think they are changing things very much for the better.

Some of my history as an inventor -- my twelve year struggle from conception to monetization of my first patented invention -- was outlined in a 2008 blog post. That did not get into how I partnered with others to develop my patents, leading to a sale for $35 million. I faced most of the challenges of the lone inventor, unable to get large companies to a reasonable deal without litigation, even with professional partners to lead and fund the effort. I always viewed litigation as a very unpleasant and wasteful prospect, and two years into a hugely expensive and draining case (even with other people's money), I was eager to end it as soon as possible.

That is where the market came to the rescue. The IV case study will give more details, but, in brief, I saw them change the game from a brutal, zero-sum battle (attractive only to lawyers) to a win-win business proposition that was beneficial to all. They brought unique insight into the market forces, great cleverness in structuring deals that I understand to have been first of their kind, and mastery in moving the warring sides to a deal quickly, overcoming many stumbling blocks.

The deal provided my company, Teleshuttle, with the resources to let me focus on my work as an inventor, which is the work I love and do best.*

I look forward to seeing the story of this landmark deal on IV's Web site, and to IV's contribution to developing the market becoming more widely known and understood. IV deserves credit for leading the way toward a world in which invention is more sensibly valued, rewarded, and stimulated -- to make life better for all of us.

________

*For example, there is my current work on the FairPay pricing process, described extensively on [the FairPayZone**] blog: I have patent filings related to this, but they may or may not ever have any value. Nevertheless, because some of my patents have brought in funds, I can develop FairPay essentially as a pro-bono project, just because I think it is an idea the world will benefit from.

There is a parallel here: Just as IV found a way to arrange a fair value exchange between me as innovator and those who benefit from my ideas, I put forth FairPay as a way to arrange a fair value exchange between those who create content/services and those who benefit from that.

________

[**This post was originally posted on the FairPayZone blog on 4/27/11, but has been moved here as more fitting. 

Comments:  a few comments can be found on the original posting at FairPayZone.com]

Friday, April 15, 2011

"Squeeze More In" for Video Devices -- Never get stuck again!

Wouldn't it be nice to have a video device with infinite capacity that never got full?
  • Have you ever shot so much video that your phone/camera got full and made you stop?
  • Did you miss getting something on video because the phone/camera was full and you did not have the time or opportunity to upload or delete some video to make space for more?
  • Has your DVR ever erased a show you wanted because you lacked space for a new recording?
  • Has your DVR ever failed to record a new show because all your old video was marked "do not delete"?
If so, what you need is Progressive Deletion -- a compression method that lets your device "squeeze more in."  Of course it is not infinite, but it can enable a lot of squeezing.

Progressive Deletion is a new spin on video compression that is the subject of one of my recently issued patents.  It is not yet available on any device, but I am seeking manufacturers who want to offer this new feature to their users.  If you are in the video industry and know people who might build this, please let me know -- and tell them!  If you are a user who likes the idea, I am interested in hearing that also.

Background on Progressive Deletion is on the Web, but briefly, here is the basic concept.
  • Many image compression algorithms allow for varying levels of compression, where the more you compress, the less the quality retained, and many cameras and DVRs allow you to set any of several levels of compression.
  • Generally you pick one, and are stuck with it.  But more flexibility is applied in "progressive" video transmissions, where you might receive only a high-significance layer if you have limited bandwidth, to get moderate quality, or additional lower-significance layers if you have more bandwidth.  The added layers add more quality when combined at the video player.
  • But either way, once you have the video saved on your device, it can't be made smaller without reformatting, and that takes time (if enabled at all).
  • Progressive Deletion methods take this one step farther by storing video in your device layer by layer, so that an entire layer can be instantly deleted if you want to sacrifice some quality to free some space.
Thus you can "squeeze more in" by simply telling the device to delete some low significance layers, and just keep on shooting or recording.  That could also be set as an automatic operation -- your device might indicate that you are reaching a deletion point, and just do it if you keep shooting or recording, with no interruption at all.  Of course you might also be given the option to select specific videos to be squeezed or not.

All of this is done without changing the compression method, just by changing storage order (from by time, to by layer). It maintains compatibility with standard formats by just exporting the standard ordering when video is uploaded or transmitted (or importing from standard ordering when downloading).
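As a rough illustration of that storage-order idea, here is a minimal Python sketch. It is an assumed simplification for explanation, not the patented method: a clip is held as significance layers, the least significant layer can be dropped instantly to free space, and export rebuilds a standard per-frame ordering for upload.

```python
# Illustrative sketch of Progressive Deletion (assumed structure, not the
# actual patented design): video is stored layer by layer rather than frame
# by frame, so a low-significance layer can be deleted instantly to free
# space -- trading some quality for room to keep recording.

class LayeredClip:
    def __init__(self, layers):
        # layers[0] = most significant (base quality); later = refinements
        self.layers = list(layers)

    def size(self):
        return sum(len(layer) for layer in self.layers)

    def squeeze(self):
        """Instantly free space by deleting the least significant layer."""
        if len(self.layers) > 1:      # always keep the base layer
            self.layers.pop()

    def export(self):
        """Regroup layers into standard per-frame order for upload."""
        return [tuple(layer[i] for layer in self.layers)
                for i in range(len(self.layers[0]))]

clip = LayeredClip([[b"base0", b"base1"], [b"mid0", b"mid1"], [b"fine0", b"fine1"]])
print(clip.size())   # 3 layers x 2 frames = 6 stored chunks
clip.squeeze()       # drop the "fine" layer: instant, no re-encoding pass
print(clip.size())   # 4 chunks remain; quality reduced, space freed
print(clip.export()) # frames rebuilt from the remaining layers
```

Note that squeeze() involves no re-encoding at all, just discarding stored chunks, which is what makes the "keep shooting" behavior possible; export() restores a standard interleaved ordering for compatibility.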



Friday, April 08, 2011

TVs, iPads, Time Warner Cable, and Viacom -- Copyright vs. Copyrape

The latest overreach of copyright owners over the Time Warner Cable iPad TV app is an interesting encapsulation of all that is wrong with the current excesses of copyright.

I see this as a key policy issue relating to my theme of "user-centered media" -- one that gets to the heart of the social contract behind copyright and all intellectual property. The key question is the balance between what is good for users and what is fair incentive to content creators. The Constitution wisely embraced that balance, but many have lost sight of it.

The case reported by the NY Times Media Decoder is a classic of overreach (and one in which I find myself in the surprising position of supporting the cable companies).  As the Times reports, Viacom pleads that Time Warner Cable’s actions “will interfere with Viacom’s opportunities to license content to third-party broadband providers and to successfully distribute programming on its own broadband delivery sites.”  Let's think about that, both from a technical and a policy perspective.  This is not really a question of TWC vs. Viacom, but of the public vs. the rights-holders.

First, a quick look shows how silly this is from a technical perspective:

  • Doesn't Time Warner's distribution of Viacom channels to TVs in the home limit "Viacom’s opportunities to license content to third-party broadband providers?"  If I could not get The Daily Show from TWC, I would certainly be much more inclined to use Hulu for it.  Why does Viacom allow that?
  • How is an iPad different from a TV?  (Answer keeping in mind that this is the 21st Century.) 
  • I have a Blu-Ray player and a Mac Mini both HDMI-connected to my TV, so I can watch any "third-party broadband provider" programming on my TV, and don't need TWC if Viacom is available through other providers. I can view such broadband provider content on any screen I like, and they are completely free to compete with TWC.  (Such any-screen connectivity will soon be the norm.)
  • Why should TWC be locked off the iPad when Hulu is sold rights to distribute Viacom programming to any Internet-connected screen, including both TVs and iPads?
  • Remember also that TWC cable TV and broadband in TWC-served homes both arrive as RF signals running in different channels on the same coaxial cable!
  • Once more, what century is this?
But this is really a deeper question of policy, and both content owners and content users seem to forget the basic social contract that drives copyright.  Let's step back to those basics:
  • Copyright is designed to maximize social welfare by encouraging content creation
  • Copyright owners are given limited rights as their incentive to create content
  • The public pays to compensate creators for content.
  • The public also may pay distributors and device manufacturers for facilitating access to content, but that has nothing to do with copyright.  (I have to pay for a book of Shakespeare plays or the Bible, or to download it to my cell phone in Timbuktu, but the content is free.)
Thus various parties have rights to compensation:
  • The creators of Viacom content have rights to compensation for their content.
  • Viacom is entitled to collect such copyright-related compensation, as well as compensation for their contribution to distribution (as well as some profit).
  • Distributors and device providers are entitled to compensation for distribution and devices (which, again, has nothing to do with copyright).
But the viewer who pays for content has purchased the right to enjoy that content.  That can take a number of forms, but none are tied to what screen the viewer is using.  For a single viewer (or household, or whatever unit is bought):
  • I can pay for time-limited access (such as a streamed subscription) or for permanent access (such as a download or DVD).
  • Those costs may be bundled with distribution and device costs, but the underlying copyright fee is a simple and distinct component of that bundle.
  • Any copyright-based limitation to content by device or location or technology that goes beyond the simple distinction of time-limited or unlimited access is without foundation.  Such limitations might be forced by technical limitations, but once those technical limitations disappear, they have no basis.
So if I pay for a Viacom program (content) one time, for a month, or forever, I should have unlimited rights to enjoy viewing that content, one time, for that month, or forever.  There might also be software fees for apps, and bandwidth fees for distribution, but there is no basis for any further content viewing fee.*

The copyright owners seem to have forgotten that the maze of licensing that they have so many lawyers working on is mostly an accident of technological history.  Hopefully the courts will not lose sight of the basics and let the tail wag the dog, now that technology is liberating us.  Hopefully they will not let copyright turn into copyrape.

Content does not want to be free, if its creator wants compensation (subject to his limited copyright).  But once I have paid for a license, I should be free to view it as I like, for whatever amount of viewing I have paid the creator for.  Watching multiple times might be a multiple use, but watching on one screen rather than another is not a different use.

Ask not for whom the copyright tolls, it tolls for thee -- for the public welfare.

---------------------------------------
*A related issue is the current idea that cable-sourced access might be limited to in-home viewing.  That too is an artificial limitation, with no inherent justification.  It may be hard to prevent abuse of licenses if I can let all my friends view my TWC content in their homes, but as long as TWC has the technical means to limit viewing to valid subscribers (and those viewing with them), why should there be any geographic restriction limiting out-of-home viewing?

Monday, March 14, 2011

How do you explain something that's never existed before?

This is one of my favorite images, and largely speaks for itself.  So you can stop here (all else is commentary).  

----
A "better mousetrap" is easy to explain.  The first mousetrap, like the first wheel, is not so easy.  

This cartoon is from the October 1981 announcement of the Xerox Star workstation, the productized version of the Alto, the very first WIMP (Window, Icon, Menu, Pointing device) Graphical User Interface.  (To anyone who has the full advertisement this was clipped from, I would love to have a better, more complete copy.)

Relevant to my theme of user-centered media, this gets to the idea that the user may not know what he really would like.  In many respects, Steve Jobs is a champion of user-centered media (even if maybe not user-centered business practices).  Asked why Apple doesn’t do focus groups, Jobs responded: “We figure out what we want. You can’t go out and ask people ‘what’s the next big thing?’ There’s a great quote by Henry Ford. He said, ‘If I’d have asked my customers what they wanted, they would have told me, “A faster horse.”’”  Of course we need to think of the user, and usually should listen to them, but to innovate, one must look far in front of them.

All the best really new ideas are simple at heart, but have many aspects and embodiments.  Like an embryo, it may be all there at conception at some level, but the details that work in the world unfold as you let it grow to maturity.  Depending on context, some aspects grow faster and are more apparent than others.  But explaining them is no easy task.  I have enjoyed seeing many new ideas in early stages, but am still trying to learn how to explain them.  Steve Jobs has the advantage of being able to build them and show them off.  I have not had his resources.  And sometimes no one has the resources until the time is right.

I tried to convince Mobil Corporation to buy a Star workstation to experiment with when I was in their technology planning group in 1981, but it was too expensive, even for Mobil (which was the first company to buy a Cray supercomputer)!  The workstation cost about $100K, but  as I recall, a useful single-user system also needed a file server, print server, and communications server, totaling about $250K minimum (about $600K in current dollars).

I have watched hypertext unfold since 1969.  Ted Nelson did a masterful job explaining it (inspired by Vannevar Bush's 1945 vision in The Atlantic Monthly), and Doug Engelbart spectacularly demonstrated similar techniques in what was called "the mother of all demos" in 1968.  But it was slow to reach wide recognition until technology advanced, Berners-Lee simplified it, and Andreessen packaged it.

As to my own inventions, I have struggled with the challenge of trying to explain online/local hybrids in 1994 (now in RSS, AJAX, and HTML5), coactive TV companion devices (now emerging for iPads) in 2002, and now FairPay in 2010.  (A companion post is on my FairPayZone blog.)

-----------------
[Caption text:  "How do you explain something that's never existed before? ... He had a similar problem"]

Thursday, February 17, 2011

Hyperlocal News symposium by MIT Enterprise Forum of NYC -- 2/24

Hyperlocal News: A New World of Journalism, Sustainable Business Models, and the $30B Local Ad Market promises to be a very interesting NYC panel session.  As board organizer of the event for MITEF-NYC, I am pleased to have a very strong and diverse mix of panelists, and look forward to some stimulating dialog.  Aside from major players like the NY Times and Patch, we have a smaller startup, the Alternative Press, and Outside.in, a technology/infrastructure provider.

A very nice preview article on the event, and on the MIT Enterprise Forum, was published today by The Alternative Press.

From my "user-centered media" perspective, hyperlocal is an interesting development, with farther to go in use-centered control of locality -- as to geography, time, and context.  Instead of just a newspaper focused on my community, I want to see more context sensitivity and control.  Sometimes I want to know:
  • why there are sirens in my neighborhood right now (more and longer than usual, this being Manhattan)?
  • what are the fireworks I see on the Hudson now, and how do I get advance notice of them?
  • what events match my interest profile (graded by distance vs. level of interest)?
  • about my home location, my work location, or a location I am visiting or passing through.
I don't know that the panel will get to these questions, but there are many other interesting ones they will address.  I have been involved in various online news services since the late '80s, and what I see as interesting is not always shared by the powers that be.

One of my recent projects (with impact yet to be determined) is a radically new pricing process for digital media called FairPay.  This has strong potential for news services, including hyperlocal ones.  More on that is at my FairPay Zone blog.

Of course I will be at the event, and will be happy to discuss FairPay, and other user-centered issues, with anyone there.

The Daily, iPad, and Apps ...or Web browsing with HTML5 -- Which paradigm?

The appearance of  The Daily from News Corp. is seen as a big step in the online journalism business, as described in a WSJ article.

I played with it briefly, and it brought me back to some key questions about the future of media.  It will be very interesting to see how it does.  There is a range of important issues, and here are some impressions.

The interesting business issue is how app models are seen as a last chance to give publishers another bite at the monetization apple (pun intended) vs. free Web content.  This depends to some extent on whether Apple and other app stores let publishers keep enough money and enough control of the customer relationship (which Apple clearly hates to do, but Google is more open to).  But with HTML5 Web apps as alternative, that may become a harder sell than Murdoch now hopes.

Underlying this is the big technology question of whether the app fad loses out to HTML5 Web browsing.  In many respects, the app/widget model is a giant step backward.  Pre-Web, there were "apps" for every online service, and they were all unique and non-interoperable, with a clutter of invocations and divergent UIs.  The Web/browser brought a "World Wide Web" of consistency and interoperability that still enabled flexibility and varying look/feel.  A key issue is how to benefit from apps/widgets without going back to another age of islands and silos.  I built some of the first pre-Web publisher "apps" for TV Guide (hello again, News Corp), Golf Magazine, Sierra, and others in the early '90s, and saw firsthand how much the Web simplified things for both publishers and users.

The question is: why bother downloading apps, when it seems HTML5 will soon give pretty much the same UI with no download?  Most of the current UI benefits of apps will soon go away.  The lasting benefit of the app store is central merchandising/sales (and a home page UI), and as Google shows with its Web app store, this can be done as little more than a Web site.  A few useful links are an Engadget article, the Chrome Webstore, and its FAQ.
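To illustrate how little the Chrome Web Store asks beyond an ordinary Web site: a "hosted app" there is essentially the existing site plus a small JSON manifest pointing at it (the name, URLs, and icon file below are hypothetical):

```json
{
  "name": "Example News Reader",
  "version": "1.0",
  "app": {
    "urls": ["http://news.example.com/"],
    "launch": {
      "web_url": "http://news.example.com/reader/"
    }
  },
  "icons": { "128": "icon_128.png" }
}
```

The content itself stays on the publisher's servers; the store contributes only the icon on the home page and the merchandising around it.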

Check out the NY Times and SI Snapshot Chrome apps for an app-like experience in a browser, with little or nothing to download.  The NYTimes Chrome site actually runs in Safari on the iPad and looks/acts much like the iPad app (but seems to give a different content mix).  The only essential thing the app store really adds is the home page array of icons (and maybe a different way to get people to pay).
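Much of that "nothing to download" app-like behavior comes from HTML5 itself -- for example, the application cache, where a page simply declares which files the browser should keep for offline use (file names here are hypothetical). The page opts in with `<html manifest="reader.appcache">`, and the manifest looks like:

```
CACHE MANIFEST
# v1 -- files listed here are stored locally for offline, app-like use
index.html
styles.css
reader.js

NETWORK:
# everything else still requires a connection
*
```

No app store, no install step -- the "download" happens invisibly on first visit and updates when the manifest changes.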

I will bet on the browser.  It offers the best overall and most open user-centered experience.  And I think there are other ways to solve the monetization problem.  (One in particular is my FairPay pricing process, with an example of usage for a newspaper on my FairPay Zone blog.)