Dynamic Privacy UX

I’ve heard the term “Identity based computing” thrown around a bit recently, but maybe we should be thinking Dynamic Privacy.  “Identity based” seems not right given that for many years UX has been aware of the user: notifications just for you; your weather; your stocks; more recently, the computer being unlocked by the proximity of a biometrically authenticated device (Apple Watch).

But in watching a preview video of the iPhone X experience a little piece struck me: the phone recognizing your face, and then doing a full reveal of the previously private contents of notifications. This seemed to be taking information disclosure to another level. Subtle, but check it out (the video starts at the 2:15 mark):

A bit of a leap, but it reminded me of the Minority Report / Blade Runner ads seeming to address you personally as you walk by.

[Image: Minority Report–style personalized ads. Source: technophilespodcast.com]

Tricky stuff. There’s a fine line between convenience and embarrassment. If the iPhone is lying on the conference room table in a meeting, and the contents of my texts are intentionally hidden lest they reveal something personal (“Honey, did you like what I packed in your lunch box today?”, or “I can’t believe Harry is droning on again”), maybe it’s actually better that way.

Of course, this is where AI — in this case Siri — will come into play, and further entrench the OS-based agents. Siri should know that I’m in a meeting based on my calendar, and despite FaceID correctly identifying me, iOS would know to be discreet. Perhaps FaceID will end up calibrating some degree of how much you’re paying attention: you need to be able to see that you have notifications, but maybe content is revealed progressively as the phone is brought to a proximity or viewing angle deemed private enough.
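To make the idea concrete, here’s a purely illustrative sketch of that kind of progressive disclosure logic — none of this is Apple’s actual API, and the signal names and thresholds are invented for the example:

```python
# Hypothetical sketch: deciding how much notification content to reveal
# from a handful of context signals. Thresholds are made up for illustration.

from dataclasses import dataclass

@dataclass
class Context:
    face_recognized: bool      # e.g. a Face ID match
    in_meeting: bool           # e.g. inferred from the calendar
    viewing_angle_deg: float   # angle between the screen and your gaze
    distance_cm: float         # phone-to-face distance

def disclosure_level(ctx: Context) -> str:
    """Return 'hidden', 'sender_only', or 'full' for notification previews."""
    if not ctx.face_recognized:
        return "hidden"
    # Even for the right user, stay discreet in a meeting.
    if ctx.in_meeting:
        return "sender_only"
    # Reveal full content only when the phone is held close and roughly
    # face-on -- a posture deemed "private enough".
    if ctx.distance_cm < 45 and ctx.viewing_angle_deg < 20:
        return "full"
    return "sender_only"
```

The point of the sketch is that identity (the face match) is only one input; the calendar and physical posture of the device gate how much gets revealed.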

Being geo- and calendar-aware is relatively straightforward (oh how the bar has been raised!). Going the next step to being aware of who you’re actually with might be tough because you’re straddling OS ecosystems (Siri having to be aware of, say, your Google apps).

@techvitamin 2.6: Lauren Woodman, CEO of NetHope


This episode is mostly about driven professionals doing great work in difficult places for people who really need help. Fundamentally that’s what Lauren Woodman and her NetHope team do. It’s not about speech recognition, AI (yet), or your wifi fridge telling you that your avocado ice cream is about to melt.

We talk about providing connectivity in giant, semi-permanent refugee camps, and about streams of migrants — otherwise educated and smartphone carrying people trying to live their lives — and giving them basic services that of course we’d crumble without. A constant thread: having the grit to work through solutions in places where the environment (physical and political) is potentially hostile. They do work that’s super tangible (getting satellite dishes up), and work that’s less so (data security policy so refugees are protected even in cyberspace). It’s applied, 100% non-frivolous tech.

The Dadaab Refugee “Camp”

There are many things to admire about Lauren, not least that she’s so effectively made the transition from tech — a culture not known for patience or diplomacy — to the world of development. She and her team inhabit a space between governments and a true who’s-who of partners, including non-profits, massive NGOs, and tech giants (the Bill & Melinda Gates Foundation, Path, Google, Microsoft, Facebook, Cisco, Dell, etc.). This snippet gives you a sense for how crisp Lauren is about her very unique organization:

NetHope is constantly on the lookout for how to operate more efficiently, both in terms of making their partners’ funds go further and in terms of how to scale, including running training programs — NetHope Academies — so they can get abundant local talent spooled up. The next frontier is using data in ways that were never before possible.

Lauren’s a Smith College and Johns Hopkins SAIS grad who’s on the front line of some of the world’s most painful humanitarian situations, and she’s well worth the listen.


@techvitamin 2.3: Matt Revis, VP Product, Jibo


As they say, hardware is hard. Matt Revis — a veteran of the speech recognition wars at Nuance, and now VP of Product Management at social robotics startup Jibo — is not someone to shy away from a tough challenge.

Getting various software keyboards and versions of Dragon shipped by OEMs on hundreds of millions of handsets (smart and some not so smart) takes a willingness to grind, and Matt has that in spades. Good thing too, because he’s jumped into an exploding segment — intelligent home devices — with relentless, well-funded competitors who have platforms and data that may provide quite a moat.

Jibo is taking a different approach than, say, Echo or Google Home. They believe a slightly anthropomorphic little robot, tuned to interact and genuinely connect with different members of the family, is a differentiated play versus static appliances with disembodied personas (Alexa, Google Assistant, etc.). Jibo is all about being relatable, and funny, and someone you’re invested in as they “grow”.

Much of this strategy is based on research done by Cynthia Breazeal, the charismatic robotics star who pioneered this work at MIT’s Media Lab before its spinout into Jibo. Both Matt and Steve Chambers (Nuance’s dynamic #2 for years) have signed up to help Cynthia bring the little robot to market.

It won’t be easy. The tech (think Alexa strapped to an Echo that moves in place but also has facial recognition and a display) has a lot of surface area where the table stakes are moving very quickly. And once they’ve figured all of that out, then they need to build and sell it.

But Matt (and Steve) believed in speech-based personal assistants years before Siri, and if anybody can do it they can. In this episode, Matt and I discuss many of their challenges, their unique approach, and how they’re doing. It’s “the hardest thing I’ve ever done, and the most fun — both by a lot”, and you’ll hear the authentic voice of the entrepreneur. Have a listen to the podcast, but also watch the Jibo Program Update below, which gives you a sense of the V1.0 product, but also of how the business is managing the expectations of a community eager to get its hands on the guy.

Here’s a snippet from the full podcast:

 


@techvitamin 2.1: Steve Murch, Founder/CEO of BigOven

Podcast conversation with Steve Murch and John Pollard about BigOven, food, machine learning and how people and families organize the creation of meals
Steve Murch

Steve Murch is a smart, humble and motivated guy who’s had an incredible run in tech.

Back in 1991 — after he’d done all the school (Carnegie Mellon, Stanford, and HBS) — he was part of the initial Access team at Microsoft, launched their first online multiplayer games (zone.com), and after leaving founded VacationSpot (think HomeAway) with Greg Slyngstad — which they sold to Expedia in 2000. While at Expedia he led the Vacation Packages business, and helped revolutionize the way people buy travel.  He was also the Chairman of Escapia for five years, and has been a lecturer at UW’s Foster School of Business.

Faced with the grim prospect of early retirement or staying in the tech mix, he focused on getting back to crafting software, eventually leading to BigOven…a recipe db/meal planner with more than 12M downloads.

In this episode we talk a lot about food, how tech can impact the planning and yes, cooking of meals, and what techs like machine learning and Echo may mean for how people behave in the kitchen and store.

Food waste and families eating together are both important priorities for BigOven, and Steve provides compelling evidence for why. Just one data point: eating at least two meals together as a family per week was the only variable found to correlate with kids becoming National Merit Scholars. (!)

As mentioned in the show…check out the 1956 vision of computers in the kitchen starting around 25 seconds in:

Also mentioned in the show, an experiment by Tesco in Korea for a virtualized store in a subway…just use your mobile device to scan, and have the groceries delivered:

Tesco’s Virtual Supermarket in Korea (Photo credit: designboom)

 


Quick takes on Apple’s September 7 keynote

I think a lot of people are pretty jaded about Apple keynotes now. For years we’ve seen Jony Ive videos with the requisite “Aluminium” voiceover, the device machining porn, and amazingly skilled close-up photography.

We’ve repeatedly heard them claim that “this is the best X we’ve ever created” — with zero press saying “you should f**ing hope so.”

All worth listening to and watching, if for no other reason than to delight in the balls it takes to completely ignore innovation happening elsewhere. Apple’s been so blatantly and shamelessly copied in so many ways for so many years, I don’t begrudge them this. But nerve they have.

So some thoughts on this last keynote:

  • Love the sprinkling in of the term “machine learning” in different places. Apple PR has been working overtime to burnish the company’s anemic rep here.
  • I love watches and have paid a lot of attention to Apple Watch. Their next gen — “Series 2” — fills some big gaps that make it more viable in the category’s sweet spot: fitness. They now have on-board GPS and better, swim-proof water resistance. A ton of engineering was required to get to these things — in particular power management — but they likely won’t get much breakthrough credit. The industrial design is little changed, but coupled with a much better watchOS, Series 2 looks solid.
  • Headphone jack gone. Fine because they’re giving away an adapter/dongle which will keep usable any quality gear you’re currently using. Not fine because you won’t be able to listen to your old headsets and charge the phone at the same time (w/o some kind of new accessory which will not be free). Feels like a new normal kind of thing that we’ll be acclimated to very quickly.
  • AirPods. Probably a stand-alone $1-2B business in short order. Bigger than that longer term. Glossy intro, complete with Jony Ive-narrated video. Love the vision. Potential big hole for version 1.0: active noise cancellation which was not mentioned, and is a little weird not to have on such a cutting edge device. Do I need to bring another headset for flights? If these passively block noise to the same extent as the corded earphones from Apple, then they won’t really work well on planes. When I lose one, do I need to replace both? Can I buy extra charger packs, or is that another thing to schlep around? They don’t look like they’ll stay in your ears with any kind of heart rate increasing activity. Is their sound quality good? Apple’s shamelessly claimed their headsets have sounded good for years when they haven’t — yielding an enormous headset aftermarket. These look like they are optimized for driving (hands-free), but will be less good for 1) exercise (staying in ear), 2) workstations (noise reduction), 3) high-end music enjoyment (fidelity). Good thing they’ll be able to cross sell Beats headphones …
  • Storage on iPhone 7: at least the base configuration is up to 32GB. Given how easy it is to shoot high quality video, that’ll be barely adequate.
  • Pulled the plug on Gold Apple Watch: it was always a bit weird, Apple selling a $10K+ bauble (the Oligarch Edition) that would be obsolete in a few years. It’s one thing to demand premium prices for well designed and high performing tech gear. It’s another to openly go into luxury categories where there is zero performance increase. I’m glad they’ve reeled this back in.
  • iPhone 7 camera improvements: Apple will breathlessly claim a design breakthrough for the device, but it really seems like the major improvement in this phone generation is the camera hardware and the supporting system. The iPhone 7 Plus has the dual cameras (that other phones have been sporting for awhile). It’s Apple’s hardware/software integration that’ll make this the best smartphone camera on the market.
  • They spent some serious presentation time on Pokemon Go for the Apple Watch. Data point revealed: 500 million downloads of Pokemon Go…I’m not sure having an Apple Watch version is really going to boost momentum here or ease usage pain points. Weird that they gave this so much time.

For the two big volume devices — Watch and iPhone — it felt like gap filling and incrementalism respectively.