Tag Archives: ibm watson

Guiding Eyes story featured on US national television

It is a real privilege to work with the inspirational Guiding Eyes for the Blind.  They have an incredible mission to place guide dogs with blind individuals.

The project involves looking at all the data they have moved to the IBM Cloud: genetic data from 30 years of dog breeding and thousands of questionnaires in Word format from trainers who spend two years raising puppies.

The potential for cognitive applications like IBM Watson to find new insights which can help improve their programs is tantalizing. A group at SJSU is working on the data right now. Early indications suggest that genetic data may not be a strong determinant for whether or not a dog performs well. Fuel to fire the nature vs nurture debate?

We spent a weekend in upstate New York with the Barbarian crew and Tucker Walsh telling this story from the viewpoint of a dog.

CNBC ran a feature on this story and Creativity called us out.

We pulled together all the key story elements on this Medium page.


Insights on the future of tech from Qualcomm’s Dr Paul E Jacobs

Funny that this week, driving home from a startup event at the Computer History Museum in Mountain View, I should stumble onto an NPR broadcast from the very same location, this time featuring an interview with the very smart, very well-spoken Dr Paul E Jacobs of Qualcomm.

On the subject of being well-spoken, Dr Jacobs points out that he is essentially an engineer at heart and being confident on large stages or in front of TV cameras didn’t exactly come naturally: it’s something he’s worked at. Now isn’t that heartening?

Talking through his history, Dr Jacobs offers a clear picture of how exactly he became so prescient in understanding that the telecom industry would require mobile computing chips supporting a lot more than just speech. He explains how in the early 90s (i.e. around the time when the internet was just a networking experiment linking a bunch of colleges) he was working on speech optimization and speech compression algorithms. This planted the fundamental idea that speech was just one potential form of data that can flow through the pipes of a telecom network. It took the better part of 20 years for the smartphone to arrive and finally relegate speech to just another app on the phone we keep in our pockets. Dr Jacobs’ vision has helped ensure that Qualcomm is a major provider of the chips needed to support this revolution.

So, what happens when you ask someone this prescient about the future?

Interestingly, Dr Jacobs concentrates a lot on healthcare. For instance, he talks about a sensor so small it can sit in your bloodstream and notice the indicators of a heart attack, which typically show up two weeks before the event. Then what does it do? It gives you a call on your smartphone and tells you to watch out!

He talks about advances in artificial intelligence and cognitive computing (with a fleeting reference to IBM Watson), and about robots that learn by being physically guided rather than programmed. He talks about just how close we are to producing computer systems that can mirror and reproduce human thought and, potentially, finally pass the Turing test.

I’d strongly encourage you to take the time to listen to the whole fascinating interview:

More info on KQED

Big data means big changes for management: MIT

Last week I had the great opportunity to gatecrash an MIT alumni event on the topic of MIT and The Digital Economy packed with luminaries of the business/technology field such as Andrew McAfee and Erik Brynjolfsson of MIT, Tim O’Reilly, Jeremy Howard from Kaggle and Vivek Ranadivé from TIBCO.

Whilst the overall discussion centered around the past, present and future of the ‘Digital Economy’, the event really focused on how big data is changing business and the way we work.

As an aside, I found this particularly interesting because in some of the work I’ve been doing, we tend to split big data out from business analytics. We think of big data as predominantly focused on the fact that as the world goes digital (be that the growth in social media data or sensors driving the internet of things), we now have to deal with data in greater volumes than ever, generated at a higher velocity, and arriving in a larger variety of formats. A term I’ve heard is the ‘data wrangler’, who needs to be able to deal with the collection and storage of all those petabytes. Business analytics, on the flip side, relates to the science/art of extracting meaning from that data: the kind of meaning that can help businesses function more effectively.

During this MIT session, this distinction was not made. Big data was taken to relate to both the storage and processing of information in order to uncover patterns or make decisions previously unattainable. Could this be the semantic direction to go? It looks like the market is currently moving in this direction:


(via Google Trends)

So, with that said, let’s focus on ‘big data’. My key takeaways from the event:

The reframing of Artificial Intelligence problems into Big Data problems

The classic example which both Andrew McAfee and Erik Brynjolfsson offer up here is that of driving.

You don’t need to go more than ten years back to a time when the problem was analyzed by looking at us humans and what it takes for us to drive. You’ve got our past experience of the road. You’ve got all that sensory data we take in when we reach a junction. Trying to reverse-engineer all the judgements and decisions we make and pour those into a computer represents a huge challenge.

Now look at the way Google and others are developing ‘driverless cars’. Through its Street View and Earth services, Google has a very detailed, data-rich map of the roads. Add to that sensors that constantly monitor the speed and position of all objects around the vehicle (and can even factor in blind spots), and you have a system that can control a vehicle (arguably more safely than we can) using data processing.

They also point to the improvements in IBM Watson as it prepared for its appearance on the US quiz show Jeopardy! The system’s ability to answer questions improved as the amount of data it was fed grew.

In his final presentation, Andrew McAfee summed this up particularly well: “more data trumps better algorithms”.

Big data can beat domain knowledge

Kaggle’s Jeremy Howard explains that their big data challenges are typically won by folks who have no knowledge of the industry or company that posed the challenge. The so-called ‘experts’ and companies that frame the problems they have been working on are generally outperformed by ‘novices’ adept at processing the data. Jeremy, who is himself a very successful Kaggle competitor, suggests that there is real value in initially casting an unbiased eye on the data. This is kind of worrying, as it suggests that what we assume we know about the world (and build up through experience) can often be wrong. Having said that, once Jeremy has formed his initial impression, he says he will look to learn about the domain in order to refine his model.

So, while domain knowledge can be of limited value in solving big data problems, Andrew McAfee points out that even big data companies see the need for domain-specific skills. PASSUR, who use data to help airplanes flow in and out of airports more efficiently, are in the process of hiring airport ground staff with extensive experience. What these folks can do is expose the questions you should be posing to your big data experts.

At the business level, this does mean that big data can potentially lead to better strategic decisions than those made by senior management. At the very least, senior management (or any management, for that matter) should look to analyze the greatest pool of data before making key decisions.

Analysis of data-in-motion offers huge opportunities for improving customer experience

One of the notions brought in by big data is the idea that we can process and react to data-in-motion. This means being able to respond to signals instantly, rather than waiting for the data to be stored (say, in a relational database) and then querying it (e.g. using SQL).
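To make the contrast concrete, here is a minimal, purely illustrative Python sketch (the readings and threshold are invented): the streaming version reacts to each signal the moment it arrives, while the batch version can only answer after everything has been stored.

```python
def react_to_stream(events, threshold=100):
    """Data-in-motion: decide on each reading as it arrives."""
    alerts = []
    for reading in events:
        if reading > threshold:
            # In a real system this would trigger an action immediately
            # (e.g. notify someone) rather than append to a list.
            alerts.append(f"alert: {reading}")
    return alerts


def query_stored(stored_readings, threshold=100):
    """Store-then-query: the batch equivalent of something like
    SELECT reading FROM readings WHERE reading > 100."""
    return [r for r in stored_readings if r > threshold]


readings = [42, 150, 99, 210]
print(react_to_stream(readings))  # alerts fired as each event was seen
print(query_stored(readings))     # same answer, but only after storage
```

Same result in both cases; the difference is *when* you get it, which is exactly what matters for reacting to a customer in the moment.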

TIBCO CEO Vivek Ranadivé offered an excellent example. When you use an ATM (cash machine), why does it prompt you for a language choice after you insert your card? By taking your card number and looking at your communication preferences, the bank should be clever enough to at least make an intelligent guess at which language you require for the interface (ideally with a small link somewhere to switch languages in case it guessed wrongly). Obviously any but the smallest bank has a large number of customers, and we need our ATMs to respond quickly, so this kind of query needs to be optimized to occur in near-real-time.
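As a rough sketch of that ATM idea (all card numbers and preferences below are made up), the near-real-time requirement essentially means the language guess should be a pre-computed, in-memory lookup rather than an ad-hoc query against a remote database:

```python
# Hypothetical pre-computed cache mapping card numbers to the
# language each customer uses in their correspondence with the bank.
LANGUAGE_PREFS = {
    "4111-0001": "es",
    "4111-0002": "fr",
}


def greeting_language(card_number, default="en"):
    """Guess the interface language for this card with a constant-time
    dictionary lookup, falling back to a default for unknown cards."""
    return LANGUAGE_PREFS.get(card_number, default)


print(greeting_language("4111-0002"))  # known customer -> "fr"
print(greeting_language("4111-9999"))  # unknown card -> default "en"
```

The fallback default plays the role of the "small link to switch languages": the guess should help when it is right and cost almost nothing when it is wrong.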

The same is true for many customer experience challenges, including offering micro-targeted advertising or augmentation of customer service phone calls.

So, one thing that was clear from the sessions is that we are still way down on the maturity curve in terms of our ability to use big data to make a meaningful impact on business. As both computing power and the amount of data at our disposal continue to grow, big data will become increasingly important to business.

More on the event