
Big data means big changes for management: MIT

Last week I had the great opportunity to gatecrash an MIT alumni event on the topic of MIT and The Digital Economy, packed with luminaries of the business and technology world such as Andrew McAfee and Erik Brynjolfsson of MIT, Tim O’Reilly, Jeremy Howard from Kaggle and Vivek Ranadivé from TIBCO.

Whilst the overall discussion centered on the past, present and future of the ‘Digital Economy’, the event really focused on how big data is changing business and the way we work.

As an aside, I found this particularly interesting because in some of the work I’ve been doing, we tend to split big data out from business analytics. We think of big data as being predominantly about the fact that, as the world goes digital (be that the growth in social media data or the sensors driving the internet of things), we now have to deal with data in greater volumes than ever, generated at a higher velocity and arriving in a larger variety of formats. A term I’ve heard is the ‘data wrangler’: someone who needs to be able to deal with the collection and storage of all those petabytes. Business analytics, on the flip side, relates to the science/art of extracting meaning from that data. The kind of meaning that can help businesses function more effectively.

During this MIT session, this distinction was not made. Big data was taken to relate to both the storage and processing of information in order to uncover patterns or make decisions previously unattainable. Could this be the semantic direction to go? It looks like the market is currently moving in this direction:

[Google Trends chart: search interest in ‘big data’]

(via Google Trends)

So, with that said, let’s focus on ‘big data’. My key takeaways from the event:

The reframing of Artificial Intelligence problems into Big Data problems

The classic example which both Andrew McAfee and Erik Brynjolfsson offer up here is that of driving.

You don’t need to go back more than ten years to a time when the problem was analyzed by looking at us humans and what it takes for us to drive. You’ve got our past experience of the road. You’ve got all that sensory data we take in when we reach a junction. Trying to reverse-engineer all the judgements and decisions we make, and pour those into a computer, represents a huge challenge.

Now look at the way Google and others are developing ‘driverless cars’. Through its Street View and Earth services, Google has a very detailed, data-rich map of the roads. Add to that sensors that constantly monitor the speed and position of all objects around the vehicle (and can even factor in blind spots), and you have a system that can control a vehicle (arguably more safely than we can) using data processing.

They also point to IBM Watson’s run on the US quiz show Jeopardy!, where the system’s ability to answer questions improved as the amount of data it was fed grew.

In his final presentation, Andrew McAfee summed this up particularly well: “more data trumps better algorithms”.

Big data can beat domain knowledge

Kaggle’s Jeremy Howard explains that their big data challenges are typically won by people who have no knowledge of the industry or company that posed the challenge. The so-called ‘experts’, and the companies that frame the problems they have been working on, are generally outperformed by ‘novices’ adept at processing the data. Jeremy, who is himself a very successful Kaggle competitor, suggests that there is real value in initially casting an unbiased eye over the data. This is somewhat worrying, as it suggests that what we assume we know about the world (and build up through experience) can often be wrong. Having said that, once Jeremy has formed his initial impression, he says he will look to learn about the domain in order to refine his model.

So, while domain knowledge can be of limited value in solving big data problems, Andrew McAfee points out that even big data companies see the need for domain-specific skills. PASSUR, who use data to help airplanes flow in and out of airports more efficiently, are in the process of hiring airport ground staff with extensive experience. What these people can do is surface the questions you should be posing to your big data experts.

At the business level, this does mean that big data can potentially lead to better strategic decisions than those senior management would make on their own. At the very least, senior management (or any management, for that matter) should look to analyze the largest pool of data available before making key decisions.

Analysis of data-in-motion offers huge opportunities for improving customer experience

One of the notions brought in by big data is the idea that we can process and react to data-in-motion. This means being able to respond to signals instantly, rather than waiting for the data to be stored (say, in a relational database) and then querying it (e.g. using SQL).

TIBCO CEO Vivek Ranadivé offered an excellent example. When you use an ATM (cash machine), why does it prompt you to choose a language after you insert your card? By taking your card number and looking up your communication preferences, the bank should be clever enough to at least make an intelligent guess at the language you want for the interface (ideally with a small link somewhere to switch in case it guesses wrong). Obviously any but the smallest bank has a large number of customers, and we need our ATMs to respond quickly, so this kind of lookup needs to be optimized to happen in near real time.
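To make the ATM example concrete, here is a minimal sketch (in Python) of the kind of near-real-time lookup Ranadivé is describing: the bank pre-loads customers’ language preferences into an in-memory store keyed by card number, so the ATM can make its guess instantly instead of firing an ad-hoc SQL query at the core banking database when the card goes in. Everything here (the CardPreferenceCache class, the sample records, the English default) is an illustrative assumption, not a description of how TIBCO or any bank actually builds this.

```python
# Hypothetical sketch: pre-warming an in-memory store so an ATM can guess the
# customer's language in near real time, rather than running a blocking
# database query at the moment the card is inserted.
# All names and data are illustrative assumptions.

DEFAULT_LANGUAGE = "en"


class CardPreferenceCache:
    """Keeps customer language preferences in memory for constant-time lookup."""

    def __init__(self):
        self._prefs = {}

    def load_preferences(self, records):
        """Bulk-load (card_number, language) pairs, e.g. from a nightly batch export."""
        for card_number, language in records:
            self._prefs[card_number] = language

    def guess_language(self, card_number):
        """Return the stored preference, falling back to a sensible default."""
        return self._prefs.get(card_number, DEFAULT_LANGUAGE)


if __name__ == "__main__":
    cache = CardPreferenceCache()
    # In practice these records would come from the bank's customer /
    # communication-preference systems; here they are hard-coded for illustration.
    cache.load_preferences([
        ("4000001234567890", "fr"),
        ("4000009876543210", "de"),
    ])

    # At card insertion, the ATM consults the cache instead of waiting on a query.
    print(cache.guess_language("4000001234567890"))  # -> "fr"
    print(cache.guess_language("4000005555555555"))  # -> "en" (default; user can switch)
```

In a production setting the same idea would sit behind a proper caching or stream-processing layer, and card numbers would never be stored in the clear, but the design choice is the point: the answer is computed ahead of time so the question can be answered in milliseconds.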

The same is true for many customer experience challenges, including offering micro-targeted advertising or augmentation of customer service phone calls.

So, one thing that was clear from the sessions is that we are still way down the maturity curve in terms of our ability to use big data to make a meaningful impact on business. As both computing power and the amount of data at our disposal continue to grow, big data will become increasingly important to business.

More on the event