Category Archives: analytics

Finding a Google+ numeric ID

If you are working with the Google+ API for tracking or other purposes, you may have noticed that you need to match against the numeric ID for the G+ user account.

This isn’t the easiest thing to find (it took me and a vendor quite a while to work out), so let me share what I figured out.

There are effectively two ways:

1) Via the URL of the profile page

This one is pretty straightforward. For some accounts, clicking on a profile will take you to a URL like this:
https://plus.google.com/116376707133710026460/posts

Here the numeric ID is 116376707133710026460.

2) Via the link URL in posts

Now, for some G+ accounts, the URL structure is different. For instance, if you click on my G+ profile, you end up here:
https://plus.google.com/+darylpereira/posts

So, how do I get to my numeric ID? You go to one of the user’s posts (the user needs to have public posts for this to work), right-click and select ‘Copy link address’ or ‘Copy URL’ or whatever the option is for your browser:

[Screenshot: Google Plus numeric ID in the copied link URL]

If you paste this URL into a doc, you’ll see it’s in this format:
https://plus.google.com/112665405531807287877

So, the numeric ID for my account is: 112665405531807287877. 
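If you need to do this extraction programmatically, here’s a minimal sketch that handles both URL styles described above (function name is my own):

```python
import re

def extract_numeric_id(url):
    """Pull the numeric Google+ ID out of a profile or post URL.

    Works for URLs like:
      https://plus.google.com/116376707133710026460/posts
      https://plus.google.com/112665405531807287877
    Vanity URLs (e.g. /+darylpereira/posts) have no numeric ID,
    so the function returns None for those.
    """
    match = re.search(r"plus\.google\.com/(\d+)", url)
    return match.group(1) if match else None

print(extract_numeric_id("https://plus.google.com/116376707133710026460/posts"))
# 116376707133710026460
```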

Voila! I hope this helps if ever you have a service which needs numeric Google Plus IDs.

Big data means big changes for management: MIT

Last week I had the great opportunity to gatecrash an MIT alumni event on the topic of MIT and The Digital Economy packed with luminaries of the business/technology field such as Andrew McAfee and Erik Brynjolfsson of MIT, Tim O’Reilly, Jeremy Howard from Kaggle and Vivek Ranadivé from TIBCO.

Whilst the overall discussion centered around the past, present and future of the ‘Digital Economy’, the event really focused on how big data is changing business and the way we work.

As an aside, I found this particularly interesting, as in some of the work I’ve been doing we tend to split big data from business analytics. We think of big data as predominantly focused on the fact that as the world goes digital (be that the growth in social media data or the sensors driving the internet of things), we now have to deal with data in greater volumes than ever, generated at a higher velocity, and arriving in a larger variety of formats. A term I’ve heard for this is the ‘data wrangler’: someone who can deal with the collection and storage of all those petabytes. Business analytics, on the flip side, relates to the science/art of extracting meaning from that data, the kind of meaning that can help businesses function more effectively.

During this MIT session, this distinction was not made. Big data was taken to relate to both the storage and processing of information in order to uncover patterns or make decisions previously unattainable. Could this be the semantic direction to go? It looks like the market is currently moving in this direction:

[Chart: search interest in ‘big data’]

(via Google Trends)

So, with that said, let’s focus on ‘big data’. My key takeaways from the event:

The reframing of Artificial Intelligence problems into Big Data problems

The classic example which both Andrew McAfee and Erik Brynjolfsson offer up here is that of driving.

You don’t need to go back more than ten years to a time when the problem was analyzed by looking at us humans and what it takes for us to drive. You’ve got our past experience of the road. You’ve got all that sensory data we take in when we reach a junction. Trying to reverse-engineer all the judgements and decisions we make and pour those into a computer represents a huge challenge.

Now look at the way Google and others are developing ‘driverless cars’. Through its Street View and Earth services, Google has a very detailed data-rich map of the roads. Add to that information sensors that constantly monitor the speed and position of all objects around the vehicle (and can even factor in blind spots), and you have a system that can control a vehicle (arguably more safely than we can) using data processing.

They also point to the improvements in IBM Watson as it took on the US quiz show Jeopardy!: the system’s question-answering performance improved as the amount of data it was fed grew.

In his final presentation, Andrew McAfee summed this up particularly well: “more data trumps better algorithms”.

Big data can beat domain knowledge

Kaggle’s Jeremy Howard explains that their big data challenges are typically won by folks who have no knowledge of the industry or company that posed the challenge. The so-called ‘experts’ and companies who frame the problems they have been working on are generally outperformed by ‘novices’ adept at processing the data. Jeremy, himself a very successful Kaggle competitor, suggests that there is real value in initially casting an unbiased eye on the data. This is somewhat worrying, as it suggests that what we assume we know about the world (and build up through experience) can often be wrong. Having said that, once Jeremy has formed his initial impression, he says he will look to learn about the domain in order to refine his model.

So, while domain knowledge can be of limited value in solving big data problems, Andrew McAfee points out that even big data companies see the need for domain-specific skills. PASSUR, who use data to help airplanes flow in and out of airports more efficiently, are in the process of hiring airport ground staff with heavy experience. What these folks can do is surface the questions you should be posing to your big data experts.

At the business level, this does mean that big data can potentially lead to better strategic decisions than those made by senior management. At the very least, senior management (or any management, for that matter) should look to analyze the greatest pool of data before making key decisions.

Analysis of data-in-motion offers huge opportunities for improving customer experience

One of the notions brought in by big data is the idea that we can process and react to data-in-motion. This means being able to respond to signals instantly, rather than waiting for the data to be stored (say in a relational database) and then querying that (eg. using SQL).

TIBCO CEO Vivek Ranadivé posited an excellent example. When you use an ATM (cash machine), why does it prompt you for language choice after you insert your card? By taking your card number and looking at your communication preferences, the bank should be clever enough to at least make an intelligent guess at what language you require for the interface (with ideally a small link somewhere to switch choice in case it guessed wrongly). Obviously any but the smallest bank has a large number of customers and we need our ATMs to respond quickly, so this kind of query needs to be optimized to occur in near-real-time.
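To make the ATM example concrete, here’s a hypothetical sketch (all names and card numbers are invented): the bank pre-loads each customer’s language preference into an in-memory store, so the ATM can resolve it in microseconds at card insertion rather than querying a relational database per session:

```python
# Fictional, pre-loaded preference store (card number -> language code).
LANGUAGE_PREFS = {
    "4532015112830366": "es",
    "4716108999716531": "en",
}

DEFAULT_LANGUAGE = "en"

def greeting_language(card_number):
    # Fall back to a default (with an on-screen link to switch)
    # when we have no preference on file -- the "intelligent guess".
    return LANGUAGE_PREFS.get(card_number, DEFAULT_LANGUAGE)

print(greeting_language("4532015112830366"))  # es
```

The point of the design is that the expensive join (card to customer to preferences) happens ahead of time, so the decision at the point of interaction is a constant-time lookup.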

The same is true for many customer experience challenges, including offering micro-targeted advertising or augmentation of customer service phone calls.

So, one thing that was clear from the sessions, is that we are still way down on the maturity curve in terms of our ability to use big data to make a meaningful impact on business. As both computing power and the amount of data at our disposal continues to grow, big data will become increasingly important to business. 

More on the event

The YouTube recommendation engine: a lesson in transparent analytics

Recommendation engines are all the rage. Whether it is in the realm of social commerce (see IBM Coremetrics Intelligent Offer) or in location-based social applications like Foursquare.

As the attention span of the browsing population shrinks below that of your average goldfish, so the need to create razor-sharp, perfectly honed navigation systems increases. There’s a demand on publishers to use whatever information they have to provide a more contextualized browsing experience.

That’s all well and good, but have you ever looked at a recommendation and wondered what on earth was the system thinking when it picked it? You’ve spent months on the site exploring hardcore thrash metal so why on earth are you being offered a book on floral knitting patterns?

I just went onto YouTube and noticed that it has become pretty transparent about its recommendation engine:

Notice the ‘because you watched’.

As we use more analytics systems to build interfaces, being explicit about how decisions are made becomes increasingly important: 

Show what determined the recommendation: This answers the question of why on earth am I seeing this recommendation? In this instance YouTube bases this on what you have watched previously, but this could just as well relate to what others with similar interests have liked (the Amazon approach).

Allow you to interact with the recommendation: YouTube allows you to remove recommendations from the list that you don’t think are appropriate. One thing it doesn’t do is spell out whether that feedback is factored into future recommendations. Some systems (such as Pandora and Netflix) use a thumbs-up/down or rating system with the implicit understanding that this information will be fed into the calculations of future recommendations. As James Taylor, the Decision Management expert, pointed out to me some years ago, recommendation engines have their limits. If I booked a once-in-a-lifetime trip to Bermuda last year, there’s no point in showing me vacations to Antigua six months later. Allowing me to vote this kind of recommendation down can help systems disentangle one-shot whims from longer term patterns of behavior.

The question of privacy: Being transparent about analytics systems and exactly how visitors are being tracked can go a long way towards allaying public fears about the growing mountain of data produced by the internet in general and social networking sites in particular. Indeed, here in California there has been considerable press around a bill to increase the privacy of social networks. Justin Brookman, director of the Project on Consumer Privacy at the Center for Democracy and Technology, has said, “I think the idea of telling people what is going on and giving them control over their information from the beginning is a good idea for social networks and other places as well”. Privacy advocates are asking publishers to be more open about how data is being used.
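Putting the first two points together, a transparent recommendation payload might carry the evidence that produced each suggestion, and let the UI feed dismissals back. This is a hypothetical sketch (the field names and titles are my own, not YouTube’s):

```python
# Each recommendation carries its own "because you watched" evidence.
recommendations = [
    {"video": "Slayer - Raining Blood (live)",
     "because_you_watched": "Metallica - Master of Puppets"},
    {"video": "Floral Knitting Patterns 101",
     "because_you_watched": None},  # no supporting evidence -> suspect
]

dismissed = []

def dismiss(rec):
    """Record a thumbs-down so a future run can down-weight this item."""
    dismissed.append(rec["video"])

# A user (or a sanity check) can cull recommendations with no evidence.
for rec in recommendations:
    if rec["because_you_watched"] is None:
        dismiss(rec)

print(dismissed)  # ['Floral Knitting Patterns 101']
```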

As user interfaces become more reliant on analytics tools to offer a more personalized experience, there are significant advantages to displaying upfront exactly why we are being shown the recommendations we see.

Lotusphere 2011: building collaborative business intelligence with Cognos 10

Over on the IBM Software Blog, Cognos Product Marketing Manager Brendan Farnand explains just why business intelligence solutions from Cognos have a place at the Lotusphere social business event:

"Everyone involved in a decision or a solution needs to know who else is involved, what transpired before they were asked to contribute and what other ideas are out there for that decision or solution."

Business intelligence shouldn’t happen in isolation. As I’ve pointed out before, many reports, from sales figures to customer service levels, have added value if key constituents can comment on the results and define follow-up actions. Pairing key functions from the Lotus suite with Cognos Business Intelligence allows exactly that:

[Image: Lotus collaboration features paired with Cognos Business Intelligence]

As I won’t be at Lotusphere this year, I’m looking forward to following Brendan on Twitter.

If you can’t make it to Lotusphere, check out this Tech Talk webinar where Brendan highlights Cognos’ built-in collaboration and social networking functionality.

Optimize SEM and SEO Lead Gen Campaigns with Web Analytics (Webinar)

Integrating search engine marketing (SEM) and search engine optimization (SEO) projects and teams is a best practice that can deliver a powerful virtuous cycle. Built on the foundation of an analytics platform such as the Coremetrics Continuous Optimization Platform, an integrated approach to SEM and SEO can significantly improve the ROI from your web presence.

Multiple surveys and studies have indicated that SEO projects consistently provide extremely attractive returns on investment.  Yet eCommerce and online marketing teams frequently struggle to quantify SEO ROI: both prior to the project as part of an internal budgeting process, and after the project to evaluate its success.  Using a recent case study of a global powersports company, we will demonstrate how Coremetrics Digital Agency worked with the client to optimize their lead generation engine by integrating Search Engine Marketing with Coremetrics’ Web Analytics. Building on this SEM experience we then targeted keyword phrases with the potential for the highest, measurable SEO ROI.

We will show the virtuous circle at play between SEO and SEM:

[Image: the virtuous circle between SEO and SEM]

For instance, you can see significant improvements to your SEM campaigns by applying lessons learnt from analyzing your SEO efforts (such as which keywords drive most interactions).
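As a toy illustration of that feedback loop (all keywords and numbers are fictional), you could rank organic keywords by the interactions they drive and feed the top performers into paid bidding:

```python
# Fictional SEO analytics export: keyword -> interactions driven.
seo_interactions = {
    "atv accessories": 4210,
    "snowmobile parts": 2980,
    "jet ski covers": 530,
}

def top_keywords(interactions, n=2):
    """Return the n keywords driving the most interactions,
    i.e. candidates for SEM bidding."""
    return sorted(interactions, key=interactions.get, reverse=True)[:n]

print(top_keywords(seo_interactions))
# ['atv accessories', 'snowmobile parts']
```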

Attend this upcoming webinar with Coremetrics’ John Zoglin, Senior Director, Search Marketing Services, to learn more.

Date: Wednesday, December 15, 2010
Time: 1:00 PM EST | 10:00 AM PST 
Register now!

More about Coremetrics


Using structured data analytics to make better business decisions

In the current edition of Analytics, a cross-brand team from IBM (Irv Lustig, Brenda Dietrich, Christer Johnson and Christopher Dziekan) explain IBM’s view of the structured data analytics landscape.

Key to this model are three categories of structured data analysis:

1. Descriptive Analytics: A set of technologies and processes that use data to understand and analyze business performance
2. Predictive Analytics: The extensive use of data and mathematical techniques to uncover explanatory and predictive models of business performance, representing the inherent relationship between data inputs and outputs/outcomes.
3. Prescriptive Analytics: A set of mathematical techniques that computationally determine a set of high-value alternative actions or decisions given a complex set of objectives, requirements, and constraints, with the goal of improving business performance.

As the authors explain, this model can help businesses make better decisions, rather than simply automating standardized processes.

Let’s use the example of a fictional global shoe manufacturer we’ll call ‘Footloose’ to see how each category could be used to increase business performance.

Descriptive analytics

These are your flexible dashboards that let you focus in on key areas of the business. For Footloose, this could be all the standard operations dashboards, e.g. the one showing monthly shoe sales by region. Footloose should be able to see how actual sales fared against the forecast. Where there are deviations (say sales of sandals in Spain have gone through the roof), they can use descriptive analytics to drill down into the data. They may see that the growth is coming from Madrid and is possibly related to a major marketing push during a hot spell in that region.
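A drill-down like that can be sketched in a few lines. All the Footloose figures here are invented for illustration:

```python
# Fictional sales records and a regional forecast for sandals.
sales = [
    {"region": "Spain", "city": "Madrid",    "product": "sandals", "units": 950},
    {"region": "Spain", "city": "Barcelona", "product": "sandals", "units": 210},
    {"region": "Spain", "city": "Madrid",    "product": "boots",   "units": 80},
]
forecast = {"Spain": 600}  # forecast sandal units for the region

# Top-level dashboard view: actual vs forecast.
actual = sum(r["units"] for r in sales
             if r["product"] == "sandals" and r["region"] == "Spain")

by_city = {}
if actual > forecast["Spain"]:
    # Deviation spotted -- drill down to see which cities drive it.
    for r in sales:
        if r["product"] == "sandals":
            by_city[r["city"]] = by_city.get(r["city"], 0) + r["units"]

print(actual, by_city)  # 1160 {'Madrid': 950, 'Barcelona': 210}
```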

IBM Cognos offers this kind of descriptive analytics (including business intelligence), which can be implemented to measure and explore how a company is performing.

Predictive analytics

Here we use data from the past to make predictions about the future. For Footloose, this could include combining seasonal sales variations for a sports shoe with the longer term uptrend they have been seeing for the last few years. Footloose can also use predictive analytics to improve their web presence: they can launch a recommendation engine to suggest what a visitor might want to view next based on what they (and people like them) have looked at in the past (like the book suggestion service Amazon offers).
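A toy forecast in the spirit of the Footloose example might combine the long-term trend with a seasonal factor (all numbers fictional; real predictive models are far more sophisticated):

```python
# Fictional inputs: historical baseline, trend, and seasonality.
baseline = 1000            # average monthly units over recent years
annual_growth = 0.08       # long-term uptrend: 8% per year
seasonal_factor = {        # multiplier by month (1.0 = average month)
    "June": 1.4, "July": 1.5, "December": 0.7,
}

def forecast_units(month, years_ahead=1):
    """Project the trend forward, then apply the seasonal multiplier."""
    trend = baseline * (1 + annual_growth) ** years_ahead
    return round(trend * seasonal_factor.get(month, 1.0))

print(forecast_units("July"))  # 1620
```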

IBM SPSS offers a set of predictive analytic tools which allow business users to employ predictive insights at the point where decisions are being made.

Prescriptive analytics

How can we achieve the best outcome, whilst addressing any uncertainty in the data? Prescriptive analytics can help us answer this question. Let’s say Footloose has made its prediction about what shoe sales are likely to be over the coming year. Now they just need to figure out how to respond to those predictions. Sales of sandals are expected to remain high in Spain, so they need to increase their distribution capacity there. How should they achieve this: increase the fleet of vehicles, or build more (costly) distribution centers?

Footloose can plug the data into an optimization model (the costs of building a new distribution center, buying new trucks, gas) to calculate the most efficient supply chain to deliver the extra required capacity.
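A real solver like ILOG handles vastly larger problems, but the shape of the decision can be shown with a brute-force sketch over two options (all costs and capacities fictional):

```python
from itertools import product

# Fictional costs ($k) and weekly unit capacities.
TRUCK_CAPACITY, TRUCK_COST = 50, 120
CENTER_CAPACITY, CENTER_COST = 400, 2000
REQUIRED_EXTRA = 500  # extra weekly units Footloose must ship

best = None
for trucks, centers in product(range(12), range(3)):
    capacity = trucks * TRUCK_CAPACITY + centers * CENTER_CAPACITY
    cost = trucks * TRUCK_COST + centers * CENTER_COST
    # Keep the cheapest mix that meets the capacity requirement.
    if capacity >= REQUIRED_EXTRA and (best is None or cost < best[0]):
        best = (cost, trucks, centers)

print(best)  # (1200, 10, 0): ten trucks beat a new distribution center
```

With these made-up numbers, buying trucks wins; change the inputs and the prescription changes with them, which is the point of running the model rather than guessing.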

IBM ILOG Optimization has technologies specialized for this kind of calculation, where there are large data sets with potential uncertainty.

I’ve used this example to present a simplified view of IBM’s approach to structured data analysis and how IBM technologies can be used in tandem to improve business performance. A key advantage of these technologies is that their utility stretches across various industries and applications.

For a fuller explanation of this field, I’d definitely recommend reading the full article in Analytics Magazine.

Jeff Jonas interview with Forbes: IBM’s business analytics strategy

Jeff Jonas, chief scientist at IBM’s Entity Analytics group, recently sat down with Forbes’ Kym McNicholas to explain IBM’s strategy in the growing business analytics space.

Here are some of the key themes Jeff discussed:

Enhanced customer service
Business analytics allows businesses to make smarter decisions at the point of interaction, whether that is an application for a credit card, or an order coming in to a call center. Another facet is the growth of geolocation services like Foursquare and Gowalla, which give us much richer information on people’s movements. Using this data, when people search for information on the web, we can incorporate analytics into the results, making them more local and more relevant.

HR and the hiring process
Tying together an applicant’s information across disparate systems can help detect issues, such as whether that person has previously shoplifted from the store to which they are applying (it happens!).

Data security
As companies have more and more information on their customer base, keeping this data secure is a growing problem. Analytics can help: for instance companies can see who accesses data to help uncover data thefts and violations. Analytics tools can also be used to add a further level of encryption, so even if data is stolen, records cannot be matched back to individuals.
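One of the ideas mentioned above, making stolen records unmatchable to individuals, can be sketched with salted one-way hashing. This is illustrative only (real anonymization and Jeff’s entity analytics work involve far more than this):

```python
import hashlib
import hmac

# Hypothetical secret kept outside the dataset; if records are stolen
# without it, the pseudonyms cannot be matched back to real identities.
SECRET_SALT = b"rotate-me-regularly"

def pseudonymize(identifier):
    """Keyed one-way hash: stable for matching across systems,
    but not reversible to the original identifier."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer": pseudonymize("jane.doe@example.com"), "balance": 1200}
```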

Forbes were kind enough to create a video of the interview too:

[Video: Forbes interview with Jeff Jonas]

If you don’t know Jeff: among his many achievements, he led the development of facial recognition software used by the casino industry to thwart aggressive card-counting teams (such as the MIT team covered in the book ‘Bringing Down the House’).

Read Jeff’s full bio

Read Jeff’s blog.

IBM Business Analytics Software protecting children in care in Nevada

As the Business Analytics Forum gets underway in Las Vegas, eWEEK features an article on how IBM’s analytics software is helping the Clark County Family Services Department in Nevada improve the delivery of social services. Prior to using IBM analytics, spreadsheets were used to monitor care workers and the level of service provided. Due to population increase and the need to conform to state legislation and policies, the department needed a solution that would make reporting easier, help the department comply with new regulations, and measure business performance.

According to Eboni Washington, an IT supervisor in the Family Services Department,
“Before this we had a lot of children not being seen each month. And now we have an automated system, rather than some workers keeping a hand count of who they have seen and what they have done each month.”

For more information, read the full article in eWEEK.

#baforum