Interesting article in HBR over here, implying that "Big Data" analysis reaches the limits of its effectiveness fairly fast:
Firstly, remember the Netflix competition to improve their algorithm:
Five years ago, the company launched a competition to improve on the Cinematch algorithm it had developed over many years. It released a record-large (for 2007) dataset, with about 480,000 anonymized users, 17,770 movies, and user/movie ratings ranging from 1 to 5 (stars). Before the competition, the error of Netflix's own algorithm was about 0.95 (using a root-mean-square error, or RMSE, measure), meaning that its predictions tended to be off by almost a full "star." The Netflix Prize of $1 million would go to the first algorithm to reduce that error by just 10%, to about 0.86.
In just two weeks, several teams had beaten the Netflix algorithm, although by very small amounts, but after that, progress was surprisingly slow. It took about three years before the BellKor's Pragmatic Chaos team managed to win the prize with a score of 0.8567 RMSE. The winning algorithm was a very complex ensemble of many different approaches — so complex that it was never implemented by Netflix.
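For readers who don't live in RMSE-land: it is just the root of the mean squared gap between predicted and actual ratings. A minimal sketch, with made-up ratings rather than Netflix data:

```python
from math import sqrt

# RMSE: root of the mean squared difference between predicted and actual ratings.
# These ratings are made up for illustration - not Netflix data.
def rmse(predicted, actual):
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

actual    = [5, 3, 4, 1, 2]
predicted = [4.2, 3.5, 4.8, 2.0, 2.4]
print(rmse(predicted, actual))   # ~0.73, i.e. off by roughly three-quarters of a star
```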
I recall the guys at LoveFilm, a UK Netflix lookalike, telling me that about 5 factors got them the 80/20 prediction, so there was clearly a massive falling-off in effectiveness as the complexity of the data analysis increased.
But that is predicting intended behavior on the demand side, so what about retention - is that any easier? After all, one should have bucketloads of data and lots of historical nous in dealing with one's own customers. It would appear not:
A study [pdf here] that Brij Masand and I [Gregory Piatetsky-Shapiro] conducted would suggest the answer is no. We looked at some 30 different churn-modeling efforts in banking and telecom, and surprisingly, although the efforts used different data and different modeling algorithms, they had very similar lift curves. The lists of top 1% likely defectors had a typical lift of around 9-11. Lists of top 10% defectors all had a lift of about 3-4. Very similar lift curves have been reported in other work. (See here and here.) All this suggests a limiting factor to prediction accuracy for consumer behavior such as churn.
(Lift is the ratio of the churn rate in the predicted list to the average churn rate, so if a "Big Data" algorithm produces a list of customers of whom 20% are actual churners, vs. an average churn of 2%, that is a "lift" of 20/2 = 10. That still means the list is 80% wrong, though.)
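To make that concrete, here is a minimal sketch of how lift is typically computed from a scored customer list - the synthetic scores, the 2% base churn and the cut-offs are illustrative assumptions, not the study's data:

```python
import random

# Lift: churn rate in the top-scored slice of customers vs. the overall base rate.
def lift_at(scores, churned, top_fraction):
    ranked = sorted(zip(scores, churned), key=lambda x: x[0], reverse=True)
    top_n = max(1, int(len(ranked) * top_fraction))
    top_rate = sum(c for _, c in ranked[:top_n]) / top_n   # churn rate in the list
    base_rate = sum(churned) / len(churned)                # average churn rate
    return top_rate / base_rate

random.seed(0)
# Synthetic population: ~2% base churn, and a crude "model" score that is
# noisily higher for actual churners (purely illustrative).
churned = [1 if random.random() < 0.02 else 0 for _ in range(100_000)]
scores = [c + random.gauss(0, 0.8) for c in churned]

print(lift_at(scores, churned, 0.01))   # lift of the top-1% list
print(lift_at(scores, churned, 0.10))   # lift of the top-10% list
```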
And how about predicting Ad effectiveness?
The average CTR% [Click Through Rate] for display ads has been reported as low as 0.1-0.2%. Behavioral and targeted advertising have been able to improve on that significantly, with researchers reporting up to seven-fold improvements. But note that a seven-fold improvement from 0.2% amounts to 1.4% — meaning that today's best targeted advertising is ignored 98.6% of the time.
(Actually, 0.1% sounds high to me; I'd think it was almost an order of magnitude lower nowadays.)
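Taking the quoted upper figure of 0.2% at face value, the arithmetic in the quote works out as below (the numbers are just the article's, restated):

```python
# The quoted arithmetic: a seven-fold improvement on a 0.2% baseline CTR.
baseline_ctr = 0.002                 # 0.2% display-ad click-through rate
targeted_ctr = baseline_ctr * 7      # best reported targeting uplift in the quote
print(f"targeted CTR: {targeted_ctr:.1%}, ignored: {1 - targeted_ctr:.1%}")
# -> targeted CTR: 1.4%, ignored: 98.6%
```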
Interestingly, the article predicts Big Data will help more in emerging services:
Big data analytics can improve predictions, but the biggest effects of big data will be in creating wholly new areas. Google, for example, can be considered one of the first successes of big data; the fact of its growth suggests how much value can be produced. While analytics may be a small part of its overall code, Google's ability to target ads based on queries is responsible for over 95% of its revenue. Social networks, too, will rely on big data to grow and prosper. The success of Facebook, Twitter, and LinkedIn social networks depends on their scale, and big data tools and analytics will be required for them to keep growing.
Google's declining profits may be a sign that its advantage is coming to an end, which - if the view here about diminishing returns is right - does not augur well going forward. Also, as they warn:
if you're counting on it to make people much more predictable, you're expecting too much.
Quite. And yet, and yet...one more tweak...
Also, bear in mind there are some big impacts in pivotal areas. A small change in a competitive metric like churn can have a tremendous impact, especially in a zero-sum game (e.g. mature mobile phone markets) played over multiple cycles. For example, assume two companies with equal market share, both with 20% monthly churn. A very simple simulation will show that if one player can sustain a reduction of 1% of that 20% churn - i.e. to 19.8%, lifting monthly customer retention from 80.0% to 80.2% - over, say, 36 cycles (3 years), that player ends up with 53.5% share vs. the other's reduced 46.5%, a shift of 7 percentage points of market share. Not a bad structural change in any saturated market; in fact, shifts like that can drive competitors out.
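A minimal sketch of that kind of simulation, assuming a closed market where every churner defects straight to the rival; the exact end-state split depends heavily on that switching assumption (churners leaving the market, or splitting across several competitors, give different numbers), so treat it as an illustration of the compounding effect of a sustained retention edge rather than a reproduction of the exact figures above:

```python
# Two-player churn race in a closed (zero-sum) market.
# Assumption: every customer who churns defects straight to the rival.
def simulate(churn_a=0.198, churn_b=0.20, cycles=36, share_a=0.5):
    share_b = 1.0 - share_a
    for _ in range(cycles):
        lost_a = share_a * churn_a        # customers A loses this cycle
        lost_b = share_b * churn_b        # customers B loses this cycle
        share_a += lost_b - lost_a        # zero-sum: B's losses are A's gains
        share_b = 1.0 - share_a
    return share_a, share_b

print(simulate())                         # 19.8% vs 20.0% churn over 36 monthly cycles
print(simulate(churn_a=0.19))             # a full percentage-point edge, for comparison
```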
The answer, as always, is to accurately understand the costs vs the benefits.