Monday, July 25. 2016
System Dynamic model of Brexit (Simplified)
Now that the emotion around the UK Referendum has (hopefully) died down a bit, it's time to look at why the (apparently) surprising Brexit result occurred. Based on experience during the referendum campaigns, and quite a lot of reading afterwards, three major causes stood out.
A Mismatch of Belief Systems
In essence, the Remain camp believed an argument based on the EU as a desirable, stable platform going forward, plus the moral and economic benefits of its precepts, was persuasive. Added to that, they clearly felt that Establishment voices of authority would persuade the undecided, and if that didn't work, "Project Fear" - promising Doom if the UK left the EU - would persuade the floating voters, as they believed it did in the Scottish Independence referendum. (By the way, I think this was false - my recollection is that, at the end, the "Remain in UK" camp in the Scottish referendum made a lot of concessions to the Scots in the last days, as Fear wasn't really working out.)
Attitude - Hubris and Nemesis, and polls
Before the campaign, Remain was supposed to win, very comfortably, according to the polls. Bookies were offering 1/5 odds on a Remain win. In my view that influenced the campaigns: Remain was initially overconfident, whereas Leave knew they were in for a real scrap. As it became clearer that Remain was not going to be a slam dunk, panic set in, whereas Leave grew in confidence. This was exacerbated by the much larger risks to the senior people involved in the incumbent Remain camp.
The System Dynamics of a disaster
However, the biggest failure of the Remain campaign was not addressing the situation dynamically as it progressed. The (simplified) System Dynamics diagram above tries to capture this. In essence, Remain's arguments, exaggerated by Project Fear logic, tended to over-egg the risks (or at least were perceived to do so), which led to increasing resistance to the message and gave pro-Leave media a foothold to start landing some telling blows. The Remain response was to double down on Project Fear, with increasingly exaggerated claims of Doom, which both allowed Leave to lampoon Remain's claims and gave it headroom to make even more exaggerated claims of its own. Cycle this through a few times, increase the hectoring volume, and more and more people just switched off to the messages of Doom. As it became clear Leave was gaining, they grew in confidence, getting that all-important momentum - undecided people like winners. Then Remain visibly panicked, and its output became more and more unbelievable (we eventually got to Brexit starting World War 3). By the time the Conservative government, who had largely run the Remain show, started to criticize Labour for not doing enough, it was clear Remain were mortally wounded.
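The doubling-down loop described above can be sketched as a toy simulation (the variable names and parameter values here are invented for illustration - they are not taken from the actual model):

```python
# Toy sketch of the reinforcing "Project Fear" loop described above.
# All parameters are invented assumptions, not calibrated values.

def simulate_campaign(rounds=10, escalation=0.3, fatigue=0.25):
    """Each round the message gets more exaggerated; the more
    exaggerated it is, the more the audience tunes out, and the
    more the campaign doubles down in response."""
    exaggeration = 1.2   # intensity of the Doom message (starts over-egged)
    receptiveness = 1.0  # fraction of audience still listening
    history = []
    for _ in range(rounds):
        # Audience resistance grows with perceived exaggeration
        receptiveness *= (1.0 - fatigue * (exaggeration - 1.0))
        receptiveness = max(receptiveness, 0.0)
        # Campaign reacts to falling receptiveness by doubling down
        exaggeration *= (1.0 + escalation * (1.0 - receptiveness))
        history.append((round(exaggeration, 2), round(receptiveness, 2)))
    return history

for step, (ex, rec) in enumerate(simulate_campaign(), 1):
    print(f"round {step}: exaggeration={ex}, receptiveness={rec}")
```

Run with these assumptions, receptiveness collapses while exaggeration runs away - the reinforcing loop in miniature.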
What should Remain have done instead?
This is a simplified model - there are some sub-loops and unique events not shown, but in essence they fit within this overall model. Our experience with System Dynamics models is that it is these high-level models that often give the major insights; the detail is often bedevilled with layers of assumptions, where anything can happen with small changes. At any rate, the simple model would suggest 5 main actions:
(i) Make sure the ingoing assumptions are all valid, and defendable. GIGO, as they say.
Can we Trump this model?
But the main point of making such a model is to see if it is predictive. So we plan to turn it on the US election, where our hypothesis is that fairly similar dynamics are taking place. What we have been doing is following the US election with our systems for a few months to calibrate the US systemically, and now that the Presidential candidates are anointed we have a simple two-horse race - and a two-horse dynamic model to test on it.
There are differences which we will have to set up for, but it should be an interesting experiment (and a decent test of our social analytics systems).
Monday, July 18. 2016
Succinctly put in The Atlantic (and summarised here) - when faced with saying "Yes" to something new, risky, etc., research shows:
“If you are a manager, if you commit a false positive, you are going to embarrass yourself, and potentially ruin your career.” Managers, he says, are terrified of committing false positives, meaning saying something will be a hit when in fact it will flop.
In other words, one is seldom tarred with the results of saying No, and the best way to protect yourself when saying Yes is to only say Yes to "defensible" ideas (worked before, never get fired for buying IBM, etc.)
So, how to avoid this and let innovation flow? The paper argues that peers in an area are the best at judging someone else's work in that space, not managers. (Possibly... but they also say Science advances one dead scientist at a time...)
Anyhow, in most companies Managers have the Yes/No role and Peers are seldom on tap, so how to get Managers out of the game theory bind above? The best way is to let them act like peers: in an experiment, Justin Berg found that:
"...asked managers to spend five minutes brainstorming about their own ideas before they judged other people's ideas. [That] was enough to open their minds. Because when they came in to select ideas, they were looking for reasons to say no. Get them into a brainstorming mindset first, and now they're not thinking evaluatively, they're thinking creatively."
All very interesting, but if the organisation still kicks you for False Positives, I'm not sure this gets us anywhere further.
Thursday, July 14. 2016
Pokemon Go has become quite the thing, but regular readers of this blog will know we are fascinated by one thing in the main - yes, where is all that user data going?
In less than a week Niantic (a Google spinout*) created one of the largest personal location databases in history, as well as gathering other personal data, including storage and camera access. Here's what it does (thanks to USA Today):
For Android users, the game can access both the precise and general locations of the device as well as its camera – permissions inherently necessary to play the game. The game can also access users’ USB storage, contacts, network connections and more.
Niantic can also share the data it collects with Pokémon Co., which is partially owned by longstanding videogame and console maker Nintendo Co. More interesting is, as the WSJ notes, the choice of what data to collect - they didn't need it all for the service to work, yet chose to collect it.
Here's what the Terms & Conditions say:
“We may disclose any information about you (or your authorized child) that is in our possession or control to government or law enforcement officials or private parties as we, in our sole discretion, believe necessary or appropriate,” the agreement states
Fairly standard deal for the Social Nets, but the amount of location data this system has is like no other. As USA Today points out, what is yet to become clear is the business model, so it's hard to work out what they will do with the data. However, the company has said in the past that it will licence out the technology to other companies, which raises the prospect of the virtual space being filled with virtual objects, and hordes of people chasing them.
There are already reports of 3rd parties using the data for their own ends (mugging, hassling people by making their homes in-game locations, etc) so this could get quite inconvenient.
Update - this last point has started to get serious; as the Grauniad notes, it's not clear who owns your space in cyberspace:
This will probably become a major issue, as it looks like it will open all the same issues as owning the air above your property, or the oil underneath. In general, if history is any guide, there will be abuse of other people's space by overenthusiastic or unscrupulous operators, followed by a clear need for regulation.
* Both Nintendo and Google are investors in Niantic, which started within Google before spinning off into a stand-alone company last year.
Tuesday, July 5. 2016
The Wisdom of Crowds showed us that a group of ordinary people could be wiser than a group of experts - and then came the caveats. Firstly, it was shown that all the Wise people have to decide independently of each other.
Now, it seems that smaller is better - according to the Santa Fe Institute.
Sounds suspiciously like a group of experts again, though there is the random selection requirement again. It also depends what the question is:
Where previous research on collective intelligence deals mainly with decisions of how much or how many, the current study applies to this-or-that decisions under a majority vote. The researchers mathematically modeled group accuracy under different group sizes and combinations of task difficulties. They found that in situations similar to a real world expert panel, where group members encounter a combination of mostly easy tasks peppered with more difficult ones, small groups proved more accurate than larger ones. This effect is independent of other influences on group accuracy, such as following an opinion leader or having group discussions before voting.
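As a rough illustration of why a moderate-sized group can beat a large one on this-or-that majority votes, here is a sketch using a simple binomial model (the task mix and individual accuracy figures are invented assumptions, not the researchers' actual numbers):

```python
# Sketch of the this-or-that majority-vote result described above.
# Task mix and per-task accuracies are invented for illustration.
from math import comb

def majority_correct(n, p):
    """P(a majority of n independent voters is right, each right w.p. p)."""
    k_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_needed, n + 1))

# Mostly easy tasks (p=0.8) peppered with hard ones (p=0.4),
# where individuals are worse than chance.
tasks = [0.8] * 8 + [0.4] * 2

for n in (1, 3, 5, 11, 41):
    accuracy = sum(majority_correct(n, p) for p in tasks) / len(tasks)
    print(f"group size {n:2d}: expected accuracy {accuracy:.3f}")
```

With mostly easy tasks (where bigger groups help) and a few hard ones (where individuals are worse than chance, so bigger groups amplify the error), expected accuracy peaks at a moderate group size rather than growing forever.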
What about voting as a means of determining the majority opinion of a populace?
"These results, of course, do not mean that we should abandon large scale referendums like Brexit and national elections,” Galesic adds. “Choices between different policies and candidates often do not have a 'right' and a 'wrong' answer: different people simply prefer different things, and the outcomes of these decisions are complex, with a spectrum of consequences. It is important to account for everyone's opinion about the general direction in which they want their country to go -- including underrepresented groups."
Not sure where this leaves us practically though - for some problems it's better to have smaller groups, others larger, depending on the question.
Sunday, July 3. 2016
The first fatal crash of a "self driving car" is pushing a lot of questions to the surface, on a whole lot of levels.
Is the Technology up to it yet?
Firstly, it is clear now that the technology is not yet ready for wide-scale deployment - a camera probably should not be the prime mode of sensing (night and difficult lighting conditions, obstructed lenses), and the current radar is clearly sub-optimal - there are a lot of potential obstacles below car roof height.
Was it tested enough?
These cars have done many millions of miles, and are said to be less risky than driving your own car (Autopilot has twice the miles-per-death ratio of average motoring - but the well-heeled Tesla-driving demographic is far from average). Another key question is what sort of miles? Has it been pushed hard, beyond the envelope, by test drivers? These were not "far-beyond-the-edge" driving conditions. It almost smacks of software industry culture - push a Beta product out there, let the customers find the bugs. But bugs in heavy, powerful, fast mechatronic devices can kill.
It was billed last year as the "arrival of your autopilot". Problem is, some people believed it. Fortunately, not too many people can afford these cars, and as mentioned above most of those who bought one are well heeled - fewer envelope-pushers as a % of drivers. But there are some, posting videos of their experiments with hands-free driving on YouTube (the driver concerned was one such).
The inevitable outcome - Regulation - is probably a Good Thing right now
As the WSJ explains, despite misgivings regulators were persuaded to stay their hand:
Auto-safety regulators, meanwhile, were relatively silent on the technology even though many experts viewed Tesla’s program as the most aggressive self-driving system on U.S. roads. The National Highway Traffic Safety Administration, embroiled in managing a sharp increase in safety recalls, including tens of millions of rupture-prone air bags, lacks authority to approve or disapprove of the advanced technology or meaningfully slow its deployment.
Now they will. And despite AI-Car supporters' cries of "men walking with red flags in front of cars", it is a necessary step - and in the medium run a good one - for the self-driving car industry to get it a lot more right first.
What will kill the AI-Car industry stone dead is if it kills a few more people.
Friday, July 1. 2016
As is becoming usual, most pundits got the British EU Referendum (Brexit) winner wrong (to be fair, the two sides were running so close that most predictions were within statistical margins of error; it's just that most picked the wrong side as the winner - and that makes all the difference, of course; just ask the speculators who were caught). But one company, TNS, got it right. Being datawonks, we were fascinated by how they did it - summary below from an article in El Reg:
1. Balance out the Politically Engaged (mainly Remainers) to correct for shy people
We asked respondents about their likelihood of voting in the next General Election and used this (along with some demographic information) to model turnout using data we collected at the 2015 General Election. We compensated for this imbalance by weighting the turnout level of our sample down to a more realistic level; decreasing the number of politically engaged individuals and giving us a more representative sample.
2. Rebalance using known demographic drivers (education, class etc)
3. Remove Confirmation bias
There was also added risk due to the fact that 16 per cent of registered voters were still undecided and we were unsure as to whether they would vote. Our understanding is that some polling companies tried to take this into account by allocating a higher proportion of the undecided voters to Remain; this also seems to have been factored in by the betting market as it consistently showed a higher probability of Remain than the opinion polls. [TNS did not do this]
I can't help but hypothesize, given how everyone was clustering around the same broad numbers, that the mental model that meant TNS did not assume the "Undecideds" would vote one way or the other was what tipped it.
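Step 1's down-weighting can be sketched as follows (the engagement shares and vote splits here are invented for illustration; TNS's actual turnout model used data collected at the 2015 General Election):

```python
# Toy sketch of turnout weighting: down-weight over-represented
# politically engaged respondents so the sample matches a target
# turnout profile. All numbers are invented for illustration.

def turnout_weights(sample_share_engaged, target_share_engaged):
    """Return per-respondent weights for engaged / less-engaged groups."""
    w_engaged = target_share_engaged / sample_share_engaged
    w_rest = (1 - target_share_engaged) / (1 - sample_share_engaged)
    return w_engaged, w_rest

# Suppose 60% of the raw sample is politically engaged, but the
# modelled electorate is only 45% engaged:
w_eng, w_rest = turnout_weights(0.60, 0.45)

# Weighted Leave share, assuming engaged respondents split 45% Leave
# and less-engaged ones 55% Leave:
leave = 0.60 * w_eng * 0.45 + 0.40 * w_rest * 0.55
print(f"weights: engaged={w_eng:.2f}, rest={w_rest:.2f}")
print(f"reweighted Leave share: {leave:.3f}")
```

With these invented numbers the reweighting flips a narrow Remain lead into a narrow Leave one - exactly the size of effect that mattered in a near 50/50 vote.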
Update - an article here shows that endemic errors in the polling methods meant that Leave was always in the lead, but the polls couldn't see it.
Friday, June 24. 2016
Today, the UK voted to leave the European Union (EU), whatever that may mean.
Whatever you think of the outcome (this is not really a political blog), we are officially in Uncharted Seas, in Interesting Times - History is beginning again.
From a technology point of view there are some interesting questions, for example about the UK's adoption of EU data rules, or whether the UK "Tech" sector is better off relocating to Berlin or Dublin or somesuch.
We have been going for 10 years this year, and have had to do some quite interesting predictive work over that time, from the market opportunity of niche products to the future of entire industries, but we would never have predicted this until a few weeks ago, when it was clear social media sentiment was rapidly shifting.
Change is a constant.....the next 10 promise to be just as interesting.
Monday, June 13. 2016
It is hard to see exactly what synergies the deal brings - picture above courtesy Matt Zeitlin
Microsoft has bought LinkedIn; the question top of mind for us is "why - and why pay so much?".
They have paid about 50% over the odds (LinkedIn shares were c. $130 before the bid; they are now c. $195 on the MSFT bid of $196) - that is a considerable premium.
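The premium arithmetic, from the figures quoted above:

```python
# Premium implied by the figures above: LinkedIn trading around $130
# before the bid, Microsoft offering $196 per share.
pre_bid_price = 130.0
offer_price = 196.0

premium = (offer_price - pre_bid_price) / pre_bid_price
print(f"premium: {premium:.0%}")  # roughly the "50% over the odds" in the text
```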
It would seem to be a bet on the "Future of Work" - Satya Nadella (MSFT CEO) says that:
Quite how this translates practically is hard to see; as the diagram above shows, synergies are not exactly obvious, so it's an accretion play. But the business cases trotted out are speculative and not particularly compelling:
- Microsoft Office combined with LinkedIn's network so Microsoft can point to a specialized expert through LinkedIn
This is not the stuff of a 50% level of valuation premium, it's fluff for diverting tech journos. Yet Nadella is no fool, so what is really in play here? Thinking laterally, it gives MSFT access to a lot more data about YOU!:
- Access to the social graphs and details about a lot of working people globally, i.e. the list of nearly every customer, and insight into many companies that Microsoft has or wants - a CRM system wet-dream
Now that is the sort of thing that has real value, and the price then starts to make sense - and it deters others from entering any bidding war.
Of course, it could just be, as one wag on Twitter suggested, MSFT's attempt to "consolidate its dominance over the most joyless aspects of your computing life" - though if we follow Lewinsky's Law, that the most dull tech is the most profitable, that also explains the valuation.
Friday, June 3. 2016
It gets, er, better - from troncInc's own press release:
“tronc pools the company’s leading media brands and leverages innovative technology to deliver personalized and interactive experiences to its 60m monthly users,”
And then there is troncX, an “online curation and monetization engine” which utilizes artificial intelligence technology “to accelerate digital growth”.
Oddly enough, the marketplace has not been wowed, in fact some have been uncouth enough to even suggest that the brand name was badly troncated, or even that the Branding Consultants were tronc (OK, OK, nearly everyone is hooting with laughter and taking the piss)
But all is not lost - firstly, it's an excellent Wildean Strategy, and let's not ignore that they have all those right-on-the-money buzzwords - Monetization, premium, AI, digital, accelerate - and it starts with a small letter to boot. Not a bad start for Unicorn bingo, but Broadstuff analysis shows a serious flaw - surely, surely to hit max points they should have gone for that double "oo" thing?
You know it makes sense....besides, what (more) could go wrong?
Friday, May 27. 2016
Godwin's Law of Online Discourse - that as an online discussion continues, the probability of a comparison to Hitler or to Nazis approaches 1 - has been proven.
From Mr Godwin himself:
On the law itself, he notes that:
That hasn't gone so well, it seems... he also noted the propensity of London mayors to invoke it nowadays:
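For what it's worth, Godwin's Law is just the arithmetic of repeated trials - if each comment has even a small independent chance of invoking the Nazis, a long enough thread makes a comparison near-certain (the per-comment probability here is an invented assumption):

```python
# Godwin's Law as probability: if each comment independently has a
# small chance p of invoking Hitler/Nazis, the probability that at
# least one comment has done so tends to 1 as the thread grows.
# p = 0.01 is an invented assumption for illustration.
p = 0.01

for n in (10, 100, 1000):
    prob = 1 - (1 - p) ** n
    print(f"after {n:4d} comments: P(comparison) = {prob:.3f}")
```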