Monday, March 10. 2014
Kudos Gangstersout blog
Two pieces of news in quick succession - Friday, drones were cleared for commercial use in the US* - Pando Daily:
And then today: news that two major US legal practices, LeClairRyan and McKenna Long, have set up drone case-chasing groups. - Washington Post:
What a marvellous world......still, as the picture above shows, the hunting season could be prolonged all year
*Update: The FAA have appealed, which means the drones don't fly until it's settled, and there will be lots of lawyers droning on about drones
Friday, March 7. 2014
Some weeks ago I gave a talk about the "Dark Side of Open Data" at the Open Data Institute, where I predicted that the major beneficiaries of government data were not going to be private citizens, taxpayers, or enthusiastic small startups, but large enterprises with deep pockets and less than altruistic service models. The slide I used noted that history tells us any potential goldmine will be mined, and the obvious business model would be:
As to who would do this, the question I posed was "Which side are all the sharpest knives on?". No surprises then, that today I read in a McKinsey article on trends in Big Data that:
...there was a growing awareness, among participants, of the potential of tapping swelling reservoirs of external data—sometimes known as open data—and combining them with existing proprietary data to improve models and business outcomes. (See “What executives should know about open data.”) Hedge funds have been among the first to exploit a flood of newly accessible government data, correlating that information with stock-price movements to spot short-term investment opportunities.
Which immediately raises the question: given that the government is giving the data away, and the taxpayer is funding it, should it keep letting it go for $0.00 rather than getting a better deal? I contend, in a world where companies such as Facebook (valued at c $175bn) will pay $19bn for companies like WhatsApp primarily for their user data assets, that the answer is "no".
Another slide I put up was a rather perceptive comment by Jo Bates, of Manchester Metropolitan University, from 2012:
The current ‘transparency agenda’ [of the UK government, supported by prominent Open Data advocates] should be recognised as an initiative that also aims to enable the marketisation of public services, and this is something that is not readily apparent to the general observer.
The issue is that there is a major asymmetry between those that stand to gain (a few corporations and companies) and those that stand to lose (citizens who have their data appropriated and misused with no recompense). That point is made loud and clear by the McKinsey news...and this is just the beginning, I'd predict. My last slide but one was about what I predict we will see over the next few years:
- The combination of enthusiasts who see no problems, and commercial interests who intend to make money from the exact problems it will cause, will ensure data will get out without adequate protections or safeguards, at low cost (to the buyers)
So it is no great surprise that hedge funds are early entrants, nor that this week news emerged that 13 years of UK health data had already been sold under the radar to insurance companies for a pittance (to be fair, it was sold for modelling purposes, but the fact remains no one had agreed their data should be sold).
However, there are signs of hope. Days after I gave my talk, the Health Secretary had to abandon plans to sell off health data after a vigorous public protest campaign (waged heavily via social media....), and days later the Government decided it would not sell patient data to such customers. In fact, what looks like an early-day charter emerged, as the Government promised to:
....provide "rock-solid" assurance to patients that confidential information will not be sold for commercial insurance purposes, the Department of Health said.
Reading the comments on that report though, it is clear that all the shenanigans and the backlash that finally brought the Government to this point have significantly reduced any trust that this new recommendation will actually be followed - especially as they are going to try yet again to change the law, to be able to make data accessible in a few months' time.
The other interesting event today was an abortion charity being heavily fined for being somewhat cavalier with people's data and losing it to a hacker. While it's a pity it's a charity, unless penalties for slack data care are pretty heavy there will be little incentive to look after people's data, and it will be open season for hackers.
Wednesday, March 5. 2014
Yes, another one has found it has some Bitcoins missing:
A bitcoin bank has been forced to close after hackers stole 896 bitcoin, worth £365,000, in an attack on Sunday....
We told ya so....
Monday, March 3. 2014
In the 90's and 'Noughties I made many trips to San Francisco/the Valley, and as the 90's dotcom bubble built up I noticed two "non-stock" signals of its frothiness - house prices and the occupation of the SoMa (South of Market) area by trendy bars and techie startups:
- House prices rose to the point that educated non techies couldn't afford them, so people like teachers were priced out. This is starting to happen again. (By the way, my "top of market" indicator was when teachers in SF/SV decided to sell and go and teach elsewhere/semi retire based on the huge house price gains)
So, another sign of the BubbleTime.
Of course, this time it Will Be Different....
Of course it will....
Incidentally, I recall going back in c 2003 after a 2-year absence, and there had been a house price tumble almost all the way back to Palo Alto, plus SoMa was full of winos and old newspapers again....
Friday, February 28. 2014
The Tube, if it told the Truth - Kudos Buzzfeed
Every time you think that Twitter has become more silly than it was, something existential like the above emerges in your feed and you stay hooked. That is all you need to know about Twitter's ongoing value proposition.
(Actually.....I have a meeting in town today, I can either get there from Tourist Tat or Eric Pickles.....oh the choices)
Wednesday, February 26. 2014
I haven't heard much about Prediction Markets for a while, but here is a new one - predicting Innovation - Innovation Excellence:
Prediction markets were popularized in James Surowiecki’s 2004 book, The Wisdom of Crowds. They are systems which forecast the outcome of projects or events based on how willing individuals are to buy “stock” in them. People buy shares in the topics they think will succeed. Each topic or event then gets a value similar to a stock market price. These prices can be interpreted as predictions of the likelihood of the event.
Much was predicted for Prediction Markets a few years back, but they faded from view as results were not as stellar as, er, predicted (especially in the US elections), but hope always burns. The reason is typically that the preconditions for them to work are ignored, i.e. that all choices must be made by a fairly large and heterogeneous group of people who are in no way influenced by one another or by any common intrinsic factors.
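The article doesn't describe the pricing mechanism such a market would use, but one standard way to run one is Hanson's Logarithmic Market Scoring Rule (LMSR). A minimal sketch for a two-outcome market (the liquidity parameter b is an arbitrary choice, and the share quantities are made up):

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Current price of a YES share under the LMSR, given the
    quantities of YES/NO shares sold so far. The price doubles
    as the market's implied probability of the event."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function; a trade costs cost(after) - cost(before)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# A fresh market implies 50/50; buying 40 YES shares costs
# cost(40, 0) - cost(0, 0) and pushes the implied probability up.
p_start = lmsr_price(0, 0)      # 0.5
charge = lmsr_cost(40, 0) - lmsr_cost(0, 0)
p_after = lmsr_price(40, 0)     # ~0.6
```

The nice property is that the price moves with every trade, so the market's running probability estimate is always readable off the books.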
If this can be pulled off in companies (or by companies crowdsourcing innovation) it will be very interesting.
One to watch.
Tuesday, February 25. 2014
Impact of mathematical techniques on operations, by industry - McKinsey
McKinsey has discovered you can use Operations Research (or Decision Maths as it is known these days) mathematical techniques to analyse and optimise manufacturing operations - McKinsey Insights:
The application of larger data sets, faster computational power, and more advanced analytic techniques is spurring progress on a range of lean-management priorities. Sophisticated modeling can help to identify waste, for example, thus empowering workers and opening up new frontiers where lean problem solving can support continuous improvement. Powerful data-driven analytics also can help to solve previously unsolvable (and even unknown) problems that undermine efficiency in complex manufacturing environments: hidden bottlenecks, operational rigidities, and areas of excessive variability. Similarly, the power of data to support improvement efforts in related areas, such as quality and production planning, is growing as companies get better at storing, sharing, integrating, and understanding their data more quickly and easily.
Not only that, but you can apply Lean operating techniques in manufacturing companies too:
Nonetheless, to get the most from data-fueled lean production, companies have to adjust their traditional approach to kaizen (the philosophy of continuous improvement). In our experience, many find it useful to set up special data-optimization labs or cells within their existing operations units. This approach typically requires forming a small team of econometrics specialists, operations-research experts, and statisticians familiar with the appropriate tools. By connecting these analytics experts with their frontline colleagues, companies can begin to identify opportunities for improvement projects that will both increase performance and help operators learn to apply their lean problem-solving skills in new ways.
Amazing stuff....except it's very, very old news. Monte Carlo simulations and capacity-planning algorithms have been around for decades; a lot of it even pre-dates WW2. Value analysis started at 3M in the 1960's. Richard Schonberger wrote the groundbreaking Japanese Manufacturing Techniques in 1982 (I still have my copy), and he was merely Westernising something the Japanese had been doing for two decades by then. And then I saw this, which really made me smile wryly:
Similarly, a leading steel producer used advanced analytics to identify and capture margin-improvement opportunities worth more than $200 million a year across its production value chain. This result is noteworthy because the company already had a 15-year history of deploying lean approaches and had recently won an award for quality and process excellence. The steelmaker began with a Monte Carlo simulation, widely used in biology, computational physics, engineering, finance, and insurance to model ranges of possible outcomes and their probabilities
The wry smile was because I did much the same, in 1994-5, for a steelmaker, using some of these exact same techniques - while I was consulting at McKinsey to boot. I have the obligatory picture of big rolling mills from a grateful client, and the prize I won in the McKinsey internal "Practice Olympics", to prove it. In fact I'd bet the McKinsey Quarterly of the 1970's, 80's and 90's is full of analyses like this one. There truly is nothing new under the sun.
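I can't know what the steelmaker's model actually contained, but a Monte Carlo throughput simulation of the general kind described might look like this minimal sketch (the stages and capacity figures are entirely hypothetical):

```python
import random

def simulate_line(n_runs=10_000, shifts=100):
    """Monte Carlo estimate of output from a 3-stage line where each
    stage's per-shift capacity varies randomly; the slowest stage
    (the bottleneck) limits each shift's output."""
    # Hypothetical (mean, std dev) tonnes per shift for each stage
    stages = [(120, 15), (110, 10), (125, 20)]
    totals = []
    for _ in range(n_runs):
        total = 0.0
        for _ in range(shifts):
            capacities = [max(0.0, random.gauss(mu, sd)) for mu, sd in stages]
            total += min(capacities)  # the bottleneck governs the shift
        totals.append(total)
    totals.sort()
    # Return a range of outcomes, not a single point estimate
    return {"mean": sum(totals) / n_runs,
            "p5": totals[int(0.05 * n_runs)],
            "p95": totals[int(0.95 * n_runs)]}
```

The point of the exercise is the spread between p5 and p95 - it shows where variability, rather than average capacity, is costing you output. Nothing in it needs more than a 1990's desktop.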
But with New Improved Big Data it can all be rebadged bright and new....except it doesn't work this way. There was a shedload of Big Data in the Old Days too (shop-floor data capture techniques underpin most of the Internet of Things, and did you know some of the first broadband networks in the world went in at manufacturers in the 1980's?). Manufacturing has always had a lot of data, and Big Manufacturers bought Big Iron to process Big Datasets then too (except it was called data with a small "d" then). The Monte Carlo methods, or N-jobs-on-M-machines optimisation (for example), are still the same algorithms they were in the 1930's and 50's.
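For the curious, the textbook workhorse for that N-jobs-on-M-machines problem is Graham's Longest Processing Time (LPT) heuristic from the 1960's: take the longest remaining job and put it on the least-loaded machine. A minimal sketch (job times are made up):

```python
import heapq

def lpt_schedule(job_times, n_machines):
    """Longest Processing Time heuristic for minimising makespan:
    sort jobs by descending length, then always assign the next
    job to the machine with the smallest load so far."""
    loads = [(0.0, m) for m in range(n_machines)]  # (load, machine)
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for job, t in sorted(enumerate(job_times), key=lambda jt: -jt[1]):
        load, m = heapq.heappop(loads)   # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

LPT is provably within 4/3 of the optimal makespan - good enough for a lot of 1960's-era scheduling, and still good enough now.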
And you know what - you just cannot reliably simulate the minute, operation-laden details of a shop floor or logistics network. No matter how big your dataset, your computers, or your machine tools' onboard intelligence, there is just too much variability. Which is why the Just In Time/Lean movement came about as the better approach - the aim was to simplify the problem rather than hit it with huge algorithmic models and simulations so complex that no one fully understood what they were doing anymore (just ask the banks what happens going down that route). The aim of JiT/Lean was to actually reduce the problem's variability - to get back to Small Data, if you like.
And you know what else - despite the analytical miracles I and many others performed in the day, despite the extraordinary efforts by managements and workers, so many of those steel mills (and clothing companies, and manufacturers of a million other widgets) moved East. There is only so much you can do against cheap labour, national subsidies and guaranteed government contracts.
And that brings me to something else in the story, which is what I suspect is really going on here - it's not Big Data, it's Big Economics:
Sure, it's partly about raw material prices changing - when they are too high to buy at, or too low to sell at, you really have to be efficient at manufacturing. But when you are getting to this level of number crunching after 20 years of Lean projects, in my experience it's because the endgame is appearing on the horizon; it's a last-of-the-summer-wine story, the end of an S-curve. Interestingly, it seems like all the McKinsey consultants and the project were in India, and Eastern labour costs are rising, as is oil for those long ship rides back to the European and US markets - so much so, in fact, that there is an increasing trend of re-shoring, as production comes back to the US and EU. Big picture, the low-cost Eastern windfall is ending, and you have to start getting much smarter again about the actual manufacturing process. You can get benefits from doing it right with Big Iron and Big Algorithms, no doubt - but this sounds like back to the future....I suspect they are now using bigger and bigger number crunching to eke out the last 20% of improvements from the various ongoing kaizen projects, trying to keep the factories in situ as the Big Economics shift yet again.
And you didn't need Big Data to tell you that....
(Hat tip to my colleagues at the Agile Elephant for the link)
Monday, February 24. 2014
The (non) regulatory annual Bitcoin crash
News today that Mt Gox may well have been turned over - Forbes:
This was always going to happen, as we've pointed out before, and there is no restitution - no one is insuring bitcoin holders against losses. And, just as predictably, post crash there will be regulation:
It's probably going to happen again before that though, as Bitcoin's decentralisation and lack of oversight are both its strength and its Achilles heel
Thursday, February 20. 2014
News just in that Facebook has bought WhatsApp for $SillyMoney - $19.6bn - TechCrunch:
With 450 million monthly users and a million more signing up each day, WhatsApp was just too far ahead in the international mobile messaging race for Facebook to catch up, as you can see in the chart above we made last year. Facebook either had to surrender the linchpin to mobile social networking abroad, or pony up and acquire WhatsApp before it got any bigger. It chose the latter.
Facebook couldn't afford not to have it; if someone else had bought it, that would have been a direct attack on Facebook's chosen strategic way out of its own declining user engagement, i.e. mobile applications and messaging. Facebook is absolutely determined not to be overtaken by the "next wave" Social Networks. But its own "new wave" systems were kludgy, so the price of not having your lunch eaten in 2016 is c $20bn in 2014. There is a bit of irony in that WhatsApp strongly do not believe in advertising, their founder once saying that:
"There's nothing more personal to you than communicating with friends and family, and interrupting that with advertising is not the right solution,"
Clearly $19bn is a mind-changing amount
What this really shows is that the days of Facebook's organic growth are over, and from now on they are going to have to acquire revenue - and that's a very expensive way of doing it at their size if you continually need to buy the guys who will eat your lunch. You had to believe a lot to believe Facebook's valuation - that just got harder. But it's the Bubbletime, so all will be good - for a while
Update 1 - According to El Reg, WhatsApp does indulge in datascraping of a user's address book, so that does make it an interesting prospect for added-value datamining.
Update 2 - Azeem Azhar of PeerIndex has a smart bit of analysis - if WhatsApp had stayed independent it still would have destroyed Facebook's mobile story, which really outlines why Facebook had to act to stop itself being eaten for lunch:
Wednesday, February 19. 2014
From the BBC, a report on a group of universities trying to build a system that can counter social media-borne rumours, lies and gossip. Pheme (named after the Greek goddess of gossip) is a collaboration between five universities — Sheffield, Warwick, King's College London, Saarland in Germany and MODUL University Vienna — and four companies: ATOS in Spain, iHub in Kenya, Ontotext in Bulgaria and swissinfo.ch, led by Kalina Bontcheva of the University of Sheffield. Pheme will classify online rumours into four types:
Apparently different types of digital disingenuity leave their own type of digital footprint and can be recognised. The system will also look at the accounts spreading a rumour and check for bots. The idea is then to search known sources for information that is true, and re-seed the stream of the original falsehood with "the truth". It will be ready in late 2015, apparently.
The obvious flaw is that if it can detect falsehoods, any half-decent falsehood-spreading system can detect it and re-seed the same trails. The other, sadder flaw is that many people would rather believe a convenient lie than an uncomfortable truth. The war for the truth is about to be fought in the cyber-memespace to an unprecedented degree - I wonder if there will be a new sub-science of memetics, called "phemetics".