Tuesday, March 10. 2015
News this week that GigaOm and FriendFeed have closed down.
GigaOm was one of the better (IMO) tech blogs-cum-digital news sites. The problem was that (i) so many others opened up at the same time, and (ii) (a salutary lesson) GigaOm was more about sound analysis than froth or shilling. If there is one lesson from digital media it is that heavyweight content usually sinks; we are still largely in the "digital weeds" phase of ecosystem development in digital media. TechCrunch* sold itself to AOL, which does raise the question of whether there will be a larger shakeout soon.
FriendFeed was one of the (many) experimental social network approaches tried out in the 'noughties - some models, like Twitter and Facebook, "stuck"; many didn't. Joost, Seesmic, Plurk....remember them? The FriendFeed founders were smart, and Facebook bought FriendFeed for its people, but the product was left (as so often) to wither slowly on a decaying vine.
In reality though, the two are merely part of the huge evolutionary process going on as part of the overall Digital Transformation into Homo Surfiens, no different to the huge experimentation of, say, the shift of shipping from sail to steam, or the development of aircraft, or any other radical new technology you can mention. A huge number of experiments are tried in the Darwinian soup of technology evolution; some climb up the slippery stick, make it as "best ways forward" and jump the chasm, while the others are subsumed into the silt at the bottom of the evolutionary ecosystem as experiments that failed....till next time.
Ashes to ashes, dust to dust, startups to silt.... but the overall Digital Transformation continues apace. It's often salutary to think back on the world 5, 10, 15, 20 years ago and look at what has changed in ICT as Moore's law carries on carrying on. And just think, if it wasn't for Edison we'd be surfing our tablets by candlelight....
*Update - I note its founder, Mike Arrington, tweeted today that he was glad he never took outside funding for TechCrunch. Therein probably hangs a tale. To paraphrase Tolstoy, all successful companies are the same; every failed company fails in its own way.
Saturday, January 31. 2015
The British Army has resurrected the Chindits, one of the special force units used in WW2 to operate behind enemy lines - in their case behind Japanese lines in Burma. The unit was formed by Brigadier Orde Wingate following the successful use of Boer commando-style tactics in the British/Commonwealth reconquest of Italian East Africa in 1940.
The new Chindits are to fight behind the lines in the digital wars - Torygraph:
...the new unit's focus will be on "unconventional" non-lethal, non-military methods such as "shaping behaviours through the use of dynamic narratives", an Army spokesman said.
It's part of the (belated, in my opinion) realisation among Western armies that from the end of the Korean War onwards, asymmetric warfare is the new "conventional" warfare. The new Chindits are designed to combine "conventional" irregular warfare psy-ops and disinformation techniques with a major digital capability to use social media and other digital information technologies.
One thing to keep in mind - the Chindit experiment ultimately failed militarily. Their raids were characterised by dysentery-wracked men carrying huge loads, marching uselessly for days without contacting any enemy; casualty rates were high due to disease rather than enemy action; and US air support was essential to keep them in the field*. They were retained more for PR purposes on the home front than for any military effectiveness.
One hopes that the same will not happen now.....
(*This is not a criticism of the brave men who served in the Chindits, it's that an approach finessed for dry African conditions ultimately did not work owing to the jungle environment)
Thursday, January 8. 2015
There is a crisis in the British Accident & Emergency (A&E) system at the moment. The causes are a "perfect storm" of previous decisions, mainly around reduction in funding across the end-to-end care system, plus the usual crop of winter bugs, especially ones that over-impact the ever-increasing numbers of elderly. But this week I discovered a possible self-inflicted cause - the NHS website's own expert self-diagnosis system will erroneously send people who can safely medicate at home to A&E.
(As I understand it, this is the same system that the medically unqualified telephone helpers on the 111 line use for diagnosis - with similar results, it would seem.)
TL;DR - I would suggest the NHS expert system is built on too poor a model of symptoms and so defaults to the lowest-risk position - i.e. to send anybody with the slightest probability of something serious to A&E, causing an unwarranted increase in demand for the most expensive part of the formal medical system. And that this lesson applies to similar expert systems being touted everywhere by the Technorati as the Next Big Thing.
The idea is laudable - you go online or ring up the 111 helpers and "self provision" your diagnosis, and potentially self treat, thus saving the expensive professional medical system a lot of time, capacity and money. An excellent idea, especially as the "family doctor" system in the UK only really operates for about 10 out of 24 hours at best, and even accessing that in a hurry is damn difficult if you are working. So you go to the online help out of hours, or even in hours if you can't get an appointment that day. The flaw is that both the web system and the tele-helper are relying on an Expert Algorithm that unfortunately isn't.
This is how it (doesn't) work. A member of my family started getting pretty ill with aching limbs, nausea and vomiting, and a blinding headache. If you go on the symptom website with any permutation of these symptoms, the system soon decides that you may have meningitis, a very nasty condition, and tells you to get to A&E post haste, if not sooner.
However, these are also near-identical to the symptoms of norovirus, or winter flu - a nasty bug, but one that can be safely self-treated at home in the vast majority of cases.
Needless to say, the probability of my family member having norovirus with those symptoms, in winter, while the bug is at its peak, is vastly higher than the probability of having meningitis. The expert system does not seem to have this "probability" function, however, and appears to default straight to the lowest-risk stance: "this could be meningitis - go to A&E now!"
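The missing "probability function" is essentially Bayes' rule applied with base rates. A minimal sketch below - all the prevalence and likelihood numbers are made-up illustrative assumptions, not real epidemiology - shows why a prior-aware triage system would rank norovirus far above meningitis even when the symptoms fit both:

```python
# Toy Bayesian triage sketch. All numbers are illustrative assumptions,
# not real epidemiology.
priors = {"norovirus": 1e-2, "meningitis": 1e-5}    # P(condition), winter prevalence
likelihood = {"norovirus": 0.8, "meningitis": 0.9}  # P(these symptoms | condition)

def posterior(priors, likelihood):
    """Bayes' rule: P(condition | symptoms) is proportional to prior x likelihood."""
    unnorm = {c: priors[c] * likelihood[c] for c in priors}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

post = posterior(priors, likelihood)
# Despite the slightly higher likelihood, meningitis comes out nearly
# 900x less probable than norovirus once the priors are applied.
print(post)
```

The NHS system, as described here, behaves as if the priors line did not exist - it appears to trigger on the likelihood (and the severity of the outcome) alone.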
Even so, this Expert System is probably OK for the single case - but now scale it to a country of 50 million or so people, make it the easiest method of getting advice, scale down on the more skilled alternative (aka "doctors") as a second opinion, and multiply me by all the other people getting hit by the current norovirus bug going round. You have very likely generated a large number of unnecessary visits to A&E by the "worried-not-quite-well" who have had the bejeezus frightened out of them by an automated system that can't discriminate properly between a dangerous condition and a nasty winter bug, and so drops to the lowest-risk position of "get thee to A&E".
Also, you'd expect the system to ask a few clarifying questions, given the huge difference in potential outcomes between a diagnosis of norovirus and one of meningitis. There are a few, to be fair - far fewer than I'd have expected - but even then they are odd. For example, one question is "have you flown in from abroad in the last 3 weeks?". Well yes, we have, as a matter of fact. But I was then expecting the system to ask where from - we flew in from alpine Europe, not exactly a dangerous disease hothouse. Now I don't know if the lack of a "where did you fly from" question in this "Expert" system was to make it easier to use, or to avoid offending anyone, but to my mind it was a pretty useless way of separating norovirus from meningitis.
The main issue with the question chain, though, was that you sat there looking at the screen thinking that none of the options properly described the issue - where was the "other/tell us what really happened/that isn't right" button?
Anyway, our diagnosis was sorted by talking to a real expert system - our doctor - over the telephone the next morning. "Oh yes, there is a lot of it going round right now, here's a prescription coming via email, go to XXX pharmacy and pick up the medicine". Five minutes on the 'phone with a real expert system = no panicked visit to A&E, no blocking of resources for those who need them more, etc etc. But to use this option you (i) have to carry the risk that it isn't meningitis yourself over an anxious night, and (ii) have the self-confidence that this is probably the right call, despite the prevalence of lurid medical "advice" Google serves up at the touch of a button. Not a good failsafe for a nationally deployed medical expert system.
I would therefore bet a lot of money that a lot of people, especially out of hours, used that website or 111 and, when told to get to A&E PDQ, went there in a panic - and helped no little bit in creating the huge crunch in A&E over the last few days. It may have only been a small % uplift, but the way capacity-constrained complex systems work is that they don't degrade linearly; they degrade non-linearly (i.e. acceleratingly worse) until they collapse.
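That non-linear degradation can be illustrated with the textbook M/M/1 queue, where the mean time in the system is 1/(mu - lambda). This is a deliberately crude sketch - a single-server queue with assumed rates, not a model of a real A&E department - but it shows how a small uplift in arrivals near capacity blows waiting times up disproportionately:

```python
# M/M/1 queue: mean time in system W = 1 / (mu - lam), valid only for lam < mu.
# A toy illustration of non-linear degradation, not a model of a real A&E.

def mean_time_in_system(lam, mu):
    """Mean time spent in an M/M/1 system (arrival rate lam, service rate mu)."""
    if lam >= mu:
        raise ValueError("unstable: arrivals meet or exceed capacity")
    return 1.0 / (mu - lam)

mu = 10.0  # assumed: patients treatable per hour
for lam in (8.0, 9.0, 9.5, 9.9):
    print(f"load {lam / mu:.0%}: mean time in system {mean_time_in_system(lam, mu):.1f} h")
# Raising load from 80% to 99% multiplies the mean time ~20x (0.5h -> 10.0h).
```

A "small % uplift" in arrivals near capacity is exactly the regime where this curve goes vertical, which is the collapse mode described above.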
Now it is something of a belief among the Technorati that these expert systems are inevitable and will be the saviour of many services/automate many people out of jobs/be the great leap forward for mankind etc etc (choose your Future of Technology belief set), but this experience maps onto something far more prosaic that I have seen repeatedly in the 30 or so years I've been building system models and simulations, to wit:
- no simulation ever captures the overall complexity of reality
In other words, be very careful of how these early-day expert systems are deployed, as their errors could cost a hell of a lot more than the theoretical savings they may generate (unless, as in the case of the banks in 2007/8, you can make the public pay of course).
Tuesday, October 7. 2014
It's always fun to look at all the various predictions of the Next New Things (given they vary quite widely by predictor and by year - us being no exception). Anyway, here are Gartner's for the next few years (abridged by Broadstuff):
For what it's worth, based on Broadstuff's advanced TTID algorithm and patented BGA prediction methodology (see end of post for definitions), we can safely predict that:
1. Most of these will take far longer to play out than predicted, and many that do play out will have far less impact than supposed
Which of course is what Gartner's other great prediction system invention, the Hype Curve, tells us - only when something passes out of any hype trend does it finally become useful.
The other thing I am left scratching my head about is that, given the Great Hollowing Out*, if all these come to pass (most of these trends imply yet more waged employees being dumped or offshored), where will the money come from to buy all those marvellous new things these new lean businesses make? Even Henry Ford saw that one coming when he upped wages so employees could buy his cars....
* I am always amused that The Economist called the Hollowing Out trend a Myth in 2004, when it was patently obvious it was already happening. But of course, for that we use the MRD approach
TTID = This Time It's Different. The algorithm states that whenever this claim is made, it isn't - so put your hand tightly on your wallet
BGA = Bill Gates Algorithm - This (X) will have far less impact in 2 years than we think, and far more in 10. We apply the Chasm upgrade though, which states that most New New Things will never jump over the chasm and will be dashed on the rocks in trying. About 3 of the above will survive, on average - which 3 would you bet on?
MRD = Mandy Rice Davies corollary - "Well, They Would, Wouldn't They" - Always look carefully at where someone is coming from before following where they lead....
Friday, August 22. 2014
Richard Dawkins (respected/hated Evolutionary Biologist and loved/hated Atheist) has touched off yet another Twitterstorm, via that unfortunate habit Evolutionary Biologists (and Vulcans) have of looking at humans as mass mathematical game theory participants rather than as - well, humans. Cue yet another Twitterstorm du Jour. (The Huffington Post summarises it best):
The irony of Dawkins being called "immoral" by religious and various other "strong belief"-based groups, often themselves proposing far worse things, is piquant, but there is an even bigger irony in Mr Dawkins doing this. He was, after all, the person who coined the term "meme" and postulated how memes work. So, depending on your point of view on Mr D, he is either a master memeticist or a complete c*nt who has been hoist on his own memetic petard by the #Offended on Social Media.
Oh yes - there were 12 points to the Huffpo article, and these are the clinchers I think:
11. Attempting to squeeze a few last hits out of the now-subsiding "outrage", a journalist will write a meta-piece attempting to explain the anatomy of a Dawkins Twitter scandal*.
He is clearly a master memetic tactician therefore, Twitterstorms and meta-pieces being the sign of memetic success - but whether continuing to offend large numbers of people in exchange for viral Twitter publicity is a good strategic memetic play is less clear. Wildean theory says it is effective, but in a Social Media Age where everything you say remains online to be held against you, it may not be. After all, one of the first lessons of social game theory is being nice wins - eventually...
*13. Attempting to extract the last ounces of traffic, a blogger writes a snarky piece on the whole affaire...
Monday, August 11. 2014
Very interesting article in the Economist about "Entrepreneurial" vs "Innovation" economics. The whole article is well worth reading, as it is one of those very rare items in the UK "Tech Startup" space - a systemic analysis with actual numbers. There are 3 key points dealing with the UK's current Startup / "Every Person is an Entrepreneur" craze:
Firstly, State money is wasted on funding too many entrepreneurial SMEs with too little money; they do nothing for the economy overall:
....once you take into account the number of SME jobs lost after the first three years of their creation, there is very little net job creation by these firms. Only 1% of new enterprises have sales of more than £1 million six years after they start. Research at the University of Sussex shows that median sales of a six-year-old firm is less than £23,000 (Storey, 2006). These firms also tend to be the least productive and least innovative (R&D spending—the best measure we have for inputs in the innovation process—in Tech City is not higher than in other parts of London or Britain). Indeed, the few high growth innovative firms (about 6% of the total SME group, Nesta, 2011)—those that really should be supported—do not directly benefit from the hype that surrounds SMEs and startups: once they get the funds these are too diluted to make a difference.
Secondly, what the State should be funding is an Innovation ecosystem, not an Entrepreneurial/Startup one per se
Innovation-led “smart” growth has occurred mainly in countries with a big group of medium to large companies, and a small group of SMEs that is spun out from some of those large companies or from universities. These firms have benefited immensely from government funded research. Indeed, in my book I show how many firms in Silicon Valley have benefitted directly from early-stage funding by government, as well as the ability to build their products on top of government funded technologies.
The author points out that nearly every "entrepreneurial startup" in Silicon Valley today would not exist without the huge US government-funded projects that underpin its technology, and direct low-cost (aka non-VC) early-days investment - Apple is a case in point:
Every technology that makes the iPhone smart was government-funded (internet, GPS, touch-screen display, SIRI). Apple spends relatively little on R&D compared with other IT firms precisely because it uses existing technology. It applies its remarkable design skills to these technologies, effectively surfing on a government-funded wave. Apple, Compaq and Intel also all enjoyed the benefits of early-stage public funds (SBIC in the case of Apple, SBIR in the case of Compaq and Intel).
Thirdly, the UK's state spend on innovation and pull-through is small by competitive standards. Silicon Valley was largely built on huge government-backed spending, not the VC community - and that is probably still the case:
Silicon Valley firms were initially not funded mainly by venture capital. It came in after the ball had got rolling thanks to funding by the Department of Defence, the Department of Health and, more recently, the Department of Energy. In fact, there is increasing evidence that many startups are told by venture-capital firms to go first to SBIR and then come back (Block and Keller, 2013). Venture-capital funds are not providing the kind of patient long-term finance needed for radical innovations. They are too focused on a profitable “exit”—usually through an IPO or a sale to a bigger company—within 3-5 years. But innovation often takes 15-20 years.
This sort of state pull through is what China is using too, and the numbers are measured in $ Trillions. In fact even in Europe, the UK underperforms hugely on this sort of state investment:
In Germany such links are created by well-funded Fraunhofer Institutes. In Britain these are being imitated through the Catapult centres, which in theory should be linked to Tech City-type projects, either through procurement policy or via learning. Currently there are no links between these. And whereas the Fraunhofer system has an annual research budget of €1.8 billion ($2.4 billion) and a network of 20,000 staff across 60 centres (in 2010), Britain’s Catapult centres were given just £200m to spend over 4 years. When the Tech-City gurus in Number 10 Downing Street criticise the Technology Strategy Board, which is in charge of the Catapult strategy, for not being more like Darpa, they ignore the very different size of TSB’s budget in comparison with Darpa—and even more the fact that the TSB does not have the market creating potential that Darpa does.
This leads to a fourth point, about the competence of No 10's "Tech City Gurus" and advisors - but that's for another post. To end, though, the observation is that if small beer is what the government is willing to put into the game, it's better to spend it on tertiary education and R&D, where impacts are proven, rather than launch a million underfunded startups:
Research at the University of Cambridge (Hughes 2008) suggests that the British government spends (directly and indirectly) close to £8 billion ($13 billion) annually on SMEs—more than it spends on the police and close to the amount it spends on universities. Is this warranted? How do we know it would not be better to simply direct that money to teachers where there is plenty of evidence that quality education raises human capital and growth.
This post is just a summary, I recommend reading the article.
Friday, March 28. 2014
Interesting 2 paragraphs from Fred Wilson's blog, talking about "what's next":
But the roadmap has been clear for the past seven years (maybe longer). The next thing was mobile. Mobile is now the last thing. And all of these big tech companies are looking for the next thing to make sure they don’t miss it. And they will pay real money (to you and me) for a call option on the next thing.
I'm intrigued by the idea of a call option, I think it could be executed better than via VC funding though, Fred - now that would be disruptive
But I think Fred's largely right that Mobile was the last Next Thing - though strictly speaking it's not "Mobile" per se, but PC-level processing power meeting Moore's Law and shrinking in size and price until it is easily portable, with a damn good UI (think iPaq, then iPhone). These "smart" phones and "tablets" killed good old Planet Mobile dead in about 3 years (Motorola, Nokia, Blackberry - where are they now? They were earth-shaking giants a few short years ago!)
Anyway, where the Next Big Thing is to be found is the question Fred asks. The future is of course here, just unevenly spread, so the trick is to see what bits of the future are here now - and actually going somewhere. Ten things that have changed exponentially in the "networked technology" areas we follow, in the time we've been writing Broadstuff (est. 2006), are:
- Robotics (including the flying type)
As you can see, these are hardly New New Things, just things that were already here in 2006 and even then clearly had high potential. What's interesting is that they were all already on very predictable development vectors in 2006, but no one looked at them as killer technologies in those days. That was because at that time their rate of development was still mainly theoretical, and not provably valuable. To compare, here are 10 other things that were also floating around in 2006/7 that I thought could also have happened sooner but haven't yet - though they still may, as they are all potentially Big Next Next Things.
These are all here today, unevenly distributed, and still chugging along - but at slower rates than the various laws of networking, learning, Moore's et al would predict. Typically there is something in them that is missing, obstinately stuck at current capability or economically unavailable, awaiting the "key" to their leap over the Chasm. But all it takes is a small shift (think iPaq vs iPhone again) and over they go.
All you have to do to build your own mind-boggling portfolio of New Next Things To Watch is read the various Gartner Hype Curves for the last 10 years, and you will see a slew of things on the hot S curve one year that disappear 2-3 years later. They don't go away though, and are still evolving in the Darwinian mud of technology species; it's just that something hasn't quite worked out for them yet. And somewhere in that stew, already, are the next 10 New New Things.
Tuesday, February 25. 2014
Impact of mathematical techniques on operations, by industry - McKinsey
McKinsey has discovered you can use Operations Research (or Decision Maths as it is known these days) mathematical techniques to analyse and optimise manufacturing operations - McKinsey Insights:
The application of larger data sets, faster computational power, and more advanced analytic techniques is spurring progress on a range of lean-management priorities. Sophisticated modeling can help to identify waste, for example, thus empowering workers and opening up new frontiers where lean problem solving can support continuous improvement. Powerful data-driven analytics also can help to solve previously unsolvable (and even unknown) problems that undermine efficiency in complex manufacturing environments: hidden bottlenecks, operational rigidities, and areas of excessive variability. Similarly, the power of data to support improvement efforts in related areas, such as quality and production planning, is growing as companies get better at storing, sharing, integrating, and understanding their data more quickly and easily.
Not only that, but you can apply Lean operating techniques in manufacturing companies too:
Nonetheless, to get the most from data-fueled lean production, companies have to adjust their traditional approach to kaizen (the philosophy of continuous improvement). In our experience, many find it useful to set up special data-optimization labs or cells within their existing operations units. This approach typically requires forming a small team of econometrics specialists, operations-research experts, and statisticians familiar with the appropriate tools. By connecting these analytics experts with their frontline colleagues, companies can begin to identify opportunities for improvement projects that will both increase performance and help operators learn to apply their lean problem-solving skills in new ways.
Amazing stuff....except it's very, very old news. Monte Carlo simulations and capacity planning algorithms have been around for decades; a lot of this even pre-dates WW2. Value analysis started at 3M in the 1960's. Richard Schonberger wrote the groundbreaking Japanese Manufacturing Techniques in 1982 (I still have my copy), and he was merely Westernising something the Japanese had been doing for two decades by then. And then I saw this, which really made me smile wryly:
Similarly, a leading steel producer used advanced analytics to identify and capture margin-improvement opportunities worth more than $200 million a year across its production value chain. This result is noteworthy because the company already had a 15-year history of deploying lean approaches and had recently won an award for quality and process excellence. The steelmaker began with a Monte Carlo simulation, widely used in biology, computational physics, engineering, finance, and insurance to model ranges of possible outcomes and their probabilities
The wry smile was because I did much the same, in 1994-5, for a steelmaker, using some of these exact same techniques - while I was consulting at McKinsey, to boot. I have the obligatory picture of big rolling mills from a grateful client, and the prize I won in the McKinsey internal "Practice Olympics", to prove it. In fact I'd bet the McKinsey Quarterly of the 1970's, 80's and 90's is full of analyses like this one. There truly is nothing new under the sun.
But with New Improved Big Data it can all be rebadged bright and new....except it doesn't work this way. There was a shedload of Big Data in the Old Days too (shop floor data capture techniques underpin most of the Internet of Things, and did you know some of the first broadband networks in the world went in at manufacturers in the 1980's?). Manufacturing has always had a lot of data, and Big Manufacturers bought Big Iron to process Big Datasets then too (except it was called data with a small "d" then). The Monte Carlo methods, or N-jobs-on-M-machines optimisation (for example), are still the same algorithms they were in the 1930's and 50's.
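To illustrate how little machinery the method itself needs, here is a toy Monte Carlo of a two-machine serial line with variable cycle times. All the parameters are assumptions for the sketch - a real capacity study would model buffers, breakdowns and changeovers too, which is rather the point about complexity made below:

```python
import random

# Toy Monte Carlo of a two-machine serial production line with random
# cycle times. All parameters are illustrative assumptions, not a real plant.

def simulate_line(n_jobs, seed=0):
    rng = random.Random(seed)
    m1_free = 0.0   # time machine 1 next becomes free
    m2_free = 0.0   # time machine 2 next becomes free
    done = 0.0
    for _ in range(n_jobs):
        t1 = rng.uniform(4.0, 6.0)      # machine 1 cycle time (minutes)
        t2 = rng.uniform(3.0, 8.0)      # machine 2: slower and more variable
        m1_free += t1                   # machine 1 processes jobs serially
        start2 = max(m1_free, m2_free)  # job queues if machine 2 is busy
        m2_free = start2 + t2
        done = m2_free
    return done / n_jobs                # mean minutes per finished job

# Repeating the simulation over many random seeds gives a distribution of
# line throughput - which is all a Monte Carlo capacity study fundamentally is.
runs = [simulate_line(1000, seed=s) for s in range(20)]
print(sum(runs) / len(runs))
```

The estimated rate converges on the bottleneck machine's mean cycle time - exactly the kind of result a 1950's capacity planner would have computed, just on slower iron.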
And you know what - you just cannot simulate the minute, operation-laden details of a shop floor or logistics network reliably. No matter how big your dataset, or your computers, or your machine tool onboard intelligence, there is just too much variability. Which is why the Just In Time/Lean movement came about as the better approach - rather than hit the problem with huge algorithmic models and simulations so complex that no one fully understood what they were doing anymore (just ask the banks what happens going down that route), the aim of JiT/Lean was to actually reduce the problem's variability - to get back to Small Data, if you like.
And you know what else - despite the analytical miracles I and many others performed in the day, despite the extraordinary efforts by managements and workers, so many of those steel mills (and clothing companies, and manufacturers of a million other widgets) moved East. There is only so much you can do against cheap labour, national subsidies and guaranteed government contracts.
And that brings me to something else in the story, which is what I suspect is really going on here - it's not Big Data, it's Big Economics:
Sure, it's partly about raw material prices changing - when they are too high to buy or too low to sell, you really have to be efficient at manufacturing. But when you are getting to this level of number crunching, after 20 years of Lean projects, in my experience it's because the endgame is appearing on the horizon; it's a last-of-the-summer-wine story, the end of an S curve. Interestingly, it seems all the McKinsey consultants and the project were in India, and Eastern labour costs are rising, as is oil for those long ship rides back to the European and US markets - so much so, in fact, that there is an increasing trend to re-shoring, as production comes back to the US and EU. Big picture: the low-cost Eastern windfall is ending, and you have to start getting much smarter again about the actual manufacturing process. You can get benefits from doing it right with Big Iron and Big Algorithms, no doubt - but this sounds like back to the future....I suspect they are now using bigger and bigger number crunching to eke out the last 20% of improvements from the various ongoing kaizen projects, trying to keep the factories in situ as the Big Economics shift yet again.
And you didn't need Big Data to tell you that....
(Hat tip to my colleagues at the Agile Elephant for the link)
Thursday, February 20. 2014
News just in that Facebook has bought WhatsApp for $SillyMoney - $19.6bn - TechCrunch:
With 450 million monthly users and a million more signing up each day, WhatsApp was just too far ahead in the international mobile messaging race for Facebook to catch up, as you can see in the chart above we made last year. Facebook either had to surrender the linchpin to mobile social networking abroad, or pony up and acquire WhatsApp before it got any bigger. It chose the latter.
Facebook couldn't afford not to have it: if someone else had bought it, that would have been a direct attack on Facebook's chosen strategic way out of its own declining user engagement, i.e. mobile applications and messaging. Facebook is absolutely determined not to be overtaken by the "next wave" Social Networks. But its own "new wave" systems were kludgy, so the price of not having your lunch eaten in 2016 is c. $20bn in 2014. There is a certain irony in that WhatsApp strongly does not believe in advertising, its founder once saying that:
"There's nothing more personal to you than communicating with friends and family, and interrupting that with advertising is not the right solution,"
Clearly $19bn is a mind-changing amount.
What this really shows is that the days of Facebook's organic growth are over, and from now on they are going to have to acquire revenue - and that's a very expensive way of doing it at their size if you continually need to buy the guys who will eat your lunch. You had to believe a lot to believe Facebook's valuation - that just got harder. But it's Bubbletime, so all will be good - for a while.
Update 1 - According to El Reg, Whatsapp does indulge in datascraping of a user's address book, so that does make it an interesting prospect for added value datamining.
Update 2 - Azeem Azhar of PeerIndex has a smart bit of analysis - if WhatsApp had stayed independent it would still have destroyed Facebook's mobile story - which really outlines why Facebook had to act to stop itself being eaten for lunch:
Sunday, February 16. 2014
Bertrand Duperrin's summary presentation
I've written up the 2 days of case studies from the Enterprise 2.0 Summit I attended last week in Paris; they are over here - Day 1 and Day 2 - on the Agile Elephant blog. There are a number of common threads emerging from the studies. Being lazy, I've started with Emanuele Quintarelli's list, based on his study (as it concurs with my analysis), and added my thoughts in italics:
- The project is explicitly supported and sponsored by the top management (70% vs 34% for laggards). Long lasting processes, technology and process change should be somewhat mandated by the formal organization. Informal projects are easy to start but need formal acceptance to embed themselves in the enterprise
Other stand-out observations so far from the case studies:
- Any system needs time to embed/mature/settle in (words varied, but the concept was the same) before it becomes stable and self sustaining, you can't "make a baby in 1 month with 9 mothers" as it were.
Over the 2 days I did sense a divergence between what the case studies were showing and what some of the Social Business theorists were espousing. In general the case studies showed that pragmatism and evolutionary development (what Dachis' Dion Hinchliffe called "Sustainable Transformation") were the order of the day, versus more revolutionary/dramatic transformational approaches.
Update - I have also put up Bertrand Duperrin's slides (at top) now that they have been posted; it was a very good talk on the subject as well