Thursday, April 30. 2015
Everywhere you turn there is this focus on the "Attention Economy", defined by Wikipedia as follows:
…content has grown increasingly abundant and immediately available, attention becomes the limiting factor in the consumption of information. Attention economics applies insights from other areas of economic theory to enable content consumers, producers, and intermediaries to better mediate and manage the flow of information in light of the scarcity of consumer attention.
Which is all very well, but what has really occurred is an arms race between various service providers to divert your attention to their new new thing, rather than any others, and certainly not to the (slightly dull) thing you probably really should be doing.
I call this the Distraction Economy, and it's really an anti-economic effect as:
Taking these in turn:
Accrete no or very little Value
There is an infinite array of diversions seeking to distract you. Someone is always wrong on the internet, there is always one more interesting tweet-link to read, one more comment for someone's Facebook wall, one more go on the game du jour. And what are you losing - they are all free, right? Well, that is the billion dollar question. What is the value of your time spent on these distractions? There are two ways of measuring this:
Problem is that, for the average joe (or josephine) in the New Digital Economy, the Conversion Value is near zero, as is the Conversion Rate, so the positive value accreted is essentially zero.
However, there is a negative opportunity cost - chances are most distractions reduce the time and effectiveness you actually spend on valuable tasks - i.e. unless you have zero valuable tasks to do, paying attention to the "attention economy"'s products is going to cost you (see part 3 below).
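The argument above can be made concrete with a back-of-the-envelope calculation. All the figures below are hypothetical, picked only to illustrate the shape of the sum:

```python
# Back-of-the-envelope value of an hour spent on "free" distractions.
# All numbers are illustrative assumptions, not measurements.

def net_value_of_distraction(conversion_rate, conversion_value,
                             hours_spent, hourly_value_of_work):
    """Expected gain from the distraction minus the opportunity cost."""
    expected_gain = conversion_rate * conversion_value
    opportunity_cost = hours_spent * hourly_value_of_work
    return expected_gain - opportunity_cost

# The +ve side alone looks harmless: near-zero rate times small value.
gain_side = net_value_of_distraction(0.001, 5.0, 0.0, 0.0)
# Add the opportunity cost of an hour of valuable work and it goes negative.
full_picture = net_value_of_distraction(0.001, 5.0, 1.0, 20.0)

print(f"+ve value accreted: {gain_side:.3f}")
print(f"net of opportunity cost: {full_picture:.3f}")
```

However generous the conversion assumptions, the opportunity-cost term dominates for anyone whose time has non-zero value.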
First came the idea of the Great User Experience, then when that arms race was tailing off came Gamification, and now that is reaching its limits there is Addictification - the best and brightest minds of a generation are being used to try and ensure that people will pay attention to non-essentials. Neuroscience, behavioural science, mathematics and a fistful of 'ologies are being used to try and make this or that piece of digital bubblegum register in people's attention-span and grab that 15 minutes of fame.
As noted above, time being diverted is very unlikely to be accretive in itself, and is highly likely to be value-destructive as:
(i) Less time being spent on valuable tasks - that has to have an opportunity cost impact in the medium and long term
(ii) It has been shown, in study after study, that distracting yourself from a task reduces your efficiency in performing that task, and also imposes a "setup" and "teardown" time penalty as your brain switches over from task A to task B. The worrying thing about the distraction economy is that it feels like you are more effective, when in fact the hard metrics show the opposite.
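The setup/teardown penalty compounds quickly. A toy model, with a hypothetical per-switch cost, shows how interruptions eat a work session:

```python
# Toy model of context-switch overhead. The 5-minute switch cost is an
# illustrative assumption, not a measured figure.

def effective_minutes(total_minutes, interruptions, switch_cost_minutes):
    """Minutes actually spent on the task after paying the switch penalty."""
    overhead = interruptions * switch_cost_minutes
    return max(total_minutes - overhead, 0)

focused = effective_minutes(120, 0, 5)      # two undisturbed hours
distracted = effective_minutes(120, 12, 5)  # a ping every ten minutes

print(focused, distracted)  # 120 vs 60: half the session lost to switching
```

And this counts only the raw time lost; the studies cited suggest the quality of the remaining minutes suffers too.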
What's the answer?
Firstly, it is to recognise that most of the bright shiny products of the "attention economy" are not there to make you money, they are there to make money out of you.
Secondly, to recognise there is an entire industry out there trying to make these grab your attention, with a minuscule industry building antidotes.
So the only real solution is to time-limit the channels - turn off ambient alerts, make a time to do emails/twitter/facebook etc, and keep to it.
Wednesday, April 15. 2015
TS Eliot wrote that the world ends not with a bang, but a whimper. It is our observation that the worlds of most monopolies effectively end with regulation of some sort or other, either breaking them up, forcing access, or regulating away super-profits (or some combination). And now, after several years of rumblings, the EU has finally started to look at Google. Today Margrethe Vestager, the European Union competition commissioner, announced an investigation into Google's practices for favouring its own shopping sites in guiding searches - NYT:
Google has been accused of these things for quite a few years, and there have been rumblings of EU action for some time so the only question in our minds is "why now". The NYT notes that:
“The decision by the commission to position itself as the lead competition authority for the digital age may trigger anger among some U.S. politicians.”
It also allows the EU, currently embattled by internal accusations of being sclerotic, unrepresentative, comatose etc etc to actually look like it is taking a lead in Doing Something in an area of considerable concern to EU technology firms.
The EU is also considering looking at Google's practice with Android:
The European Commission also said on Wednesday that it was stepping up a separate investigation into whether phone makers that agree to use Android — and that also want Google applications like YouTube — face contractual requirements to place those applications and other Google-branded applications in prominent positions on a mobile device.
If these allegations are proven true, it will be clear that Google, a noisy advocate of net neutrality on others' platforms, does not practice net neutrality on its own.
Troubles usually come in threes, they say - any bets on an EU investigation into tax avoidance practices by US multinationals in Europe soon?
Tuesday, April 14. 2015
Fascinating chart in HBR (above) on the relative change in valuation of Brands and Customers since the emergence of Digital monitoring and social media, as it becomes easier to measure customer behaviour and brand impact, and to know which half of advertising works:
And the results...
Oh dear - the demise of Brand Marketing will no doubt fill millions with gloom......
Monday, April 13. 2015
Went to Chinwag's Fintech event last Thursday to catch up on what's going on in Fintech - it looked like an interesting variety of speakers. Notes follow below:
To kick it off, Cass Professor Gianvito Lanzolla set the scene with a discussion of the current structure of technology change in Finance; he looked at the drivers of change:
1. Tech drivers in the near future
Dave Birch of Consult Hyperion followed up with his view on what people are doing today
Tech in banking is at a "William Gibson" time (aka the future is here, just unevenly distributed). The Meme of the Moment is around "mobile", but what that seems to mean in practice is contactless mobile devices - which is hardly disruptive. Dave introduced the concept of "Chromewash", e.g. Apple Pay (today's darling) runs on "yesterday's rails", i.e. it is merely a continuance of what has come before, overlaid with New New hype.
He looked at revenues from European banks - c 50% comes from Interest, Transactions c 30%, Fees c 10%, Other c 10%. The raft of Fintech innovators are already in the most obvious spaces of stuff banks do, so the profit pools most at risk are the large ones - loan areas (Interest revenue) and Transaction disaggregation. He pointed to Braintree as an example of a startup, and to an Anthony Jenkins FT article last week, noting that the pressure is on to end universal banking:
We shall see.... Dave also made the point that you can't dissociate Tech pressures from regulatory pressures (not new news - cf Carlota Perez et al). Regulation drives tech to a huge degree, e.g. regulations forcing direct API access to customer accounts mean that banks can't compete on the regulated API anymore, so must compete on other "added value" (to whom?) areas:
- non regulated
His view is that a major area of disruption will be Identity - it's a mess and identity theft is out of control, which opens the field to players from other industries. The endgame is the Amazonisation of banks: banks are no longer where you store money, they are where you store identity.
The Panel session had the above people plus Jonathan Kramer, Sales Director of peer-to-peer lender Zopa, and Mutaz Qubbaj, CEO & Co-founder of Squirrel, moderated by Ben Rooney, co-Editor in Chief at startup Informilo, ex WSJ - who quipped that "you know you're in a bubble when journalists leave good jobs to form startups".
The main Panel Discussion issue was "where are the opportunities in the near future?" These broke down into:
Working with banks
- KYC, CTF, AML (and other incremental improvements of all the TLA technologies....)
Competing with banks
- where banks do things inefficiently, e.g. consumer and SME lending
Banks are hugely vertically and horizontally integrated, tend to ossify and can be outmanoeuvred. But banks always have first option on trust and first-mover advantage - though they are everywhere slow and reluctantly trusted, much less liked (even less than RyanAir, apparently).
Then came the issue of Innovation vs. Regulation:
Regulation is not bad per se, but it is usually poorly done/enforced (and typically closing the stable door after the horse has bolted).
There is very little startup activity that looks better (from a major's point of view) than the returns to be had from playing a regulatory arbitrage game that only the big boys can play ("Too big to fail" being the zenith of this issue).
Any truly radical new, small play, if it gets too big, gets regulated (i.e. Zopa can scale, but how big does it get before regulation comes in?). Zopa said it now wants peer-to-peer lending to be regulated (to be safe from high-risk new entrants who would force tighter regulation, I presume). Another example: Funding Circle, which makes loans to small companies, is not regulated yet - but what happens if this market grows? There is always going to be a rotten apple or major incident that brings in tighter regulation. (Ditto Crowdfunding....)
And finally - The obligatory Bitcoin question!
I think Dave Birch had this one down best - bitcoin itself is not particularly interesting, but the blockchain is interesting as it allows trading without separate settlement. This is not just for currency, but for all sorts of third-party transactions like managing dishwasher guarantees, and it also has a strong intersection with the IoT.
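The core idea that makes settlement-free trading possible is the hash chain: each record embeds the hash of the previous one, so the history cannot be quietly rewritten. A minimal sketch (this illustrates only the linking idea, not Bitcoin's full protocol - the dishwasher-guarantee payloads are made up):

```python
import hashlib
import json

# Minimal hash-chain sketch: each record embeds the hash of the previous
# record, so any tampering breaks the chain and is detectable by anyone,
# with no central settlement party needed.

def block_hash(block):
    """Stable SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, payload):
    """Add a record linked to the hash of the chain's current tip."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def valid(chain):
    """Check every link: each block must reference its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, "guarantee issued to Alice")
append(chain, "guarantee transferred to Bob")
print(valid(chain))            # True
chain[0]["payload"] = "guarantee issued to Mallory"
print(valid(chain))            # False: tampering is detectable
```

Real blockchains add consensus (proof of work, etc.) so that many parties agree on which chain is authoritative, but the tamper-evidence above is the piece that removes the need for a trusted settlement intermediary.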
Anyway, to a large extent it showed that change will probably be slower than the enthusiasts think, as startups have a hard time plus negative inducements in growing too large and/or too fast, and regulation can often be used by incumbents to protect positions to a very high degree.
My "Note to self" was to look again at mobile payments in Africa and what else is being done on these platforms, as there are no legacy systems and typically that is where "next gen" disruptive technologies prove themselves.
Thursday, April 9. 2015
Now this is interesting - Wired:
We've wondered aloud on this blog several times about:
It's all a very murky area, as we have noted quite a few times over the years, and really needs case law to sort it out. Now it's starting:
The lawsuit, which will be heard for the first time at a civil court in Vienna on 9 April, has technically been brought against Facebook's European office in Dublin, through which all of its non-US and Canadian accounts are registered. Launched in August 2014, the lawsuit quickly attracted huge numbers of people wanting to participate and claim their share of any damages, which are being sought at a "token level" of €500 (£360) per claimant. While 25,000 users are involved in the first stage of the lawsuit, another 55,000 have registered to participate in a second round if proceedings develop as Schrems hopes.
In essence, the case alleges Facebook is in contravention of EU data protection laws and is in effect pushing the EU to face up to it and apply the law. The case also alleges Facebook was involved in the PRISM spying program that Edward Snowden blew open. It is expected that Facebook will argue the case is inadmissible in Austrian law.
One to watch.....this strikes at the heart of the business model of nearly every "for free" web and app service going today, including most of the so called "Unicorns" ($1bn+ valuation social business companies).
Tuesday, April 7. 2015
Yes, it's 2015, and yes, this is still news to many, if the twitter response to an article in The Economist is anything to go by (and remember, The Economist is writing for the more enquiring minds in society....).
Anyway, the article is about Bruce Schneier, who has put out a book stating this obvious, albeit inconvenient, truth - essential reading along with Jaron Lanier's analysis of "Siren Servers" in 2013. Schneier writes that:
Many smartphone apps can afford to be free because the companies that develop them sell the users’ personal data, something barely explained in the terms and conditions. If the service is free, then you’re the product, goes an old saw in Silicon Valley.
You may wish to read our papers from 2008 explaining this - in essence, if you ain't paying, you ain't the customer (start here).
Mr Schneier points out something else that everyone playing with Social Networks has known for (at least) a decade too:
..people do not need to disclose their details directly. Such information can also be inferred from patterns of behaviour and social networks, and the many harms that this can cause go beyond creepiness. It can mean higher online shopping prices if algorithms predict that an individual may pay them, and even racial discrimination if algorithms profile a person, by noting postcodes or answers to questions that are imperfectly tied to race. With few rules and little transparency, worse is possible.
Of course, The Economist does not like some of his solutions that make this snooping harder, being the good free trading economics mag they are:
But he also argues for stronger rules to prevent companies from collecting so much data in the first place; this would quite likely curtail unanticipated but valuable uses, like the Google Flu Trends programme
That is similar to the arguments that all your medical data should be "open" - as some good, somewhere, may come of it (never mentioning that you, of course, will be exposed to every bit of malfeasance that comes from it, with no redress - see our explanation on that starting here). But at least they accept that many of the journos pontificating in this area don't have a clue:
Some recent books on digital privacy have been written by journalists, with an emphasis on sugary narrative instead of original analysis. This one comes from a practitioner, and offers a deep but accessible look at surveillance in the post-Snowden, big-data era.
Which is why Schneier, Lanier et al (and us) are far more sceptical of the Wonderful New Future. And you haven't seen anything yet, just wait till the Internet of Snooping Things gets going. But nothing must spoil the relentless Good News Story....
Friday, April 3. 2015
Two rather interesting snippets on the veracity of Big Data analytics. Firstly, from HBR, this finding (summarised below):
Researchers recruited 61 analysts (mostly academics) and asked them to assess whether soccer referees were more likely to give red cards to players with darker skin tones. The analysts split up into 29 teams, and were given a dataset that included numerous variables about both players and referees.
With 29 teams and 29 slightly different results, it is clear that any analytical result depends on somewhat subjective decisions about the best approach to use and which variables should be included. After another round of debate, the analysts "converged toward agreement that there is a small, statistically significant relationship between player skin tone and receiving red cards, the cause of which is unknown".
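The "same data, many answers" effect is easy to reproduce. Fit one simulated dataset with different, equally defensible sets of control variables and the coefficient of interest moves. The data below is synthetic and purely illustrative - it does not model the actual referee study:

```python
import numpy as np

# Same data, different "defensible" model specifications -> different answers.
rng = np.random.default_rng(0)
n = 500
skin_tone = rng.uniform(0, 1, n)
position = rng.normal(0, 1, n)               # a plausible control variable
fouls = 2 * position + rng.normal(0, 1, n)   # correlated with position
red_cards = 0.1 * skin_tone + 0.5 * fouls + rng.normal(0, 1, n)

def ols_coef(y, regressors):
    """Least-squares coefficient on the first regressor, with intercept."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

spec_a = ols_coef(red_cards, [skin_tone])            # no controls
spec_b = ols_coef(red_cards, [skin_tone, fouls])     # control for fouls
spec_c = ols_coef(red_cards, [skin_tone, position])  # control for position

print(spec_a, spec_b, spec_c)  # three "teams", three estimates
```

Each specification is a reasonable analyst's choice, yet each yields a different point estimate - which is exactly the 29-teams result in miniature.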
The second is also from HBR, again by Walter Frick (who wrote the above article), with research showing that even if the algorithm was right, most people tend to prefer their gut feeling, even more so if they have seen an algorithm fail, even a little bit. And they’re harder on algorithms in this way than they are on other people:
Initially this looks like irrational behaviour, but of course if you juxtapose it with the above research that shows that one should probably not trust any one algorithm, it makes sense.
Apparently most people, when asked why they trusted human judgement, felt that humans were better at learning over several iterations (this is not demonstrably true, by the way).
I recall being told 30 years ago when starting out with complex simulation modelling that "people would rather stick with a problem they can't solve than a solution they can't understand".
Interestingly, in a forthcoming paper, the same researchers found that people are significantly more willing to trust and use algorithms if they’re allowed to tweak the output a little bit. If, say, the algorithm predicted a student would perform in the top 10% of their MBA class, participants would have the chance to revise that prediction up or down by a few points. This made them more likely to bet on the algorithm
Which brings us back to 29 teams with 29 different models.....
Thursday, April 2. 2015
Some time ago I read a fascinating essay on the Economics of Social Status by Kevin Simler. It essentially looks at status as an economic good, and I found it an interesting way of thinking of online social capital (aka "whuffie"). Anyway, I was reading it again as background to some thinking I'm doing on new types of Organisation Structures (see the Anna Karenina Principle, here) and I re-read this bit:
"A community is a group of people who agree on how to measure status among their members".
Whenever "Transaction Costs" are mentioned I think of Ronald Coase and the Theory of the Firm, which says that people begin to organise their production in firms when the transaction cost of coordinating work through market exchange, given imperfect information, is greater than within the firm.
On re-reading this, I think there is something profound here for online social networks, in that the transaction costs (a proxy for efficiency and effectiveness) of any social network organisation structure are fundamentally about the ability to trade social capital (a proxy for trust).
There is also something interesting here for all the many wannabe systems vying to replace the good old hierarchical way of organising work - it's not the easy movement of information that is key, it is the easy movement of social capital - aka influence - that drives the benefits of more connected organisations. There is an implication here that simplifying social capital markers (and that includes hierarchy) is key, which implies that heterarchies, while fine for non-human systems, may be less effective as organisations for humans.
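Coase's make-or-buy threshold reduces to a simple comparison, which can be sketched as follows (all cost figures are hypothetical, purely to show the decision rule):

```python
# Toy version of Coase's argument: work is organised inside the firm when
# the cost of coordinating via the market exceeds the internal cost.
# All cost numbers are hypothetical.

def organise(market_transaction_cost, internal_coordination_cost):
    """Return where the work ends up under the Coasean decision rule."""
    if market_transaction_cost > internal_coordination_cost:
        return "firm"
    return "market"

# Imperfect information pushes market transaction costs up...
print(organise(market_transaction_cost=12.0, internal_coordination_cost=8.0))
# ...while cheap, trusted exchange (of information or social capital)
# pulls them down, shifting work back toward market-style networks.
print(organise(market_transaction_cost=5.0, internal_coordination_cost=8.0))
```

On this reading, social networks that lower the cost of trading social capital shift the threshold, which is why they matter for organisation design.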
Food for thought.....
Thursday, March 26. 2015
An interesting day at the SCTE Spring Lecture yesterday – thanks to those speaking and organising the event. The headline topic was DOCSIS 3.1, which is the next evolution of cable modem technology. I have spent the last few years working with DSL-based ISPs, so it was good to find out what’s been going on with cable.
DOCSIS 3.1 updates cable modem technology by increasing the potential spectrum available and improving the efficiency of its spectrum use. It also opens the door to better integration between the TV and data sides of cable technology, as well as standardising platform management interfaces. However, it seems the most important thing for cable companies is to get more bits through their existing infrastructure. This is good for the companies and the consumer.
Like any infrastructure business, telcos and cable cos want to sweat their assets for as long and as much as possible. If we had invented broadband access as a green field technology, we would have laid a fibre to every home. Of course, digging trenches is expensive and engineering should be the art of the possible. In the telco world, the DSL family of standards was developed to squeeze data over the twisted pairs originally designed for voice. (A quick historical note - ADSL was originally developed to carry video on demand over ATM, back in those seemingly distant days when the Internet was a curiosity for geeks and academics.)
A few years later, in the late 90’s, DOCSIS was developed to overlay data on cable TV networks. Because cable TV uses co-axial cable rather than twisted pairs, this was always an easier task. Cable does have a disadvantage: the cable is shared by many customers and therefore the bandwidth has to be shared. This is known as the “contention ratio” - but DSL also suffers from shared resources as data goes deeper into the network, and this is really the point of the Internet, i.e. a shared medium that can be used by everyone intermittently.
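The contention ratio arithmetic is simple, and shows why sharing works in practice. The numbers below are illustrative, not any operator's actual figures:

```python
# Rough per-customer throughput on a shared cable segment.
# Capacity, customer count and activity level are illustrative assumptions.

def worst_case_mbps(shared_capacity_mbps, customers):
    """Everyone downloading at once: capacity split across all customers."""
    return shared_capacity_mbps / customers

def typical_mbps(shared_capacity_mbps, customers, active_fraction):
    """Usual case: only a fraction of customers are active at any instant."""
    active = max(1, round(customers * active_fraction))
    return shared_capacity_mbps / active

print(worst_case_mbps(1000, 50))     # 20.0 Mbit/s if all 50 hammer it
print(typical_mbps(1000, 50, 0.1))   # 200.0 Mbit/s with 10% active
```

Intermittent use is what makes a 50:1 contention ratio feel fast most of the time - exactly the shared-medium bet the Internet itself is built on.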
Cable companies have dealt with contention and increasing demand for bandwidth by -
• allocating more bandwidth to broadband data
• improving efficiency (bits/hertz) by upgrading the technology to allow better performance over the same network.
• improving the network quality to reduce noise and so improve bits/hertz (by allowing a larger constellation size).
• segmenting the network to reduce the number of customers “sharing” each data feed.
From the mundane exercise of sending technicians to tighten connectors (improving network quality) to the high-tech of advanced line coding (new technology), these approaches all have costs and benefits, and so become one side of a business case. The other side is the competitive environment. In the UK, Virgin Media are offering 152Mbit/s as their top tier, and one has to assume that this is pitched so as to outstrip anything BT can do using DSL and twisted pair! In Europe, telcos are using Fibre to the Home (FTTH) to reach Gigabit speeds and the cable operators are responding. DOCSIS 3.1 will make this much easier for them.
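The link between constellation size, noise and bits/hertz is standard engineering arithmetic, and a short sketch makes the upgrade levers above concrete. The SNR figure is an illustrative assumption:

```python
import math

# How constellation size maps to bits per symbol, and the Shannon limit
# that noise places on spectral efficiency (bits/Hz).

def bits_per_symbol(qam_order):
    """A QAM constellation of M points carries log2(M) bits per symbol."""
    return int(math.log2(qam_order))

def shannon_bits_per_hz(snr_db):
    """Capacity limit C/B = log2(1 + SNR) for a given signal-to-noise ratio."""
    return math.log2(1 + 10 ** (snr_db / 10))

print(bits_per_symbol(256), bits_per_symbol(4096))  # 8 vs 12 bits/symbol
print(round(shannon_bits_per_hz(40), 1))            # ceiling at 40 dB SNR
```

This is why "tightening connectors" matters: a quieter plant raises the usable SNR, which raises the constellation order the line coding can sustain, which raises bits/hertz over the very same coax.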
As an aside, a question was raised about the theoretical limit to the amount of bandwidth a human can consume. We didn’t get a good answer, which was fair enough as it is really a question that involves cognitive psychology. However, it did bring to mind a review I read many years ago for a V.32 (9.6K) modem that said something to the effect that it was all very clever, but no one can type that fast!
Wednesday, March 18. 2015
Layers of News Media over time - Business will be no different (from Baekdalmedia.com)
We've written about this a few times on Broadstuff, but not for Social Business. I wrote about this application of Riepl's Law to Enterprises over at the Agile Elephant Blog, but in short:
It’s never going to happen. Email will be here for a long time still, so get used to living with it. Social Business systems that can’t cope with email will die a long time before email will.
The reason for this is that, in the entire history of new media from the invention of speech onwards, newer and further developed types of media never replace the existing modes of media and their usage patterns. Instead, a convergence takes place in their field, leading to a different way and field of use for these older forms. The diagram above shows how media generations have gone in News; it will be no different for Business communications. (Source: Baekdalmedia.com)
This observation is called Riepl’s Law, after Wolfgang Riepl, who first noticed it. Riepl was the chief editor of Nuremberg’s biggest newspaper at the time, and stated the law as above in his dissertation about ancient modes of news communications.
Creative Commons Licence
Original content in this work is licensed under a Creative Commons License