Wednesday, June 3. 2015
As I can’t get along to InfoSec15 this year, I thought I would get a flavour of what’s going on by pointing our experimental text mining tool in that direction. See the graph below. Bruce Schneier and John McAfee seem to have the influence they deserve, followed by BSOD (is Windows 10 here already?) and “trashier tearaway”, which seems to be a reference to an article in The Register about the perils of allowing BYO-IoT-D (Bring Your Own Internet of Things Device!) into secure networks. Also, full marks to whoever thought up “malvertising”!
Hope to get along next year and find out what’s going on in person.
(Apologies for my earlier, half-formed draft of this post – now deleted – just proves I really am too old to multitask!)
Thursday, March 26. 2015
An interesting day at the SCTE Spring Lecture yesterday – thanks to those speaking at and organising the event. The headline topic was DOCSIS 3.1, which is the next evolution of cable modem technology. I have spent the last few years working with DSL-based ISPs, so it was good to find out what’s been going on with cable.
DOCSIS 3.1 updates cable modem technology by increasing the potential spectrum available and improving the efficiency of spectrum use. It also opens the door to better integration between the TV and data sides of cable technology, as well as standardising platform management interfaces. However, it seems the most important thing for cable companies is to get more bits through their existing infrastructure. This is good for the companies and for the consumer.
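As a back-of-the-envelope sketch of where the gain comes from, capacity is roughly usable spectrum multiplied by spectral efficiency (the figures below are illustrative round numbers, not exact DOCSIS parameters):

    # Rough capacity model: usable spectrum (Hz) x spectral efficiency (bits/s/Hz).
    # All figures are illustrative round numbers, not exact DOCSIS parameters.

    def raw_capacity_gbps(spectrum_mhz, bits_per_hz):
        """Raw (pre-overhead) downstream capacity in Gbit/s."""
        return spectrum_mhz * 1e6 * bits_per_hz / 1e9

    # DOCSIS 3.0 era: ~750MHz of downstream plant with 256-QAM
    print(raw_capacity_gbps(750, 6.5))     # ~4.9 Gbit/s
    # DOCSIS 3.1: spectrum extended towards 1.2GHz, OFDM up to 4096-QAM
    print(raw_capacity_gbps(1100, 10.0))   # ~11 Gbit/s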
Like any infrastructure business, telcos and cable cos want to squeeze their assets as long and as much as possible. If we had invented broadband access as a green field technology, we would have laid a fibre to every home. Of course, digging trenches is expensive and engineering should be the art of the possible. In the telco world, the DSL family of standards was developed to squeeze data over the twisted pairs originally designed for voice. (A quick historical note - ADSL was originally developed to carry video on demand over ATM, back in those seemingly distant days when the Internet was a curiosity for geeks and academics.)
A few years later, in the late 90s, DOCSIS was developed to overlay data on cable TV networks. Because cable TV uses coaxial cable rather than twisted pairs, this was always an easier task. Cable does have a disadvantage: because the cable is shared by many customers, the bandwidth has to be shared too. This is expressed as the “contention ratio”. DSL also suffers from shared resources as data goes deeper into the network, and sharing is really the point of the Internet, i.e. a medium that everyone can use intermittently.
Cable companies have dealt with contention and increasing demand for bandwidth by -
• allocating more bandwidth to broadband data
• improving efficiency (bits/hertz) by upgrading the technology to allow better performance over the same network
• improving the network quality to reduce noise and so improve bits/hertz (by supporting a larger constellation size)
• segmenting the network to reduce the number of customers “sharing” each data feed (a rough sketch of the arithmetic follows this list)
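To see why segmentation works, here is a trivial sketch of the contention arithmetic (all figures are made up for illustration; real capacity planning is more subtle):

    # Effective peak-time bandwidth per subscriber on a shared segment.
    # All figures are invented for illustration.

    def per_user_mbps(segment_capacity_mbps, homes, active_fraction):
        """Capacity available to each active user on a shared segment."""
        active_users = max(1, round(homes * active_fraction))
        return segment_capacity_mbps / active_users

    # One node serving 500 homes, 10% active at peak time
    print(per_user_mbps(2000, 500, 0.10))   # 40.0 Mbit/s each
    # Split the node in two: same capacity per segment, half the homes
    print(per_user_mbps(2000, 250, 0.10))   # 80.0 Mbit/s each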
From the mundane exercise of sending technicians to tighten connectors (improving network quality) to the high-tech of advanced line coding (new technology), these approaches all have costs and benefits, and so they form one side of a business case. The other side is the competitive environment. In the UK, Virgin Media are offering 152Mbit/s as their top tier and one has to assume that this is pitched to outstrip anything BT can do using DSL and twisted pair! In Europe, telcos are using Fibre to the Home (FTTH) to reach Gigabit speeds and the cable operators are responding. DOCSIS 3.1 will make this much easier for them.
As an aside, a question was raised about the theoretical limit to the amount of bandwidth a human can consume. We didn’t get a good answer, which was fair enough as it is really a question that involves cognitive psychology. However, it did bring to mind a review I read many years ago for a V.32 (9.6K) modem that said something to the effect that it was all very clever, but no one can type that fast!
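(For what it’s worth, the reviewer’s arithmetic still holds up. Assuming a fast typist, the sums go roughly like this:)

    # A human's "upstream" rate by typing, in bits per second. Rough assumptions.
    wpm = 80                   # a fast typist
    chars_per_word = 6         # average word plus a space
    bits_per_char = 8
    typing_bps = wpm * chars_per_word * bits_per_char / 60
    print(typing_bps)          # 64.0 bit/s -- a V.32 modem at 9600 bit/s
                               # outruns the typist roughly 150 times over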
Sunday, September 15. 2013
Bruce Schneier’s Cryptogram newsletter is always a good read and this month’s is especially good. It mainly covers the “Snowden” revelations about the NSA’s on-line surveillance. These are a few of the points that I found significant, but I would recommend the original at https://www.schneier.com/crypto-gram-1309.html
Sunday, April 21. 2013
Broadsight Hierarchy of Internet Needs (after Maslow)
This week, I was allowed out of the office for a trip to the Spring Lecture day organised by the Society of Cable Telecommunication Engineers (SCTE.) It was an eclectic mix of lectures, although I found a theme emerging about the economics of bandwidth. If you are interested in the details of these lectures (and some on other topics), you can see videos of the lectures at http://tv.theiet.org (and search for “The Society for Broadband Professionals”)
Several of the lectures were on the use of new technology to extend the life of existing plant.
Mourad Veeneman, from Liberty Global, spoke about the upgrades to the DOCSIS standard in the 3.1 revision and the focus on getting higher data rates over cable to compete with the emerging technology of fibre to the home (FTTH.) Like many upgrades to transmission (line coding) standards, DOCSIS 3.1 seeks to take advantage of advances in modulation and error correction theory, supported by extra processing power at both ends, and also to make use of more spectrum on the cable.
Stephen Cooke, of Genesis Technical Systems, spoke about a new architecture to enable rural broadband by sharing the pairs in the bundle from the exchange (Telco Office) to the local distribution points (pedestals for North American viewers.) By sharing the data bandwidth on the exchange backhaul, what little bandwidth might be available on a long line can be used more effectively by pooling at the distribution point and creating a resilient ring around the community being served. This doesn’t rely on a leap in coding technology, but on the insight that Internet connectivity is always contended, so in this case we might as well move the contention out to the edge of the network. A cable drop might have in the order of 20 to 30 pairs, and at the end of a long line those pairs might support 500k each, but bundled together that’s still a respectable speed. The Genesis system also allows regeneration at intermediate points, so will usually do a lot better than that figure suggests.
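Putting rough numbers on that (purely illustrative):

    # Pooling the pairs in a rural drop cable: illustrative figures only.
    pairs = 25                # a drop might carry 20 to 30 pairs
    mbps_per_pair = 0.5       # ~500kbit/s each at the end of a long line
    print(pairs * mbps_per_pair)   # 12.5 Mbit/s pooled for the community,
                                   # before any gain from regeneration at
                                   # intermediate points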
We also had a lecture on the efficient development and operation of fibre in the network core. For me, these lectures illustrate the point that bandwidth must be provided in a way that is economic (and practical) for customers and makes a profit for operators. There is often a temptation to start a project with the idea that we should “do this right” and not be held back by “legacy.” However, engineering (in my view) should always be “the art of the possible” or perhaps “the art of the profitable.” This means we need to find clever fixes to maximise current plant, equipment and organisations. DSL has been a brilliant way of leveraging copper pairs that were designed for 3kHz voice and extending their life. DOCSIS has done a similar thing for cable networks, many of which were laid before the Internet had even routed a packet.
By the way, this principle can also be applied to software. It's tempting to start again "with a clean slate", but it can be dangerous to underestimate the value of old software that has matured under years of trial, error (and hopefully, fixing!) A nice illustration of this is the adoption of DOCSIS Provisioning of Ethernet PON (DPoE) by the EPON FTTH standard. This allows ISPs to use their old cable billing and OSS systems by connecting to a DPoE API that presents a "virtual cable modem." This was discussed in the talk by Jim Farmer from Aurora Networks.
So it’s clear that bandwidth is valuable and has to be provided economically. However, creating and capturing value are not the same thing and a couple of points illustrate this.
The first point, regarding the growing conflict between 4G mobile services (LTE in the jargon) and existing TV services, was discussed by Dipl. Ing. Carsten Engelke from ANGA. Both terrestrial TV and cable services run in bands that go up to the 700 and 800MHz range. This applies to digital and analogue TV (for those countries that still have analogue.) The mobile industry has persuaded governments (through the ITU World Radio Conference process) to clear the 800MHz band and, soon, the 700MHz band for extra 4G data bandwidth. This means that terrestrial TV has to move, which imposes costs on the broadcasters without benefits. To be fair, they had a privileged position in the first place and can make do with less bandwidth once they turn off analogue. A bigger problem faces the cable industry, whose business model is essentially to compete with DSL and FTTH. This means that they are relying on most or all of the bandwidth in their cables. Although it’s early days for LTE in the 800MHz band, there are examples of the LTE signal getting into the cable and blocking TV. Although it’s a signal from the mobile tower, it becomes a problem for the cable company. If the signal is leaking in through a customer's TV (due to unintended pick-up) there may not be much they can do about it. So the mobile company has captured some value at the cost of the cable company. I am not making judgements about companies here, as they are just using the bandwidth that has been licensed to them. However, it is worth making the point that externalities become complex and may become more so with increasing use of power line and cognitive radio.
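For a feel for the numbers, here is a quick sketch of where the LTE 800MHz band sits relative to a typical European cable plant (the band edges are approximate figures for the EU “digital dividend” Band 20; treat this as illustration rather than regulatory fact):

    # Approximate band edges in MHz; illustrative, not authoritative.

    def overlap_mhz(a, b):
        """Overlap in MHz between two (low, high) frequency bands."""
        low, high = max(a[0], b[0]), min(a[1], b[1])
        return max(0, high - low)

    cable_plant = (85, 862)       # typical European cable spectrum
    lte_downlink = (791, 821)     # base station transmit
    lte_uplink = (832, 862)       # handset transmit -- often right next to the TV

    print(overlap_mhz(cable_plant, lte_downlink))   # 30 MHz of cable channels exposed
    print(overlap_mhz(cable_plant, lte_uplink))     # and another 30 MHz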
The other example is the allocation of Wi-Fi frequencies, which have been a huge benefit to businesses and consumers, but risk becoming a neglected spectrum user. Wi-Fi has two main bands allocated to it: the most common is in the 2.4GHz range and the other is in the 5GHz range. (At 5GHz, radio is starting to behave like infra-red and doesn’t like to go through walls, so it is less useful than 2.4GHz.) With only 3 non-overlapping channels in the 2.4GHz range, Wi-Fi is oversubscribed in most medium and high-density residential areas. It wouldn’t be hard to add another band to mitigate this problem, which would, arguably, be a significant benefit to users. However, no one appears to be pushing for this at the WRC and so it’s unlikely to happen. We could speculate that this is because residential Wi-Fi is not directly billable, so no one can capture the value and therefore no one has an interest in promoting it.
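The channel arithmetic behind the “only 3 channels” problem is simple enough to sketch (using the standard 802.11 figures of 5MHz channel spacing and ~22MHz channel width):

    # Why only 3 non-overlapping Wi-Fi channels at 2.4GHz: channels are spaced
    # 5MHz apart but each one is ~22MHz wide.

    def centre_mhz(channel):
        return 2407 + 5 * channel        # channel 1 = 2412 MHz

    def overlaps(ch_a, ch_b, width_mhz=22):
        return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) < width_mhz

    print(overlaps(1, 6))    # False -- 25MHz apart, safely separated
    print(overlaps(1, 4))    # True  -- only 15MHz apart, they interfere
    # Hence the usual plan of channels 1, 6 and 11, and nothing in between.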
To sum up - as bandwidth provision increases, we keep finding ways to use it, from GIFs to music/speech to video and now HD/4K video. I am sure we will carry on filling up the pipes, so thanks to the engineers who keep re-building them. This brings to mind a review I read (a long time ago) about the first 9600bit/s modem, which went something like “this is all very well, but no one can type that fast – so what’s the point?”
Friday, February 15. 2013
I spent an hour or so trying to get my son's iPad working with Sky Go. It worked for 11 months and then suddenly stopped for reasons that are not apparent to me or (so far) Sky's technical support people. While this is frustrating for me and my son (he has to resort to sharing the TV in the living room!) it is an example of how hard it is to reliably run TV services across the Internet to customer owned devices. (A bit like BYOD in the corporate world!)
To be fair to Sky, their customer service is no worse than their competitors' and they do try to "push the envelope" on new technologies, so there will be some pain from time to time. However, they have bundled Sky Go into their service packages, as they did in the past with Sky+ and entry-level broadband, in an attempt to reduce churn, and now they have to support it!
OTT (or "over the top") TV is the idea of delivering (mainly) TV content via the Internet rather than paying for your own distribution system like satellite, cable or terrestrial transmitters. YouTube and BBC iPlayer where early (and free) entrant into this sector. Netflix and Hula are probably the best known pay providers in OTT and all the traditional operators are trying or thinking about getting into this market, either because they see it as "strategic" or as a defensive ploy.
OTT sounds like a free ride, as you don't have to buy network capacity to the home. However, the real cost of OTT is the need to make the service work with all the devices that you "support." This is not so bad if it's a "free" service, and it's probably no coincidence that YouTube got going free (and still largely is). People don't generally complain if they are not paying in the first place. However, when you start selling a service and say it will work on a list of devices, customers expect it to work or they will churn out.
In the old world of closed networks and devices, broadcast platforms (e.g. BSkyB, Virgin, UPC, DirecTV, DISH, etc.) tested each new set top box (STB), firmware download or network update to death. And even then, they rolled it out slowly to "trialists" or "friendly customers" (most customers are unfriendly, you see) before unleashing it on the whole population. The reason they can do this process effectively is that everything is locked down - they know exactly what the hardware is, they know the versions of all the software components and they know the network environment. These factors are all reproduced as closely as possible in the test lab, so the test results are reasonably accurate. There are still surprises when new products are put into the real world - real people do things that were not anticipated, or unexpected environmental factors cause problems, e.g. certain fluorescent lights interfere with remote controls. These environmental factors are usually caught in trials, but scaling issues can still be missed until full deployment.
So, even with everything under control and nailed down, it's quite hard to catch all the bugs. In the OTT world, operators usually offer support for half a dozen devices. Doesn't sound like too many, right? Well, each device probably has half a dozen operating system versions in the field ... and maybe several hardware versions. In the case of PCs in particular, there may be several web browsers and several user contexts. Apart from games consoles (which usually run one application/game at a time), there is an almost infinite combination of software that co-exists on the device and may interfere with your content delivery app. Now multiply these factors together and you get a huge multidimensional matrix of possible environments that could be tested. Oh yes, I also forgot the local network context, which will change depending on your ISP and home router, etc.
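A toy calculation shows how quickly the matrix grows (the counts are invented, but if anything conservative):

    # The OTT test matrix grows multiplicatively. Invented but plausible counts.
    from math import prod

    dimensions = {
        "supported devices": 6,
        "OS versions per device": 6,
        "hardware revisions": 3,
        "browsers (on PCs)": 4,
        "ISP/home-router contexts": 5,
    }
    print(prod(dimensions.values()))   # 2160 combinations -- before counting
                                       # whatever else is running on the box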
Delivering OTT to many devices is a hard problem, but not insoluble. We are seeing the following approaches -
1. Aggregation. Hulu, Netflix and Lovefilm Instant are mainly content aggregators (House of Cards excepted!) and their value is in their ability to ingest content, transform it for target devices and manage that process. Apple is a special case of an aggregator, as they own the delivery chain down to the device (and have the customers' credit card details!) Amazon are also trying to play here. So the cost of testing is spread over a large portfolio of content and therefore purchases.
2. Targeting of Devices. This is the idea of focusing your effort on the top devices (by market share) until the cost of supporting a device (and/or web browser) is not justified by the likely revenues. Even the BBC, with their commitment to broad access, had to cull their supported devices.
3. Risk-Based Testing. We can't test the whole matrix of possible contexts, but we can test the most likely and/or representative ones (see the sketch after this list). Good feedback from customer support is important here.
4. Customer Support. There will be many more customers who can't get the service to work than in the traditional model of a set top box supplied by the operator, and customer support needs to reflect this. We see the use of forums to encourage users to help other users, whether for kudos, discounts or just goodwill.
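As a sketch of the risk-based idea, rank contexts by market share and test until the marginal coverage stops paying for itself (the devices and shares below are invented for illustration):

    # Risk-based selection: test the most popular contexts first.
    contexts = [
        ("iPad / iOS 6", 0.30),
        ("iPhone / iOS 6", 0.25),
        ("Samsung / Android 4.1", 0.15),
        ("PC / IE9", 0.10),
        ("PC / Firefox", 0.08),
        ("Older Android tablets", 0.05),
    ]

    def test_plan(contexts, target_coverage=0.80):
        """Smallest set of contexts (by share) that reaches the target."""
        plan, covered = [], 0.0
        for name, share in sorted(contexts, key=lambda c: -c[1]):
            if covered >= target_coverage:
                break
            plan.append(name)
            covered += share
        return plan, covered

    print(test_plan(contexts))   # four contexts cover ~80% of customers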
The increasing maturity of software stacks also helps in this process. APIs that really do abstract the hardware and lower layers and software that traps errors make life much more predictable for application developers and testers.
The final twist on this is DRM. It is hard enough to make an end to end system work "in the clear." DRM adds a parallel delivery chain: not only do I need the content, I also need the licence and decryption keys, and these have to be delivered in sync to make it all work. That would be hard enough, but DRM is usually designed to work as a "black box." This is (understandably) to stop people (hackers) looking in and defeating it. However, the effect is that we often end up without a meaningful error message when things break. (My problem with Sky appears to be DRM related. There is no error message, yet the "clear" promo video plays.) On top of this, DRM is designed to be sensitive to unexpected changes in the environment, as these might be hacking attempts: for example, the system clock going out of tolerance or the presence of an unknown/suspicious app on your device. Again, it usually won't say what it doesn't like, as this information could be useful to hackers.
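To caricature the kind of silent gatekeeping involved (the logic below is entirely hypothetical; real DRM systems deliberately don't document their rules):

    # A caricature of DRM-style environment checks -- hypothetical logic.
    import time

    MAX_CLOCK_SKEW_S = 300                          # assumed tolerance
    SUSPICIOUS_APPS = {"screen_grabber", "debugger"}

    def may_play(licence_issued_at, running_apps):
        """Silently refuse playback if the environment looks wrong."""
        if abs(time.time() - licence_issued_at) > MAX_CLOCK_SKEW_S:
            return False             # clock out of tolerance: no reason given
        if SUSPICIOUS_APPS & set(running_apps):
            return False             # suspicious app present: no reason given
        return True                  # keys released, protected content plays

    print(may_play(time.time(), ["player", "mail"]))   # True
    print(may_play(time.time() - 3600, ["player"]))    # False -- and the user
                                                       # just sees the "clear"
                                                       # promo play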
So far, I haven't mentioned UltraViolet, which is a Hollywood-sponsored attempt to allow a "buy once, play anywhere" model of DRM. It also defines a common format for the content itself. I think it's unlikely that a common format will stick, as the format is driven by the constraints of the user device, and even if we can agree now, new devices will come along with new constraints and capabilities. The other problem with UltraViolet is the absence of Apple, who would say that they have already solved this problem (if you buy Apple!) So UltraViolet becomes another row (or rows) in the matrix!
So, what are the conclusions? Basically, OTT is hard and we should expect to see a few aggregators emerge, either as brands or as white label providers (e.g. KIT Digital.) These aggregators will probably be the names mentioned already, as scale is the key to solving this problem. The more customers you have, the cheaper testing and development is per customer.
By the way, my spell checker keeps trying to change "aggregators" to "aggressors", which is probably how it feels if you are an incumbent operator.
Friday, June 26. 2009
After a hard day's work at Broadsight Towers, some of the team decided to let our hair down last night by attending a lecture on Web Oriented Architectures at the Institution of Engineering and Technology (IET) in London. It was given by Mark Edgington. Here’s our quick précis, with apologies to Mark!
The basic premise was that Service Oriented Architecture (SOA) is a heavyweight architecture that is required for enterprise solutions whereas web oriented architecture (WOA) is a lighter weight solution that covers most of the use cases. Although web based architecture is not a formal specification, there is a significant amount of custom and practice. Because of this popularity and also because it’s a simple architecture, it has good inter-operability. Think of all the bots and clients talking to Twitter!
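To illustrate how lightweight the WOA end of the spectrum is, a REST-style client can be little more than an HTTP GET (a generic sketch; the endpoint is invented):

    # A WOA/REST interaction is just HTTP plus conventions for URLs and verbs.
    import json
    import urllib.request

    def get_resource(url):
        """Fetch a JSON resource -- the whole client 'stack' for many WOA uses."""
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    # e.g. get_resource("https://api.example.com/users/42")
    # Contrast with SOAP/WSDL, where you would typically generate client stubs
    # from a service description before making any call at all.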
SOA is appropriate for applications that need security, transactional integrity and all those good enterprise grade things. They tend to be internal to an organization and this is just as well, because components from different vendors are often not interoperable.
Don't be religious! The SOA and WOA approaches both have pros and cons. Choose the one that is fit for your purpose. This might mean using both approaches on the same project for different users or purposes.
It seems to us that many complex standards emerge, then after a while the community realizes that it can get 80% of the benefit from 20% of the complexity and invents a cut-down standard for "everyday" use – X.500 and LDAP, X.400 and SMTP, ATM and TCP/IP.
Our only quibble with the lecture was that SOA was presented as synonymous with the SOAP/WSDL family of standards and WOA with REST. We find that in practice the terms tend to refer to the business and technical models at a more abstract level.
The IET are running another series next year, starting in September 2009. Have a look at IET Events if you are in London.
Thursday, May 7. 2009
So, Broadsight are spread thinly this week. While Alan is sunning himself at the Telco 2.0 Conference in Nice, the rest of the team were at the Digital Britain Unconference at the ICA in London.
It was an interesting and worthwhile event, set up as a reaction to the lack of consultation during the preparation of the official Digital Britain Report. As many readers will know, the official interim report leaves a lot to be desired. Without wanting to rehash all the arguments, the government seems to be perpetuating vested interests (e.g. traditional content owners) at the expense of, well, everything else! There is a nod to inclusion with a 2Mbps universal service obligation, but that seems inadequate and the thinking is confused.
The Unconference was a useful "mass brainstorm" and the crowd sourced some interesting points that were missing from the report, e.g. enforceable privacy, building an innovation culture, and user generated content from individuals and communities. Many people were concerned and rather cynical about the approach to copyright, of course.
I know that the organisers (that includes us, btw) are going to write this up and present it to Stephen Carter as part of the consultation before the final report is written, and we should be grateful to them for doing this necessary work.
However, my overriding thought is that this is all so important for everyone that it shouldn't be left to a small group with a special interest in the technology. We wouldn't leave civil engineers as the only people to respond to proposals to build new roads, but "digital" is still seen as a ghetto and not the infrastructure of everyday life.
Sunday, March 15. 2009
Here's an interesting post on Bruce Schneier's blog. It talks about the growing trend of governments and corporations to store almost all on-line and telecommunication information indefinitely. Given that so much of our lives and business is moving on line, this leaves little space for the "ephemeral" conversations, which are not intended to be "on the record".
Unless you buy the "nothing to hide, nothing to fear" argument, this is a worrying trend.
It is hard to see how to roll back this tide of surveillance, and perhaps the way to deal with it is to change the way we think about what can be considered binding and "incriminating" (in the widest sense). The ability to be untraceable or unrecorded does "oil the wheels" of society and business. Many people do things that are quite legal, but would not like them to be generally known, for all sorts of legitimate reasons. There are also plenty of "grey areas" where people do things that are on, or beyond, the margins of legality. We may find many unintended consequences if ever-increasing surveillance technology stamps out these practices.
It's good to see Alan and everyone else are having fun at SXSW. Back in Broadsight Towers we continue with the detailed analysis of trends and careful consideration of complex arguments, and try to rise above the superficial and febrile stream of tweets.
I noticed that Alan's earlier post recommends that all the SXSW'ers get off the net and go and talk to each other. I should think so, given how much CO2 they have all emitted to be in close physical proximity! It reminded me of the 'Analysis' podcast from the BBC that I listened to on Friday. This was about the way that the Web is 're-wiring' our brains so we depend on 'fixes' of rapid, bite-size info-chunks. I am happy to report that I was able to concentrate on this podcast and give it my total attention for the full 30 minute running time, so clearly I have not been re-wired yet!
The basic argument was that the constant flow of, and easy access to, information prevents children from developing the capacity to recognise and evaluate structured and authoritative sources, or to put together their own complex and structured arguments.
I think that this is an area for concern, but taking a "snapshot" now is misleading, as the technology and the social structures around it are far from mature. My gut feeling (very reasoned and structured, natch!) is that we will find ways to address this issue as the web develops.
Anyway, 'Analysis' gives some food for thought and is always worth a listen.