Wednesday, April 19. 2017
Obviously a very serious entry and in no way an excuse for the headline...
This is Kasper, our Portuguese Water Dog, who is now on the Internet, thanks to his GPS/GSM tracker. Kasper has a fascination with the local muntjacs. Fortunately, they run much faster than he does, but he does sometimes end up lost after a chase. So we have fitted him out with a tracker and it works quite well, allowing us to track him in real time via a mobile app. The battery lasts over 24 hours, so if he were seriously lost we would have a good chance of locating him. He has his own location systems and usually finds us first!
The point of this (really!) is the growing ubiquity of the IoT (Internet of Things) and in this case the IoD (Internet of Dogs).
Sunday, September 6. 2015
This is just a quick note on my own experiences and not a full review
I have upgraded my second best work computer from W7 to W10 and initial impressions are good. In fact, it's all pros so far.
The start button seems to be back by default and uses part of the pop-up menu to show Windows 8 style tiles. I am not a huge fan of having pre-set content pushed to my desktop, but they could be quite nice once personalised.
The upgrade process was quite painless and has automatically picked up my "pro" licence, so I can use BitLocker. In fact, this is a feature enhancement, as BitLocker was only available (to encrypt drives) on Windows 7 Enterprise and Ultimate. (Since W7, all versions have been able to read BitLocker removable drives.) One of my colleagues did report that his licence was lost and he ended up with the basic version of W10, although there is an option to ask Microsoft to issue the correct licence to fix this.
W10 offers two ways to back up: the legacy W7 "Backup and Restore" and the native "File History". Both of these can work to a network location (although this may not be true on the basic version of W10.)
The UI has a nice clean style, in spite of my attempts to clutter the desktop with random icons!
Last but not least, there is an option to revert or roll back to (in my case) Windows 7. I haven't tested this, but it could be a useful feature if issues arise. Of course, the usual health warnings apply and you should back up your system and, especially, your data before upgrading. The Windows 7 "Create a System Image" is probably best if you need to do a clean restore shortly after upgrading. The main drawback is that it is only a snapshot, so restoring it would overwrite any data updates made between the time of the system image and the time of the restore.
I have learnt to be wary of Windows updates, especially after Vista and Windows 8.0. However, Microsoft seem to have gone for continuity this time and I am cautiously optimistic. I will report back once I have had a chance to play with the upgrade.
Monday, August 24. 2015
We have been reflecting on the data breaches at Mumsnet and Ashley Madison as well as the user revolt over Spotify’s attempt at a data land grab. We are still at the start of the information age and users are still learning the value and power of personal data. We believe that there are some lessons to learn here.
Lesson 1 - nothing is secure! We should know this by now! Even the NSA is not secure, as Edward Snowden helpfully demonstrated. Once you have given your information to a third party you have lost control of it, so take care about who you trust and what you tell them. For example, does my cable company need to know my real date of birth? Invent an “Internet Birthday” and only tell banks and governments your real DOB (banks so your credit check works and governments as they get grumpy when citizens don’t co-operate!)
Ashley Madison were bordering on the insane to claim (as reported by the Independent) that their servers were "kind of untouchable". The only untouchable server is turned off, buried and disconnected!
Even after the data breach, the Ashley Madison website has pictures of padlocks and assurances of discretion. However, if you compare the value of the information to the user with the funds available to Ashley Madison to keep it secure, it doesn't add up. The fact that a user's email is "on the list" has potentially life changing consequences. At the very least, it puts their relationship and family at risk. Some people might say that they deserve that, although for the purposes of this post, we are not making moral judgements and just considering the relative value of information in different contexts. However, most people would be concerned about those users who have listed gay preferences and are therefore exposed to physical danger in the countries where they live (as reported in the same Independent article.)
Of course, if you live in the wrong country, there are all sorts of lists that might get you into danger. Political activism in repressive countries is one of the things that Tor (The Onion Router) was invented for, although it's better known in the mainstream media for facilitating unsavoury transactions on the "dark web". Data security is not the same as anonymity and, in the case of paid-for services, anonymity is only an option if you can pay by Bitcoin.
Lesson 2 - users should consider how damaging a piece of information would be if revealed. This is really a variation on lesson 1, but with an emphasis on risk management. Because we mediate an increasing proportion of our lives via the Internet, there is more and more information that could potentially be taken out of context. This might be a youthful indiscretion posted on social media and picked up by a potential employer. It may be photos intended only for your partner. It may be that you are on a list of activists or a site like Ashley Madison. Most people would not want any of these things shared, but users can be naively trusting. You need to ask whether the protection of that information will be given the same priority as you would give it and, given the persistent nature of digital information, for how long.
The Mumsnet Data Breach provides an interesting contrast. Although users may have been inconvenienced by the breach, there is nothing on Mumsnet that anyone would be ashamed to own up to, or at least is not in (semi) public view already. From the reports, the only valuable information that seems to have been revealed from Mumsnet is personal details such as user email / password combinations and some postcodes. As Mumsnet have reset all their passwords, this only becomes a problem for users who reuse the same password on many sites. Unfortunately a depressing number of people do this and are vulnerable to breaches and phishing.
Lesson 3 – use a different password for each site. If you can’t remember that many passwords, append your password with some letters from the site name e.g. “passwordMU” (by the way “password” should not be used as a password!) This approach will stop automatic bots from reusing your password on other sites. Alternatively, use the browser function to store passwords. I would recommend Firefox, as Firefox Sync allows you to share passwords across several systems using a “zero knowledge” design, meaning that their servers can never know your passwords (even if hacked.)
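To make the site-suffix idea concrete, here is a tiny Python sketch of the scheme described above. The base password and site names are hypothetical, and this is an illustration of the idea rather than a security recommendation (a proper password manager is far stronger):

```python
def site_password(base: str, site: str) -> str:
    """Derive a per-site password by appending the first two letters
    of the site's name, upper-cased - the scheme described above."""
    name = site.lower()
    if name.startswith("www."):
        name = name[4:]          # drop a leading "www."
    first_label = name.split(".")[0]  # e.g. "mumsnet" from "mumsnet.com"
    return base + first_label[:2].upper()

print(site_password("correcthorse", "mumsnet.com"))  # correcthorseMU
print(site_password("correcthorse", "spotify.com"))  # correcthorseSP
```

Note that a human attacker who sees one of these passwords can guess the pattern; it only defeats automated credential-stuffing bots, which is exactly the limited claim made above.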
I haven’t talked about banking or financial websites and apps so far. From a user’s point of view (at least for the time being) the risk is more about inconvenience than loss of funds. The banks are still bearing the losses from data breaches to keep consumers confident in on-line banking. To be fair to the banks, they are also improving on-line security, with two factor authentication as standard for most on-line banking systems.
Lesson 4 - Email addresses are not secure identifiers. As email addresses are public, it’s quite easy to “borrow” email addresses. Spammers do this all the time as real email addresses stand more chance of traversing spam filters, especially if they are previously known to the intended recipient. There are reports that some of the email addresses on the Ashley Madison list were not put there by their legitimate owners. Of course, they would say that wouldn’t they! However, I am inclined to be sympathetic to such claims as Ashley Madison did not require emails to be verified and their “freemium” model is likely to attract “spam” profiles. These may be to initiate “Nigerian” scams, build botnets, etc.
Lesson 5 - This point is well made by Paul Mason in the Guardian and is about the value of aggregated data. The examples of passwords and specific data points (“this user is an adulterer”) are easy to see. What is less obvious is how seemingly innocuous data (location, buying patterns, etc) can be combined to make predictions about users and gather intelligence. On one level this is just creepy. For example, predicting women are pregnant before they know themselves. However, given what we know about the power of loyalty cards, it is more than likely that harvesting such rich data will give huge insight into our behaviour and intentions, conscious and unconscious.
We are moving towards a world of “total information awareness” - in fact, the name of a post-9/11 spying programme but nicely descriptive. Although recent events have highlighted the risks, there could be many positive sides. For example, your doctor could call you to say that you might be ill, rather than the other way around. However, we should go into this brave new world with our eyes open.
Wednesday, June 3. 2015
As I can’t get along to InfoSec15 this year, I thought I would get a flavour of what’s going on by pointing our experimental text mining tool in that direction. See the graph below. Bruce Schneier and John McAfee seem to have the influence they deserve, followed by BSOD (is Windows 10 here already?) and “trashier tearaway”, which seems to be a reference to an article in The Register about the perils of allowing BYO-IoT-D (Bring Your Own Internet of Things Device!) into secure networks. Also, full marks to whoever thought up “malvertising”!
Hope to get along next year and find out what’s going on in person.
(Apologies for my earlier, half-formed draft of this post – now deleted – just proves I really am too old to multitask!)
Thursday, March 26. 2015
An interesting day at the SCTE Spring Lecture yesterday – thanks to those speaking and organising the event. The headline topic was DOCSIS 3.1, which is the next evolution of cable modem technology. I have spent the last few years working with DSL-based ISPs, so it was good to find out what’s been going on with cable.
DOCSIS 3.1 updates cable modem technology by increasing the potential spectrum available and improving the efficiency of its spectrum use. It also opens the door to better integration between the TV and data sides of cable technology, as well as standardising platform management interfaces. However, it seems the most important thing for cable companies is to get more bits through their existing infrastructure. This is good for the companies and the consumer.
Like any infrastructure business, telcos and cable cos want to squeeze their assets as long and as much as possible. If we had invented broadband access as a green field technology, we would have laid fibre to every home. Of course, digging trenches is expensive and engineering should be the art of the possible. In the telco world, the DSL family of standards was developed to squeeze data over the twisted pairs originally designed for voice. (A quick historical note - ADSL was originally developed to carry video on demand over ATM, back in those seemingly distant days when the Internet was a curiosity for geeks and academics.)
A few years later, in the late 90’s, DOCSIS was developed to overlay data on cable TV networks. Because cable TV uses co-axial cable rather than twisted pairs, this was always an easier task. Cable does have a disadvantage because the cable is shared by many customers and therefore the bandwidth has to be shared. This is known as the “contention ratio”, but DSL suffers from shared resources as data goes deeper into the network and this is really the point of the Internet i.e. a shared medium that can be used by everyone intermittently.
Cable companies have dealt with contention and increasing demand for bandwidth by -
• allocating more bandwidth to broadband data
• improving efficiency (bits/hertz) by upgrading the technology to allow better performance over the same network.
• improving the network quality to reduce noise and so improve bits/hertz (also known as the constellation size.)
• segmenting the network to reduce the number of customers “sharing” each data feed.
From the mundane exercise of sending technicians to tighten connectors (improving network quality) to the high-tech of advanced line coding (new technology), these approaches all have costs and benefits and so it becomes one side of a business case. The other side is the competitive environment. In the UK, Virgin Media are offering 152Mbit/s as their top tier and one has to assume that this is pitched so as to outstrip anything BT can do using DSL and twisted pair! In Europe, telcos are using Fibre to the Home (FTTH) to reach Gigabit speeds and the cable operators are responding. DOCSIS 3.1 will make this much easier for them.
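As a back-of-the-envelope sketch of how the levers in the list above interact, here is a toy Python model. The spectrum, spectral efficiency and subscriber counts are illustrative numbers I have made up, not figures for any real operator:

```python
def per_subscriber_mbps(spectrum_mhz: float, bits_per_hz: float,
                        subscribers: int) -> float:
    """Shared downstream capacity divided among subscribers on one segment.
    MHz * bits/Hz gives Mbit/s of raw capacity, ignoring all overheads."""
    capacity_mbps = spectrum_mhz * bits_per_hz
    return capacity_mbps / subscribers

# Baseline: 100 MHz of data spectrum, ~6 bits/Hz, 500 homes on a segment.
base = per_subscriber_mbps(100, 6, 500)

# Pull all four levers: more spectrum, denser constellation (better
# network quality permitting), and a split segment with fewer homes.
upgraded = per_subscriber_mbps(200, 10, 250)

print(base, upgraded)  # 1.2 8.0
```

The point of the toy model is that the levers multiply together, which is why operators pull several at once rather than relying on any single upgrade.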
As an aside, a question was raised about the theoretical limit to the amount of bandwidth a human can consume. We didn’t get a good answer, which was fair enough as it is really a question that involves cognitive psychology. However, it did bring to mind a review I read many years ago for a V.32 (9.6K) modem that said something to the effect that it was all very clever, but no one can type that fast!
Sunday, September 15. 2013
Bruce Schneier’s Cryptogram newsletter is always a good read and this month’s is especially good. It mainly covers the “Snowden” revelations about the NSA’s on-line surveillance. These are a few of the points that I found significant, but I would recommend the original at https://www.schneier.com/crypto-gram-1309.html
Sunday, April 21. 2013
Broadsight Hierarchy of Internet Needs (after Maslow)
This week, I was allowed out of the office for a trip to the Spring Lecture day organized by the Society of Cable Telephony Engineers (SCTE.) It was an eclectic mix of lectures, although I found a theme emerging about the economics of bandwidth. If you are interested in the details of these lectures (and some on other topics), you can see videos of the lectures at http://tv.theiet.org (and search for “The Society for Broadband Professionals”)
Several of the lectures were on the use of new technology to extend the life of existing plant.
Mourad Veeneman, from Liberty Global, spoke about the upgrades to the DOCSIS standard with the 3.1 revision and the focus on getting higher data rates over cable to compete with the emerging technology of fibre to the home (FTTH.) Like many upgrades to transmission (line coding) standards, DOCSIS 3.1 seeks to take advantage of advances in modulation and error correction theory, supported by extra processing power at both ends, as well as making use of more spectrum on the cable.
Stephen Cooke of Genesis Technical Systems spoke about a new architecture to enable rural broadband by sharing the pairs in the bundle from the exchange (Telco Office) to the local distribution points (pedestals, for North American viewers.) By sharing the data bandwidth on the exchange backhaul, what little bandwidth might be available on a long line can be used more effectively by pooling at the distribution point and creating a resilient ring around the community being served. This doesn’t rely on a leap in coding technology, but on the insight that Internet connectivity is always contended, so in this case we might as well move the contention out to the edge of the network. A cable drop might have in the order of 20 to 30 pairs and, at the end of a long line, those pairs might support 500k each, but bundled together that’s still a respectable speed. The Genesis system also allows regeneration at intermediate points, so will usually do a lot better than that figure suggests.
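The arithmetic of pair bonding is simple enough to sketch in Python. The 20 to 30 pairs and roughly 500k per pair are the ballpark figures from the talk; this ignores any further gain from regeneration at intermediate points:

```python
def bonded_rate_kbps(pairs: int, per_pair_kbps: float) -> float:
    """Aggregate data rate from bonding copper pairs at a distribution
    point - a straight sum, before protocol overheads."""
    return pairs * per_pair_kbps

# A long rural line: each pair manages only ~500 kbit/s on its own,
# but a drop of 20-30 pairs bonds to a respectable aggregate.
for pairs in (20, 25, 30):
    mbps = bonded_rate_kbps(pairs, 500) / 1000
    print(f"{pairs} pairs -> {mbps} Mbit/s")
```

So even the pessimistic end of the range (20 pairs) pools to around 10 Mbit/s of shared capacity, which is the whole point of moving the contention out to the edge.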
We also had a lecture on the efficient development and operation of fibre in the network core. For me, these lectures illustrate the point that bandwidth must be provided in a way that is economic (and practical) for customers and makes a profit for operators. There is often a temptation to start a project with the idea that we should “do this right” and not be held back by “legacy.” However, engineering (in my view) should always be “the art of the possible” or perhaps “the art of the profitable.” This means we need to find clever fixes to maximise current plant, equipment and organisations. DSL has been a brilliant way of leveraging copper pairs that were designed for 3KHz voice and extending their life. DOCSIS has done a similar thing for cable networks, many of which were laid before the Internet had even routed a packet.
By the way, this principle can also be applied to software. It's tempting to start again "with a clean slate", but it can be dangerous to underestimate the value of old software that has matured under years of trial, error (and hopefully, fixing!) A nice illustration of this is the adoption of DOCSIS Provisioning of Ethernet PON (DPoE) by the EPON FTTH standard. This allows ISPs to use their old cable billing and OSS systems by connecting to a DPoE API that presents a "virtual cable modem." This was discussed in the talk by Jim Farmer from Aurora Networks.
So it’s clear that bandwidth is valuable and has to be provided economically. However, creating and capturing value are not the same thing and a couple of points illustrate this.
The first point, regarding the growing conflict between 4G mobile services (LTE in the jargon) and existing TV services, was discussed by Dipl. Ing. Carsten Engelke from ANGA. Both terrestrial TV and cable services run in bands that go up to the 700 and 800MHz range. This applies to digital and analogue TV (for those countries that still have analogue.) The mobile industry has persuaded governments (through the ITU World Radio Conference process) to clear the 800MHz band and, soon, the 700MHz band for extra 4G data bandwidth. This means that terrestrial TV has to move, which imposes costs on the broadcasters without benefits. To be fair, they have a privileged position in the first place and can make do with less bandwidth once they turn off analogue. A bigger problem is for the cable industry, whose business model is essentially to compete with DSL and FTTH. This means that they are relying on most or all of the bandwidth in their cables. Although it’s early days for LTE in the 800MHz band, there are examples of the LTE signal getting into the cable and blocking TV. Although it’s a signal from the mobile tower, it becomes a problem for the cable company. If the signal is leaking in through a customer's TV (due to unintended pick-up) there may not be much they can do about it. So the mobile company has captured some value at the cost of the cable company. I am not making judgements about companies here, as they are just using the bandwidth that has been licensed to them. However, it is worth making the point that externalities become complex and may become more so with increasing use of power line and cognitive radio.
The other example is the allocation of Wi-Fi frequencies, which have been a huge benefit to businesses and consumers, but risk becoming a neglected spectrum user. Wi-Fi has two main bands allocated to it. The most common is in the 2.4GHz range, with another in the 5GHz range. (At 5GHz, radio is starting to behave like infra-red and doesn’t like to go through walls, so it is less useful than 2.4GHz.) With only 3 non-overlapping channels in the 2.4GHz band, Wi-Fi is oversubscribed in most medium and high-density residential areas. It wouldn’t be hard to add another band to mitigate this problem and it would, arguably, be a significant benefit to users. However, no one appears to be pushing for this at the WRC and so it’s unlikely to happen. We could speculate that this is because residential Wi-Fi is not directly billable and so no one can capture the value and therefore have an interest in promoting it.
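The "3 non-overlapping channels" point can be checked with a little arithmetic. This Python sketch assumes the classic 2.4GHz channel plan (channel 1 centred at 2412MHz, 5MHz spacing) and a nominal 22MHz 802.11b/g channel width:

```python
def channel_center_mhz(channel: int) -> int:
    """Centre frequency of a 2.4GHz Wi-Fi channel (5 MHz spacing)."""
    return 2412 + (channel - 1) * 5

def overlaps(a: int, b: int, width_mhz: int = 22) -> bool:
    """Two channels overlap if their centres are closer than one
    channel width apart."""
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < width_mhz

# The classic non-overlapping set is 1, 6, 11: centres 25 MHz apart,
# just clear of the 22 MHz channel width. Adjacent channels collide.
print(overlaps(1, 6), overlaps(1, 2))  # False True
```

With only 5MHz between channel centres and 22MHz-wide channels, at most three channels (1, 6 and 11) fit without mutual interference, which is why dense residential areas saturate the band so quickly.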
To sum up - as bandwidth provision increases, we keep finding ways to use it, from GIFs to music/speech to video and now HD/4K video. I am sure we will carry on filling up the pipes, so thanks to the engineers who keep re-building them. This brings to mind a review I read (a long time ago) about the first 9600bit/s modem, which went something like “this is all very well, but no one can type that fast – so what’s the point?”
Friday, February 15. 2013
I spent an hour or so trying to get my son's iPad working with Sky Go. It worked for 11 months and then suddenly stopped for reasons that are not apparent to me or (so far) Sky's technical support people. While this is frustrating for me and my son (he has to resort to sharing the TV in the living room!) it is an example of how hard it is to reliably run TV services across the Internet to customer owned devices. (A bit like BYOD in the corporate world!)
To be fair to Sky, their customer service is no worse than their competitors' and they do try to "push the envelope" on new technologies, so there will be some pain from time to time. However, they have bundled Sky Go into their service packages, as they did in the past with Sky+ and entry level broadband, in an attempt to reduce churn and now they have to support it!
OTT (or "over the top") TV is the idea of delivering (mainly) TV content via the Internet rather than paying for your own distribution system like satellite, cable or terrestrial transmitters. YouTube and BBC iPlayer were early (and free) entrants into this sector. Netflix and Hulu are probably the best known pay providers in OTT and all the traditional operators are trying, or thinking about, getting into this market, either because they see it as "strategic" or as a defensive ploy.
OTT sounds like a free ride, as you don't have to buy network capacity to the home. However, the real cost of OTT is the need to make the service work with all the devices that you "support." This is not so bad if it's a "free" service and it's probably no coincidence that YouTube got going (and still largely is) free. People don't generally complain if they are not paying in the first place. However, when you start selling a service and say it will work on a list of devices, customers expect it to work or they will churn out.
In the old world of closed networks and devices, broadcast platforms (e.g. BSkyB, Virgin, UPC, DirecTV, DISH, etc) tested each new set top box (STB), firmware download or network update to death. And even then, they rolled it out slowly to "trialists" or "friendly customers" (most customers are unfriendly, you see) before unleashing it on the whole population. The reason they can do this process effectively is that everything is locked down - they know exactly what the hardware is, they know the versions of all the software components and they know the network environment. These factors are all reproduced as closely as possible in the test lab, so the test results are reasonably accurate. There are still surprises when new products are put into the real world - real people do things that were not anticipated or unexpected environmental factors cause problems e.g. certain fluorescent lights interfere with remote controls. These environmental factors are usually caught in trials, but scaling issues can still be missed until full deployment.
So, even with everything under control and nailed down, it's quite hard to catch all the bugs. In the OTT world, operators usually offer support for half a dozen devices. Doesn't sound like too many, right? Well, each device probably has half a dozen operating system versions in the field... and maybe several hardware versions. In the case of PCs in particular, there may be several web browsers and several user contexts. Apart from games consoles (which usually run one application/game at a time), there are almost infinite combinations of software that can co-exist on the device and may interfere with your content delivery app. Now multiply these factors together and you get a huge multidimensional matrix of possible environments that could be tested. Oh yes, I also forgot the local network context, which will change depending on your ISP and home router, etc.
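To see how quickly that matrix blows up, just multiply a few factor counts together. The numbers here are entirely illustrative (real counts vary by operator), but the combinatorial point stands:

```python
from math import prod

# Illustrative counts only - the real figures vary by operator and year.
factors = {
    "devices": 6,
    "os_versions": 6,
    "hardware_revisions": 3,
    "browsers": 4,
    "home_network_setups": 5,
}

combinations = prod(factors.values())
print(combinations)  # 2160
```

Even with these modest assumptions, "half a dozen supported devices" turns into thousands of distinct environments, which is why exhaustive testing is off the table and risk-based approaches (below) take over.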
Delivering OTT to many devices is a hard problem, but not insoluble. We are seeing the following approaches -
1. Aggregation. Hulu, Netflix and Lovefilm Instant are mainly content aggregators (House of Cards excepted!) and their value is in their ability to ingest content, transform it for target devices and manage that process. Apple is a special case of an aggregator as they own the delivery chain down to the device (and have the customers' credit card details!) Amazon are also trying to play here. So the cost of testing is spread over a large portfolio of content and therefore purchases.
2. Targeting of Devices This is the idea of focusing your effort on the top devices (by market share) until the cost of supporting a device (and/or web browser) is not justified by the likely revenues. Even the BBC, with their commitment to broad access, had to cull their supported devices.
3. Risk-Based Testing. We can't test the whole matrix of possible contexts, but we can test the most likely and/or representative ones. Good feedback from customer support is important here.
4. Customer Support. There will be many more customers who can't get the service to work than in the traditional model of a set top box supplied by the operator, and customer support needs to reflect this. We see the use of forums to encourage users to help other users, whether for kudos, discounts or just goodwill.
The increasing maturity of software stacks also helps in this process. APIs that really do abstract the hardware and lower layers and software that traps errors make life much more predictable for application developers and testers.
The final twist on this is DRM. It is hard enough to make an end to end system work "in the clear." DRM adds a parallel delivery chain. Not only do I need the content, I also need the licence and decryption keys. These have to be delivered in sync to make it all work. That would be hard enough, but DRM is usually designed to work as a "black box." This is (understandably) to stop people (hackers) looking in and defeating it. However, the effect is that we often end up without a meaningful error message when things break. (My problem with Sky appears to be DRM related. There is no error message and the "clear" promo video plays.) On top of this, DRM is designed to be sensitive to unexpected changes in the environment as these might be hacking attempts. For example, the system clock going out of tolerance or the presence of an unknown/suspicious app on your device. Again, it usually won't say what it doesn't like as this information can be useful to hackers.
So far, I haven't mentioned UltraViolet, which is a Hollywood sponsored attempt to allow a "buy once, play anywhere" model of DRM. It also defines a common format for the content itself. I think it's unlikely that a common format will stick, as the format is driven by the constraints of the user device and, even if we can agree now, new devices will come along with new constraints and capabilities. The other problem with UltraViolet is the absence of Apple, who would say that they have already solved this problem (if you buy Apple!) So UltraViolet becomes another row (or rows) in the matrix!
So, what are the conclusions? Basically, OTT is hard and we should expect to see a few aggregators emerge, either as brands or as white label providers (e.g. KIT Digital.) These aggregators will probably be the names mentioned already, as scale is the key to solving this problem. The more customers you have, the cheaper testing and development is per customer.
By the way, my spell checker keeps trying to change "aggregators" to "aggressors", which is probably how it feels if you are an incumbent operator!
Friday, June 26. 2009
After a hard day's work at Broadsight Towers, some of the team decided to let our hair down last night by attending a lecture on Web Oriented Architectures at the Institute for Engineering and Technology (IET) in London. It was given by Mark Edgington. Here’s our quick précis, with apologies to Mark!...
The basic premise was that Service Oriented Architecture (SOA) is a heavyweight architecture that is required for enterprise solutions whereas web oriented architecture (WOA) is a lighter weight solution that covers most of the use cases. Although web based architecture is not a formal specification, there is a significant amount of custom and practice. Because of this popularity and also because it’s a simple architecture, it has good inter-operability. Think of all the bots and clients talking to Twitter!
SOA is appropriate for applications that need security, transactional integrity and all those good enterprise grade things. They tend to be internal to an organization and this is just as well, because components from different vendors are often not interoperable.
Don't be religious! The SOA and WOA approaches both have pros and cons. Choose the one that is fit for your purpose. This might mean using both approaches on the same project for different users or purposes.
It seems to us that many complex standards emerge, then after a while the community realize that they can get 80% of the benefit from 20% of the complexity and invent a cut down standard for "everyday" use – X.500 and LDAP, X.400 and SMTP, ATM and TCP/IP.
Our only quibble with the lecture was that SOA was presented as synonymous with the SOAP/WSDL family of standards and WOA with REST. We find that in practice the terms refer to the business and technical models at a more abstract level.
The IET are running another series next year, starting in September 2009. Have a look at IET Events if you are in London.
Creative Commons Licence
Original content in this work is licensed under a Creative Commons License