The battle for the edge
This document examines the role of “edge” devices that sit at the periphery of a telco’s network – products like mobile phones or broadband gateways that live in the user’s hand or home. Formerly called “terminals”, these devices are getting “smarter” as they gain ever-better chips and software. In particular, they are capable of absorbing many new functions and applications, and they permit the user or operator to install additional software at a later point in time.
In fact, there is strong evidence that “intelligence” always moves towards the edge of telecom networks, particularly when it can exploit the Internet and IP data connections. This has already been seen in PCs connected to fixed broadband, and in the shift from mainframes to client/server architectures in the enterprise. The trend is now becoming clearer in mobile, with the advent of the iPhone and other smartphones, as well as 3G-connected notebooks. Home networking boxes like set-tops, gaming consoles and gateways are further examples, and they too are becoming progressively more powerful.
This is all a consequence of Moore’s Law: as processors get faster and cheaper, there is a tendency for simple mass-market devices to gain more computing capability and take on new roles. Unsurprisingly, we therefore see a continued focus on the “edge” as a key battleground – who controls and harnesses that intelligence? Is it device vendors, operators, end users themselves, or 3rd-party application providers (“over-the-top players”, to use the derogatory slang term)? Is the control at a software, application or hardware level? Can operators deploy a device strategy that complements their network capabilities, to strengthen their position within the digital value chain and foster two-sided business models? Do developments like Android and femtocells help? Should the focus be on dedicated single-application devices, or continued attempts to control the design, OS or browser of multi-purpose products like PCs and smartphones?
Where’s the horsepower?
First, an illustration of the power of the edge.
If we go back five years, the average mobile phone had a single processor, probably an ARM7, clocking perhaps 30MHz. Much of this was used for the underlying radio and telephony functions, with a little “left over” for some basic applications and UI tools, like Java games.
Today, many higher-end devices have separate applications processors, and often graphics and other accelerators too. An iPhone has a 600MHz+ chip, and Toshiba recently announced one of the first devices with a 1GHz Qualcomm Snapdragon. Even midrange featurephones can have 200MHz+ to play with, most of which is actually usable for “cool stuff” rather than the radio. [note: 1EHz (Exahertz) = 1,000PHz (Petahertz) = 1,000,000THz (Terahertz) = 10^9 GHz (Gigahertz) = 10^12 MHz (Megahertz)] Now project forward another five years. The average device (in developed markets at least) will have 500MHz, with top-end devices at 2GHz+, especially if they are not phones but 3G-connected PCs or MIDs. (These numbers are simplified – in the real world there’s lots of complexity because of different sorts of chips like digital signal processors, graphics accelerators or multicore processors.) Set-top boxes, PVRs, game consoles and other CPE devices are growing smarter in parallel.
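The trajectory above – roughly 30MHz five years ago, 600MHz+ today, 2GHz+ five years out – is broadly what you get from a Moore’s-law-style doubling. A minimal sketch, assuming a doubling period of about two years (the doubling period and starting figures are illustrative assumptions, not data from this article):

```python
# Illustrative sketch: projecting device clock rates under an assumed
# Moore's-law-style doubling. The two-year doubling period is an assumption.
def project_clock_mhz(start_mhz: float, years: float,
                      doubling_period_years: float = 2.0) -> float:
    """Return the projected clock rate after `years` of exponential growth."""
    return start_mhz * 2 ** (years / doubling_period_years)

# Starting from a ~600 MHz high-end handset today and projecting five years:
print(f"today: {project_clock_mhz(600, 0):.0f} MHz")       # → today: 600 MHz
print(f"+5 years: {project_clock_mhz(600, 5):.0f} MHz")    # → +5 years: 3394 MHz
```

A five-year projection at this rate lands in the low gigahertz – consistent with the “top-end devices at 2GHz+” figure, and a reminder of how quickly the edge compounds.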
Now multiply by (say) 8 billion endpoints – mobile handsets, connected PCs, broadband modems, smart consumer electronics and so forth. In developed markets, people may well have 2-4 such devices each. That’s 4 Exahertz (EHz, 10^18 Hz) of application-capable computing power in people’s hands or home networks, without even considering ordinary PCs and “smart TVs”. And much – probably most – of that power will be uncontrolled by the operators, instead being the playground of user- or vendor-installed applications.
Even smart pipes are dumb in comparison
It’s tricky to calculate an equivalent figure for “the network”, but let’s take an approximation of 10 million network nodes (datapoint: there are 3 million cell sites worldwide), at a generous 5GHz each. That means there would be 50 Petahertz (PHz, 10^15 Hz) in the carrier cloud. In other words, about an 80th of the collective compute power of the edge.
Now clearly, it’s not quite as bad as that makes it sound – the network can obviously leverage intelligence in a few big control points in the core, like DPI boxes, as traffic funnels through them. But at the other end of the pipe is the Internet, with Google’s, Amazon’s and countless other companies’ servers and “cloud computing” infrastructures. Trying to calculate the aggregate computing power of the web isn’t easy either, but it’s likely to be in the Exahertz range too. Google alone is thought to have 0.5-1.0 million servers, for example.
So one thing is certain – the word “terminal” is obsolete. Whatever else happens, the pipe will inevitably become “dumber” (OK, less smart) than the edge, irrespective of smart Telco 2.0 platforms and 4G/NGN networks.
Now, add in all the cool new “web telco” companies (eComm 2009 was full of them) like BT/Ribbit, Voxeo, Jaduka, IfByPhone, Adhearsion and the Telco 2.0 wings of longtime infrastructure players like Broadsoft and Metaswitch (not to mention Skype and Google Voice), and the legacy carrier network platforms look even further disadvantaged.
Intelligent mobile devices tend to be especially hard to control, because they can typically connect to multiple networks – the operator’s cellular domain, public or private WiFi, Bluetooth, USB and so forth – which makes it easier for applications to “arbitrage” between them for access, content, services and price.