The secret to fostering innovation is really quite simple: 1) hire the best and brightest minds with diverse perspectives and abilities; 2) create an environment that is challenging and stimulating and that encourages collaboration; 3) challenge the teams to ‘go big’ without fear of failure. This sounds easy to replicate; however, it takes tremendous management foresight and a willingness to ‘bet’ on an unforeseen future far different from the present, and towards which the path is initially unknown.
This has always been the strategy for innovation at Bell Labs. By challenging ourselves to think beyond the present and find solutions to seemingly insurmountable problems, we have made fundamental breakthroughs in science, mathematics and technology. We invented the transistor and the laser, discovered the ‘Big Bang’ microwave background radiation, defined the fundamental limits of any communication medium, created UNIX, C and C++, discovered new states of matter and invented the CCD (the device that is the basis of your mobile phone camera). Bell Labs researchers have won 8 Nobel Prizes, 2 Turing Awards, 2 Emmys, a Grammy, an Oscar and many U.S. National Medals of Science.
Recently, we extended this model in a couple of important ways. First, we increased our collaboration with innovators outside the company – with open source communities, industrial partners and individuals – to increase the innovator pool and speed the resolution of problems. We established industrial R&D partnerships with Qualcomm, Intel and Freescale, among others, to increase the speed and scope of our innovation. Last year we launched the Bell Labs Prize, a competition to solicit ‘game-changing’ ideas in ICT and then have the finalists work with our Bell Labs ‘brain trust’ to make those ideas ‘real’.
Second, we’ve created internal start-ups to bring our new innovations to market in an expedited manner. For example, our very successful SDN solution is built around an internal venture called Nuage, which benefits from less corporate overhead, fewer mandatory practices and greater flexibility in recruitment and compensation – and hence the ability to move faster – while still enjoying the go-to-market and customer-relationship advantages of the corporate parent: a great recipe for innovation at speed and big market impact.
IoT’s Cascading Effect
The Internet of Things (IoT) will completely change the way communications networks scale and operate. Networks were originally built to carry voice, then data and now video. While the required bandwidth has driven a million-fold growth in capacity over the last 50 years, historically that growth was relatively predictable (40-50 percent every 12-18 months). In contrast, with the IoT the volume of devices connecting to networks in the next 5-10 years is staggering, and these devices will use the network in much less predictable ways. There may be 100 billion devices, but many will require very little network capacity – just enough for a ping or status update here and there. While such devices are not a significant bandwidth burden, they become a huge control plane burden, since each one must still be registered, addressed and managed by the network; this alone could bring networks to a grinding halt.
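As a rough sanity check (my arithmetic, not a figure from the text), the two growth numbers quoted above are mutually consistent: compounding at the midpoints, roughly 45 percent every 15 months, does reach about a million-fold over 50 years.

```python
# Rough check that ~45% growth every ~15 months (the midpoints of the
# figures quoted above) compounds to about a million-fold over 50 years.
periods = 50 * 12 / 15      # number of 15-month growth periods in 50 years
growth = 1.45 ** periods    # compound capacity growth factor
print(round(growth))        # on the order of a few million
```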
Communication from these devices will need to be managed and prioritized intelligently. For example, a sensor on a bridge or tunnel, on a person, in a car detecting an impending failure, or in a building detecting a gas leak will need top priority to send an alert, yet that alert may only be sent once in the life of the device. In contrast, a sensor indicating that a light bulb needs changing, or one reporting the current location of a truck, doesn’t need such high priority. There is also a class of machines that consume and generate large amounts of video-rich data (e.g. entertainment systems in cars, video cameras, or remote surgical tools) and that will need large bandwidth allocations to send and receive it. Building a network that can support these requirements is an enormous challenge, the likes of which we have rarely seen before.
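The prioritization described above could be sketched in a few lines. This is purely illustrative – the message fields, priority classes and the one-megabyte threshold are my assumptions, not an actual network design:

```python
# Illustrative sketch of the message-prioritization idea described above.
# Classes, fields and thresholds are hypothetical, not an actual scheme.
from dataclasses import dataclass

# Priority levels: lower number = dequeued first.
ALERT, BULK_VIDEO, ROUTINE = 0, 1, 2

@dataclass
class DeviceMessage:
    device_type: str   # e.g. "bridge_sensor", "light_bulb", "car_camera"
    is_alarm: bool     # an impending-failure or gas-leak alert
    bytes_needed: int  # payload size in bytes

def classify(msg: DeviceMessage) -> int:
    """Assign a (hypothetical) priority class to a device message."""
    if msg.is_alarm:
        return ALERT          # bridge/tunnel/gas-leak alerts go first
    if msg.bytes_needed > 1_000_000:
        return BULK_VIDEO     # video-rich machines need bandwidth, not urgency
    return ROUTINE            # light-bulb status, truck location, pings

msgs = [
    DeviceMessage("bridge_sensor", True, 64),
    DeviceMessage("car_camera", False, 5_000_000),
    DeviceMessage("light_bulb", False, 32),
]
# Sort so alerts are served before bulk-video and routine traffic.
ordered = sorted(msgs, key=classify)
```

The point of the sketch is that the scarce resource here is scheduling, not bandwidth: the alert message is tiny, yet it must jump the queue.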
Future of Wireless Networks
In our mobile society, our expectation is that we will have instant access to our information wherever we are – in the office, at home or on the ski slopes. Being mobile requires wireless connectivity, and wireless connectivity comes in two basic flavors: ‘best-effort nomadic’ and ‘quality-enhanced dynamic’. Different technologies serve these different needs. WiFi is low cost and has larger swathes of spectrum available in unlicensed bands, but it does not support full mobility or quality of service and is subject to interference, so it is well suited to ‘best-effort’ nomadic services – which might be as much as 75 percent of all network traffic. However, when you want to move continuously and have connectivity with a minimum guaranteed quality, you need cellular technologies, which can perform hand-offs between cells and use licensed spectrum that is not subject to interference. ‘Getting it right’ means using a combination of these two approaches, in a seamless way, based on the needs of the user.
For users this should be completely transparent and automated, and we are doing a great deal of work to make the process seamless. We think this will be a leading feature of the next generation of wireless networks, 5G, which will be less about a specific wireless technology (3G, WiFi, LTE) and more about taking advantage of the entire available network infrastructure and picking the best technology to deliver the expected quality at any given time.
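The selection logic the two paragraphs above describe can be sketched as a simple decision rule. This is a minimal illustration of the idea, not a 5G mechanism; the function name and its inputs are assumptions:

```python
# Minimal sketch of the access-selection idea: WiFi for best-effort
# nomadic use, cellular for quality-guaranteed mobile use.
# The function and its inputs are illustrative, not a 5G specification.

def select_access(moving: bool, needs_guaranteed_quality: bool,
                  wifi_available: bool) -> str:
    """Choose between unlicensed (WiFi) and licensed (cellular) access."""
    if moving or needs_guaranteed_quality:
        # Continuous mobility or guaranteed QoS -> cellular, which supports
        # hand-offs between cells on interference-free licensed spectrum.
        return "cellular"
    if wifi_available:
        # Stationary, best-effort traffic (perhaps ~75% of the total)
        # can ride low-cost unlicensed spectrum.
        return "wifi"
    return "cellular"  # fall back when no WiFi is in range
```

A real network would weigh signal quality, load and cost continuously rather than once, but the shape of the decision is the same.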
Capacity vs. Demand
The biggest challenge for telecom operators and their suppliers is to increase capacity to keep up with demand while maintaining the right economics. Two fundamental changes are required: dynamically scalable networks and a networked cloud infrastructure. This will demand network functions virtualization (NFV), to allow network functions to scale at the optimum cost point, and software-defined networking (SDN), to dynamically connect virtual instances to each other and to the underlying packet transport and switching infrastructure. Lastly, to get the desired performance in terms of latency and bandwidth, new ‘edge cloud’ hosting infrastructure will have to be created. In short, we will be moving from a static, tightly integrated, centralized services delivery model to a dynamically optimized, lightly integrated, distributed services delivery model for communications between both people and machines.
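The scaling half of this can be sketched very simply: with NFV, the number of running instances of a network function tracks offered load rather than being fixed in hardware. The per-instance capacity figure below is hypothetical:

```python
import math

# Sketch of the NFV scale-out idea: virtual network function (VNF)
# instances are added or removed as load changes, instead of provisioning
# fixed hardware for the peak. The 10 Gbps per-instance figure is assumed.

def instances_needed(load_gbps: float, per_instance_gbps: float = 10.0) -> int:
    """VNF instances required for the current load (always at least one)."""
    return max(1, math.ceil(load_gbps / per_instance_gbps))

# As load rises and falls across the day, capacity follows it.
daily_load = [5, 40, 95, 120, 30]                  # Gbps at sampled hours
plan = [instances_needed(l) for l in daily_load]   # -> [1, 4, 10, 12, 3]
```

The economic point is in the last line: a static deployment would have to carry 12 instances around the clock, while the dynamic one averages far fewer.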
The example I like to give is Wall Street. During the hours when the stock exchange is open, the bandwidth required to connect computer terminals, phones and trading systems is enormous, and it must be delivered with the lowest possible latency. However, when the markets close, this capacity is effectively idle and ‘wasted’. Imagine if that idle capacity could be re-purposed so that a hospital could perform remote surgical procedures, a business could update and analyze data records, or an operator could backhaul data traffic from an entertainment event. With the flexibility to reallocate bandwidth on demand, we will feel as if the network has ‘infinite capacity’ to support the continuous expansion of this ‘network of you’.
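The Wall Street example can be sketched as a time-of-day reallocation policy. The tenants, trading hours and 90/10 split below are illustrative assumptions, not real figures:

```python
# Illustrative sketch of reallocating idle capacity by time of day, per the
# Wall Street example above. Tenants, hours and the split are assumptions.

def allocate(hour: int, total_gbps: int = 100) -> dict:
    """Split a fixed pool of capacity between trading and other tenants."""
    market_open = 9 <= hour < 16   # hypothetical trading hours, 24h clock
    if market_open:
        # Low-latency trading gets nearly everything while markets are open.
        return {"trading": int(total_gbps * 0.9), "other": int(total_gbps * 0.1)}
    # After the close, the same capacity serves hospitals, batch analytics
    # and event backhaul instead of sitting idle.
    return {"trading": int(total_gbps * 0.1), "other": int(total_gbps * 0.9)}
```

The same physical pool is fully used around the clock, which is what makes the network feel as if it had ‘infinite capacity’.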