Innovation Myths

Innovation has gained a sort of mystical aura in our world. Move fast and break stuff. We recognize and lionize innovators in just about every way possible. The result is a general attitude of innovate or die—if you cannot innovate, then you will not progress in your career or life. Maybe it’s time to take a step back and bust some of the innovation myths created by this near idolization of innovation.

You can’t innovate where you are. Reality: innovation is not tied to a particular place and time. “But I work for an enterprise that only uses vendor gear… Maybe if I worked for a vendor, or was deeply involved in open source…” Innovation isn’t just about building new products! You can innovate by designing a simpler network that meets business needs, or by working with your vendor on testing a potential new product. Ninety percent of innovation is just paying attention to problems, along with a sense of what is “too complex,” or where things might be easier.

You don’t work in open source or open standards? That’s not your company’s problem, that’s your problem. Get involved. It’s not just about protocols, anyway. What about certifications, training, and the many other areas of life in information technology? Just because you’re in IT doesn’t mean you have to only invent new technologies.

Innovation must be pursued—it doesn’t “just happen.” We often tell ourselves stories about innovation that imply it “is the kind of thing we can accomplish with a structured, linear process.” The truth is the process of innovation is unpredictable and messy. Why, then, do we tell innovation stories that sound so purposeful and linear?

That’s the nature of good storytelling. It takes a naturally scattered collection of moments and puts them neatly into a beginning, middle, and end. It smoothes out all the rough patches and makes a result seem inevitable from the start, despite whatever moments of uncertainty, panic, even despair we experienced along the way.

Innovation just happens. Either the inspiration just strikes, or it doesn’t, right? You’re just walking along one day and some really innovative idea just jumps out at you. You’re struck by lightning, as it were. This is the opposite of the previous myth, and just as wrong in the other direction.

Innovation requires patience. According to Keith’s Law, any externally obvious improvement in a product is really the result of a large number of smaller changes hidden within the abstraction of the system itself. Innovation is a series of discoveries over months and even years. Innovations are gradual, incremental, and collective—over time.

Innovation often involves combining existing components. If you don’t know what’s already in the field (and usefully adjacent fields), you won’t be able to innovate. Innovation, then, requires a lot of knowledge across a number of subject areas. You have to work to learn to innovate—you can’t fake this.

Innovation often involves a group of people, rather than lone actors. We often emphasize lone actors, but they rarely work alone. To innovate, you have to intentionally embed yourself in a community with a history of innovation, or build such a community yourself.

Innovation must take place in an environment where failure is seen as a good thing (at least you were trying) rather than a bad one.

Innovative ideas don’t need to be sold. Really? Then let’s look at Quibi, which “failed after only 7 months of operation and after having received $2 billion in backing from big industry players.” The idea might have been good, but it didn’t catch on. The idea that you can “build a better mousetrap” and “the world will beat a path to your door” just isn’t true, and it never has been.

The bottom line is… innovation does require a lot of hard work. You have to prepare your mind, learn to look for problems that can be solved in novel ways, be inquisitive enough to ask why and whether there is a better way, stubborn enough to keep trying, and confident enough to sell your innovation to others. But you can innovate where you are—to believe otherwise is a myth.

The Hedge 61: Pascal Thubert and the RAW Working Group

RAW is a new working group recently chartered by the IETF to work on:

…high reliability and availability for IP connectivity over a wireless medium. RAW extends the DetNet Working Group concepts to provide for high reliability and availability for an IP network utilizing scheduled wireless segments and other media, e.g., frequency/time-sharing physical media resources with stochastic traffic: IEEE Std. 802.15.4 timeslotted channel hopping (TSCH), 3GPP 5G ultra-reliable low latency communications (URLLC), IEEE 802.11ax/be, and L-band Digital Aeronautical Communications System (LDACS), etc. Similar to DetNet, RAW will stay abstract to the radio layers underneath, addressing the Layer 3 aspects in support of applications requiring high reliability and availability.


The Senior Trap

How do you become a “senior engineer”? It’s a question I’m asked quite often, actually, and one that deserves a better answer than the one I usually give. Charity recently answered the question in a roundabout way in a post discussing the “trap of the premature senior.” She’s responding to an email from someone who is considering leaving a job where they have worked themselves into a senior role. Her advice?

Quit!

This might seem counterintuitive, but it’s true. I really wanted to emphasize this one line—

There is a world of distance between being expert in this system and being an actual expert in your chosen craft. The second is seniority; the first is merely … familiarity.

Exactly! Knowing the CLI for one vendor’s gear, or even two vendors’ gear, is not nearly the same as understanding how BGP actually works. Quoting the layers in the OSI model is just not the same thing as being able to directly apply the RINA model to a real problem happening right now. You’re not going to gain an understanding of “the whole ball of wax” by staying in one place, or doing one thing, for the rest of your life.

If I have one piece of advice other than my standard two, which are read a lot (no, really, A LOT!!!!) and learn the fundamentals, it has to be this: do something else.

Charity says this is best done by changing jobs—but this is a lot harder in the networking world than it is in the coding world. There just aren’t as many network engineering jobs as there are coding jobs. So what can you do?

First, do make it a point to try to work for both vendors and operators throughout your career. These are different worlds—seriously. Second, even if you stay in the same place for a long time, try to move around within that company. For instance, I was at Cisco for sixteen years. During that time, I was in tech support, escalation, engineering, and finally sales (yes, sales). Since then, I’ve worked in a team primarily focused on research at an operator, in engineering at a different vendor, then in an operationally oriented team at a provider, then marketing, and now (technically) software product management. I’ve moved around a bit, to say the least, even though I’ve not been at a lot of different companies.

Even if you can’t move around a lot like this for whatever reason, always take advantage of opportunities to NOT be the smartest person in the room. Get involved in the IETF. Get involved in open source projects. Run a small conference. Teach at a local college. I know it’s easy to say “but this stuff doesn’t apply to the network I’m actually working on.” Yes, you’re right. And yet—that’s the point, isn’t it? You don’t expand your knowledge by only learning things that apply directly to the problem you need to solve right now.

Of course, if you’re not really interested in becoming a truly great network engineer, then you can just stay “senior” in a single place. But I’m guessing that if you’re reading this blog, you’re interested in becoming a truly great network engineer.

Pay attention to the difference between understanding things and just being familiar with them. The path to being great is always hard, it always involves learning, and it always involves a little risk.

Technologies that Didn’t: Asynchronous Transfer Mode

One of the common myths of the networking world is that there were no “real” networks before the early days of packet-based networks. As myths go, this is not even a very good myth; the world had very large-scale voice and data networks long before distributed routing, before packet-based switching, and before any of the packet protocols such as IP. I participated in replacing a large-scale voice and data network, including hundreds of inverse multiplexers that tied a personnel system together, in the mid-1980s. I also installed hundreds of terminal emulation cards in Zenith Z100 and Z150 systems in the same time frame to allow these computers to connect to mainframes and newer minicomputers on the campus.

All of these systems ran over circuit-switched networks, which simply means the two endpoints would set up a circuit before any data actually traveled over it. Packet-switched networks were seen as more efficient at the time because of the complexity of setting up these circuits, along with the massive waste of bandwidth from circuits that were always overprovisioned and underused.

The problem, at that time, with packet-based networks was the sheer overhead of switching packets. While frames of data could be switched in hardware, packets could not. Each packet could be a different length, and each packet carried an actual destination address rather than some sort of circuit identifier, or tag. Packet switching, however, was quickly becoming the go-to technology for a lot of problems because of its efficient use of network resources and its simplicity of operation.

Asynchronous Transfer Mode, or ATM, was widely seen as a compromise technology that would provide the best of circuit and packet switching in a single technology. Data would enter the network in the form of either packets or circuits. The data would then be broken up into fixed-size cells, which would be switched based on a fixed, label-based header. This would allow hardware to switch the cells in a way much like circuit switching, while retaining many of the advantages of a packet-switched network. In fact, ATM allowed circuit- and packet-switched paths to both be used in the same network.
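
To make the label idea concrete, here is a small sketch of how a switch might forward fixed-size cells by looking up a short label installed at circuit setup, rather than parsing a variable-length packet and a full destination address. It is hypothetical and heavily simplified; the header layout, field sizes, and names are illustrative, not a real ATM implementation.

```python
# Hypothetical sketch of label-based cell switching (not a real ATM
# implementation): every cell is a fixed 53 octets, and the switch forwards
# by looking up a short label (VPI/VCI) installed when the circuit was set
# up, rewriting the label hop by hop. Header layout is simplified.

CELL_SIZE = 53      # 5-octet header + 48-octet payload
PAYLOAD_SIZE = 48

class CellSwitch:
    def __init__(self):
        # (in_port, vpi, vci) -> (out_port, new_vpi, new_vci),
        # installed at circuit setup time
        self.table = {}

    def add_circuit(self, in_port, vpi, vci, out_port, new_vpi, new_vci):
        self.table[(in_port, vpi, vci)] = (out_port, new_vpi, new_vci)

    def forward(self, in_port, cell):
        assert len(cell) == CELL_SIZE
        # Only the fixed-position label is examined; the 48-octet payload
        # is opaque, so the lookup and rewrite are easy to do in hardware.
        vpi, vci = cell[0], cell[1]                      # simplified label
        out_port, new_vpi, new_vci = self.table[(in_port, vpi, vci)]
        return out_port, bytes([new_vpi, new_vci]) + cell[2:]

# One circuit from port 1 to port 3, label rewritten as the cell transits.
switch = CellSwitch()
switch.add_circuit(in_port=1, vpi=10, vci=42, out_port=3, new_vpi=7, new_vci=99)
cell = bytes([10, 42, 0, 0, 0]) + bytes(PAYLOAD_SIZE)    # header + payload
print(switch.forward(1, cell))
```

The add_circuit step is the circuit-like part of the story; the per-cell forward step is the simple, fixed-size lookup and rewrite that hardware can do at line rate.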

With all this goodness under one technical roof, why didn’t ATM take off? The charts from the usual prognosticators showed markets that were forever “up and to the right.”

The main culprit in the demise of ATM turned out to be the size of the cell. To support a good mix of voice and data traffic, the cell size was set to 53 octets, 48 of which carry payload. A 48-octet packet, then, fits into a single cell exactly. Larger packets, in theory, could be broken into multiple cells and still carried over the network with some level of efficiency. The promise of the future was ATM to the desktop, which would solve the cell-size overhead problem, since applications would generate streams pre-divided into correctly sized packets to use the fixed cell size efficiently.

The reality, however, was far different. The small cell size, combined with the large overhead of carrying a network layer header, the ATM header, and the lower layer data link header, caused ATM to be massively inefficient. Some providers at the time had research showing that while they were filling upwards of 80% of any given link’s bandwidth, the goodput, the amount of useful data actually delivered across the link, was less than 40% of the available bandwidth. On top of this, there were problems with out-of-order cells and reassembly, causing entire packets’ worth of data, spread across multiple cells, to be discarded. The end was clearly near when articles appeared in popular telecommunications journals comparing ATM to shredding the physical mail, attaching small headers to each resulting chad, and reassembling the original letters at the recipient’s end of the postal system. The same benefits were touted—being able to pack mail trucks more tightly, being able to carry a wider array of goods over a single service, etc.
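
To get a feel for the cell tax, here is some back-of-the-envelope arithmetic, assuming AAL5-style segmentation: the packet plus an 8-octet trailer is padded out to a multiple of 48 octets, and each 48-octet chunk rides in a 53-octet cell. The exact numbers depend on the adaptation layer, and this ignores the network layer headers, link framing, idle cells on provisioned circuits, and discarded partial packets that pushed measured goodput down much further.

```python
import math

# Back-of-the-envelope cell-tax arithmetic, assuming AAL5-style segmentation:
# the packet plus an 8-octet trailer is padded out to a multiple of 48 octets,
# and each 48-octet chunk rides in a 53-octet cell. Network layer headers,
# link framing, idle cells, and reassembly drops are all ignored, so real
# goodput on a loaded link would be lower still.

CELL = 53
PAYLOAD = 48
TRAILER = 8

def goodput_fraction(packet_octets: int) -> float:
    cells = math.ceil((packet_octets + TRAILER) / PAYLOAD)
    return packet_octets / (cells * CELL)

for size in (40, 64, 576, 1500):   # common IP packet sizes of the era
    print(f"{size:>5}-octet packet: "
          f"{goodput_fraction(size):.0%} of the link bits are useful data")
```

Even in this best case, a 40-octet packet wastes roughly a quarter of the link before any other overhead is counted.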

In the end, ATM to the desktop never materialized, and the inefficiencies of ATM on long-haul links doomed the technology to extinction.

Lessons learned? First, do not count on being able to support “a little inefficiency” while the ecosystem catches up to a big new idea. Either the system has immediate, measurable benefits, or it does not. If it does not, it is doomed from the first day of deployment. Second, do not try to solve all the problems in the world at once. Build something simple, use it, and then build it better over time. While we all hate being beta testers, sometimes real-world beta testing is the only way to know what the world really wants or needs. Third, up-and-to-the-right charts are easy to justify and draw. They are beautiful and impressive on glossy magazine pages, and in flashy presentations. But they should always be considered carefully. The underlying technology, and how it matches real-world needs, are more important than any amount of forecasting and hype.