Archive for 2015
Skipping the Hype Cycle
Sipping from the firehose is a big problem in the tech industries. Every time I turn around there’s yet another new technology to make everything better. If you can’t quote rule 11, you need to learn it by heart. Now. I’ll wait right here while you do the memorization thing.
So — why the hype cycle? Essentially, it comes down to this: we’re emotionally driven creatures. Advertisers have known this for years; to wit —
Mark the last words in that quote: it’s about selling an experience, rather than a product. Then ask yourself this simple question: what — really — is the difference between a unified, hyperconverged, high performance cloud platform and a mainframe? Or, to go back a few years, what is really the difference between a layer 3 switch and a router? The answer really lies in one word.
Experience.
Not experience in the sense of how much you have, but the experience of the buyer. What does it feel like to buy this product? Does it make you feel smart (this is what everyone else is doing)? Does it make you feel comfortable (nobody ever gets fired for buying IBM/Cisco/etc.)? Does it make you feel like you’re on the cutting edge? Does it make you, well, just, feel?
We tend to think of the engineering market as somehow detached from this emotional cycle. After all, we’re not talking about the latest in fashion (particularly in a world where anti-fashion is the new fashion, if you can grok that one). We’re not talking about cars. We’re talking about computers, and routers, and switches, and services.
I can just hear you saying, “I’m an engineer! I don’t let my emotions get in the way!” Yeah, right. If there’s anything I’ve learned from my study of philosophy over the last several years, it’s that those who most believe they are least susceptible to emotional manipulation are actually the most likely to fall into focusing on emotions and what advertisers call “the experience.”
Philosophy is a long story of balanced systems being decried for their irrational roots, followed by someone trying to provide rational roots, followed by someone realizing the rational roots are bunk, followed by the collapse of the rational roots into pure unadulterated mysticism with no rational component. If it weren’t so, we might have actually learned something through philosophy by now.
Our rational approach to the problem at hand is often what makes us vulnerable to the emotional manipulation of the hype cycle.
Okay — so how do we get off the hype cycle? To go back to the beginning, rule 11 plays a major role. Let’s look at some basic rules.
First, learn to recognize when technology is being repeated. This means learning the theory (hard as that might be), and learning to see things as a set of problems and solutions that span many generations. Getting there is going to take reshaping some models, and that’s going to hurt. Just keep reading my blog; I have a whole series of posts in these areas (see what I did there?).
Second, stop looking for the technology that’s going to solve your business problems. There isn’t one.
Third, stop hiring and following people based on the hype cycle. Engineering sense is a real thing, and it’s an important thing, even if the HR system can’t see it. Learn it, develop it, hire it, teach it.
Finally, learn to balance the emotional in the buy with the rational. Look at the underlying problems, look at the available models, and match them together. In other words, be an engineer, not a consumer. There’s a difference for a reason.
Big Data for Social Engineering
This is a white hat tool, of course, a form of social engineering penetration testing. Two points of interest, though.
First, you can be pretty certain hackers are already using this sort of tool today to find the right person to contact, how to contact them, and to discover the things they know people will respond to. The rule of thumb you should keep in mind is this: at least 80% of the time, hackers are already using the tools researchers come up with to do penetration testing. Remember all those fake people inhabiting the world of Twitter, Facebook, and the like? Some of them might not be just another click farm — some of them might be clickbait for hackers to find out who you are.
Second, this can teach us something about the human psyche and our ability to be hacked. The particular ploy described in the article is one straight out of Obedience to Authority, pitting someone’s knowledge of what is right and wrong against someone they think is an authority figure. Though not quite as stark as the electric shock machine, the idea that we can be tricked in this way should be sobering.
To put it in another context, isn’t this just advertising in the big data age? Find out about the person, who they respect, and what they believe, and then use those as vulnerabilities to get them to behave a certain way, or believe a certain thing? Something more serious to think about than the latest switch’s port count, isn’t it?
Are Walled Gardens the Future of the ‘net?
From the very beginning, the walled garden has been the opposite of what those who work on and around the ‘net have wanted. The IETF, and the protocols it has developed over the years, have always been about free and open access to anyone who wants to learn networking, coding, or even just the latest baseball score for their favorite team. Of course, a number of tech giants (remember Compuserve?) fought to build walled gardens using the tools of the Internet. A user would dial into a modem pool, and access the world through a small portal that would provide a consistent and controlled interface for their entire experience, from email to news to chat to…
The same battle rages in recent times, as well. Phone makers, mobile providers, and even social media networks would desperately like to make your only interface into the global Internet a single O/S or app. From this one app, you’ll be able to talk to your friends, pay your bills, save all your data, and, in general, live your entire life. And for those times when you can’t get to what you want outside the app or social network, they will gladly go with you to all those other places you want to go, providing identity services, and protecting you from all the big bad wolves out on the “real Internet.”
The question comes down to this: which model will win? An open DNS system which takes you to open interconnection points where servers are configured with services (most commonly a web page, but that’s not the only service out there, after all)? Or will the device makers, providers, or social media folks win, creating a perfectly closed environment where, of course, you can “still send email to your friends on AOL from your Compuserve account, so long as you don’t mind losing all that stuff we’ve shoved into our mail application…”
Being a network engineer, I’ve always believed that somehow data will out, and people will find the open Internet a more fruitful place to be. But I’m starting to wonder. Will people trade the real, open Internet for security, the perception of security, and the convenience of the mobile phone? I’m beginning to think, the nature of humans being what it is, the answer might actually be “yes.”
Why the rant this week, and not some other week? Because I ran across a story about a service in China called WeChat (or Weixin 微信) that emulates, in fine fashion, Compuserve in the days of yore. The service, in fact, is essentially a portal that allows various vendors and people to install apps within the app, which can be used on any cell phone anyplace to do everything from hailing a cab to paying your water bill. The draw, for the folks developing the app within the app, is that they develop for the one platform, and they reach people on every possible type of phone. The draw, for the people using the app, is they get a familiar user experience for everything they do, even things they don’t do very often. The draw, for the app platform builder, is that they’re making some seven times their nearest competitor per user by leveraging fees on the various transactions taking place on their platform.
It seems like a win/win/win, except for that nagging little voice in the back of my head whispering walled garden.
But what’s so wrong with the walled garden, after all? Isn’t it a good thing that technology is getting to the point where we can focus on consuming, rather than creating or building? (In fact, the point of the blog behind the post pointed to above is that the cell phone is the center of the world — again, most people are interested in consuming systems and information, not in creating them.) I’ve heard it said before that this isn’t such a bad thing. It’s like the car. People first built cars, then they tolerated them, now they consume them. Very few people even know how an internal combustion engine works, much less what anything beyond the stereo system actually does (other than a few fanatics, that is). The world of cars has moved from engineering cars to using them.
Maybe. But somehow I still think we’re losing something in the walled garden. Of course we’re losing a bit of freedom, but that’s not even what I’m thinking about here. Let me put the problem this way.
Moving from engineering cars to driving them can be seen as a good thing. After all, the less time I spend thinking about building cars, the more time I have to do something else with my mind, like reading Plato or Plantinga. I can listen to a podcast rather than listening to the engine or brakes for signs of some new problem. But are we facing the same thing in the world of technology?
When I use a cell phone to consume information, rather than create it, what am I freeing my mind up to do? The answer to this question is the one that really makes me worry about the future.
Liskov Substitution and Modularity in Network Design
Furthering the thoughts I’ve put into the forthcoming book on network complexity…
One of the hardest things for designers to wrap their heads around is the concept of unintended consequences. One of the definitional points of complexity in any design is the problem of “push button on right side, weird thing happens over on the left side, and there’s no apparent connection between the two.” This is often just a result of the complexity problem in its base form — the unsolvable triangle (fast/cheap/quality — choose two). The problem is that we often don’t see the third leg of the triangle.
The Liskov substitution principle is one of the mechanisms coders use to manage complexity in object oriented design. The general idea is this: suppose I build an object that describes rectangles. This object can hold the width and the height of the rectangle, and it can return the area of the rectangle. Now, assume I build another object called “square” that overloads the rectangle object, but forces the width and height to be the same (a square is a type of rectangle that has all equal sides, after all). This all seems perfectly normal, right?
Now let’s say I do this:
- declare a new square object
- set the width to 10
- set the height to 5
- read the area
What’s the answer going to be? Most likely 25 — because the order of operations set the height after the width, and internally the object sets the width and height to be equal, so the last value input into either field wins.
What’s the problem? Isn’t this what I should expect? The confusion is this — the square class is based on the rectangle class, so which behavior wins? But the result is pushing a button over here, and ending up with an unexpected result over there. Taking this one step further, what if you modified the rectangle class to include depth, and then added a function that returns volume? A user might expect the square class to represent a perfectly formed cube (all sides equal), based on its behavior in the past — but that’s not what is going to happen. The solution, from a coding perspective, is to build a new class that underlies both the square and the rectangle — to find a more fundamental construct, and use that as a foundation.
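To make the violation concrete, here’s a minimal sketch in Python; the class and function names are mine, chosen purely for illustration, not taken from any particular codebase:

```python
# A minimal sketch of the rectangle/square problem described above.
# All names here are illustrative.

class Rectangle:
    def __init__(self, width=0, height=0):
        self._width = width
        self._height = height

    def set_width(self, width):
        self._width = width

    def set_height(self, height):
        self._height = height

    def area(self):
        return self._width * self._height


class Square(Rectangle):
    # A square "is a" rectangle, so inheriting seems natural, but forcing
    # width and height to stay equal changes the behavior callers rely on.
    def set_width(self, width):
        self._width = width
        self._height = width

    def set_height(self, height):
        self._width = height
        self._height = height


def stretch(shape: Rectangle):
    # Written against Rectangle; the caller expects an area of 50 afterwards.
    shape.set_width(10)
    shape.set_height(5)
    return shape.area()


print(stretch(Rectangle()))  # 50, as expected
print(stretch(Square()))     # 25: the substitution silently broke the caller
```

Run it and the second call prints 25 rather than the 50 a caller of the rectangle class has every right to expect: the “push a button here, something odd happens over there” problem in miniature.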
In general, you want to find a foundation which will not change no matter what you build on it — in other words, you want to find a foundation that, when substituted for another foundation in the future, will not modify the objects sitting on top of the foundation.
Hopefully, you’ve tracked me this far. I know this is a bit abstract, but it comes back to network design in an important way. The simplest place to see this is in the data center, where you have an underlay and an overlay. To apply Liskov’s substitution principle here, you could say, “I want to build a physical underlay that will allow me to change it in the future without impacting the overlay.” Or, “I want to be able to change the overlay without impacting how the applications run on the fabric.” Now — take this concept and apply it to the entire network, wide area to data center fabric.
You should always strive to build a physical infrastructure that can be replaced without impacting the control plane. You should also strive to build a control plane that can be replaced without impacting the operation of the applications running on the network. Just like you should be able to replace the physical layer under IP, and not impact the operation of TCP on top in any meaningful way.
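As a loose analogy only (the class names below are hypothetical, and this is a design sketch rather than anything resembling a real overlay), the same substitution idea looks like this in code: the overlay is written against a narrow, stable contract, so the underlay beneath it can be swapped without touching anything above it.

```python
# Illustrative only: the overlay depends on a narrow interface, not on any
# particular underlay implementation, so the foundation can be replaced.

from abc import ABC, abstractmethod


class Underlay(ABC):
    @abstractmethod
    def deliver(self, src: str, dst: str, payload: bytes) -> None:
        ...


class VlanFabric(Underlay):
    def deliver(self, src, dst, payload):
        print(f"[vlan fabric] {src} -> {dst}: {len(payload)} bytes")


class IpFabric(Underlay):
    def deliver(self, src, dst, payload):
        print(f"[ip fabric] {src} -> {dst}: {len(payload)} bytes")


class Overlay:
    """Knows nothing about the underlay beyond the deliver() contract."""

    def __init__(self, underlay: Underlay):
        self.underlay = underlay

    def send(self, src: str, dst: str, payload: bytes):
        self.underlay.deliver(src, dst, payload)


# Swapping the foundation does not change the overlay or the "application."
for fabric in (VlanFabric(), IpFabric()):
    Overlay(fabric).send("app-a", "app-b", b"hello")
```

The point isn’t the code itself, but the shape of the dependency: everything above the line only knows about deliver(), so replacing the foundation doesn’t ripple upward.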
Now — the real world is always messier than the virtual worlds we build in our minds. Abstractions are always going to leak, and the interaction surface between any pair of underlying and overlying layers is always going to be deeper and broader than you think when you first look at the problem. None of this negates the end goals, however. Keep the interaction surfaces in a design shallow and narrow, and think through “what happens if I replace this piece with a new one later on?”
Hierarchical and modular design, by the way, already operate on these sorts of principles (in theory). They’re just rules of thumb, or design patterns, laid on top of the more foundational concepts. The closer we get to the foundational principles in play, the more we can take this sort of thinking and apply it along every interaction surface in a design, and the more we can move from black art to science in designing networks that work.
Engineering Lessons, IPv6 Edition
Yes, we really are going to reach a point where the RIRs will run out of IPv4 addresses; the charts on Geoff’s blog make that plain.
Why am I thinking about this? Because I ran across a really good article by Geoff Huston over at potaroo about the state of the IPv4 address pool at APNIC. The article is a must read, so stop right here, right click on this link, open it in a new tab, read it, and then come back. I promise this blog isn’t going anyplace while you’re over on Geoff’s site. But my point isn’t to ring the alarm bells on the IPv4 situation. Rather, I’m more interested in how we got here in the first place. Specifically, why has it taken so long for the networking industry to adopt IPv6?
Inertia is a tempting answer, but I’m not certain I buy this as the sole reason for the lack of deployment. IPv6 was developed some fifteen years ago; since then we’ve deployed tons of new protocols, tons of new networking gear, and lots of other things. Remember what a cell phone looked like fifteen years ago? In fact, if we’d started fifteen years ago with simple dual mode devices, we could easily be fully deployed in IPv6 today. As it is, we’re really just starting now.
We didn’t see a need? Perhaps, but that’s difficult to maintain, as well. When IPv6 was originally developed (remember — fifteen years ago), we all knew there was an addressing problem. I suspect there’s another reason.
I suspect that IPv6, in its original form, tried to boil the ocean, and the result might have been too much change, too fast, for the networking community to handle in such a fundamental area of the stack. What engineering lessons might we draw from the long time scales around IPv6 deployment?
For those who weren’t in the industry those many years ago, there were several drivers behind IPv6 beyond just the need for more address space. For instance, the entire world exploded with “no more NATs.” In fact, many engineers, to this day, still dislike NATs, and see IPv6 as a “solution” to the NAT “problem.” Mailing lists roiled with long discussions about NAT, security by obscurity (still waiting for someone who strongly believes that obscurity is useless to step onto a modern battlefield with a state of the art armor system painted bright orange), and a thousand other topics. You see, ARP really isn’t all that efficient, so let’s do something a little different and create an entirely new neighbor discovery system. And then there’s that whole fragmentation issue we’ve been dealing with for IPv4 for all these years. And…
Part of the reason it’s taken so long to deploy IPv6, I think, is because it’s not just about expanding the address space. IPv6, for various reasons, has tried to address every potential failing ever found in IPv4.
Don’t miss my point here. The design and engineering decisions made for IPv6 are generally solid. But all of us — and I include myself here — tend to focus too much on building that practically perfect protocol, rather than building something that was “good enough,” along with stretchy spots where obvious change can be made in the future.
In this case, we might have passed over one specific question too easily — how easy will this be to deploy in the real world? I’m not saying there weren’t discussions around this very topic, but the general answer was, “we have fifteen years to deploy this stuff.” And, yet… Here we are fifteen years later, and we’re still trying to convince people to deploy it. Maybe a bit of honest reflection might be useful just about now.
I’m not saying we shouldn’t deploy IPv6. Rather, I’m saying we should try and take a lesson from this — a lesson in engineering process. We needed, and need, IPv6. We probably didn’t need the NAT wars. We needed, and need, IPv6. But we probably didn’t need the wars over fragmentation.
What we, as engineers, tend to do is to build solutions that are complete, total, self contained, and practically perfect. What we, as engineers, should do is build platforms that are flexible, usable, and can support a lot of different needs. Being a perfectionist isn’t just something you say during the interview to that one dumb question about your greatest weakness. Sometimes you — we, really — do need to learn to stop what we’re doing, take a look around, and ask — why are we doing this?
The Future of Network Engineering
Two different articles caught my attention this last week. They may not seem to be interrelated, but given my “pattern making mind,” I always seem to find connections. The first is an article from Network Computing discussing the future of network engineering skill sets.
Patrick Hubbard goes on to talk about the prediction John Chambers left in the room: that there would be major mergers, failures, and acquisitions in the next twenty years, leaving the IT industry a very different place. The takeaway? That individual engineers need to “up their game,” learning new technologies faster, hitting the books and the labs on a more regular basis. Given the view in the industry of Cisco as a “safe harbor” for IT skills, this is something of a hand grenade in the room, coming from Chambers at Cisco Live.
The second article predicts a hand grenade, as well, though of a different sort. This one is via SDN Central, and it relates to remarks made by Jennifer Granick.
If both of these forward looking thinkers are right, network engineers shouldn’t just be brushing up their skills. If the merging of companies and the move to all cloud, all the time, are right, then the network engineer as a generalist and the enterprise vendor are both going to go the way of the dodo bird. This isn’t about changing skill sets; it’s about learning to live in a world where the providers have consumed the vendor space pretty much completely, and there’s little left in the way of IT jobs other than working as a contract negotiator at a larger company using cloud services, or as an engineer at one of the cloud providers, or perhaps, like the automotive mechanic, as someone who focuses on using the tools provided by the manufacturer to repair problems in devices and circuits laid down by the “real engineers” who work for the provider.
When you put these two trends together, in fact, it sounds really pessimistic, doesn’t it?
A couple of thoughts.
First, technology will always change. Until it doesn’t, that is. Of course airline flight has changed over the years, but I would bet most really good airplane mechanics of thirty years ago would still recognize the physical components of an airplane built yesterday. Whether we would give them a chance to prove their worth as mechanics in today’s world is an open question (this is a serious culture problem in an engineering world that eats people), but whether they could, given the chance, learn and understand the pieces they don’t already know is hardly in question. It’s likely the IT world is on the cusp of the same leveling out just about every other technology has been through in the past. The real question will be “what’s next,” not “will there be a place, in the future, to work in IT that looks something like what we have today.”
Second, we are going through a centralization cycle right now. We tend to go through these, particularly in the virtual world. We centralize everything, then we decentralize, then we centralize again, and then (hopefully) we get sick of it and start thinking through the real problems, and real solutions. I don’t think we’re stuck on the “centralize! centralize! centralize!” treadmill forever. At least I hope not, for the same reasons articulated by Ms Granick. If we continue to centralize, the impact isn’t going to be the end of network engineering; it’s going to be the end of anything resembling real freedom in our world.
But when you spend too much time in the virtual world, you tend to forget there’s a real one out there. Meat space isn’t yet another playground, it’s the real thing. We tend to get caught up in the “soap opera” of day to day life in the tech world, where a year old technology is on the downside of the hype cycle (a two year old child is still barely beginning life, remember). This tends to lend a sense of urgency and despair that might not be warranted.
Are things going to change? Yes. Do we all need to read a bit more, study new stuff, and get better at our jobs? Yes.
At the same time, do we all need to start thinking in bigger terms, about the engineering space as a set of skills and people with human limits, rather than as a technology treadmill? Definitely — yes! We each need to balance the new and exciting against longer term understanding and growth in people skills.
In short, there’s a solid reason to learn, a solid reason to grow, but there’s no reason to panic.
Engineering Sense
Why didn’t they ask Evans?
For those who haven’t read the famous Agatha Christie novel, the entire point revolves around a man uttering these words just before dying. Who is Evans? What does this person know that can lead to the murderer of the man on the golf course? Bobby and Frankie, the heroes of the story, are led on one wild goose chase after another, until they finally discover it’s not what Evans knows but who Evans knows that really matters.
Okay… But this isn’t a blog about mysteries, it’s about engineering. What does Evans have to do with engineering? Troubleshooting, as Fish says, is often like working through a mystery novel. But I think the analogy can be carried farther than this. Engineering, even on the design side, is much like a mystery novel. It’s often the context of the question, or the context of the answer to the question, that solves the mystery. It’s Poirot straightening the items sitting on a mantelpiece twice, it’s the dog that didn’t bark, and it’s the funny footprints and the Sign of Four.
Just like the detective in a mystery novel, the engineer can only solve the problem if they can put an apparently nonsensical question into the right context. But what is the right context? The answer to this question is obviously the big variable. How do you know?
The key is a three step process.
First, know as many contexts as possible. This might seem simplistic, but it’s actually a hard answer wearing simple clothes. There’s only one way to know a lot of different contexts — to encounter them. There are a few different ways to encounter a context, of course — by studying technologies and problems you don’t already know about, by confronting problems you’ve not confronted before, by intentionally stretching out and learning where you can. Hence the importance of reading widely, taking on problems you don’t know how to solve, going after certifications (especially early in your career), taking on degrees, and other intentional learning paths.
The first step is, then, to reach outside your comfort zone and learn something new.
Second, learn how to turn these contexts into shorthand through abstraction and modeling. I talk a lot about models, I know, but they are a key point in learning how to be a good engineer. This is a skill that takes time, of course — in fact, it’s an almost physical skill. Many people don’t expect the physicalness of building virtual worlds when they first get into engineering — they somehow expect virtual solutions to be infinitely pliable, and think that the further you get from the physical world the more degrees of freedom you have. It’s actually the opposite — the first degree of freedom from the physical world brings a lot of new possibilities. The second, not so much. The third, almost none. There is a law of diminishing returns here, and it takes a long time to make this connection.
Third, learn how to match the contexts you’ve encountered in the past to the problem you face right now. This model matching process is a skill that takes time to develop. Much like the detective that knows just what sort of information to look for in order to produce the key to the problem, the engineer needs to take the wide array of learned context from the past, distilled into a shorthand, and apply them to the problem faced today.
This, in short, is the process of developing what I call an engineering sense — that “sixth sense” of the engineer that knows, on seeing a problem, just what sorts of solutions might be successfully applied. This is what you should strive to build through your engineering career. The ability to look at a problem and give a rough set of approaches that might produce a solution, the ability to choose a direction of investigation through which a problem can be solved — this is worth more than knowing fifteen CLIs and twenty programming languages, or the product lines of a thousand vendors.
If you develop a solid engineering sense, the answer to the question, “Why didn’t they ask Evans,” might baffle you, but you’ll at least know where to start looking.