Archive for 2015
Thoughts on Certifications
Should you stack up certifications, or should you learn something new? To put the question a different way: should Ethan get his CCDE? A couple of posts filtered through to my RSS feed this week that seem worth responding to on the certification front. Let’s begin with the second question first. Ethan posted:
I think the first part of Ethan’s argument is valid and correct: there comes a point where you’ve wrung the value out of a certification (or certification path), and it’s time to move on. But how can you judge when that time has come? My thinking is based around this chart, taken from one of the first posts on this blog:
The point is to intentionally target the middle curve for where you want to be. To quote the second article that caught my interest this week:
In other words, you need to know where you’re going and what sort of knowledge you need to get there. For me, right now, that means a PhD in Philosophy — learning isn’t just a hobby, it’s a path to something else. Whether I eventually go into teaching part time, or just learn more about worldview, culture, life, research, and writing in ways that help me as an engineer, this is what I need to learn now. I’d like to develop, for instance, a certificate program in ethics in the engineering world that runs deeper than a few rules of thumb and some platitudes, or understand the intersection of Baconian Progressivism and technology in a way few people do (my proposed thesis, in fact, is in the area of privacy).
It’s the second part of Ethan’s case I tend to take issue with — the impression that the CCDE is a “service provider” or “large scale enterprise” test, and the implication that the test covers skills that aren’t going to be useful in the “average network.” I just don’t believe in the “large scale versus small scale” divide any longer, nor in the “service provider versus enterprise” divide. Networks are networks; interesting problems are interesting problems.
The CCDE is specifically focused on a set of skills that will allow you to do successful design no matter what technology you encounter. The technology is nothing more than a framework for the skill set. Learning MPLS is not the end goal of the CCDE. Rather, MPLS problems provide a useful framework into which to pour overlay design problems. Maybe you’ll never encounter an MPLS network in your life, but you will encounter VXLAN, GRE tunnels, IPsec SAs, and a host of other sorts of tunneling technologies. Tunnels are tunnels are tunnels are tunnels are tunnels.
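To make the point concrete, here’s a minimal sketch in Python (my own illustration, with deliberately simplified, non-wire-accurate header layouts): strip away the formats and every tunneling technology is the same operation.

```python
# A deliberately simplified sketch -- the header fields are illustrative,
# not wire-accurate. The point is the shared shape, not the formats.

def gre_encap(payload: bytes, outer_src: str, outer_dst: str) -> dict:
    # GRE: payload behind a GRE shim, carried directly in an outer IP header
    return {"outer": (outer_src, outer_dst),
            "shim": {"proto": "gre"},
            "payload": payload}

def vxlan_encap(payload: bytes, outer_src: str, outer_dst: str, vni: int) -> dict:
    # VXLAN: payload behind a VXLAN shim (with a VNI), carried in UDP/IP
    return {"outer": (outer_src, outer_dst),
            "shim": {"proto": "vxlan", "vni": vni},
            "payload": payload}

# The design questions are identical either way: where do the endpoints sit,
# what does the outer header hide from the core, and how is reachability
# for the inner addresses learned?
packet = vxlan_encap(b"inner ethernet frame", "10.0.0.1", "10.0.0.2", vni=5000)
```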
In fact, I would argue that if you must learn any and every technology “in detail” to be able to design to it, you’re not doing this design thing right. Much the same way I would argue that if you must know every CLI command in the book to be good at troubleshooting, you don’t really know troubleshooting. Now I think most engineers know this somewhere deep down in their hearts of ones and zeros, but we’re so quickly overwhelmed with the bits and bytes, the octets and IDs, that we tend to forget it.
The way I see it, the CCDE teaches something completely different from the CCIE.* I don’t really know if getting a certification in some other technology — unless it’s really a completely different technology area — is what I would consider “broadening your horizons” once you get past the CCIE (the point made in Network World). The value of the CCDE, from my perspective, is that it intentionally teaches something different from every other certification I know of. It steps outside the “technology certification” mold, and focuses on a skill set.
When you consider a certification, don’t think about the technology the certification represents, think about the skill set and where that fits into your “pattern of knowledge.” This is the point we often miss in our scramble to get that next certification. Or even the next degree. I’m not trying to beat up on Ethan here, just pointing out what I’ve always said here: we need to be intentional about what we learn, and why we’re learning it. Doing anything else is just wasting your time in the long run.
Now, to go back to the beginning: should Ethan get the CCDE? Of course I think the answer is yes — but I tend to be a little biased towards the program. Perhaps even a little too biased. In the end, I’m not the judge of where Ethan is going in life and what his goals are, so I don’t actually know — Ethan is the only person who really knows the answer to that question. The first part of being intentional about learning is to know where you’re heading.
* I might be a little sensitive on this point because the CCDE is my “baby,” and like most parents, I don’t think my baby is ugly. But I don’t really think that’s what’s going on here — I tend to be pretty good at being objective about the stuff I create, even to the point of calling it ugly before anyone else does.
Review: The Art of the Humble Inquiry
Humble Inquiry: The Gentle Art of Asking Instead of Telling
Edgar H. Schein
Edgar Schein says we have a cultural issue. We like to tell people what we think, rather than asking them what they’re trying to tell us. Overall, especially in the world of information technology, I tend to agree. To counter this problem, he suggests that we perfect the art of the humble inquiry — redirecting our thinking from the immediate solution that comes to mind, or even from the question that was asked, and trying to get to what the person we’re talking to is actually asking.
He gives numerous examples throughout the book; perhaps my favorite is of the person who stopped their car, while he was doing yard work, to ask for directions to a particular street. Rather than answering, he asked where they were trying to get to. They were, in fact, off course for their original plan, but he directed them down a different path that got them there faster than if they’d turned around and found their way back to that original path. This is a perfect example of answering a specific question with a larger question — an authentic request for information that turns the situation from one of frustration into one of getting to the original goal.
The author also provides a useful section on how to differentiate between four kinds of questions, among which is the humble inquiry. There are times when it makes sense to lead someone we’re interacting with through questions, and others when it’s important to gain more information rather than making a decision. What’s never okay is the “gotcha question,” a form of attack in the guise of a question. While the author doesn’t cover it, it’s also never okay to hit someone with a complex question or a question predicated on a false dichotomy — two things that are all too common in our modern sound bite culture. He also gives a four-part loop for decision making that’s very similar to, but slightly different from, the OODA loop.
There are some parts of the book I didn’t find quite so useful, though, primarily in the last few chapters, where the author moves off into psychology; much of this is based on views of the human psyche I don’t really accept, nor consider true (should I have phrased that in the form of a question?). Overall, though, this is a good, short read on learning how to stop trying to take control of every situation, and instead learning to ask what the person who’s asking you really wants. I’d give it 4 stars, overall.
The Odd Hours Solution
For many years, when I worked out in the center of the triangle of runways and taxiways, I would get up at around 4, swim a mile in the indoor pool (36 laps), shower, grab breakfast, run by base weather just to check the bigger pieces of equipment out (mostly the RADAR system), and then I’d head out to the shop. We could mostly only get downtime on the airfield equipment (particularly the VOR, TACAN, and glideslopes) in the early morning hours — unless, of course, there was a war on. Then we couldn’t get downtime at all. By 2:30 I was done with my work day, and I headed home to do whatever else needed doing.
When I left the USAF, after being trapped in some 9–5 jobs, I joined the Cisco TAC. Our shift started at 8 or 8:30, when we took over the 1-800 number from Brussels, and lasted until around 2 in the afternoon (it varied over time, as the caseloads and TACs were moved around). Freed from 9–5, I started getting to work at around 5:30 again. I could spend the first two or three hours following up on cases (did you know that no one in the US answers their phone before 8AM ET?), take cases during shift, and then work in the lab or do “other stuff” before going home around 3 in the afternoon. Another time management trick I used while working in TAC was converting every case to email; the phone, while often pleasant, is an inefficient form of communication when you’re dealing with a high caseload.
Okay — so why am I telling you this? It’s not like you care what hours I’ve worked in my life, right?
Because controlling my working hours is one of the most effective devices I’ve found both to force myself to macrotask and to stem the tide of multitasking for at least a few hours out of each day. Getting up early in the morning is only one way I work to control the amount of time I’m multitasking. Another: I normally work from home. It’s easy right now because I’m a remote worker (okay, honesty time: I’ve been a mostly remote worker for the last ten to fifteen years, starting, for the most part, from the time I joined engineering at Cisco). When I go to the office — say I fly to SJ for the day — I don’t take my laptop into the building. My laptop stays in the hotel room. I’ve recently (within the last two years) extended this to conferences like the IETF — when I’m out talking to people, I focus on the people. Focus works both ways — it’s either the computer, or it’s people, never both at the same time.
I’ve taken this idea of controlling my schedule for optimal time use farther than most people would, of course — my wife and I homeschool our children, in part, because we like to just go and do without asking for permission from the local school. There’s no absenteeism when your kids are with you.
But the point shouldn’t be missed. When I start working in the morning, I often don’t even check email, or the news, or anything else. I start in on whatever project I’m working on, putting in two or three hours of solid work before anyone can even call or email me. Most of my books, blog posts, and other things have been written between 6 and 9 in the morning.
Now, mornings might not work for you (though I can tell you most people I know have a lot more energy in the morning than they do after dinner — late night just isn’t as effective as the early morning for anyone I know). But whatever the time of day, the general idea is this: divide your days into times for specific things. Set aside time for projects, and time for talking. Don’t let the two overlap. It’ll greatly improve your efficiency.
Multitasking, Microtasking, and Macrotasking
One of the most frustrating things in my daily life is reaching lunch and not having a single thing I can point to as “done” for the day. I’m certain this is something every engineer faces from time to time — or even all the time (like me), because even Dilbert has something to say about it.
This is all the more frustrating for me because I actually don’t have clones (contrary to rumor #1), and I actually do sleep (contrary to rumor #2). I even spend time with my wife and kids from time to time, as well as volunteering at a local church and a seminary (teaching philosophy, ethics, logic, theology, worldview, and apologetics to a high school class at one, and serving as webmaster, all-around IT resource, guest lecturer, and so on at the other). My life’s motto seems to be “waste not a moment,” from reading to writing to research to, well, just about everything that doesn’t involve other people (I try never to be in a hurry when dealing with people, though this is honestly hard to do).
So, without clones, and with sleep, how can we all learn to be more productive? I’m no master of time (honestly), but my first rule is: reduce multitasking. Why?
We all know this is good for us, right? But how do you actually reduce multitasking in the real world? I have a number of techniques; for this post, I’m going to focus on a single one I use to reduce the impact of multitasking (see what I did there?).
When we think of multitasking, we tend to think of what I call microtasking — self-interruption either because of the tyranny of the immediate, or because we can’t finish what we’re doing until we’ve done something else. A perfect example of microtasking might be the story my grandfather used to tell me about the farmer who played checkers ’til dinner. He didn’t start out with playing checkers in mind, of course; he started out to bale the hay. But on picking up his baling fork in the morning, he realized the handle was in bad shape, so he decided to run into town and buy a new one. While there, he realized he needed to fill his truck up with gas, so he stopped for a minute, only to glance across the street and think, “while I’m here, I might as well go to the barber’s and get my hair cut…” While getting his hair cut, he realized it was just about time for lunch, and some of his friends were in town, so he stepped into the local diner for a bite with them. After lunch, of course, they started a checker game.
And soon enough, it was dinner time.
The point is, of course, to set out to do one thing, and do that one thing. Do one thing ’til you’re done. Then do something else. Instead of microtasking, macrotask. I have a very large todo list; each morning, I get up, pick one thing, and refuse to do pretty much anything else off my list until that one thing is done. And if I don’t get it done? I get up the next morning and start again where I left off. What happens if I’m interrupted along the way? I return to it as quickly as possible and work on it ’til it’s done. Once my head is “in” something, I want to keep it there ’til I’m done.
In the process, if I find a broken tool, or something new to write about, or something new I need to research, I put those on the todo list. I don’t stop what I’m doing to work on something else. I can’t stop interruptions (more on interruptions in a later post), but I can minimize their impact by working on a single task ’til I’m done. What I don’t do is say, “well, now that I’ve broken off in the middle of that sentence, I might as well work on something else for a while.” Don’t do that.
Of course it doesn’t always work. Sometimes the baling fork just needs to be fixed, and I requeue my “main task” ’til it is. There is still the tyranny of the immediate to manage. But what I find the most difficult is the feeling that I have so much on my todo list that I should be time slicing, doing a little of this and a little of that, because I could be getting more done. I have to remind myself this feeling is just that — a feeling. Experience shows the feeling is wrong.
So, to start: don’t microtask. Instead, macrotask.
You can’t get rid of it entirely, of course, but you can reduce the multitasking that naturally springs up in your life like weeds.
Lifehacker and Joel on Software have excellent pieces about multitasking available, as well.
Defining SDN Down
If a WAN product that uses software to control the flow of traffic is an SD-WAN, and a data center that uses software to build a virtual topology is an SD-DC, and a storage product that uses software to emulate traditional hardware storage products is SD storage, and a network where the control plane has been pulled into some sort of controller is an SDN, aren’t my LinkedIn profile and my Twitter username, @rtggeek, software defined people (SDP)? A related question: if there are already IoT vendors, and the IoT already has a market, can we declare the hype cycle dead and move on with our lives? Or is hype too useful to marketing folks to let it go that easily? One thing we do poorly in the networking world is define things. We’re rather sloppy about the language we use — and it shows.
Back on topic, but still to the point — maybe it’s time to rethink the way we use the phrase software defined. Does SD mean one thing emulating another? Does SD mean centralized control? Does SD mean software controlled? Does SD mean separating the control plane from the data plane? Does SD mean OpenFlow?
I’ll give you my definitions; you can disagree in the comments.
SDN specifically means placing an API in front of the database/table/information actually used (first hand) to forward traffic through a device, so controller software located off the device can manage the forwarding table remotely. I would carry this farther: the forwarding device itself should not be running a control plane that discovers reachability across multiple forwarding hops in the network, and shouldn’t generally discover anything more than a single hop of topology. Hence, protocols like EIGRP, OSPF, BGP, and IS-IS are not something that would run on a device with a remotely managed forwarding table. Things like ARP, while arguably part of the control plane, still run locally on the device — but anything that provides reachability through another device is decidedly not running on the device.
The location of the controller in this sort of situation is up for grabs. It could simply be a routing process running in a VM someplace else. It could control multiple devices. There could be one controller managing a few racks, an entire data center fabric, or even an entire network. What’s important is where the API lies, and what type of information is discovered locally.
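To make the definition concrete, here’s a minimal sketch of the split it implies. The class and method names are mine, purely hypothetical, and not any real controller’s API:

```python
# Hypothetical sketch of the SDN split described above; all names invented.

class ForwardingDevice:
    """Knows only itself and its directly connected neighbors."""
    def __init__(self, name, neighbors):
        self.name = name
        self.neighbors = neighbors   # single-hop discovery (think LLDP/ARP)
        self.fib = {}                # prefix -> next hop; written only via the API

    def install_route(self, prefix, next_hop):
        # The forwarding table is managed remotely -- this is the API line
        self.fib[prefix] = next_hop


class Controller:
    """Runs off the device; owns all multihop reachability computation."""
    def __init__(self, devices):
        self.devices = devices

    def program(self):
        for device in self.devices:
            # Gather one-hop facts from each device, compute paths across
            # the whole fabric here (SPF or otherwise), then push results
            # down through the API. No EIGRP/OSPF/BGP/IS-IS on the device.
            device.install_route("10.1.0.0/16", next_hop=device.neighbors[0])


leaf = ForwardingDevice("leaf1", neighbors=["spine1", "spine2"])
Controller([leaf]).program()
```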
A programmable network, on the other hand, is one in which some or all of the control plane has been moved off the forwarding devices and into some sort of controller. The primary API, in this case, is to the table that feeds the actual forwarding table — the table from which the forwarding table is developed, which is what I would normally call the Routing Information Base (RIB). Some sort of multihop reachability discovery could (or would) still be running on such a device; IS-IS might still be used to discover the topology and reachable destinations, while the controller overlays the network with what I would consider policy — anything that overrides the shortest path in order to modify traffic flow for any reason.
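And a matching sketch of the programmable network model (again, hypothetical names): the device still runs its own IGP and builds its own RIB, while the controller’s API injects policy routes that override the shortest path, rather than owning the forwarding table outright.

```python
# Hypothetical sketch of the programmable-network split; all names invented.

class ProgrammableRouter:
    def __init__(self):
        self.rib = {}   # prefix -> (next hop, preference); feeds the FIB

    def igp_update(self, prefix, next_hop):
        # IS-IS/OSPF still runs on the device and populates the RIB locally
        self.rib.setdefault(prefix, (next_hop, 100))

    def inject_policy_route(self, prefix, next_hop):
        # The controller writes into the RIB at a better preference,
        # overriding the IGP's shortest path for this prefix only
        self.rib[prefix] = (next_hop, 10)

    def best_next_hop(self, prefix):
        next_hop, _preference = self.rib[prefix]
        return next_hop


router = ProgrammableRouter()
router.igp_update("10.2.0.0/16", "shortest-path-hop")
router.inject_policy_route("10.2.0.0/16", "policy-path-hop")  # traffic engineering
assert router.best_next_hop("10.2.0.0/16") == "policy-path-hop"
```

Put side by side, the difference between the two sketches is just where the API sits: at the forwarding table in the first, at the RIB in the second.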
It seems to me (if these are acceptable definitions) that the future lies in a combination of SDNs and programmable networks. SDNs, in their pure sense, are useful within a limited scope: some number of devices placed under the control of a single controller. The controller then participates in a distributed control plane that carries reachability and topology information between individual devices and other controllers. On top of this sits a set of controllers that manage the policy in the network, directly injecting information into the network at the distributed control plane level rather than configuring policy through things like interface-level access lists. If we’re going to get to a truly dynamic network, we need to get away from using things that show up in a local configuration for policy in the control plane.
Somehow we need to stop yabbering about SD-this and SD-that, and actually build a model we can all agree on, with a more defined set of terms, so we can have an actual conversation that doesn’t involve misdirection and marketing hype.
In the meantime, I think I’ll go back and check on my SDP.
Rule 11 is your friend
It’s common enough in the networking industry — particularly right now — to bemoan the rate of change. In fact, when I worked in the Cisco Technical Assistance Center (TAC), we had a phrase that described how we felt about the amount of information and the rate of change: sipping from the firehose. This phrase has become ubiquitous in the networking world to describe the feeling we all have of being left out, left behind, and just plain unable to keep up.
It’s not much better today, either. SDNs threaten to overturn the way we build control planes, white boxes threaten to upend the way we view vendor relationships, virtualization threatens to radically alter the way we think about the relationship between services and the network, and cloud computing promises to make the entire swath of network engineers redundant. It’s enough to make a reasonable engineer ask some rather hard questions, like whether it’s better to flip burgers or move into management (because the world always needs more managers). Some of this is healthy change, of course — we need to spend more time thinking about why we’re doing what we’re doing, and the competition of the cloud is probably a good thing. But there’s another aspect here I don’t think we’ve thought about enough.
Sure, there’s a firehose here. But there are fields all over the world where there’s a veritable firehose of new information, new thinking, and new products being designed, developed, and introduced. The actual work of building buildings has radically changed over the last 50–100 years. Some folks have been thrown out of the business in the process, but what we tend to see is more buildings being put up faster, not a bunch of midlife hamburger flippers who used to design buildings. All around us we see tons of new technology being pressed into service, and yet those fields don’t seem to have the massive fear of dislocation, combined with constant angst, that always seems to be in the air in network engineering (and the information technology industry at large).
I know it’s easy to fly the black flag and say, “well, if you can’t keep up, get out.” I don’t know if this is precisely fair to the old, grizzled folks who have families and lives outside work. I don’t even know if it’s fair to the newbies coming in—a career field that eats people by the time they’re 50, and says, “just save up while you make enough to do so, and forget having a family,” just doesn’t seem all that healthy to me. Instead, we need to find ways to mitigate the firehose. Somehow, we need to learn to cut it down so we can actually learn, and understand, and still live our lives.
But before I talk about Rule 11, let me be honest for a second — this industry isn’t going to change unless we change it. There’s no real reason for it to change. After all, 20 year olds cost less than 50 year olds to keep on staff, the firehose makes a lot of money for vendors, and there’s a large ego boost in asking questions like, “did you see the latest vendor x box,” or in “beating” someone in an interview.
For those of us who do want to change the networking world, or even just to keep up without sipping from the firehose, what can we use as a handle? This is where Rule 11 comes in. To refresh your memory, here it is, straight from RFC 1925 (The Twelve Networking Truths): “Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.”
Most people snicker when they read this, because it really is funny. But if Rule 11 is true, 90% of the water coming out of the firehose is, in fact, recycled.
Do you see it yet? If you can successfully build a mental model of each technology, and then learn to expand that mental model to each new technology you encounter, you will be able to mitigate the firehose.
If we’re going to survive as an industry, we need to get past the firehose. We need to stop thinking about the sheet metal and the cable colors, and start thinking about processes, ideas, and models. We need to stop flying by the seat of our pants, and start trying to make this stuff into real engineering, rather than black magic. Yes, I moved from working on airfield electronics to network engineering because I craved the magical side of this world, but magic just isn’t a sustainable business model, nor a sustainable way of life.
Memorize — or Think?
I have several friends with either photographic, or near photographic, memories. For instance, I work with someone (on the philosophical side of my life) who is just astounding in this respect. If you walk into his office and ask about some concept you’ve just run across, no matter how esoteric, he can give you a rundown of every major book in the field covering the topic, important articles from several journals, and even book chapters that are germane and important. I’ve actually had him point me to the text of a footnote when I asked about a specific concept.
It seems, to me, that the networking industry often focuses on this sort of thing. Quick, can you name the types of line cards available for the Plutobarb CNX1000, how many of each you can put in the chassis, what the backplane speed is, and what the command is to configure OSPF type 3 filters on Tuesdays between three and four o’clock? When we hit this sort of question, and can’t answer it, people look at us like we’re silly or something.
Right?
I know, because I’ve been there. I’ve had people ask me the strangest questions in interviews, such as how many spine and leaf boxes it would take to support a specific number of edge ports given a specific set of boxes (sorry folks, I can’t work it out in my head, I need a calculator and hopefully a whiteboard), how many subnets there are in a /22 (these I can calculate in my head for v4, and I’m trying to get to the point of being able to do it in v6), how to configure a specific feature on three different boxes, and a few other odds and ends. In fact, the majority of the interviews I’ve been involved in, across my career, have involved at least one person asking me these sorts of questions.
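As an aside, the /22 question is pure arithmetic, which is part of why I don’t care much about recall here; the whole answer fits in a few lines of Python (assuming, since the question as asked never says, that the interviewer means /24 subnets):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/22")
print(net.num_addresses)                      # 2**(32 - 22) = 1024 addresses
print(len(list(net.subnets(new_prefix=24))))  # 2**(24 - 22) = 4 /24 subnets
```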
Most of the time, the answer I really want to blurt out is, “I don’t care.” Most of the time, I don’t. I did, one time, challenge the interviewer to an esoteric match, though. I was asked about something odd to which I didn’t know the answer, so I answered, “I’ll make a deal with you — for every esoteric question you ask, I get to ask one back. If I stump you more than you stump me, I pass the interview. Deal?”
I think I understand why we do this — one of the first temptations of the teacher is to ask questions that are easy to grade, rather than questions that actually cover the required material.
But I also think we need to think more about what we’re trying to build as an engineering community. Should we memorize more, or think more? I know, to some degree, that this is a false dichotomy; you can’t think without something to think about. Memorization is a critical skill. Even as a PhD student in a philosophy program I need to memorize things (in fact, lots of things). If I don’t memorize the author’s line of argument in a text we’re studying, for instance, I’m pretty useless in class discussion.
There needs to be a balance here, though. The question is — how do we reach the balance? What does that balance look like? Of course, the balance isn’t going to be the same for everyone, and every position. Sometimes you just have to develop a habit of actions that will serve you well in times of crisis, for instance. But other times you don’t. How do you know? I have some suggestions here, but feel free to add more in the comments. My suggestions are…
First, ask: why do I care? If the job is in a NOC, and the candidate is going to be troubleshooting OSPF adjacency problems at 2AM, they probably need to know the OSPF adjacency process very well, including things like which multicast addresses OSPF uses for which purposes, the stages of the process, and so on. So first, know what they need to know.
Second, ask how should I ask this? If you’re asking about troubleshooting OSPF adjacencies, do you ask what commands you would use to troubleshoot an OSPF adjacency problem? Or do you ask what the stages of an OSPF adjacency are (in detail)? Or do you set up a broken adjacency and ask the candidate to mock up fixing it? Each of these tries to get to the same information, but in different ways. Which one makes the most sense for your environment? And which one asks more for thinking skills rather than memorization skills?
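For the OSPF example, here’s roughly the difference between the two skill sets in one sketch. The state names and multicast addresses come straight from RFC 2328; the troubleshooting hints are my own shorthand for the kind of reasoning the 2AM engineer actually needs, not an exhaustive list:

```python
# OSPF neighbor states in order (RFC 2328), plus the well-known addresses.
ADJACENCY_STATES = ["Down", "Attempt",  # Attempt applies to NBMA networks only
                    "Init", "2-Way", "ExStart", "Exchange", "Loading", "Full"]

ALL_SPF_ROUTERS = "224.0.0.5"  # hellos and updates to all OSPF routers
ALL_D_ROUTERS = "224.0.0.6"    # updates addressed to the DR and BDR

# Illustrative hints only (my shorthand): the thinking skill is mapping a
# stuck state back to a likely cause, not reciting the list above.
STUCK_STATE_HINTS = {
    "Down":    "no hellos seen: interface down, filtered, or timer/area mismatch",
    "Init":    "our hellos aren't heard: one-way traffic, check the far side",
    "2-Way":   "normal between two DROthers on a broadcast segment",
    "ExStart": "classic MTU mismatch, or duplicate router IDs",
}

def hint(state: str) -> str:
    return STUCK_STATE_HINTS.get(state, "adjacency progressing: wait, then debug the exchange")

print(hint("ExStart"))
```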
Let me try to encapsulate these two into a simpler form, though.
What would you allow someone to search for during an interview?
Command line information? Protocol operation details? Protocol operation theory? Or nothing?