In the argument between OSPF and BGP in the data center fabric over at Justin’s blog, I am decidedly in the camp of IS-IS. Rather than lay my reasons out here, however (a topic for another blog post?), I want to focus on something else Justin said that I think is incredibly important for network engineers to understand.

I think whiteboards are the most important tool for network design currently available, which makes me sad. I wish that wasn’t true, I want much better tools. I can’t even tell you the number of disasters averted by 2-3 great network engineers arguing over a whiteboard.

I remember—way back—when I was working on the problems around making a link-state protocol work well in a Mobile Ad Hoc Network (MANET), we had two competing solutions presented to the IETF. The first solution was primarily based on whiteboarding through various options and coming up with one that should reduce flooding to an acceptable level. The second was less optimal on the whiteboard but supported by simulations showing it should reduce flooding more effectively.

Which solution “won?” I don’t know what “winning” might mean here, but the solution designed on the whiteboard has been widely deployed and is now showing up in other places—take a look at distoptflood and the flooding optimizations in RIFT. Both are like what we designed all those years ago to make OSPF work in a MANET.

Does this mean simulations, labbing, and testing are useless? No.

Each of these tools brings a different set of strengths to the table when trying to solve a problem. For instance, there is no way to optimize the flooding in a protocol unless you really know how flooding works, what information you have available, where things might fail, what features are designed into the protocol to prevent or work around failures, etc. These things are what you need to put together and understand on a whiteboard.
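To make the point concrete, here is a minimal sketch in Python, using an invented toy topology rather than anything from the MANET work itself, of the behavior you end up reasoning through on the whiteboard: every node that hears a new LSA re-floods it to all of its neighbors, so a dense topology carries far more copies than are needed to reach every node. That excess is exactly what flooding optimization tries to remove.

```python
# A toy illustration of why link-state flooding needs optimization on a
# dense topology. The graph and numbers are made up; this is not a model
# of any real protocol implementation.
import random
from collections import deque

def build_dense_graph(n, degree, seed=1):
    """Build a random connected graph where each node links to roughly `degree` others."""
    random.seed(seed)
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        # Always link to the next node so the graph stays connected.
        neighbors[i].add((i + 1) % n)
        neighbors[(i + 1) % n].add(i)
        while len(neighbors[i]) < degree:
            j = random.randrange(n)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def flood(neighbors, origin):
    """Naive flooding: every node re-floods a new LSA to all neighbors
    except the one it heard it from. Returns total copies sent."""
    seen = {origin}
    copies = 0
    queue = deque([(origin, None)])
    while queue:
        node, came_from = queue.popleft()
        for peer in neighbors[node]:
            if peer == came_from:
                continue
            copies += 1                 # one copy of the LSA on this link
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, node))
    return copies, len(seen)

graph = build_dense_graph(n=100, degree=8)
copies, reached = flood(graph, origin=0)
print(f"{reached} nodes reached, {copies} copies sent "
      f"(a spanning tree would need only {reached - 1})")
```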

You cannot think of every possible situation that needs to be simulated, nor can you simulate every possible order of operations or every possible timing problem that might occur. If you know the protocol, however, you can cover most of this ground and design in fail-safes. Simulations are necessary, but not sufficient, for network and protocol design.

On the other side of the coin, Justin points out the stack they were using was known to have a weak BGP implementation and a strong OSPF implementation. This, and other scale and timing issues, will only show up in a simulation. For instance, you cannot answer the question “how large can the LSDB become before the processor keels over and dies?” on a whiteboard—it’s just not possible. The whiteboard is necessary, but not sufficient, for network and protocol design.
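As a trivial illustration of the kind of question only measurement can answer, here is a sketch (again in Python, with an invented topology generator, and no claim about how any real implementation behaves) that times a shortest path first run as a toy LSDB grows:

```python
# A toy illustration of a question only measurement can answer: how does
# SPF runtime grow as the LSDB grows? The topology generator and numbers
# are invented; real implementations behave very differently.
import heapq
import random
import time

def random_topology(num_nodes, links_per_node, seed=7):
    random.seed(seed)
    adj = {n: [] for n in range(num_nodes)}
    for n in range(num_nodes):
        for _ in range(links_per_node):
            m = random.randrange(num_nodes)
            if m != n:
                cost = random.randint(1, 10)
                adj[n].append((m, cost))
                adj[m].append((n, cost))
    return adj

def spf(adj, root=0):
    """Plain Dijkstra over the adjacency list."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for peer, cost in adj[node]:
            nd = d + cost
            if nd < dist.get(peer, float("inf")):
                dist[peer] = nd
                heapq.heappush(heap, (nd, peer))
    return dist

for size in (1_000, 5_000, 20_000):
    adj = random_topology(size, links_per_node=4)
    start = time.perf_counter()
    spf(adj)
    elapsed = time.perf_counter() - start
    print(f"{size:>6} nodes: SPF took {elapsed * 1000:.1f} ms")
```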

The danger is that we attempt to replace one with the other—that we either ignore the value of simulation because it’s hard (or even impossible), or we ignore the power of the whiteboard because we don’t understand the protocols well enough to stand at the whiteboard and argue. Every team needs to have at least one person who can “see” how the network will converge just because they understand the protocol. And every team needs to have at least one person who can throw together a simulation and show how the network will converge in the same situation.

Whiteboards and simulations are both crucial tools. Learn the protocols and design well enough to use the one and find or build the tools needed to use the other. Missing either one is going to leave a blind spot in your abilities as an engineer.

Which one are you missing in your skill set right now? If you know the answer to that question, then you know at least one thing you need to learn “next.”

For those not following the current state of the ITU, a proposal has been put forward to (pretty much) reorganize the standards body around “New IP.” Don’t be confused by the name—it’s exactly what it sounds like, a proposal for an entirely new set of transport protocols to replace the current IPv4/IPv6/TCP/QUIC/routing protocol stack nearly 100% of the networks in operation today run on. Ignoring, for the moment, the problem of replacing the entire IP infrastructure, what can we learn from this proposal?

What I’d like to focus on is deterministic networking. Way back in the days when I was in the USAF, one of the various projects I worked on was called PCI. The PCI network was a new system designed to unify the personnel processing across the entire USAF, so there were systems (Z100s, 200s, and 250s) to be installed in every location across the base where personnel actions were managed. Since the only wiring we had on base at the time was an old Strowger mainframe, mechanical crossbars at a dozen or so BDFs, and varying sizes of punch-downs at BDFs and IDFs, everything for this system needed to be punched- or wrapped-down as physical circuits.

It was hard to get anything like real bandwidth over paper-wrapped cables with lead shielding that had been installed in the 1960s (or before??), wrap-downs, and 66 blocks. The max we could get in terms of bandwidth was about a T1, and often less. We needed more bandwidth than this, so we installed inverse multiplexers that would combine the bandwidth across multiple physical circuits to something large enough for the PCI system to run on.

The problem we ran into with these inverse multiplexers was that they would sometimes fall out of synchronization, meaning frames would be delivered out of order—a violation of the first cardinal rule of deterministic networking. Since PCI was designed around a purely deterministic networking model, these failures caused havoc with HR. Not a good thing.
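For illustration, here is a rough sketch of that failure mode, assuming a simple round-robin inverse multiplexer; the real hardware was more involved, and the delay numbers are invented. Stripe frames across the member circuits, let one circuit slip, and the far end receives frames out of order.

```python
# A rough sketch of how an inverse multiplexer can reorder frames when the
# member circuits fall out of synchronization. Round-robin striping and the
# delay numbers are invented for illustration.

def inverse_mux(frames, circuit_delays):
    """Stripe frames round-robin across circuits; each circuit adds its own
    delay. Returns frames in the order they arrive at the far end."""
    arrivals = []
    for seq, frame in enumerate(frames):
        circuit = seq % len(circuit_delays)
        arrival_time = seq + circuit_delays[circuit]
        arrivals.append((arrival_time, seq, frame))
    arrivals.sort()  # the far end sees frames in arrival order
    return [(seq, frame) for _, seq, frame in arrivals]

frames = [f"frame-{i}" for i in range(8)]

in_sync = inverse_mux(frames, circuit_delays=[0.1, 0.1, 0.1])
slipped = inverse_mux(frames, circuit_delays=[0.1, 0.1, 4.5])  # one circuit slips

print("in sync :", [seq for seq, _ in in_sync])
print("slipped :", [seq for seq, _ in slipped])
```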

The second and third cardinal rules of deterministic networking are that there will be no jitter, and the network will deliver all packets accepted by the network. To make these rules work, there must be some form of entrance gating (circuit setup, generally speaking, with busy signals), fixed packet sizes, and strict quality of service.
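A minimal sketch of what that entrance gating looks like in code, with fixed-size frames and a reservation that either fits or gets a busy signal, might run something like this (the class, names, and numbers are all invented for illustration):

```python
# A minimal sketch of deterministic-style entrance gating: the network only
# accepts a flow if it can reserve capacity for it, and everything it does
# accept uses fixed-size frames. Names and numbers are invented.

FRAME_SIZE = 48          # every accepted frame is exactly this many bytes
LINK_CAPACITY = 10       # frames per scheduling interval on the bottleneck link

class DeterministicGate:
    def __init__(self, capacity=LINK_CAPACITY):
        self.capacity = capacity
        self.reserved = 0

    def call_setup(self, frames_per_interval):
        """Admit the flow only if its reservation fits; otherwise 'busy'."""
        if self.reserved + frames_per_interval > self.capacity:
            return False  # busy signal: the caller must try later or elsewhere
        self.reserved += frames_per_interval
        return True

    def send(self, payload: bytes):
        """Fixed frame size: payload is padded or rejected, never variable."""
        if len(payload) > FRAME_SIZE:
            raise ValueError("payload larger than the fixed frame size")
        return payload.ljust(FRAME_SIZE, b"\x00")

gate = DeterministicGate()
print(gate.call_setup(6))        # True  -- reservation accepted
print(gate.call_setup(6))        # False -- would oversubscribe the link: busy
print(len(gate.send(b"hello")))  # 48    -- always a full-size frame
```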

In contrast, the rules of packet-switched (non-deterministic) networking are: all packets are accepted, packets can be (almost) any size, there is no guarantee any particular packet will be delivered, and there’s no way to know what the jitter and delay on delivering any particular packet might be.

Some kinds of payloads just need deterministic semantics, while others just need packet-switched semantics. The problem, then, is how to build a single network that supports both kinds of semantics. Solving this problem is where thinking through the situation using a problem-solution mindset can help you, as a network engineer (or protocol designer, or software creator), understand what can be done, what cannot be done, and what the limitations are going to be no matter what solution you choose.

There are, at base, only three solutions to this problem. The first is to build a network that supports both deterministic and packet-switched traffic from the control planes to the switching path. The complexity of doing something like this would ultimately outweigh any possible benefit, so let’s leave that solution aside. The second is to emulate a deterministic network on top of a packet-switched network, and the third is to emulate a packet-switched network on top of a deterministic network.

Consider the last option first—emulating a packet-switched network on top of a deterministic network. For those who are old enough, this is IP-over-ATM. It didn’t work. The inefficiencies of trying to stuff variably sized packets into the fixed-size frames required to create a deterministic network were so significant that it just … didn’t work. The control planes were difficult to deploy and manage—think ATM LANE, for instance—and the overall network was just really complicated.
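The inefficiency is easy to see with a little arithmetic. Here is a rough sketch of the “cell tax” you pay when stuffing variable-size packets into 53-byte cells carrying 48 bytes of payload; it ignores AAL5 trailers and other real-world details, and is only meant to show the shape of the overhead.

```python
# A rough sketch of the "cell tax": segmenting variable-size packets into
# fixed 53-byte cells (48 bytes of payload, 5 bytes of header). This ignores
# AAL5 trailers and other details; it only shows the shape of the overhead.
import math

CELL_PAYLOAD = 48
CELL_SIZE = 53

def cell_tax(packet_bytes):
    cells = math.ceil(packet_bytes / CELL_PAYLOAD)
    wire_bytes = cells * CELL_SIZE
    return cells, wire_bytes, (wire_bytes - packet_bytes) / wire_bytes

for size in (40, 64, 576, 1500):
    cells, wire, overhead = cell_tax(size)
    print(f"{size:>5}-byte packet -> {cells:>3} cells, {wire:>5} bytes on the wire, "
          f"{overhead:.0%} overhead")
```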

Today, what we mostly do is emulate deterministic networks over packet-switched networks. This design allows traffic that does well in a packet-switched environment to run perfectly fine, and traffic that likes “some” deterministic properties, like voice, to work fine with a little effort on the part of the network designer and operator. Building something with all the properties of a genuinely deterministic network on top of a packet-switched network, however, is difficult.

To get there, you need a few things. First, you need some way to strictly control/steer flows, so the mix of packet and deterministic traffic is going to allow you to meet deterministic requirements on every link through the network. Second, you need some excellent QoS mechanism that knows how to provide deterministic results while mixing in packet-switched traffic “underneath.” Third, you need to overprovision enough to take up any “slack” and variability, as well as to account for queuing and clock on/clock off delays through switching devices.
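As a toy example of the second requirement, here is a sketch of a strict-priority scheduler that always serves the admission-controlled deterministic queue first and lets best-effort traffic soak up whatever capacity is left. It is invented for illustration, and nothing like as involved as a real QoS implementation.

```python
# A toy strict-priority scheduler: the deterministic queue (admission
# controlled, so it can never exceed its reservation) is always served
# first, and best-effort packets soak up whatever capacity is left over.
# Invented for illustration only.
from collections import deque

def schedule_interval(deterministic, best_effort, slots_per_interval):
    """Drain up to `slots_per_interval` packets, deterministic first."""
    sent = []
    for _ in range(slots_per_interval):
        if deterministic:
            sent.append(("det", deterministic.popleft()))
        elif best_effort:
            sent.append(("be", best_effort.popleft()))
        else:
            break
    return sent

det = deque(["voice-1", "voice-2"])
be = deque([f"bulk-{i}" for i in range(6)])

for interval in range(3):
    print(f"interval {interval}:", schedule_interval(det, be, slots_per_interval=3))
```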

The truth is we can come pretty close—witness the ability of IP networks to carry video streaming and what looks like traditional voice today. Can it get better? I’m confident we can build systems that are ever closer to emulating a truly deterministic network on top of a packet-switched network—so long as we mind the tradeoffs and are willing to throw capacity and hardware at the problem.

Can we ever truly emulate packet-switched networks on top of deterministic networks? I don’t see how. It’s not just that we’ve tried and failed; it’s that the math just doesn’t ever seem to “work right” in some way or another.

So while the “new IP” proposal brings up an interesting problem—future applications may need more deterministic networking profiles—it doesn’t explain why we should believe that either building a completely parallel deterministic network, or trying to flip the stack and emulate a packet-switched network on top of a deterministic network, makes more sense than what we are doing today.

Looking at this from a problem/solution perspective helps clarify the situation and produce a conclusion about which path is best, without even getting into specific protocols or implementations. Really understanding the problem you are trying to solve, even at an abstract level, and then working through all the possible solutions, even ones that might not have been invented yet (although I can promise you they have been invented), can help you get your mind around the engineering possibilities.

It is probably not the way you’re accustomed to looking at network design, protocol selection, etc. But it’s a way you should start using.

On a Spring 2019 walk in Beijing I saw two street sweepers at a sunny corner. They were beat-up looking and grizzled but probably younger than me. They’d paused work to smoke and talk. One told a story; the other’s eyes widened and then he laughed so hard he had to bend over, leaning on his broom. I suspect their jobs and pay were lousy and their lives constrained in ways I can’t imagine. But they had time to smoke a cigarette and crack a joke. You know what that’s called? Waste, inefficiency, a suboptimal outcome. Some of the brightest minds in our economy are earnestly engaged in stamping it out. They’re winning, but everyone’s losing. —Tim Bray

This, in a nutshell, is what is often wrong with our design thinking in the networking world today. We want things to be efficient, wringing the last little dollar, and the last little bit of bandwidth, out of everything.

This is also, however, a perfect example of the problem of triads and tradeoffs. In the case of the street sweeper, we might think, “well, we could replace those folks sitting around smoking a cigarette and cracking jokes with a robot, making things much more efficient.” We might notice the impact on the street sweepers’ salaries—but after all, it’s a boring job, and they are better off doing something else anyway, right?

We’re actually pretty good at finding, and “solving” (for some meaning of “solving,” of course), these kinds of immediately obvious tradeoffs. It’s obvious the street sweepers are going to lose their jobs if we replace them with a robot. What might not be so obvious is the loss of the presence of a person on the street. That’s a pair of eyes who can see when a child is being taken by someone who’s not a family member, a pair of ears that can hear the rumble of a car that doesn’t belong in the neighborhood, a pair of hands that can help someone who’s fallen, etc.

This is why these kinds of tradeoffs always come in (at least) threes.

Let’s look at the street sweepers in terms of the SOS triad. Replacing the street sweepers with a robot or machine certainly increases optimization. According to the triad, though, increasing optimization in one area should result in some increase in complexity someplace, and some loss of optimization in other places.

What about surfaces? The robot must be managed, and it must interact with people and vehicles on the street—which means people and vehicles must also interact with the robot. Someone must build and maintain the robot, so there must be some sort of system, with a plethora of interaction surfaces, to make this all happen. So yes, there may be more efficiency, but there are also more interaction surfaces to deal with. These interaction surfaces increase complexity.

What about state? In a sense, there isn’t much change in state other than moving it—purely in terms of sweeping the street, anyway. The sweeper and the robot must both understand when and how to sweep the street, etc., so the state doesn’t seem to change much here.

On the other hand, that extra set of eyes and ears, that extra mind, that is no longer on the street in a personal way represents a loss of state. The robot is an abstraction of the person who was there before, and abstraction always represents a loss of state in some way. Whether this loss of state decreases the optimal handling of local neighborhood emergencies is probably a non-trivial problem to consider.

The bottom line is this—when you go after efficiency, you need to think in terms of efficiency of what, rather than efficiency as a goal in itself. That’s because there is no such thing as “efficiency-in-itself”; there is only something you are making more efficient—and a lot of things you are potentially making less efficient.

Automate your network, certainly, or even buy a system that solves “all the problems.” But remember there are tradeoffs—often a large number of tradeoffs you might not have thought about—and those tradeoffs have consequences.

It’s not “if you haven’t found the tradeoff, you haven’t looked hard enough…” It’s “if you haven’t found the tradeoffs, you haven’t looked hard enough.” Don’t stop at one; it’s a plural for a reason.

Post-mortem reviews seem to be quite common in the software engineering and application development sides of the IT world—but I do not recall a lot of post-mortems in network engineering across my 30 years. This puzzling observation sprang to mind while I was reading a post over at the ACM this last week about how to effectively learn from the post-mortem exercise.

The common pattern seems to be setting aside a one hour meeting, inviting a lot of people, trying to shift blame while not actually saying you are shifting blame (because we are all supposed to live in a blame-free environment now—fix the problem, not the blame!), and then … a list is created on a whiteboard, pictures are taken, and everyone walks away with a rock-solid plan to never do that again.

In a few months’ time, the same team will be in the same room, draw the same drawings, and say the same things all over again. At least that is the way it seems to me. If there is an effective post-mortem process in use by a company someplace, I do not think I have seen it.

From the article—

Are we missing anything in this prevalent rinse-and-repeat cycle of how the industry generally addresses incidents that could be helpful? Put another way: As we experience incidents, work through them, and deal with their aftermath, if we set aside incident-specific, and therefore fundamentally static, remediation items, both in technology and process, are we learning anything else that would be useful in addressing and responding to incidents? Can we describe that knowledge? And if so, how would we then make use of it to leverage past pain and improve future chances at success?

I tend to think, from the few times I have seen network post-mortems performed, that the reason they do not work well is because we slip into the same appliance/configuration frame of mind so quickly. We want to understand what configuration was entered incorrectly, or what defect should be reported back to the vendor, rather than thinking about organizational and process changes. The smaller the detail, the safer the conclusions, after all—aim small, miss big, is what we say in the shooting world.

We focus so much on mean time to innocence, and how to create a practically perfect process that will never fail, that we fail to do the one thing we should be doing: learning.

Okay, so enough whining—what can be done about this situation? A few practical suggestions come to mind. These are not, of course, well-thought-out solutions, but rather, perhaps, “part of the solution.”

Rather than trying to figure out the root cause, spend that precious hour of post-mortem time mapping out three distinct workflows. The first should be the process that set up the failure. What drove the installation of this piece of hardware or software? What drove the deployment of this protocol? How did we get to the place where this failure had that effect? Once this is mapped out, see if there is anything in that process, or even in the political drivers and commitments made during that process, that could or should be modified to really change the way technology is deployed in your network.

The second process you should map out is the steps taken to detect the problem. Dwell time is a huge problem in modern networks—the time between a failure occurring and being detected. You should constantly focus on bringing dwell time down while paying close attention to the collateral damage of false positives. Mapping out how this failure was detected, and where it should have been caught sooner, can help improve telemetry systems, ultimately decreasing MTTR.
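If you want to bring dwell time down, start by measuring it. Here is a minimal sketch, assuming you can reconstruct timestamps for when each failure began and when it was first detected; the incident records below are invented, and in practice these timestamps would come from your monitoring and ticketing systems.

```python
# A minimal sketch of measuring dwell time: the gap between when a failure
# actually began and when your telemetry first noticed it. The incident
# records here are invented; in practice you would pull these timestamps
# from your ticketing and monitoring systems.
from datetime import datetime
from statistics import median

incidents = [
    {"id": "INC-101", "failed_at": "2020-03-02T01:14:00", "detected_at": "2020-03-02T01:52:00"},
    {"id": "INC-102", "failed_at": "2020-03-09T14:05:00", "detected_at": "2020-03-09T14:09:00"},
    {"id": "INC-103", "failed_at": "2020-03-20T22:31:00", "detected_at": "2020-03-21T06:02:00"},
]

def dwell_minutes(record):
    failed = datetime.fromisoformat(record["failed_at"])
    detected = datetime.fromisoformat(record["detected_at"])
    return (detected - failed).total_seconds() / 60

dwell = sorted(dwell_minutes(r) for r in incidents)
print(f"median dwell: {median(dwell):.0f} minutes, worst: {dwell[-1]:.0f} minutes")
```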

The third, and final, workflow you map out should be the troubleshooting process itself. People rarely map out their troubleshooting process for later reference, but this little trick, one I learned way back in my tube-type electronics days, used to save me hours of time in the field. As you troubleshoot, make a flow chart. Record what you checked, why you checked it, how you checked it, and what you learned from the check. This flowchart, or workflow, is precious material in the post-mortem process. What can you instrument, or make easier to find, to reduce troubleshooting time in the next go-round? How can you traverse the network and find the root cause faster next time? These are crucial questions you can only answer with the use of a troubleshooting workflow.
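You do not need fancy tooling for this; even a few lines of script that capture what you checked, why, how, and what you learned will do. Here is a sketch of the idea, with invented checks standing in for the real ones:

```python
# A sketch of recording the troubleshooting workflow as you go: what was
# checked, why, how, and what was learned. The checks shown are invented;
# the point is the record, which becomes raw material for the post-mortem.
from dataclasses import dataclass, field

@dataclass
class Check:
    what: str
    why: str
    how: str
    learned: str

@dataclass
class TroubleshootingLog:
    incident: str
    checks: list = field(default_factory=list)

    def record(self, what, why, how, learned):
        self.checks.append(Check(what, why, how, learned))

    def flowchart(self):
        lines = [f"Incident {self.incident}"]
        for i, c in enumerate(self.checks, 1):
            lines.append(f"  {i}. {c.what} (why: {c.why}; how: {c.how}) -> {c.learned}")
        return "\n".join(lines)

log = TroubleshootingLog("INC-101")
log.record("BGP sessions on edge-1", "reports of unreachable prefixes",
           "show bgp summary", "all sessions established")
log.record("route advertisement from pod-3", "prefixes missing upstream",
           "looking glass + config diff", "new policy filtered the aggregate")
print(log.flowchart())
```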

I don’t know if you already do post-mortems or not, or how valuable you think they are—but I would suggest they can be, and are, quite useful. So long as you get out of the narrows and focus on systems and workflows. Aim small, miss big—but aim big and you’ll either hit the target or, at worst, miss small.

Ironies of Automation

In 1983 I was just joining the US Air Force, and still deeply involved in electronics (rather than computers). I had written a few programs in BASIC and assembler on a COCOII with a tape drive, and at least some of the electronics I worked on used vacuum tube triodes, plate oscillators, and operational amplifiers. This was a magical time, though—a time when “things” were being automated. In fact, one of the reasons I left electronics was that the automation wave left my job “flat.” Instead of looking into the VOR shelter to trace through a signal path using a VOM (remember the safety L!) and oscilloscope, I could sit at a terminal, select a few menu items, grab the right part off the depot shelf, replace, and go home.

Maybe the newer way of doing things was better. On the other hand, maybe not.

What brings all this to mind is a paper from 1983 titled The Ironies of Automation. Because of our arrogant belief that we can remake the world through disruption (was the barbarian disruption of Rome in 455 the good sort of disruption, or the bad sort?), we often think we can learn nothing from the past. Reality check: the past is prelude.

What can the past teach us about automation? This is as good a place to start as any other:

There are two general categories of task left for an operator in an automated system. He may be expected to monitor that the automatic system is operating correctly, and if it is not he may be expected to call a more experienced operator or to take-over himself. We will discuss the ironies of manual take-over first, as the points made also have implications for monitoring. To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.

This is the first of the ironies of automation Lisanne Bainbridge discusses—and this is the irony I’d like to explore. The irony she is articulating is this: the less you work on a system, the less likely you are to be able to control that system efficiently. Once a system is automated, however, you will not work on the system on a regular basis, but you will be required to take control of the system when the automated controller fails in some way. Ironically, in situations where the automated controller fails, the amount of control required to make things right again will be greater than in normal operation.

In the case of machine operation, it turns out that the human operator is required to control the machine in just the situations where the least amount of experience is available. This is analogous to the automated warehouse in which automated systems are used to stack and sort material. When the automated systems break down, there is absolutely no way for the humans involved to figure out why things are stacked the way they are, nor how to sort things out to get things running again.

This seems intuitive. When I’m running the mill through manual control, after I’ve been running it for a while (I’m out of practice right now), I can “sense” when I’m feeding too fast, meaning I need to slow down to prevent chatter from ruining the piece, or worse—a crash resulting in broken bits of bit flying all over the place.

How does this apply to network operations? On the one hand, it seems like once we automate all the things we will lose the skills of using the CLI to do needed things very quickly. I always say “I can look that command up,” but if I were back in TAC, troubleshooting a common set of problems every day, I wouldn’t want to spend time looking things up—I’d want to have the right commands memorized to solve the problem quickly so I can move to the next case.

This seems to argue against automation entirely, doesn’t it? Perhaps. Or perhaps it just means we need to look at the knowledge we need (and want) in a little different way (along with the monitoring systems we use to obtain that knowledge).

Humans think quick and slow. We either react based on “muscle memory,” or we must think through a situation, dig up the information we need, and weigh out the right path forward. When you are pulling a piece of stainless through a bit and the head starts to chatter, you don’t want to spend time assessing the situation and deciding what to do—you want to react.

But if you are working on an automated machine, and the bit starts to chatter, you might want to react differently. You might want to stop the process entirely and think through how to adjust the automated sequence to prevent the bit from chattering the next time through. In manual control, each work piece is important because each one is individually built. In the automated sequence, the work piece itself is subsumed within the process.

It isn’t that you know “less” in the automated process, it’s that you know different things. In the manual process, you can feel the steel under the blade, the tension and torque, and rely on your muscle memory to react when it’s needed. In the automated process, you need to know more about the actual qualities of the bit and metal under the bit, the mount, and the mill itself. You have to have more of an immediate sense of how things work if you are doing it manually, but you have to have more of a sense of the theory behind why things work the way they do if it is automated.

A couple of thoughts in this area, then. First, when we are automating things, we need to assume there will be no “fast thinking” available when things ultimately do fail (it’s not if, it’s when). We need to think through what information we are collecting, and how that information is being presented (if you read the original paper, the author spends a great deal of time discussing how to present information to the operator to overcome the ironies she illuminates) so we take maximum advantage of the “slow path” in the human brain, and stop relying on the “fast path” so much. Second, as we move towards an automated world, we need to start learning, and teaching, more about why and less about how, so we can prepare the “slow path” to be more effective—because the slow path is the part of our thinking that’s going to get more of a workout.
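One small, practical way to act on the first point is to have the automation feed the slow path: every automated step records what it observed, what it decided, and why, so the human taking over is not starting from a blank screen. Here is a sketch of the shape of that; the step names, fields, and checks are all invented.

```python
# A sketch of automation that feeds the operator's "slow path": each step
# records its inputs, its decision, and its reasoning, so a human taking
# over after a failure has context instead of a blank screen. The step
# names and checks are invented for illustration.
import json
from datetime import datetime, timezone

journal = []

def automated_step(name, decision, reasoning, observations):
    journal.append({
        "time": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "step": name,
        "observations": observations,
        "decision": decision,
        "reasoning": reasoning,
    })

automated_step(
    name="drain-leaf-12",
    decision="deferred",
    reasoning="peer leaf-11 already drained; draining both halves the pod capacity",
    observations={"leaf-11": "drained", "pod-utilization": "71%"},
)

# When the automation stops and a human takes over, the journal is the
# hand-off document.
print(json.dumps(journal, indent=2))
```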

Simon Weckert recently hacked Google Maps into guiding drivers around a street through a rather simple mechanism: he placed 95 cellphones, all connected to Google Maps, in a little wagon and walked down the street with the wagon in tow. Maps saw this group of cell phones as a very congested street—95 cars cannot even physically fit into the street he was walking down—and guided other drivers around the area. The idea is novel, and the result rather funny, but it also illustrates a weakness in our “modern scientific mindset” that often bleeds over into network engineering.

The basic problem is this: we assume users will use things the way we intend them to. This never works out in the real world, because users are going to use wrenches as hammers, cell phones as if they were high-end cameras, and many other things in ways they were never intended. To make matters worse, users often “infer” the way something works, and adapt their actions to get what they want based on their inference. For instance, everyone who drives “reverse-engineers” the road in their head, thinking about what the maximum safe speed might be, etc. Social media users do the same thing when posting or reading through their timeline, causing people to create novel and interesting ideas about how these things work that have no bearing on reality.

As folks who work in the world of networks, we often “reverse-engineer” a vendor product in much the same way drivers “reverse-engineer” roads and social media users “reverse-engineer” the news feed—we observe how it works in some circumstances, we read some of the documentation, we infer how it must work based on the information we have, and then we design around how we think it works. Sometimes this is a result of abstraction—the vendor has saved us from learning all the “technical details” to make our lives easier. And sometimes abstraction does make our lives easier—but sometimes abstraction makes our lives harder.

I’m reminded of a time I was working with a cable team to bring a wind speed/direction system back up. The system in question relied on several miles of 12c12 cable across which a low voltage signal was driven off a generator attached to an impeller. The folks working on the cable could “see” power flowing on the meter after their repair, so why wouldn’t it work?

In some cases, then, our belief about how these things work is completely wrong, and we end up designing precisely the wrong thing, or doing precisely the wrong thing to bring a failed network back on-line.

Folks involved in networks face this on the other side of the equation, as well—we supply application developers and business users with a set of abstractions they don’t really need to understand. In using them, however, they develop “folk theories” about how a network works, coming to conclusions that are often counter-productive to what they are trying to get done. The person in the airline lounge who tells you to reboot your system to see if the WiFi will work doesn’t really understand what the problem is; they just know “this worked once before, so maybe it will work now.”

There is nothing wrong per se with this kind of “reverse-engineering”—we’re going to encounter it every time we abstract things, and abstracting things is necessary to scale. On the other hand, we’re supposed to be the “engineer in the middle”—the person who knows how to relate to the vendor and the user, bridging the gap between product and service. That’s how we add value.

There are some places, like with vendor-supplied gear, that we are dealing with an abstraction we simply cannot rip the lid off. There are many times when we cannot learn the “innards” because there are 24 hours in a day, you cannot learn all that needs to be learned in the available timeframe, and there are times, as a human, that you need to back off and “do something else.” But… there are times when you really need to know what “lies beneath the abstraction”—how things really work.

I suspect the times when understanding “how it really works” would be helpful are very common—and that we would all live a world with a little less vendor hype during the day, and a lot less panic during the night, if we put a little more priority on learning how networks work.

One of my pet peeves about the network “engineering” world is this: we do too little engineering and too much administration. What brought this to mind this week is an article about Margaret Hamilton and the time she spent working on software development for the Apollo space program, and the lessons she learned about software development there. To wit—

Engineering—back in 1969 as well as here in 2020—carries a whole set of associated values with it, and one of the most important is the necessity of proofing for disaster before human usage. You don’t “fail fast” when building a bridge: You ensure the bridge works first.

Sounds simple in theory—but it is not in practice.

Let’s take, as an example, replacing some of the capacity in your data center, which is designed around a rather traditional two-layer hierarchy of aggregation and core. If you’ve built your network with a decent modular design, you buy enough new routers (or switches—but let’s use routers here) to build out a new aggregation module, the additional firewalls and other middleboxes you need, and the additional line cards to scale the core up. You unit test everything you can in the lab, understanding that you will not be able to fully test in the production network until you arrange a maintenance window. If you’re automating things, you build (and potentially test) the scripts—if you are smart, you will test these scripts in a virtual environment before using them.

You arrange the maintenance window, install the hardware, and … run the scripts. If it works, you go to bed, take a long nap, and get back to work doing “normal maintenance stuff” the next day. Of course, it rarely works, so you preposition some energy bars, make certain you have daycare plans, and put the vendor’s tech support number on speed dial.

What’s wrong with this picture? Well, many things, but primarily: this is not engineering. Was there any thought put into how to test beyond the individual unit level? Is there any way to test realistic traffic flows while connecting the new module to the network without impacting the rest of the network’s operation? Is there any real rollback plan in case things go wrong? Can there be?

In “modern” network design, none of these things tend to exist because they cannot exist. They cannot exist because we have not truly learned to do design life-cycles or truly modular designs. In the software world, if you don’t do modular design, it’s either because you didn’t think it through, or because you thought it through and decided the trade-off just wasn’t worth it. In the networking world, we play around the edges of resilient, modular designs, but networking folks don’t tend to know the underlying technologies—and how they work—well enough to understand how to divide a problem into modules correctly, and the interfaces between those modules.

Let’s consider the same example, but with some engineering principles applied. Instead of a traditional two-layer hierarchy, you have a single-SKU spine and leaf fabric with clearly defined separation between the fabric and pods, clearly defined underlay and overlay protocols, etc. Now you can build a pod and test it against a “fake fabric” before attaching it to the production fabric, including any required automation. Then you can connect the pod to the production fabric and bring up just the underlay protocol, testing the entire underlay before pushing the overlay out to the edge. Then you can push the overlay to the edge and test that before putting any workload on the new pod. Then you can test fake load on the new pod before pushing production traffic onto the pod…
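Expressed as code, the staging described above is just a series of explicit gates: each stage has a check that must pass before the next stage runs, and a failure stops the rollout while the new pod is still isolated from production traffic. The stage names and check functions below are hypothetical placeholders, not any particular vendor's tooling.

```python
# A sketch of the staged bring-up as explicit gates: each stage must pass
# its check before the next stage runs, and a failure stops the rollout
# while the pod is still isolated from production traffic. The stage names
# and check functions are hypothetical placeholders.

def check_lab_pod():        return True   # pod validated against the "fake fabric"
def check_underlay():       return True   # underlay adjacencies and routes verified
def check_overlay():        return True   # overlay reachability verified to the edge
def check_synthetic_load(): return True   # fake workload ran clean on the new pod
def check_canary_traffic(): return False  # small slice of production traffic

STAGES = [
    ("lab validation",         check_lab_pod),
    ("underlay on production", check_underlay),
    ("overlay to the edge",    check_overlay),
    ("synthetic load",         check_synthetic_load),
    ("production canary",      check_canary_traffic),
]

def roll_out(stages):
    for name, check in stages:
        if check():
            print(f"PASS  {name}")
        else:
            print(f"FAIL  {name} -- stop here, fix or roll back before continuing")
            return False
    print("pod promoted to full production traffic")
    return True

roll_out(STAGES)
```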

Each of these tests, other than the initial test against a lab environment, can take place on the production network with little or no risk to the entire system. You’re not physically modifying current hardware (except plugging in new cables!), so it’s easy to roll changes back. You know the lower layer parts work before putting the higher layer parts in place. Because the testing happens on the real network, these are canaries rather than traditional “certification” style tests. Because you have real modularization, you can fail fast without causing major harm to any system. Because you are doing things in stages, you can build tests that determine clean and correct operation before moving to the next stage.

This is an engineered solution—thought has been put into proper modules, how those modules connect, what information is carried across those modules, etc. Doing this sort of work requires knowing more than how to configure—or automate—a set of protocols based on what a vendor tells you to do. Doing this sort of work requires understanding what failure looks like at each point in the cycle and deciding whether to fail out or fix it.

It may not meet the “formal” process mathematicians might prefer, but neither is it the “move fast and break stuff” attitude many see in “the Valley.” It is fail fast, but not fail foolishly. And it’s where we need to move if we want to retain the title of “engineer” and not lose the confidence of the businesses who pay us to build networks that work.