The world of provider interconnection is a little … “mysterious” … even to those who work at transit providers. The decision of who to peer with, whether such peering should be paid, settlement-free, or open, and where to peer is often cordoned off into a separate team (or set of teams) that doesn’t seem to leak a lot of information. A recent paper on current interconnection practices published in ACM SIGCOMM sheds some useful light on this corner of the Internet, and hence is useful for those just trying to understand how the Internet really works.

To write the paper, the authors sent requests to fill out a survey through a wide variety of places, including NOG mailing lists and blogs. They ended up receiving responses from all seven regions (based on the RIRs, which control and maintain Internet numbering resources like AS numbers and IP addresses), 70% from ISPs, 14% from content providers, and 7% from “Enterprise” and infrastructure operators. Each of these kinds of operators will have different interconnection needs—I would expect ISPs to engage in more settlement-free peering (with roughly equal traffic levels), content providers to engage in more open peering (settlement-free connections with unequal traffic levels), IXs to do mostly local peering (not between regions), and “enterprises” to engage mostly in paid peering. The survey also classified respondents by their regional footprint (how many regions they operate in) and size (how many customers they support).

The survey focused on three facets of interconnection: the time required to form a connection, the reasons given for interconnecting, and the parameters included in the peering agreement. These largely describe the status quo in peering—interconnections as they are practiced today. As might be expected, connections at IXs are the quickest to form. Since IXs are normally set up to enable peering, it makes sense that the preset processes and communications channels enabled by an IX would make the peering process a lot faster. According to the survey results, the most common timeframe to complete peering is days, with about a quarter taking weeks.

Apparently, the vast majority (99%!) of peering arrangements are by “handshake,” which means there is no legal contract behind them. This is one reason Network Operator Groups (NOGs) are so important (a topic of discussion on Hedge 31, dropping next week); the peering workshops are vital in building and keeping the relationships behind most peering arrangements.

On-demand connectivity is a new trend in inter-AS peering. For instance, Interxion recently worked with LINX and several other IXs to develop a standard set of APIs allowing operators to peer with one another in a standard way, often reducing the technical side of the peering process to minutes rather than hours (or even days). Companies are moving into this space, helping operators understand who they should peer with, and building pre-negotiated peering contracts with many operators. While current operators seem to be aware of these options, they do not seem to be using these kinds of services yet.
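To make the idea concrete, here is a minimal sketch of what “peering as an API call” might look like. The base URL, paths, payload fields, and token handling are all hypothetical; they illustrate the general pattern of these IX APIs, not any particular implementation’s actual specification.

```python
# A minimal sketch of requesting a peering session through an IX-provided API.
# The endpoint, payload fields, and credential handling are hypothetical and
# shown only to illustrate the pattern, not any real IX API specification.
import requests

IX_API = "https://ixp.example.net/api/v1"  # hypothetical IX endpoint
TOKEN = "replace-with-your-ix-credential"  # credential issued by the IX


def request_peering(my_asn: int, peer_asn: int, port_id: str) -> dict:
    """Ask the IX to provision a peering session between two member ASes."""
    payload = {
        "requesting_asn": my_asn,
        "peer_asn": peer_asn,
        "port": port_id,
        "session_type": "settlement-free",
    }
    resp = requests.post(
        f"{IX_API}/peering-requests",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    # An IX would typically return a request ID and provisioning status here.
    return resp.json()


if __name__ == "__main__":
    print(request_peering(64500, 64501, "port-lax-0042"))
```

The point is not the specific calls; it is that the negotiation and provisioning steps that once took an email thread and a maintenance window can be reduced to a request and a status check.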

While this paper is interesting, it does leave many corners of the inter-AS peering world unexposed. For instance, I would like to know how correct my assumptions are about the kinds of peering used by each of the different classes of providers, and whether there are regional differences in the kinds of peering. While it’s interesting to survey the reasons providers pursue peering, it would be even more interesting to understand the process of making a peering determination more fully. What kinds of tools are available, and how are they used? These would be useful bits of information for an operator who only connects to the Internet, rather than being part of the Internet infrastructure (perhaps a “non-infrastructure operator,” rather than an “enterprise”), in understanding how their choice of upstream provider can impact the performance of their applications and network.

Note: this is another useful, but slightly older, paper on the topic of peering.

The last few weeks have seen a massive shift towards working from home because of the various “stay at home” orders being put in place around the world—a trend I consider healthy in the larger scheme of things. Of course, there has also been an avalanche of “tips for working from home” articles. I figured I’d add my own to the pile.

A bit of background—I first started working from home regularly around twenty years ago, while on the global Escalation Team at Cisco. I started by working from home one day a week, to focus on projects, slowly increasing over time, ultimately working from the office one day a week when I transitioned to the Deployment and Architecture Team. Since that team was scattered all over the world (a few folks in Raleigh, two in Reading in England, one in Brussels, one in Scotts Valley, and one or two in the San Jose, California, area), we had to learn to work together remotely if we had any hope of being effective. Further, we (as a family) have home-schooled “all the way through”—my oldest daughter is nearing the end of high school, currently overlapping with some college work. I have always had the entire family at home most of the time.

So, forthwith, some thoughts and experiences, including some you might not have ever heard, and some myth busting along the way.

Probably the most common piece of advice I hear is you should set a schedule and stick to it. On the other hand, the most common thing people say they like about working from home is the flexibility of the schedule. Somehow no one ever seems to put these two together and say, “hey, one of these things just doesn’t belong here!” (remember that old song, courtesy of public television). When you are working from home, setting a fixed schedule similar to being in an office is precisely the wrong thing to do. This is a myth.

Instead, what you should do is try to intentionally carve out several hours a day of “quiet time” to get things done. These might fall at different times of the day for your house, and they’re not likely to be consistent on a daily, weekly, or monthly basis. The most productive times of the day are going to shift—get used to it. Most of my family are late sleepers or tend to get up in the morning and do “quiet things,” so the mornings tend to be much more productive for me than the afternoons. This isn’t always true; sometimes the evenings are quieter. There are few days, however, without some quiet time to be found.

The key to scheduling is to plan project work for these quiet times, whenever they happen to be. Given the schedule of my house, I get up in the morning and immediately dive into project work. I leave email and reading RSS feeds for the afternoon, because I know I won’t be able to concentrate as well during those times. On the other hand, I try not to get frustrated if my routine is broken up—just put off the project work until the time when it is quiet. Roll with the punches, rather than trying to punch against the rolls, when it comes to time. The only fixed point should be to think about how many hours of quiet time you need per week and figure out how to get it.

Don’t worry about having 8 hours solid, or whatever. Take breaks to eat lunch with family or friends. Take breaks to have a cup of hot liquid. Run out to the store in the afternoon. Exercise in the middle of the day. Take advantage of the flexible schedule to break work up into multiple pieces, rather than being glued to an office desk for 8 hours, regardless of your productivity level through that time. This is not the office—don’t act like it is.

The second most common piece of advice I hear is find a space and stick to it. This is not a myth—it is absolutely true. I have a table in the basement in one place, and a dedicated room as an office in another. When we go on vacation as a family, I either make certain to have a large enough room to have space to work, or I scope out someplace in the hotel where I can “set up shop.” Space is just that important.

Sometimes you hear get the right equipment. This, also, I can attest to. Tools, however, are a bit of an obsession for me; I will spend hours perfecting a tool, even if it does not, ultimately, save me as much time as I originally thought. I am a stickler for the right monitor, chair, desk space, footrests, and keyboards. I play with lighting endlessly; I have finally settled on a setup with a dim monitor, dim room lighting, bias lighting behind the monitor, and a backlit keyboard. I’ve concluded that reducing the sheer amount of light being poured into my eyes, across the board, helps increase the amount of time I can keep working. Some people prefer two-monitor setups—I prefer a single large monitor (31in right now). I don’t use a mouse, but a Wacom tablet. I’ve recently added an air purifier to my office space—I think this makes a difference. I don’t have a trash can close by—it’s useful to force myself to get up every now and again. Your physical environment is a matter of personal preference, but pay attention to it. This will probably have as much impact on your productivity as anything else you do.

Overall, don’t let other people tell you what tools are the best. I’ve been told for more years than I can remember that I should stop using Microsoft Word for long-form writing. You should switch to markdown, you should switch to Scrivener, you should use FocusWriter… Well, whatever. I’ve written … around 13 books, and I have two more in progress. I’ve written thousands of blog posts and articles. I’ve written thousands of pages of research papers, and I’m currently working on my dissertation. All in Microsoft Word. I’ve tried other tools, but ultimately I’ve found Word, with my own customized interface settings, to be just as fast as or faster than anything else I’ve worked with.

One thing you almost never hear is that the team needs to change: if one person is remote, act like everyone is. There is nothing more disruptive, or less useful, than having one person on the phone while everyone else is sitting in a conference room. If one person has to dial in, everyone should dial in. Setting up times to “just hang out” as a “meeting” is also sometimes helpful. If you have a fully remote team, dedicate “in real life” time to doing things you cannot do in meetings, and dedicate meeting time to things you cannot do through email, a chat app, or some other means.

Finally, some people are neatniks while others are sloppy. I tend towards the extreme neat end of things, arranging even my virtual desktop “just so” (I actually do not have any icons on my desktop at all, and I’m very picky about what I put in my start menu and/or command bar).

The key thing is, I think, to pay attention to what helps and what doesn’t, and not be afraid to experiment to find a better way.

Security often lives in one of two states. It’s either something “I” take care of, because my organization is so small there isn’t anyone else taking care of it. Or it’s something those folks sitting over there in the corner take care of because the organization is, in fact, large enough to have a separate security team. In both cases, however, security is something that is done to networks, or something thought about kind of off on its own in relation to networks.

I’ve been trying to think of ways to challenge this way of thinking for many years—a long time ago, in a universe far away, I created and gave a presentation on network security at Cisco Live (raise your hand if you’re old enough to have seen this presentation!).

Reading through my paper pile this week, I ran into a viewpoint in the Communications of the ACM that revived my older thinking about network security and gave me a new way to think about the problem. The author’s expression of the problem of supply chain security can be used more broadly. The illustration below is replicated from the one in the original article; I will use this as a starting point.

This is a nice way to visualize your attack surface. The columns represent applications or systems, and the rows represent vulnerabilities. The colors represent the risk, as explained across the bottom of the chart. One simple way to use this would be just to list all the things in the network along the top as columns, and all the things that can go wrong as rows, and use it in the same way. This would just be a cut-down, or more specific, version of the same concept.

Another way to use this sort of map—and this is just a nub of an idea, so you’ll need to think about how to apply it to your situation a little more deeply—is to create two groups of columns: one column for each application that relies on network services, and one column for each piece of network infrastructure (devices and services) you rely on. Rows would be broken up into three classes, from top to bottom—protection, services, and systems. In the protection group you would have things the network does to protect data and applications, like segmentation, preventing data exfiltration, etc. In the services group, you would mostly have various forms of denial of service and configuration. In the systems group, you would have individual hardware devices, protocols, software packages used to make the network “go,” etc. Maybe something like the illustration below.

If you place the most important applications towards the left, and the protection towards the top, the more severe vulnerabilities will be in the upper left corner of the chart, with less severe areas falling to the right and (potentially) towards the bottom. You would fill this chart out starting in the upper left, figuring out what kinds of “protection” the network, as a service, can offer to each application. These should, in turn, roll down to the services the network offers and their corresponding configurations. These should, in turn, roll across to the devices and software used to create these services, and then roll back down to the vulnerabilities of those services and devices. For instance, if sales management relies on application access control, and application access control relies on proper filtering, and filtering is configured on BGP and some sort of overlay virtual link to a cloud service… You start to get the idea of where different kinds of services rely on underlying capabilities, and then how those are related to suppliers, hardware, etc.
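As a rough illustration (not the method from the original article, just one way to prototype the nub of the idea), here is a small sketch that builds the chart as a simple matrix: columns for applications and infrastructure, rows grouped into protection, services, and systems. Every name and score in it is an invented placeholder.

```python
# A sketch of the attack-surface chart as a matrix. Columns are applications
# and infrastructure elements; rows are grouped into protection, services,
# and systems. All names and scores below are invented placeholders.

COLUMNS = ["sales-app", "hr-app", "edge-routers", "dc-fabric"]  # hypothetical

ROWS = {
    "protection": ["segmentation", "exfiltration control", "access filtering"],
    "services":   ["DoS exposure", "configuration drift"],
    "systems":    ["BGP implementation", "cloud overlay", "NOS software"],
}

# Severity/reliance scores, 0 (no exposure) to 3 (severe); unknown cells omitted.
scores = {
    ("segmentation", "sales-app"): 3,
    ("access filtering", "sales-app"): 3,
    ("configuration drift", "edge-routers"): 2,
    ("BGP implementation", "edge-routers"): 2,
    ("cloud overlay", "dc-fabric"): 1,
}


def cell(row: str, col: str) -> str:
    """Render one cell of the heat map; '.' means not yet assessed."""
    return str(scores.get((row, col), "."))


def print_chart() -> None:
    width = max(len(r) for group in ROWS.values() for r in group) + 2
    print(" " * width + "  ".join(c[:12].ljust(12) for c in COLUMNS))
    for group, rows in ROWS.items():
        print(f"-- {group} --")
        for row in rows:
            print(row.ljust(width) + "  ".join(cell(row, col).ljust(12) for col in COLUMNS))


if __name__ == "__main__":
    print_chart()
```

Once the structure is in place, swapping out the scoring function gives you different views of the same chart, which is where the coloring in the next paragraph comes in.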

You can color the squares in different ways—perhaps the way the original article does, or by your reliance on an outside vendor to solve each problem, etc. Once the basic chart is in place you can use multiple color schemes to get different views of the attack surface by using the chart as a sort of heat map.

Again, this is something of a nub of an idea, but it is a potentially interesting way to get a single view of the entire network ecosystem from a security standpoint, know where things are weak (and hence need work), and understand where cascading failures might happen.

There is no enterprise, there is no service provider—there are problems, and there are solutions. I’m certain everyone reading this blog, or listening to my podcasts, or listening to a presentation I’ve given, or following along in some live training or book I’ve created, has heard me say this. I’m also certain almost everyone has heard the objections to my argument—that hyperscalers’ problems are not your problems, that the technologies and solutions providers use are fundamentally different from what enterprises require.

Let me try to recap some of the arguments I’ve heard used against my assertion.

The theory that enterprise and service provider networks require completely different technologies and implementations is often grounded in scale. Service provider networks are so large that they simply must use different solutions—solutions that you cannot apply to any network running at a smaller scale.

The problem with this line of thinking is it throws the baby out with the bathwater. Google is using automation to run their network? Well, then… you shouldn’t use automation because Google’s problems are not your problems. Microsoft is deploying 100G Ethernet over fiber? Then clearly enterprise networks should be using Token Ring or ARCnet because… Microsoft’s problems are not your problems.

The usual answer is—“I’m not saying we shouldn’t take good ideas when we see them, but we shouldn’t design networks the way someone else does just because.” I don’t see how this clarifies the solution, though—when is it a good idea or a bad one? What is our criterion to decide what to adopt and what not to adopt? Simply saying “X’s problems aren’t your problems” doesn’t really give me any actionable information—or at least I’m not getting it if it’s buried in there someplace.

Instead—maybe—just maybe—we are looking at this all wrong. Maybe there is some other way to classify networks that will help us see the problem set better.

I don’t think networks are undifferentiated—I think the enterprise/service provider/hyperscaler divide is not helpful in understanding how different networks are … different, and how to correctly identify an environment and build to it. Reading a classic paper in software design this week—Programs, Life Cycles, and Laws of Software Evolution—brought all this to mind. In writing this paper, Meir Lehman was facing many of the same classification problems, just in software development rather than in building networks.

Rather than saying “enterprise software is different than service provider software”—an assertion absolutely no one makes—or even “commercial software is different than private software, and developers working in these two areas cannot use the same tools and techniques,” Lehman posits there are three kinds of software systems. He calls these S-Programs, in which the problem and solution can be fully specified; P-Programs, in which the problem can be fully specified, but the program can only be partially specified because of complexity and scale; and E-Programs, where the program itself becomes part of the world it models. Lehman thinks most software will move towards S-Program status as time moves on—something that hasn’t happened (the reasons are out of scope for this already-too-long blog post).

But the classification is useful. For S-Programs, the inputs and outputs can be fully specified, full-on testing can take place before the software is deployed, and lifecycle management is largely about making the software more fully conform to its original conditions. Maybe there are S-Networks, too? Single-purpose networks which are aimed at fulfilling one well-defined thing, and only that thing. Lehman talks about learning how to break larger problems into smaller ones so the S-Programs can be dealt with separately—is this anything different than separating out the basic problem of providing IP connectivity in a DC fabric underlay, or even providing basic IP connectivity in a transit or campus network, treating it as a separate module with fairly well-defined design goals and measurements?

Lehman talks about P-Programs, where the problem is largely definable, but the solutions end up being more heuristic. Isn’t this similar to a traffic engineering overlay, where we largely know what the goals are, but we don’t necessarily know what specific solution is going to be needed at any moment, and the complete set of solutions is just too large to initially calculate? What about E-Programs, where the software becomes a part of the world it models? Isn’t this like the intent-based stuff we’ve been talking about in networking for going on 30 years now?

Looking at it another way, isn’t it possible that some networks are largely just S-Networks? And others are largely E-Networks? And that these classifications have nothing to do with whether the network is being built by what we call an “enterprise” or a “service provider?” Isn’t it possible that S-Networks should probably all use the same basic sort of structure and largely be classified as a “commodity,” while E-Networks will all be snowflakes, and largely classified as having high business importance?

Just like I don’t think the OSI model is particularly helpful in teaching and understanding networks any longer, I don’t find the enterprise/service provider/hyperscaler model very useful in building and operating networks. The enterprise/service provider divide tends to artificially limit the transfer of ideas that ought to move freely, and artificially “hype up” some networks while degrading others—largely based on perceptions of scale.

Scale != complexity. It’s not about service providers and enterprises. It doesn’t matter if Google’s problems are not your problems; borrowing from the hyperscalers is not a “bad thing.” It’s just a “thing.” Think clearly about the problem set, understand the problem set, and borrow liberally. There is no such thing as a “service provider technology,” nor is there any such thing as an “enterprise technology.” There are problems, and there are solutions. To be an engineer is to connect the two.

Over at his blog The Forwarding Plane, Nick Buraglio posted about embracing change and how technology is mostly unimportant. In the technology-driven world networking folks live in, how can technology be mostly inconsequential? One answer is people drive technology, rather than the other way around—but this misses the real-world consequences of technological adoption on culture. To paraphrase Andy Crouch, technology makes some things possible that were once impossible, some things easy that were once hard, some things hard that were once easy, and some things impossible that were once possible.

There is another answer to this question, though—the real versus the perceived rate of change. When I was a kid, I would ride around with my uncle in his Jeep, a 1968 CJ5 with a soft top and soft doors. He would take the doors off when he took the top down, and—these older Jeeps being much smaller than current models—you could look just to your right and see the road passing by just there under your feet. What always amazed me was I could make myself think I was moving at different speeds just by changing my focus. If I looked across a field at a telephone pole in the distance, it didn’t seem like I was moving all that quickly. If I stared down at the white line on the side of the road, it looked like I was moving very fast indeed. By shifting my focus from here to there, I could adjust my perceived speed.

Here is where the focus on details becomes critical in networking. We do tend to focus on the details. To make matters worse, the average network operator tends to be something of a generalist. Being a generalist focused on details can be a frightening experience.

If you live entirely in the world of Ethernet, then you see past and future changes in the context of the history of Ethernet. This is something like looking at an object a few hundred feet off the road, perhaps. Things are moving quickly, but they aren’t insanely fast, blurry, up close and personal. If you live wholly in the world of routing protocols, you are going to have a different picture, but the apparent speed is going to be similar, or perhaps even slower.

If you’re a generalist who focuses on detail, though, you’re going to be staring at the white line—at all the features, physical form factors, and products created by a combination of the changes made in routing and Ethernet. If there are two changes in Ethernet, and two in routing, product marketing will create at least four, and probably eight, new features out of the combination of these two, across twenty or thirty product lines. Each of these features will likely be called something different and sold to solve completely different problems.
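If it helps to see the multiplication written out, here is a trivial sketch; the change names and the product-line count are invented purely for the sake of the arithmetic.

```python
# A quick illustration of how a handful of underlying changes multiplies into
# a pile of marketed "features." All names and counts here are invented.
from itertools import product

ethernet_changes = ["400G optics", "MACsec offload"]          # two changes in Ethernet
routing_changes  = ["new BGP extension", "IGP timer rework"]  # two in routing

combos = list(product(ethernet_changes, routing_changes))
print(len(combos), "base combinations:")  # 2 x 2 = 4
for eth, rtg in combos:
    print("  ", eth, "+", rtg)

# Market each combination across a couple of dozen product lines, each under a
# different name, and the count of "new features" explodes.
product_lines = 25
print("potential marketed features:", len(combos) * product_lines)
```

Four underlying changes become a hundred differently named features before anyone has had time to learn what actually changed.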

Staring at the white line is fun at first, then mesmerizing, then it is frightening… then finally it is just plain dull. But let’s talk about the terrifying bit because it’s the scary stage that makes us all reject change out of fear for the future. And, trust me, a kid sitting in a car with no doors staring down at the white line while his uncle drives 60 miles-per-hour is going to be frightened from time to time.

The point Nick is making is we should back off the details and embrace the change. This is great advice—but how? It can feel like you’re going to run off the road if you don’t keep staring at the white line. The answer lies in putting your eyes someplace else—on the posts way out in the field. Ethernet still solves the same problems Bob Metcalfe designed it to solve, and it always solves those problems using a smallish set of solutions. Routing still solves the same problems it did when Dijkstra was mulling over toy problems to show off the processing power of a new computer some 60-odd years ago, and it still solves these problems using a small set of tools.

If you stop looking at the white line and start looking at the poles out in the distance, you’ll not only save your sanity, you’ll also permit yourself to start looking at the sociological and business impacts of new technology, including what matters and what doesn’t.

Two hundred years ago, if you wanted to get from Memphis to say… Lake Providence, Mississippi, you could take a boat directly between the two. Today you would take a car, and the only paths between the two are pretty round-about, “small country road” sorts of affairs. On the other hand, getting from Memphis, Tennessee, to Atlanta, Georgia, is now easy, while a couple of hundred years ago it would have been a big deal indeed. The sociological changes wrought by moving from rivers to roads are almost impossible to fathom. But you wouldn’t know that if you just stared at the white lines.

Note: I’m off in the weeds a little this week thinking about cyber-insurance because of a paper that landed in one of my various feeds—while this isn’t something we often think about as network operators, it does impact the overall security of the systems we build.

When you go to the doctor for a yearly checkup, do you think about health or insurance? You probably think about health, but the practice of going to the doctor for regular checkups began because of large life insurance companies in the United States. These companies began using statistical methods to assess risk, or to build actuarial tables they could use to set premiums properly. Originally, life insurance companies relied on the “hunches” of their salesmen, combined with some checking by people in the “back office,” to determine the correct premium. Over time, they developed networks of informers in local communities, such as doctors, lawyers, and even local politicians, who could describe the life of anyone in their area, providing the information the company needed to set premiums correctly.

Over time, however, statistical methods came into play, particularly relying on an initial visit with a doctor. The information these insurance companies gathered, however, gave them insight into what habits increased or decreased longevity—they decided they should use this information to help shape people’s lives so they would live longer, rather than just using it to discover the correct premiums. To gather more information, and to help people live better lives, life insurance companies started encouraging yearly doctor visits, even setting up non-profit organizations to support the doctors who gave these examinations. Thus was born the yearly doctor’s visit, the credit rating agencies, and a host of other things we take for granted in modern life.

You can read about the early history of life insurance and its impact on society in How Our Days Became Numbered.

What does any of this have to do with networks? Only this—we are in much the same position in the cyber-insurance market right now as the life insurance market in the late 1800s through the mid-1900s—insurance agents interview a company and make a “hunch bet” on how much to charge the company for cyber-insurance. Will cyber-insurance ever mature to the same point as life insurance? According to a recent research paper, the answer is “probably not.”  Why not?

First, legal restrictions will not allow a solution such as the one imposed by payment processors. Second, there does not seem to be a lot of leverage in cyber-insurance premiums. The cost of increasing security is generally much higher than any possible premium discount, making it cheaper for companies just to pay the additional premium than to improve their security posture. Third, there is no real evidence tying the use of specific products to reductions in security breaches. Instead, network and data security tend to be tied to practices rather than products, making it harder for an insurer to precisely specify what a company can and should do to improve their posture.
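A back-of-the-envelope comparison shows why the premium leverage is so weak; every number below is invented purely for illustration.

```python
# A toy comparison of the cost of improving security posture against the
# premium discount an insurer might offer. All figures are made up.

security_program_cost = 400_000   # hypothetical annual cost of better posture
base_premium          = 120_000   # hypothetical annual cyber-insurance premium
discount_for_controls = 0.15      # hypothetical discount for demonstrated controls

premium_savings = base_premium * discount_for_controls
print(f"premium savings:       ${premium_savings:,.0f}/year")
print(f"cost of improvements:  ${security_program_cost:,.0f}/year")

if premium_savings < security_program_cost:
    print("cheaper to pay the higher premium than to improve the posture")
```

Unless the discount approaches the cost of the security program, the rational move for the insured is to pay the premium and move on.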

Finally, the largest problem is measurement. What does it look like for a company to “go to the doctor” regularly? Does this mean regular penetration tests? Standardizing penetration tests is difficult, and it can be far too easy to counter pentests without improving the overall security posture. Like medical care in the “early days,” there is no way to know whether enough information has been gathered on the population to correctly understand the kinds of things that improve “health”—but there is no way to compel reporting (much less accurate reporting), nor is there any way to compel insurance companies to share the information they have about cyber incidents.

Will cyber-insurance exist as a “separate thing” in the future? The authors largely answer in the negative. The pressures of “race to the bottom,” providing maximal coverage with minimal costs (which they attribute to the structure of the cyber-insurance market), combined with lack of regulatory clarity and inaccurate measurements, will probably end up causing cyber-insurance to “fold into” other kinds of insurance.

Whether this is a positive or negative result is a matter of conjecture—the legacy of yearly doctor’s visits and public health campaigns is not universally “good,” after all.

Ironies of Automation

In 1983 I was just joining the US Air Force, and still deeply involved in electronics (rather than computers). I had written a few programs in BASIC and assembler on a COCOII with a tape drive, and at least some of the electronics I worked on used vacuum tube triodes, plate oscillators, and operational amplifiers. This was a magical time, though—a time when “things” were being automated. In fact, one of the reasons I left electronics was because the automation wave left my job “flat.” Instead of looking into the VOR shelter to trace through a signal path using a VOM (remember the safety L!) and oscilloscope, I could sit at a terminal, select a few menu items, grab the right part off the depot shelf, replace, and go home.

Maybe the newer way of doing things was better. On the other hand, maybe not.

What brings all this to mind is a paper from 1983 titled The Ironies of Automation. Because of our arrogant belief that we can remake the world through disruption (was the barbarian disruption of Rome in 455 the good sort of disruption, or the bad sort?), we often think we can learn nothing from the past. Reality check: the past is prelude.

What can the past teach us about automation? This is as good a place to start as any other:

There are two general categories of task left for an operator in an automated system. He may be expected to monitor that the automatic system is operating correctly, and if it is not he may be expected to call a more experienced operator or to take-over himself. We will discuss the ironies of manual take-over first, as the points made also have implications for monitoring. To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.

This is the first of the ironies of automation Lisanne Bainbridge discusses—and this is the irony I’d like to explore. The irony she is articulating is this: the less you work on a system, the less likely you are to be able to control that system efficiently. Once a system is automated, however, you will not work on the system on a regular basis, but you will be required to take control of the system when the automated controller fails in some way. Ironically, in situations where the automated controller fails, the amount of control required to make things right again will be greater than in normal operation.

In the case of machine operation, it turns out that the human operator is required to control the machine in just the situations where the least amount of experience is available. This is analogous to the automated warehouse in which automated systems are used to stack and sort material. When the automated systems break down, there is absolutely no way for the humans involved to figure out why things are stacked the way they are, nor how to sort things out to get things running again.

This seems intuitive. When I’m running the mill through manual control, after I’ve been running it for a while (I’m out of practice right now), I can “sense” when I’m feeding too fast, meaning I need to slow down to prevent chatter from ruining the piece, or worse—a crash resulting in broken bits of bit flying all over the place.

How does this apply to network operations? On the one hand, it seems like once we automate all the things we will lose the skills of using the CLI to do needed things very quickly. I always say “I can look that command up,” but if I were back in TAC, troubleshooting a common set of problems every day, I wouldn’t want to spend time looking things up—I’d want to have the right commands memorized to solve the problem quickly so I can move to the next case.

This seems to argue against automation entirely, doesn’t it? Perhaps. Or perhaps it just means we need to look at the knowledge we need (and want) in a little different way (along with the monitoring systems we use to obtain that knowledge).

Humans think fast and slow. We either react based on “muscle memory,” or we must think through a situation, dig up the information we need, and weigh out the right path forward. When you are pulling a piece of stainless through a bit and the head starts to chatter, you don’t want to spend time assessing the situation and deciding what to do—you want to react.

But if you are working on an automated machine, and the bit starts to chatter, you might want to react differently. You might want to stop the process entirely and think through how to adjust the automated sequence to prevent the bit from chattering the next time through. In manual control, each work piece is important because each one is individually built. In the automated sequence, the work piece itself is subsumed within the process.

It isn’t that you know “less” in the automated process, it’s that you know different things. In the manual process, you can feel the steel under the blade, the tension and torque, and rely on your muscle memory to react when it’s needed. In the automated process, you need to know more about the actual qualities of the bit and the metal under the bit, the mount, and the mill itself. You have to have more of an immediate sense of how things work if you are doing it manually, but you have to have more of a sense of the theory behind why things work the way they do if it is automated.

A couple of thoughts in this area, then. First, when we are automating things, we need to be very careful to assume there is no “fast thinking” when things ultimately do fail (it’s not if, it’s when). We need to think through what information we are collecting, and how that information is being presented (if you read the original paper, the author spends a great deal of time discussing how to present information to the operator to overcome the ironies she illuminates) so we take maximum advantage of the “slow path” in the human brain, and stop relying on the “fast path” so much. Second, as we move towards an automated world, we need to start learning, and teaching, more about why and less about how, so we can prepare the “slow path” to be more effective—because the slow path is the part of our thinking that’s going to get more of a workout.