According to Maor Rudick, in a recent post over at Cloud Native, programming is 10% writing code and 90% understanding why it doesn’t work. The same holds for deploying network protocols, security, or anything else that requires thinking about where and how it is done. I’m not just talking about the configuration, either—why was this filter deployed here rather than there? Why was this BGP community used rather than that one? Why was this aggregation range used rather than some other? Even in a fully automated world, the saying holds true.

So how can you improve the understandability of your network design? Maor defines understandability as “the dev who creates the software is to effortlessly … comprehend what is happening in it.” Continuing—“the more understandable a system is, the easier it becomes for the developers who created it to change it in a way that is safe and predictable.” What are the elements of understandability?

Documentation must be complete, clear, concise, and organized. The two places I most often see documentation fall short are completeness and organization. Why something is done, when it was last changed, and why it was changed are often missing. The person making the change just assumes “I’ll remember this, or someone will figure it out.” You won’t, and they won’t. Concise is the “other side” of complete … Recording insubstantial changes just adds information that won’t ever be needed. You have to balance between enough and too much, of course.

Organization is another entire problem in documentation—most people have a favorite way to organize things. When you get a team of people all organizing things based on their favorite way, you end up with a mess. Going back in time … I remember that just about everyone who was assigned to the METNAV shop began their time by re-organizing the tools. Each time the re-organization made things so much easier to find, and improved the MTTR for the airfield equipment we supported … After a while, you’d think someone would ask, “Does re-organizing all the tools every year really help? Or are you just making stuff up for new folks to do?”

Moving beyond documentation, what else can we do to make our networks more understandable?

First, we can focus on actually making networks simpler. I don’t mean just glossing things over with a pretty GUI, or automating thousands of lines of configuration using Python. I mean taking steps like using protocols that are simpler to run, require less configuration, and produce more information you can use for troubleshooting—choose something like IS-IS for your DC fabric underlay rather than BGP, unless you really have several hundred thousand underlay destinations (hint: if you’ve properly separated “customer” routes in the overlay from “infrastructure” routes in the underlay, you shouldn’t have this kind of routing tangle in the underlay anyway).

What about having multiple protocols that do the same job? Do you really need three or four routing protocols, four or five tunneling protocols, and five or six … well, you get the idea. Reducing the sheer number of protocols running in your network can make a huge difference in tooling and troubleshooting time. What about having four or five kinds of boxes in your network that fulfill the same role? Okay—so maybe you have three DC fabrics, and you run each one using a different vendor. But is there any reason to have three DC fabrics, each of which has a broad mix of equipment from five different vendors? I doubt it.
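If you want to see where the sprawl lives, a quick audit is enough. Here is a minimal sketch in Python (the inventory format, role names, vendors, and protocol labels are all hypothetical) that simply counts distinct vendors and protocols per role:

```python
from collections import defaultdict

# Hypothetical inventory: one entry per device, with its role, vendor, and
# the control-plane protocols configured on it.
inventory = [
    {"role": "dc-fabric-leaf",  "vendor": "vendor-a", "protocols": {"isis", "bgp-evpn"}},
    {"role": "dc-fabric-leaf",  "vendor": "vendor-b", "protocols": {"ospf", "bgp-evpn"}},
    {"role": "dc-fabric-spine", "vendor": "vendor-a", "protocols": {"isis"}},
    {"role": "wan-edge",        "vendor": "vendor-c", "protocols": {"bgp", "ldp", "rsvp-te"}},
]

vendors_per_role = defaultdict(set)
protocols_per_role = defaultdict(set)
for device in inventory:
    vendors_per_role[device["role"]].add(device["vendor"])
    protocols_per_role[device["role"]] |= device["protocols"]

# One line per role: how many vendors, and which protocols, fill that role.
for role in sorted(vendors_per_role):
    print(f"{role}: {len(vendors_per_role[role])} vendor(s), "
          f"protocols: {sorted(protocols_per_role[role])}")
```

Roles with long protocol lists, or the same role showing up with several vendors and several protocol mixes, are the places to start asking “do we really need all of these?”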

Second, you can think about what you would measure in the case of failure, how you would measure it, and put the basic pieces in place in the design phase to do those measurements. Don’t wait until you need the data to figure out how to get at it, and what the performance impact of collecting it will be.
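As a minimal sketch of what deciding these things at design time might look like (every measurement name, source, interval, and threshold here is hypothetical), you can write the measurements, their baselines, and their alarm thresholds down as data before the fabric ever carries traffic:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    name: str           # what we measure
    source: str         # where we collect it (device, interface, counter)
    interval_s: int     # how often we collect it
    baseline: float     # expected steady-state value, recorded at turn-up
    alarm_above: float  # threshold chosen during design, not during the outage

FABRIC_MEASUREMENTS = [
    Measurement("leaf-spine link utilization", "leaf*/uplink*", 30, 0.35, 0.80),
    Measurement("underlay route count", "spine*/rib", 300, 5_000, 8_000),
    Measurement("overlay EVPN route count", "leaf*/bgp-evpn", 300, 50_000, 90_000),
]

def out_of_range(m: Measurement, observed: float) -> bool:
    """Return True when an observed value crosses the design-time threshold."""
    return observed > m.alarm_above

if __name__ == "__main__":
    for m in FABRIC_MEASUREMENTS:
        print(f"collect {m.name} from {m.source} every {m.interval_s}s "
              f"(baseline {m.baseline}, alarm above {m.alarm_above})")
```

The useful part is not the script itself, but that the baselines and thresholds were chosen while the design was still on the whiteboard.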

Third, you can think about where you put policy in your network. There is no “right” answer to this question, other than … be consistent. The first option is to put all your policy in one place—say, on the devices that connect the core to the aggregation, or the devices in the distribution layer. The second option is to always put the policy as close as possible to the source or destination of the traffic the policy impacts. In a DC fabric, you should always put policy and external connectivity in the T0 or ToR, never in the spine (it’s not a core, it’s a spine).

Maybe you have other ideas on how to improve understandability in networks … If you do, get in touch and let’s talk about it. I’m always looking for practical ways to make networks more understandable.

I think we can all agree networks have become too complex—and this complexity is a result of the network often becoming the “final dumping ground” of every problem that seems like it might impact more than one system, or everything no-one else can figure out how to solve. It’s rather humorous, in fact, to see a lot of server and application folks sitting around saying “this networking stuff is so complex—let’s design something better and simpler in our bespoke overlay…” and then falling into the same complexity traps as they start facing the real problems of policy and scale.

This complexity cannot be “automated away.” It can be smeared over with intent, but we’re going to find—soon enough—that smearing intent on top of complexity just makes for a dirty kitchen and a sub-standard meal.

While this is always “top of mind” in my world, what brings it to mind this particular week is a paper by Jen Rexford et al. (I know Jen isn’t in the lead position on the author list, but still…) called A Clean Slate 4D Approach to Network Control and Management. Of course, I can appreciate the paper in part because I agree with a lot of what’s being said here. For instance—

We believe the root cause of these problems lies in the control plane running on the network elements and the management plane that monitors and configures them. In this paper, we argue for revisiting the division of functionality and advocate an extreme design point that completely separates a network’s decision logic from the protocols that govern interaction of network elements.

In other words, we’ve not done our modularization homework very well—and our lack of focus on doing modularization right is adding a lot of unnecessary complexity to our world. The four planes proposed in the paper are decision, dissemination, discovery, and data. The decision plane drives network control, including reachability, load balancing, and access control. The dissemination plane “provides a robust and efficient communication substrate” across which the other planes can send information. The discovery plane “is responsible for discovering the physical components of the network,” giving each item an identifier, etc. The data plane carries packets edge-to-edge.

I do have some quibbles with this architecture, of course. To begin, I’m not certain the word “plane” is the right one here. Maybe “layers,” or something else that implies more of a modular concept with interactions, and less of a “ships in the night” sort of affair. My more substantial disagreement is with where “interface configuration” and reachability are placed in the model.

Consider this: reachability and access control are, in a sense, two sides of the same coin. You learn where something is to make it reachable, and then you block access to it by hiding reachability from specific places in the network. There are two ways to control reachability—by hiding the destination, or by blocking traffic being sent to the destination. Each of these has positive and negative aspects.

But notice this paradox—the control plane cannot hide reachability towards something it does not know about. You must know about something to prevent someone from reaching it. While reachability and access control are two sides of the same coin, they are also opposites. Access control relies on reachability to do its job.

To solve this paradox, I would put reachability into discovery rather than decision. Discovery would then become the discovery of physical devices, paths, and reachability through the network. No policy would live here—discovery would just determine what exists. All the policy about what to expose about what exists would live within the decision plane.
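To make the proposed split concrete, here is a minimal sketch in Python (the class names and data shapes are mine, not the paper’s): discovery reports only what exists and where it is, while every decision about what to expose is applied on top of it.

```python
from dataclasses import dataclass, field

@dataclass
class DiscoveredNetwork:
    """What exists: devices and raw reachability -- no policy lives here."""
    devices: set[str] = field(default_factory=set)
    prefixes: dict[str, str] = field(default_factory=dict)  # prefix -> attached device

@dataclass
class DecisionLayer:
    """Policy about what to expose, applied on top of discovery."""
    hidden_prefixes: set[str] = field(default_factory=set)

    def reachable(self, discovered: DiscoveredNetwork) -> dict[str, str]:
        # Access control can only hide what discovery already knows about.
        return {p: d for p, d in discovered.prefixes.items()
                if p not in self.hidden_prefixes}

if __name__ == "__main__":
    topo = DiscoveredNetwork(
        devices={"leaf1", "leaf2", "spine1"},
        prefixes={"2001:db8:1::/64": "leaf1", "2001:db8:2::/64": "leaf2"},
    )
    policy = DecisionLayer(hidden_prefixes={"2001:db8:2::/64"})
    print(policy.reachable(topo))  # only the exposed prefix remains
```

The comment inside reachable() is the paradox from above in code form: the decision layer filters a table it could never have built on its own.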

While the paper implies this kind of system must wait for some day in the future to build a network using these principles, I think you can get pretty close today. My “ideal design” for a data center fabric right now is (modified) IS-IS or RIFT in the underlay and eVPN in the overlay, with a set of controllers sitting on top. Why?

IS-IS doesn’t rely on IP, so it can serve in a mostly pure discovery role, telling any upper layers where things are and what is changing. eVPN can provide segmented reachability on top of this, as well as any other policies. Controllers can be used to manage eVPN configuration to implement intent. A separate controller can work with IS-IS on inventory and lifecycle of installed hardware. This creates a clean “break” between the infrastructure underlay and the overlays, and pretty much eliminates any dependence the underlay has on the overlay. Replacing the overlay with a more SDN’ish solution (rather than BGP) is perfectly doable and reasonable.
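Here is a minimal sketch of that dependency direction (the names and table shapes are hypothetical, not any particular implementation): the underlay knows only loopbacks and links, the overlay maps tenant prefixes onto underlay next hops, and nothing in the underlay ever references the overlay.

```python
# Underlay reachability, discovered by the IGP (IS-IS here): loopback -> outgoing interface.
underlay = {
    "leaf1-lo0": "eth1",
    "leaf2-lo0": "eth2",
}

# Overlay reachability, carried by EVPN and programmed by a controller:
# (tenant, prefix) -> egress leaf loopback.
overlay = {
    ("tenant-a", "10.1.0.0/24"): "leaf1-lo0",
    ("tenant-a", "10.2.0.0/24"): "leaf2-lo0",
}

def forward(tenant: str, prefix: str) -> str:
    """Resolve a tenant route through the overlay, then recursively through the underlay."""
    egress = overlay[(tenant, prefix)]  # overlay lookup
    return underlay[egress]             # recursive resolution onto the underlay

print(forward("tenant-a", "10.2.0.0/24"))  # -> eth2
```

Swapping the overlay out for a more controller-driven solution touches only the overlay table; the underlay table, and the IGP that builds it, do not change.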

While not perfect, the link-state underlay/BGP overlay model comes pretty close to implementing what Jen and her co-authors are describing, only using protocols we have around today—although with some modification.

But the main point of this paper stands—a lot of the reason for the complexity we face today is simply because we modularize using aggregation and summarization and call the job “done.” We aren’t thinking about the network as a system, but rather as a bunch of individual appliances we slap together into places, which we then connect through other places (or strings, like between tin cans).

Something to think about the next time you get that 2AM call.

On a Spring 2019 walk in Beijing I saw two street sweepers at a sunny corner. They were beat-up looking and grizzled but probably younger than me. They’d paused work to smoke and talk. One told a story; the other’s eyes widened and then he laughed so hard he had to bend over, leaning on his broom. I suspect their jobs and pay were lousy and their lives constrained in ways I can’t imagine. But they had time to smoke a cigarette and crack a joke. You know what that’s called? Waste, inefficiency, a suboptimal outcome. Some of the brightest minds in our economy are earnestly engaged in stamping it out. They’re winning, but everyone’s losing. —Tim Bray

This, in a nutshell, is what is often wrong with our design thinking in the networking world today. We want things to be efficient, wringing the last little dollar, and the last little bit of bandwidth, out of everything.

This is also, however, a perfect example of the problem of triads and tradeoffs. In the case of the street sweepers, we might think, “well, we could replace those folks sitting around smoking a cigarette and cracking jokes with a robot, making things much more efficient.” We might notice the impact on the street sweepers’ salaries—but after all, it’s a boring job, and they are better off doing something else anyway, right?

We’re actually pretty good at finding, and “solving” (for some meaning of “solving,” of course), these kinds of immediately obvious tradeoffs. It’s obvious the street sweepers are going to lose their jobs if we replace them with a robot. What might not be so obvious is the loss of the presence of a person on the street. That’s a pair of eyes who can see when a child is being taken by someone who’s not a family member, a pair of ears that can hear the rumble of a car that doesn’t belong in the neighborhood, a pair of hands that can help someone who’s fallen, etc.

This is why these kinds of tradeoffs always come in (at least) threes.

Let’s look at the street sweepers in terms of the SOS triad. Replacing the street sweepers with a robot or machine certainly increases optimization. According to the triad, though, increasing optimization in one area should result in some increase in complexity someplace, and some loss of optimization in other places.

What about surfaces? The robot must be managed, and it must interact with people and vehicles on the street—which means people and vehicles must also interact with the robot. Someone must build and maintain the robot, so there must be some sort of system, with a plethora of interaction surfaces, to make this all happen. So yes, there may be more efficiency, but there are more interaction surfaces to deal with now, too. These interaction surfaces increase complexity.

What about state? In a sense, there isn’t much change in state other than moving it—purely in terms of sweeping the street, anyway. The sweeper and the robot must both understand when and how to sweep the street, etc., so the state doesn’t seem to change much here.

On the other hand, that extra set of eyes and ears, that extra mind, that is no longer on the street in a personal way represents a loss of state. The robot is an abstraction of the person who was there before, and abstraction always represents a loss of state in some way. Whether this loss of state decreases the optimal handling of local neighborhood emergencies is probably a non-trivial problem to consider.

The bottom line is this—when you go after efficiency, you need to think in terms of efficiency of what, rather than efficiency as a goal-in-itself. That’s because there is no such thing as “efficiency-in-itself”; there is only something you are making more efficient—and a lot of things you are potentially making less efficient.

Automate your network, certainly, or even buy a system that solves “all the problems.” But remember there are tradeoffs—often a large number of tradeoffs you might not have thought about—and those tradeoffs have consequences.

It’s not “if you haven’t found the tradeoff, you haven’t looked hard enough…” It’s “if you haven’t found the tradeoffs, you haven’t looked hard enough.” It’s plural for a reason. Don’t stop at one.

Many years ago I attended a presentation by Dave Meyer on network complexity—which set off an entire line of thinking about how we build networks that are just too complex. While it might be interesting to dive into our motivations for building networks that are just too complex, I started thinking about how to classify and understand the complexity I was seeing in all the networks I touched. Of course, my primary interest is in how to build networks that are less complex, rather than just understanding complexity…

This led me to do a lot of reading, write some drafts, and then write a book. During this process, I ended up coining what I call the complexity triad—State, Optimization, and Surface. If you read the book on complexity, you can see my views on what the triad consisted of changed through the writing—I started out with volume (of state), speed (of state), and optimization. Somehow, though, interaction surfaces needed to play a role in the complexity puzzle.

First, you create an interaction surface when you modularize anything—and you modularize to control state (the scope to set apart failure domains, the speed and volume to enable scaling). Second, adding interaction surfaces adds complexity by creating places where information must be exchanged—which requires protocols and other machinery. Finally, reducing state through abstraction at an interaction surface is the primary cause of many forms of suboptimal behavior in a control plane, and causes unintended consequences. Since interaction surfaces are so closely tied to state and optimization, then, I added surfaces to the triad and merged the two kinds of state into one: just state.

I have been thinking through the triad again in the last several weeks for various reasons, and I’m still not certain it’s quite right, because I’m not convinced surfaces are really a tradeoff against state and optimization. It seems more accurate to say that state and optimization trade off through interaction surfaces. This does not make it any less of a triad, but it might mean I need to find a slightly different way to draw it. One way to illustrate it is as a system of moving parts, such as the illustration below.

If you think of the interaction surface between modules 1 and 2—two topological parts, or a virtual topology on top of a physical one—then the abstraction is the amount of information allowed to pass between the two modules. For instance, in aggregation, this would be the length of the aggregated prefixes, the metrics carried on the aggregates, and so on.

When you “turn the crank,” so to speak, you adjust the volume, speed (velocity), breadth, or depth of the information being passed between the modules—more or less information, faster or slower, in more places or fewer—and, with it, the reaction of the module receiving the state. Every time you turn the crank, however, there is not one reaction but many. Notice that optimization 1 will turn in the opposite direction from optimization 2 in the diagram—so turning the crank for 1 to be more optimal will always result in 2 becoming less optimal. There are tens or hundreds of such interactions in any system, and it is impossible for any person to know or understand all of them.

For instance, if you aggregate hundreds of /64’s to tens of /60’s, you reduce the state and optimize by reducing the scope of the failure domain. On the other hand, because you have less specific routing information, traffic is (most likely) going to flow along less-than-optimal paths. If you “turn the crank” by aggregating those same hundreds of /64’s to a single ::/0, you will have more “airtight” failure domains or modules, but less optimal traffic flow. Hence …

If you haven’t found the tradeoffs, you haven’t looked hard enough.
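To make the /64-to-/60 example concrete, here is a minimal sketch using Python’s standard ipaddress module (documentation prefixes, hypothetical scenario):

```python
import ipaddress

# Sixteen contiguous /64s, as you might see behind a single fabric pod.
specifics = [ipaddress.ip_network(f"2001:db8:0:{i:x}::/64") for i in range(16)]

# Aggregation collapses them into a single, shorter prefix.
aggregated = list(ipaddress.collapse_addresses(specifics))
print(len(specifics), "routes before aggregation")  # 16
print(aggregated)                                   # [IPv6Network('2001:db8::/60')]

# Turn the crank further: a default route hides everything behind one entry --
# an airtight failure domain, and no path information at all.
default = ipaddress.ip_network("::/0")
print(all(net.subnet_of(default) for net in specifics))  # True
```

Sixteen specific routes become one, which is exactly the point, and exactly the problem: the single aggregate no longer tells the rest of the network which pod a given /64 actually sits behind.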

What understanding the SOS triad gives you, combined with a fundamental knowledge of how these things work, is knowing where to look for the tradeoffs. Maybe it would be better to illustrate the SOS triad with surfaces at the bottom all the time, acting as a sort of fulcrum or balance point between state and optimization… Or maybe a completely different illustration would be better. This is something for me to think about more and figure out.

Complexity interacts with these interaction surfaces as well, of course—the more complex a system becomes, the more complex the interaction surfaces within the system become, or the more of them you have. A key point in design of any kind is balancing the number of interaction surfaces with their complexity, depth, and breadth—in other words, where should you modularize, what should each module contain, what sort of state is passed between the modules, where does state pass between the modules, etc. Somehow, mentally, you have to factor in the unintended consequences of hiding information (the first corollary to Keith’s Law, in effect), and the law of leaky abstractions (all nontrivial abstractions leak).

This is a far different way of looking at networks and their design than what you learned in any random certification, and it’s probably not even something you will find in a college textbook. It is quite difficult to apply when you’re down in the configuration of individual devices. But it’s also the key to understanding networks as a system and beginning the process of thinking about where and how to modularize to create the simplest system to solve a given hard problem.

Going back to the beginning, then—one of the reasons we build such complex networks is we do not really think about how the modules fit together. Instead, we use rules-of-thumb and folk wisdom while we mumble about failure domains and “this won’t scale” under our breath. We are so focused on the individual gears becoming commodities that we fail to see the system and all its moving parts—or we somehow think “this is all so easy,” that we build very inefficient systems with brute-force resolutions, often resulting in mass failures that are hard to understand and resolve.

Sorry, there’s no clear point or lesson here… This is just what happens when I’ve been buried in dissertation work all day and suddenly realize I have not written a blog post for this week… But it should give you something to think about.

If you are looking for a good resolution for 2020 still (I know, it’s a bit late), you can’t go wrong with this one: this year, I will focus on making the networks and products I work on truly simpler. Now, before you pull Tom’s take out on me—

There are those that would say that what we’re doing is just hiding the complexity behind another layer of abstraction, which is a favorite saying of Russ White. I’d argue that we’re not hiding the complexity as much as we’re putting it back where it belongs – out of sight. We don’t need the added complexity for most operations.

Three things: First, complex solutions are always required for hard problems. If you’ve ever listened to me talk about complexity, you’ve probably seen this quote on a slide someplace—

[C]omplexity is most succinctly discussed in terms of functionality and its robustness. Specifically, we argue that complexity in highly organized systems arises primarily from design strategies intended to create robustness to uncertainty in their environments and component parts.

You cannot solve hard problems—complex problems—without complex solutions. In fact, a lot of the complexity we run into in our everyday lives is a result of saying “this is too complex, I’m going to build something simpler.” (Here I’m thinking of a blog post I read last year that said, in effect, “when we were building containers, we looked at routing and realized how complex it was… so we invented something simpler… which, of course, turned out to be more complex than dynamic routing!”)

Second, abstraction can be used the right way, to manage complexity, and it can be used the wrong way, to obfuscate or mask complexity. The second great source of complexity and system failure in our world is that we don’t abstract complexity so much as we obfuscate it.

Third, abstraction is not a zero-sum game. If you haven’t found the tradeoffs, you haven’t looked hard enough. This is something expressed through the state/optimization/surface triangle, which you should know at this point.

Returning to the top of this post, the point is this: Using abstraction to manage complexity is fine. Obfuscation of complexity is not. Papering over complexity “just because I can” never solves the problem, any more than sweeping dirt under the rug, or papering over the old paint without bothering to fix the wall first.

We need to go beyond just figuring out how to make the user interface simpler, more “intent-driven,” automated, or whatever it is. We need to think of the network as a system, rather than as a collection of bits and bobs that we’ve thrown together across the years. We need to think about the modules horizontally and vertically, think about how they interact, understand how each piece works, understand how each abstraction leaks, and be able to ask hard questions.

For each module, we need to understand how things work well enough to ask is this the right place to divide these two modules? We should be willing to rethink our abstraction lines, the placement of modules, and how things fit together. Sometimes moving an abstraction point around can greatly simplify a design while increasing optimal behavior. Other times it’s worth it to reduce optimization to build a simpler mousetrap. But you cannot know the answer to this question until you ask it. If you’re sweeping complexity under the rug because… well, that’s where it belongs… then you are doing yourself and the organization you work for a disservice, plain and simple. Whatever you sweep under the rug of obfuscation will grow and multiply. You don’t want to be around when it crawls back out from under that rug.

For each module, we need to learn how to ask is this the right level and kind of abstraction? We need to learn to ask does the set of functions this module is doing really “hang together,” or is this just a bunch of cruft no-one could figure out what to do with, so they shoved it all in a black box and called it done?

Above all, we need to learn to look at the network as a system. I’ve been harping on this for so long, and yet I still don’t think people understand what I am saying a lot of times. So I guess I’ll just have to keep saying it. 😊

The problems networks are designed to solve are hard—therefore, networks are going to be complex. You cannot eliminate complexity, but you can learn to minimize and control it. Abstraction within a well-thought-out system is a valid and useful way to control complexity, and so is understanding how and where to create the modules at whose edges that abstraction takes place.

Don’t obfuscate. Think systemically, think about the tradeoffs, and abstract wisely.

It’s time for a short lecture on complexity.

Networks are complex. This should not be surprising, as building a system that can solve hard problems, while also adapting quickly to changes in the real world, requires complexity—the harder the problem, and the more adaptable the system needs to be, the more complex the resulting design will tend to be. Networks are bound to be complex, because we expect them to be able to support any application we throw at them, adapt to fast-changing business conditions, and adapt to real-world failures of various kinds.

There are several reactions I’ve seen to this reality over the years, each of which has its own trade-offs.

The first is to cover the complexity up with abstractions. Here we take a massively complex underlying system and “contain” it within certain bounds so the complexity is no longer apparent. I can’t really make the system simpler, so I’ll just make the system simpler to use. We see this all the time in the networking world, in things like intent-driven systems, replacing the command line with a GUI, and replacing the command line with an automation system. The strong point of these kinds of solutions is they do, in fact, make the system easier to interact with, or (somewhat) encapsulate that “huge glob of legacy” into a module so you can interface with it in some way that is not… legacy.

One negative side of these kinds of solutions, however, is that they really don’t address the complexity, they just hide it. Many times hiding complexity has a palliative effect rather than a real-world one, and the final state is worse than the starting state. Imagine someone who has back pain, so they take painkillers, and then go back to the gym to lift even heavier weights than they did before. Covering the pain up gives them the room to do more damage to their bodies—complexity, like pain, is sometimes a signal that something is wrong.

Another negative side effect of this kind of solution is described by the law of leaky abstractions: all nontrivial abstractions leak. I cannot count the number of times engineers have underestimated the amount of information that leaks through an abstraction layer and the negative impacts such leaks will have on the overall system.

The second solution I see people use on a regular basis is to agglutinate multiple solutions into a single solution. The line of thinking here is that reducing the number of moving parts necessarily makes the overall system simpler. This is actually just another form of abstraction, and it normally does not work. For instance, it’s common in data center designs to have a single control plane for both the overlay and underlay (which is different than just not having an overlay!). This will work for some time, but at some level of scale it usually creates more complexity, particularly in trying to find and fix problems, than it solves in reducing configuration effort.

As an example, consider if you could create some form of wheel for a car that contained its own little engine and braking system, and had the ability to “warp” or modify its shape to produce steering effects. The car designer would just provide a single fixed (not moving) attachment point, and let the wheel do all the work. Sounds great for the car designer, right? But the wheel would then be such a complex system that it would be near impossible to troubleshoot or understand. Further, since you have four wheels on the car, you must somehow allow them to communicate, as well as having communication with the driver to know what to do from moment to moment, etc. The simplification achieved by munging all these things into a single component will ultimately be overcome by the complexity built around the “do-it-all” wheel to make the whole system run.

Or imagine a network with a single transport protocol that does everything—host-to-host, connection-oriented, connectionless, encrypted, etc. You don’t have to think about it long to intuitively know this isn’t a good idea.

An example for the reader: Geoff Huston joins the Hedge this week to talk about DNS over HTTPS. Is this an example of munging systems together that shouldn’t be munged together? Or is this a clever solution to a hard problem? Listen to the two episodes and think it through before answering—because I’m not certain there is a clear answer to this question.

Finally, what a lot of people do is toss the complexity over the cubicle wall. Trust me, this doesn’t work in the long run–the person on the other side of the wall has a shovel, too, and they are going to be pushing complexity at you as fast as they can.

There are no easy solutions to complexity. The only real way to deal with these problems is to look at the network as part of a larger system including the applications, the business environment, and many other factors. Then figure out what needs to be done, how to divide the work up (where the best abstraction points are), and build replaceable components that solve each of these problems, leak the least amount of information possible, and are internally as simple as possible.

Every other path leads to building more complex, brittle systems.

It was quite difficult to prepare a tub full of bath water at many points in recent history (and it probably still is in many parts of the world). First, there was the water itself—if you do not have plumbing, then the water must be manually transported, one bucket at a time, from a stream, well, or pump, to the tub. The result, of course, would be someone who was sweaty enough to need the forthcoming bath. Then there is the warming of the water. Shy of building a fire under the tub itself, how can you heat enough water quickly enough to make the eventual bathing experience worth the effort? According to legend, this resulted in the entire household using the same tub of water to bathe. The last to bathe was always the smallest, the baby. By then, the water would be murky with dirt, which means the child could not be seen in the tub. When the water was thrown out, then, no-one could tell if the baby was still in there.

But it doesn’t take a dirty tub of water to throw the baby out with the bath. All it really takes is an unwillingness to learn from the lessons of others because, somehow, you have convinced yourself that your circumstances are so different there is nothing to learn. Take, for instance, the constant refrain, “you are not Google.”

I should hope not.

But this phrase, or something similar, is often used to say something like this: you don’t have the problems of any of the hyperscalers, so you should not look to their solutions to find solutions for your problems. An entertaining read on this from a recent blog:

Software engineers go crazy for the most ridiculous things. We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s Hacker News comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place. —Oz Nova

There is a lot of truth here—you should never choose a system or solution because it solves someone else’s problem. Do not deploy Kafka if you do not need the scale Kafka represents. Maybe you don’t need four links between every pair of routers “just to be certain you have enough redundancy.”

On the other hand, there is a real danger here of throwing the baby out with the bathwater—the water is murky with product and project claims, so just abandon the entire mess. To see where the problem is here, let’s look at another large scale system we don’t think about very much any longer: the NASA space program from the mid-1960’s. One of the great things the folks at NASA have always liked to talk about is all the things that came out of the space program. Remember Tang? Or maybe not. It really wasn’t developed for the space program, and it’s mostly sugar and water, but it was used in some of the first space missions, and hence became associated with hanging out in space.

There are a number of other inventions, however, that really did come directly out of research into solving problems folks hanging out in space would have, such as the space pen, freeze-dried ice cream, exercise machines, scratch-resistant eyeglass lenses, cameras on phones, battery powered tools, infrared thermometers, and many others.

Since you are not going to space any time soon, you refuse to use any of these technologies, right?

Do not be silly. Of course you still use these technologies. Because you are smart enough not to throw the baby out with the bathwater, right?

You should apply the same level of care to the solutions built by Google, Amazon, LinkedIn, Microsoft, and the other hyperscalers. Not everything is going to fit in your environment, of course. On the other hand, some things might fit. And regardless of whether any particular technology fits or not, you can still learn something about how systems work by considering how they are building things to scale to their needs. You can adopt operational processes that make sense based on what they have learned. You can pick out technologies and ways of thinking that make sense.

No, you’re (probably) not Google. On the other hand, we are all building complex networks. The more we can learn from those around us, the better what we build will be. Don’t throw the baby out with the bathwater.