Archive for 2019

History of GateD

Sue Hares, cochair of the IDR and I2RS working groups in the IETF, joins Donald Sharp and Russ White to talk about the origins of one of the first open source routing stacks, GateD. Sue was involved in MERIT and the university programs that originated this open source software, and managed its transition to a commercial offering.

download this episode

The entire history of networking series is available here.

Autonomic, Automated, and Reality

Once the shipping department drops the box off with that new switch, router, or “firewall,” what happens next? You rack it, cable it up, turn it on, and start configuring, right? There are access controls to configure (SSH, keys, disabling standard accounts, disabling telnet), interface addresses to configure, routing adjacencies to configure, local policies to configure, and… After configuring all of this, you can adjust routing in the network to route around the new device, and then either canary the device “in production” (if you run your network the way it should be run), or find some prearranged maintenance time to bring the new device online and test things out. After all of this, you can leave the new device up and running in the network, and move on to the next task.

Until it breaks.

Then you consult the documentation to remind yourself why it was configured this way, consult the documentation again to understand how the application everyone is complaining about is supposed to work, and so on. Then there are the many hours spent sitting on the console gathering information, running various commands and reading the output of various logs. Eventually, once you find the problem, you can either replace the right parts, or reconfigure the right bits, and get everything running again.

In the “modern” world (such as it is), we think it’s a huge leap forward to stop configuring devices manually. If we can just automate the configuration of all that “stuff” we have to do at the beginning, after the box is opened and before the device is placed into service, we think we have this whole networking thing pretty well figured out.
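
And that leap is real, as far as it goes. Here is a minimal sketch of what this “day one” automation might look like, assuming the Netmiko Python library and an IOS-style CLI; the host, credentials, and configuration lines are all illustrative placeholders, not a recommended security baseline:

    # A minimal "day one" provisioning sketch using the Netmiko library.
    # Device type, host, credentials, and config lines are placeholders.
    from netmiko import ConnectHandler

    BASE_CONFIG = [
        "ip ssh version 2",        # SSH access only
        "line vty 0 4",
        " transport input ssh",    # disable telnet on the vty lines
        "no ip http server",       # turn off unneeded services
    ]

    def provision(host, username, password):
        """Push the baseline configuration to one device; return the session log."""
        device = {
            "device_type": "cisco_ios",   # assumption: an IOS-style CLI
            "host": host,
            "username": username,
            "password": password,
        }
        with ConnectHandler(**device) as conn:
            output = conn.send_config_set(BASE_CONFIG)
            conn.save_config()
            return output

    # provision("new-switch.example.net", "admin", "placeholder-password")

Run against a list of devices, something like this replaces hours of console time, which is a genuine improvement. Notice, though, that it only moves the same configuration around faster.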

Even if you had everything in your network automated, you still haven’t figured this networking thing out.

We need to move beyond automation. Where do we need to move to? It’s not one place, but two. The first is that we need to move beyond automation to autonomous operation. As an example, there is a shiny new system that is currently being widely deployed to automate the deployment and management of containers. Part of this system is the automation of connectivity, including routing, between containers. The routing system being deployed as part of this system is essentially statically configured policy-based routing combined with network address translation.
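
To make that concrete, here is a loose sketch, modeled on no particular orchestrator, of how such a system renders a static service map into NAT rules; every name and address below is invented:

    # A loose illustration of "statically configured PBR plus NAT": a
    # fixed service map rendered into iptables DNAT rules. All addresses
    # here are invented for the example.
    SERVICE_MAP = {
        # (virtual service address, port) -> (container address, port)
        ("10.96.0.10", 80): ("172.17.0.5", 8080),
        ("10.96.0.11", 443): ("172.17.0.9", 8443),
    }

    def render_nat_rules(service_map):
        """Emit one DNAT rule per static service mapping."""
        rules = []
        for (vip, vport), (rip, rport) in service_map.items():
            rules.append(
                f"iptables -t nat -A PREROUTING -p tcp -d {vip} "
                f"--dport {vport} -j DNAT --to-destination {rip}:{rport}"
            )
        return rules

    for rule in render_nat_rules(SERVICE_MAP):
        print(rule)

Every entry has to be re-rendered and re-pushed whenever a container moves or fails; nothing here discovers or repairs reachability on its own.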

Let me point something out that is not going to be very popular: this is a step backwards in terms of making the system autonomous. Automating static routing information is not a better solution than building a real, dynamic, proactive, autonomic routing system. It’s not simpler; trust me, I say this as someone who has operated large networks that used automated static routes to do everything.

The “opsification of everything” is neat, but it shouldn’t be our end goal.

Now part of this, I know, is the fault of vendors. Vendors who push EGPs onto data center fabrics because, after all, “the configuration complexity doesn’t matter so long as you can automate it.” The configuration complexity does matter, because configuration complexity reflects an underlying protocol complexity, and sets up long and difficult troubleshooting sessions that are completely unnecessary.
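
To put a rough number on it, consider a small leaf-and-spine fabric running eBGP on every fabric link. The per-neighbor line count below is an assumption for illustration, not a vendor figure:

    # Back-of-the-envelope configuration state for eBGP on a small
    # leaf-and-spine fabric; LINES_PER_NEIGHBOR is an assumed figure.
    LEAVES, SPINES = 16, 4
    LINES_PER_NEIGHBOR = 4   # e.g., remote-as, description, policy in, policy out

    sessions = LEAVES * SPINES        # one eBGP session per leaf-to-spine link
    neighbor_stanzas = sessions * 2   # each session is configured on both ends
    config_lines = neighbor_stanzas * LINES_PER_NEIGHBOR

    print(f"{sessions} sessions, {neighbor_stanzas} neighbor stanzas, "
          f"~{config_lines} lines of configuration to keep consistent")
    # prints: 64 sessions, 128 neighbor stanzas, ~512 lines of configuration

All of that state has to stay mutually consistent, automated or not, and every line of it is a place to look during those troubleshooting sessions.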

The second place we need to move in the networking world? Away from configuration itself. The focus on automation is just another form of focusing on configuration. We abstract the configuration, and we touch a lot more devices at once, but we are still thinking about configuration. The more we think about configuration, the less we think about how the system should work, how it really works, what the gaps are, and how to bridge those gaps. So long as we are focused on the configuration, automated or not, we are not focused on how the network can bring value to the business. The longer we are focused on configuration, the less value we bring to the business, and the more likely we are to end up being replaced by… an automated system… no matter how poorly that automated system actually works.

And no, the cloud isn’t going to solve this. Containers aren’t going to solve this. The “automated configuration pattern” is already being repeated in the cloud. As more complex workloads are moved into the cloud, the problems there are only going to get harder. What starts out as a “simple” system using policy-based routing analogs and network address translation configured through an automation server will eventually look complex even against the hardest problems we had to solve using T1s, frame relay circuits, inverse multiplexers, wire down patch panels, and mechanical crossbar switch frames. It’s fun to pretend we don’t need dynamic routing to solve the problems that face the network, at least until you hit the hard problems and have to relearn the lessons of the last 20+ years.

Yes, I know vendors are partly to blame for this. I know that, for a vendor, it’s easier to get people to buy into your CLI, or your entire ecosystem, rather than getting them to think about how to solve the problems your business is handing them.

On the other hand, none of this is going to change from the top down. This is only going to change when the average network engineer starts asking vendors for truly simpler solutions that don’t require reams of configuration information. It will change when network engineers get their heads out of the configuration and features, and into the business problems.

Weekend Reads 091319

The idea of object-oriented software originated in the 1960s and rose to dominance in the 1990s. In 2019, most mainstream languages are at least somewhat object-oriented. Despite this obvious success, the paradigm is still somewhat nebulous if you think about it in detail. —Felix

Unlike previous side-channel vulnerabilities disclosed in Intel CPUs, researchers have discovered a new flaw that can be exploited remotely over the network, without requiring an attacker to have physical access or any malware installed on the targeted computer. —Swati Khandelwal

The International Society of Automation (ISA) 99 standards development committee brings together industrial cyber security experts from across the globe to develop ISA standards on industrial automation and control systems security that are applicable to all industry sectors and critical infrastructure. —Anastasios Arampatzis

If you feel as if there’s a new data breach in the news every day, it’s not just you. Breaches announced recently at Capital One, MoviePass, StockX, and others have exposed a variety of personal data across more than 100 million consumers. This has spurred lawsuits and generated thousands of headlines. —Shuman Ghosemajumder

Recently, Google’s Project Zero published a report describing a newly-discovered campaign of surveillance using chains of zero day iOS exploits to spy on iPhones. This campaign employed multiple compromised websites in what is known as a “watering hole” attack. —Cooper Quintin

Pandora Flexible Monitoring Solution (FMS) is all-purpose monitoring software, which means it can control network equipment, servers (Linux and Windows), virtual environments, applications, databases, and a lot more. It can do both remote monitoring and monitoring based on agents installed on the servers. You can get collected data in reports and graphs and raise alerts if something goes wrong. —Sancho Lerena

Cybersecurity researchers have discovered a new computer virus associated with the Stealth Falcon state-sponsored cyber espionage group that abuses a built-in component of the Microsoft Windows operating system to stealthily exfiltrate stolen data to an attacker-controlled server. —Mohit Kumar

I recently volunteered as an AV tech at a science communication conference in Portland, OR. There, I handled the computers of a large number of presenters, all scientists and communicators who were passionate about their topic and occasionally laissez-faire about their system security. —Rita Nygren

Organizations that do things in the world beyond just releasing code or running services — as much as companies like Uber try to pretend they’re software companies — often find themselves subject to regulation or pressure on those AFK-centric activities. Life has, relatively speaking and with the exception of a few minor intellectual property kerfuffles, been pretty easy for pure software folks. —Eleanor Saitta

The horse-race between AMD and Intel is fun to follow, but when it comes to security, there’s far more at stake than framerates in games. There looms a ghostly apparition that’s easy to forget. Speculative execution exploits like Spectre and its variants, as well as ZombieLoad and a number of other side-channel attacks, are still as scary as ever. —Luke Larsen

Airlines and the airport industry in general are highly lucrative targets for APT groups; they are rife with information that other countries would find useful. NETSCOUT data from 2019 shows airport and airline targeting remains strong and steady, with Russian, Chinese, and Iranian APT groups attempting access. —ASSERT

It’s time for a short lecture on complexity…

Networks are complex. This should not be surprising, as building a system that can solve hard problems, while also adapting quickly to changes in the real world, requires complexity: the harder the problem, and the more adaptable the system needs to be, the more complex the resulting design will tend to be. Networks are bound to be complex, because we expect them to be able to support any application we throw at them, adapt to fast-changing business conditions, and adapt to real-world failures of various kinds.

There are several reactions I’ve seen to this reality over the years, each of which has its own trade-offs.

The first is to cover the complexity up with abstractions. Here we take a massively complex underlying system and “contain” it within certain bounds so the complexity is no longer apparent. I can’t really make the system simpler, so I’ll just make the system simpler to use. We see this all the time in the networking world, including things like intent-driven systems, replacing the command line with a GUI, and replacing the command line with an automation system. The strong point of these kinds of solutions is they do, in fact, make the system easier to interact with, or (somewhat) encapsulate that “huge glob of legacy” into a module so you can interface with it in some way that is not… legacy.

One negative side of these kinds of solutions, however, is that they really don’t address the complexity; they just hide it. Many times hiding complexity has a palliative effect rather than a curative one, and the final state is worse than the starting state. Imagine someone who has back pain, so they take painkillers, and then go back to the gym to lift even heavier weights than they did before. Covering the pain up gives them the room to do more damage to their bodies. Complexity, like pain, is sometimes a signal that something is wrong.

Another negative side effect of this kind of solution is described by the law of leaky abstractions: all nontrivial abstractions leak. I cannot count the number of times engineers have underestimated the amount of information that leaks through an abstraction layer and the negative impacts such leaks will have on the overall system.
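
MTU is the classic networking case: a tunnel overlay is supposed to hide the underlay completely, yet the underlay’s MTU leaks through as mysterious drops of large packets. A quick sketch, using the standard VXLAN-over-IPv4 header sizes:

    # The underlay's MTU leaking through a VXLAN overlay; the header
    # sizes are the standard VXLAN-over-IPv4 figures.
    UNDERLAY_MTU = 1500              # IP MTU of the "hidden" underlay
    OVERHEAD = 20 + 8 + 8 + 14       # outer IPv4 + UDP + VXLAN + inner Ethernet

    effective_mtu = UNDERLAY_MTU - OVERHEAD
    print(f"effective overlay MTU: {effective_mtu}")   # 1450
    # Anything larger than this reveals the underlay the abstraction was hiding.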

The second solution I see people use on a regular basis is to agglutinate multiple solutions into a single solution. The line of thinking here is that reducing the number of moving parts necessarily makes the overall system simpler. This is actually just another form of abstraction, and it normally does not work. For instance, it’s common in data center designs to have a single control plane for both the overlay and underlay (which is different than just not having an overlay!). This will work for some time, but at some level of scale it usually creates more complexity, particularly in trying to find and fix problems, than it solves in reducing configuration effort.

As an example, consider if you could create some form of wheel for a car that contained its own little engine and braking system, and had the ability to “warp” or modify its shape to produce steering effects. The car designer would just provide a single fixed (not moving) attachment point, and let the wheel do all the work. Sounds great for the car designer, right? But the wheel would then be such a complex system that it would be nearly impossible to troubleshoot or understand. Further, since you have four wheels on the car, you must somehow allow them to communicate with one another, as well as with the driver, to know what to do from moment to moment. The simplification achieved by munging all these things into a single component would ultimately be overcome by the complexity built up around the “do-it-all” wheel to make the whole system run.

Or imagine a network with a single transport protocol that does everything—host-to-host, connection-oriented, connectionless, encrypted, etc. You don’t have to think about it long to intuitively know this isn’t a good idea.

An example for the reader: Geoff Huston joins the Hedge this week to talk about DNS over HTTPS. Is this an example of munging together systems that shouldn’t be munged together? Or is this a clever solution to a hard problem? Listen to the two episodes and think it through before answering, because I’m not certain there is a clear answer to this question.

Finally, what a lot of people do is toss the complexity over the cubicle wall. Trust me, this doesn’t work in the long run: the person on the other side of the wall has a shovel, too, and they are going to be pushing complexity back at you as fast as they can.

There are no easy solutions to complexity. The only real way to deal with these problems is to look at the network as part of a larger system, including the applications, the business environment, and many other factors. Then figure out what needs to be done, how to divide the work up (where the best abstraction points are), and build replaceable components that solve each of these problems, leak the least possible amount of information, and are internally as simple as possible.

Every other path leads to building more complex, brittle systems.

Weekend Reads 090619

Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. —Gary Smith

Every week I have at least one conversation with a security decision maker explaining why a lot of the hyperbole about passwords – “never use a password that has ever been seen in a breach,” “use really long passwords”, “passphrases-will-save-us”, and so on – is inconsistent with our research and with the reality our team sees as we defend against 100s of millions of password-based attacks every day. Focusing on password rules, rather than things that can really help – like multi-factor authentication (MFA), or great threat detection – is just a distraction. —Alex Weinert

One of the biggest projects in the blockchain industry, Hyperledger, is comprised of a set of open source tools and subprojects. It’s a global collaboration hosted by The Linux Foundation and includes leaders in different sectors who are aiming to build a robust, business-driven blockchain framework. —Matt Zand

Enterprise servers powered by Supermicro motherboards can remotely be compromised by virtually plugging in malicious USB devices, cybersecurity researchers at firmware security company Eclypsium told The Hacker News. —Mohit Kumar

Not content with monitoring almost everything you do online, Facebook now wants to read your mind as well. The social media giant recently announced a breakthrough in its plan to create a device that reads people’s brainwaves to allow them to type just by thinking. And Elon Musk wants to go even further. One of the Tesla boss’s other companies, Neuralink, is developing a brain implant to connect people’s minds directly to a computer. —Garfield Benjamin

The phrase was opaque but vaguely appealing. Why would anyone want to repeal something called “net neutrality”? Neutral is inoffensive, right? So when the Federal Communications Commission debated whether to ditch the policy, many Americans joined in the energetic protests. —Chicago Tribune

Telecommunications has become the nexus of U.S. frustration at unfair Chinese trade practices and concerns that the Chinese government is utilizing the networks of private Chinese corporations to spy on foreign governments and corporations. At the center of the debate is the leading Chinese telecommunications firm Huawei, which has been accused of poor ethics, shoddy security, intellectual property theft and even spying for years. —Tom Le

We are talking, of course, about a downturn in server spending, which we feel is one of the key economic indicators given the foundational nature of data processing in the 21st century. We have been on a hell of a run since 2016 in terms of server shipments and sales, and while growth has been slowing in recent quarters, both metrics went into negative territory in the second quarter of 2019. —Timothy Prickett Morgan