Random Thoughts on IoT

Let’s play the analogy game. The Internet of Things (IoT) is probably going to end up being like … a box of chocolates, because you never do know what you are going to get? A big bowl of spaghetti with a serious lack of meatballs? Whatever it is, the IoT should have network folks worried about security. There is, of course, the problem of IoT devices being attached to random places on the network, exfiltrating personal data back to a cloud server you don’t know anything about. Some of these devices might be rogue, such as a Raspberry Pi attached to some random place in the network. Others might be more conventional, such as those new exercise machines the company just brought into the gym that are sending personal information in the clear to an outside service.

While there is research into how to tell the difference between IoT and “larger” devices, the reality is spoofing and blurred lines will likely make such classification difficult. What do you do with a virtual machine that looks like a Raspberry Pi running on a corporate laptop for completely legitimate reasons? Or what about the Raspberry Pi-like device that can run a fully operational Windows stack, including “background noise” applications that make it look like a normal compute platform? These problems are, unfortunately, not easy to solve.

To make matters worse, there are no standards by which to judge the security of an IoT device. Even if the device manufacturer–think about the new gym equipment here–has the best intentions towards security, there is almost no way to determine whether a particular device is designed and built with good security. The result is that IoT devices are often infected and used as part of a botnet for DDoS or other attacks.

What are our options here from a network perspective? The most common answer to this is segmentation–and segmentation is, in fact, a good start on solving the problem of IoT. But we are going to need a lot more than segmentation to avert certain disaster in our networks. Once these devices are segmented off, what do we do with the traffic? Do we just allow it all (“hey, that’s an IoT device, so let it send whatever it wants to… after all, it’s been segmented off the main network anyway”)? Do we try to manage and control what information is being exfiltrated from our networks? Is machine learning going to step in to solve these problems? Can it, really?

To put it another way–the attack surface we’re facing here is huge, and the smallest mistake can have very bad ramifications in individual lives. Take, for instance, the problem of data and IoT devices in abusive relationships. Relationships are dynamic; how is your company going to know when an employee is in an abusive relationship, and thus when certain kinds of access should be shut off? There is so much information here it seems almost impossible to manage it all.

It looks, to me, like the future is going to be a bit rough and tumble as we learn to navigate this new realm. Vendors will have lots of good ideas (look at Mist’s capabilities in tracking down the location of rogue devices, for instance), but in the end it’s going to be the operational front line that has to figure out how to manage and deploy networks with a broad blend of ultimately untrustworthy IoT devices and more traditional devices.

Now would be the time to start learning about security, privacy, and IoT if you haven’t started already.

The Hedge 57: Brian Trammell and PANRG

Brian Trammell joins Alvaro Retana and Russ White to discuss the Path Aware Research Group in the IRTF. According to the charter page, PANRG “aims to support research in bringing path awareness to transport and application layer protocols, and to bring research in this space to the attention of the Internet engineering and protocol design community.”


Technologies that Didn’t: Network Operating Systems

For those with a long memory—no, even longer than that—there were once things called Network Operating Systems (NOSes). These were not the kinds of NOSes we have today, like Cisco IOS Software, Arista EOS, or even SONiC. Rather, these were designed for servers. The most common example was Novell’s NetWare. These operating systems were the “bread and butter” of the networking world for many years. I was a Certified NetWare Engineer (CNE) for version 4.0, and then 4.11, before I moved into the routing and switching world. I also deployed Banyan’s Vines, IBM’s OS/2, and a much simpler system called LANtastic, among others.

What were these pieces of software? They were largely built around providing a complete environment for the network user. These systems began with file sharing, which required a small driver to be installed on each host accessing the file share. This small driver was actually a network stack for a proprietary set of protocols. For Vines, this was VIP; for NetWare, it was IPX. Over time, these systems began to include email, and then, as a natural outgrowth of file sharing and email, directory services. For some time, there was a serious race on to push ever more features into these network operating systems. For instance, a Vines server could not only act as an email server, a file server, and a directory server, it could also act as a router, connecting two Ethernet segments and pushing traffic between them.

What happened? Why and how did these kinds of systems disappear—almost overnight, it seems? After all, they provided a lot of very interesting services. You could use one of these systems as a corporate directory, adding each person’s contact information directly into the system itself. Once the person was there, you could assign them rights to file shares, individual files, and even services running on one of the servers. For instance, you could build an application on a framework within Vines that would run across multiple Vines servers—the distribution of the data and the application were all handled in the Vines operating system itself, so long as you built it to their framework—and then simply give people access to it. Lotus Notes, which is still in use today from what I understand, was an overlay service of the same style. You didn’t need to worry about access control, the difference between authentication and authorization, and so on; these were all built into the system.

Why don’t we see these in widespread use today? The “official” reason, if there is such a thing, is that the standardization of the IP protocol stack, and its widespread deployment, caused all of these operating systems to be replaced by a federation of other protocols and applications. For instance, FTP opened up the ability to upload and download files across an IP network, and SMTP standardized the various email clients so email gateways were no longer needed.

A more unofficial answer might be this: these systems tried to do too much. Rather than being a series of smaller systems, each of which solved a particular problem, each of these systems tried to solve every problem, from access control to routing. They became bloated and difficult to operate over time. The resource tree in NetWare 4.11, which was grounded in X.500, was a thing of beauty, if you like staring at Mandelbrot patterns. If there were some access control problem, it could take hours to work through the various layers of permissions, and how each was being inherited from the level above.

Further, these large-scale, monolithic servers eventually could not keep up with the smaller tools being iteratively improved in the IP and OSI protocol suites. As networks grew and routing became more important, these operating systems struggled to keep up with complex wide area networks.

Network operating systems are another story of complex, multifaceted, monolithic solutions to a lot of different problems. As with all such solutions, smaller, simpler systems simply overrun the capabilities of the monolithic systems through quick iteration of a smaller problem space. While these systems started out simple, they quickly took on too much, and ended up being difficult to deploy and maintain.

Hints and Principles: Applied to Networks

While software design is not the same as network design, there is enough overlap for network designers to learn from software designers. A recent paper published by Butler Lampson, updating a paper he wrote in 1983, is a perfect illustration of this principle. The paper is called Hints and Principles for Computer System Design. I’m not going to write a full review here–you should really go read the paper for yourself–but rather just point out some useful bits of the paper.

The first really useful point of this paper is that Lampson breaks down the entire field of software design into three basic questions: What, How, and When (or who)? Each of these corresponds to the goals, techniques, and processes used to design and develop software. These same questions and answers apply to network design–if you are missing one of these three areas, you are probably missing some important set of questions you have not answered yet. Each is also represented by an acronym: what? is STEADY, how? is AID, and when? is ART. Let’s look at a couple of these in a little more detail to see how Lampson’s system works.

STEADY stands for simple, timely, efficient, adaptable, dependable, and yummy. Simple is just what it sounds like–reduce complexity. I’m not entirely on board with Lampson’s description of simplicity, which seems to focus on abstraction–abstraction is one useful tool, but anyone who reads my work regularly knows I’m rather more careful about abstraction than most, because it involves often-unexamined tradeoffs. Timely primarily relates to “is there a market for this?” in software design; for networks it might be better put as “does the business need this now or later?” Efficient is one of those tradeoffs involved in abstraction–what I might call one of the various ways of optimizing a system. Adaptable means just what it sounds like–are you creating technical debt that must be resolved later? Dependable could be translated to resilience in network design, but it also relates to many aspects of security, and even the jitter and delay elements of application support.

Yummy is one many network engineers will not be familiar with, but it is worth considering. If I’m reading Lampson right here, another way to say this might be “easy to consume.” Why do you want your customers to be able to consume the network easily? Because you do not want them running off to the cloud (for instance) because they find committing and understanding resources in your network so difficult. We have, for far too long, assumed that “easy to consume” in the network design world means “just plug it into the wall.” It’s not that simple.

The second acronym, AID, stands for approximate, incremental, and divide & conquer. These are, again, easily adaptable to network design. You don’t need to make the design perfect the first time. In fact, as a young artist, one thing drilled into my head was that the perfect is the enemy of the good–it’s better to get it approximately right, right now, than perfectly right ten years down the road (when no one cares any longer). Incremental speaks to modularization, scale-out, and lifecycle management, for instance.

While not every principle here can be applied, a lot of them can. Having them listed out in an easy-to-remember format like this is a great design aid–learn these, and use them.

Underhanded Code and Automation

So, software is eating the world—and you thought this was going to make things simpler, right? If you haven’t found the tradeoffs, you haven’t looked hard enough. I should trademark that or something! 🙂 While code quality and the supply chain are common concerns, there are a lot of little “side trails” organizations do not tend to think about. One such was recently covered in a paper on underhanded code, which is code designed to pass a standard review but which can be used to harm the system later on. For instance, you might see something like this at some spot—

if (buffer_size=REALLYLONGDECLAREDVARIABLENAMEHERE) {
  /* do some stuff here */
} /* end of if */

Can you spot what the problem might be? In C, = is different from ==. Which should it really be here? Even astute reviewers can easily miss this kind of detail—not least because it could be an intentional construction. Using a strongly typed language, like Rust, can help prevent this kind of thing (listen to this episode of the Hedge for more information on Rust), but nothing beats having really good code formatting rules, even if they are apparently arbitrary, for catching these things.
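As a concrete illustration (a sketch of mine, not from the paper; the names are invented), a modern C compiler will flag the construction above if warnings are enabled:

#include <stdio.h>

#define MAX_BUFFER 4096

int main(void) {
  int buffer_size = 0;

  /* The = here should almost certainly be ==. Compiled with
     gcc -Wall, this line draws a warning along the lines of
     "suggest parentheses around assignment used as truth value,"
     which is exactly the nudge a reviewer needs. */
  if (buffer_size = MAX_BUFFER) {
    printf("buffer is at maximum\n");
  }
  return 0;
}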

The paper above lists these—

  • Use syntax highlighting and typefaces that clearly distinguish characters. You should be able to easily tell the difference between a lowercase l and a 1.
  • Require all comments to be on separate lines. This is actually pretty hard in C, however.
  • Prettify code into a standard format not under the attacker’s control.
  • Use compiler warnings in static analysis.
  • Forbid unneeded dangerous constructions.
  • Use runtime memory corruption detection.
  • Use fuzzing.
  • Watch your test coverage.

Not all of these are directly applicable for the network engineer dealing with automation, but they do provide some good pointers, or places to start. A few more…

Yoda assignments are named after Yoda’s habit of placing the verb before the subject—”succeed you will…” It’s not technically wrong in terms of grammar, but it is just hard enough to understand that it makes you listen carefully and think a bit harder. In software development, the variable taking the assignment normally goes on the left, and the thing being assigned on the right. Reversing these is a Yoda assignment; it’s technically correct, but it’s harder to read.
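There is a practical payoff, too: with the constant on the left, accidentally typing = instead of == becomes a compile error rather than silently-working code. A minimal sketch (the names are mine, for illustration only):

#include <stdio.h>

#define MAX_BUFFER 4096

int main(void) {
  int buffer_size = 4096;

  /* Yoda style: the constant goes on the left. Mistyping this
     as MAX_BUFFER = buffer_size will not compile, because a
     constant cannot be the target of an assignment. */
  if (MAX_BUFFER == buffer_size) {
    printf("buffer is at maximum\n");
  }
  return 0;
}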

Arbitrary standardization is useful when there are many options that ultimately result in the same outcome. Don’t let options proliferate just because you can.

Use macros!
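One way to read this advice: wrap error-prone constructions in a single, well-reviewed macro so every call site looks the same. A hedged sketch (STR_EQ is my invention, not a standard macro):

#include <stdio.h>
#include <string.h>

/* One reviewed definition; a stray = or a misread strcmp()
   return value cannot hide at the call sites. */
#define STR_EQ(a, b) (strcmp((a), (b)) == 0)

int main(void) {
  const char *user = "admin";

  if (STR_EQ(user, "admin")) {
    printf("administrative user\n");
  }
  return 0;
}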

There are probably plenty more, but this is an area where we really are not paying attention right now.

Link State in DC Fabrics

If you don’t normally read the Internet Protocol Journal (IPJ), you should. Melchior and I have an article up in the latest edition on link state in DC fabrics.

To make a case for link-state protocols in DC fabric underlays, an extensive examination of the positive and negative aspects of BGP—and the other available protocols—is essential. Ultimately, it is up to individual operators to decide which protocol is “the best” for their application, a decision based on business and operational—as well as technical—reasons.

Read the whole thing here.