Most packet processing in Linux “wants” to be in the kernel. The problem is that adding code to the kernel is a painstaking process because a single line of bad code can cause havoc for millions of Linux hosts. How, then, can new functionality be pushed into the kernel, particularly for packet processing, with reduced risk? Enter eBPF, which allows functions to be inserted into the kernel through a sort of “lightweight container.”

Michael Kehoe joins Tom Ammon and Russ White to discuss eBPF technology and its importance.


Before the large cable providers came on the scene, most people accessed the Internet through dial-up MODEMs, connecting to services like America Online across plain old telephone lines. The entrance of cable providers, and cable MODEMs, allowed the edge of the Internet to explode, causing massive growth. Join Donald Sharp and me on this episode of the History of Networking as John Chapman discusses the origins of the cable MODEM, and the origins of the DOCSIS standards.

The collection of technical papers discussed on the show is here:


Let’s play the analogy game. The Internet of Things (IoT) is probably going to end up being like … a box of chocolates, because you never do know what you are going to get? a big bowl of spaghetti with a serious lack of meatballs? Whatever it is, the IoT should have network folks worried about security. There is, of course, the problem of IoT devices being attached to random places on the network, exfiltrating personal data back to a cloud server you don’t know anything about. Some of these devices might be rogue, of course, such as a Raspberry Pi attached to some random place in the network. Others might be more conventional, such as those new exercise machines the company just brought into the gym that are sending personal information in the clear to an outside service.

While there is research into how to tell the difference between IoT and “larger” devices, the reality is that spoofing and blurred lines will likely make such classification difficult. What do you do with a virtual machine that looks like a Raspberry Pi running on a corporate laptop for completely legitimate reasons? Or what about the Raspberry Pi-like device that can run a fully operational Windows stack, including “background noise” applications that make it look like a normal compute platform? These problems are, unfortunately, not easy to solve.

To make matters worse, there are no standards by which to judge the security of an IoT device. Even if the device manufacturer–think about the new gym equipment here–has the best intentions towards security, there is almost no way to determine whether a particular device is designed and built with good security. The result is that IoT devices are often infected and used as part of a botnet for DDoS or other attacks.

What are our options here from a network perspective? The most common answer to this is segmentation–and segmentation is, in fact, a good start on solving the problem of IoT. But we are going to need a lot more than segmentation to avert certain disaster in our networks. Once these devices are segmented off, what do we do with the traffic? Do we just allow it all (“hey, that’s an IoT device, so let it send whatever it wants to… after all, it’s been segmented off the main network anyway”)? Do we try to manage and control what information is being exfiltrated from our networks? Is machine learning going to step in to solve these problems? Can it, really?

To put it another way–the attack surface we’re facing here is huge, and the smallest mistake can have very bad ramifications in individual lives. Take, for instance, the problem of data and IoT devices in abusive relationships. Relationships are dynamic; how is your company going to know when an employee is in an abusive relationship, and thus when certain kinds of access should be shut off? There is so much information here it seems almost impossible to manage it all.

It looks, to me, like the future is going to be a bit rough and tumble as we learn to navigate this new realm. Vendors will have lots of good ideas (look at Mist’s capabilities in tracking down the location of rogue devices, for instance), but in the end it’s going to be the operational front line that is going to have to figure out how to manage and deploy networks where there is a broad blend of ultimately untrustable IoT devices and more traditional devices.

Now would be the time to start learning about security, privacy, and IoT if you haven’t started already.

One of the most common ways such access is monetized these days is through ransomware, which holds a victim’s data and/or computers hostage unless and until an extortion payment is made. But in most cases, there is a yawning gap of days, weeks or months between the initial intrusion and the deployment of ransomware within a victim organization.

There is a lot of discussion about the Expedited Policy Development Process (EPDP) Phase 2 report on evaluating a System for Standardized Access/Disclosure (SSAD) to non-public gTLD registration data after the decisions taken by the GNSO Council on September 24th.

Nvidia’s announcement a month ago of a plan to acquire Arm Holdings for $40 billion was followed last week by reports of another towering deal in semiconductors: Advanced Micro Devices’ prospective purchase of Xilinx for about $30 billion.

For a long time, datacenter compute has been the very picture of stability – Intel-based servers running enterprise workloads in central facilities. The workloads are changing fast and the datacenter is dissolving, and this is all having a ripple effect throughout the infrastructure, from the servers and storage appliances down to the components, most notably the silicon that is powering the systems.

Booter services continue to provide popular DDoS-as-a-Service platforms and enable anyone (irrespective of their technical ability) to execute DDoS attacks with devastating impact. Since booters are a serious threat to Internet operations and can cause significant financial and reputational damage, they also draw the attention of Law Enforcement Agencies and related counter activities.

The ultimate employee/manager relationship you should strive for is more of a partnership, one in which you and your manager work together to accomplish your mutual goals. In this article, I’ll discuss strategies for doing this.

USA Today reported last week that AT&T stopped selling new DSL to customers on October 1. This is an event that will transform the broadband landscape in a negative way across the country.

A group of experts from Interisle Consulting Group and Illumintel released a paper today, reporting a comprehensive study of the phishing landscape in 2020. The study’s goal was to capture and analyze a large set of information about phishing attacks to better understand how much phishing is taking place, where it is taking place, and how to better fight it.

A number of high-profile data breaches have resulted directly from misconfigured permissions or unpatched vulnerabilities. For instance, the 2017 Equifax breach was the result of exploiting an unpatched flaw in Apache Struts allowing remote code execution. More recently, the Capital One breach last year stemmed from a misconfigured web application firewall. Verizon’s 2020 DBIR reported that misconfiguration errors were second only to hacking as the most prevalent cause of data breaches.

Unfortunately, making such a sweeping change to office workflow doesn’t just disrupt policies and expectations—it requires important changes to the technical infrastructure as well. Six months ago, we talked about the changes people who work from home often need to make to accommodate telework; today, we’re going to look at the ongoing changes the businesses themselves need to make.

As mobile phones became more popular in the 1980s, more and more cellular network towers had to be built, most of which were relatively utilitarian and industrial-looking affairs. This naturally led to predictable NIMBY (not-in-my-backyard) criticisms from area residents who saw these additions as eyesores. Thus, an array of camouflage techniques emerged in parallel with this expanding technology, pioneered by companies like Larson Camouflage in Tucson, Arizona.

Let’s face it: Our digital public sphere has been failing for some time. Technologies designed to connect us have instead inflamed our arguments and torn our social fabric.

There are a number of sleepy corners of the security industry, but none perhaps as perplexing as the market for data protection. Why? Given recent metrics and clear trends, data protection should be a massive security segment.

Before you roll your eyes and click away because you see something about enterprise AI, read on for just a moment. Because it’s not about the workload or even GPUs. It’s about all the various performance pieces that go along with that shift and what they mean for a larger view of enterprise systems in the more encompassing sense.

Larry Cashdollar needed someone big — someone not afraid of physical retribution. So he called Donovan, an imposing figure at six-four. And Cashdollar says, “I made a mistake.”

Brian Trammell joins Alvaro Retana and Russ White to discuss the Path Aware Research Group in the IRTF. According to the charter page, PANRG “aims to support research in bringing path awareness to transport and application layer protocols, and to bring research in this space to the attention of the Internet engineering and protocol design community.”


For those with a long memory—no, even longer than that—there were once things called Network Operating Systems (NOSs). These were not the kinds of NOSs we have today, like Cisco IOS Software, or Arista EOS, or even SONiC. Rather, these were designed for servers. The most common example was Novell’s NetWare. These operating systems were the “bread and butter” of the networking world for many years. I was a Certified NetWare Engineer (CNE) version 4.0, and then 4.11, before I moved into the routing and switching world. I also deployed Banyan’s VINES, IBM’s OS/2, and a much simpler system called LANtastic, among others.

What were these pieces of software? They were largely built around providing a complete environment for the network user. These systems began with file sharing and included a small driver that would need to be installed on each host accessing the file share. This small driver was actually a network stack for a proprietary set of protocols. For VINES, this was VIP; for NetWare, it was IPX. Over time, these systems began to include email, and then, as a natural outgrowth of file sharing and email, directory services. For some time, there was a serious race on to push ever more features into these network operating systems. For instance, a VINES server could not only act as an email server, a file server, and a directory server, it could also act as a router, connecting two Ethernet segments and pushing traffic between them.

What happened? Why and how did these kinds of systems disappear—almost overnight it seems? After all, they provided a lot of very interesting services. You could use one of these systems as a corporate directory, adding each person’s contact information directly into the system itself. Once the person was there, you could assign them rights to file shares, individual files, and even services running on one of the servers. For instance, you could build an application on a framework within VINES that would run across multiple VINES servers—the distribution of the data and the application were all handled in the VINES operating system itself, so long as you built it to their framework—and then simply give people access to it. Lotus Notes, which is still in use today from what I understand, was an overlay service of the same style. You didn’t need to worry about access control, the difference between authentication and authorization, etc.; these were all built into the system.

Why don’t we see these in widespread use today? The “official” reason, if there is such a thing, is that the standardization of the IP protocol stack, and its widespread deployment, caused all of these operating systems to be replaced by a federation of other protocols and applications. For instance, FTP opened up the ability to upload and download files across an IP network, and SMTP standardized the various email clients so email gateways were no longer needed.

A more unofficial answer might be this: these systems tried to do too much. Rather than being a series of smaller systems, each of which solved a particular problem, each of these systems tried to solve every problem, from access control to routing. These systems became bloated and difficult to operate over time. The resource tree in NetWare 4.11, which was grounded in X.500, was a thing of beauty, if you like staring at Mandelbrot patterns. If there were some access control problem, it could take hours to work through the various layers of permissions, and how each was being inherited from the level above.
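To see why working through those inherited layers could take hours, here is a toy Python sketch of the general pattern: rights flow down a directory tree, each container can mask what it inherits with a filter, and an explicit grant at a container replaces whatever was inherited. The tree, the rights names, and the resolution rules here are all simplified, hypothetical illustrations—not the actual NDS semantics.

```python
# Toy model of directory-tree effective-rights resolution (hypothetical,
# simplified—not real NetWare/NDS semantics). Rights flow down the tree;
# each container can mask inherited rights with a filter, and an explicit
# trustee assignment at a container replaces whatever was inherited.

ALL = frozenset({"read", "write", "create", "erase"})

# node: (parent, inherited-rights filter, {trustee: explicit rights})
tree = {
    "[root]":       (None,     ALL,       {"alice": {"read", "write"}}),
    "o=acme":       ("[root]", ALL,       {}),
    "ou=eng":       ("o=acme", {"read"},  {}),  # filter blocks inherited write
    "ou=eng/files": ("ou=eng", ALL,       {"alice": {"read", "write", "erase"}}),
}

def effective_rights(node, trustee):
    """Walk from the root down to `node`, applying the filter and any
    explicit assignment at each level along the way."""
    path = []
    while node is not None:
        path.append(node)
        node = tree[node][0]
    rights = set()
    for n in reversed(path):          # root first, target node last
        _, filt, explicit = tree[n]
        rights &= filt                # the filter masks inherited rights
        if trustee in explicit:       # an explicit grant replaces inheritance
            rights = set(explicit[trustee])
    return rights

print(effective_rights("ou=eng", "alice"))        # {'read'} — write filtered out
print(effective_rights("ou=eng/files", "alice"))  # explicit grant restores rights
```

Even in this four-node toy, answering “why can’t alice write in ou=eng?” requires tracing every ancestor’s filter and every explicit assignment along the path; in a real tree with thousands of objects, that trace is exactly the hours-long exercise described above.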

Further, these large-scale, monolithic servers eventually could not keep up with the smaller tools that were being iteratively improved in the IP and OSI protocol suites. As networks grew, and routing became more important, these operating systems struggled to keep up with complex wide area networks.

Network operating systems are another story of complex, multifaceted, monolithic solutions to a lot of different problems. As with all such solutions, smaller, simpler systems simply overrun the capabilities of the monolithic systems through quick iteration of a smaller problem space. While these systems started out simple, they quickly took on too much, and ended up being difficult to deploy and maintain.