Back in January, I ran into an interesting article called The many lies about reducing complexity:

Reducing complexity sells. Especially managers in IT are sensitive to it as complexity generally is their biggest headache. Hence, in IT, people are in a perennial fight to make the complexity bearable.

Gerben then discusses two ways we often try to reduce complexity. First, we try to simply reduce the number of applications we’re using. We see this all the time in the networking world—if we could only get to a single pane of glass, or reduce the number of management packages we use, or reduce the number of control planes (generally to one), or reduce the number of transport protocols … but reducing the number of protocols doesn’t necessarily reduce complexity. Instead, we can just end up with one very complex protocol. Would it really be simpler to push DNS and HTTP functionality into BGP so we can use a single protocol to do everything?

Second, we try to reduce complexity by hiding it. While this is sometimes effective, it can also lead to unacceptable tradeoffs in performance (we run into the state/optimization/surface triad here). It can also make the system more complex if we need to go back and leak information to regain optimal behavior. Think of the OSPF type 4 LSA, which just reinjects information lost in building an area summary, or even the complexity involved in the type 7 to type 5 translation required to support not-so-stubby areas.

It would seem, then, that you really can’t get rid of complexity. You can move it around, and sometimes you can effectively hide it, but you cannot get rid of it.

This is, to some extent, true. Complexity is a reaction to difficult environments, and networks are difficult environments.

Even so, there are ways to actually reduce complexity. The solution is not just hiding information because it’s messy, or munging things together because it requires fewer applications or protocols. You cannot eliminate complexity, but if you think about how information flows through a system you might be able to reduce the amount of complexity, and even create boundaries where state (hence complexity) can be more effectively hidden.

For instance, I have argued elsewhere that building a DC fabric with distinct overlay and underlay protocols can actually create a simpler overall design than using a single protocol. Another instance is to think carefully about where route aggregation takes place—is it really needed at all? Why? Is this the right place to aggregate routes? Is there any way I can change the network design to reduce state leaking through the abstraction?
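
To make the aggregation question concrete, here is a minimal sketch in Python (standard library only; the prefixes are invented for illustration) of how an aggregate hides state: once four /24s are summarized into a /22, the failure of one /24 is invisible to everything beyond the aggregation point.

```python
import ipaddress

# Four specific prefixes advertised inside a pod (invented values).
specifics = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# The aggregation point collapses them into a single covering prefix.
aggregate = next(ipaddress.collapse_addresses(specifics))
print(aggregate)  # 10.1.0.0/22

# Now suppose 10.1.2.0/24 fails inside the pod. The aggregate still
# covers it, so the failure is invisible beyond the aggregation point:
# traffic is drawn toward the aggregate and dropped there.
failed = ipaddress.ip_network("10.1.2.0/24")
print(failed.subnet_of(aggregate))  # True -- the state is hidden
```

Whether that hidden state matters, and whether you need to leak the specific back through the boundary to restore optimal forwarding, is exactly the design question being asked here.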

The problem is there are no clear-cut rules for thinking about complexity in this way. There is no rule of thumb, and there are no best practices. You just have to think through each individual situation, consider how, where, and why state flows, and then think through the state/optimization/surface tradeoffs for each possible way of reducing the complexity of the system. You also have to take into account that local reductions in complexity can make the overall system much more complex, and eventually brittle.

There’s no “pat” way to reduce complexity; the claim that there is may be one of the biggest lies about complexity in the networking world.


In this study, we investigate DoT using 3.2k RIPE Atlas home probes deployed across more than 125 countries in July 2019. We issue DNS measurements using Do53 and DoT, which RIPE Atlas has supported since 2018, to a set of 15 public resolver services (five of which support DoT), in addition to the local probe resolvers.
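
As a rough illustration of the kind of comparison involved (this is a sketch assuming dnspython 2.x, not the RIPE Atlas tooling itself; the resolver address and query name are arbitrary examples):

```python
import time

import dns.message
import dns.query  # dnspython >= 2.0 provides dns.query.tls

RESOLVER = "9.9.9.9"   # example public resolver that supports DoT
QNAME = "example.com"

query = dns.message.make_query(QNAME, "A")

# Do53: plain DNS over UDP, port 53.
start = time.monotonic()
dns.query.udp(query, RESOLVER, timeout=3)
print(f"Do53: {(time.monotonic() - start) * 1000:.1f} ms")

# DoT: DNS over TLS, port 853 (RFC 7858). A cold connection pays for
# the TCP and TLS handshakes before the first answer arrives, which is
# a large part of the latency difference studies like this measure.
start = time.monotonic()
dns.query.tls(query, RESOLVER, timeout=3)
print(f"DoT:  {(time.monotonic() - start) * 1000:.1f} ms")
```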


In the early days of congestion control signaling, we originally came up with the rather crude “Source Quench” control message, an ICMP control message sent from an interior gateway to a packet source that directed the source to reduce its packet sending rate.
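
For reference, Source Quench was ICMP type 4, code 0, and was formally deprecated by RFC 6633. A hedged sketch of its on-wire layout, built with Python's struct module (the embedded "original datagram" bytes are placeholders):

```python
import struct

def checksum(data: bytes) -> int:
    """Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

ICMP_SOURCE_QUENCH = 4  # deprecated; shown for historical illustration

# Placeholder for the IP header plus the first 8 bytes of the datagram
# that triggered the quench (what the source uses to identify the flow).
original = b"\x45\x00" + b"\x00" * 26

# type (1B), code (1B), checksum (2B), 4 unused bytes, then the original.
header = struct.pack("!BBHI", ICMP_SOURCE_QUENCH, 0, 0, 0)
csum = checksum(header + original)
message = struct.pack("!BBHI", ICMP_SOURCE_QUENCH, 0, csum, 0) + original
print(message.hex())
```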


Today, networking infrastructure is designed to deliver software ‘quickly’, ranging from a few seconds to hours.


Posture assessment requires periodic comparisons of current controls to expected values. Having posture assessment as part of products when deployed is possible, but it will take some time until the industry can provide these capabilities.


When you think of the cybersecurity “CIA” triad of Confidentiality, Integrity, and Availability, which one of those is most important to your organization?


The latest Honeywell USB Threat Report 2020 indicates that the number of threats specifically targeting Operational Technology systems has nearly doubled from 16% to 28%, while the number of threats capable of disrupting those systems rose from 26% to 59% over the same period.


In fact, much of the information required to detect most commonly encountered threats and malicious techniques can be drawn right from Windows event logs and systems monitoring, according to a new report by security vendor Red Canary.


Part of being a registry is being a phonebook for the Internet. But just as phonebooks have changed, so too are registries evolving. A core aspect of the ‘phonebook’ service that registries provide is the set of ‘whois’ databases.
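
The protocol behind those databases is one of the Internet's simplest: RFC 3912 whois is a bare TCP exchange on port 43. A minimal sketch (the server and query are just examples; the RDAP protocol that is replacing whois swaps this for JSON over HTTPS):

```python
import socket

def whois(query: str, server: str = "whois.iana.org") -> str:
    """RFC 3912: open TCP/43, send the query plus CRLF, read until close."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(query.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

print(whois("example.com"))
```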


At this point, I had a functional CDN, but with the obvious limitation of being a CDN that only does DNS, so I felt it was time to add a new service — HTTP caching.


Cybercriminal extortionists have adopted a new tactic to apply even more pressure on their corporate victims: contacting the victims’ customers and asking them to demand that a ransom be paid to protect their own privacy.


ISPs in the U.S. saw a significant surge in both downstream and upstream traffic, increasing at least 30% and as much as 40% during peak business hours and as much as 60% in some markets, according to a new report from the Broadband Internet Technical Advisory Group (BITAG).


Networking equipment major Cisco Systems has said it does not plan to fix a critical security vulnerability affecting some of its Small Business routers, instead urging users to replace the devices.


Hidden deep in Google’s release notes for the new version of Chrome that shipped on March 1 is a fix for an “object lifecycle issue.” Or, for the less technically inclined, a major bug.


It is straightforward to reveal a user’s visited site if the destination IP address hosts only that particular domain. However, when a given destination IP address serves many domains, an adversary will have to ‘guess’ which one is being visited.
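
A toy version of that ambiguity is easy to demonstrate: resolve a set of candidate names and group them by address; any address shared by several names is one an on-path observer can only guess at. (The domains below are arbitrary examples, and results depend on your resolver.)

```python
import socket
from collections import defaultdict

# Arbitrary example domains; substitute any list you like.
domains = ["example.com", "example.net", "example.org"]

by_ip = defaultdict(list)
for name in domains:
    try:
        by_ip[socket.gethostbyname(name)].append(name)
    except socket.gaierror:
        pass  # unresolvable; skip

for ip, names in by_ip.items():
    if len(names) > 1:
        print(f"{ip}: shared by {names} -- an observer must guess")
    else:
        print(f"{ip}: uniquely identifies {names[0]}")
```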


When it comes to cybersecurity, industrial IT—consisting mainly of operational technology (OT) and industrial control systems (ICS)—has failed to keep up with development in the enterprise IT world. That’s mostly because industries’ adoption of internet technology has been slower when compared with enterprises.


DNS Shotgun is a benchmarking tool specifically developed for realistic performance testing of DNS resolvers. Its goal is to simulate real clients and their behaviour, including timing of queries and realistic connection management, which are areas where traditional tools are lacking.


On 23 March, Internet traffic around the London Internet Exchange Point, LINX, showed some dramatic shifts.


In its April slate of patches, Microsoft rolled out fixes for a total of 114 security flaws, including an actively exploited zero-day and four remote code execution bugs in Exchange Server.


When a friend who works for a major telecom/ISP operator in Karachi asked me to provide an awareness session on Software Defined Networking (SDN), I was happy to oblige.


Tens of millions of Internet connected devices — including medical equipment, storage systems, servers, firewalls, commercial network equipment, and consumer Internet of Things (IoT) products — are open to potential remote code execution and denial-of-service attacks because of vulnerable DNS implementations.

Many networks are designed and operationally driven through the configuration and management of features supporting applications and use cases. For network engineering to catch up to the rest of the operational world, it needs to move rapidly toward data-driven management grounded in a solid understanding of the underlying protocols and systems. Brooks Westbrook joins Tom Ammon and Russ White to discuss the data-driven lens in this episode of the Hedge.


When I was in the military we were constantly drilled about the problem of Essential Elements of Friendly Information, or EEFIs. What are EEFIs? If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about the action you are taking can be guessed.

Given enough broad information, an adversary can often guess at details that you really do not want them to know.

What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to craft spear phishing attacks—

Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.

Going back further in time, during World War II, we have—

What does all of this mean for the average network engineer concerned about security? Probably nothing different than being just slightly paranoid about your personal security in the first place (way too much modern security is driven by an anti-paranoid mindset, a topic for a future post). Things like—

  • Don’t let people know, either through your job description or anything else, that you hold the master passwords for your company, or that your account holds administrator rights.
  • Don’t always go to the same watering holes, and don’t talk about work while there to people you’ve just met, or even people you see there all the time.
  • Don’t talk about when and where you’re going on vacation. You can talk about it, and share pictures, once you’re back.

If an attacker knows you are going to be on vacation, it’s a lot easier to create a fake “emergency,” tempting you to give out information about accounts, people, and passwords you shouldn’t. Phishing is primarily a matter of social engineering rather than technical acumen. Countering social engineering is also a social skill, rather than a technical one. You can start by learning to just say less about what you are doing, when you are doing it, and who holds the keys to the kingdom.


The Resource Public Key Infrastructure (RPKI) is a specialized PKI designed and deployed to improve the security of the Internet BGP routing system. Some of the ‘resources’ that make up the RPKI include IP address prefixes and Autonomous System numbers (ASNs).
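
As a sketch of how those resources are used, here is the route origin validation logic of RFC 6811 in miniature; the ROA contents and the announcements being checked are invented (documentation prefixes and private-use ASNs):

```python
import ipaddress

# A ROA binds a prefix (up to a maximum length) to an origin ASN.
roas = [
    {"prefix": ipaddress.ip_network("192.0.2.0/24"), "maxlen": 24, "asn": 64500},
]

def validate(prefix: str, origin_asn: int) -> str:
    """RFC 6811 origin validation: NotFound, Valid, or Invalid."""
    net = ipaddress.ip_network(prefix)
    covering = [r for r in roas if net.subnet_of(r["prefix"])]
    if not covering:
        return "NotFound"  # no ROA covers this prefix
    for roa in covering:
        if net.prefixlen <= roa["maxlen"] and origin_asn == roa["asn"]:
            return "Valid"
    return "Invalid"       # covered, but wrong origin or too specific

print(validate("192.0.2.0/24", 64500))     # Valid
print(validate("192.0.2.0/25", 64500))     # Invalid: longer than maxlen
print(validate("192.0.2.0/24", 64501))     # Invalid: wrong origin
print(validate("198.51.100.0/24", 64500))  # NotFound
```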


The concept of a “hardware-based root of trust” takes aim at issues like this; it ensures that a computer always boots with legitimate code.


The server immersion technology that has cooled extreme density bitcoin hardware for The Bitfury Group is coming to the data center.


New research into 5G architecture has uncovered a security flaw in its network slicing and virtualized network functions that could be exploited to allow data access and denial of service attacks between different network slices on a mobile operator’s 5G network.


The “Milan” Epyc 7003 processors, the third generation of AMD’s revitalized server CPUs, are now in the field, and we await the entry of the “Ice Lake” Xeon SPs from Intel for the next jousting match in the datacenter to begin.


And just as these bad actors know that a highly successful way to weaken a business is to disrupt its supply chain, the same goes for cybercrime. It’s time to turn the tables on cybercriminals and use their own tactics against them.


Do you remember all the apprehension about cloud migration in the early days of cloud computing? Some of the concerns ran the full paranoia gamut from unreliability to massive overcharging for cloud services. Some concerns, such as the lack of security of the entire cloud infrastructure, rose to the level of conspiracy theories. It is nice to know that those myths are all behind us.


Though human errors — such as falling for phishing scams that result in data compromise or credential theft — remain one of the top security risks for organizations today, few appear to be making much progress in addressing the problem.


Our defense-in-depth approach for domain and DNS security offers four layers of security to provide the highest level of threat mitigation.


Organisations hit by ransomware attacks are finding themselves paying out more than ever before, according to a new report from Palo Alto Networks.


For two decades now, Google has demonstrated perhaps more than any other company that the datacenter is the new computer, what the search engine giant called a “warehouse-scale machine” way back in 2009 with a paper written by Urs Hölzle, who was and still is senior vice president for Technical Infrastructure at Google, and Luiz André Barroso, who is vice president of engineering for the core products at Google and who was a researcher at Digital Equipment and Compaq before that.


We are on a path that will see information security transformed in the next 5-10 years. There are five trends that will enable us as an industry to improve the overall security posture and reduce the attack surface. There is evidence that we are already moving in that direction with a push for built-in security, but we must be mindful to ensure management scales.


Few things elicit terror quite like switching on a computer and viewing a message that all its files and data are locked up and unavailable to access. Yet, as society wades deeper into digital technology, this is an increasingly common scenario.


Forty years ago, the word “hacker” was little known. Its march from obscurity to newspaper headlines owes a great deal to tech journalist Steven Levy, who in 1984 defied the advice of his publisher to call his first book Hackers: Heroes of the Computer Revolution.


Today, it is a widely accepted thesis amongst historians and computer scientists that the modern notion of computer programs has its roots in the work of John von Neumann.


It’s easy to get overwhelmed with the number of cloud security resources available. How do you know which sources to trust? Which ones should inform your security strategies? Which reports will actually improve your cloud security posture?


The fire that destroyed a data center (and damaged others) at the OVHcloud facility in Strasbourg, France, on March 10-11, 2021, has raised a multitude of questions from concerned data center operators and customers around the world. Chief among these is, “What was the main cause, and could it have been prevented?”


In yet another instance of a software supply chain attack, unidentified actors hacked the official Git server of the PHP programming language and pushed unauthorized updates to insert a secret backdoor into its source code.


A security professional at Ubiquiti who helped the company respond to the two-month breach beginning in December 2020 contacted KrebsOnSecurity after raising his concerns with both Ubiquiti’s whistleblower hotline and with European data protection authorities.


GDPR is a data privacy law in the EU that mentions the use of encryption. Although encryption is not mandatory under the law, it is widely seen as a best practice for protecting personal data. So let us first understand what data encryption is, and then look at the role encryption plays in GDPR compliance.
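
As a minimal illustration of what encrypting personal data at rest looks like (a sketch using the third-party cryptography package's Fernet recipe; in any real deployment the key lives in a KMS or HSM, and key management is the hard part):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key. In practice this is stored and rotated separately
# from the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

# Symmetric, authenticated encryption of a piece of personal data.
token = f.encrypt(b"jane.doe@example.com")

# Decryption fails loudly if the token has been tampered with.
print(f.decrypt(token))  # b'jane.doe@example.com'
```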


The amount of DNS activity in the IETF seems to be growing every meeting. I thought the best way to illustrate the considerable body of DNS work being undertaken at the IETF these days would be to take a snapshot of the DNS activity reported to the DNS-related working group meetings at IETF 110.

Those who follow my work know I’ve been focused on building live webinars for the last year or two, but I am still creating pre-recorded material for Pearson. The latest is built from several live webinars which I no longer give; I’ve updated the material and turned them into a seven-hour course called How Networks Really Work. Although I begin here with the “four things,” the focus is on a problem/solution view of routed control planes. From the description:

There are many elements to a networking system, including hosts, virtual hosts, routers, virtual routers, routing protocols, discovery protocols, etc. Each protocol and device (whether virtual or physical) is generally studied as an individual “thing.” It is not common to consider all these parts as components of a system that work together to carry traffic through a network. To show how all these components work together to form a complete system, this video course presents a series of walkthroughs showing the processing involved in various kinds of network events, and how control planes use those events to build the information needed to carry traffic through a network.

You can find How Networks Really Work here.

This course is largely complementary to the course Ethan and I did a couple of years back, Understanding Network Transports. Taking both will give you a good understanding of network fundamentals. This material also parallels and complements Problems and Solutions in Computer Networks, which Ethan and I published a few years ago.

I am working on one new live webinar; I really need to get my butt in gear on another one I’ve been discussing for a long time (but I somehow dropped the ball).