Hedge 110: Andrew Alston and SRv6 Security

SRv6, a form of source routing, is the new and interesting method being created by the IETF to allow traffic engineering and traffic steering. This is not the first time the networking world has tried source routing, however—and in the spirit of rule 11, we should ask some questions. How and why did source routing fail last time? Have we learned those lessons and changed the way we’re doing things to overcome those limitations? Security seems to be one area where problems arise in the source routing paradigm.

Andrew Alston joins Tom Ammon and Russ White to discuss security in SRv6.

download

Weekend Reads 111921


Kaspersky today publishes its Distributed Denial of Service (DDoS) Q3 2021 report, which found that, compared to Q3 2020, the total number of DDoS attacks increased by nearly 24%, while the total number of smart attacks (advanced DDoS attacks that are often targeted) increased by 31%.


IP fragmentation is a process that breaks large packets into smaller packets to allow them to more easily traverse a network. The process is common in the DNS, which is predominantly UDP based.


If you’ve been perusing cryptocurrency forums or video-game news recently—or spying everything from New York Times job listings to zany Twitter threads claiming that the traditional job interview is about to be replaced by blockchain-based “quests, adventures and courses to prove your worth”—you might have run into the term “Web3.”


When Facebook announced last month that it was rebranding as Meta, CEO Mark Zuckerberg enthusiastically described the metaverse his company would soon build, promising it would be a world “as detailed and convincing as this one” where “you’re going to be able to do almost anything you can imagine.”


In a previous blog, we shared how Paragon™ Pathfinder plays an important role in closed-loop automation by tuning the paths of RSVP or Segment-Routed Traffic Engineered LSPs according to changing conditions that it observes in the live network.


HTML smuggling, a highly evasive malware delivery technique that leverages legitimate HTML5 and JavaScript features, is increasingly used in email campaigns that deploy banking malware, remote access Trojans (RATs), and other payloads related to targeted attacks.


Smishing messages usually include a link to a site that spoofs a popular bank and tries to siphon personal information. But increasingly, phishers are turning to a hybrid form of smishing — blasting out linkless text messages about suspicious bank transfers as a pretext for immediately calling and scamming anyone who responds via text.


A state-sponsored threat actor allegedly affiliated with Iran has been linked to a series of targeted attacks aimed at internet service providers (ISPs) and telecommunication operators in Israel, Morocco, Tunisia, and Saudi Arabia, as well as a ministry of foreign affairs (MFA) in Africa, new findings reveal.


The aviation industry told the White House on Tuesday it will take “significant time” to ensure it is safe for major U.S. wireless companies to use C-Band spectrum for 5G communications.


If you are responsible for a web server, you already use Transport Layer Security (TLS, the ‘S’ in ‘HTTPS’) to protect your users from man-in-the-middle attackers that could otherwise passively sniff website cookies or actively inject malicious JavaScript.


ECDSA is a digital signature algorithm that is based on Elliptical Curve Cryptography (ECC). This form of cryptography is based on the algebraic structure of elliptic curves over finite fields.


As many as 13 security vulnerabilities have been discovered in the Nucleus TCP/IP stack, a software library now maintained by Siemens and used in three billion operational technology and IoT devices. These flaws could allow remote code execution, denial-of-service (DoS), and information leaks.


A few months ago, Proofpoint, a leading vendor of data loss prevention software, filed a lawsuit against a former employee for stealing confidential sales-enablement data prior to leaving for Abnormal Security, a market rival.


On November 15, 1971, Intel publicly debuted the first commercial single-chip microprocessor, the Intel 4004, with an advertisement in Electronic News.

Upcoming Webinar: How the Internet Really Works (Part 1)

This live training will provide an overview of the systems, providers, and standards bodies important to the operation of the global Internet, including the Domain Name System (DNS), the routing and transport systems, standards bodies, and registrars. For DNS, the process of a query will be considered in some detail, along with who pays for each server used in the resolution process and the tools engineers can use to interact with the DNS. For routing and transport, the role of each kind of provider will be considered, along with how they make money to cover their costs and how engineers can interact with the global routing table (the Default Free Zone, or DFZ). Finally, registrars and standards bodies will be considered, including their organizational structure, how they generate revenue, and how to find their standards.

Thoughts on Auto Disaggregation and Complexity

Way in the past, the EIGRP team (including me) had an interesting idea–why not aggregate routes automatically as much as possible, along classless bounds, and then deaggregate routes when we could detect some failure was causing a routing black hole? To understand this concept better, consider the network below.

In this network, B and C are connected to four different routers, each of which is advertising a different subnet. In turn, B and C are aggregating these four routes into 2001:db8:3e8:10::/60, and advertising this aggregate towards A. From a control plane state perspective, this is a major win. The obvious gain is that the amount of state is reduced from four routes to one. The less obvious gain is A doesn’t need to know about any changes in the state for the four destinations aggregated into the /60. Depending on how often these links change state, the reduction in the rate of change is, perhaps, more important than the reduction in the amount of control plane state.
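The containment relationship described above can be sketched with Python's `ipaddress` module. This is a minimal illustration, assuming the four more-specific routes are 2001:db8:3e8:10::/64 through 2001:db8:3e8:13::/64 (the exact /64s are not given in the text, only that they fall within the 10::/60 aggregate):

```python
import ipaddress

# Assumed component routes advertised by the four routers behind B and C:
# 2001:db8:3e8:10::/64 through 2001:db8:3e8:13::/64.
components = [
    ipaddress.ip_network(f"2001:db8:3e8:{x:x}::/64") for x in range(0x10, 0x14)
]

# The aggregate B and C advertise toward A.
aggregate = ipaddress.ip_network("2001:db8:3e8:10::/60")

# Every component falls inside the aggregate, so A needs just one route,
# and link flaps on the component /64s never change A's table.
assert all(net.subnet_of(aggregate) for net in components)
print(len(components), "routes hidden behind", aggregate)
```

Note the /60 covers more space than the four /64s strictly require; that extra covered-but-unreachable space is exactly where the black hole discussed next can appear.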

We always know there will be a tradeoff when reducing state; what is the tradeoff here? If C somehow loses its connection to one of the four routers, say the router advertising 11::/64, C’s 10::/60 aggregate will not change. Since A thinks C still has a route to every subnet within 10::/60, it will continue sending traffic destined to addresses in the 11::/64 towards both B and C. C will not have a route towards these destinations, so it will drop the traffic.

We have a routing black hole.
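The black hole falls directly out of longest-prefix matching. Here is a rough sketch (the `lookup` function and the specific /64s are illustrative, not any real router's implementation), assuming C simply loses the 11::/64 component from its table:

```python
import ipaddress

def lookup(table, dest):
    """Longest-prefix match: return the most specific route covering dest."""
    matches = [net for net in table if dest in net]
    return max(matches, key=lambda net: net.prefixlen, default=None)

aggregate = ipaddress.ip_network("2001:db8:3e8:10::/60")
dest = ipaddress.ip_address("2001:db8:3e8:11::1")

# A only ever learned the aggregate, so its lookup still succeeds
# and it keeps forwarding traffic for 11::/64 toward C...
assert lookup([aggregate], dest) == aggregate

# ...but C, having lost its link to the 11::/64 router, has no route
# covering the destination among its remaining components, and drops.
c_table = [
    ipaddress.ip_network(f"2001:db8:3e8:{x:x}::/64") for x in (0x10, 0x12, 0x13)
]
assert lookup(c_table, dest) is None
print("A still forwards via", aggregate, "-- C drops the traffic")
```

The key point: the failure is invisible to A because the aggregate hid exactly the state A would have needed to route around it.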

For more information on aggregation in networks, take a look at my LiveLesson on abstraction in computer networks.

This much is pretty simple. The harder part is figuring out how to eliminate this routing black hole. Our first choice is to just not aggregate these routes. While you might be cringing right now, this isn’t such a bad option in many networks. We often underestimate the amount of state and the speed of state change modern routing protocols running on modern processors can support. I’ve seen networks running IS-IS in a single flooding domain with tens of thousands of routes and thousands of nodes running “in the wild.” I’ve seen IS-IS networks with thousands of nodes and hundreds of thousands of routes running in lab environments. These networks still converge.

But what if we really think we need to reduce the amount and speed of state, so we really need to aggregate these routes?

One solution that has been proposed a number of times through the years is auto disaggregation.

In this case, suppose D somehow realizes C cannot reach one of the components of a shared aggregate route. D could simply stop advertising the aggregate, advertising each of the components instead. The question here might be: is this a good idea? Looking at this from the perspective of the SOS triad, the aggregation replaced four routes with a single route. In the auto disaggregation case, the single route change is replaced by four route changes. The amount of state is variable, and in some cases the rate of change in state is actually higher than without the aggregation.
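The state tradeoff described above can be sketched in a few lines. This is a toy model, not a protocol implementation: the `advertise` function and the specific /64 components are assumptions for illustration, standing in for whatever mechanism detects an unreachable component and withdraws the aggregate:

```python
import ipaddress

components = [
    ipaddress.ip_network(f"2001:db8:3e8:{x:x}::/64") for x in range(0x10, 0x14)
]
aggregate = ipaddress.ip_network("2001:db8:3e8:10::/60")

def advertise(reachable):
    """Advertise the aggregate only while every component is reachable;
    otherwise fall back to advertising the reachable components."""
    if all(reachable.values()):
        return [aggregate]
    return [net for net, up in reachable.items() if up]

reachable = {net: True for net in components}
print("steady state:", advertise(reachable))   # a single aggregate route

reachable[components[1]] = False               # lose the 11::/64 component
print("after failure:", advertise(reachable))  # three component routes
```

One component failure here turns one advertisement into a withdrawal plus three new advertisements: less state in steady state, but a burst of state change at exactly the moment the network is already under stress.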

So…

I don’t hold that auto disaggregation is either good or bad—it just presents a different set of challenges to the network designer. Instead of designing for average rates of change and given table sizes, you can count on much smaller tables, but you might find there are times when the rate of change is dramatically higher than you expect. A good question to ask, before deploying this kind of technology, might be: can I foresee a chain of events that will cause a high enough rate of state change that auto disaggregation is actually more destabilizing than just not summarizing at all in this network?

A real danger with auto disaggregation, by the way, is using summarization to dramatically reduce table sizes without understanding how a goldilocks failure (what we used to call in telco a mother’s day event, or perhaps a black swan) can cascade into widespread failures. If you’re counting on particular devices in your network only having a dozen or two dozen table entries, but just the right set of failures can cause them to have several thousand entries because of auto disaggregation, what kinds of failure modes should you anticipate? Can you anticipate or mitigate this kind of problem?

The idea of automatically summarizing and disaggregating routes is an interesting study in complexity, state, and optimization. It’s a good brain exercise in thinking through what-if situations, and carefully thinking about when and where to deploy this kind of thing.

What do you think about this idea? When would you deploy it, where, and why? When and where would you be cautious about deploying this kind of technology?

Weekend Reads 111221


We’ve had too many face-palm-worthy incidents of organizations hearing “hey, I found your data in a world readable S3 bucket” or finding a supposedly “test” server exposed that had production data in it.


Virtually all compilers — programs that transform human-readable source code into computer-executable machine code — are vulnerable to an insidious attack in which an adversary can introduce targeted vulnerabilities into any software without being detected, new research released today warns.


2021 has already been a banner year for cybercriminals — the record-largest ransomware payment of $40 million was made by an insurance company this year. And the attacks won’t stop.


In the 2021 Domain Security Report, we analyzed the trend of domain security adoption with respect to the type of domain registrar used, and found that 57% of Global 2000 organizations use consumer-grade registrars with limited protection against domain and DNS hijacking, distributed denial of service (DDoS), man-in-the-middle attacks (MitM), or DNS cache poisoning.


When it comes to cybersecurity, risks are omnipresent. Whether it is a bank dealing with financial transactions or medical providers handling the personal data of patients, cybersecurity threats are unavoidable. The only way to efficiently combat these threats is to understand them.


‘Functional, free and secure by default’, OpenBSD remains a crucial yet largely unacknowledged player in the open-source field.


A new multistage phishing campaign spoofs Amazon’s order notification page and includes a phony customer service voice number where the attackers request the victim’s credit card details to correct the errant “order.”


Traditional security gives value to where the user is coming from. It uses a lot of trust because the user’s location or IP address (perimeter model) is used to define the user to the system. In a zero-trust model, we assume zero units of trust before we grant you access to anything and verify a lot of other information before granting access.


Up to the second half of the 19th century—with the exception of the industrial power Great Britain—the protection of inventions was inadequate and strongly disputed.


Two senators have introduced bipartisan legislation that would make it harder for online tech giants to make acquisitions that “harm competition and eliminate consumer choice,” according to the office of Sen. Amy Klobuchar (D-Minn.), one of the bill’s co-sponsors.


A team of tech companies including Google, Salesforce, Slack, and Okta recently released the Minimum Viable Secure Product (MVSP) checklist, a vendor-neutral security baseline listing minimum acceptable security requirements for B2B software and business process outsourcing suppliers.


Are you looking to get a VPN subscription soon? Before you get a multi-year subscription, make sure the VPN you choose has these six crucial features.


Death, taxes, and spam. It’s constant, ever-present, and you likely have a few hundred of them sitting in your Spam folder as you read this.


For those who follow the issue of blocking illegal content from the Internet, there is an interesting development in relation to this issue here in Germany, and I will tell you a little about it.


Neal Stephenson’s foundational cyberpunk novel Snow Crash brought to the public the concept of a metaverse, a virtual reality in which people interact using avatars in a manufactured ecosystem, eschewing the limitations of human existence.

Hedge 108: In Defense of Boring Technology with Andrew Wertkin

Engineers (and marketing folks) love new technology. Watching an engineer learn or unwrap some new technology is like watching a dog chase a squirrel—the point is not to catch the squirrel, it’s just that the chase is really fun. Join Andrew Wertkin (from BlueCat Networks), Tom Ammon, and Russ White as we discuss the importance of simple, boring technologies, and moderating our love of the new.

download