At the most basic level, there are only three BGP policies: pushing traffic through a specific exit point; pulling traffic through a specific entry point; preventing a remote AS (more than one AS hop away) from transiting your AS to reach a specific destination. In this series I’m going to discuss different reasons for these kinds of policies, and different ways to implement them in interdomain BGP.
There are many reasons an operator might want to select the neighboring AS through which to send traffic towards a given reachable destination (for instance, 100::/64). Each of these examples assumes the AS in question has learned multiple paths towards 100::/64, one from each peer, and must choose one of the available paths to forward along.
In the following network—
From AS65001’s perspective
Assume AS65001 is some form of content provider, which means it offers some service such as bare metal compute, cloud services, search engines, social media, etc. Customers from AS65006 are connecting to its servers, located on the 100::/64 network, which generates a large amount of traffic returning to the customers.
From the perspective of AS hops, it appears the path from AS65001 to AS65006 is the same length—if this is true, AS65001 does not have any reason to choose one path or another (given there is no measurable performance difference, as in the cases described above from AS65006’s perspective). However, the AS hop count does not accurately describe the geographic distances involved:
- The geographic distance between 100::/64 and the exit towards AS65003 is very short
- The geographic distance between 100::/64 and the exits towards AS65002 is very long
- The total geographic distance packets travel when following either path is about the same
In this case, AS65001 can choose to hold on to packets destined to customers in AS65006 for either a longer or a shorter geographic distance.
While carrying the traffic over a longer geographic distance is more expensive, AS65001 would also like to optimize for the customer’s quality of experience (QoE), which means AS65001 should hold on to the traffic for as long as possible.
Because customers will use AS65001’s services in direct relation to their QoE (the relationship between service usage and QoE is measurable in the real world), AS65001 will opt to carry traffic destined to customers as long as possible—another instance of cold potato routing.
This is normally implemented by setting the local preference for all routes equal and relying on the IGP metric step of the BGP bestpath decision process to control the exit point. IGP metrics can then be tuned based on the geographic distance between the origin of the traffic within the network and the exit point closest to the customer.
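The tiebreak described above can be sketched in a few lines. This is a minimal illustration, not a real BGP implementation: the route attributes, exit names, and metric values are all invented, and the comparison covers only the three decision steps relevant here (local preference, AS path length, IGP metric to the next hop).

```python
# Sketch: with local preference held equal by policy, the IGP metric
# to the exit decides the bestpath. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Path:
    exit_router: str   # exit point (BGP next hop)
    local_pref: int    # set equal across paths by policy
    as_path_len: int
    igp_metric: int    # IGP cost from this router to the exit

def bestpath(paths: list[Path]) -> Path:
    # Higher local-pref wins; then shorter AS path; then lower IGP
    # metric to the next hop -- the step this policy relies on.
    return min(paths, key=lambda p: (-p.local_pref, p.as_path_len, p.igp_metric))

# Two equally preferred paths to 100::/64; the IGP metrics, tuned to
# reflect geographic distance, select one exit over the other.
paths = [
    Path("exit-to-AS65002", local_pref=100, as_path_len=2, igp_metric=500),
    Path("exit-to-AS65003", local_pref=100, as_path_len=2, igp_metric=40),
]
print(bestpath(paths).exit_router)  # -> exit-to-AS65003
```

Because local preference and AS path length are equal, the decision falls through to the IGP metric, which is the knob the operator tunes.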
An alternative, more active, solution would be to have a local controller monitor the performance of individual paths to a given reachable destination, setting the preferences on individual reachable destinations and tuning IGP metrics in near-real-time to adjust for optimal customer experience.
Another alternative is to have a local controller monitor the performance of individual paths and use MPLS, segment routing, or some other mechanism to actively engineer or steer the path of traffic through the network.
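The controller-based approach can be sketched as a simple measure-and-adjust loop. Everything here is hypothetical: the probe, the policy-push function, the exit names, and the preference values stand in for whatever telemetry and router-programming interfaces a real deployment would use.

```python
# Hypothetical controller loop: measure per-exit latency toward a
# destination and raise the local preference of the best performer.
import random

EXITS = ["exit-to-AS65002", "exit-to-AS65003"]

def measure_latency_ms(exit_router: str, prefix: str) -> float:
    # Stand-in for a real probe (e.g., synthetic transactions per exit).
    return random.uniform(10, 80)

def set_local_pref(exit_router: str, prefix: str, pref: int) -> None:
    # Stand-in for pushing policy to routers (e.g., via a management API).
    print(f"{prefix}: local-pref {pref} via {exit_router}")

def control_loop(prefix: str = "100::/64") -> str:
    latencies = {e: measure_latency_ms(e, prefix) for e in EXITS}
    best = min(latencies, key=latencies.get)
    for e in EXITS:
        set_local_pref(e, prefix, 200 if e == best else 100)
    return best

control_loop()
```

A production controller would run this continuously, damp flapping between exits, and adjust IGP metrics or segment-routing paths rather than printing to a console, but the shape of the loop is the same.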
Some content providers may directly peer with transit and edge providers to reach customers more quickly, to reduce costs, and to increase their control over customer-facing traffic. For instance, suppose AS65001 is a content provider that transits traffic through [65002,65005] to reach customers in AS65006. To avoid transiting multiple autonomous systems, AS65001 can run a link directly to AS65005.
In some cases, content providers will build long-haul fiber optics (including undersea cable operations, see this site for examples) to avoid transiting multiple autonomous systems.
While the operator can end up paying a lot to build and operate long-haul optical links, this cost is offset by reduced payments to transit providers for carrying highly asymmetric traffic flows. Beyond this, content providers can control user experience more effectively the longer they control the user’s traffic. Finally, content providers can gain more information by connecting closer to users, feeding into Kai-Fu Lee’s virtuous cycle.
Note: content providers peering directly with edge providers and through IXPs is one component of the centralization of the Internet.
A failed alternative to the techniques described here was the use of automatic disaggregation at the content provider’s autonomous system borders. For instance, if a customer connected to a server in 100::/64 by sending traffic via the [65003,65001] link, an automated system would examine the routing table to see which route was currently being used to reach the customer’s reachable destination. If traffic forwarded to this customer’s address would normally pass through one of the [65001,65002] links, a local host route would be created and distributed into AS65001 to draw this traffic to the exit connected to AS65003.
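The automated check can be sketched as follows. This is a simplified illustration under invented assumptions: the routing table is a plain dictionary, the prefixes and link names are made up, and a real system would inject routes into BGP rather than into a Python dict.

```python
# Sketch of the (abandoned) automatic disaggregation check: when a
# customer arrives over one link but the return route would exit via
# another, inject a host route pinning the reply to the entry link.
import ipaddress

# Current routing table: prefix -> exit link (illustrative entries).
rib = {
    ipaddress.ip_network("2001:db8::/32"): "link-to-AS65002",
}
host_routes: dict[ipaddress.IPv6Network, str] = {}

def lookup(addr: ipaddress.IPv6Address) -> str:
    # Longest-prefix match across the RIB and injected host routes.
    candidates = {**rib, **host_routes}
    matches = [n for n in candidates if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)
    return candidates[best]

def on_inbound_flow(customer_addr: str, ingress_link: str) -> None:
    addr = ipaddress.ip_address(customer_addr)
    if lookup(addr) != ingress_link:
        # Return path differs from the customer's entry point; pin the
        # reply to the same link with a /128 host route.
        host_routes[ipaddress.ip_network(f"{addr}/128")] = ingress_link

on_inbound_flow("2001:db8::1", "link-to-AS65003")
print(lookup(ipaddress.ip_address("2001:db8::1")))  # -> link-to-AS65003
```

Note the scaling problem implicit in the sketch: one host route per customer address quickly balloons the table, which is one reason the approach was untenable beyond the flawed symmetry assumption discussed next.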
The theory behind this automatic disaggregation was that the customer would always take the shortest path, from their perspective, to reach the service. This assumption fails in practice, however, so the scheme was ultimately abandoned.
Some of the most successful and lucrative online scams employ a “low-and-slow” approach — avoiding detection or interference from researchers and law enforcement agencies by stealing small bits of cash from many people over an extended period.
With increased demands placed on home internet connections and the nation’s internet infrastructure during the pandemic, the quality and affordability of home internet connections became a focus for users on several fronts.
Decentralized solutions, in our case, come with the ambitious promise of providing everything their centralized counterparts can provide, but without centralized points of failure and regulation. In our previous article, we enumerated several advantages associated with decentralized domain names.
With solution providers such as Unstoppable Domains or Handshake, and blockchain technology-friendly browsers, such as Brave, that are more than happy to assist on the implementation front, decentralized alternatives to the traditional Domain Name System have been receiving more and more attention lately.
I recently looked up a specialized medical network. For weeks following the search, I was bombarded with ads for the network and other related services: the Internet clearly thought I was in the market for a new doctor.
Expect to hear increasing buzz around graph neural network use cases among hyperscalers in the coming year. Behind the scenes, these are already replacing existing recommendation systems and traveling into services you use daily, including Google Maps.
A group of academics has proposed a machine learning approach that uses authentic interactions between devices in Bluetooth networks as a foundation to handle device-to-device authentication reliably.
Of course, the implications for such potential misdirection vary according to the nature of the service. So, let’s get personal. What about my bank? When I enter the URL of my bank, how do I know the resultant session is a session with my bank?
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added single-factor authentication to the short list of “exceptionally risky” cybersecurity practices that could expose critical infrastructure as well as government and the private sector entities to devastating cyberattacks.
Simple Mail Transfer Protocol, or SMTP, has easily exploitable security loopholes. Email routing protocols were designed in a time when cryptographic technology was at a nascent stage (the de facto protocol for email transfer, SMTP, is nearly 40 years old now), and therefore security was not an important consideration.
Tobi Metz asked What is a Technologist? in a recent blog post. Tobi joins Tom and Russ on this episode of the Hedge to expand on his answer, and get our thoughts on the question.
Scott Bradner was given his first email address in the 1970’s, and his workstation was the gateway for all Internet connectivity at Harvard for some time. Join Donald Sharp and Russ White as Scott recounts the early days of networking at Harvard, including the installation of the first Cisco router, the origins of comparative performance testing and Interop, and the origins of the SHOULD, MUST, and MAY as they are used in IETF standards today.
Even before it announced that it would seek Chapter 11 bankruptcy, Frontier had a well-deserved reputation for mismanagement and abusive conduct. In an industry that routinely enrages its customers, Frontier was the literal poster-child for underinvestment and neglect, an industry leader in outages and poor quality of service, and the inventor of the industry’s most outrageous and absurd billing practices. —EFF
Observability matters. You should care about it. And vendors need to stop trying to confuse people into buying the same old bullshit tools by smooshing them together and slapping on a new label. Exactly how long do they expect to fool people for, anyway? —Charity
As we all know, RPKI is getting a lot of attention and traction nowadays. At the RIPE NCC, we operate one of the five Trust Anchors, a hosted RPKI service, and one of the Validator software packages. A big responsibility that we don’t take lightly. We’re constantly improving code and procedures to ensure we’re following the latest RFC and best practices. Also, security is of key(!) importance. —Nathalie Trenaman
Technology always evolves and I’ve been reading about where scientists envision the evolution of 5G. The first generation of 5G, which will be rolled out over the next 3-5 years, is mostly aimed at increasing the throughput of cellular networks. According to Cisco, North American cellular data volumes are growing at a torrid 36% per year, and even faster than that in some urban markets where the volumes of data are doubling every two years. The main goal of first-generation 5G is to increase network capacity to handle that growth. —Doug Dawson
The MITRE ATT&CK framework, launched in 2015, has become the de facto method for cataloging attacks and understanding an organization’s defensive capabilities. This information is also useful to risk professionals, who are charged with aiding organizations in understanding which attacks are the most damaging and how often they might happen. —Jack Freund
Instead of being prescriptive, since one setup may not fit all, I asked our Community Trainers and my APNIC infrastructure colleagues to share their setups and common practices to manage the needs of their networks and staff from the confines of their home. I’ve summarized this below. —Tashi Phuntsho
By the beginning of 2019, it had become obvious that we needed to reassess our technical infrastructure, operational procedures and engineering capacity, as the original design and infrastructure had not taken these emerging requirements into account. Our priority was then set to increase the resilience and security of the RPKI Trust Anchor and Certificate Authority, in order to have a system that can be fully trusted and relied upon by network operators. —Felipe Victolla Silveira
The software development industry’s increasing reliance on open source components has led to a rise in awareness of open source security vulnerabilities, resulting in a drastic increase in the number of discovered open source vulnerabilities, as WhiteSource’s annual report, “The State of Open Source Security Vulnerabilities,” shows. —Jeffrey Martin
…as always, I’ve saved potentially controversial articles for the end…
The scientific revolution that has improved our lives in so many wonderful ways is based on the fundamental principle that theories about the world we live in should be tested rigorously. For example, centuries ago, more than 2 million sailors died from scurvy, a ghastly disease that is now known to be caused by a prolonged vitamin C deficiency. —Gary Smith
The coronavirus crisis has, once more, reminded us all of how much we live in an interdependent world in which what happens in one part of the globe has serious impacts in many other places, and how each of our own actions potentially have implications and importance for the well-being of multitudes of others around us, both near and far. —Richard Ebeling
There is an urgent-care doctor in Bakersfield, California, by the name of Dan Erickson, and he and his business partner Artin Massahi posted a video on YouTube a few days ago making the case for herd immunity and an end to the economic shutdowns. The video was widely shared across the internet, but only for a short time. —Scott McCay
Dispatch helps us effectively manage security incidents by deeply integrating with existing tools used throughout an organization (Slack, GSuite, Jira, etc.). Dispatch leverages the existing familiarity of these tools to provide orchestration instead of introducing another tool. —Kevin Glisson, Marc Vilanova, Forest Monsen
No ideal hash function exists, of course, but each aims to operate as close to the ideal as possible. Given that (most) hash functions return fixed-length values and the range of values is therefore constrained, that constraint can practically be ignored. —Jeff M Lowery
Now we can all appreciate at a fundamental level what it feels like in the datacenter most days, and why Ethernet switch ASIC makers are all trying to push the bandwidth envelope. —Timothy Prickett Morgan
Extending the Internet of Things (IoT) everywhere on the planet comes down to two essential factors, network availability and cost. Over 20 new companies promise to lower IoT satellite equipment and monthly service pricing by leveraging mass market production and using constellations of low-cost low-flying “nanosatellites” (the size of a wine bottle box or smaller) to collect data from devices in the remotest part of the world – or at least outside of cell phone tower range. —Doug Mohney
The last few weeks have reinforced the importance of modern communication networks to societies. Health care providers, schools, governments, and businesses all rely on networks that enable us to connect and collaborate remotely. Had we encountered a similar pandemic ten years ago, we would not have been able to continue our activities on the level that is possible today. —Juha Holkkola
At the beginning of March 2020, Fifth Domain reported that Colorado-based aerospace, automotive and industrial parts manufacturer Visser Precision LLC had suffered a DoppelPaymer ransomware infection. Those behind this attack ultimately published information stolen from some of Visser’s customers. Those organizations included defense contractors Lockheed Martin, General Dynamics, Boeing and SpaceX. —David Bisson
As of this writing, the long-term effects of the coronavirus pandemic remain uncertain. But one possible consequence is an acceleration of the end of the megacity era. In its place, we may now be witnessing the outlines of a new, and necessary, dispersion of population, not only in the wide open spaces of North America and Australia, but even in the megacities of the developing world. —Joel Kotkin
Many network operators have a regulatory requirement to incorporate Lawful Interception (LI) capabilities into their networks, so that Law Enforcement Agencies (LEAs) can perform authorized electronic surveillance of specific target individuals. —Shane Alcock
Yet for many businesses, managing an entirely remote workforce is completely new, which means they may lack the processes, policies, and technologies that enable employees to work from home safely and securely. In addition, many employees may be unfamiliar or uncomfortable with the idea of working from home. As a result, organizations are scrambling to quickly roll out security awareness initiatives that enable their workforce to work from home safely and securely. —Lance Spitzner
The 5G story is everywhere in the American press these days, and not just the American press. You can barely turn around to scratch some needy body part without encountering another article about the wireless telecommunications technology. But the stovepiping in this coverage—the narrowing of the questions asked or answered—is acute. —Adam Garfinkle
First observed in 2009, Slow Drip attacks hit the world stage in a dramatic fashion in early-2014, wreaking havoc on the important middle-level infrastructure of the DNS, particularly on ISPs. Japanese service provider QTNet described the disruption not just of caching resolvers, but of load balancers too. —Renée Burton
A system is more than its central processor, and perhaps at no time in history has this been more true than right now. Except, perhaps, in the future spanning out beyond the next decade until CMOS technologies finally reach their limits. Looking ahead, all computing will be hybrid, using a mix of CPUs, GPUs, FPGAs, and other forms of ASICs that run or accelerate certain functions in applications. —Timothy Prickett Morgan
Late last year saw the re-emergence of a nasty phishing tactic that allows the attacker to gain full access to a user’s data stored in the cloud without actually stealing the account password. The phishing lure starts with a link that leads to the real login page for a cloud email and/or file storage service. Anyone who takes the bait will inadvertently forward a digital token to the attackers that gives them indefinite access to the victim’s email, files and contacts — even after the victim has changed their password. —Brian Krebs
But what if, instead of focusing on Big Tech’s sins of commission, we paid equal attention to its sins of omission—the failures, the busts, the promises unfulfilled? The past year has offered several lurid examples. WeWork, the office-sharing company that claimed it would reinvent the workplace, imploded on the brink of a public offering. —Derek Thompson
In the past half decade, a tremendous amount of effort has been put into securing Internet communications. TLS has evolved to version 1.3 and various parts of the Web platform have been conditioned to require a secure context. Let’s Encrypt was established to lower the barrier to getting a certificate, and work continues to make secure communication easy to deploy, easy to use, and eventually the only option. —Mark Nottingham
There has never been a more critical time when experienced infosec professionals are needed. From targeted intrusions, ransomware outbreaks, and relentless cyber-crime attacks, every industry is racing to build infosec muscle. It is said that it takes 10,000 hours to make an expert. —John Lambert
When acquiring big-ticket cybersecurity solutions, especially those that have hardware attached, buyers must remember that these solutions require a lot of coordination and advanced skills to utilize them correctly. Deploying a sophisticated cybersecurity solution doesn’t take place in a matter of days. You must build out advanced use cases, baseline the technology in your environment, then update and configure it to the risks your business is most likely to face. It’s a process that takes several weeks or even months. —Chris Schueler
Unfortunately, email is unprepared for today’s threats, because it was designed nearly 40 years ago when its eventual global reach and security challenges were unimaginable. Decades of work by the email industry has largely contained spam, but phishing and email-based malware remain enormous threats, with email involved in over 90% of all cyberattacks, according to various estimates. —Seth Blank