<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
xmlns:podcast="https://podcastindex.org/namespace/1.0"
xmlns:rawvoice="https://blubrry.com/developer/rawvoice-rss/"
>

<channel>
	<title>SECURITY &#8211; rule 11 reader</title>
	<atom:link href="https://rule11.tech/category/security/feed/" rel="self" type="application/rss+xml" />
	<link>https://rule11.tech</link>
	<description>culture eats technology for breakfast</description>
	<lastBuildDate>Mon, 18 Mar 2024 14:13:04 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/rule11.tech/wp-content/uploads/cropped-rule11-logo-square.png?fit=32%2C32&#038;ssl=1</url>
	<title>SECURITY &#8211; rule 11 reader</title>
	<link>https://rule11.tech</link>
	<width>32</width>
	<height>32</height>
</image> 
	<atom:link rel="hub" href="https://pubsubhubbub.appspot.com/" />
	<podcast:locked>yes</podcast:locked>
	<itunes:author>Russ White</itunes:author>
	<itunes:explicit>false</itunes:explicit>
	<itunes:image href="https://rule11.tech/wp-content/plugins/powerpress/itunes_default.jpg" />
	<itunes:type>episodic</itunes:type>
	<itunes:owner>
		<itunes:name>Russ White</itunes:name>
	</itunes:owner>
	<copyright>Russ White</copyright>
	<podcast:license>Russ White</podcast:license>
	<podcast:medium>podcast</podcast:medium>
	<image>
		<title>SECURITY &#8211; rule 11 reader</title>
		<url>https://rule11.tech/wp-content/plugins/powerpress/rss_default.jpg</url>
		<link>https://rule11.tech/hedge</link>
	</image>
	<itunes:category text="Technology" />
	<rawvoice:rating>TV-G</rawvoice:rating>
	<rawvoice:frequency>Weekly</rawvoice:frequency>
	<podcast:person role="Host" href="https://linkedin.com/in/riw777">Russ White</podcast:person>
	<podcast:podping usesPodping="true" />
<site xmlns="com-wordpress:feed-additions:1">73371701</site>	<item>
		<title>AI Assistants</title>
		<link>https://rule11.tech/ai-assistants/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 18 Mar 2024 14:13:04 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[SKILLS]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=17901</guid>

					<description><![CDATA[<img class="alignnone" src="https://rule11.tech/wp-content/uploads/ai-assistants.png" alt="" width="400" height="160" />

<a href="https://mindmatters.ai/2023/08/meet-mediocrates-when-ai-does-all-the-heavy-mental-lifting/">I have written elsewhere about the danger of AI assistants leading to mediocrity.</a> Humans tend to rely on authority figures rather strongly (see <em>Obedience to Authority</em> by Stanley Milgram as one example), and we often treat “the computer” as an authority figure.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" fetchpriority="high" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/ai-assistants.png?resize=400%2C160&#038;ssl=1" alt="" width="400" height="160" /></p>
<p><a href="https://mindmatters.ai/2023/08/meet-mediocrates-when-ai-does-all-the-heavy-mental-lifting/">I have written elsewhere about the danger of AI assistants leading to mediocrity.</a> Humans tend to rely on authority figures rather strongly (see <em>Obedience to Authority</em> by Stanley Milgram as one example), and we often treat “the computer” as an authority figure.</p>
<p>The problem is, of course, Large Language Models—and AI of all kinds—are mostly pattern-matching machines or <em>Chinese Rooms.</em> A pattern-matching machine can be pretty effective at many interesting things, but it will always be, in essence, a summary of “what a lot of people think.” If you choose the right people to summarize, you might get close to the truth. Finding the right people to summarize, however, is beyond the powers of a pattern-matching machine.</p>
<p>Just because many “experts” say the same thing does not mean the thing is true, valid, or useful.</p>
<p>AI assistants can make people more productive, at least in terms of sheer output. Someone using an AI assistant will write more words per minute than someone who is not. Someone using an AI assistant will write more code daily than someone who is not.</p>
<p>But is it just more, or is it better?</p>
<p>Measuring the mediocratic effect of using AI systems, even as an assistant, is difficult. We have the example of drivers using a GPS, never really learning how to get anyplace (and probably losing all larger sense of geography), but these things are hard to measure.</p>
<p><a href="https://dl.acm.org/doi/10.1145/3576915.3623157">However, a recent research paper on programming and security has shown at least one place where this effect can be measured.</a> Noting that most kinds of social research are problematic (they are hard to replicate, it’s hard to infer valid results accurately, etc.), this one seems well set up and executed, so I’m inclined to put at least some trust in the results.</p>
<p>The researchers asked programmers worldwide to write software to perform six different tasks. They constructed a control group that did not use AI assistants and a test group that did.</p>
<p>The result? In almost every case, participants using the AI assistant wrote much less secure code, including mistakes in building encryption functions, creating a sandbox, allowing SQL injection attacks, local pointers, and integer overflows. Participants made about the same number of mistakes in randomness—a problem not many programmers have taken the time to study—and fewer mistakes in buffer overflows.</p>
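<p>To make one of those mistake classes concrete, here is a minimal Python sketch (my own illustration, not code from the study; the table, rows, and payload are invented) contrasting string-built SQL with a parameterized query:</p>

```python
import sqlite3

# Toy in-memory database, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # a classic injection string

# Vulnerable: concatenation lets the input rewrite the query itself.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safer: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe)  # [('s3cret',)] -- the payload matched every row
print(safe)    # [] -- no user is literally named "' OR '1'='1"
```

<p>The concatenated version returns every secret in the table; the parameterized version returns nothing, because the payload is compared as an ordinary string.</p>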
<p>It is possible, of course, for companies to create programming-specific AI assistants that might resolve these problems. Domain-specific AI assistants will always be more accurate and useful than general-purpose assistants.</p>
<p>Relying on AI assistants improves productivity but also seems to create mediocre results. In many cases, mediocre results will be “good enough.”</p>
<p>But what about when “good enough” isn’t … good enough?</p>
<p>Humans are creatures of habit. We do what we practice. If you want to become a better coder, you need to practice coding—and remember that practice does <em>not</em> make perfect. <em>Perfect practice makes perfect.</em></p>
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17901</post-id>	</item>
		<item>
		<title>Hedge 178: Defined Trust Transport with Kathleen Nichols</title>
		<link>https://rule11.tech/hedge-178/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Fri, 12 May 2023 12:52:17 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=16098</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-178.png" alt="" width="400" height="160" class="alignnone" />

The Internet of Things is still "out there"&#8212;operators and individuals are deploying millions of Internet-connected devices every year. IoT, however, poses some serious security challenges. Devices can be taken over as botnets for DDoS attacks, attackers can take over appliances, etc. While previous security attempts have all focused on increasing password security and keeping things updated, Kathleen Nichols is working on a new solution&#8212;defined trust transport in limited domains.

Join us for this episode of the Hedge with Kathleen to talk about the problems of trusted transport, the work she's putting into finding solutions, and potential use cases beyond IoT.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/hedge-178.png?resize=400%2C160&#038;ssl=1" alt="" width="400" height="160" /></p>
<p>The Internet of Things is still &#8220;out there&#8221;—operators and individuals are deploying millions of Internet-connected devices every year. IoT, however, poses some serious security challenges. Devices can be taken over as botnets for DDoS attacks, attackers can take over appliances, etc. While previous security attempts have all focused on increasing password security and keeping things updated, Kathleen Nichols is working on a new solution—defined trust transport in limited domains.</p>
<p>Join us for this episode of the Hedge with Kathleen to talk about the problems of trusted transport, the work she&#8217;s putting into finding solutions, and potential use cases beyond IoT.</p>
<audio class="wp-audio-shortcode" id="audio-16098-1" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-178.mp3?_=1" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-178.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-178.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-178.mp3"><em>download</em></a></p>
<p>You can find Kathleen at <a href="https://pollere.net/index.html">Pollere, LLC,</a> and her slides on <a href="https://pollere.net/Pdfdocs/slides-114-iotops-defined-trust-transport-00.pdf">DeftT here.</a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-178.mp3" length="67303892" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>46:44</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">16098</post-id>	</item>
		<item>
		<title>Chatbot Attack Vectors</title>
		<link>https://rule11.tech/chatbot-attack-vectors/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 22 Feb 2023 19:43:09 +0000</pubDate>
				<category><![CDATA[ON THE NET]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=15850</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/chatbot-vector.png" alt="" width="400" height="160" class="alignnone" />

My monthly post is up over at Packet Pushers&#8212;

<blockquote><a href="https://packetpushers.net/chatbot-attack-vectors-and-failure-modes-in-networking-and-it/">Machine learning systems “learn” from existing data pools and user interactions and are given “guardrails” by the system’s designers. Let’s look at some possible attack vectors and failure modes of these systems, specifically how training data, interaction with users, and the choice of guardrails might interact with security and privacy.</a></blockquote>]]></description>
										<content:encoded><![CDATA[<p>My monthly post is up over at Packet Pushers&#8212;</p>
<blockquote><p><a href="https://packetpushers.net/chatbot-attack-vectors-and-failure-modes-in-networking-and-it/">Machine learning systems “learn” from existing data pools and user interactions and are given “guardrails” by the system’s designers. Let’s look at some possible attack vectors and failure modes of these systems, specifically how training data, interaction with users, and the choice of guardrails might interact with security and privacy.</a></p></blockquote>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">15850</post-id>	</item>
		<item>
		<title>Hedge 161: Going Dark with Geoff Huston</title>
		<link>https://rule11.tech/hedge-161/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Thu, 12 Jan 2023 19:56:19 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=15732</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-161.png" alt="" width="400" height="160" class="alignnone" />

Encrypt everything! Now! We don't often do well with absolutes like this in the engineering world--we tend to focus on "get it done," and not think very much about the side effects or unintended consequences. What are the unintended consequences of encrypting all traffic all the time? Geoff Huston joins Tom Ammon and Russ White to discuss the problems with going dark.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/hedge-161.png?resize=400%2C160&#038;ssl=1" alt="" width="400" height="160" class="alignnone" />Encrypt everything! Now! We don&#8217;t often do well with absolutes like this in the engineering world&#8211;we tend to focus on &#8220;get it done,&#8221; and not think very much about the side effects or unintended consequences. What are the unintended consequences of encrypting all traffic all the time? Geoff Huston joins Tom Ammon and Russ White to discuss the problems with going dark.</p>
<audio class="wp-audio-shortcode" id="audio-15732-2" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-161.mp3?_=2" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-161.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-161.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-161.mp3"><em>download</em></a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-161.mp3" length="55310984" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>161</itunes:episode>
		<podcast:episode>161</podcast:episode>
		<itunes:title>Going Dark with Geoff Huston</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>38:25</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">15732</post-id>	</item>
		<item>
		<title>Hedge 158: The State of DDoS with Roland Dobbins</title>
		<link>https://rule11.tech/hedge-158/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Thu, 15 Dec 2022 15:51:52 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=15676</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-158.png" alt="" width="400" height="160" class="alignnone" />

DDoS attacks continue to be a persistent threat to organizations of all sizes and in all markets. Roland Dobbins joins Tom Ammon and Russ White to discuss current trends in DDoS attacks, including the increasing scope and scale, as well as the shifting methods used by attackers.]]></description>
										<content:encoded><![CDATA[<p>DDoS attacks continue to be a persistent threat to organizations of all sizes and in all markets. Roland Dobbins joins Tom Ammon and Russ White to discuss current trends in DDoS attacks, including the increasing scope and scale, as well as the shifting methods used by attackers.</p>
<audio class="wp-audio-shortcode" id="audio-15676-3" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-158.mp3?_=3" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-158.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-158.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-158.mp3"><em>download</em></a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-158.mp3" length="73092800" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>50:46</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">15676</post-id>	</item>
		<item>
		<title>Hedge 153: Security Perceptions and Multicloud Roundtable</title>
		<link>https://rule11.tech/hedge-153/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 02 Nov 2022 19:06:16 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[CULTURE]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=15555</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-153.png" alt="" width="400" height="160" class="alignnone" />

Tom, Eyvonne, and Russ hang out at the hedge on this episode. The topics of discussion include our perception of security&#8212;is the way IT professionals treat security and privacy helpful for those who aren't involved in the IT world? Do we discourage users from taking security seriously by making it so complex and hard to use? Our second topic is whether multicloud is being oversold for the average network operator.]]></description>
										<content:encoded><![CDATA[<p>Tom, Eyvonne, and Russ hang out at the hedge on this episode. The topics of discussion include our perception of security&#8212;is the way IT professionals treat security and privacy helpful for those who aren&#8217;t involved in the IT world? Do we discourage users from taking security seriously by making it so complex and hard to use? Our second topic is whether multicloud is being oversold for the average network operator.</p>
<audio class="wp-audio-shortcode" id="audio-15555-4" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-153.mp3?_=4" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-153.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-153.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-153.mp3"><em>download</em></a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-153.mp3" length="51456076" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>153</itunes:episode>
		<podcast:episode>153</podcast:episode>
		<itunes:title>Security Perception and Multicloud</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>35:44</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">15555</post-id>	</item>
		<item>
		<title>On the &#8216;net: Privacy and Networking</title>
		<link>https://rule11.tech/on-the-net-privacy-and-networking/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 24 Oct 2022 17:00:27 +0000</pubDate>
				<category><![CDATA[ON THE NET]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=15533</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/privacy-best-practices.png" alt="" width="400" height="160" class="alignnone" />

The final three posts in my series on privacy for infrastructure engineers are up over at Packet Pushers. While privacy might not seem like a big deal to infrastructure folks, it really is an issue we should all be considering and addressing&#8212;if for no other reason than privacy and security are closely related topics. The primary "thing" you're trying to secure when you think about networking is data&#8212;or rather, various forms of privacy.]]></description>
										<content:encoded><![CDATA[<p>The final three posts in my series on privacy for infrastructure engineers are up over at Packet Pushers. While privacy might not seem like a big deal to infrastructure folks, it really is an issue we should all be considering and addressing&#8212;if for no other reason than privacy and security are closely related topics. The primary &#8220;thing&#8221; you&#8217;re trying to secure when you think about networking is data&#8212;or rather, various forms of privacy.</p>
<blockquote><p><a href="https://packetpushers.net/privacy-and-networking-part-5-the-data-lifecycle/">Focusing on legal defensibility is the wrong way to look at privacy, or rather the wrong end of the stick.</a></p></blockquote>
<blockquote><p><a href="https://packetpushers.net/privacy-and-networking-part-6-essential-questions-for-privacy-best-practices/">What are some best practices network operators can follow to reduce their risk? The simplest way to think about best practices is to think about user rights and risks at each stage of the data lifecycle.</a></p></blockquote>
<blockquote><p><a href="https://packetpushers.net/privacy-and-networking-part-7-dns-queries-and-having-a-breach-plan/">For the final post in this series, I’ll address two topics: the privacy implications of Domain Name System (DNS) queries, and the absolute necessity of having a plan for how to respond to a breach. Let’s start with DNS.</a></p></blockquote>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">15533</post-id>	</item>
		<item>
		<title>Privacy for Providers</title>
		<link>https://rule11.tech/privacy-for-providers/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 11 Jul 2022 17:00:38 +0000</pubDate>
				<category><![CDATA[ON THE NET]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[SKILLS]]></category>
		<category><![CDATA[VIDEO]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=15181</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/privacy-for-providers.png" alt="" width="400" height="160" class="alignnone" />

While this talk is titled <em>privacy for providers,</em> it really applies to just about every network operator. This is meant to open a conversation on the topic, rather than providing definitive answers. I start by looking at some of the kinds of information network operators work with, and whether this information can or should be considered "private." In the second part of the talk, I work through some of the things network operators might want to consider when handling private information.
]]></description>
										<content:encoded><![CDATA[<p>While this talk is titled <em>privacy for providers,</em> it really applies to just about every network operator. This is meant to open a conversation on the topic, rather than providing definitive answers. I start by looking at some of the kinds of information network operators work with, and whether this information can or should be considered &#8220;private.&#8221; In the second part of the talk, I work through some of the things network operators might want to consider when handling private information.</p>
<p><iframe loading="lazy" class="youtube-player" width="640" height="360" src="https://www.youtube.com/embed/4yL6_tKfIfk?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">15181</post-id>	</item>
		<item>
		<title>On Securing BGP</title>
		<link>https://rule11.tech/on-securing-bgp/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Fri, 22 Apr 2022 16:00:08 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=14844</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/fcc-secure-routing.png" alt="" width="400" height="160" class="alignnone" />

<em><a href="https://www.fcc.gov/document/fcc-launches-inquiry-internet-routing-vulnerabilities">The US Federal Communications Commission recently asked for comments on securing Internet routing.</a> While I worked on the responses offered by various organizations, I also put in my own response as an individual, which I've included below.</em>]]></description>
										<content:encoded><![CDATA[<p><em><a href="https://www.fcc.gov/document/fcc-launches-inquiry-internet-routing-vulnerabilities">The US Federal Communications Commission recently asked for comments on securing Internet routing.</a> While I worked on the responses offered by various organizations, I also put in my own response as an individual, which I&#8217;ve included below.</em></p>
<p>I am not providing this answer as a representative of any organization, but rather as an individual with long experience in the global standards and operations communities surrounding the Internet, and with long experience in routing and routing security.</p>
<p>I completely agree with the Notice of Inquiry that “networks are essential to the daily functioning of critical infrastructure [yet they] can be vulnerable to attack” due to insecurities in the BGP protocol. While proposed solutions exist that would increase the security of the BGP routing system, only some of these mechanisms are being widely deployed. This response will consider some of the reasons existing proposals are not deployed and suggest some avenues the Commission might explore to aid the community in developing and deploying solutions.</p>
<p><strong>9: Measuring BGP Security.</strong><br />
At this point, I only know of the systems mentioned in the query for measuring BGP routing security incidents. There have been attempts to build other systems, but none of these systems have been successfully built or deployed. Three problems seem to affect these kinds of systems.</p>
<p>First, there is a general lack of funding for building and maintaining such systems. These kinds of systems require a fair amount of research and creative energy to design, as well as effort to make the networking community aware of them.</p>
<p>Second, building such a system is difficult because of the nature of inter-provider policy. It is often difficult to tell if some change in the Default Free Zone (DFZ) routing is valid or is somehow related to an attack. False positives can have a very negative impact and are hard to detect and guard against.</p>
<p>Third, these kinds of systems generally focus on a single system—routing—while excluding hints and information that can be gained from other systems (particularly the DNS). This is, at least in part, because of the complexity of each individual system, and the difficulty in understanding how to correlate and understand information from overlapping systems.</p>
<p><strong>10: Deployment of BGP Security Measures.</strong><br />
BGP security is divided into at least four different domains right now.</p>
<p>First is the exposure of policies and information through registries and similar mechanisms (such as <em>peeringdb</em> and <em>whois</em>). These mechanisms are generally useful at the initial stages of peering, but are not very helpful in resolving hijacks, mistakes, etc., in near-real-time within the DFZ.</p>
<p>Second is the set of best common practices, such as BCP38, and represented by the MANRS effort. These will be more fully discussed in answer to question 13.</p>
<p>Third is origin validation, currently represented by the RPKI, which will be considered more fully in answering question 11.</p>
<p>Fourth is a more complete security system, currently represented by BGPSEC, which will be considered more fully in answering question 12.</p>
<p><strong>11: The Commission seeks comment on the extent to which RPKI, as implemented by other regional internet registries, effectively prevents BGP hijacking. </strong><br />
The RPKI can effectively block some hijacking events—so long as most providers implement and “pay attention” to the validation process. There are, however, problems with the RPKI system, including—</p>
<ul>
<li>There is no “quality control” over the contents of the RPKI. Other systems, such as the Internet Routing Registries (IRRs), that store policy and origination information have, over time, deteriorated in terms of the quality of information housed there. There is very little research into the quality of information stored in the RPKI, nor do we have any sense about how the quality of this information will stand up over time.</li>
<li>There are some concerns about the centralization of control over resources the RPKI represents. For instance, if a content or transit provider becomes entangled in a contract dispute over some resource with a registry, the registry can use the RPKI system to remove the provider from the Internet, essentially putting the provider out of business. Governments can, in theory, also cause registries to remove a provider’s authorization to use Internet resources. These are areas that may need to be researched and addressed to gain the trust of a larger part of the community.</li>
<li>The RPKI system does not expose any information about a route other than the originator. This leaves open the possibility of an Autonomous System (AS) hijacking a route by simply claiming to be connected to the originating AS, advertising the route even though it cannot reach the destination.</li>
<li>The RPKI system does little to prevent an AS that should not be transiting traffic—end customers such as content providers and “enterprises”—from advertising routes in a way that pulls them into a transit role.</li>
</ul>
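<p>The third point above is easy to see in a toy sketch of origin validation (my own simplification, not real RPKI machinery; actual validators match announcements against covering ROAs with max-length, and the AS numbers and prefix here are invented documentation values):</p>

```python
# Hypothetical ROA table: prefix -> authorized origin AS.
roas = {"192.0.2.0/24": 64500}

def origin_state(prefix, as_path):
    """Origin validation looks only at the last AS in the path."""
    if prefix not in roas:
        return "unknown"
    return "valid" if as_path[-1] == roas[prefix] else "invalid"

# Legitimate announcement from the real origin.
print(origin_state("192.0.2.0/24", [64510, 64500]))         # valid

# Hijack: AS 64666 forges a path ending in the real origin, so
# origin validation alone still reports the route as valid.
print(origin_state("192.0.2.0/24", [64510, 64666, 64500]))  # valid
```

<p>Because only the final AS in the path is checked, the forged path passes exactly as the legitimate one does.</p>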
<p>The RPKI system does appear to be gaining widespread acceptance, and its deployment is increasing in scope.</p>
<p><strong>12: The Commission seeks comment on whether and to what extent network operators anticipate integrating BGPsec-capable routers into their networks. </strong><br />
As far as I know, BGPsec has not been deployed by any provider except on an experimental basis, and no provider has active plans to implement it. BGPsec, in general, fails to provide enough additional security to justify the additional costs associated with its deployment. Specifically—</p>
<ul>
<li>Deploying BGPsec on individual routers requires the BGP speaker to perform complex cryptographic operations. No production router in existence today has the processing power to perform these operations quickly enough to be useful. The only apparent solution to this problem is to build specifically designed hardware to perform these operations—no router includes this hardware today, and no plans are in place to include them. The additional costs incurred to allow individual routers to perform these complex cryptographic operations would be prohibitive.</li>
<li>If BGPsec is run “on the side” by moving the complex cryptographic operations onto a separate device, the cost and complexity of running a network are dramatically increased.</li>
<li>BGPsec only signs the reachable destination (NLRI) and AS Path, which are only two components of a route. There are many other components in a route, such as the next hop and communities, which are just as important to the validity of an individual advertisement but are not covered by BGPsec. The signing of a “route” in BGPsec is a term of convenience rather than a description of what is really signed.</li>
<li>BGPsec will only provide some additional security (BGPsec is not “perfect” from a security perspective) if most providers deploy the technology. This leads to a “chicken and egg” problem.</li>
<li>BGPsec reduces performance by eliminating specific optimizations, such as update packing, which have an important impact on BGP performance and BGP’s consumption of resources.</li>
<li>The additional resources required by BGPsec represent a surface of attack for DDoS attacks against individual routers and, with coordination, against entire networks.</li>
<li>BGPsec “freezes BGP in place” by assuming the best way to secure BGP is to “secure the way BGP works.” Deploying BGPsec would restrict future innovation in routing systems, particularly in the global Internet.</li>
</ul>
<p>To these general problems, there is one further problem—BGPsec does not secure the withdrawal of reachability, only its advertisement. Because of this, BGPsec can only be considered a partial solution, at best, to the problems any BGP security system needs to solve.</p>
<p>Consider a BGP speaker that has received a signed NLRI/AS Path pair (a signed “route”). This BGP speaker can continue advertising this route so long as it appears to be valid<em>—breaking the peering session does not invalidate the route.</em></p>
<p>Hence, the BGP speaker may mistakenly or intentionally <em>replay</em> this signed reachability information until something within the signed pair invalidates the information. There are four ways the signed route may be invalidated:</p>
<ul>
<li>A “better” route is propagated through the system</li>
<li>Some form of “revocation list” is maintained and distributed</li>
<li>Each signed route is given a defined “time-to-live,” after which it is invalidated</li>
<li>The signing key is revoked and/or replaced</li>
</ul>
<p>The first is impractical to guarantee in all situations. The second would involve maintaining a “negative routing table,” which is nearly impossible in practice.</p>
<p><em>The third—adding a time-to-live to BGP reachability information—imposes high operational costs.</em> BGP assumes that so long as a peer advertising a reachable destination maintains the peering session, the destination remains reachable (the route is valid). This assumption replaces the workload of constantly advertising already existing routing information with a single “hello” process to ensure the connection is still valid. A single “hello,” then, is a proxy validating the routing information for hundreds of thousands (potentially millions) of reachable destinations. Routes, in other words, have an implied infinite time-to-live.</p>
<p>Adding a time-to-live to individual routes would mean a BGP speaker must readvertise a given reachable destination periodically for the routing information to continue to be considered valid. According to <a href="https://www.cidr-report.org/as2.0/">this site, there are currently 916,000 IPv4 routes carried by a BGP speaker connected to the Internet</a> (the number varies by location, policies implemented, etc.). Note the analysis below does not consider IPv6 routes, which will probably be more numerous.</p>
<p>The time-to-live attached to any route determines how long the information can be replayed. If the originator sets the timer to 168 hours, the route can be replayed for a week before it is invalidated. It is difficult to say how long any given route should be valid, or what level of replay protection any given route requires. This illustration will assume 24 hours would be an average across many routes—but there are strong incentives to set the time-to-live much shorter, and there is little cost to the originator for doing so.</p>
<p>If each of these routes were given a time-to-live of 24 hours, the typical Internet BGP speaker would need to process about 10 updates/second (with the additional cryptographic processing requirements described above) just to process time-to-live expirations.</p>
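The update rate above is straightforward arithmetic; a minimal sketch, assuming the 916,000-route figure cited earlier and a uniform 24-hour time-to-live:

```python
# Back-of-the-envelope update rate for TTL-based route revalidation.
# Assumptions: the 916,000 IPv4 routes cited above, each re-advertised
# once per 24-hour time-to-live; IPv6 routes are not counted.
ROUTES = 916_000
TTL_SECONDS = 24 * 60 * 60  # 86,400 seconds in a day

updates_per_second = ROUTES / TTL_SECONDS
print(f"{updates_per_second:.1f} updates/second")  # roughly 10.6
```

Note that halving the time-to-live doubles this steady-state rate, which is why the incentive toward shorter replay windows translates directly into higher processing load.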
<p>The impacts of this level of activity in the DFZ—beyond the sheer processing and bandwidth requirements—are wide-ranging. For instance, logging, telemetry, false route detection systems, and the way timers are deployed to dampen and manage high speed flapping events, would all need to be reconsidered and adjusted.</p>
<p><em>The fourth alternative is for the signing key to be revoked when a route is withdrawn. </em></p>
<p>If the operator uses a <em>single key</em> to sign all routes being advertised by the AS, then revoking the key to invalidate a single route requires re-signing and re-advertising <em>every</em> route. Re-advertising every route is a difficult process, fraught with potential failure modes.</p>
<p>If the operator assigns each BGP speaker its own key, then only the keys of the BGP speakers impacted by the withdrawn route must be changed. Hence, only the routes advertised by or through these individual speakers need to be re-advertised into the routing system. However, assigning each BGP speaker an individual key for signing routing information exposes another set of problems.</p>
<p>Key management is an obvious problem with this solution; the exposure of peering information, and the security implications of that exposure, are non-obvious problems. If each BGP speaker on the edge of a network has its own signing key, then outside observers can determine the actual pair of routers used to connect any two autonomous systems. This creates a “map” of points at which the network can be attacked, and is generally an unacceptable exposure of information for most providers.</p>
<p>These issues have, to this point, prevented any serious plans for deploying BGPsec—and will probably continue to do so for the foreseeable future. The very best that can be hoped for is BGPsec deployment in 10–20 years, and even full deployment would not necessarily improve the overall security posture of the global Internet.</p>
<p><strong>13: For network operators that currently participate in MANRS and comply with its requirements, including support for IETF Best Common Practice standards, the Commission seeks comment on the efficacy of such measures for preventing BGP hijacking.</strong></p>
<p>MANRS, BCP38, and peer-to-peer BGP session authentication (such as TCP-AO) should, in theory, be effective against a large part of the unintentional and “unsophisticated” attacks and mistakes that cause large-scale BGP failures. There has been little research attempting to measure the effect of these measures, however, and their impact seems difficult to quantify.</p>
<p>The MANRS vendor program is an effective mechanism for promoting these common-sense practices, although it could probably be ramped up somewhat, and vendors could be more strongly encouraged to participate.</p>
<p>These measures should continue to be promoted through education, presentations, and other means, as they do appear to be improving the overall security posture of the Internet. TCP-AO, BCP38, and MANRS should, in particular, be encouraged and emphasized by all parties within the ecosystem.</p>
<p><strong>14: Commission&#8217;s Role.</strong><br />
The Commission should focus on <em>supporting</em> the community in developing deployable standards and systems to improve the global routing system.</p>
<p><em>First,</em> the Commission can encourage governmental organizations, and organizations funded by government organizations, to “go back to basics” and ask specific questions about <em>what</em> needs to be secured, <em>how </em>it can practically be secured, and <em>what the tradeoffs are.</em></p>
<p>To this point, BGP security efforts have often begun with the question of <em>how we can secure the existing operation of BGP.</em> This is not the right question to ask. Instead, the community needs to be encouraged to step back and understand what needs to be secured. Possible questions might be—</p>
<ul>
<li>What does <em>valid</em> mean in relation to a route? Must it include the <em>entire</em> route, or is “just” the AS Path and reachable destination “enough?”</li>
<li>In relation to the AS Path, is the AS Path given <em>valid</em> in the sense that it exists, and there are no policies preventing the use of this path to reach the given destination?</li>
<li>In relation to the reachable destination, how can aggregation and other forms of alternate origination be supported while still answering the questions posed above?</li>
<li>Will the providers along the path <em>actually use</em> the given path? Can “quality of path” be ensured? If so, how can this be accomplished without incurring unacceptable costs?</li>
<li>How can the effectiveness of the system be measured?</li>
<li>How can a system be designed so that increasing deployment increases security? How can the “tragedy of the commons” and “chicken and egg” problems be avoided?</li>
</ul>
<p><em>Second,</em> the Commission can encourage providers and operators, including large “enterprise” organizations, to participate in the process of understanding and building global routing system security. To this point, only a few providers have participated in the discussion. Quite often, those participating have a narrow perspective, and have been guided by groups asking the wrong question (as above). The scope of enquiry needs to be expanded.</p>
<p>What the Commission, or any other government organization, should <em>not</em> do is to push a solution from the top down. The IETF community is effective at finding solutions for these kinds of problems, and has vast experience in understanding the intended consequences, the unintended consequences, and operational aspects of deploying technologies at the scale of the Internet. Government agencies need to <em>leverage</em> these capacities, rather than trying to override them.</p>
<p>If funding is provided for research in this area, it should begin with some sort of “open research grant,” rather than selecting one solution to fund. Funding should<em> not</em> have an impact on the selection of a technical solution in open standards organizations (such as the IETF). Funding does, however, play a significant role by impacting the availability of implementations, time spent researching problems, time spent supporting a given solution at open meetings, etc.</p>
<p>The community must return to the beginning and find a solution that works by asking the right questions.</p>
<p><strong>15: The Commission seeks comment on the extent to which the effectiveness of BGP security measures may be related to international participation and coordination.</strong><br />
International coordination and cooperation are basic requirements.</p>
<p><strong>16: Costs and Benefits.</strong><br />
Please see the answers above, as some of the costs are considered there.</p>
<p><strong>17: The Commission seeks comment on whether the Commission should encourage industry to prioritize the deployment of BGP security measures within the networks on which critical infrastructure and emergency services rely, as a means of helping industry to control costs otherwise associated with a network-wide deployment. </strong></p>
<p>This is an attractive idea from the perspective of finding places where routing security could be deployed at a smaller scale and in a controlled manner to understand how the system works, make improvements in the system, etc. However, I would be concerned about how these kinds of services can be “separated out” for deployment in an effective way.</p>
<p>This kind of deployment would, however, make the problem of incremental deployment a fundamental requirement of any proposed system, which may at least encourage steps in the right direction.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">14844</post-id>	</item>
		<item>
		<title>Legal and Ethical Aspects of Privacy</title>
		<link>https://rule11.tech/legal-and-ethical-aspects-of-privacy/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Thu, 21 Apr 2022 12:19:08 +0000</pubDate>
				<category><![CDATA[ON THE NET]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=14841</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/privacy-legal.png" alt="" width="400" height="160" class="alignnone" />

My second post on privacy for network engineers is up over at Packet Pushers&#8212;

<blockquote><a href="https://packetpushers.net/privacy-and-networking-part-2-legal-and-ethical-privacy/">Given the arguments from the first article in this series, if privacy should be and is essential—what does the average network engineer do with this information? How does privacy impact network design and operations? To answer this question, we need to look at two other questions.</a></blockquote>
]]></description>
										<content:encoded><![CDATA[<p>My second post on privacy for network engineers is up over at Packet Pushers&#8212;</p>
<blockquote><p><a href="https://packetpushers.net/privacy-and-networking-part-2-legal-and-ethical-privacy/">Given the arguments from the first article in this series, if privacy should be and is essential—what does the average network engineer do with this information? How does privacy impact network design and operations? To answer this question, we need to look at two other questions.</a></p></blockquote>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">14841</post-id>	</item>
		<item>
		<title>Why Privacy?</title>
		<link>https://rule11.tech/why-privacy/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 02 Feb 2022 18:30:28 +0000</pubDate>
				<category><![CDATA[ON THE NET]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=14575</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/why-privacy.png" alt="" width="400" height="160" class="alignnone" />

I've kicked off a series on privacy for network engineers over at Packet Pushers; <a href="https://packetpushers.net/privacy-and-networking-part-1-why-privacy/">the first installment is up now.</a>]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve kicked off a series on privacy for network engineers over at Packet Pushers; <a href="https://packetpushers.net/privacy-and-networking-part-1-why-privacy/">the first installment is up now.</a></p>
<blockquote><p><a href="https://packetpushers.net/privacy-and-networking-part-1-why-privacy/">What does privacy have to do with running a network? Quite a bit. For instance, maintaining privacy is one of the most important reasons to take security seriously—the privacy of confidential company information and the privacy of individual network users.</a></p></blockquote>
<p>There&#8217;s a chapter <a href="https://www.amazon.com/dp/172527048X?tag=riw777-20">in my new book on the topic,</a> as well.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">14575</post-id>	</item>
		<item>
		<title>Hedge 103: BGP Security with Geoff Huston</title>
		<link>https://rule11.tech/hedge-103-bgp-security-with-geoff-huston/</link>
					<comments>https://rule11.tech/hedge-103-bgp-security-with-geoff-huston/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 06 Oct 2021 21:05:24 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=14234</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-103.png" alt="" width="400" height="160" class="alignnone" />

Our community has been talking about BGP security for over 20 years. While MANRS and the RPKI have made some headway in securing BGP, the process of deciding on a method to provide at least the information providers need to make more rational decisions about the validity of individual routes is still ongoing. Geoff Huston joins Alvaro, Russ, and Tom to discuss how we got here and whether we will learn from our mistakes.]]></description>
										<content:encoded><![CDATA[<p>Our community has been talking about BGP security for over 20 years. While MANRS and the RPKI have made some headway in securing BGP, the process of deciding on a method to provide at least the information providers need to make more rational decisions about the validity of individual routes is still ongoing. Geoff Huston joins Alvaro, Russ, and Tom to discuss how we got here and whether we will learn from our mistakes.</p>
<audio class="wp-audio-shortcode" id="audio-14234-5" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-103.mp3?_=5" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-103.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-103.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-103.mp3"><em>download</em></a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/hedge-103-bgp-security-with-geoff-huston/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-103.mp3" length="70050154" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>103</itunes:episode>
		<podcast:episode>103</podcast:episode>
		<itunes:title>BGP Security with Geoff Huston</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>48:39</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">14234</post-id>	</item>
		<item>
		<title>Marketing Wins</title>
		<link>https://rule11.tech/marketing-wins/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 30 Aug 2021 18:07:49 +0000</pubDate>
				<category><![CDATA[CULTURE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=14097</guid>

					<description><![CDATA[<img class="alignnone" src="https://rule11.tech/wp-content/uploads/marketing-wins.png" alt="" width="400" height="160" />

Off-topic post for today …

In the battle between marketing and security, marketing always wins. This topic came to mind after reading an article on using email aliases to control your email—

<blockquote><a href="https://www.popsci.com/set-up-email-alias/">For example, if you sign up for a lot of email newsletters, consider doing so with an alias. That way, you can quickly filter the incoming messages sent to that alias—these are probably low-priority, so you can have your provider automatically apply specific labels, mark them as read, or delete them immediately.</a></blockquote>]]></description>
										<content:encoded><![CDATA[<p>Off-topic post for today …</p>
<p>In the battle between marketing and security, marketing always wins. This topic came to mind after reading an article on using email aliases to control your email—</p>
<blockquote><p><a href="https://www.popsci.com/set-up-email-alias/">For example, if you sign up for a lot of email newsletters, consider doing so with an alias. That way, you can quickly filter the incoming messages sent to that alias—these are probably low-priority, so you can have your provider automatically apply specific labels, mark them as read, or delete them immediately.</a></p></blockquote>
<p>One of the most basic things you can do to increase your security against phishing attacks is to have two email addresses, one you give to financial institutions and another one you give to “everyone else.” It would be nice to have a third for newsletters and marketing, but this won’t work in the real world. Why?</p>
<p>Because it’s very rare to find a company that will keep <em>two</em> email addresses on file for you, one for “business” and another for “marketing.” To give specific examples—my mortgage company sends me both marketing messages in the form of a “newsletter” as well as information about mortgage activity. They only keep one email address on file, though, so they both go to a single email address.</p>
<p>A second example—even worse in my opinion—is PayPal. Whenever you buy something using PayPal, the vendor gets the email address associated with the account. That’s fine—they need to send me updates on the progress of the item I ordered, etc. But they also use this email address to send me newsletters … and PayPal sends any information about account activity <em>to the same email address.</em></p>
<p>Because of the way these things are structured, I cannot separate information about my account from newsletters, phishing attacks, etc. Since modern phishing campaigns are using AI to create the most realistic emails possible, and most folks can’t spot a phish anyway, you’d think banks and financial companies would want to give their users the largest selection of tools to fight against scams.</p>
<p>But they don’t. Why?</p>
<p>Because—if your financial information is mingled with a marketing newsletter, you’ll open the email to see what’s inside … you’ll pay attention. Why spend money helping your users <em>not</em> pay attention to your marketing materials by separating them from “the important stuff?”</p>
<p>When it comes to marketing versus security, marketing always wins. Somehow, we in IT need to do better than this.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">14097</post-id>	</item>
		<item>
		<title>NATs, PATs, and Network Hygiene</title>
		<link>https://rule11.tech/nats-pats-and-network-hygiene/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Tue, 13 Jul 2021 17:00:14 +0000</pubDate>
				<category><![CDATA[DESIGN]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=13906</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/nat-hygiene.png" alt="" width="400" height="160" class="alignnone" />

While reading a research paper on address spoofing from 2019, I ran into this on NAT (really PAT) failures—

<blockquote><a href="https://dl.acm.org/doi/10.1145/3319535.3354232">In the first failure mode, the NAT simply forwards the packets with the spoofed source address (the victim) intact … In the second failure mode, the NAT rewrites the source address to the NAT’s publicly routable address, and forwards the packet to the amplifier. When the server replies, the NAT system does the inverse translation of the source address, expecting to deliver the packet to an internal system. However, because the mapping is between two routable addresses external to the NAT, the packet is routed by the NAT towards the victim.</a></blockquote>]]></description>
										<content:encoded><![CDATA[<p>While reading a research paper on address spoofing from 2019, I ran into this on NAT (really PAT) failures—</p>
<blockquote><p><a href="https://dl.acm.org/doi/10.1145/3319535.3354232">In the first failure mode, the NAT simply forwards the packets with the spoofed source address (the victim) intact … In the second failure mode, the NAT rewrites the source address to the NAT’s publicly routable address, and forwards the packet to the amplifier. When the server replies, the NAT system does the inverse translation of the source address, expecting to deliver the packet to an internal system. However, because the mapping is between two routable addresses external to the NAT, the packet is routed by the NAT towards the victim.</a></p></blockquote>
<p>The authors state 49% of the NATs they discovered in their investigation of spoofed addresses fail in one of these two ways. From what I remember, way back when the first NAT/PAT device (the PIX) was deployed in the real world (I worked in TAC at the time), there was a lot of discussion about what a firewall should do with packets sourced from addresses not matched by any configured policy.</p>
<p>If I have an access list including 192.168.1.0/24, and I get a packet sourced from 192.168.2.24, what should the NAT do? Should it forward the packet, assuming it’s from some valid public IP space? Or should it block the packet because there’s no policy covering this source address?</p>
<p>This is similar to the discussion about whether BGP speakers should send routes to an external peer if there is no policy configured. The IETF (though not all vendors) eventually came to the conclusion that BGP speakers should not advertise to external peers without some form of policy configured.</p>
<p>My instinct is the NATs here are doing the right thing—these packets should be forwarded—but network operators should be aware of this failure mode and <em>configure their intentions explicitly.</em> I suspect most operators don’t realize this is the way most NAT implementations work, and hence they aren’t explicitly filtering source addresses that don’t fall within the source translation pool.</p>
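The “configure your intentions explicitly” advice can be sketched as a toy model—the pool, function name, and drop-by-default policy here are illustrative assumptions, not any vendor’s actual implementation:

```python
from ipaddress import ip_address, ip_network

# Hypothetical drop-by-default source check: only packets whose source
# falls inside the configured inside pool are translated and forwarded;
# anything else is dropped instead of being passed through untouched
# (failure mode one) or mistakenly translated (failure mode two).
INSIDE_POOL = ip_network("192.168.1.0/24")  # pool from the example above

def nat_decision(src: str) -> str:
    if ip_address(src) in INSIDE_POOL:
        return "translate-and-forward"
    return "drop"  # no policy covers this source: state the intention

print(nat_decision("192.168.1.10"))  # translate-and-forward
print(nat_decision("192.168.2.24"))  # drop
```

The point is not the mechanism but the explicit final branch: whatever the implementation does with unmatched sources should be a stated decision, not an accident of the translation logic.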
<p>In the real world, there should also be a box just outside the NATing device running unicast reverse path forwarding (uRPF) checks. This would prevent these sorts of spoofed packets from being forwarded into the DFZ—but uRPF is rarely implemented by edge providers, and most edge-connected operators (enterprises) don’t think about the importance of uRPF to their security.</p>
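A strict uRPF check of the sort described here can be sketched as follows—a toy model in which the dict-based FIB, prefixes, and interface names are all assumptions for illustration:

```python
from ipaddress import ip_address, ip_network

# Strict-mode uRPF, simplified: accept a packet only if the best route
# back to its source address points out the interface it arrived on.
FIB = {
    ip_network("192.168.1.0/24"): "eth0",
    ip_network("10.0.0.0/8"): "eth1",
}

def urpf_pass(src: str, in_iface: str) -> bool:
    addr = ip_address(src)
    matches = [(net, iface) for net, iface in FIB.items() if addr in net]
    if not matches:
        return False  # no route back to the source at all
    # longest-prefix match decides the expected reverse-path interface
    best_iface = max(matches, key=lambda m: m[0].prefixlen)[1]
    return best_iface == in_iface

print(urpf_pass("192.168.1.5", "eth0"))   # True: route back points out eth0
print(urpf_pass("192.168.2.24", "eth0"))  # False: spoofed or misrouted source
```

Strict mode like this only works where routing is symmetric; multihomed edges usually need the looser feasible-path variant, which is one reason uRPF deployment lags.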
<p>All this to say—if you’re running a NAT or PAT, make certain you understand how it works. Filters are tricky in the best of circumstances. NAT and PATs just make filters trickier.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">13906</post-id>	</item>
		<item>
		<title>Illusory Correlation and Security</title>
		<link>https://rule11.tech/illusory-correlation-and-security/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 31 May 2021 17:00:53 +0000</pubDate>
				<category><![CDATA[CULTURE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=13771</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/illusory-correlation.png" alt="" width="400" height="160" class="alignnone" />

Fear sells. Fear of missing out, fear of being an imposter, fear of crime, fear of injury, fear of sickness … we can all think of times when people we know (or worse, people in the throes of the madness of crowds) have made really bad decisions because they were afraid of something. Bruce Schneier has documented this a number of times. For instance: <a href="https://www.schneier.com/essays/archives/2013/05/its_smart_politics_t.html">“it’s smart politics to exaggerate terrorist threats”</a> and “<a href="https://www.schneier.com/blog/archives/2009/11/public_reaction.html">fear makes people deferential, docile, and distrustful, and both politicians and marketers have learned to take advantage of this.”</a>]]></description>
										<content:encoded><![CDATA[<p><strong>Fear sells.</strong> Fear of missing out, fear of being an imposter, fear of crime, fear of injury, fear of sickness … we can all think of times when people we know (or worse, people in the throes of the madness of crowds) have made really bad decisions because they were afraid of something. Bruce Schneier has documented this a number of times. For instance: <a href="https://www.schneier.com/essays/archives/2013/05/its_smart_politics_t.html">“it’s smart politics to exaggerate terrorist threats”</a> and “<a href="https://www.schneier.com/blog/archives/2009/11/public_reaction.html">fear makes people deferential, docile, and distrustful, and both politicians and marketers have learned to take advantage of this.”</a> Here is a paper <a href="https://politicalscience.osu.edu/faculty/jmueller/bathtubs8APSA.pdf">comparing the risk of death in a bathtub to death because of a terrorist attack</a>—bathtubs win.</p>
<p>But while fear sells, the desire to appear unafraid also sells—and it conditions people’s behavior much more than we might think. For instance, we often say of surveillance “if you have done nothing wrong, you have nothing to hide”—a bit of meaningless bravado. What does this latter attitude—“I don’t have anything to worry about”—cause in terms of security?</p>
<p><a href="https://sauvikdas.com/uploads/paper/pdf/10/p1416-das.pdf">Several attempts at researching this phenomenon have come to the same conclusion:</a> average users will often intentionally <em>not</em> use things they see someone they perceive as paranoid using. According to this body of research, people will <em>not</em> use password managers because using one is perceived as being paranoid in some way. <a href="https://www.darkreading.com/endpoint/how-us-shady-geeks-put-others-off-security-/a/d-id/1340378">Theoretically, this effect is caused by illusory correlation, where people associate an action with a kind of person (only bad/scared people would want to carry a weapon).</a> Since we don’t want to be the kind of person we associate with that action, we avoid the action—even though it might make sense.</p>
<p>This is just the flip side of <em>fear sells,</em> of course. Just like we overestimate the possibility of a terrorist attack impacting our lives in a direct, personal way, we also underestimate the possibility of more mundane things, like drowning in a tub, because we either think can control it, or because we don’t think we’ll be targeted in that way, or because we want to signal to the world that we “aren’t one of <em>those people.”</em></p>
<p>Even knowing this is true, however, how can we counter this? How can we convince people to learn to assess risks rationally, rather than emotionally? How can we convince people that the perception of control should not impact your assessment of personal security or safety?</p>
<p>Simplifying design and use of the systems we build would be one—perhaps not-so-obvious—step we can take. The more security is just “automatic,” the more users will become accustomed to deploying security in their everyday lives. Another thing we might be able to do is stop trying to scare people into using these technologies.</p>
<p>In the meantime, just be aware that if you’re an engineer, your use of a technology “as an example” to others can backfire, causing people to <em>not</em> want to use those technologies.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">13771</post-id>	</item>
		<item>
		<title>The Hedge 82: Jared Smith and Route Poisoning</title>
		<link>https://rule11.tech/the-hedge-82-jared-smith-and-route-poisoning/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Thu, 06 May 2021 22:06:15 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=13665</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-082-1.png" alt="" width="400" height="160" class="alignnone" />

Intentionally poisoning BGP routes in the Default-Free Zone (DFZ) would always be a bad thing, right? Actually, this is a fairly common method to steer traffic flows away from and through specific autonomous systems. How does this work, how common is it, and who does this? Jared Smith joins us on this episode of the Hedge to discuss the technique, and his research into how frequently it is used.]]></description>
										<content:encoded><![CDATA[<p>Intentionally poisoning BGP routes in the Default-Free Zone (DFZ) would always be a bad thing, right? Actually, this is a fairly common method to steer traffic flows away from and through specific autonomous systems. How does this work, how common is it, and who does this? Jared Smith joins us on this episode of the Hedge to discuss the technique, and his research into how frequently it is used.</p>
<audio class="wp-audio-shortcode" id="audio-13665-6" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-082.mp3?_=6" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-082.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-082.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-082.mp3"><em>download</em></a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-082.mp3" length="62283998" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>82</itunes:episode>
		<podcast:episode>82</podcast:episode>
		<itunes:title>BGP Route Poisoning</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>43:15</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">13665</post-id>	</item>
		<item>
		<title>Loose Lips</title>
		<link>https://rule11.tech/loose-lips/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 12 Apr 2021 17:00:57 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=13560</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/loose-lips.png" alt="" width="400" height="160" class="alignnone" />

When I was in the military, we were constantly drilled about the problem of <em>Essential Elements of Friendly Information,</em> or <em>EEFIs.</em> What are EEFIs? If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about what action you are taking can be guessed.
]]></description>
					<content:encoded><![CDATA[<p>When I was in the military, we were constantly drilled about the problem of <em>Essential Elements of Friendly Information,</em> or <em>EEFIs.</em> What are EEFIs? If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about what action you are taking can be guessed. </p>
<p>Given enough broad information, an adversary can often guess at details that you really do not want them to know.</p>
<p>What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to craft spear phishing attacks&#8212;</p>
<blockquote><p><a href="https://www.darkreading.com/risk/publicly-available-data-enables-enterprise-cyberattacks/d/d-id/1340550">Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.</a></p></blockquote>
<p>Going back further in time, during World War II, we have&#8212;</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/Loose_lips_might_sink_ships-scaled.jpg?w=600&#038;ssl=1" alt=""  class="alignnone" /></p>
<p>What does all of this mean for the average network engineer concerned about security? Probably nothing different from being just slightly paranoid about your personal security in the first place (way too much modern security is driven by an anti-paranoid mindset, a topic for a future post). Things like&#8212;</p>
<ul>
<li>Don&#8217;t let people know, either through your job description or anything else, that you hold the master passwords for your company, or that your account holds administrator rights.</li>
<li>Don&#8217;t always go to the same watering holes, and don&#8217;t talk about work while there to people you&#8217;ve just met, or even people you see there all the time.</li>
<li>Don&#8217;t talk about when and where you&#8217;re going on vacation. You can talk about it, and share pictures, once you&#8217;re back.</li>
</ul>
<p>If an attacker knows you are going to be on vacation, it&#8217;s a lot easier to create a fake &#8220;emergency,&#8221; tempting you to give out information about accounts, people, and passwords you shouldn&#8217;t. Phishing is primarily a matter of social engineering rather than technical acumen. Countering social engineering is also a social skill, rather than a technical one. You can start by learning to just say less about what you are doing, when you are doing it, and who holds the keys to the kingdom.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">13560</post-id>	</item>
		<item>
		<title>The Insecurity of Ambiguous Standards</title>
		<link>https://rule11.tech/the-insecurity-of-ambiguous-standards/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 29 Mar 2021 17:00:49 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=13485</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/ambiguous-standards.png" alt="" width="400" height="160" class="alignnone" />

Why are networks so insecure?

One reason is we don't take network security seriously. We just don't think of the network as a serious target of attack. Or we think of security as a problem "over there," something that exists in the application realm, that needs to be solved by application developers. Or we think of the consequences of a network security breach as "well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we're already taking care of our security issues." Or we put our trust in the firewall, which sits there like some magic box solving all our problems.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/ambiguous-standards.png?resize=400%2C160&#038;ssl=1" alt="" width="400" height="160" class="alignnone" /></p>
<p>Why are networks so insecure?</p>
<p>One reason is we don&#8217;t take network security seriously. We just don&#8217;t think of the network as a serious target of attack. Or we think of security as a problem &#8220;over there,&#8221; something that exists in the application realm, that needs to be solved by application developers. Or we think of the consequences of a network security breach as &#8220;well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we&#8217;re already taking care of our security issues.&#8221; Or we put our trust in the firewall, which sits there like some magic box solving all our problems.</p>
<p>The problem is&#8211;none of this is true. In any system where <em>overall</em> security is important, defense-in-depth is <strong>the</strong> key to building a secure system. No single part of the system bears the &#8220;primary responsibility&#8221; for “security.” The network is certainly a part of any defense-in-depth scheme that is going to work.</p>
<p>Which means network protocols need to be secure, at least in some sense, as well. I don’t mean “secure” in the sense of privacy—routes are not (generally) personally identifiable information (there are always exceptions, however). But rather “secure” in the sense that they cannot be easily attacked. On-the-wire encryption should prevent anyone from reading the contents of the packet or stream at any time. Network devices like routers and switches should be difficult to break into, which means the protocols they run must be “secure” in the fuzzing sense—there should be no unexpected outputs because you’ve received an unexpected input.</p>
<p>I definitely do <em>not</em> mean path security of any sort. Making certain a packet (or update or whatever else) has followed a specified path is a chimera in packet switched networks. It’s like trying to nail your choice of multicolored gelatin dessert to the wall. Packet switched networks are <em>designed</em> to adapt to changes in the network by rerouting traffic. Get over it.</p>
<p>So why are protocols and network devices so insecure? I recently ran into an interesting piece of research that provides some of the answer. To wit—</p>
<p><a href="https://github.com/nccgroup/RFC-Security-Research/blob/main/NCC%20Group%20RFC%20Security%20Analysis%202021.txt">Our research saw that ambiguous keywords SHOULD and MAY had the second highest number of occurrences across all RFCs. We’ve also seen that their intended meaning is only to be interpreted as such when written in uppercase (whereas often they are written in lowercase). In addition, around 40% of RFCs made no use of uppercase requirements level keywords. These observations point to inconsistency in use of these keywords, and possibly misunderstanding about their importance in a security context. We saw that RFCs relating to Session Initiation Protocol (SIP) made most use of ambiguous keywords, and had the most number of implementation flaws as seen across SIP-based CVEs. While not conclusive, this suggests that there may be some correlation between the level of ambiguity in RFCs and subsequent implementation security flaws.</a></p>
<p>In other words, ambiguous language leads to ambiguous implementations which leads to security flaws in protocols.</p>
<p>The solution for this situation might be just this—specify protocols more rigorously. But simple solutions rarely admit reality within their scope. It’s easy to say “build more precise specifications”—so why aren’t our specifications actually more precise?</p>
<p>In a word: politics.</p>
<p>For every RFC I’ve been involved in drafting, reviewing, or otherwise getting through the IETF, there are two reasons for each MAY or SHOULD therein. The first is someone has thought of a use-case where an implementor or operator might want to do something that would otherwise not be allowed by a MUST. In these cases, everyone looks at the proposed MAY or SHOULD, thinks about how not doing it might be useful, and then thinks … “this isn’t so bad, the available functionality is a good thing, and there’s no real problem I can see with making this a MAY or SHOULD.” In other words, we can think of possible worlds where someone might want to do something, so we allow them to do it. Call this the “freedom principle.”</p>
<p>The second reason is that multiple vendors have multiple customers who want to do things different ways. When the two vendors clash in the realm of standards, the result is often a set of interlocking MAYs and SHOULDs that allow two implementors to build solutions that are interoperable in the main, but not along the edges, and that satisfy both of their existing customers’ requirements. Call this the “big check principle.”</p>
<p>The problem with these situations is—the specification has an undetermined set of MAYs and SHOULDs that might interlock in unforeseen ways, resulting in unanticipated variances in implementations that ultimately show up as security holes.</p>
<p>Okay—now that I’ve described the problem, what can you do about it? One thing is to <em>simplify.</em> Stop putting everything into a small set of protocols. The more functionality you pour into a protocol or system, the harder it is to secure. Complexity is the enemy of security (and privacy!).</p>
<p>As for the political problems, these are human-scale, which means they are larger than any network you can ever build—but I’ll ponder this more and get back to you if I come up with any answers.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">13485</post-id>	</item>
		<item>
		<title>The Hedge 66: Tyler McDaniel and BGP Peer Locking</title>
		<link>https://rule11.tech/hedge-66-tyler-mcdaniel-and-bgp-peer-locking/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Thu, 14 Jan 2021 21:46:21 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=13031</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-066.png" alt="" width="400" class="alignnone" />

Tyler McDaniel joins Eyvonne, Tom, and Russ to discuss a study on BGP peerlocking, which is designed to prevent route leaks in the global Internet. From the study abstract:

<blockquote><a href="https://arxiv.org/abs/2006.06576">BGP route leaks frequently precipitate serious disruptions to interdomain routing. These incidents have plagued the Internet for decades while deployment and usability issues cripple efforts to mitigate the problem. Peerlock, introduced in 2016, addresses route leaks with a new approach. Peerlock enables filtering agreements between transit providers to protect their own networks without the need for broad cooperation or a trust infrastructure.</a></blockquote>]]></description>
										<content:encoded><![CDATA[<p>Tyler McDaniel joins Eyvonne, Tom, and Russ to discuss a study on BGP peerlocking, which is designed to prevent route leaks in the global Internet. From the study abstract:</p>
<blockquote>
<p><a href="https://arxiv.org/abs/2006.06576">BGP route leaks frequently precipitate serious disruptions to interdomain routing. These incidents have plagued the Internet for decades while deployment and usability issues cripple efforts to mitigate the problem. Peerlock, introduced in 2016, addresses route leaks with a new approach. Peerlock enables filtering agreements between transit providers to protect their own networks without the need for broad cooperation or a trust infrastructure.</a></p></blockquote>
<audio class="wp-audio-shortcode" id="audio-13031-7" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-066.mp3?_=7" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-066.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-066.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-066.mp3"><em>download</em></a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-066.mp3" length="37415453" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>66</itunes:episode>
		<podcast:episode>66</podcast:episode>
		<itunes:title>BGP Peer Locking with Tyler McDaniel</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>38:58</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">13031</post-id>	</item>
		<item>
		<title>Current Work in BGP Security</title>
		<link>https://rule11.tech/current-work-in-bgp-security/</link>
					<comments>https://rule11.tech/current-work-in-bgp-security/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 07 Dec 2020 18:00:23 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12886</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/current-bgp-sec.png" alt="" width="400" height="160" class="alignnone" />

I've been chasing BGP security since before the publication of the soBGP drafts, way back in the early 2000s (that's almost 20 years for those who are math challenged). The most recent news largely centers on the RPKI, which is used to ensure the AS originating an advertisement is authorized to do so (or rather "owns" the resource or prefix). If you are not "up" on what the RPKI does, or how it works, you might find <a href="https://rule11.tech/securing-bgp-10/">this old blog post useful</a>—it's actually the tenth post in a ten-post series on the topic of BGP security.]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve been chasing BGP security since before the publication of the soBGP drafts, way back in the early 2000s (that&#8217;s almost 20 years for those who are math challenged). The most recent news largely centers on the RPKI, which is used to ensure the AS originating an advertisement is authorized to do so (or rather &#8220;owns&#8221; the resource or prefix). If you are not &#8220;up&#8221; on what the RPKI does, or how it works, you might find <a href="https://rule11.tech/securing-bgp-10/">this old blog post useful</a>—it&#8217;s actually the tenth post in a ten-post series on the topic of BGP security.</p>
<p>Recent news in this space largely centers around the ongoing deployment of the RPKI. According to <a href="https://www.wired.com/story/bgp-routing-manrs-google-fix/">Wired, Google and Facebook</a> have both recently <a href="https://www.manrs.org">adopted MANRS,</a> and are adopting RPKI. It might not seem like autonomous systems along the edge adopting BGP security best practices and the RPKI can make much of a difference, but the &#8220;heavy hitters&#8221; among the content providers can play a pivotal role here by refusing to accept routes that appear to be hijacked. This not only helps these providers and their customers directly—a point the Wired article makes—it also helps the &#8217;net in a larger way by blocking attackers&#8217; access to at least some of the &#8220;big fish&#8221; in terms of traffic.</p>
<p>Leslie Daigle, over at the Global Cyber Alliance—an organization I&#8217;d never heard of until I saw this—has a post up explaining exactly how deploying the RPKI in an edge AS can make a big difference in the service level from a customer&#8217;s perspective. Leslie is looking for operators who will fill out a survey on the routing security measures they deploy. If you operate a network that has any sort of BGP presence in the default-free zone (DFZ), it&#8217;s worth taking a look and filling the survey out.</p>
<p>One of the various problems with routing security is just being able to see what&#8217;s in the RPKI. If you have a problem with your route in the global table, you can always go look at a route view server or looking glass (a topic I will cover in some detail in an upcoming live webinar over on Safari Books Online—I think it&#8217;s scheduled for February right now). But what about the RPKI? RIPE NCC has released a new tool called the JDR:</p>
<blockquote><p><a href="https://labs.ripe.net/Members/luuk_hendriks/introducing-jdr-explore-inspect-and-troubleshoot-the-rpki">Just like RP software, JDR interprets certificates and signed objects in the RPKI, but instead of producing a set of Verified ROA Payloads (VRPs) to be fed to a router, it annotates everything that could somehow cause trouble. It will go out of its way to try to decode and parse objects: even if a file is clearly violating the standards and should be rejected by RP software, JDR will try to process it and present as much troubleshooting information to the end-user afterwards.</a></p></blockquote>
<p><a href="https://jdr.nlnetlabs.nl/">You can find the JDR here.</a></p>
<p>Finally, the folks at APNIC, working with NLnet Labs, have taken a page from the BGP playbook and proposed an opaque object for the RPKI, extending it beyond &#8220;just prefixes.&#8221; They&#8217;ve created a new object, the <em>Resource Tagged Attestation,</em> or RTA, which can carry &#8220;any arbitrary file.&#8221; <a href="https://blog.apnic.net/2020/11/20/moving-rpki-beyond-routing-security/">They have a post up explaining the rationale and work here.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/current-work-in-bgp-security/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12886</post-id>	</item>
		<item>
		<title>The Hedge 59: Dan Blum and Rational Cybersecurity</title>
		<link>https://rule11.tech/the-hedge-59/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 11 Nov 2020 18:00:20 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12765</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-059.png" alt="" width="400" height="160" class="alignnone" />

Security has taken on an aura of mystery to many network engineers&#8212;why can't we approach security in the way we do many other topics, rationally? It turns out we can. Dan Blum joins Tom Ammon and Russ White to discuss the concepts and techniques behind rational cybersecurity.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/hedge-059.png?resize=400%2C160&#038;ssl=1" alt="" width="400" height="160" class="alignnone" /></p>
<p>Security has taken on an aura of mystery to many network engineers&#8212;why can&#8217;t we approach security in the way we do many other topics, rationally? It turns out we can. Dan Blum joins Tom Ammon and Russ White to discuss the concepts and techniques behind rational cybersecurity.</p>
<audio class="wp-audio-shortcode" id="audio-12765-8" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-059.mp3?_=8" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-059.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-059.mp3</a></audio>
<p><em><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-059.mp3">download</a></em></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/Hedge-059.mp3" length="30891905" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>59</itunes:episode>
		<podcast:episode>59</podcast:episode>
		<itunes:title>Rational Cybersecurity with Dan Blum</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>32:11</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">12765</post-id>	</item>
		<item>
		<title>Random Thoughts on IoT</title>
		<link>https://rule11.tech/random-thoughts-on-iot/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 26 Oct 2020 17:00:39 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12713</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/iot-threat.png" alt="" width="400" height="160" class="alignnone" />

Let's play the analogy game. The Internet of Things (IoT) is probably going to end up being like ... a box of chocolates, because you never do know what you are going to get? a big bowl of spaghetti with a serious lack of meatballs? Whatever it is, the IoT should have network folks worried about security. There is, of course, the problem of IoT devices being attached to random places on the network, exfiltrating personal data back to a cloud server you don't know anything about. Some of these devices might be rogue, of course, such as a Raspberry Pi attached to some random place in the network. Others might be more conventional, such as those new exercise machines the company just brought into the gym that are sending personal information in the clear to an outside service.]]></description>
										<content:encoded><![CDATA[<p>Let&#8217;s play the analogy game. The Internet of Things (IoT) is probably going to end up being like &#8230; a box of chocolates, because you never do know what you are going to get? a big bowl of spaghetti with a serious lack of meatballs? Whatever it is, the IoT should have network folks worried about security. There is, of course, the problem of IoT devices being attached to random places on the network, exfiltrating personal data back to a cloud server you don&#8217;t know anything about. Some of these devices might be rogue, of course, such as a Raspberry Pi attached to some random place in the network. Others might be more conventional, such as those new exercise machines the company just brought into the gym that are sending personal information in the clear to an outside service.</p>
<p><a href="https://www.darkreading.com/risk/how-to-pinpoint-rogue-iot-devices-on-your-network/d/d-id/1339145">While there is research into how to tell the difference between IoT and &#8220;larger&#8221; devices, the reality is spoofing and blurred lines will likely make such classification difficult.</a> What do you do with a virtual machine that looks like a Raspberry Pi running on a corporate laptop for completely legitimate reasons? Or what about the Raspberry Pi-like device that can run a fully operational Windows stack, including &#8220;background noise&#8221; applications that make it look like a normal compute platform? These problems are, unfortunately, not easy to solve.</p>
<p><a href="https://www.darkreading.com/edge/theedge/do-standards-exist-that-certify-secure-iot-systems/b/d-id/1339205">To make matters worse, there are no standards by which to judge the security of an IoT device.</a> Even if the device manufacturer&#8211;think about the new gym equipment here&#8211;has the best intentions towards security, there is almost no way to determine if a particular device is designed and built with good security. <a href="https://arstechnica.com/information-technology/2020/10/thousands-of-infected-iot-devices-used-in-for-profit-anonymity-service/">The result is that IoT devices are often infected and used as part of a botnet for DDoS, or other, attacks.</a> </p>
<p>What are our options here from a network perspective? The most common answer to this is segmentation&#8211;and segmentation is, in fact, a good start on solving the problem of IoT. But we are going to need a lot more than segmentation to avert certain disaster in our networks. Once these devices are segmented off, what do we do with the traffic? Do we just allow it all (&#8220;hey, that&#8217;s an IoT device, so let it send whatever it wants to&#8230; after all, it&#8217;s been segmented off the main network anyway&#8221;)? Do we try to manage and control what information is being exfiltrated from our networks? Is machine learning going to step in to solve these problems? Can it, really? </p>
<p>To put it another way&#8211;the attack surface we&#8217;re facing here is huge, and the smallest mistake can have very bad ramifications in individual lives. Take, for instance, the problem of data and IoT devices in abusive relationships. Relationships are dynamic; how is your company going to know when an employee is in an abusive relationship, and thus when certain kinds of access should be shut off? There is so much information here it seems almost impossible to manage it all.</p>
<p>It looks, to me, like the future is going to be a bit rough and tumble as we learn to navigate this new realm. Vendors will have lots of good ideas (look at Mist&#8217;s capabilities in tracking down the location of rogue devices, for instance), but in the end it&#8217;s going to be the operational front line that is going to have to figure out how to manage and deploy networks where there is a broad blend of ultimately untrustable IoT devices and more traditional devices.</p>
<p>Now would be the time to start learning about security, privacy, and IoT if you haven&#8217;t started already.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12713</post-id>	</item>
		<item>
		<title>Underhanded Code and Automation</title>
		<link>https://rule11.tech/underhanded-code-and-automation/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 12 Oct 2020 17:00:42 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[SKILLS]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12654</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/underhanded-code.jpg" alt="" width="400" height="160" class="alignnone size-full wp-image-12653" />

So, software is eating the world&#8212;and you thought this was going to make things simpler, right? If you haven't found the tradeoffs, you haven't looked hard enough. I should trademark that or something! :-) While code quality and supply chain security are common concerns, there are a lot of little "side trails" organizations do not tend to think about. <a href="https://www.ida.org/-/media/feature/publications/i/in/initial-analysis-of-underhanded-source-code/d-13166.pdf">One such was recently covered in a paper on <em>underhanded code,</em> which is code designed to pass a standard review but which can be used to harm the system later on.</a>]]></description>
										<content:encoded><![CDATA[<p>So, software is eating the world—and you thought this was going to make things simpler, right? If you haven&#8217;t found the tradeoffs, you haven&#8217;t looked hard enough. I should trademark that or something! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> While code quality and supply chain security are common concerns, there are a lot of little &#8220;side trails&#8221; organizations do not tend to think about. <a href="https://www.ida.org/-/media/feature/publications/i/in/initial-analysis-of-underhanded-source-code/d-13166.pdf">One such was recently covered in a paper on <em>underhanded code,</em> which is code designed to pass a standard review but which can be used to harm the system later on.</a> For instance, you might see at some spot—</p>
<pre><code>if (buffer_size=REALLYLONGDECLAREDVARIABLENAMEHERE) {
/* do some stuff here */
} /* end of if */</code></pre>
<p>Can you spot what the problem might be? In C, the <code>=</code> is different from the <code>==</code>. Which should it really be here? Even astute reviewers can easily miss this kind of detail—not least because it could be an intentional construction. Using a strongly typed language can help prevent this kind of thing, like Rust <a href="https://rule11.tech/the-hedge-podcast-55-nick-carter-and-flock-networks/">(listen to this episode of the Hedge for more information on Rust),</a> but nothing beats having really good code formatting rules, even if they are apparently arbitrary, for catching these things.</p>
<p>The paper above lists these—</p>
<ul>
<li>Use syntax highlighting and typefaces that clearly distinguish characters. You should be able to easily tell the difference between a lowercase l and a 1.</li>
<li>Require all comments to be on separate lines. This is actually pretty hard in C, however.</li>
<li>Prettify code into a standard format not under the attacker&#8217;s control.</li>
<li>Use compiler warnings and static analysis.</li>
<li>Forbid unneeded dangerous constructions.</li>
<li>Use runtime memory corruption detection.</li>
<li>Use fuzzing.</li>
<li>Watch your test coverage.</li>
</ul>
<p>Not all of these are directly applicable for the network engineer dealing with automation, but they do provide some good pointers, or places to start. A few more&#8230;</p>
<p><em>Yoda assignments</em> are named after Yoda&#8217;s constant placement of the subject after the verb (or in a split infinitive)—&#8221;succeed you will&#8230;&#8221; It&#8217;s not <em>technically </em>wrong in terms of grammar, but it is just hard enough to understand that it makes you listen carefully and think a bit harder. In software development, the variable taking the assignment should be on the left, and the thing being assigned should be on the right. Reversing these is a Yoda assignment; it&#8217;s technically correct, but it&#8217;s harder to read.</p>
<p><em>Arbitrary standardization</em> is useful when there are many options that ultimately result in the same outcome. Don&#8217;t let options proliferate just because you can.</p>
<p><em>Use macros!</em></p>
<p>There are probably plenty more, but this is an area where we really are not paying attention right now.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12654</post-id>	</item>
		<item>
		<title>Reducing RPKI Single Point of Takedown Risk</title>
		<link>https://rule11.tech/reducing-rpki-single-point-of-takedown-risk/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 21 Sep 2020 17:00:30 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12555</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/reducing-rpki-risk.png" alt="" width="400" height="160" class="alignnone" />

The RPKI, for those who do not know, ties the origin AS to a prefix using a certificate (the Route Origin Authorization, or ROA) signed by a third party. The third party, in this case, is validating that the AS in the ROA is authorized to advertise the destination prefix in the ROA—if ROAs were self-signed, the security would be no better than simply advertising the prefix in BGP. Who should be able to sign these ROAs? The assigning authority makes the most sense—the Regional Internet Registries (RIRs), since they (should) know which company owns which set of AS numbers and prefixes.

The general idea makes sense—you should not accept routes from “just anyone,” as they might be advertising the route for any number of reasons. An operator could advertise routes to source spam or phishing emails, or some government agency might advertise a route to redirect traffic, or block access to some web site. But … if you haven’t found the tradeoffs, you haven’t looked hard enough. Security, in particular, is replete with tradeoffs.]]></description>
										<content:encoded><![CDATA[<p>The RPKI, for those who do not know, ties the origin AS to a prefix using a certificate (the Route Origin Authorization, or ROA) signed by a third party. The third party, in this case, is validating that the AS in the ROA is authorized to advertise the destination prefix in the ROA—if ROAs were self-signed, the security would be no better than simply advertising the prefix in BGP. Who should be able to sign these ROAs? The assigning authority makes the most sense—the Regional Internet Registries (RIRs), since they (should) know which company owns which set of AS numbers and prefixes.</p>
<p>The general idea makes sense—you should not accept routes from “just anyone,” as they might be advertising the route for any number of reasons. An operator could advertise routes to source spam or phishing emails, or some government agency might advertise a route to redirect traffic, or block access to some web site. But … if you haven’t found the tradeoffs, you haven’t looked hard enough. Security, in particular, is replete with tradeoffs.</p>
<p>Every time you deploy some new security mechanism, you create some new attack surface—sometimes more than one. Deploy a stateful packet filter to protect a server, and the device itself becomes a target of attack, including buffer overflows, phishing attacks to gain access to the device as a launch-point into the private network, and the holes you have to punch in the filters to allow services to work. What about the RPKI?</p>
<p>When the RPKI was first proposed, one of my various concerns was the creation of new attack surfaces. One specific attack surface is the control a single organization—the issuing RIR—has over the very existence of the operator. Suppose you start a new content provider. To get the new service up and running, you sign a contract with an RIR for some address space, sign a contract with some upstream provider (or providers), set up your servers and service, and start advertising routes. For whatever reason, your service goes viral, netting millions of users in a short span of time.</p>
<p>Now assume the RIR receives a complaint against your service for whatever reason—the reason for the complaint is not important. This places the RIR in the position of a prosecutor, defense attorney, and judge—the RIR must somehow figure out whether or not the charges are true, figure out whether or not taking action on the charges is warranted, and then take the action they’ve settled on.</p>
<p>In the case of a government agency (or a large criminal organization) making the complaint, there is probably going to be little the RIR can do other than simply revoke your certificate, pulling your service off-line.</p>
<p>Overnight your business is gone. You can drag the case through the court system, of course, but this can take years. In the meantime, you are losing users, other services are imitating what you built, and you have no money to pay the legal fees.</p>
<p>A true story—without the names. I once knew a man who worked for a satellite provider, let’s call them SATA. Now, SATA’s leadership decided they had no expertise in accounts receivables, and they were spending too much time on trying to collect overdue bills, so they outsourced the process. SATB, a competing service, decided to buy the firm SATA outsourced their accounts receivables to. You can imagine what happens next… The accounting firm worked as hard as it could to reduce the revenue SATA was receiving.</p>
<p>Of course, SATA sued the accounting firm, but before the case could make it to court, SATA ran out of money, laid off all their people, and shut their service down. SATA essentially went out of business. They won some money later, in court, but … whatever money they won was just given to the investors of various kinds to make up for losses. The business itself was gone, permanently.</p>
<p>Herein lies the danger of giving a single entity like an RIR, even if they are friendly, honest, etc., control over a critical resource.</p>
<p><a href="https://blog.apnic.net/2020/08/27/limiting-the-power-of-rpki-authorities/">A recent paper presented at the ANRW at APNIC caught my attention as a potential way to solve this problem.</a> The idea is simple—just allow (or even require) multiple signatures on a ROA. To be more accurate, each authorizing party issues a &#8220;partial certificate;&#8221; if &#8220;enough&#8221; pieces of the certificate are found and valid, the route will be validated. </p>
<p>The question is—how many signatures (or parts of the signature, or partial attestations) should be enough? The authors of the paper suggest there should be a “Threshold Signature Module” that makes this decision. The attestations of the various signers are combined in the threshold module to produce a single signature that is then used to validate the route. This way the validation process on the router remains the same, which means the only real change in the overall RPKI system is the addition of the threshold module.</p>
<p>If one RIR—even the one that allocated the addresses you are using—revokes their attestation on your ROA, the remaining attestations should be enough to convince anyone receiving your route that it is still valid. Since there are five regions, you have at least five different choices to countersign your ROA. Each RIR is under the control of a different national government; hence organizations like governments (or criminals!) would need to work across multiple RIRs and through other government organizations to have a ROA completely revoked. </p>
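<p>The policy logic here can be sketched as a simple k-of-n check (a toy illustration; the constant and function names are my assumptions, not the paper&#8217;s, and a real threshold signature combines the partial signatures cryptographically rather than merely counting them):</p>

```c
#define NUM_RIRS 5  /* the five Regional Internet Registries */

/* valid[i] is nonzero if RIR i's partial attestation on the ROA verified.
   The route validates when at least `threshold` attestations are good, so
   a single RIR revoking its attestation cannot take the route down. */
int roa_meets_threshold(const int valid[NUM_RIRS], int threshold) {
    int count = 0;
    for (int i = 0; i < NUM_RIRS; i++)
        if (valid[i])
            count++;
    return count >= threshold;
}
```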
<p>An alternate solution here, one that follows the PGP model, might be to simply have the threshold signature module consider the number and source of ROAs under the existing model. Local policy could determine how to weight attestations from different RIRs, etc.</p>
<p>This multiple or &#8220;shared&#8221; attestation (or signature) idea seems like a neat way to work around one of (possibly the major) attack surfaces introduced by the RPKI system. If you are interested in Internet core routing security, you should take a read through the post linked above, <a href="https://youtu.be/dgOieIfNsZo?t=2680">and then watch the video.</a></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12555</post-id>	</item>
		<item>
		<title>Zero Trust and the Cookie Metaphor</title>
		<link>https://rule11.tech/zero-trust-and-the-cookie-metaphor/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 13 Jul 2020 17:00:25 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12272</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/zero-trust.png" alt="" width="400" height="160" class="alignnone" />

In old presentations on network security (watch this space; I’m working on a new security course for Ignition in the next six months or so), I would use a pair of chocolate chip cookies as an illustration for network security. In the old days, I’d opine, network security was like a cookie that was baked to be crunchy on the outside and gooey on the inside. Nowadays, however, I’d say network security needs to be more like a store-bought cookie—crunchy all the way through. I always used this illustration to make a point about defense-in-depth. You cannot assume the thin crunchy security layer at the edge of your network—generally in the form of stateful packet filters and the like (okay, firewalls, but let’s leave the appliance world behind for a moment)—is all you really need.
										<content:encoded><![CDATA[<p>In old presentations on network security (watch this space; I’m working on a new security course for Ignition in the next six months or so), I would use a pair of chocolate chip cookies as an illustration for network security. In the old days, I’d opine, network security was like a cookie that was baked to be crunchy on the outside and gooey on the inside. Nowadays, however, I’d say network security needs to be more like a store-bought cookie—crunchy all the way through. I always used this illustration to make a point about defense-in-depth. You cannot assume the thin crunchy security layer at the edge of your network—generally in the form of stateful packet filters and the like (okay, firewalls, but let’s leave the appliance world behind for a moment)—is all you really need.</p>
<p>There are such things as <em>insider attacks,</em> after all. Further, once someone breaks through the thin crunchy layer at the edge, you really don’t want them being able to move laterally through your network.</p>
<p><a href="https://csrc.nist.gov/publications/detail/sp/800-207/draft">The United States National Institute of Standards and Technology (NIST) has released a draft paper describing <em>Zero Trust Architecture,</em></a> which addresses many of the same concerns as the cookie that’s crunchy all the way through—the lateral movement of attackers through your network, for instance.</p>
<p>The situation, however, has changed quite a bit since I used the cookie illustration. The problem is no longer that the inside of your network needs to be just as secure as the outside of your network, but rather that <em>there is no “inside” to your network any longer.</em> For this we need to add a third cookie—the kind you get in the soft-baked packages, or even in the jar (or roll) of cookie dough—these cookies are gooey all the way through.</p>
<p>To understand why this is… It used to be, way back when, we had a fairly standard <em>Demilitarized Zone</em> design.</p>
<p>&nbsp;</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/dmz-figure.png?resize=400%2C237&#038;ssl=1" alt="" width="400" height="237" /></p>
<p>&nbsp;</p>
<p>For those unfamiliar with this design, D is configured to block traffic to C or A’s interfaces, and C is configured as a stateful filter and to block access to A’s addresses. If D is taken over, it should not have access to C or A; if C is taken over, it still should not have access to A. This provides a sort-of defense-in-depth.</p>
<p>Building this kind of DMZ, however, anticipates there will be at most a few ways into the network. These entries are choke points that give the network operator a place to look for anything “funny.”</p>
<p>Moving applications to the cloud, widespread remote work, and many other factors have rendered the “choke point/DMZ” model of security obsolete. There just isn’t a hard edge any longer to harden; just because someone is “inside” the topological bounds of your network does not mean they are authorized to be there, or to access data and applications.</p>
<p>The new solution is Zero Trust—moving authentication out to the endpoints. The crux of Zero Trust is to prevent unauthorized access to data or services on a per user, per device basis. There is still an “implied trust zone,” a topology within a sort of DMZ, where user traffic is trusted—but these are small areas with no user-controlled hosts.</p>
<p>If you want to understand Zero Trust beyond just the oft thrown around “microsegmentation,” this paper is well worth reading, as it explains the terminology and concepts in terms even network engineers can understand.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12272</post-id>	</item>
		<item>
		<title>The Hedge 43: Ivan Pepelnjak and Trusting Routing Protocols</title>
		<link>https://rule11.tech/the-hedge-pdocast-episode-43-ivan-pepelnjak-and-trusting-routing-protocols/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 08 Jul 2020 17:00:46 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12244</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-043.png" alt="" width="400" height="160" class="alignnone" />

Can you really trust what a routing protocol tells you about how to reach a given destination? Ivan Pepelnjak joins Nick Russo and Russ White to provide a longer version of the tempting one-word answer: <strong>no!</strong> Join us as we discuss a wide range of issues including third-party next-hops, BGP communities, and the RPKI.]]></description>
										<content:encoded><![CDATA[<p>Can you really trust what a routing protocol tells you about how to reach a given destination? Ivan Pepelnjak joins Nick Russo and Russ White to provide a longer version of the tempting one-word answer: <strong>no!</strong> Join us as we discuss a wide range of issues including third-party next-hops, BGP communities, and the RPKI.</p>
<audio class="wp-audio-shortcode" id="audio-12244-9" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-043.mp3?_=9" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-043.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-043.mp3</a></audio>
<p><em><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-043.mp3">download</a></em></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-043.mp3" length="39869121" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>43</itunes:episode>
		<podcast:episode>43</podcast:episode>
		<itunes:title>Routing Security on the Hedge</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>41:31</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">12244</post-id>	</item>
		<item>
		<title>The Hedge 42: Andrei Robachevsky and MANRS</title>
		<link>https://rule11.tech/the-hedge-podcast-episode-42-andrei-robachevsky-and-manrs/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 01 Jul 2020 17:00:48 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12216</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-042.png" alt="" width="400" height="160" class="alignnone" />

The security of the global routing table is foundational to the security of the overall Internet as an ecosystem&#8212;if routing cannot be trusted, then everything that relies on routing is suspect, as well. Mutually Agreed Norms for Routing Security (MANRS) is a project of the Internet Society designed to draw network operators of all kinds into thinking about, and doing something about, the security of the global routing table by using common-sense filtering and observation. Andrei Robachevsky joins Russ White and Tom Ammon to talk about MANRS.

<a href="https://www.manrs.org">More information about MANRS can be found on the project web site, including how to join and how to support global routing security.</a>]]></description>
										<content:encoded><![CDATA[<p>The security of the global routing table is foundational to the security of the overall Internet as an ecosystem&#8212;if routing cannot be trusted, then everything that relies on routing is suspect, as well. Mutually Agreed Norms for Routing Security (MANRS) is a project of the Internet Society designed to draw network operators of all kinds into thinking about, and doing something about, the security of the global routing table by using common-sense filtering and observation. Andrei Robachevsky joins Russ White and Tom Ammon to talk about MANRS.</p>
<p><a href="https://www.manrs.org">More information about MANRS can be found on the project web site, including how to join and how to support global routing security.</a></p>
<audio class="wp-audio-shortcode" id="audio-12216-10" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-042.mp3?_=10" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-042.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-042.mp3</a></audio>
<p><em><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-042.mp3">download</a></em></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-042.mp3" length="32136366" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>42</itunes:episode>
		<podcast:episode>42</podcast:episode>
		<itunes:title>MANRS on the Hedge</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>33:28</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">12216</post-id>	</item>
		<item>
		<title>Research: Off-Path TCP Attacks</title>
		<link>https://rule11.tech/reseach-off-path-tcp-attacks/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 22 Jun 2020 17:00:58 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12178</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/off-path-tcp-attacks.png" alt="" width="400" height="160" class="alignnone " />

I’s fnny, bt yu cn prbbly rd ths evn thgh evry wrd s mssng t lst ne lttr. This is because every effective language—or rather every communication system—carries enough information to reconstruct the original meaning even when bits are dropped. Over-the-wire protocols, like TCP, are no different—the protocol must carry enough information about the conversation (flow data) and the data being carried (metadata) to understand when something is wrong and error out or ask for a retransmission. These things, however, are a form of data exhaust; much like you can infer the tone, direction, and sometimes even the content of conversation just by watching the expressions, actions, and occasional word spoken by one of the participants, you can sometimes infer a lot about a conversation between two applications by looking at the amount and timing of data crossing the wire.
										<content:encoded><![CDATA[<p>I’s fnny, bt yu cn prbbly rd ths evn thgh evry wrd s mssng t lst ne lttr. This is because every effective language—or rather every communication system—carries enough information to reconstruct the original meaning even when bits are dropped. Over-the-wire protocols, like TCP, are no different—the protocol must carry enough information about the conversation (flow data) and the data being carried (metadata) to understand when something is wrong and error out or ask for a retransmission. These things, however, are a form of data exhaust; much like you can infer the tone, direction, and sometimes even the content of conversation just by watching the expressions, actions, and occasional word spoken by one of the participants, you can sometimes infer a lot about a conversation between two applications by looking at the amount and timing of data crossing the wire.</p>
<p>The paper under review today, <a href="https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-chen_0.pdf"><em>Off-Path TCP Exploit,</em></a> uses cleverly designed streams of packets and observations about the timing of packets in a TCP stream to construct an off-path TCP injection attack on wireless networks. Understanding the attack requires understanding the interaction between the collision avoidance used in wireless systems and TCP’s reaction to packets with a sequence number outside the current window.</p>
<p>Beginning with the TCP end of things—if a TCP packet is received with a sequence number falling outside the current window, TCP implementations will send a duplicate of the last ACK it sent back to the transmitter. From the wireless network side of things, only one talker can use the channel at a time. If a device begins transmitting a packet, and then hears another packet inbound, it should stop transmitting and wait some random amount of time before trying to transmit again. These two things can be combined to guess at the current window size.</p>
<p>Assume an attacker sends a packet to a victim which must be answered, such as a probe. Before the victim can answer, the attacker then sends a TCP segment which includes a sequence number the attacker thinks might be within the victim’s receive window, sourcing the packet from the IP address of some existing TCP session. Unless the IP address of some existing session is used in this step, the victim will not answer the TCP segment. Because the attacker is using a spoofed source address, it will not receive the ACK from this segment, so it must find some other way to infer if an ACK was sent by the victim.</p>
<p>How can the attacker infer this? After sending this TCP sequence, the attacker sends another probe of some kind to the victim which must be answered. If the TCP segment’s sequence number is outside the current window, the victim will attempt to send a copy of its previous ACK. If the attacker times things correctly, the victim will attempt to send this duplicate ACK while the attacker is transmitting the second probe packet; the two packets will collide, causing the victim to back off, slowing the receipt of the probe down a bit from the attacker’s perspective.</p>
<p>If the answer to the second probe is slower than the answer to the first probe, the attacker can infer the sequence number of the spoofed TCP segment is outside the current window. If the two probes are answered in close to the same time, the attacker can infer the sequence number of the spoofed TCP segment is within the current window.</p>
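<p>The attacker&#8217;s decision step can be sketched as a simple timing comparison (a toy illustration under assumed names and an assumed threshold; the paper&#8217;s actual inference deals with measurement noise through repeated trials):</p>

```c
#define DELAY_THRESHOLD_US 500  /* assumed collision-backoff margin, microseconds */

/* Given the measured response times of the two probes, infer whether the
   spoofed segment's sequence number fell inside the victim's receive window.
   An out-of-window segment triggers a duplicate ACK, which collides with the
   second probe on the shared wireless channel and delays its answer. */
int seq_likely_in_window(long probe1_rtt_us, long probe2_rtt_us) {
    if (probe2_rtt_us - probe1_rtt_us > DELAY_THRESHOLD_US)
        return 0;  /* second probe delayed: duplicate ACK sent, out of window */
    return 1;      /* similar timing: no duplicate ACK, likely in window */
}
```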
<p>Combining this information with several other well-known aspects of widely deployed TCP stacks, the researchers found they could reliably inject information into a TCP stream from an off-path attacker. While these injections would still need to be shaped in some way to impact the operation of the application sending data over the TCP stream, the ability to inject TCP segments in this way is “halfway there” for the attacker.</p>
<p>There probably never will be a truly secure communication channel invented that does not involve encryption—the data required to support flow control and manage errors will always provide enough information to an attacker to find some clever way to break into the channel.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12178</post-id>	</item>
		<item>
		<title>The Hedge 37: Stephane Bortzmeyer and DNS Privacy</title>
		<link>https://rule11.tech/the-hedge-podcast-037-stephane-bortzmeyer-and-dns-privacy/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 27 May 2020 17:00:36 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[STANDARDS]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=12072</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-037.png" alt="" width="400" height="160" class="alignnone" />

In this episode of the Hedge, Stephane Bortzmeyer joins Alvaro Retana and Russ White to discuss <a href="https://datatracker.ietf.org/doc/draft-ietf-dprive-rfc7626-bis/">draft-ietf-dprive-rfc7626-bis,</a> which "describes the privacy issues associated with the use of the DNS by Internet users." Not many network engineers think about the privacy implications of DNS, an important part of the infrastructure we all rely on to make the Internet work.
										<content:encoded><![CDATA[<p>In this episode of the Hedge, Stephane Bortzmeyer joins Alvaro Retana and Russ White to discuss <a href="https://datatracker.ietf.org/doc/draft-ietf-dprive-rfc7626-bis/">draft-ietf-dprive-rfc7626-bis,</a> which &#8220;describes the privacy issues associated with the use of the DNS by Internet users.&#8221; Not many network engineers think about the privacy implications of DNS, an important part of the infrastructure we all rely on to make the Internet work.</p>
<audio class="wp-audio-shortcode" id="audio-12072-11" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-037.mp3?_=11" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-037.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-037.mp3</a></audio>
<p><em><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-037.mp3">download</a></em></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-037.mp3" length="25052463" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episode>37</itunes:episode>
		<podcast:episode>37</podcast:episode>
		<itunes:title>The Hedge</itunes:title>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>34:47</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">12072</post-id>	</item>
		<item>
		<title>Reflections on Intent</title>
		<link>https://rule11.tech/reflections-on-intent/</link>
					<comments>https://rule11.tech/reflections-on-intent/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 20 Apr 2020 17:00:16 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11893</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/reflections-on-intent.png" alt="" width="400" height="160" class="alignnone" />

No, not that kind. :-)

BGP security is a vexed topic—people have been working in this area for over twenty years with some effect, but we continuously find new problems to address. Today I am looking at a paper called <em>BGP Communities: Can of Worms,</em> which analyses some of the security problems caused by current BGP community usage in the ‘net. The point I want to think about here, though, is not the problem discussed in the paper, but rather some of the larger problems facing security in routing.]]></description>
										<content:encoded><![CDATA[<p>No, not that kind. <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>BGP security is a vexed topic—people have been working in this area for over twenty years with some effect, but we continuously find new problems to address. Today I am looking at a paper called <em>BGP Communities: Can of Worms,</em> which analyses some of the security problems caused by current BGP community usage in the ‘net. The point I want to think about here, though, is not the problem discussed in the paper, but rather some of the larger problems facing security in routing.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/community-security.png?resize=600%2C272&#038;ssl=1" alt="" width="600" height="272" /></p>
<p>Assume there is some traffic flow passing between 101::47/64 and 100::46/64 in this network. AS65003 has helpfully set up community string-based policies that allow a peer to request a route be advertised with a specified AS Path prepend. In this case, if AS65003 receives a route carrying 3:65004x, it prepends the route advertised towards AS65004 with <em>x</em> additional AS Path entries; if it receives a route carrying 3:65005x, it prepends the route advertised towards AS65005 with <em>x</em> additional AS Path entries.</p>
<p>Assuming community strings set by AS65002 are carried with the 100::46/64 route through the rest of the network, AS65002 can:</p>
<ul>
<li>Advertise 100::46/64 towards AS65003 with 3:650045, causing the route received at AS65006 from AS65004 to have a longer AS Path than the route received through AS65005, causing the traffic to flow through AS65005</li>
<li>Advertise 100::46/64 towards AS65003 with 3:650055, causing the route received at AS65006 from AS65005 to have a longer AS Path than the route received through AS65004, causing the traffic to flow through AS65004</li>
</ul>
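<p>The community convention in this example can be decoded mechanically; a minimal sketch (the encoding is the example&#8217;s own informal convention, not a standard, and the function name is mine):</p>

```c
/* Decode a community value of the form 65004x or 65005x, where the last
   digit is the number of AS Path prepends and the remaining digits name
   the target AS. Example: 650045 -> prepend 5 times towards AS65004. */
void decode_prepend_community(long value, long *target_as, int *prepends) {
    *target_as = value / 10;        /* 650045 -> 65004 */
    *prepends  = (int)(value % 10); /* 650045 -> 5 */
}
```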
<p>A lot of abuse is possible because of this situation. For instance, AS65002 might know the cost of the link between AS65006 and AS65004 is very expensive, so directing large amounts of traffic across that link will cause financial harm to AS65004 or AS65006. A malicious actor at AS65002 could also determine it can overwhelm this link, causing a sort of denial of service against anyone connected to AS65004 or AS65006.</p>
<p>The potential problem, then, is real.</p>
<p>The problem is, however, how do we solve this? The most obvious way is to block communities from being transmitted beyond one hop past the point in the network where they are set. There are, however, two problems with this solution. First, how can anyone tell which AS set a community on a route? There is no originator code in the community string, and there’s no particular way to protect this kind of information from being forged or modified short of carrying a cryptographic hash in the update—which is probably not going to be acceptable from a performance perspective.</p>
<p>But the technical problem here is just the “tip of the iceberg.” Even if we could determine who modified the route to include the community, there is no particular way for anyone receiving the community to determine the originator’s intent. AS65002 may well install some system which measures, in near-real time, the delay across multiple paths to determine which performs the best. Such a system could be programmed with the correct community strings to impact traffic, and then left to run some sort of machine learning process to figure out how to mark routes to improve performance. If the operator at AS65002 does not realize the cost of the AS65004-&gt;AS65006 link is prohibitive, any sort of financial burden imposed by this system could be an unintended, rather than intended, consequence.</p>
<p>This, it turns out, is often the problem with security. It might be that a person is bypassing building security to save a life, or it could be they are doing so to steal corporate secrets. There is simply no way to know without meeting the person in question, listening to their reasoning, and allowing a human to decide which course of action is appropriate.</p>
<p>In the case of BGP, we’re dealing with “spooky action at a distance;” the source of the problem is several steps removed from the result of the problem, there’s no clear way to connect the two, and there’s no clear way to resolve the problem other than “picking up the phone” even if one of these operators can figure out what is going on.</p>
<p>The problem of intent is what RFC3514’s <em>evil bit</em> is poking a bit of fun at—if we only knew the attacker’s intent, we could often figure out what to actually <em>do.</em> Not knowing intent, however, puts a major crimp in many of the best-laid security plans.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/reflections-on-intent/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11893</post-id>	</item>
		<item>
		<title>An Interesting take on Mapping an Attack Surface</title>
		<link>https://rule11.tech/an-interesting-take-on-mapping-an-attack-surface/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 23 Mar 2020 17:00:30 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11762</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/mapping-attack-surface.png" alt="" width="400" height="160" class="alignnone" />

Security often lives in one of two states. It’s either something “I” take care of, because my organization is so small there isn’t anyone else taking care of it. Or it’s something <em>those folks sitting over there in the corner</em> take care of because the organization is, in fact, large enough to have a separate security team. In both cases, however, security is something that is done to networks, or something thought about kind-of off on its own in relation to networks.

I’ve been trying to think of ways to challenge this way of thinking for many years—a long time ago, in a universe far away, I created and gave a presentation on network security at Cisco Live (raise your hand if you’re old enough to have seen this presentation!).]]></description>
										<content:encoded><![CDATA[<p>Security often lives in one of two states. It’s either something “I” take care of, because my organization is so small there isn’t anyone else taking care of it. Or it’s something <em>those folks sitting over there in the corner</em> take care of because the organization is, in fact, large enough to have a separate security team. In both cases, however, security is something that is done to networks, or something thought about kind-of off on its own in relation to networks.</p>
<p>I’ve been trying to think of ways to challenge this way of thinking for many years—a long time ago, in a universe far away, I created and gave a presentation on network security at Cisco Live (raise your hand if you’re old enough to have seen this presentation!).</p>
<p>Reading through my paper pile this week, <a href="https://dl.acm.org/doi/abs/10.1145/3347144">I ran into a <em>viewpoint</em> in the <em>Communications of the ACM</em> that revived my older thinking about network security and gave me a new way to think about the problem.</a> The author’s expression of the problem of supply chain security can be used more broadly. The illustration below is replicated from the one in the original article; I will use this as a starting point.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/sec-chart.png?resize=1055%2C702&#038;ssl=1" alt="" width="1055" height="702" /></p>
<p>This is a nice way to visualize your attack surface. The columns represent applications or systems and the rows represent vulnerabilities. The colors represent the risk, as explained across the bottom of the chart. One simple way to use this would be just to list all the things in the network along the top as columns, and all the things that can go wrong as rows and use it in the same way. This would just be a cut down, or more specific, version of the same concept.</p>
<p>Another way to use this sort of map—and this is just a nub of an idea, so you’ll need to think about how to apply it to your situation a little more deeply—is to create two groups of columns: one with a column for each application that relies on network services, and one for the network infrastructure devices and services you rely on. Rows would be broken into three classes, from top to bottom—protection, services, and systems. In the protection group you would have things the network does to protect data and applications, like segmentation, preventing data exfiltration, etc. In the services group, you would mostly have various forms of denial of service and configuration. In the systems group, you would have individual hardware devices, protocols, software packages used to make the network “go,” etc. Maybe something like the illustration below.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/alt-sec-chart.png?resize=600%2C456&#038;ssl=1" alt="" width="600" height="456" /></p>
<p>If you place the most important applications towards the left, and the protection towards the top, the more severe vulnerabilities will be in the upper left corner of the chart, with less severe areas falling to the right and (potentially) towards the bottom. You would fill this chart out starting in the upper left, figuring out what kinds of “protection” the network, as a service, can offer to each application. These should, in turn, roll down to the services the network offers and their corresponding configurations. These should, in turn, roll across to the devices and software used to create these services, and then roll back down to the vulnerabilities of those services and devices. For instance, if sales management relies on application access control, and application access control relies on proper filtering, and filtering is configured on BGP and some sort of overlay virtual link to a cloud service… You start to get the idea of where different kinds of services rely on underlying capabilities, and then how those are related to suppliers, hardware, etc.</p>
<p>You can color the squares in different ways—the way the original article does, perhaps, or your reliance on an outside vendor to solve this problem, etc. Once the basic chart is in place you can use multiple color schemes to get different views of the attack surface by using the chart as a sort of heat map.</p>
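<p>As a sketch of the idea, the grid can be modeled as a simple table of risk levels, which can then be queried as a heat map. All of the row, column, and risk values here are illustrative assumptions:</p>

```python
# A toy version of the attack-surface grid: rows are concerns, columns are
# applications or infrastructure, and cells hold a risk level. Every name
# and risk value below is an illustrative assumption.
risk_levels = ["low", "medium", "high", "severe"]

grid = {
    "segmentation":      {"sales app": "severe", "BGP": "high"},
    "denial of service": {"sales app": "medium", "BGP": "high"},
    "device software":   {"sales app": "low",    "BGP": "medium"},
}

def hottest(grid):
    # return the (row, column) cells carrying the highest risk level
    rank = {level: i for i, level in enumerate(risk_levels)}
    worst = max(rank[v] for row in grid.values() for v in row.values())
    return [(r, c) for r, row in grid.items()
            for c, v in row.items() if rank[v] == worst]

print(hottest(grid))  # -> [('segmentation', 'sales app')]
```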
<p>Again, this is something of a nub of an idea, but it is a potentially interesting way to get a single view of the entire network ecosystem from a security standpoint, know where things are weak (and hence need work), and understand where cascading failures might happen.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11762</post-id>	</item>
		<item>
		<title>The Hedge 19: Optional Security is not Optional</title>
		<link>https://rule11.tech/the-hedge-podcast-episode-19-optional-security-is-not-optional/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 22 Jan 2020 18:00:36 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11507</guid>

					<description><![CDATA[<img class="alignnone" src="https://rule11.tech/wp-content/uploads/hedge-019.png" alt="" width="300" />

Brian Trammell joins Alvaro Retana and Russ White to discuss his IETF draft <a href="https://datatracker.ietf.org/doc/draft-trammell-optional-security-not/">Optional Security Is Not An Option,</a> and why optional security is very difficult to deploy in practice. Brian blogs at <a href="http://trammell.ch">http://trammell.ch</a> and also writes at <a href="https://blog.apnic.net/author/brian-trammell/">APNIC.</a>]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/hedge-019.png?w=300&#038;ssl=1" alt=""  /></p>
<p>Brian Trammell joins Alvaro Retana and Russ White to discuss his IETF draft <a href="https://datatracker.ietf.org/doc/draft-trammell-optional-security-not/">Optional Security Is Not An Option,</a> and why optional security is very difficult to deploy in practice. Brian blogs at <a href="http://trammell.ch">http://trammell.ch</a> and also writes at <a href="https://blog.apnic.net/author/brian-trammell/">APNIC.</a></p>
<audio class="wp-audio-shortcode" id="audio-11507-12" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-019.mp3?_=12" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-019.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-019.mp3</a></audio>
<p><em><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-019.mp3">download</a></em></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-019.mp3" length="57943171" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>40:14</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">11507</post-id>	</item>
		<item>
		<title>Research: Securing Linux with a Faster and Scalable IPtables</title>
		<link>https://rule11.tech/research-securing-linux-with-a-faster-and-scalable-iptables/</link>
					<comments>https://rule11.tech/research-securing-linux-with-a-faster-and-scalable-iptables/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 25 Nov 2019 19:51:16 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11317</guid>

					<description><![CDATA[If you haven’t found the trade-offs, you haven’t looked hard enough.

A perfect illustration is the research paper under review, <em>Securing Linux with a Faster and Scalable Iptables.</em> Before diving into the paper, however, some background might be good. Consider the situation where you want to filter traffic being transmitted to and from a virtual workload of some kind, as shown below.

<img src="https://rule11.tech/wp-content/uploads/lpacketpath.png" alt="" width="599" class="alignnone" />

To move a packet from the user space into the kernel, the packet itself must be copied into some form of memory that processes on “both sides of the divide” can read, then the entire state of the process (memory, stack, program execution point, etc.) must be pushed into a local memory space (stack), and control transferred to the kernel. This all takes time and power, of course.]]></description>
										<content:encoded><![CDATA[<p>If you haven’t found the trade-offs, you haven’t looked hard enough.</p>
<p><a href="https://ccronline.sigcomm.org/2019/ccr-july-2019/securing-linux-with-a-faster-and-scalable-iptables/">A perfect illustration is the research paper under review, <em>Securing Linux with a Faster and Scalable Iptables.</em></a> Before diving into the paper, however, some background might be good. Consider the situation where you want to filter traffic being transmitted to and from a virtual workload of some kind, as shown below.</p>
<p><img data-recalc-dims="1" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/lpacketpath.png?w=599&#038;ssl=1" alt=""  /></p>
<p>To move a packet from the user space into the kernel, the packet itself must be copied into some form of memory that processes on “both sides of the divide” can read, then the entire state of the process (memory, stack, program execution point, etc.) must be pushed into a local memory space (stack), and control transferred to the kernel. This all takes time and power, of course.</p>
<p>In the current implementation of packet filtering, <em>netfilter</em> performs the majority of filtering within the kernel, while <em>iptables</em> acts as a user frontend as well as performing some filtering actions in the user space. Packets being pushed from one interface to another must make the transition between the user space and the kernel twice. Interfaces like XDP aim to make the processing of packets faster by shortening the path from the virtual workload to the PHY chipset.</p>
<p>What if, instead of putting the functionality of <em>iptables</em> in the user space you could put it in the kernel space? This would make the process of switching packets through the device faster, because you would not need to pull packets out of the kernel into a user space process to perform filtering.</p>
<p>But there are trade-offs. According to the authors of this paper, there are three specific challenges that need to be addressed. First, users expect <em>iptables</em> filtering to take place in the user process; if a packet is transmitted between virtual workloads, the user expects any filtering to take place before the packet is pushed into the kernel to be carried across the bridge, and back out into user space to the second process. Second, a separate process, <em>conntrack,</em> tracks the existence of TCP connections, which <em>iptables</em> then uses to determine whether a packet should be dropped because there is no existing connection. This gives <em>iptables</em> the ability to do stateful filtering. Third, classification of packets is very expensive; classifying packets could take too much processing power or memory to be done efficiently in the kernel.</p>
<p>To resolve these issues, the authors of this paper propose using an in-kernel virtual machine, or eBPF. They design an architecture which splits <em>iptables</em> into two pipelines, an ingress and an egress, as shown in the illustration taken from the paper below.</p>
<p><img data-recalc-dims="1" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/bpf-iptables.png?w=600&#038;ssl=1" alt=""  /></p>
<p>As you can see, the result is&#8230; complex. Not only are there more components, with many more interaction surfaces, there is also the complexity of creating in-kernel virtual machines&#8212;remembering that virtual machines are designed to separate out processing and memory spaces to prevent cross-application data leakage and potential single points of failure.</p>
<p>That these problems are solvable is not in question&#8212;the authors describe how they solved each of the challenges they laid out. The question is: are the trade-offs worth it?</p>
<p>The bottom line: when you move filtering from the network to the host, you are not moving the problem to a place where it is less complex. You may make the network design itself less complex, and you may move filtering closer to the application, so some specific security problems are easier to solve, but the overall complexity of the system is going way up&#8212;particularly if you want a high performance solution.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/research-securing-linux-with-a-faster-and-scalable-iptables/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11317</post-id>	</item>
		<item>
		<title>IPv6 and Leaky Addresses</title>
		<link>https://rule11.tech/ipv6-and-leaky-addresses/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 18 Nov 2019 18:00:12 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11294</guid>

					<description><![CDATA[One of the recurring myths of IPv6 is its very large address space somehow confers a higher degree of security. The theory goes something like this: there is so much more of the IPv6 address space to test in order to find out what is connected to the network, it would take too long to scan the entire space looking for devices. The first problem with this myth is it simply is not true—it is quite possible to scan the entire IPv6 address space rather quickly, probing enough addresses to perform a tree-based search to find attached devices. The second problem is this assumes the only modes of attack available in IPv4 will directly carry across to IPv6. But every protocol has its own set of tradeoffs, and therefore its own set of attack surfaces.]]></description>
										<content:encoded><![CDATA[<p>One of the recurring myths of IPv6 is its very large address space somehow confers a higher degree of security. The theory goes something like this: there is so much more of the IPv6 address space to test in order to find out what is connected to the network, it would take too long to scan the entire space looking for devices. The first problem with this myth is it simply is not true—it is quite possible to scan the entire IPv6 address space rather quickly, probing enough addresses to perform a tree-based search to find attached devices. The second problem is this assumes the only modes of attack available in IPv4 will directly carry across to IPv6. But every protocol has its own set of tradeoffs, and therefore its own set of attack surfaces.</p>
<p>Assume, for instance, you follow the “quick and easy” way of configuring IPv6 addresses on devices as they are deployed in your network. The usual process for building an IPv6 address for an interface is to take the prefix, learned from the advertisement of a locally attached router, and the MAC address of one of the locally attached interfaces, combining them into an IPv6 address <em>(SLAAC).</em> The size of the IPv6 address space proves very convenient here, as it allows the MAC address, which is presumably unique, to be used in building a (presumably unique) IPv6 address.</p>
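<p>The modified EUI-64 construction described above is simple enough to sketch; the prefix and MAC address below are illustrative examples, and this is an illustration rather than production code:</p>

```python
# Sketch of how SLAAC's modified EUI-64 builds an interface identifier
# from a MAC address: flip the universal/local bit of the first octet,
# then insert ff:fe between the OUI and the device-specific half.
def eui64_interface_id(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                 # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert ff:fe
    return ":".join("%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

def slaac_address(prefix: str, mac: str) -> str:
    # prefix is assumed to be the first four groups of a /64, e.g. "2001:db8:0:1"
    return prefix + ":" + eui64_interface_id(mac)

print(slaac_address("2001:db8:0:1", "00:25:96:12:34:56"))
# -> 2001:db8:0:1:0225:96ff:fe12:3456
```

<p>Note how the MAC address survives, almost intact, in the low 64 bits of the resulting address.</p>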
<p><a href="https://datatracker.ietf.org/doc/rfc7721/">According to RFC7721,</a> this process opens several new attack surfaces that did not exist in IPv4, primarily because the device has exposed more information about itself through the IPv6 address. First, the IPv6 address now contains at least some part of the OUI for the device. This OUI can be directly converted to a device manufacturer using web pages such as this one. In fact, in many situations you can determine where and when a device was manufactured, and often what class of device it is. This kind of information gives attackers an “inside track” on determining what kinds of attacks might be successful against the device.</p>
<p>Second, if the IPv6 address is calculated based on a local MAC address, the host bits of the IPv6 address of a host will remain the same regardless of where it is connected to the network. For instance, I may normally connect my laptop to a port in a desk in the Raleigh area. When I visit Sunnyvale, however, I will likely connect my laptop to a port in a conference room there. If I connect to the same web site from both locations, the site can infer I am using the same laptop from the host bits of the IPv6 address. Across time, an attacker can track my activities regardless of where I am physically located, allowing them to correlate my activities. Using the common lower bits, an attacker can also infer my location at any point in time.</p>
<p>Third, knowing what network adapters an organization is likely to use reduces the amount of raw address space that must be scanned to find active devices. If you know an organization uses Juniper routers, and you are trying to find all their routers in a data center or IX fabric, you don’t really need to scan the entire IPv6 address space. All you need to do is probe those addresses which would be formed using SLAAC with OUI’s formed from Juniper MAC addresses.</p>
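<p>The arithmetic behind this claim is worth making concrete: with the OUI known, only the 24-bit device-specific portion of the MAC address remains to be guessed:</p>

```python
# Rough numbers: a SLAAC interface identifier is 64 bits, but once the
# vendor OUI is known, only the 24-bit device-specific MAC suffix varies.
full_iid_space = 2 ** 64          # all possible interface identifiers
per_oui_space = 2 ** 24           # SLAAC candidates for one known OUI
print(per_oui_space)              # -> 16777216, trivially scannable
print(full_iid_space // per_oui_space)  # -> a 2**40-fold reduction
```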
<p>Beyond RFC7721, many devices also return their MAC address when responding to ICMPv6 probes in the <em>time exceeded</em> response. This directly exposes information about the host, so the attacker does not need to infer information from SLAAC-derived MAC addresses.</p>
<p>What can be done about these sorts of attacks?</p>
<p>The primary solution is to use <em>semantically opaque</em> identifiers when building IPv6 addresses using SLAAC—perhaps even using a cryptographic hash to create the base identifiers from which IPv6 addresses are created. The bottom line is, though, that you should examine the vendor documentation for each kind of system you deploy—especially infrastructure devices—as well as using packet capture tools to understand what kinds of information your IPv6 addresses may be leaking and how to prevent it.</p>
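<p>One way to build such an opaque identifier, loosely following the RFC7217 idea of hashing stable inputs together with a secret, might look like the sketch below; the function and parameter names are illustrative assumptions, not a standard API:</p>

```python
import hashlib

# A loose sketch of a "semantically opaque" interface identifier: hash the
# prefix, interface name, and a stable local secret, then take 64 bits,
# instead of embedding the MAC address in the address.
def opaque_iid(prefix: str, iface: str, secret: bytes, dad_counter: int = 0) -> str:
    h = hashlib.sha256()
    h.update(prefix.encode() + iface.encode() + secret + bytes([dad_counter]))
    iid = h.digest()[:8]  # 64 bits for the interface identifier
    return ":".join("%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2))

print(opaque_iid("2001:db8:0:1", "eth0", b"example-secret"))
```

<p>The identifier is stable for a given prefix (so connections keep working), but changes as the host moves between networks, so the host bits no longer track the device.</p>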
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11294</post-id>	</item>
		<item>
		<title>The Hedge 12: Cyberinsecurity with Andrew Odlyzko</title>
		<link>https://rule11.tech/the-hedge-episode-12-cyberinsecurity-with-andrew-odlyzko/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 13 Nov 2019 18:00:25 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11260</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-012.png" alt="" width="600" height="240" class="alignnone" />

There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. Andrew Odlyzko joins Tom Ammon and me to talk about cyber insecurity.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/hedge-012.png?resize=600%2C240&#038;ssl=1" alt="" width="600" height="240" class="alignnone" /></p>
<p>There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. Andrew Odlyzko joins Tom Ammon and me to talk about cyber insecurity.</p>
<audio class="wp-audio-shortcode" id="audio-11260-13" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-012.mp3?_=13" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-012.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-012.mp3</a></audio>
<p><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-012.mp3"><em>download</em></a></p>
<p><a href="http://www.dtc.umn.edu/~odlyzko/doc/cyberinsecurity.pdf">The original paper referenced in this episode is here.</a></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-012.mp3" length="49048984" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>34:03</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">11260</post-id>	</item>
		<item>
		<title>The Hedge 10: Pavel Odintsov and Fastnetmon</title>
		<link>https://rule11.tech/the-hedge-episode-10-pavel-odintsov-and-fastnetmon/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Tue, 22 Oct 2019 17:00:08 +0000</pubDate>
				<category><![CDATA[AUDIO]]></category>
		<category><![CDATA[HEDGE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11152</guid>

					<description><![CDATA[<img src="https://rule11.tech/wp-content/uploads/hedge-010.png" alt="" width="400" class="alignnone" />

<a href="https://fastnetmon.com">Fastnetmon began life as an open source DDoS detection tool,</a> but has grown in scope over time. By connecting Fastnetmon to open source BGP implementations, operators can take action when a denial of service event is detected, triggering black holes and changing route preferences. Pavel Odintsov joins us to talk about this interesting and useful open source project.]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/hedge-010.png?w=400&#038;ssl=1" alt=""  class="alignnone" /></p>
<p><a href="https://fastnetmon.com">Fastnetmon began life as an open source DDoS detection tool,</a> but has grown in scope over time. By connecting Fastnetmon to open source BGP implementations, operators can take action when a denial of service event is detected, triggering black holes and changing route preferences. Pavel Odintsov joins us to talk about this interesting and useful open source project.</p>
<audio class="wp-audio-shortcode" id="audio-11152-14" preload="none" style="width: 100%;" controls="controls"><source type="audio/mpeg" src="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-010.mp3?_=14" /><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-010.mp3">https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-010.mp3</a></audio>
<p><em><a href="https://media.blubrry.com/hedge/content.blubrry.com/hedge/hedge-010.mp3">download episode</a></em></p>
<p><em>you can subscribe to the Hedge on iTunes and other podcast listing sites, or use the RSS feed directly from rule11.tech</em></p>
]]></content:encoded>
					
		
				<enclosure url="https://media.blubrry.com/hedge/thehedge.s3.amazonaws.com/hedge-010.mp3" length="44240123" type="audio/mpeg" />

				<itunes:author>Russ White</itunes:author>
		<itunes:episodeType>full</itunes:episodeType>
		<itunes:duration>30:43</itunes:duration>
<post-id xmlns="com-wordpress:feed-additions:1">11152</post-id>	</item>
		<item>
		<title>IPv6 Backscatter and Address Space Scanning</title>
		<link>https://rule11.tech/ipv6-backscatter-and-address-space-scanning/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 09 Oct 2019 17:00:39 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=11073</guid>

					<description><![CDATA[Backscatter is often used to detect various kinds of attacks, but how does it work? The paper under review today, <em>Who Knocks at the IPv6 Door,</em> explains backscatter usage in IPv4, and examines how effectively this technique might be used to detect scanning of IPv6 addresses, as well. Scanning the IPv6 address space is much more difficult because there are 2<sup>128</sup> addresses rather than 2<sup>32</sup>. The paper under review here is one of the first attempts to understand backscatter in the IPv6 address space, which can lead to a better understanding of the ways in which IPv6 scanners are optimizing their search through the larger address space, and also to begin understanding how backscatter can be used in IPv6 for many of the same purposes as it is in IPv4.

<em>Kensuke Fukuda and John Heidemann. 2018. Who Knocks at the IPv6 Door?: Detecting IPv6 Scanning. In Proceedings of the Internet Measurement Conference 2018 (IMC '18). ACM, New York, NY, USA, 231-237. DOI: <a href="https://doi.org/10.1145/3278532.3278553">https://doi.org/10.1145/3278532.3278553</a></em>]]></description>
										<content:encoded><![CDATA[<p>Backscatter is often used to detect various kinds of attacks, but how does it work? The paper under review today, <em>Who Knocks at the IPv6 Door,</em> explains backscatter usage in IPv4, and examines how effectively this technique might be used to detect scanning of IPv6 addresses, as well. The best place to begin is with an explanation of backscatter itself; the following network diagram will be helpful—</p>
<p><img data-recalc-dims="1" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/backscatter.jpg?w=600&#038;ssl=1" alt=""  /></p>
<p>Assume A is scanning the IPv4 address space for some reason—for instance, to find some open port on a host, or as part of a DDoS attack. When A sends an unsolicited packet to C, a firewall (or some similar edge filtering device) at C will attempt to discover the source of this packet. It could be there is some local policy set up allowing packets from A, or perhaps A is part of some domain none of the devices at C should be connecting to. In order to discover more, the firewall will perform a <em>reverse lookup.</em> To do this, C takes advantage of the PTR DNS record, looking up the IP address to see if there is an associated domain name (this is explained in more detail in my <em>How the Internet Really Works</em> webinar, which I give every six months or so). This reverse lookup generates what is called <em>backscatter—</em>these backscatter events can be used to find hosts scanning the IP address space. Sometimes these scans are innocent, such as a web spider searching for HTML servers; other times, they could be a prelude to some sort of attack.</p>
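<p>The reverse lookup itself is easy to demonstrate; Python's standard library will build the nibble-reversed PTR name for an IPv6 address (the address here is just the documentation prefix):</p>

```python
import ipaddress

# The PTR lookup behind backscatter queries a DNS name derived from the
# source address; for IPv6 this is the nibble-reversed form under ip6.arpa.
ptr_name = ipaddress.IPv6Address("2001:db8::1").reverse_pointer
print(ptr_name)  # 32 reversed hex nibbles followed by .ip6.arpa
```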
<p><em>Kensuke Fukuda and John Heidemann. 2018. Who Knocks at the IPv6 Door?: Detecting IPv6 Scanning. In Proceedings of the Internet Measurement Conference 2018 (IMC &#8217;18). ACM, New York, NY, USA, 231-237. DOI: <a href="https://doi.org/10.1145/3278532.3278553">https://doi.org/10.1145/3278532.3278553</a></em></p>
<p>Scanning the IPv6 address space is much more difficult because there are 2<sup>128</sup> addresses rather than 2<sup>32</sup>. The paper under review here is one of the first attempts to understand backscatter in the IPv6 address space, which can lead to a better understanding of the ways in which IPv6 scanners are optimizing their search through the larger address space, and also to begin understanding how backscatter can be used in IPv6 for many of the same purposes as it is in IPv4.</p>
<p>The researchers begin by setting up a backscatter testbed across a subset of hosts for which IPv4 backscatter information is well-known. They developed a set of heuristics for identifying the kind of service or host performing the reverse DNS lookup, classifying them into major services, content delivery networks, mail servers, etc. They then examined the number of reverse DNS lookups requested versus the number of IP packets each received.</p>
<p>It turns out that about ten times as many backscatter incidents are reported for IPv4 as for IPv6, which either indicates that IPv6 hosts perform reverse lookup requests about ten times less often than IPv4 hosts, or IPv6 hosts are ten times less likely to be monitored for backscatter events. Either way, this result is not promising—it appears, on the surface, that IPv6 hosts will be less likely to cause backscatter events, or IPv6 backscatter events are ten times less likely to be reported. This could indicate that widespread deployment of IPv6 will make it harder to detect various kinds of attacks on the DFZ. A second result from this research is that using backscatter, the researchers determined IPv6 scanning is increasing over time; while the IPv6 space is not currently a prime target for attacks, it might become more so over time, if the scanning rate is any indicator.</p>
<p>The bottom line is—IPv6 hosts need to be monitored as closely as, or more closely than, IPv4 hosts for scanning events. The techniques used for scanning the IPv6 address space are not well understood at this time, either.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11073</post-id>	</item>
		<item>
		<title>DNS Query Minimization and Data Leaks</title>
		<link>https://rule11.tech/dns-query-minimization-and-data-leaks/</link>
					<comments>https://rule11.tech/dns-query-minimization-and-data-leaks/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 26 Aug 2019 17:00:18 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[STANDARDS]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=10874</guid>

					<description><![CDATA[When a recursive resolver receives a query from a host, it will first consult any local cache to discover if it has the information required to resolve the query. If it does not, it will begin with the rightmost section of the domain name, the Top Level Domain (TLD), moving left through each section of the Fully Qualified Domain Name (FQDN), in order to find an IP address to return to the host, as shown in the diagram below.

<img src="https://rule11.tech/wp-content/uploads/dns-query-400x285.jpg" alt="" width="400" height="285" class="alignnone" />

This is pretty simple at its most basic level, of course—virtually every network engineer in the world understands this process (and if you don’t, you should enroll in my <em>How the Internet Really Works</em> webinar the next time it is offered!). The question almost no-one ever asks, however, is: <em>what, precisely, is the recursive server sending to the root, TLD, and authoritative servers?</em>]]></description>
										<content:encoded><![CDATA[<p>When a recursive resolver receives a query from a host, it will first consult any local cache to discover if it has the information required to resolve the query. If it does not, it will begin with the rightmost section of the domain name, the Top Level Domain (TLD), moving left through each section of the Fully Qualified Domain Name (FQDN), in order to find an IP address to return to the host, as shown in the diagram below.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://i0.wp.com/rule11.tech/wp-content/uploads/dns-query.jpg?resize=400%2C285&#038;ssl=1" alt="" width="400" height="285" class="alignnone" /></p>
<p>This is pretty simple at its most basic level, of course—virtually every network engineer in the world understands this process (and if you don’t, you should enroll in my <em>How the Internet Really Works</em> webinar the next time it is offered!). The question almost no-one ever asks, however, is: <em>what, precisely, is the recursive server sending to the root, TLD, and authoritative servers?</em></p>
<p>Begin with the perspective of a coder who is developing the code for that recursive server. You receive a query from a host, you have the code check the local cache, and you find there is no matching information available locally. This means you need to send a query out to some other server to determine the correct IP address to return to the host. You could keep a copy of the query from the host in your local cache and build a new query to send to the root server.</p>
<p>Remember, however, that local server resources may be scarce; recursive servers must be optimized to process very high query rates very quickly. Much of the user’s perception of network performance is actually tied to DNS performance. A second option is you could save local memory and processing power by sending the <em>entire query,</em> as you have received it, on to the root server. This way, you do not need to build a new query packet to send to the root server.</p>
<p>Consider this process, however, in the case of a query for a local, internal resource you would rather not let the world know exists. The recursive server, by sending the entire query to the root server, is also sending information about the internal DNS structure and potential internal server names to the external root server. As the FQDN is resolved (or not), this same information is sent to the TLD and authoritative servers, as well.</p>
<p>There is something else contained here, however, that is not so obvious—the IP address of the requestor is contained in that original query, as well. Not only is your internal namespace leaking, your internal IP addresses are leaking, as well.</p>
<p>This is not only a massive security hole for your organization, it also exposes information from individual users on the global ‘net.</p>
<p>There are several things that can be done to resolve this problem. Organizationally, running a private DNS server, hard-coding resolving servers for internal domains, and using internal domains that are not part of the existing TLD infrastructure can go a long way towards preventing information leakage of this kind through DNS. Operating a DNS server internally might not be ideal, of course, although DNS services are integrated into a lot of other directory services used in operational networks. If you are using a local DNS server, it is important to remember to configure DHCP and/or IPv6 ND to send the correct, internal, DNS server address, rather than an external address. It is also important to either block or redirect DNS queries sent to public servers by hosts using hard-coded DNS server configurations.</p>
<p>A second line of defense is through DNS query minimization. <a href="https://www.rfc-editor.org/rfc/pdfrfc/rfc7816.txt.pdf">Described in RFC 7816, query minimization says recursive servers should minimize the QNAME, asking each server only about the one relevant part of the FQDN.</a> For instance, if the recursive server receives a query for <code>www.banana.example,</code> the server should request information about <code>.example</code> from the root server, <code>banana.example</code> from the TLD, and send the full requested domain name only to the authoritative server. This way, the full search is not exposed to the intermediate servers, protecting user information.</p>
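<p>The idea can be sketched in a few lines (the function name here is mine, not from the RFC): a minimizing resolver walks the name right to left, exposing only one additional label to each server in the delegation chain.</p>

```python
def minimized_qnames(fqdn):
    """Yield the successively longer query names a minimizing
    resolver sends, per the RFC 7816 approach: the root sees only
    the TLD, the TLD server sees one more label, and so on."""
    labels = fqdn.rstrip(".").split(".")
    for i in range(len(labels) - 1, -1, -1):
        yield ".".join(labels[i:])

# A non-minimizing resolver sends the full name to every server;
# a minimizing one sends only:
print(list(minimized_qnames("www.banana.example")))
# ['example', 'banana.example', 'www.banana.example']
```

<p>Only the authoritative server, at the end of the chain, ever sees the full name being resolved.</p>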
<p>Some recursive server implementations already support QNAME queries. If you are running a server for internal use, you should ensure the server you are using supports DNS query minimization. If you are directing your personal computer or device to publicly reachable recursive servers, you should investigate whether these servers support DNS query minimization.</p>
<p>Even with DNS query minimization, your recursive server still knows a lot about what you ask for&#8212;the topic of discussion on a <a href="https://rule11.tech/hedge/">forthcoming episode of the Hedge, where our guest will be Geoff Huston.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/dns-query-minimization-and-data-leaks/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10874</post-id>	</item>
		<item>
		<title>There is Always a Back Door</title>
		<link>https://rule11.tech/there-is-always-a-back-door/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 12 Aug 2019 17:00:04 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=10807</guid>

					<description><![CDATA[A long time ago, I worked in a secure facility. I won’t disclose the facility; I’m certain it no longer exists, and the people who designed the system I’m about to describe are probably long retired. Soon after being transferred into this organization, someone noted I needed to be trained on how to change the cipher door locks. We gathered up a ladder, placed the ladder just outside the door to the secure facility, popped open one of the tiles on the drop ceiling, and opened a small metal box with a standard, low security key. Inside this box was a jumper board that set the combination for the secure door.
First lesson of security: there is (almost) always a back door.

<a href="https://www.usenix.org/conference/usenixsecurity18/presentation/birge-lee">I was reminded of this while reading a paper recently published about a backdoor attack on certificate authorities.</a> There are, according to the paper, around 130 commercial Certificate Authorities (CAs). Each of these CAs issues widely trusted certificates used for everything from securing web browsing sessions with TLS to validating route origination information with the RPKI. When you encounter these certificates, you assume at least two things: the private key in the public/private key pair has not been compromised, and the person who claims to own the key is really the person you are talking to. The first of these two can come under attack through data breaches. The second is the topic of the paper in question.

How do CAs validate the person asking for a certificate actually is who they claim to be? Do they work for the organization they are obtaining a certificate for? Are they the “right person” within that organization to ask for a certificate? Shy of having a personal relationship with the person who initiates the certificate request, how can the CA validate who this person is and if they are authorized to make this request?]]></description>
										<content:encoded><![CDATA[<p>A long time ago, I worked in a secure facility. I won’t disclose the facility; I’m certain it no longer exists, and the people who designed the system I’m about to describe are probably long retired. Soon after being transferred into this organization, someone noted I needed to be trained on how to change the cipher door locks. We gathered up a ladder, placed the ladder just outside the door to the secure facility, popped open one of the tiles on the drop ceiling, and opened a small metal box with a standard, low security key. Inside this box was a jumper board that set the combination for the secure door.<br />
First lesson of security: there is (almost) always a back door.</p>
<p><a href="https://www.usenix.org/conference/usenixsecurity18/presentation/birge-lee">I was reminded of this while reading a paper recently published about a backdoor attack on certificate authorities.</a> There are, according to the paper, around 130 commercial Certificate Authorities (CAs). Each of these CAs issues widely trusted certificates used for everything from securing web browsing sessions with TLS to validating route origination information with the RPKI. When you encounter these certificates, you assume at least two things: the private key in the public/private key pair has not been compromised, and the person who claims to own the key is really the person you are talking to. The first of these two can come under attack through data breaches. The second is the topic of the paper in question.</p>
<p>How do CAs validate the person asking for a certificate actually is who they claim to be? Do they work for the organization they are obtaining a certificate for? Are they the “right person” within that organization to ask for a certificate? Shy of having a personal relationship with the person who initiates the certificate request, how can the CA validate who this person is and if they are authorized to make this request?</p>
<p>They could do research on the person—check their social media profiles, verify their employment history, etc. They can also send them something that, in theory, only that person can receive, such as a physical letter, or an email sent to their work email address. To be more creative, the CA can ask the requestor to create a small file on their corporate web site with information supplied by the CA. In theory, these electronic forms of authentication should be solid. After all, if you have administrative access to a corporate web site, you are probably working in information technology at that company. If you have a work email address at a company, you probably work for that company.</p>
<p>These electronic forms of authentication, however, can turn out to be much like the small metal box which holds the jumper board that sets the combination just outside the secure door. They can be more security theater than real security.<br />
In fact, the authors of this paper found that some 70% of the CAs could be tricked into issuing a certificate for just about any organization—by hijacking a route. Suppose the CA asks the requestor to place a small file containing some supplied information on the corporate web site. The attacker creates a web server, inserts the file, hijacks the route to the corporate web site so it points at the fake web site, waits for the authentication to finish, and then removes the hijacked route.</p>
<p>The solution recommended in this paper is for the CAs to use multiple overlapping factors when authenticating a certificate requestor—which is always a good security practice. Another solution recommended by the authors is to monitor your BGP tables from multiple “views” on the Internet to discover when someone has hijacked your routes, and take active measures to either remove the hijack, or at least to detect the attack.</p>
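<p>The multiple-vantage-point idea can be sketched simply (the vantage names and tokens below are invented for illustration): the CA accepts the challenge only when every vantage point sees the same expected token, so a hijack visible from only part of the Internet fails the check.</p>

```python
def challenge_passes(expected_token, observations):
    """observations maps vantage-point name -> token fetched from
    the applicant's web site. Require unanimous agreement, so a
    localized BGP hijack that fools one vantage point is not enough."""
    return all(tok == expected_token for tok in observations.values())

# Hypothetical example: the hijacker's fake web site is only
# visible from one region, so the validation fails.
seen = {"us-east": "abc123", "eu-west": "abc123", "ap-south": "EVIL"}
print(challenge_passes("abc123", seen))  # False
```

<p>Real deployments of this idea fetch the challenge from several networks with distinct upstream providers, precisely so that a single hijacked route cannot satisfy all observers.</p>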
<p>These are all good measures—ones your organization should already be taking.</p>
<p>But the larger point should be this: putting a firewall in front of your network is not enough. Trusting that others will “do their job correctly,” and hence that you can trust the claims of certificates or CAs, is not enough. The Internet is a low trust environment. You need to think about the possible back doors and think about how to close them (or at least know when they have been opened).</p>
<p>Having personal relationships with people you do business with is a good start. Being creative in what you monitor and how is another. Firewalls are not enough. Two-factor authentication is not enough. Security is systemic and needs to be thought about holistically.<br />
There are always back doors.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10807</post-id>	</item>
		<item>
		<title>What&#8217;s in your DNS query?</title>
		<link>https://rule11.tech/whats-in-your-dns-query/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 05 Aug 2019 17:00:54 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=10777</guid>

					<description><![CDATA[Privacy problems are an area of wide concern for individual users of the Internet—but what about network operators? In this issue of <em>The Internet Protocol Journal,</em> Geoff Huston has an article up about privacy in DNS, and the various attempts to make DNS private on the part of the IETF—the result can be summarized with this long, but entertaining, quote:

<blockquote><a href="http://ipj.dreamhosters.com/wp-content/uploads/2019/07/ipj222.pdf"> The Internet is largely dominated, and indeed driven, by surveillance, and pervasive monitoring is a feature of this network, not a bug. Indeed, perhaps the only debate left today is one over the respective merits and risks of surveillance undertaken by private actors and surveillance by state-sponsored actors. … We have come a very long way from this lofty moral stance on personal privacy into a somewhat tawdry and corrupted digital world, where “do no evil!” has become “don’t get caught!”</a></blockquote>

Before diving into a full-blown look at the many problems with DNS security, it is worth considering what kinds of information can leak through the DNS system. <a href="https://blog.apnic.net/2019/06/24/real-time-detection-of-dns-exfiltration/">Let’s ignore the recent discovery that DNS queries can be used to exfiltrate data;</a> instead, let’s look at more mundane data leakage from DNS queries.]]></description>
										<content:encoded><![CDATA[<p>Privacy problems are an area of wide concern for individual users of the Internet—but what about network operators? In this issue of <em>The Internet Protocol Journal,</em> Geoff Huston has an article up about privacy in DNS, and the various attempts to make DNS private on the part of the IETF—the result can be summarized with this long, but entertaining, quote:</p>
<blockquote><p><a href="http://ipj.dreamhosters.com/wp-content/uploads/2019/07/ipj222.pdf"> The Internet is largely dominated, and indeed driven, by surveillance, and pervasive monitoring is a feature of this network, not a bug. Indeed, perhaps the only debate left today is one over the respective merits and risks of surveillance undertaken by private actors and surveillance by state-sponsored actors. … We have come a very long way from this lofty moral stance on personal privacy into a somewhat tawdry and corrupted digital world, where “do no evil!” has become “don’t get caught!”</a></p></blockquote>
<p>Before diving into a full-blown look at the many problems with DNS security, it is worth considering what kinds of information can leak through the DNS system. <a href="https://blog.apnic.net/2019/06/24/real-time-detection-of-dns-exfiltration/">Let’s ignore the recent discovery that DNS queries can be used to exfiltrate data;</a> instead, let’s look at more mundane data leakage from DNS queries.</p>
<p>For instance, say you work in a marketing department for a company that is just about to release a new product. In order to build the marketing and competitive materials your sales critters will need to stand in front of customers, you do a lot of research around competitor products. In the process, you examine, in detail, each of the competing product’s pages. Or perhaps you work in a company that is determining whether purchasing or merging with another company might be a good idea. Or you are working on a new externally facing application, or a component in an existing application, that relies on a new connection point into your network.</p>
<p>All of these processes can lead to a lot of DNS queries. For someone who knows what they are looking for, the pattern of queries, combined with strings queried through search engines and other information, may be enough to guess a lot about that new product, which company your company is thinking about buying or merging with, what your new application is going to do, etc. DNS is a treasure trove of information at both a personal and an organizational level.</p>
<p>Operators and protocol designers have been working for years to resolve these problems, making DNS queries “more private;” Geoff Huston’s article provides a good overview of many of these attempts. DNS over HTTPS (DoH), a recent (and ongoing) attempt, bears a closer look.</p>
<p>DNS is normally sent “in plain text” over the network; anyone who can capture the packets can read not only the query, but also the responses. The simplest way to solve this problem is to encrypt the DNS data in flight using something like TLS—hence DoT, or DNS over TLS. One problem with DoT is that it is carried over its own well-known port (853), which means it is probably blocked by default by most packet filters, and can easily be blocked by administrators who either do not know what this traffic is or do not want it on their network. To solve this, DoH carries TLS-encrypted DNS traffic in a way that makes it look just like any other HTTPS session. If you block DoH traffic, you will also block access to web servers running HTTPS. This is the logical “end” of carrying everything else over HTTPS to avoid the impact of stateful and stateless packet filters and the impact of middle boxes on Internet traffic.</p>
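<p>Why DoH blends in with ordinary web traffic is easiest to see in the wire format: an RFC 8484 GET request is just an HTTPS URL carrying a base64url-encoded binary DNS query. A minimal sketch follows (the resolver URL is a placeholder; real clients also support POST and negotiate content types):</p>

```python
import base64
import struct

def doh_url(qname, resolver="https://dns.example/dns-query"):
    """Build an RFC 8484-style DoH GET URL for an A-record query."""
    # DNS header: ID=0 (RFC 8484 suggests 0 for cacheable GETs),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    question = b""
    for label in qname.rstrip(".").split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    wire = header + question
    # base64url without padding, per RFC 8484
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={b64}"

print(doh_url("www.example.com"))
```

<p>To a packet filter, the result is indistinguishable from any other HTTPS GET with a query parameter—only the resolver at the far end can see it is a DNS lookup.</p>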
<p>The good result is, in fact, that DNS traffic can no longer be “spied on” by anyone outside servers in the DNS system itself. Whether or not this is “enough” privacy is a matter of conjecture, however. Servers within the DNS system can still collect information about what queries you are making; if the server has access to other information about you or your organization, combining this data into a profile, or using it to decide whether deeper investigation through other data sources is warranted, is pretty simple. Ultimately, DoH is only really useful if you trust your DNS provider.</p>
<p>Do you? Perhaps more importantly—should you?</p>
<p>DNS providers are like any other business; they must buy hardware, connectivity, and the time of smart people who can make the system work, troubleshoot the system when it fails, and think about ways of improving the system. If the service is free…</p>
<p>DoH, however, has another problem Geoff outlines in his article—DNS is moved up the stack so it no longer runs over TCP and UDP directly, but rather it runs over HTTPS. This means local applications, like browsers, can run DNS queries independently of the operating system. In fact, because these queries are TLS encrypted, the operating system itself cannot even “see” the contents of these DNS queries. This might be a good thing—or might be a bad thing. If nothing else, it means the browser, or any other application, can choose to use a resolver not configured by the local operating system. A browser maker, for instance, can direct their browser to send all DNS queries made within the browser to their DNS server, exposing another source of information about users (and the organizations they work for).</p>
<p>Remember that time you typed an internal hostname incorrectly in your browser? Thankfully, you had a local DNS server configured, so the query did not go out to a resolver on the Internet. With DoH, the query can go out to an open resolver on the Internet regardless of how your local systems are configured. Something to ponder.</p>
<p>The bottom line is this—the nature of DNS makes it extremely difficult to secure. Somehow you have to have someone operate, and pay for, an open database of names which translate to addresses. Somehow you have to have a protocol that allows this database to be queried. All of these “somehows” expose information, and there is no clear way to hide that information. You can solve parts of the problem, but not the whole problem. Solving one part of the problem seems to make another part of the problem worse.</p>
<p>If you haven’t found the tradeoff, you haven’t looked hard enough.</p>
<p>In the end, though, the privacy of DNS queries at a personal and organizational level is something you need to think about.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10777</post-id>	</item>
		<item>
		<title>CAA Records and Site Security</title>
		<link>https://rule11.tech/caa-records-and-site-security/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 19 Nov 2018 18:00:16 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=9805</guid>

					<description><![CDATA[The little green lock—now being deprecated by some browsers—provides some level of comfort for many users when entering personal information on a web site. You probably know the little green lock means the traffic between the host and the site is encrypted, but you might not stop to ask the fundamental question of all cryptography:&#8230;]]></description>
										<content:encoded><![CDATA[<p>The little green lock—now being deprecated by some browsers—provides some level of comfort for many users when entering personal information on a web site. You probably know the little green lock means the traffic between the host and the site is encrypted, but you might not stop to ask the fundamental question of all cryptography: using what key? The quality of an encrypted connection is no better than the quality and source of the keys used to encrypt the data carried across the connection. If the key is compromised, then entire encrypted session is useless.</p>
<p>So where does the key pair come from to encrypt the session between a host and a server? The session key used for symmetric cryptography on each session is obtained using the public key of the server (thus through asymmetric cryptography). How is the public key of the server obtained by the host? Here is where things get interesting.</p>
<p>The older way of doing things was for a list of domains trusted to provide a public key for a particular server to be carried in HTTP. The host would open a session with a server, which would then provide, in the opening HTTP packets, a list of domains where its public key could be found. The host would then find one of those hosts, and hence the server’s public key. From there, the host could create the correct nonce and other information to form a session key with the server. If you are quick on the security side, you might note a problem with this solution: if the HTTP session itself is somehow hijacked early in the setup process, a man-in-the-middle could substitute its own host list for the one the server provides. Once this substitution is done, the MITM could set up perfectly valid encrypted sessions with both the host and the server, funneling traffic between them. The MITM now has full access to the unencrypted data flowing through the session, even though the traffic is encrypted as it flows over the rest of the ‘net.</p>
<p>To solve this problem, a new method for finding the server’s public key was designed around 2010. In this method, the host requests the Certificate Authority Authorization (CAA) record from the server’s DNS server. This record lists the domains that are authorized to provide a public key, or certificate, for the servers within a domain. Thus, if you purchase your certificates from <em>BigCertProvider,</em> you would list <em>BigCertProvider’s</em> domain in your CAA. The host can then find the correct DNS record, and retrieve the correct certificate from the DNS system. This cuts out the possibility of a MITM attacking the HTTP session during the initial setup phases. If DNSSEC is deployed, the DNS records should also be secured, preventing MITM attacks from that angle, as well.</p>
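<p>The authorization logic a CA applies against CAA records is simple in spirit. Here is a simplified sketch (it ignores flags, wildcard <code>issuewild</code> records, and climbing the DNS tree to parent domains, all of which the full RFC 8659 algorithm includes; the names are invented):</p>

```python
def ca_may_issue(caa_records, ca_domain):
    """caa_records is a list of (tag, value) pairs from a domain's
    CAA record set. If any 'issue' records exist, a CA may issue
    only if its domain appears in one; an empty record set places
    no CAA restriction at all."""
    issue = [value for tag, value in caa_records if tag == "issue"]
    if not issue:
        return True  # no CAA records: any CA may issue
    return ca_domain in issue

records = [("issue", "bigcertprovider.example"),
           ("iodef", "mailto:security@banana.example")]
print(ca_may_issue(records, "bigcertprovider.example"))  # True
print(ca_may_issue(records, "other-ca.example"))         # False
```

<p>Note the default: with no CAA records published, every CA is implicitly authorized, which is one reason publishing them matters.</p>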
<p>The paper under review today examines the deployment of CAA records in the wild, to determine how widely CAAs are deployed and used.</p>
<p><citation>Scheitle, Quirin, Taejoong Chung, Jens Hiller, Oliver Gasser, Johannes Naab, Roland van Rijswijk-Deij, Oliver Hohlfeld, et al. 2018. “A First Look at Certification Authority Authorization (CAA).” <em>SIGCOMM Comput. Commun. Rev.</em> 48 (2): 10–23. <a href="https://doi.org/10.1145/3213232.3213235">https://doi.org/10.1145/3213232.3213235</a>.</citation></p>
<p>In this paper, a group of researchers put the CAA system to the test to see just how reliable the information is. In their first test, they attempted to request certificates that would cause the issuer to issue invalid certificates in some way; they found that many certificate providers will, in fact, issue such invalid certificates for various reasons. For instance, in one case, they discovered a defect in the provider’s software that allowed their automated system to issue invalid certificates.</p>
<p>In their second test, they examined the results of DNS queries to determine if DNS operators were supporting and returning CAA certificates. They discovered that very few certificate authorities deploy security controls on CAA lookups, leaving open the possibility of the lookups themselves being hijacked. Finally, they examine the deployment of CAA in the wild by web site operators. They found CAA is not widely deployed, with CAA records covering around 40,000 domains. DNSSEC and CAA deployment generally overlap, pointing to a small section of the global ‘net that is concerned about the security of their web sites.</p>
<p>Overall, the results of this study were not heartening for the overall security of the ‘net. While the HTTP based mechanism of discovering a server’s certificate is being deprecated, not many domains have started deploying the CAA infrastructure to replace it—in fact, only a small number of DNS providers support users entering CAA records into their zones.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9805</post-id>	</item>
		<item>
		<title>BGP Hijacks: Two more papers consider the problem</title>
		<link>https://rule11.tech/bgp-hijacks-two-more-papers-consider-the-problem/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 05 Nov 2018 18:00:32 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=9762</guid>

<description><![CDATA[The security of the global Default Free Zone (DFZ) has been a topic of much debate and concern for the last twenty years (or more). Two recent papers have brought this issue to the surface once again—it is worth looking at what these two papers add to the mix of what is known, and what&#8230;]]></description>
<content:encoded><![CDATA[<p>The security of the global Default Free Zone (DFZ) has been a topic of much debate and concern for the last twenty years (or more). Two recent papers have brought this issue to the surface once again—it is worth looking at what these two papers add to the mix of what is known, and what solutions might be available. The first of these—</p>
<p><em>Demchak, Chris, and Yuval Shavitt. 2018. “China’s Maxim – Leave No Access Point Unexploited: The Hidden Story of China Telecom’s BGP Hijacking.” Military Cyber Affairs 3 (1). <a href="https://doi.org/10.5038/2378-0789.3.1.1050">https://doi.org/10.5038/2378-0789.3.1.1050</a>.</em></p>
<p>—traces the impact of Chinese “state actor” activity on BGP routing in recent years.</p>
<p><crosspost><a href="http://www.circleid.com/posts/20181106_bgp_hijacks_two_more_papers_consider_the_problem/">cross posted to CircleID</a></crosspost></p>
<p>Whether these are actual attacks, or mistakes from human error for various reasons, generally cannot be known, but the potential, at least, for serious damage to companies and institutions relying on the DFZ is hard to overestimate. This paper lays out the basic problem, and then works through a number of BGP hijacks in recent years, showing how they misdirected traffic in ways that could have facilitated attacks, whether by mistake or intentionally. For instance, quoting from the paper—</p>
<ul>
<li>Starting from February 2016 and for about 6 months, routes from Canada to Korean government sites were hijacked by China Telecom and routed through China.</li>
<li>On October 2016, traffic from several locations in the USA to a large Anglo-American bank headquarters in Milan, Italy was hijacked by China Telecom to China.</li>
<li>Traffic from Sweden and Norway to the Japanese network of a large American news organization was hijacked to China for about 6 weeks in April/May 2017.</li>
</ul>
<p>What impact could such a traffic redirection have? If you can control the path of traffic while a TLS or SSL session is being set up, you can place your server in the middle as an observer. This can, in many situations, be avoided if DNSSEC is deployed to ensure the certificates used in setting up the TLS session is valid, but DNSSEC is not widely deployed, either. Another option is to simply gather encrypted traffic and either attempt to break the key, or use data analytics to understand what the flow is doing <a href="https://rule11.tech/short-take-side-channel-attacks/">(a side channel attack).</a></p>
<p>What can be done about these kinds of problems? The “simplest”—and most naïve—answer is “let’s just secure BGP.” There are many, many problems with this solution. Some of them are highlighted in the second paper under review—</p>
<p><em>Bonaventure, Olivier. n.d. “A Survey among Network Operators on BGP Prefix Hijacking – Computer Communication Review.” Accessed November 3, 2018. <a href="https://ccronline.sigcomm.org/2018/ccr-january-2018/a-survey-among-network-operators-on-bgp-prefix-hijacking/">https://ccronline.sigcomm.org/2018/ccr-january-2018/a-survey-among-network-operators-on-bgp-prefix-hijacking/</a>.</em></p>
<p>—which illustrates the objections providers have to the many forms of BGP security that have been proposed to this point. The first is, of course, that it is expensive. The ROI of the systems proposed thus far is very low; the cost is high, and the benefit to the individual provider is rather low. There is both a <em>race to perfection</em> problem here and a <em>tragedy of the commons</em> problem. The <em>race to perfection</em> problem is this: we will not design, nor push for the deployment of, any system which does not “solve the problem entirely.” This has been the mantra behind BGPSEC, for instance. But not only is BGPSEC expensive—I would say to the point of being impossible to deploy—it is also not perfect.</p>
<p>The second problem in the ROI space is the <em>tragedy of the commons.</em> I cannot do much to prevent other people from misusing my routes. All I can really do is stop myself and my neighbors from misusing other people’s routes. What incentive do I have to try to make the routing in my neighborhood better? The hope that everyone else will do the same. Thus, the only way to maintain the commons of the DFZ is for everyone to work together for the common good. This is difficult. Worse than herding cats.</p>
<p>A second point—not well understood in the security world—is this: a core point of DFZ routing is that when you hand your reachability information to someone else, you lose control over that reachability information. There have been a number of proposals to “solve” this problem, but it is a basic fact that if you cannot control the path traffic takes through your network, then you have no control over the profitability of your network. This tension can be seen in the results of the survey above. People want security, but they do not want to release the information needed to make security happen. <em>Both realities are perfectly rational!</em></p>
<p>Part of the problem with the “more strict,” and hence (considered) “more perfect” security mechanisms proposed is simply this: they are not quiet enough. They expose far too much information. Even systems designed to prevent information leakage ultimately leak too much.</p>
<p>So… what do real solutions on the ground look like?</p>
<p>One option is for everyone to encrypt all traffic, all the time. This is a point of debate, however, as it also damages the ability of providers to optimize their networks. One point where the plumbing analogy for networking breaks down is this: all bits of water are the same. Not all bits on the wire are the same.</p>
<p><a href="https://rule11.tech/ossification-and-fragmentation-the-once-and-future-net/">Another option is to rely less on the DFZ. We already seem to be heading in this direction, if Geoff Huston and other researchers are right.</a> Is this a good thing, or a bad one? It is hard to tell from this angle, but a lot of people think it is a bad thing.</p>
<p>Perhaps we should revisit some of the proposed BGP security solutions, reshaping some of them into something that is more realistic and deployable? Perhaps—but the community is going to have to let go of the “but it’s not perfect” line of thinking, and start developing some practical, deployable solutions that don’t leak so much information.</p>
<p><a href="http://web.thinkingcat.com/wordpress/blog/">Finally, there is a solution Leslie Daigle and I have been tilting at for a couple of years now:</a> finding a way to build a set of open source tools that will allow any operator or provider to quickly and cheaply build an internal system to check the routing information available in their neighborhood on the ’net, and mix local policy with that information to do some bare-bones work to make their neighborhood a little cleaner. This is a lot harder than “just build some software” for various reasons; the work is often difficult—as Leslie says, it is largely a matter of herding cats, rather than inventing new things.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9762</post-id>	</item>
		<item>
		<title>IPv6 Security Considerations</title>
		<link>https://rule11.tech/ipv6-security-considerations/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 01 Oct 2018 17:00:55 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[STANDARDS]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=9490</guid>

					<description><![CDATA[When rolling out a new protocol such as IPv6, it is useful to consider the changes to security posture, particularly the network’s attack surface. While protocol security discussions are widely available, there is often not “one place” where you can go to get information about potential attacks, references to research about those attacks, potential counters,&#8230;]]></description>
										<content:encoded><![CDATA[<p>When rolling out a new protocol such as IPv6, it is useful to consider the changes to security posture, particularly the network’s attack surface. While protocol security discussions are widely available, there is often not “one place” where you can go to get information about potential attacks, references to research about those attacks, potential counters, and operational challenges. <a href="https://tools.ietf.org/html/draft-ietf-opsec-v6-13">In the case of IPv6, however, there is “one place” you can find all this information: draft-ietf-opsec-v6. </a>This document is designed to provide information to operators about IPv6 security based on solid operational experience—and it is a <em>must read</em> if you have either deployed IPv6 or are thinking about deploying IPv6.</p>
<p><crosspost><a href="http://www.circleid.com/posts/20181001_ipv6_security_considerations/">cross posted on CircleID</a></crosspost></p>
<p>The draft is broken up into four broad sections; the first is the longest, addressing generic security considerations. The first consideration is whether operators should use Provider Independent (PI) or Provider Assigned (PA) address space. One of the dangers with a large address space is the sheer size of the potential routing table in the Default Free Zone (DFZ). If every network operator opted for an IPv6 /32, the potential size of the DFZ routing table is 2.4 billion routing entries. If you thought converging on about 800,000 routes was bad, just wait ‘til there are 2.4 billion routes. Of course, the actual PI space is being handed out on /48 boundaries, which makes the potential table size orders of magnitude larger still. PI space, then, is “bad for the Internet” in some very important ways.</p>
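For a sense of scale, a quick back-of-the-envelope calculation (my illustration, not from the draft) shows how moving the allocation boundary from /32 to /48 multiplies the number of possible prefixes:

```python
# Each one-bit shortening of the allocation boundary doubles the
# number of possible prefixes that could appear in the DFZ table.
def prefix_count_multiplier(old_len: int, new_len: int) -> int:
    """How many times more prefixes fit when allocations move from
    old_len to new_len boundaries (new_len > old_len)."""
    return 2 ** (new_len - old_len)

# Moving PI allocations from /32 to /48 multiplies the potential
# number of distinct prefixes by 65,536.
print(prefix_count_multiplier(32, 48))  # 65536
```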
<p>This document provides the other side of the argument—security is an issue with PA space. While IPv6 was supposed to make renumbering as “easy as flipping a switch,” it does not, in fact, come anywhere near this. Some reports indicate IPv6 re-addressing is more difficult than IPv4. Long, difficult renumbering processes indicate many opportunities for failures in security, and hence a large attack surface. Preferring PI space over PA space becomes a matter of reducing the operational attack surface.</p>
<p>Another interesting question when managing an IPv6 network is whether static addressing should be used for some services, or if all addresses should be dynamically learned. There is a perception out there that because the IPv6 address space is so large, it cannot be “scanned” to find hosts to attack. As pointed out in this draft, there is research showing this is simply not true. Further, static addresses may expose specific servers or services to easy recognition by an attacker. The point the authors make here is that either way, endpoint security needs to rely on actual security mechanisms, rather than on hiding addresses in some way.</p>
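As an illustration of why “the space is too big to scan” fails, consider how a hypothetical scanner might prioritize its probes (the address patterns below are illustrative only; the studies cited in the draft use more sophisticated target generation):

```python
import ipaddress

def candidate_targets(subnet: str, count: int = 4):
    """Yield 'likely' host addresses in an IPv6 subnet. Statically
    numbered hosts often sit at low interface identifiers
    (::1, ::2, ...), so a scanner tries those before probing randomly."""
    net = ipaddress.IPv6Network(subnet)
    for host in range(1, count + 1):
        yield net.network_address + host

# A /64 holds 2**64 addresses, but the first few are
# disproportionately likely to be real servers.
targets = [str(a) for a in candidate_targets("2001:db8:0:1::/64")]
print(targets)  # first entry: '2001:db8:0:1::1'
```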
<p>Other very useful topics considered here are Unique Local Addresses (ULAs), numbering and managing point-to-point links, privacy extensions for SLAAC, using a /64 per host, extension headers, securing DHCP, ND/RA filtering, and control plane security.</p>
<p>If you are deploying, or thinking about deploying, IPv6 in your network, this is a “must read” document.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9490</post-id>	</item>
		<item>
		<title>Research: Tail Attacks on Web Applications</title>
		<link>https://rule11.tech/research-tail-attacks-on-web-applications/</link>
					<comments>https://rule11.tech/research-tail-attacks-on-web-applications/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 12 Sep 2018 17:00:26 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=9439</guid>

					<description><![CDATA[When you think of a Distributed Denial of Service (DDoS) attack, you probably think about an attack which overflows the bandwidth available on a single link; or overflowing the number of half open TCP sessions a device can have open at once, preventing the device from accepting more sessions. In all cases, a DoS or&#8230;]]></description>
										<content:encoded><![CDATA[<p>When you think of a Distributed Denial of Service (DDoS) attack, you probably think about an attack which overflows the bandwidth available on a single link; or overflowing the number of half open TCP sessions a device can have open at once, preventing the device from accepting more sessions. In all cases, a DoS or DDoS attack will involve a lot of traffic being pushed at a single device, or across a single link.</p>
<div class="tldr"><strong>TL;DR</strong>
<ul>
<li>Denial of service attacks do not always require high volumes of traffic</li>
<li>An intelligent attacker can exploit the long tail of service queues deep in a web application to bring the service down</li>
<li>These kinds of attacks would be very difficult to detect</li>
</ul>
</div>
<p>But if you look at an entire system, there are a lot of places where resources are scarce, and hence are places where resources could be consumed in a way that prevents services from operating correctly. Such attacks would not need to be distributed, because they could take much less traffic than is traditionally required to deny a service. These kinds of attacks are called <em>tail attacks,</em> because they attack the long tail of resource pools, where these pools are much thinner, and hence much easier to attack.</p>
<p>There are two probable reasons these kinds of attacks are not often seen in the wild. First, they require an in-depth knowledge of the system under attack. Most of these long tail attacks will take advantage of the interaction surface between two subsystems within the larger system. Each of these interaction surfaces can also be attack surfaces if an attacker can figure out how to access and take advantage of them. Second, these kinds of attacks are difficult to detect, because they do not require large amounts of traffic, or other unusual traffic flows, to launch.</p>
<p>The paper under review today, <em>Tail Attacks on Web Applications,</em> discusses a model for understanding and creating tail attacks in a multi-tier web application—the kind commonly used for any large-scale frontend service, such as ecommerce and social media.</p>
<p><span style="color: #999999;"><em>Huasong Shan, Qingyang Wang, and Calton Pu. 2017. Tail Attacks on Web Applications. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS &#8217;17). ACM, New York, NY, USA, 1725-1739. DOI: https://doi.org/10.1145/3133956.3133968</em></span></p>
<p>The figure below illustrates a basic service of this kind for those who are not familiar with it.</p>
<p><img data-recalc-dims="1" decoding="async" class="alignnone" src="https://i0.wp.com/rule11.tech/wp-content/uploads/tail-app-dos-1.png?w=600&#038;ssl=1" alt=""  /></p>
<p>The typical application at scale will have at least three stages. The first stage will terminate the user’s session and render content; this is normally some form of modified web server. The second stage will gather information from various backend services (generally microservices), and pass the information required to build the page or portal to the rendering engine. The microservices, in turn, build individual parts of the page, and rely on various storage and other services to supply the information needed.</p>
<p>If you can find some way to clog up the queue at one of the storage nodes, you can cause every other service along the information path to wait on the prior service to fulfill its part of the job in hand. This can cause a cascading effect through the system, where a single node struggling because of full queues can cause an entire set of dependent nodes to become effectively unavailable, cascading to a larger set of nodes in the next layer up. For instance, in the network illustrated, if an attacker can somehow cause the queues at storage service 1 to fill up, even for a moment, this can cascade into a backlog of work at services 1 and 2, cascading into a backlog at the front-end service, ultimately slowing—or even shutting—the entire service down. The queues at storage service 1 may be the same size as every other queue in the system (although they are likely smaller, as they face internal, rather than external, services), but storage system 1 may be servicing many hundreds, perhaps thousands, of copies of services 1 and 2.</p>
<p>The queues at storage service 1—and all the other storage services in the system—represent a hidden bottleneck in the overall system. If an attacker can, for a few moments at a time, cause these internal, intra-application queues to fill up, the overall service can be made to slow down to the point of being almost unusable.</p>
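The cascade can be made concrete with a toy discrete-time, single-queue simulation (my own sketch, not the paper's model): a steady load well under capacity never stalls, while a few short, synchronized bursts against the same queue do real damage:

```python
from collections import deque

def simulate(ticks, steady_rate, service_rate, queue_limit,
             burst_at=(), burst_size=0):
    """Toy single-queue model. Each tick: steady_rate requests arrive
    (plus a burst on selected ticks), then up to service_rate requests
    complete. Returns how many requests found the queue full."""
    q = deque()
    stalled = 0
    for t in range(ticks):
        arrivals = steady_rate + (burst_size if t in burst_at else 0)
        for _ in range(arrivals):
            if len(q) < queue_limit:
                q.append(t)
            else:
                stalled += 1          # queue full: request backs up upstream
        for _ in range(min(service_rate, len(q))):
            q.popleft()
    return stalled

# Steady load well under capacity: nothing stalls.
print(simulate(100, steady_rate=8, service_rate=10, queue_limit=20))  # 0
# Three small, well-timed bursts overwhelm the very same queue.
print(simulate(100, steady_rate=8, service_rate=10, queue_limit=20,
               burst_at={10, 11, 12}, burst_size=30))  # many requests stall
```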
<p>How plausible is this kind of attack? The researchers modeled a three-stage system (most production systems have more than three stages) and examined the total queue path through the system. By examining the queue depths at each stage, they devised a way to fill the queues at the first stage in the system by sending millibursts of valid session requests to the rendering engine, or the user-facing piece of the application. Even if these millibursts are spread out across the edge of the application, so long as they are all the same kind of requests, and timed correctly, they can bring the entire system down. In the paper, the researchers go further and show that once you understand the architecture of one such system, it is possible to try different millibursts on a running system, causing the same DoS effect.</p>
<p>This kind of attack, because it is built out of legitimate traffic and can be spread across the entire public-facing edge of an application, would be nearly impossible to detect or counter at the network edge. One possible counter to this kind of attack would be increasing capacity in the deeper stages of the application. This countermeasure could be expensive, as the data must be stored on a larger number of servers. Further, data synchronized across multiple systems will be subject to CAP limitations, which will ultimately limit the speed at which the application can run anyway. Operators could also consider fine-grained monitoring, which increases the amount of telemetry that must be recovered from the network and processed—another form of monetary tradeoff.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/research-tail-attacks-on-web-applications/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9439</post-id>	</item>
		<item>
		<title>Research: DNSSEC in the Wild</title>
		<link>https://rule11.tech/research-dnssec-in-the-wild/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Wed, 05 Sep 2018 17:00:43 +0000</pubDate>
				<category><![CDATA[RESEARCH]]></category>
		<category><![CDATA[SECURITY]]></category>
		<guid isPermaLink="false">https://rule11.tech/?p=9420</guid>

					<description><![CDATA[The DNS system is, unfortunately, rife with holes like Swiss Cheese; man-in-the-middle attacks can easily negate the operation of TLS and web site security. To resolve these problems, the IETF and the DNS community standardized a set of cryptographic extensions to cryptographically sign all DNS records. These signatures rely on public/private key pairs that are&#8230;]]></description>
										<content:encoded><![CDATA[<p>The DNS system is, unfortunately, rife with holes like Swiss Cheese; man-in-the-middle attacks can easily negate the operation of TLS and web site security. To resolve these problems, the IETF and the DNS community standardized a set of cryptographic extensions to cryptographically sign all DNS records. These signatures rely on public/private key pairs that are transitively signed (forming a signature chain) from individual subdomains through the Top Level Domain (TLD). Now that these standards are in place, how heavily is DNSSEC being used in the wild? How much safer are we from man-in-the-middle attacks against TLS and other transport encryption mechanisms?</p>
<div class="tldr"><strong>TL;DR</strong>
<ul>
<li>DNSSEC is enabled on most top level domains</li>
<li>However, DNSSEC is not widely used or deployed beyond these TLDs</li>
</ul>
</div>
<p><crosspost><a href="http://www.circleid.com/posts/20180906_a_look_at_current_state_of_dnssec_in_the_wild/">Crossposted at CircleID</a></crosspost></p>
<p><a href="https://www.usenix.org/publications/login/winter2017/chung">Three researchers published an article in Winter <em>;login;</em> describing their research into answering this question (membership and login required to read the original article).</a> The result? While more than 90% of the TLDs in DNS are DNSSEC enabled, DNSSEC is still not widely deployed or used. To make matters worse, where it is deployed, it isn’t well deployed. The article mentions two specific problems that appear to plague DNSSEC implementations.</p>
<p><em>First,</em> on the server side, a number of domains deploy either weak or expired keys. An easily compromised key is often worse than having no key at all; there is no way to tell the difference between a key that has been compromised and one that has not. A weak key that has been compromised does not just impact the domain in question, either. If the weakly protected domain has subdomains, or its key is used to validate other domains in any way, the entire chain of trust through the weak key is compromised. Beyond this, there is a threshold over which a system cannot pass without the entire system, itself, losing the trust of its users. If 30% of the keys returned in DNS are compromised, for instance, most users would probably stop trusting any DNSSEC signed information. While expired keys are more obvious than weak keys, relying on expired keys still works against user trust in the system.</p>
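To see why one weak link poisons everything below it, here is a toy model of a signature chain (a sketch only: it uses HMAC as a stand-in for DNSSEC's public-key signatures, and plain dictionaries in place of DNSKEY/DS records):

```python
import hmac, hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real signature; actual DNSSEC uses public-key
    # cryptography (RRSIG records), not HMAC.
    return hmac.new(key, data, hashlib.sha256).digest()

# Each zone has a key; each parent vouches for its child's key
# (roughly the role a DS record plays).
keys = {".": b"root-key", "com.": b"com-key", "example.com.": b"example-key"}
ds = {
    "com.": sign(keys["."], keys["com."]),
    "example.com.": sign(keys["com."], keys["example.com."]),
}

def validate_chain(name: str) -> bool:
    """Walk from the root down to `name`, checking every link."""
    labels = name.rstrip(".").split(".")
    zone = "."
    for i in range(len(labels) - 1, -1, -1):
        child = ".".join(labels[i:]) + "."
        if ds.get(child) != sign(keys[zone], keys[child]):
            return False
        zone = child
    return True

print(validate_chain("example.com."))  # True
ds["example.com."] = b"tampered"       # compromise one link...
print(validate_chain("example.com."))  # ...and validation fails: False
```

A compromise anywhere along the path has the same effect as tampering with the last link: everything beneath it stops validating.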
<p><em>Second,</em> DNSSEC is complex. The net result of a complex protocol combined with low deployment and demand on the server side is poor client implementations. Many implementations, according to the research in this paper, simply ignore failures in the validation process. Some of the key findings of the paper are—</p>
<ul>
<li>One-third of the DNSSEC enabled domains produce responses that cannot be validated</li>
<li>While TLD operators widely support DNSSEC, registrars who run authoritative servers rarely support DNSSEC; thus the chain of trust often fails at the first hop in the resolution process beyond the TLD</li>
<li>Only 12% of the resolvers that request DNSSEC records in the query process validate them</li>
</ul>
<p>To discover the deployment of DNSSEC, the researchers built an authoritative DNS server and a web server to host a few files. They configured subdomains on the authoritative server; some subdomains were configured correctly, while others were configured incorrectly (a certificate was missing, expired, malformed, etc.). By examining DNS requests for the subdomains they configured, they could determine which DNS resolvers were using the included DNSSEC information, and which were not.</p>
<p>Based on their results, the authors of this paper make some specific recommendations, such as enabling DNSSEC validation on all resolvers, including the recursive servers your company probably operates for internal and external use. Owners of domain names should also ask their registrars to support DNSSEC on their authoritative servers.</p>
<p>Ultimately, it is up to the community of operators and users to make DNSSEC a reality in the ‘net.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9420</post-id>	</item>
		<item>
		<title>Securing BGP: A Case Study (10)</title>
		<link>https://rule11.tech/securing-bgp-10/</link>
					<comments>https://rule11.tech/securing-bgp-10/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 09 May 2016 17:15:01 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">http://rule11.tech/?p=6463</guid>

<description><![CDATA[The next proposed (and actually already partially operational) system on our list is the Resource Public Key Infrastructure (RPKI) system, which is described in RFC7115 (and a host of additional drafts and RFCs). The RPKI system is focused on solving a single problem: validating that the originating AS is authorized to originate a particular prefix.&#8230;]]></description>
<content:encoded><![CDATA[<p>The next proposed (and actually already partially operational) system on our list is the Resource Public Key Infrastructure (RPKI) system, which is described in <a href="https://datatracker.ietf.org/doc/rfc7115/">RFC7115</a> (and a <a href="https://datatracker.ietf.org/doc/search/?name=rpki&amp;activeDrafts=on&amp;rfcs=on">host of additional drafts and RFCs)</a>. The RPKI system is focused on solving a single problem: validating that the originating AS is authorized to originate a particular prefix. An example will be helpful; we&#8217;ll use the network below.</p>
<p><img data-recalc-dims="1" decoding="async" class="alignnone size-full wp-image-6465" src="https://i0.wp.com/rule11.tech/wp-content/uploads/2016/05/RPKI-Operation.jpg?w=500&#038;ssl=1" alt="RPKI-Operation"  srcset="https://i0.wp.com/rule11.tech/wp-content/uploads/2016/05/RPKI-Operation.jpg?w=619&amp;ssl=1 619w, https://i0.wp.com/rule11.tech/wp-content/uploads/2016/05/RPKI-Operation.jpg?resize=140%2C150&amp;ssl=1 140w, https://i0.wp.com/rule11.tech/wp-content/uploads/2016/05/RPKI-Operation.jpg?resize=280%2C300&amp;ssl=1 280w" sizes="(max-width: 619px) 100vw, 619px" /></p>
<p><span style="color: #808080;"><em>(this is a graphic pulled from a presentation, rather than one of my usual line drawings)</em></span></p>
<p>Assume, for a moment, that AS65002 and AS65003 both advertise the same route, 2001:db8:0:1::/64, towards AS65000. How can the receiver determine if both of these two advertisers can actually reach the destination, or only one can? And, if only one can, how can AS65000 determine which one is the &#8220;real thing?&#8221; This is where the RPKI system comes into play. A very simplified version of the process looks something like this (assuming AS65002 is the true owner of 2001:db8:0:1::/64):</p>
<ul>
<li>AS65002 obtains, from the Regional Internet Registry (labeled the RIR in the diagram), a certificate showing AS65002 has been issued 2001:db8:0:1::/64.</li>
<li>AS65002 places this certificate into a local database that is synchronized with all the other operators participating in the routing system.</li>
<li>When AS65000 receives a route towards 2001:db8:0:1::/64, it checks this database to make certain the origin AS on the advertisement matches the owning AS.</li>
</ul>
<p>If the owner and the origin AS match, AS65000 can increase the route&#8217;s preference. If they don&#8217;t, AS65000 can reduce the route&#8217;s preference. It might be that AS65000 discards the route if the origin doesn&#8217;t match&#8212;or it may not. For instance, AS65003 may know, from historical data, through a strong and long-standing business relationship, or from some other means, that 2001:db8:0:1::/64 actually belongs to AS65004, even though the RPKI data claims it belongs to AS65002. Resolving such problems falls to the receiving operator&#8212;the RPKI simply provides more information on which to act, rather than dictating a particular action to take.</p>
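The origin check itself is straightforward to model. Here is a minimal sketch (my illustration, not a production validator) that classifies an announcement against a set of ROAs, using the three outcomes defined in RFC 6811 (valid, invalid, and not-found):

```python
import ipaddress

# Each ROA: (covered prefix, maximum prefix length, authorized origin AS).
roas = [
    (ipaddress.ip_network("2001:db8:0:1::/64"), 64, 65002),
]

def validate_origin(prefix: str, origin_as: int) -> str:
    """Classify an announcement in the spirit of RFC 6811:
    'valid', 'invalid', or 'not-found'."""
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if route.subnet_of(roa_prefix):      # some ROA covers this route
            covered = True
            if route.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate_origin("2001:db8:0:1::/64", 65002))  # valid
print(validate_origin("2001:db8:0:1::/64", 65003))  # invalid: wrong origin AS
print(validate_origin("2001:db8:1::/48", 65002))    # not-found: no covering ROA
```

Note that even in this toy, the result is only a classification; as the post says, what to do with an "invalid" route remains a local policy decision.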
<p>Let&#8217;s compare this to our requirements to see how this proposal stacks up, and where there might be objections or problems.</p>
<p><strong>Centralized versus Decentralized:</strong> The distribution of the origin authentication information is currently undertaken with rsync, which means the certificate system is decentralized from a technical perspective.</p>
<p>However—there have been technical issues with the rsync solution in the past, such that it can take up to 24 hours to change the distributed database. <a href="http://rule11.tech/cap-theorem-routing/">This is a pretty extreme case of eventual consistency,</a> and it&#8217;s a major problem in the global default free zone. BGP might converge very slowly, but it still converges more quickly than 24 hours.</p>
<p>Beyond the technical problems, there is a business side to the centralized/decentralized issue as well. Specifically, many businesses don&#8217;t want their operations impacted by contract issues, negotiation issues, and the like. Many large providers see the RPKI system as creating just such problems, as the &#8220;trust anchor&#8221; is located in the RIRs. There are ways to mitigate this&#8212;just use some other root, or even self-sign your certificates&#8212;but the RPKI system faces an uphill battle in this area with large transit providers.</p>
<p><strong>Cost:</strong> The actual cost of setting up and running a server doesn&#8217;t appear to be very high within the RPKI system. The only things you need to &#8220;get into the game&#8221; are a couple of VMs or physical servers to run rsync, and some way to inject the information gleaned from the RPKI system into the routing decisions along the network edge (which could even be just plugging the information into existing policy mechanisms).</p>
<p>The business issue described above can also be counted as a cost—how much would it cost a provider if their origin authentication were taken out of the database for a day or two, or even a week or two, while a contract dispute with the RIR was worked out?</p>
<p><strong>Information Cost:</strong> There is virtually no additional information cost involved in deploying the RPKI.</p>
<p><strong>Other thoughts:</strong> The RPKI system wasn&#8217;t designed to, and doesn&#8217;t, validate anything other than the origin in the AS Path. It doesn&#8217;t, therefore, allow an operator to detect AS65003, for instance, claiming to be connected to AS65002 even though it&#8217;s not (or it&#8217;s not supposed to transit traffic to AS65002). This isn&#8217;t really a &#8220;lack&#8221; on the part of the RPKI, it&#8217;s just not something it&#8217;s designed to do.</p>
<p>Overall, the RPKI is useful, and will probably be deployed by a number of providers, and shunned by others. It would be a good component of some larger system (again, this was the original <em>intent,</em> so this isn&#8217;t a lack), but it cannot stand alone as a complete BGP security system.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/securing-bgp-10/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6463</post-id>	</item>
		<item>
		<title>Securing BGP: A Case Study (9)</title>
		<link>https://rule11.tech/securing-bgp9/</link>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 02 May 2016 17:53:48 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">http://rule11.tech/?p=6439</guid>

					<description><![CDATA[There are a number of systems that have been proposed to validate (or secure) the path in BGP. To finish off this series on BGP as a case study, I only want to look at three of them. At some point in the future, I will probably write a couple of posts on what actually&#8230;]]></description>
										<content:encoded><![CDATA[<p>There are a number of systems that have been proposed to validate (or secure) the path in BGP. To finish off this series on BGP as a case study, I only want to look at three of them. At some point in the future, I will probably write a couple of posts on what actually seems to be making it to some sort of deployment stage, but for now I just want to compare various proposals against the requirements outlined in the <a href="http://rule11.tech/securing-bgp-case-study-8/">last post on this topic (you can find that post here).</a></p>
<p>The first of these systems is BGPSEC—or as it was known before it was called BGPSEC, S-BGP. I&#8217;m not going to spend a lot of time explaining how S-BGP works, as I&#8217;ve written a series of posts over at Packet Pushers on this very topic:</p>
<p><a href="http://packetpushers.net/bgpsec-basic-operation/">Part 1: Basic Operation</a><br />
<a href="http://packetpushers.net/bgpsec-protections-offered/">Part 2: Protections Offered</a><br />
<a href="http://packetpushers.net/bgpsec-replays-timers-performance/">Part 3: Replays, Timers, and Performance</a><br />
<a href="http://packetpushers.net/bgpsec-signatures-performance/">Part 4: Signatures and Performance</a><br />
<a href="http://packetpushers.net/bgpsec-leaks-leaks/">Part 5: Leaks</a></p>
<p>Considering S-BGP against the requirements:</p>
<ul>
<li><strong>Centralized versus decentralized balance:</strong> S-BGP distributes path validation information throughout the internetwork, as this information is actually contained in a new attribute carried with route advertisements. Authorization and authentication are implicitly centralized, however, with the root certificates being held by address allocation authorities. It&#8217;s hard to say if this is the correct balance.</li>
<li><strong>Cost:</strong> In terms of financial costs, S-BGP (or BGPSEC) requires every eBGP speaker to perform complex cryptographic operations in line with receiving updates and calculating the best path to each destination. This effectively means replacing every edge router in every AS in the entire world to deploy the solution&#8212;this is definitely not cost friendly. Adding to this cost is simply the increase in the table size required to carry all this information, and the loss of commonly used (and generally effective) optimizations.</li>
<li><strong>Information cost:</strong> S-BGP leaks new information into the global table as a matter of course—not only can anyone see who is peered with whom by examining information gleaned from route view servers, they can even figure out how many actual pairs of routers connect each AS, and (potentially) what other peerings those same routers serve. This huge new chunk of information about provider topology being revealed simply isn&#8217;t acceptable.</li>
</ul>
<p>Overall, then, BGPSEC doesn&#8217;t meet the requirements as they&#8217;ve been outlined in this series of posts. Next week, I&#8217;ll spend some time explaining the operation of another potential system, a graph overlay, and then we&#8217;ll consider how well it meets the requirements as outlined in these posts.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6439</post-id>	</item>
		<item>
		<title>Securing BGP: A Case Study (8)</title>
		<link>https://rule11.tech/securing-bgp-8/</link>
					<comments>https://rule11.tech/securing-bgp-8/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 25 Apr 2016 17:38:15 +0000</pubDate>
				<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[TECH]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">http://rule11.tech/?p=6396</guid>

					<description><![CDATA[Throughout the last several months, I&#8217;ve been building a set of posts examining securing BGP as a sort of case study around protocol and/or system design. The point of this series of posts isn&#8217;t to find a way to secure BGP specifically, but rather to look at the kinds of problems we need to think&#8230;]]></description>
										<content:encoded><![CDATA[<p>Throughout the last several months, I&#8217;ve been building a set of posts examining securing BGP as a sort of case study around protocol and/or system design. The point of this series of posts isn&#8217;t to find a way to secure BGP specifically, but rather to look at the kinds of problems we need to think about when building such a system. The interplay between technical and business requirements is wide and deep. In this post, I&#8217;m going to summarize the requirements drawn from the last seven posts in the series.</p>
<p><strong>Don&#8217;t try to prove things you can&#8217;t.</strong> This might feel like a bit of an &#8220;anti-requirement,&#8221; but the point is still important. In this case, we can&#8217;t prove which path along which traffic will flow. We also can&#8217;t enforce policies, specifically &#8220;don&#8217;t transit this AS;&#8221; the best we can do is to provide information and let other operators make a local decision about what to follow and what not to follow. In the larger sense, it&#8217;s important to understand what can, and what can&#8217;t, be solved, or rather what the practical limits of any solution might be, as close to the beginning of the design phase as possible.</p>
<p>In the case of securing BGP, I can, at most, validate three pieces of information:</p>
<ul>
<li>That the origin AS in the AS Path matches the owner of the address being advertised.</li>
<li>That the AS Path in the advertisement is a valid path, in the sense that each pair of autonomous systems in the AS Path are actually connected, and that no-one has &#8220;inserted themselves&#8221; in the path silently.</li>
<li>The policies each pair of autonomous systems along the path have set towards one another. This is completely voluntary information, of course, and cannot be enforced in any way if it is provided, but the more information provided, the stronger the validation can be.</li>
</ul>
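<p>The first two of these checks can be sketched in a few lines. This is a toy illustration of the idea&#8212;not any real BGPSEC or RPKI implementation&#8212;using documentation prefixes and AS numbers; the &#8220;authorization&#8221; and &#8220;adjacency&#8221; tables are invented stand-ins for whatever registry would actually hold this data.</p>

```python
import ipaddress

# Toy sketch of origin and path validation. The data below is invented
# for illustration; prefixes and ASNs come from the documentation ranges.

# Hypothetical "ownership" records: prefix -> authorized origin ASes
# (roughly the role origin authorizations play in origin validation).
authorizations = {
    ipaddress.ip_network("192.0.2.0/24"): {64500},
}

# Hypothetical claimed adjacencies: pairs of ASes asserting they peer.
adjacencies = {frozenset({64500, 64496}), frozenset({64496, 64511})}

def valid_origin(prefix: str, origin_as: int) -> bool:
    """Does the advertised origin AS match an owner of the prefix?"""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(auth_net) and origin_as in origins
               for auth_net, origins in authorizations.items())

def valid_path(as_path: list[int]) -> bool:
    """Is every adjacent pair in the AS path a claimed adjacency?

    This is what prevents an AS from silently inserting itself."""
    return all(frozenset(pair) in adjacencies
               for pair in zip(as_path, as_path[1:]))

print(valid_origin("192.0.2.0/24", 64500))   # True
print(valid_path([64500, 64496, 64511]))     # True
print(valid_path([64500, 64497, 64511]))     # False: 64497 inserted itself
```

<p>The third check&#8212;policy&#8212;has no equivalent table lookup, which is exactly the point: it depends entirely on what each operator chooses to disclose.</p>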
<p><strong>There is a fine balance between centralized and distributed systems.</strong> There are actually two things that can be centralized or distributed in terms of BGP security: how ownership is claimed over resources, and how the validation information is carried to each participating AS. In the case of ownership, the tradeoff is between having a widely trusted third party validate ownership claims and having a third party who can shut down an entire business. In the case of distributing the information, there is a tradeoff between the consistency and the accessibility of the validation information. These are going to be points on which reasonable people can disagree, and hence are probably areas where the successful system must have a good deal of flexibility.</p>
<p><strong>Cost is a major concern.</strong> There are a number of costs that need to be considered when determining which solution is best for securing BGP, including—</p>
<ul>
<li>Physical equipment costs. The most obvious cost is the physical equipment required to implement each solution. For instance, any solution that requires providers to replace all their edge routers is simply not going to be acceptable.</li>
<li>Process costs. Any solution that requires a lot of upkeep and maintenance is going to be cast aside very quickly. Good intentions are overruled by the tyranny of the immediate about 99.99% of the time.</li>
</ul>
<p>Speed is also a cost that can be measured in business terms; if increasing security decreases the speed of convergence, providers who deploy security are at a business disadvantage relative to their competitors. The speed of convergence must be on the same order as Internet-wide convergence today.</p>
<p><strong>Information costs are a particularly important issue.</strong> There are at least three kinds of information that can leak out of any attempt to validate BGP, each of them related to connectivity—</p>
<ul>
<li>Specific information about peering, such as how many routers interconnect two autonomous systems, where interconnections are, and how interconnection points are related to one another.</li>
<li>Publicly verifiable claims about interconnection. Many providers argue there is a major difference between connectivity information that can be <em>observed</em> and connectivity information that is <em>claimed.</em></li>
<li>Publicly verifiable information about business relationships. Virtually every provider considers it important not to release at least some information about their business relationships with other providers and customers.</li>
</ul>
<p>While there is some disagreement in the community over each of these points, it&#8217;s clear that releasing the first of these is almost always going to be unacceptable, while the second and third are more situational.</p>
<p>With these requirements in place, it&#8217;s time to look at a couple of proposed systems to see how they measure up.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/securing-bgp-8/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6396</post-id>	</item>
		<item>
		<title>Information wants to be protected: Security as a mindset</title>
		<link>https://rule11.tech/information-security/</link>
					<comments>https://rule11.tech/information-security/#comments</comments>
		
		<dc:creator><![CDATA[Russ]]></dc:creator>
		<pubDate>Mon, 14 Sep 2015 13:39:46 +0000</pubDate>
				<category><![CDATA[CULTURE]]></category>
		<category><![CDATA[SECURITY]]></category>
		<category><![CDATA[WRITTEN]]></category>
		<guid isPermaLink="false">http://rule11.tech/?p=4704</guid>

					<description><![CDATA[I was teaching a class last week and mentioned something about privacy to the students. One of them shot back, &#8220;you&#8217;re paranoid.&#8221; And again, at a meeting with some folks about missionaries, and how best to protect them when trouble comes to their door, I was again declared paranoid. In fact, I&#8217;ve been told I&#8217;m&#8230;]]></description>
										<content:encoded><![CDATA[<p><a href="https://i0.wp.com/rule11.tech/wp-content/uploads/2015/09/George-Orwell-house-big-brother.jpg"><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft" style="margin-right: 5px;" src="https://i0.wp.com/rule11.tech/wp-content/uploads/2015/09/George-Orwell-house-big-brother.jpg?resize=200%2C169&#038;ssl=1" alt="George-Orwell-house-big-brother" width="200" height="169" /></a>I was teaching a class last week and mentioned something about privacy to the students. One of them shot back, &#8220;you&#8217;re paranoid.&#8221; And again, at a meeting with some folks about missionaries, and how best to protect them when trouble comes to their door, I was again declared paranoid. In fact, I&#8217;ve been told I&#8217;m paranoid after presentations by complete strangers who were sitting in the audience.</p>
<p>Okay, so I&#8217;m paranoid. I admit it.</p>
<p>But what is there to be paranoid about? We&#8217;ve supposedly gotten to the point where no-one cares about privacy, where encryption is pointless because everyone can see everything anyway, and all the rest. Everyone except me, that is—I&#8217;ve not &#8220;gotten over it,&#8221; nor do I think I ever will. In fact, I don&#8217;t think any engineer should &#8220;get over it,&#8221; in terms of privacy and security. Even if you think it&#8217;s not a big deal in your own life, engineers should learn to treat other people&#8217;s information with the utmost care.</p>
<p>In moving from the person to the digital representation of the person, we often forget it&#8217;s someone&#8217;s life we&#8217;re actually playing with. I think it&#8217;s time for engineers to take security—and privacy—personally. It&#8217;s time to actually do what we say we do, and make security a part of the design from day one, rather than something tacked on to the end.</p>
<p>And I don&#8217;t care if you think I&#8217;m paranoid.</p>
<p>Maybe it&#8217;s time to replace the old saying <em>information wants to be free.</em> Perhaps we should replace it with something a little more realistic, like:</p>
<p><strong>Information wants to be protected.</strong></p>
<p>It&#8217;s true that there are many different kinds of information. For instance, there&#8217;s the information contained in a song, or the information contained in a book, or a blog, or information about someone&#8217;s browsing history. Each piece of information has a specific intent, or purpose, a goal for which it was created. Engineers should make their default design one in which information is used only for the purpose intended by its creator (or owner). We should design this into our networks, into our applications, and into our thought patterns. It&#8217;s all too easy to think, &#8220;we&#8217;ll get to security once things are done, and there&#8217;s real data being pushed into the system.&#8221; And then it&#8217;s too easy to think, &#8220;no-one has complained, and the world didn&#8217;t fall apart, so I&#8217;ll do it later.&#8221;</p>
<p>But what does it mean to design security into the system from day one? This is often, actually, the hard part. There are tradeoffs, particularly costs, involved with security. These costs might be in terms of complexity, which makes our jobs harder, or in terms of actual costs to bring the system up in the first place.</p>
<p>But if we don&#8217;t start pushing back, who will? The users? Most of them don&#8217;t even begin to understand the threat. The business folks who pay for the networks and applications we build? Not until they&#8217;re convinced there&#8217;s an ROI they can get their minds around. Who&#8217;s going to need to build that ROI? We are.</p>
<p><a href="https://blog.cloudsecurityalliance.org/2015/09/09/four-criteria-for-legal-hold-of-electronically-stored-information-esi/">A good place to start might be here.</a></p>
<p>And we&#8217;re not going to until we all start nurturing the little security geek inside every engineer, until we start taking security (and privacy) a little more seriously. Until we stop thinking about this stuff as just bits on the wire, and start thinking about it as people&#8217;s lives. Until we reset our default to &#8220;just a little paranoid,&#8221; perhaps.</p>
<hr />
<p><span style="color: #808080;"><em>P.S. I&#8217;m not so certain we should get over it. Somehow I think we&#8217;re losing something of ourselves in this process of opening our lives to anyone and everyone, and I fear that by the time we figure out what it is we&#8217;re losing, it&#8217;ll be too late to reverse the process. Somehow I think that treating other people as a product (if the service is free, you are the product) is just wrong in ways we&#8217;ve not yet been able to define.</em></span></p>
]]></content:encoded>
					
					<wfw:commentRss>https://rule11.tech/information-security/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4704</post-id>	</item>
	</channel>
</rss>
