CAA Records and Site Security

The little green lock—now being deprecated by some browsers—provides some level of comfort for many users when entering personal information on a web site. You probably know the little green lock means the traffic between the host and the site is encrypted, but you might not stop to ask the fundamental question of all cryptography: using what key? The quality of an encrypted connection is no better than the quality and source of the keys used to encrypt the data carried across the connection. If the key is compromised, the entire encrypted session is useless.

So where does the key pair come from to encrypt the session between a host and a server? The session key used for symmetric cryptography on each session is derived using the server’s public key (thus through asymmetric cryptography). But how does the host obtain the server’s public key in the first place? Here is where things get interesting.
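To make the division of labor concrete, here is a minimal sketch of the hybrid model described above, written in Python against the third-party cryptography package (an assumption). Real TLS deployments normally use (EC)DHE key agreement rather than this simple RSA key transport, so treat this as an illustration rather than a description of the protocol.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The server's key pair; the public half is what the host must obtain.
server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

# The host picks a random 256-bit symmetric key for this session...
session_key = os.urandom(32)

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# ...and sends it to the server wrapped with the server's public key.
wrapped = server_public.encrypt(session_key, oaep)

# Only the holder of the matching private key can recover the session key.
assert server_private.decrypt(wrapped, oaep) == session_key
```

The sketch shows why everything hinges on where the public key comes from: anyone can wrap a session key with it, but the wrapping only protects the session if the key actually belongs to the server.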

The older way of doing things was for a list of domains trusted to provide a public key for a particular server to be carried in HTTP. The host would open a session with a server, which would provide, in the opening HTTP packets, a list of domains where its public key could be found. The host would then contact one of those domains to retrieve the server’s public key. From there, the host could create the correct nonce and other information to form a session key with the server. If you are quick on the security side, you might note a problem with this solution: if the HTTP session itself is somehow hijacked early in the setup process, a man-in-the-middle (MITM) could substitute its own host list for the one the server provides. Once this substitution is done, the MITM could set up perfectly valid encrypted sessions with both the host and the server, funneling traffic between them. The MITM then has full access to the unencrypted data flowing through the session, even though the traffic is encrypted as it flows over the rest of the ‘net.

To solve this problem, a new method for finding the server’s public key was designed around 2010. In this method, the host requests the Certification Authority Authorization (CAA) record from the server’s DNS server. This record lists the domains that are authorized to provide a public key, or certificate, for the servers within a domain. Thus, if you purchase your certificates from BigCertProvider, you would list BigCertProvider’s domain in your CAA record. The host can then find the correct DNS record, and retrieve the correct certificate from the DNS system. This cuts out the possibility of a MITM attacking the HTTP session during the initial setup phases. If DNSSEC is deployed, the DNS records themselves are also secured, preventing MITM attacks from that angle as well.
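As a concrete illustration, here is a minimal sketch of looking up a domain’s CAA record in Python, using the third-party dnspython library (an assumption; example.com is a placeholder domain).

```python
import dns.resolver

def fetch_caa(domain: str) -> None:
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{domain}: no CAA record published")
        return
    for rr in answers:
        # Each record carries a flags byte, a tag ("issue", "issuewild",
        # or "iodef"), and a value; an "issue" tag names a CA domain
        # authorized to issue certificates for this zone.
        print(f"{domain}: {rr.flags} {rr.tag.decode()} {rr.value.decode()}")

fetch_caa("example.com")
```

A zone publishing `0 issue "bigcertprovider.com"`, to reuse the example above, is asserting that only BigCertProvider is authorized to issue certificates for names in that zone.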

The paper under review today examines the deployment of CAA records in the wild, to determine how widely CAAs are deployed and used.

Scheitle, Quirin, Taejoong Chung, Jens Hiller, Oliver Gasser, Johannes Naab, Roland van Rijswijk-Deij, Oliver Hohlfeld, et al. 2018. “A First Look at Certification Authority Authorization (CAA).” SIGCOMM Comput. Commun. Rev. 48 (2): 10–23. https://doi.org/10.1145/3213232.3213235.

In this paper, a group of researchers put the CAA system to the test to see just how reliable the information is. In their first test, they attempted to request certificates in ways the issuer should have refused; they found that many certificate providers will, in fact, issue such invalid certificates for various reasons. For instance, in one case, they discovered a defect in the provider’s software that allowed its automated system to issue invalid certificates.

In their second test, they examined the results of DNS queries to determine whether DNS operators were supporting and returning CAA records. They discovered that very few certificate authorities deploy security controls on CAA lookups, leaving open the possibility of the lookups themselves being hijacked. Finally, they examined the deployment of CAA in the wild by web site operators. They found CAA is not widely deployed, with CAA records covering around 40,000 domains. DNSSEC and CAA deployment generally overlap, pointing to a small section of the global ‘net whose operators are concerned about the security of their web sites.

Overall, the results of this study were not heartening for the overall security of the ‘net. While the HTTP-based mechanism of discovering a server’s certificate is being deprecated, not many domains have started deploying the CAA infrastructure to replace it—in fact, only a small number of DNS providers support users entering CAA records into their domain configurations.

Research: Measuring IP Liveness

Of the 4.2 billion IPv4 addresses available in the global space, how many are used—or rather, how many are “alive?” Given the increasing usage of IPv6, it might seem this is an unimportant question. Answering the question, however, resolves to another question that is actually more important: how can you determine whether or not an IP address is in use? This question might seem easy to answer: ping every address in the address space. This, however, turns out to be the wrong answer.

Bano, Shehar, Philipp Richter, Mobin Javed, Srikanth Sundaresan, Zakir Durumeric, Steven J. Murdoch, Richard Mortier, and Vern Paxson. 2018. “Scanning the Internet for Liveness.” SIGCOMM Comput. Commun. Rev. 48 (2): 2–9. https://doi.org/10.1145/3213232.3213234.

This answer is wrong because a substantial number of systems do not respond to ICMP requests. According to this paper, in fact, some 16% of the hosts the researchers discovered would respond to a TCP SYN, and another 2% to a UDP packet shaped to connect to a service, while not responding to ICMP requests at all. There are a number of possible reasons for this, including hosts placed behind devices that block ICMP packets, hosts configured not to respond to ICMP requests, and servers sitting behind a PAT or CGNAT device that passes through service requests rather than all packets.
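To make the two probe types concrete, here is a rough sketch in Python using the third-party scapy library (an assumption). Raw-socket probes require root privileges, and the probed port is a placeholder choice.

```python
from scapy.all import ICMP, IP, TCP, sr1

def icmp_probe(ip: str, timeout: float = 2.0) -> bool:
    """True if the address answers an ICMP echo request."""
    resp = sr1(IP(dst=ip) / ICMP(), timeout=timeout, verbose=0)
    return resp is not None

def tcp_syn_probe(ip: str, port: int = 80, timeout: float = 2.0) -> bool:
    """True if the address answers a TCP SYN with SYN-ACK or RST."""
    resp = sr1(IP(dst=ip) / TCP(dport=port, flags="S"),
               timeout=timeout, verbose=0)
    if resp is None or not resp.haslayer(TCP):
        return False
    flags = int(resp[TCP].flags)
    # Per the TCP specification, an open port answers SYN-ACK (0x12)
    # and a closed port answers RST (0x04); either reply proves
    # something is alive at the address. Other responses are the kind
    # of non-compliance the paper observed.
    return (flags & 0x12) == 0x12 or (flags & 0x04) != 0
```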

The paper begins by building a taxonomy of liveness, describing the process the authors use to determine whether an address is in use, as shown in the image replicated from the paper.

One problem of note is that address usage can shift over time; between an ICMP probe and a TCP SYN probe sent to the same address, the device connected to that address can change. To limit the impact of this problem, the researchers sent each kind of liveness test to the same address close together in time. The authors then cross-referenced the liveness indicated by the different techniques into an overall view of liveness for each address.
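A rough sketch of that cross-referencing step, building on the probe functions sketched above (the shape of the verdict is an assumption for illustration, not the paper’s actual pipeline):

```python
def liveness_verdict(ip: str) -> dict:
    # The probes are sent back to back, so the device behind the
    # address has little chance to change between tests.
    results = {
        "icmp": icmp_probe(ip),
        "tcp_syn": tcp_syn_probe(ip),
    }
    # An address counts as alive if any probe type elicits a response.
    results["alive"] = any(results.values())
    return results

print(liveness_verdict("192.0.2.1"))  # placeholder (TEST-NET-1) address
```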

The research resulted in a number of interesting observations, such as the 16% of hosts that respond to TCP SYN probes on some port but do not respond to ICMP requests. The kinds of ICMP and TCP responses were also quite interesting; many TCP implementations do not appear to comply with the TCP specification in how they respond to a SYN request.

Along the way, the authors added new capabilities to ZMap that allowed them to perform these measurements. The tool they used has a web-based frontend, and can be accessed here.

The results are interesting for network operators because they indicate the kinds of work required to find all the devices attached to a network using IP addresses—a mass ping utility is simply not enough. The tools developed here, and the lessons learned, can be added to the set of tools used by operators in all networks to better understand their IP address usage, and the shape of their networks.

Weekend Reads 110818

KrebsOnSecurity recently had a chance to interview members of the REACT Task Force, a team of law enforcement officers and prosecutors based in Santa Clara, Calif. that has been tracking down individuals engaged in unauthorized “SIM swaps” — a complex form of mobile phone fraud that is often used to steal large amounts of cryptocurrencies and other items of value from victims. Snippets from that fascinating conversation are recounted below, and punctuated by accounts from a recent victim who lost more than $100,000 after his mobile phone number was hijacked. @Krebs on Security

PortSmash, as the new attack is being called, exploits a largely overlooked side-channel in Intel’s hyperthreading technology. A proprietary implementation of simultaneous multithreading, hyperthreading reduces the amount of time needed to carry out parallel computing tasks, in which large numbers of calculations or executions are carried out simultaneously. The performance boost is the result of two logical processor cores sharing the hardware of a single physical processor. The added logical cores make it easier to divide large tasks into smaller ones that can be completed more quickly. —Dan Goodin @ARS Technica

Security researchers have unveiled details of two critical vulnerabilities in Bluetooth Low Energy (BLE) chips embedded in millions of access points and networking devices used by enterprises around the world. Dubbed BleedingBit, the set of two vulnerabilities could allow remote attackers to execute arbitrary code and take full control of vulnerable devices without authentication, including medical devices such as insulin pumps and pacemakers, as well as point-of-sales and IoT devices. —Swati Khandelwal @The Hacker News

Crooks who hack online merchants to steal payment card data are constantly coming up with crafty ways to hide their malicious code on Web sites. In Internet ages past, this often meant obfuscating it as giant blobs of gibberish text that was obvious even to the untrained eye. These days, a compromised e-commerce site is more likely to be seeded with a tiny snippet of code that invokes a hostile domain which appears harmless or that is virtually indistinguishable from the hacked site’s own domain. @Krebs on Security

Over the last several years, Facebook has gone from facilitating the free flow of information to inhibiting it through incremental censorship and account purges. What began with the ban of Alex Jones last summer has since escalated to include the expulsion of hundreds of additional pages, each political in nature. And as more people become wary of the social media platform’s motives, one thing is absolutely certain: we need more market competition in the realm of social media. —Brittany Hunter @Intellectual Takeout

Tim Berners-Lee, a London-born computer scientist who invented the Web in 1989, said he was disappointed with the current state of the internet, following scandals over the abuse of personal data and the use of social media to spread hate. “What naturally happens is you end up with one company dominating the field so through history there is no alternative to really coming in and breaking things up,” Berners-Lee, 63, said in an interview. “There is a danger of concentration.” —Guy Faulconbridge, Paul Sandle @Reuters

It’s been three years since Australia adopted a national copyright blocking system, despite widespread public outcry over the abusive, far-reaching potential of the system, and the warnings that it would not achieve its stated goal of preventing copyright infringement. Three years later, the experts who warned that censorship wouldn’t drive people to licensed services have been vindicated. According to the giant media companies who drove the copyright debate in 2015, the national censorship system has not convinced Australians to pay up. —Cory Doctorow @EFF

Thiel said Silicon Valley has fallen victim to groupthink, citing its politically insular atmosphere for his moving away to Los Angeles. “There’s a sense that the network effects that made Silicon Valley good have gone haywire,” he said, according to CNBC. “It’s not the wisdom of crowds, it’s the madness of crowds.” @Market Watch

Google Chrome is the most popular browser in the world. Chrome routinely leads the pack in features for security and usability, most recently helping to drive the adoption of HTTPS. But when it comes to privacy, specifically protecting users from tracking, most of its rivals leave it in the dust. —Bennett Cyphers and Mitch Stoltz @EFF