Weekend Reads 081618: New Vulnerabilities

Spectre and Meltdown are more than a new class of security holes. They’re deeply embedded in the fundamental design of recent generations of CPUs. So it shouldn’t come as any surprise that yet another major Intel chip security problem has been discovered: Foreshadow. —Steven J. Vaughan-Nichols @ZDNet

Foreshadow Attacks – Security researchers disclosed the details of three new speculative execution side-channel attacks that affect Intel processors. The new flaws, dubbed Foreshadow and L1 Terminal Fault (L1TF), were discovered by two independent research teams. —Pierluigi Paganini @Security Affairs

Security researchers at Check Point Software Technologies have discovered a new attack vector against the Android operating system that could potentially allow attackers to silently infect your smartphones with malicious apps or launch denial of service attacks. —Swati Khandelwal @The Hacker News

When a room filled with hundreds of security professionals erupts into applause, it’s notable. When that happens less than five minutes into a presentation, it’s remarkable. But that’s what transpired when security researcher Christopher Domas last week showed a room at Black Hat USA how to break the so-called ring-privilege model of modern CPU security. —Curtis Franklin Jr. @Dark Reading

Bought a new Android phone? What if I say your brand new smartphone can be hacked remotely? Nearly all Android phones come with useless applications pre-installed by manufacturers or carriers, usually called bloatware, and there’s nothing you can do if any of them has a backdoor built-in—even if you’re careful about avoiding sketchy apps. —Swati Khandelwal @The Hacker News

Your Mac computer running Apple’s latest High Sierra operating system can be hacked by tweaking just two lines of code, a researcher demonstrated at the Def Con security conference on Sunday. Patrick Wardle, an ex-NSA hacker and now Chief Research Officer of Digita Security, uncovered a critical zero-day vulnerability in the macOS operating system that could allow a malicious application installed in the targeted system to virtually “click” objects without any user interaction or consent. —Mohit Kumar @The Hacker News

You probably received more than a few emails from companies notifying you of changes to their privacy policy in the lead-up to May 25, 2018—the day the General Data Protection Regulation (GDPR) went into effect. The European Union drafted the GDPR to protect the personal and private data of citizens of the EU and European Economic Area and to establish a standard for data-security laws across Europe. —Azam Qureshi @Data Journal

“I need you to make sure that I don’t walk into any walls or trip on the stairs,” one of my friends recently informed me. Her reason? She was running on about three and a half hours of sleep and was struggling with the simple task of walking. I hadn’t gotten much more sleep than she had, and I’m honestly not sure if we were any help to one another. I don’t remember if either of us walked into anything, but I don’t think I was alert enough to catch her if she did. —Patience Griswold @Intellectual Takeout

Reaction: Network software quality

Over at IT ProPortal, Dr. Greg Law has an article up chiding the networking world for its poor software quality. To wit—

When networking companies ship equipment out containing critical bugs, providing remediation in response to their discovery can be almost impossible. Their engineers back at base often lack the data they need to reproduce the issue as it’s usually kept by clients on premise. An inability to cure a product defect could result in the failure of a product line, temporary or permanent withdrawal of a product, lost customers, reputational damage, and product reengineering expenses, any of which could have a material impact on revenue, margins, and net income.

Let me begin here: Dr. Law, you are correct—we have a problem with software quality. I think the problem is a bit larger than just the networking world—for instance, my family just purchased two new vehicles, a Volvo and a Fiat. Both have Android systems in the center screen. And neither will connect correctly with our Android-based phones. It probably isn’t mission-critical, as it could be for a network, but it is annoying.

But even given software quality is a widespread issue in our world, it is still true that networks are something of a special case. While networks are often just hidden behind the plug, they play a much larger role in the way the world works than most people realize. Much like the train system at the turn of the century, and the mixed-mode transportation systems that enable us to put dinner on the table every night, the network carries most of what really matters in the virtual world, from our money to our medical records.

Given the assessment is correct—and I think it is—what is the answer?

One answer is to simply do better: to fuss at the vendors and the open source projects until the quality improves. The beatings, as they say, will continue until morale improves. If anyone out there thinks this will really work, raise your hands. No, higher. I can’t see you. Or maybe no-one has their hands raised.

What, then, is the solution? I think Dr. Law actually gets at the corner of what the solution needs to be in this line—

The complexity of the network stack though, is higher than ever. An increased number of protocols leads to a more complex architecture, which in turn severely impacts operational efficiency of networks.

For a short review, remember that complexity is required to solve hard problems. Specifically, the one hard problem complexity is designed to solve is environmental uncertainty. Because of this, we are not going to get rid of complexity any time soon. There are too many old applications, and too many old appliances, that no-one is willing to let go of. There are too many vendors trying to keep people within their ecosystem, and too many resulting one-off connectors to bridge the gap, that will never be replaced. Complexity isn’t really going to be dramatically reduced until we bite the bullet and take these kinds of organizational and people problems head-on.

In the meantime, what can we do?

Design simpler. Stop stacking tons of layers. Focus on solving problems, rather than deploying technologies. Stop being afraid to rip things out.

If you have read my work in the past, for instance Navigating Network Complexity, or Computer Networking Problems and Solutions, or even The Art of Network Architecture, you know the drill.

We can all cast blame on the vendors, but part of this is on us as network engineers. If you want better quality in your network, the best place to start is with the network you are working on right now, the people who are designing and deploying that network, and the people who make the business decisions.

Research: Are We There Yet? RPKI Deployment Considered

The Resource Public Key Infrastructure (RPKI) system is designed to prevent hijacking of routes at their origin AS. If you don’t know how this system works (and it is likely you don’t, because there are only a few deployments in the world), you can review the way the system works by reading through this post here on rule11.tech.

Yossi Gilad, Avichai Cohen, Amir Herzberg, Michael Schapira, and Haya Shulman. (2017). Are We There Yet? On RPKI’s Deployment and Security. NDSS 2017. doi:10.14722/ndss.2017.23123.

The paper under review today examines how widely Route Origin Validation (ROV) based on the RPKI system has been deployed. The authors began by determining which Autonomous Systems (AS’) are definitely not deploying route origin validation. They did this by comparing the routes in the global RPKI database, which is synchronized among all the AS’ deploying the RPKI, to the routes in the global Default Free Zone (DFZ), as seen from 44 different route servers located throughout the world. In comparing these two, they found a set of routes which the RPKI system indicated should be originated from one AS, but were actually being originated from another AS in the default free zone.
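
To make the comparison concrete, here is a minimal Python sketch of the origin check involved, following the validation states defined in RFC 6811 (valid, invalid, and unknown). The ROA entries and announcements are hypothetical, and a real validator must also handle IPv6, multiple covering ROAs, and a live feed from the RPKI caches.

```python
import ipaddress

# Hypothetical ROAs, as published in the RPKI: each authorizes an origin
# AS to announce a prefix, up to a maximum prefix length.
ROAS = [
    # (prefix, max_length, authorized_origin_as)
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def rov_state(prefix: str, origin_as: int) -> str:
    """Classify an announcement as valid, invalid, or unknown (RFC 6811)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.subnet_of(roa_net):  # this ROA covers the announcement
            covered = True
            # Valid only if the origin matches and the announcement is no
            # more specific than the ROA allows.
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    # Covered but never matched means the origins disagree: invalid.
    # Not covered at all means the RPKI is silent: unknown.
    return "invalid" if covered else "unknown"

print(rov_state("192.0.2.0/24", 64500))    # valid
print(rov_state("192.0.2.0/24", 64999))    # invalid: origin mismatch
print(rov_state("203.0.113.0/24", 64500))  # unknown: no covering ROA
```

The routes the researchers care about here are the “invalid” ones: announcements seen in the DFZ whose origin AS contradicts the RPKI.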

Using this information, the researchers then looked for AS’ through which these routes with mismatched RPKI and global table origins were advertised. If an AS accepted, and then readvertised, routes with mismatched origins, they marked that AS as one that does not enforce route origin validation.
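
The inference from AS paths might be sketched as follows; the paths and AS numbers are invented for illustration, and the actual measurement has to contend with partial visibility from the route servers.

```python
# Sketch: every AS that appears on the path of an RPKI-invalid route,
# other than the origin itself, accepted that route from a neighbor and
# readvertised it, so it cannot be enforcing route origin validation.
# Paths are listed collector-first, origin-last; all ASNs hypothetical.
invalid_route_paths = [
    [64496, 64497, 64510],
    [64498, 64497, 64511],
]

non_enforcing = set()
for path in invalid_route_paths:
    non_enforcing.update(path[:-1])  # everyone except the origin AS

print(sorted(non_enforcing))  # [64496, 64497, 64498]
```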

A second, similar check was used to find the mirror set of AS’, those that do perform a route origin validation check. In this case, the authors traced the same type of route, those for which the origin AS in the advertisement does not match the originating AS in the RPKI, and discovered some AS’ will not readvertise such a route. These AS’ apparently do perform a check for the correct route origin information.

The result is that only one of the 20 Internet Service Providers (ISPs) with the largest number of customers performs route origin validation on the routes they receive. Out of the largest 100 ISPs (again based on customer AS count), 22 appear to perform a route origin validation check. These are very low numbers.

To double-check these numbers, the researchers surveyed a group of ISPs and found that very few of them claim to check the routes they receive against the RPKI database. Why is this? When asked, these providers gave two reasons.

First, these providers are concerned about their connectivity being impacted in the case of an RPKI system failure. For instance, it would be easy enough for a company to become involved in a contract dispute with their numbering authority, or with some other organization (two organizations claiming the same AS number, for instance). These kinds of cases could result in many years of litigation, causing a company to effectively lose their connectivity to the global ‘net during the process. This might seem like a minor fear for some, and there might be possible mitigations, but the ‘net is much more statically defined than many people realize, and many operators operate on a razor-thin margin. The disruptions caused by such an event could simply put a company out of business.

Second, there is a general perception that the RPKI database is not exactly a “clean” representation of the real world. Since the database is essentially self-reported, there is little incentive to update it once something in the real world has changed (such as the transfer of address space between organizations). It only takes a small amount of old, stale, or incorrect information to reduce the usefulness of this kind of public database. The authors address this concern by examining the contents of the RPKI, and find that it does, in fact, contain a good bit of incorrect information. They develop a tool to help administrators find this information, but ultimately people must actually use such tools.
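
A toy version of such a hygiene check might cross-reference each ROA against the (prefix, origin) pairs actually observed in the DFZ; the data below is hypothetical, and the authors’ real tool is necessarily more careful about covering prefixes and max-length rules.

```python
# Hypothetical hygiene check: flag ROAs whose (prefix, origin) pair never
# matches anything seen in the global table, which often points at stale
# entries left behind after address space changed hands.
roas = {("192.0.2.0/24", 64500), ("198.51.100.0/24", 64501)}
observed = {("192.0.2.0/24", 64500), ("198.51.100.0/24", 64999)}

stale_candidates = {pair for pair in roas if pair not in observed}
print(stale_candidates)  # {('198.51.100.0/24', 64501)}: worth auditing
```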

The point of the paper is that the RPKI system, which is seen as crucial to the security of the global Internet, is not being widely used, and deployment does not appear to be increasing over time. One possible takeaway is that the community needs to band together and deploy this technology. Another might be that the RPKI is not a viable solution to the problem at hand for various technical and social reasons—it might be time to start looking for an alternative.


Research: HTTPS Interceptions

I have written elsewhere about the problems with the “little green lock” shown by browsers to indicate a web page is “secure.”