
Weekend Reads 091418

Security

You install a new app on your phone, and it asks for access to your email accounts. Should you, or shouldn’t you? TL;DR? You shouldn’t. When an app asks for access to your email, the company behind it is probably reading your email, running analytics across it, and selling that information. Something to think about: how do these companies train their analytics models? By giving humans the job of reading your email.

When you shut your computer down, the contents of memory are not wiped immediately. This means an attacker can sometimes grab your data while the computer is booting, before any password is entered. Since 2008, computers have included a subsystem that wipes system memory before launching any operating system, but researchers have found a way around this memory wipe.

You know when your annoying friend goes on about the dangers of IoT while you’re bragging about your latest install of that great new electronic door lock that works off your phone? You know the one I’m talking about. Maybe that annoying friend has some things right, and we should really be paying more attention to the problems inherent in large-scale IoT deployments. For instance, what would happen if you could get the electrical grid in hot water using… hot water heaters?

Copyright

One of the seemingly intractable problems facing content creators today is copyright—this is largely an untold story, and it is also often “little folks” against “big folks.” As copyright infringement detection is automated, it is likely to become a big mess. One way to think about it: a thousand monkeys typing at a thousand typewriters are not going to produce the works of any great artist. On the other hand, a thousand humans writing pieces on the same new product announcement are bound to say the same things in the same way at some point. When everyone hits “publish” at the same time, and the bots of the big folks start calling for takedowns on content written by the little folks, the disparity in legal resources that can be brought to bear becomes the controlling factor. This problem is made worse when governments mandate the implementation of such bots.

Other Stories

While most people think of monopolies in terms of physical goods, it seems possible for monopolies to form around information and services, as well. In fact, it would seem that control of information is at the heart of every monopoly. As anti-trust forces grow against the big content providers in the U.S., the courts will need to sort out when controlling access to information, by itself, becomes a monopoly. Who are the big targets, and what would a case look like against them?

Finally, Google wants to kill the URL. Is this a good idea, or a bad one? My initial reaction is that this is a bad idea. Users certainly find URLs confusing, but this is in part our own fault. Why are URLs confusing? Primarily because we have allowed systems to tack so much state information onto them. Perhaps an alternate solution is not to bury the complexity, forcing users to trust the machine, but to make the interface simple again, so users can actually tell what is going on. Of course, one of the oldest marketing tricks in the book is to make something so complicated that users cannot understand it, then offer to sell them a solution for the complexity you have created.
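As a small illustration of how much state rides along in a typical URL, here is a sketch in Python; the example link and the list of tracking parameters are hypothetical, chosen only to show the idea of separating machine state from the part of the URL a human actually needs.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Common tracking/state parameters; this list is illustrative, not exhaustive.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid", "ref"}

def strip_state(url: str) -> str:
    """Remove known tracking state from a URL, leaving the part a human needs."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

# A hypothetical link as it might arrive via an email or social media post.
messy = ("https://example.com/products/widget"
         "?utm_source=news&utm_medium=email&fbclid=AbC123&color=blue")
print(strip_state(messy))  # https://example.com/products/widget?color=blue
```

Everything the bots care about is in the query string; everything the user cares about fits in the path and one parameter.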

Research: Tail Attacks on Web Applications

When you think of a Distributed Denial of Service (DDoS) attack, you probably think about an attack that overflows the bandwidth available on a single link, or one that exhausts the number of half-open TCP sessions a device can hold at once, preventing the device from accepting more sessions. In either case, a DoS or DDoS attack involves a lot of traffic being pushed at a single device, or across a single link.

TL;DR

  • Denial of service attacks do not always require high volumes of traffic
  • An intelligent attacker can exploit the long tail of service queues deep in a web application to bring the service down
  • These kinds of attacks would be very difficult to detect

But if you look at an entire system, there are a lot of places where resources are scarce, and hence where resources could be consumed in a way that prevents services from operating correctly. Such attacks would not need to be distributed, because they could take much less traffic than is traditionally required to deny a service. These kinds of attacks are called tail attacks, because they attack the long tail of resource pools, where the pools are much thinner, and hence much easier to exhaust.

There are two probable reasons these kinds of attacks are not often seen in the wild. First, they require an in-depth knowledge of the system under attack. Most of these long-tail attacks will take advantage of the interaction surface between two subsystems within the larger system. Each of these interaction surfaces can also become an attack surface if an attacker can figure out how to access and take advantage of it. Second, these kinds of attacks are difficult to detect, because they do not require large amounts of traffic, or other unusual traffic flows, to launch.

The paper under review today, Tail Attacks on Web Applications, discusses a model for understanding and creating tail attacks in a multi-tier web application—the kind commonly used for any large-scale frontend service, such as ecommerce and social media.

Huasong Shan, Qingyang Wang, and Calton Pu. 2017. Tail Attacks on Web Applications. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS ’17). ACM, New York, NY, USA, 1725-1739. DOI: https://doi.org/10.1145/3133956.3133968

The figure below illustrates a basic service of this kind for those who are not familiar with it.

The typical application at scale will have at least three stages. The first stage terminates the user’s session and renders content; this is normally some form of modified web server. The second stage gathers information from various backend services (generally microservices) and passes the information required to build the page or portal to the rendering engine. The microservices, in turn, build individual parts of the page, relying on various storage and other services to supply the information needed.

If you can find some way to clog up the queue at one of the storage nodes, you can cause every other service along the information path to wait on the prior service to fulfill its part of the job at hand. This can cause a cascading effect through the system, where a single node struggling with full queues makes an entire set of dependent nodes effectively unavailable, which in turn cascades to a larger set of nodes in the next layer up. For instance, in the network illustrated, if an attacker can somehow cause the queues at storage service 1 to fill up, even for a moment, this can cascade into a backlog of work at services 1 and 2, then into a backlog at the front-end service, ultimately slowing—or even shutting—the entire service down. The queues at storage service 1 may be the same size as every other queue in the system (although they are likely smaller, as they face internal, rather than external, services), but storage service 1 may be servicing many hundreds, perhaps thousands, of copies of services 1 and 2.
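To make the cascade concrete, here is a minimal simulation sketch in Python. The tier names, queue capacities, arrival rate, and the ten-tick storage stall are all invented for illustration; none of them come from the paper.

```python
# Minimal sketch of a queue cascade in a three-tier application.
# All names, capacities, and rates are illustrative assumptions.

CAPACITY = {"frontend": 100, "service": 100, "storage": 50}
RATE = {"frontend": 40, "service": 40, "storage": 40}   # max drained per tick
ARRIVALS = 35                                           # new requests per tick

queues = {tier: 0 for tier in CAPACITY}
dropped = 0

for tick in range(120):
    # New requests arrive at the front end; anything over capacity is lost.
    space = CAPACITY["frontend"] - queues["frontend"]
    queues["frontend"] += min(ARRIVALS, space)
    dropped += max(ARRIVALS - space, 0)

    # A transient stall deep in the system: for ten ticks, storage drains
    # almost nothing (think a saturated disk queue or lock contention).
    storage_rate = 2 if 50 <= tick < 60 else RATE["storage"]
    queues["storage"] -= min(queues["storage"], storage_rate)

    # Each tier can push work downstream only while the next queue has room.
    room = CAPACITY["storage"] - queues["storage"]
    moved = min(queues["service"], RATE["service"], room)
    queues["service"] -= moved
    queues["storage"] += moved

    room = CAPACITY["service"] - queues["service"]
    moved = min(queues["frontend"], RATE["frontend"], room)
    queues["frontend"] -= moved
    queues["service"] += moved

    if tick % 10 == 0:
        print(f"tick {tick:3}: {queues}  dropped so far: {dropped}")
```

Running this, the storage queue saturates within a tick or two of the stall, the service tier backs up a few ticks later, and the front end begins shedding load shortly after that; the backlog takes far longer to drain than the ten-tick stall that caused it.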

The queues at storage service 1—and all the other storage services in the system—represent a hidden bottleneck in the overall system. If an attacker can, for a few moments at a time, cause these internal, intra-application queues to fill up, the overall service can be made to slow down to the point of being almost unusable.

How plausible is this kind of attack? The researchers modeled a three-stage system (most production systems have more than three stages) and examined the total queue path through the system. By examining the queue depths at each stage, they devised a way to fill the queues at the first stage in the system by sending millibursts of valid session requests to the rendering engine, or the user-facing piece of the application. Even if these millibursts are spread out across the edge of the application, so long as they are all the same kind of request, and timed correctly, they can bring the entire system down. In the paper, the researchers go further and show that once you understand the architecture of one such system, it is possible to try different millibursts on a running system, causing the same DoS effect.
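For a sense of what a milliburst looks like on the wire, here is a minimal sketch that sends short, synchronized bursts of otherwise-legitimate requests. The endpoint, burst size, and timing are illustrative assumptions, and the only sensible target is a test deployment you own.

```python
import threading
import time
import urllib.request

# Illustrative parameters for load-testing your own deployment;
# these are assumptions, not values taken from the paper.
TARGET = "http://localhost:8080/search?q=widget"  # hypothetical endpoint
BURST_SIZE = 50          # concurrent requests per burst
BURST_INTERVAL = 1.0     # seconds between bursts
BURSTS = 10

def one_request():
    try:
        urllib.request.urlopen(TARGET, timeout=5).read()
    except OSError:
        pass  # timeouts and refusals are expected once queues fill

for burst in range(BURSTS):
    start = time.monotonic()
    threads = [threading.Thread(target=one_request) for _ in range(BURST_SIZE)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    print(f"burst {burst}: {BURST_SIZE} requests in {elapsed:.3f}s")
    time.sleep(max(BURST_INTERVAL - elapsed, 0))
```

The point to notice is that the average request rate here is modest; only the tight temporal clustering does the damage, which is exactly why this traffic looks legitimate at the edge.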

This kind of attack, because it is built out of legitimate traffic and can be spread across the entire public-facing edge of an application, would be nearly impossible to detect or counter at the network edge. One possible counter to this kind of attack would be increasing capacity in the deeper stages of the application. This countermeasure could be expensive, as the data must be stored on a larger number of servers. Further, data synchronized across multiple systems will be subject to CAP limitations, which will ultimately limit the speed at which the application can run anyway. Operators could also consider fine-grained monitoring, which increases the amount of telemetry that must be recovered from the network and processed—another form of monetary tradeoff.
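What might that fine-grained monitoring look like? One plausible sketch (my own assumption, not a technique from the paper) is tracking a high percentile of per-request latency over a short sliding window, since millibottlenecks surface in the tail long before they move any average.

```python
import random
from collections import deque

# Sliding window of recent request latencies; the window size is an assumption.
WINDOW = 200
latencies = deque(maxlen=WINDOW)

def p99() -> float:
    """Crude 99th-percentile over the window; enough for a sketch."""
    data = sorted(latencies)
    return data[max(int(len(data) * 0.99) - 1, 0)] if data else 0.0

# Simulated traffic: mostly fast requests, with one brief millibottleneck.
for i in range(1000):
    base = random.uniform(5, 15)            # normal service time in ms
    stall = 200 if 400 <= i < 430 else 0    # a 30-request stall mid-stream
    latencies.append(base + stall)
    if i % 100 == 0:
        print(f"request {i:4}: p99 = {p99():6.1f} ms")
```

The p99 jumps by an order of magnitude during the stall and recovers afterward; the tradeoff is collecting and processing per-request telemetry at this granularity.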

Think Like an Engineer, not a Cheerleader

When you see a chart like this—

—you probably think: if I were staking my career on technologies, I would want to jump from the older technology to the new one just at the point where that adoption curve starts to really drive upward.

Over at ACM Queue, Peter J. Denning has an article up on just this topic. He argues that if you understand the cost curve and tipping point of any technology, you can predict—with some level of accuracy—the point at which the adoption s-curve is going to begin its exponential growth phase.
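As a rough sketch of the argument, you can model the challenger’s cost as an exponential decline, find the year it crosses the incumbent’s cost, and center a logistic s-curve there. All the numbers below are invented for illustration; nothing is taken from Denning’s article.

```python
import math

# Invented illustrative numbers: the incumbent's cost is flat,
# the challenger's cost falls exponentially.
OLD_COST = 100.0
NEW_COST0 = 400.0       # challenger's starting cost
DECLINE = 0.25          # fractional cost decline per year

def new_cost(year: float) -> float:
    return NEW_COST0 * math.exp(-DECLINE * year)

# Tipping point: the year the falling cost crosses the incumbent's.
tip = math.log(NEW_COST0 / OLD_COST) / DECLINE
print(f"cost curves cross at year {tip:.1f}")

def adoption(year: float, steepness: float = 1.0) -> float:
    """Logistic s-curve centered on the tipping point (a modeling choice)."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - tip)))

for year in range(0, 13, 2):
    print(f"year {year:2}: new tech cost {new_cost(year):6.1f}, "
          f"adoption {adoption(year):5.1%}")
```

The clean crossover in this toy model is exactly what the rest of this post argues you should distrust: the interesting action is in the terms the model hides.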

Going back many years, I recognize this s-curve. It was used for FDDI, ATM, Banyan Vines, Novell Netware, and just about every new technology that has ever entered the market.

TL;DR

  • There are technology jump points where an entire market will move from one technology to another
  • From a career perspective, it is sometimes wise to jump to a new technology when at the early stages of such a jump
  • However, there are risks involved, such as hidden costs that prevent the jump from occurring
  • Hence, you need to be cautious and thoughtful when considering jumping to a new technology

 

The problem with this curve, especially when applied to every new technology ever invented, is that it makes it seem inevitable that some new technology will replace an older, existing technology. This, however, rests on a few assumptions that are not always warranted.

First, there is an underlying assumption that a current exponential reduction in a technology’s costs will continue until the new technology is cheaper than the old. There are several problems in this neighborhood. Sometimes, for instance, the obvious or apparent costs are much lower, but the overall costs of adoption are not. To give one example, many people still heat their homes with some form of oil-based product. Since electricity is so much less expensive—or at least it seems to be at first glance—why is this so? I’m not an economist, but I can take some wild guesses at the answer.

For instance, electricity must be generated from heat. Someplace, then, heat must be converted to electricity, the electricity transported to the home, and then the electricity converted back to heat. A crucial question: is the cost of the double conversion and transportation more than the cost of simply transporting the original fuel to the home? If so, by how much? Many of these costs can be hidden—if every person in the world converted to electric heat, what would be the cost of upgrading and maintaining an electric grid that could support this massive increase in power usage?
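Here is a back-of-the-envelope version of that question. Every figure in it (fuel prices and plant, grid, and furnace efficiencies) is an assumption picked for illustration, not market data.

```python
# Back-of-the-envelope comparison of delivered heat costs.
# Every figure below is an illustrative assumption, not market data.

# Oil path: burn fuel directly in a home furnace.
OIL_PRICE_PER_KWH_THERMAL = 0.07   # $ per kWh of heat content in the fuel
FURNACE_EFFICIENCY = 0.85          # fraction of fuel heat delivered to the home

# Electric path: heat -> electricity at the plant, transmit, convert back.
PLANT_EFFICIENCY = 0.40            # thermal-to-electric conversion
GRID_EFFICIENCY = 0.93             # after transmission/distribution losses
HEATER_EFFICIENCY = 1.00           # resistive heat is ~100% at point of use
FUEL_PRICE_AT_PLANT = 0.05         # $ per kWh of heat content at the plant

oil_cost = OIL_PRICE_PER_KWH_THERMAL / FURNACE_EFFICIENCY
electric_cost = FUEL_PRICE_AT_PLANT / (PLANT_EFFICIENCY * GRID_EFFICIENCY
                                       * HEATER_EFFICIENCY)

print(f"oil heat:      ${oil_cost:.3f} per kWh delivered")
print(f"electric heat: ${electric_cost:.3f} per kWh delivered")
```

Under these invented numbers the double conversion loses: electric heat costs about 60 percent more per delivered kilowatt-hour, even though the fuel is cheaper at the plant. And none of this counts the hidden cost of a grid sized for everyone heating electrically at once.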

Hidden costs, and our inability to see the entire system at once, often make it more difficult than it might seem to predict the actual “landing spot” on the cost curve of a technology. Nor is it always possible to assume that once a technology has reached a “landing spot,” it will stay there. Major advances in some new technology may actually cross over into the older technology, so that both cost curves are driven down at the same time.

Second, there is the problem of “good enough.” Why are there no supersonic jets flying regularly across the Atlantic Ocean? Because people who fly, as much as they might complain (like me!), have ultimately decided with their wallets that the current technology is “good enough” to solve the problem at hand, and that increasing the speed of flight just isn’t worth the risks and the costs.

Third, as Mike Bushong recently pointed out in a members’ Q&A at The Network Collective, many times a company (a startup) will fail because it is too early in the cycle, rather than too late. I will posit that technologies can go the same way: a lot of people can invest in a technology really early and find it just does not work. The idea, no matter how good, will then go on the back burner for many years—perhaps forever—until someone else tries it again.

The Bottom Line

The bottom line is this: just because the curves seem to be converging does not mean a technology is going to follow the s-curve up and to the right. If you are thinking in terms of career growth, you have to ask hard questions, think about the underlying principles, and think about what the failure scenarios might look like for this particular technology.

Another point to remember is the tried and true rule 11: what problem does this solve, and how does it solve it? How is this solution like solutions attempted in the past? If those solutions failed, what will cause the result to be different this time? Think also in terms of complexity—is the added complexity driving real value?

I am not saying you should not bet on a new technology for your future. Rather: think like an engineer, not a cheerleader.