
Weekend Reads 042018: Mostly DNS Security Stuff

For many, the conversation about online privacy centers around a few high-profile companies, and rightly so. We consciously engage with their applications and services and want to know who else might access our information and how they might use it. But there are other, less obvious ways that accessing the World Wide Web exposes us. In this post we will look at how one part of the web’s infrastructure, the Domain Name System (DNS), “leaks” your private information and what you can do to better protect your privacy and security. Although DNS has long been a serious compromise in the privacy of the web, we’ll discuss some simple steps you can take to improve your privacy online. —Stan Adams @CDT

Cloudflare, the internet security and performance services company, announced a new service called “Spectrum.” The service gets its name from the fact that Cloudflare aims to offer DDoS protection for the whole “spectrum” of ports and protocols for its enterprise customers. @Tom’s Hardware

Security researchers have been warning about an ongoing malware campaign hijacking Internet routers to distribute Android banking malware that steals users’ sensitive information, login credentials and the secret code for two-factor authentication. In order to trick victims into installing the Android malware, dubbed Roaming Mantis, hackers have been hijacking DNS settings on vulnerable and poorly secured routers. —Swati Khandelwal @The Hacker News

In summary, the best lesson one can take from this paper is that publication in a journal or conference proceedings does not guarantee that the paper withstands scrutiny. The paper is linked above for interested readers to peruse for themselves, and to investigate the claims. —Rachel Traylor @The Math Citadel

A security researcher has disclosed details of an important vulnerability in Microsoft Outlook for which the company released an incomplete patch this month—almost 18 months after receiving the responsible disclosure report. The Microsoft Outlook vulnerability (CVE-2018-0950) could allow attackers to steal sensitive information, including users’ Windows login credentials, just by convincing victims to preview an email with Microsoft Outlook, without requiring any additional user interaction. —Swati Khandelwal @The Hacker News

As Gandhi once said, “An eye for an eye will only make the whole world blind.” The same could be said about using “hack back” technology for vengeful purposes, such as security defenders who respond to attackers with the intent to harm their systems. —Dr. Salvatore Stolfo @Dark Reading

So ICANN decided to ask Article 29 for some specific guidance about WHOIS and how ICANN plans to deal with it in light of GDPR. You can read the original letter here. Article 29 were meeting in Brussels this week, and they not only discussed the ICANN request, but issued formal advice in response to ICANN’s letters. —Michele Neylon @CircleID

While performing in-depth analysis of various malware samples, security researchers at Cyberbit found a new code injection technique, dubbed Early Bird, being used by at least three different sophisticated malware that helped attackers evade detection. As its name suggests, Early Bird is a “simple yet powerful” technique that allows attackers to inject malicious code into a legitimate process before its main thread starts, and thereby avoids detection by Windows hook engines used by most anti-malware products. —Mohit Kumar @The Hacker News

Reaction: DNS Complexity Lessons

Recently, Bert Hubert wrote of a growing problem in the networking world: the complexity of DNS. We have two systems we all use in the Internet, DNS and BGP. Both of these systems appear to be able to handle anything we can throw at them and “keep on ticking.”

This article was crossposted to CircleID.

But how far can we drive the complexity of these systems before they ultimately fail? Bert posted a chart to the APNIC blog illustrating the problem.

I am old enough to remember when the entire Cisco IOS Software (classic) code base was under 150,000 lines; today, I suspect most BGP and DNS implementations are well over this size. Consider this for a moment—a single protocol implementation that is larger than an entire Network Operating System ten to fifteen years back.

What really grabbed my attention, though, was one of the reasons Bert believes we have these complexity problems—

DNS developers frequently see immense complexity not as a problem but as a welcome challenge to be overcome. We say ‘yes’ to things we should say ‘no’ to. Less gifted developer communities would have to say no automatically since they simply would not be able to implement all that new stuff. We do not have this problem. We’re also too proud to say we find something (too) hard.

How often is this the problem in network design and deployment? “Oh, you want a stretched Ethernet link between two data centers 150 miles apart, and you want an eVPN control plane on top of the stretched Ethernet to support MPLS Traffic Engineering, and you want…” All the while the equipment budget is ringing up numbers in our heads, and the really cool stuff we will be able to play with is building up on the list we are writing in front of us. Then you hear the ultimate challenge—”if you were a real engineer, you could figure out how to do this all with a pair of routers I can buy down at the local office supply store.”

Some problems just do not need to be solved in the current system. Some problems just need to have their own system built for them, rather than reusing the same old stuff because, well, “we can.”

The real engineer is the one who knows how to say “no.”

Weekend Reads 041318: GDPR, and the ever deepening pile of security vulnerabilities

I think we are all hoping that when ICANN meets with the DPAs (Data Protection Authorities) a clear path forward will be illuminated. We are all hoping that the DPAs will provide definitive guidance regarding ICANN’s interim model and that some special allowance will be made so that registrars and registries are provided with additional time to implement a GDPR-compliant WHOIS solution. —Matt Serlin @CircleID

Security researchers at Embedi have disclosed a critical vulnerability in Cisco IOS Software and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to execute arbitrary code, take full control over the vulnerable network equipment and intercept traffic. —Swati Khandelwal @The Hacker News

With the growing presence and sophistication of online threats like viruses, ransomware, and phishing scams, it’s increasingly important to have the right protection and tools to help protect your devices, personal information, and files from being compromised. Microsoft already provides robust security for Office services, including link checking and attachment scanning for known viruses and phishing threats, encryption in transit and at rest, as well as powerful antivirus protection with Windows Defender. Today, we’re announcing new advanced protection capabilities coming to Office 365 Home and Office 365 Personal subscribers to help further protect individuals and families from online threats. —Kirk Koenigsbauer @Microsoft

In September 2017 the proposed roll of the Root Zone Key Signing Key (KSK), scheduled for 11th October 2017, was suspended. I wrote about the reasons for this suspension of the key roll at the time. The grounds for this action were based on the early analysis of data derived from initial deployment of resolvers that supported the trust anchor signal mechanism described in RFC 8145. In the period since then the data shows an increasing proportion of resolvers reporting that they trust KSK-2010 (the old KSK) but not KSK-2017 (the incoming KSK). —Geoff Huston @Potaroo

Security firm Varonis analyzed data risk assessments performed by its engineers on 130 companies and 5.5 petabytes of data through 2017. What concerns Varonis technical evangelist Brian Vecci most is that companies left 21% of all their folders open to everyone in the company. —Sara Peters @Dark Reading

The U.S. Secret Service is warning financial institutions about a new scam involving the temporary theft of chip-based debit cards issued to large corporations. In this scheme, the fraudsters intercept new debit cards in the mail and replace the chips on the cards with chips from old cards. When the unsuspecting business receives and activates the modified card, thieves can start draining funds from the account. @Krebs on Security

Insider mistakes like networked backup incidents and misconfigured cloud servers caused nearly 70% of all compromised records in 2017, according to new data from IBM X-Force. These types of incidents affected 424% more records last year than the year prior, they report. —Kelly Sheridan @Dark Reading

As recent revelations from Grindr and Under Armour remind us, Facebook is hardly alone in its failure to protect user privacy, and we’re glad to see the issue high on the national agenda. At the same time, it’s crucial that we ensure that privacy protections for social media users reinforce, rather than undermine, equally important values like free speech and innovation. We must also be careful not to unintentionally enshrine the current tech powerhouses by making it harder for others to enter those markets. Moreover, we shouldn’t lose sight of the tools we already have for protecting user privacy. —Corynne McSherry @EFF

Your email address is an excellent identifier for tracking you across devices, websites and apps. Even if you clear cookies, use private browsing mode or change devices, your email address will remain the same. Due to privacy concerns, tracking companies including ad networks, marketers, and data brokers use the hash of your email address instead, purporting that hashed emails are “non-personally identifying”, “completely private” and “anonymous”. But this is a misleading argument, as hashed email addresses can be reversed to recover original email addresses. In this post we’ll explain why, and explore companies which reverse hashed email addresses as a service. —Gunes Acar @Freedom to Tinker

I had a teacher who once said, “When the stuff is hitting the fan, there are three questions to ask: What’s important? What’s missing? And what’s next?” Members of Congress will have their day with Mark Zuckerberg this week, but I’m more interested in unpacking these three questions – and moving towards their answers. —Nuala O’Connor @CDT

I recently received an email from Netflix which nearly caused me to add my card details to someone else’s Netflix account. Here I show that this is a new kind of phishing scam which is enabled by an obscure feature of Gmail called “the dots don’t matter”. I then argue that the dots do matter, and that this Gmail feature is in fact a misfeature. —James Fisher

Social media sites are littered with seemingly innocuous little quizzes, games and surveys urging people to reminisce about specific topics, such as “What was your first job,” or “What was your first car?” The problem with participating in these informal surveys is that in doing so you may be inadvertently giving away the answers to “secret questions” that can be used to unlock access to a host of your online identities and accounts. @Krebs on Security

On the ‘web: The Best Decisions

A long time ago, when I worked for a major vendor’s escalation team, I was called into a customer to help them move from one routing protocol to another. We spent some time looking over their network, discussing how best to approach the problems they would probably encounter, how long it would take to make the change, and many other issues involved. Finally, over lunch, I asked why they were planning on changing routing protocols. @ECI

On the ‘web: How to Bridge the Network Skills Gap

The landscape is littered with training for all these new skills, but there is little time, or guidance, through the process of retraining into these new skills. The process of retraining, far too often, feels like leaving all your old skills behind and learning completely new ones. The seeming wide array of paths the average engineer faces, combined with the uncertainty of which of these paths forward represent the real future, can seem overwhelming. @TechTarget

The EIGRP SIA Incident: Positive Feedback Failure in the Wild

Reading a paper to build a research post from (yes, I’ll write about the paper in question in a later post!) jogged my memory about an old case that perfectly illustrated the concept of a positive feedback loop leading to a failure. We describe positive feedback loops in Computer Networking Problems and Solutions, and in Navigating Network Complexity, but clear cut examples are hard to find in the wild. Feedback loops almost always contribute to, rather than independently cause, failures.

Many years ago, in a network far away, I was called into a case because EIGRP was failing to converge. The immediate cause was neighbor flaps, in turn caused by Stuck-In-Active (SIA) events. To resolve the situation, someone in the past had set the SIA timers really high, as in around 30 minutes or so. This is a really bad idea. The SIA timer, in EIGRP, is essentially the amount of time you are willing to allow your network to go unconverged in some specific corner cases before the protocol “does something about it.” An SIA event always represents a situation where “someone didn’t answer my query, which means I cannot stay within the state machine, so I don’t know what to do—I’ll just restart the state machine.” Now before you go beating up on EIGRP for this sort of behavior, remember that every protocol has a state machine, and every protocol has some condition under which it will restart the state machine. It just so happens that EIGRP’s conditions for this restart were too restrictive for many years, causing a lot more headaches than they needed to.

So the situation, as it stood at the moment of escalation, was that the SIA timer had been set unreasonably high in order to “solve” the SIA problem. And yet, SIAs were still occurring, and the network was still working itself into a state where it would not converge. The first step in figuring this problem out was, as always, to reduce the number of parallel links in the network to bring it to a stable state, while figuring out what was going on. Reducing complexity is almost always a good, if counterintuitive, step in troubleshooting large scale system failure. You think you need the redundancy to handle the system failure, but in many cases, the redundancy is contributing to the system failure in some way. Running the network in a hobbled, lower readiness state can often provide some relief while figuring out what is happening.

In this case, however, reducing the number of parallel links only lengthened the amount of time between complete failures—a somewhat odd result, particularly in the case of EIGRP SIAs. Further investigation revealed that a number of core routers, Cisco 7500s with SSEs, were not responding to queries. This was a particularly interesting insight. We could see the queries going into the 7500, but there was no response. Why?

Perhaps the packets were being dropped on the input queue of the receiving box? There were drops, but not nearly enough to explain what we were seeing. Perhaps the EIGRP reply packets were being dropped on the output queue? No—in fact, the reply packets just weren’t being generated. So what was going on?

After collecting several show tech outputs, and looking over them rather carefully, there was one odd thing: there was a lot of free memory on these boxes, but the largest block of available memory was really small. In old IOS, memory was allocated per process on an “as needed basis.” In fact, processes could be written to allocate just enough memory to build a single packet. Of course, if two processes allocate memory for individual packets in an alternating fashion, the memory will be broken up into single packet sized blocks. This is, as it turns out, almost impossible to recover from. Hence, memory fragmentation was a real thing that caused major network outages.
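
To make the failure mode concrete, here is a minimal sketch of the fragmentation effect, in Python rather than the C of the actual IOS code; the pool size, block size, and the two process names are all invented for illustration.

```python
# A toy model of the old IOS memory fragmentation problem: two processes
# allocate single-packet-sized blocks in alternation; when one process
# frees its blocks, total free memory is large but no contiguous block
# is bigger than one packet.

POOL_SIZE = 64 * 1024   # hypothetical memory pool size
PACKET_BLOCK = 1500     # one block per packet, roughly MTU sized

# Lay out the pool as alternating allocations by "eigrp" and "other".
allocations = []        # (owner, offset, size)
offset, turn = 0, 0
while offset + PACKET_BLOCK <= POOL_SIZE:
    owner = "eigrp" if turn % 2 == 0 else "other"
    allocations.append((owner, offset, PACKET_BLOCK))
    offset += PACKET_BLOCK
    turn += 1

# EIGRP finishes with its packets and frees its blocks...
free_blocks = [(off, size) for (owner, off, size) in allocations
               if owner == "eigrp"]

total_free = sum(size for _, size in free_blocks)
largest_free = max(size for _, size in free_blocks)

print(f"total free memory:  {total_free} bytes")
print(f"largest free block: {largest_free} bytes")
# Roughly half the pool is free, but the largest free block is still
# only one packet long -- a process that needs a larger contiguous
# block (say, to build a big reply packet) cannot allocate one.
```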

Here what we were seeing was EIGRP allocating single packet memory blocks, along with several other processes on the box. The thing is, EIGRP was actually allocating some of the largest blocks on the system. So a query would come in, get dumped to the EIGRP process, and the building of a response would be placed on the work queue. When the worker ran, it could not find a large enough block in which to build a reply packet, so it would patiently put the work back on its own queue for future processing. In the meantime, the SIA timer is ticking in the neighboring router, eventually timing out and resetting the adjacency.

Resetting the adjacency, of course, causes the entire table to be withdrawn, which, in turn, causes… more queries to be sent, resulting in the need for more replies… Causing the work queue in the EIGRP process to attempt to allocate more packet sized memory blocks, and failing, causing…

You can see how this quickly developed into a positive feedback loop—

  • EIGRP receives a set of queries to which it must respond
  • EIGRP allocates memory for each packet to build the responses
  • Some other processes allocate memory blocks interleaved with EIGRP’s packet sized memory blocks
  • EIGRP receives more queries, and finds it cannot allocate a block to build a reply packet
  • EIGRP SIA timer times out, causing a flood of new queries…

Rinse and repeat until the network fails to converge.
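
The runaway behavior is easy to see in a loose simulation. The sketch below models the shape of the loop, not the real EIGRP implementation: the timer value, flood factor, and fragmentation threshold are all invented numbers.

```python
# A loose, deterministic simulation of the feedback loop above. Each
# "tick," EIGRP tries to build one reply; once the pending queue has
# shredded the free list into packet-sized fragments, no reply can be
# built at all, queries time out, and each reset floods in new queries.

SIA_TIMEOUT = 5       # ticks a query may wait before an SIA reset
FLOOD_PER_RESET = 4   # new queries generated per reset adjacency
FRAG_THRESHOLD = 6    # past this many pending packets, no block is
                      # large enough to build a reply

queue = [0] * 8       # pending queries, tracked by age in ticks

for tick in range(1, 16):
    # On a healthy box one reply would be built per tick; here the
    # queue is already past the fragmentation threshold, so none are.
    if queue and len(queue) < FRAG_THRESHOLD:
        queue.pop(0)
    queue = [age + 1 for age in queue]

    # Queries that waited past the SIA timer reset the adjacency,
    # and each reset floods the process with fresh queries.
    expired = sum(1 for age in queue if age >= SIA_TIMEOUT)
    queue = [age for age in queue if age < SIA_TIMEOUT]
    queue += [0] * (expired * FLOOD_PER_RESET)

    print(f"tick {tick:2}: {len(queue):4} queries pending")

# The pending count grows 8 -> 32 -> 128 -> 512: the response to the
# failure (resetting the adjacency) creates more of the very condition
# that caused the failure.
```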

The basic problem with positive feedback loops is that they are almost impossible to anticipate. The interaction surfaces between two systems just have to be both deep enough to cause unintended side effects (the law of leaky abstractions almost guarantees this will be the case at least some of the time), and opaque enough to prevent you from seeing the interaction (this is what abstraction is supposed to do). There are many ways to solve positive feedback loops once they are found. In this case, cleaning up the way packet memory was allocated in all the processes in IOS, and, eventually, giving the active process in EIGRP an additional, softer state before it declared a condition of “I’m outside the state machine here, I need to reset,” resolved most of the incidents of SIAs in the real world.

But rest assured—there are still positive feedback loops lurking in some corner of every network.

Weekend Reads 040618: Kernel Programming, Secure Messaging, and Icebergs

Programming in user space is safer for a very small number of reasons, not the least of which is the virtual memory system, which tricks programs into believing they have full control over system memory and catches a small number of common C-language programming errors, such as touching a piece of memory that the program has no right to touch. Other reasons include the tried-and-true programming APIs that operating systems have now provided to programs for the past 30 years. All of which means programmers can possibly catch more errors before their code ships, which is great news—old news, but great news. What building code in user space does not do is solve the age-old problems of isolation, composition, and efficiency. —George V. Neville-Neil @ACM

There is no such thing as a perfect or one-size-fits-all messaging app. For users, a messenger that is reasonable for one person could be dangerous for another. And for developers, there is no single correct way to balance security features, usability, and the countless other variables that go into making a high-quality, secure communications tool. @EFF

Breaches of private information in hospital records are serious and expensive security events but remediating them can be deadly. That’s the conclusion of a study presented last week at the 4A Security and Compliance Conference. The data shows that the type and scale of a breach don’t have an impact on patient outcomes but that breaches do have an effect, and it appears to come from the hospital’s response rather than the attack itself. The effect is serious: mortality rates go up significantly. —Curtis Franklin Jr. @Dark Reading

Financial organizations are no strangers to regulation, but when it comes to cybersecurity, new mandates keep cropping up, and for good reason. According to a study from Accenture and the Ponemon Institute, the global financial services sector has experienced a 40% increase in the cost of cyberattacks during the past three years. Cyber heists against a string of banks (such as $81 million stolen from the Bangladesh central bank and $6 million from a Russian bank) and high-profile data breaches of well-known global financial organizations have demonstrated that financial companies are top targets for cybercriminals. —Steven Grossman @Dark Reading

Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.” —Doc Searls

Crypto-backdoors for law enforcement is a reasonable position, but the side that argues for it adds things that are either outright lies or morally corrupt. Every year, the amount of digital evidence law enforcement has to solve crimes increases, yet they outrageously lie, claiming they are “going dark”, losing access to evidence. A weirder claim is that those who oppose crypto-backdoors are nonetheless ethically required to make them work. This is morally corrupt. @Errata Security

Reaction: The NRE as the new architect

Over at the Packet Pushers, Anthony Miloslavsky suggests that network architects have outlived their usefulness, so it is time to think of a new role. He describes a role called the “NRE” to replace the architect; the NRE would—

…spend no less than 50% of their time focusing on automation, while spending the other 50% deeply embedded in the operations/engineering/architecture realms of networking. They participate in an on-call rotation to stay in touch with the ops side of the house, with a focus on “treating operations as if it’s a software problem” in response. NREs would provide an expert big picture view of BOTH the development/automation and network operation/design sides of the house.

The author goes on to argue that we need someone who will do operations, engineering, architecture, and development because “pure architecture” folks tend to “lose touch” with the operations side of things. It is too easy to “throw a solution over the cubicle wall” without considering the implementation and operational problems. But, as a friend used to ask of everything when I was still in electronics, will it work? I suspect the answer is no for several reasons.

First, there is no such person as described, and I am not certain there ever can be. Splitting your time between operations, engineering, architecture, and development means you will not actually be “deeply embedded” in any of the above. Being in the on-call rotation, for instance, means that you will not have the time required to both research and understand new development practices, languages, and tools. Being in the on-call rotation also means you will not have the time to research any sort of new technology or idea that is being proposed or considered, nor keep up with any research. Perhaps all of these things might seem useless from an operational point of view, but they are not.

Second, again, there is no such person as described. Being in the on-call rotation will prevent you from ever being able to keep up with your coding skills, and it will keep you from being able to develop the “larger picture” thinking you need to do architecture. “Sorry, I was called out for ten hours yesterday, so I did not have time to understand this new line of business being proposed,” is a recipe for diminished respect towards the IT organization.

Third, even if such a person did exist, I am not convinced it would solve the problem. There was a time, when I was in the Technical Assistance Center (TAC) at a vendor, when I wished the coders could just come pull some TAC duty time. And there were many times when the coders, I am certain, wished I would come pull some coding duty time. We tried this. The resulting escalated cases and new bugs resulting from the exercise made more work for everyone.

Fourth, even if such a person did exist, the job of the architect is not to be buried in the details of configurations and coding. The job of the architect is to pull business, operations, and technology together. I know the business part is easy to forget when you are called out at 2AM, but it is still important if you want to have a company that operates a network in the first place.

My estimation, after seeing this sort of thing tried many times? It will not work.

It is a nice theory, but the job becomes too diluted, too quickly, to be useful to anyone. The entire company ends up being harmed, with poorly laid five-year plans mixed with poorly implemented operations. If you take the theory here—just make people cross more boundaries in their day-to-day jobs—we could easily have accountants doing programming part time on accounting systems, and programmers doing every job in the company, from accounting to sales to…

If I do not think this is the answer, then what do I think is?

Stronger communications are key. The problem is not people who do different jobs, but rather people who do not talk. Figuring out how to build systems that facilitate and encourage communication is ultimately more important than asking someone to do every possible job adjacent to theirs. This is not about meetings, this is about communications—do not ever confuse these two things.

Moving back to the fundamentals is key (part 1). Communication is easier if everyone is working from the same set of ideas and goals. If you have one team focusing on building really cool technology, while another focuses on really far out business ideas, then you are going to have a clash that will cause much pain and anguish.

Moving back to the fundamentals is key (part 2). Start simple, stay simple, and be agile. The less complex the system is, the more it is made up of clearly delineated pieces, the easier it will be to communicate about where it is now, and where it is going. Further, a lot of the reason the architecture folks “throw hard things over the cubicle wall” is they simply do not understand the complexity of the existing system. This is one of the disadvantages of very complex systems—you end up in a complexity wormhole.

Ultimately, everyone needs to understand everyone else’s job, including their limits and capabilities. Sometimes it might be useful to “trade jobs,” or to get folks from one team involved in the daily operations of another. But the overall concept of having a group of people who “try to do everything,” because “architects just do not understand the real worlds of operations any longer,” is generally a recipe for failure.

Deconfusing the Static Route

Configuring a static route is just like installing an entry directly in the routing table (or the RIB).

I have been told this many times in my work as a network engineer by operations people, coders, designers, and many other folks. The problem is that it is, in some routing table implementations, too true. To understand, it is best to take a short tour through how a typical RIB interacts with a routing protocol. Assume BGP, or IS-IS, learns about a new route that needs to be installed in the RIB:

  • The RIB into which the route needs to be installed is somehow determined. This might be through some sort of special tagging, or perhaps each routing process has a separate RIB into which it is installing routes, etc. In any case, the routing process must determine which RIB the route should be installed in.
  • Look the next hop up in the RIB, to determine if it is reachable. A route cannot be installed if there is no next hop through which to forward the traffic towards the described destination.
  • Call the RIB interface to install the route.

The last step results in one of two possible reactions. The first is that the local RIB code compares any existing route to the new route, using the administrative distance and other factors (internal IS-IS routes should be preferred over external routes, eBGP routes should be preferred over iBGP routes, etc.) to decide which route should “win.” This process can be quite complex, however, as the rules are different for each protocol, and can change between protocols. The second reaction avoids building this complexity into the RIB itself: rather than maintaining long switch statements in parallel with the routing protocol code, many RIB implementations use a set of callback functions, asking the protocol that owns the currently installed route to determine whether the existing route or the new route should be preferred.

The callback process works like this—

  1. The IS-IS process receives an LSP, calculates a new route based on the information (using SPF), and installs the route into the routing table.
  2. The RIB calls back to the owner of the current route, BGP, handing the new route to BGP for comparison with the route currently installed in the RIB.
  3. BGP finds the local copy of the route (rather than the version installed in the RIB) based on the supplied information, and determines the IS-IS route should win over the current BGP route. It sends this information to the RIB.

Using a callback system of this kind allows the “losing” routing protocol to determine if the new route should replace the current route. This might seem to be slower, but the reduced complexity in the RIB code is most often worth the tradeoff in time.
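
To illustrate the shape of this interface, here is a minimal sketch of a RIB that defers the comparison to the protocol owning the installed route. The class names, administrative distances, and the distance-based preference rule are invented for illustration; real implementations carry far more state.

```python
# A toy RIB that uses owner callbacks to compare routes, as described
# above. All names and rules here are invented; a real RIB is far more
# involved.

class Route:
    def __init__(self, prefix, next_hop, owner, distance):
        self.prefix = prefix
        self.next_hop = next_hop
        self.owner = owner        # the protocol process that produced it
        self.distance = distance  # administrative distance

class Protocol:
    name = "base"

    def prefers(self, current, candidate):
        # Called back by the RIB when another process offers a route for
        # a prefix this protocol currently owns. The default rule is just
        # administrative distance; each protocol can override this with
        # its own logic (eBGP over iBGP, internal over external, etc.).
        return candidate if candidate.distance < current.distance else current

class BGP(Protocol):
    name = "bgp"

class ISIS(Protocol):
    name = "isis"

class RIB:
    def __init__(self):
        self.routes = {}  # prefix -> installed Route

    def install(self, route):
        current = self.routes.get(route.prefix)
        if current is None:
            self.routes[route.prefix] = route
            return
        # Rather than keeping a switch statement over every protocol
        # pair in the RIB itself, ask the current route's owner to
        # decide which route should win.
        self.routes[route.prefix] = current.owner.prefers(current, route)

bgp, isis = BGP(), ISIS()
rib = RIB()
rib.install(Route("10.1.1.0/24", "192.0.2.1", bgp, distance=200))   # iBGP
rib.install(Route("10.1.1.0/24", "192.0.2.2", isis, distance=115))  # IS-IS
print(rib.routes["10.1.1.0/24"].owner.name)  # "isis": BGP's callback cedes
```

The design choice to note is that the RIB never grows protocol-specific comparison logic of its own; each protocol carries its own rules, and the RIB only brokers the callback.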

The static route is, in some implementations, an exception to this kind of processing. For instance, in old Cisco IOS code, the static route code was part of the RIB code. When you configured a static route, the code just created a new routing table entry and a few global variables to keep track of the manual configuration. FRRouting’s implementation of the static route is like this today; you can take a look at the zebra_static.c file in the FRRouting code base to see how static routes are implemented.

However, there is current work being done to separate the static route from the RIB code, creating a completely separate static process, so that static routes are processed in the same way as routes learned from any other process.

Even though many implementations manage static routes as part of the RIB, you should still think of the static route as being like any other route, installed by any other process. Thinking about the static route as a “special case” causes many people to become confused about how routing really works. For instance—

Routing is really just another kind of configuration. After all, I can configure a route directly in the routing table with a static route.

I’ve also heard some folks say something like—

Software defined networks are just like DevOps. Configuring static routes through a script is just like using a southbound interface like I2RS or OpenFlow to install routing information.

The confusion here stems from the idea that static routes directly manipulate the RIB. This is an artifact of the way the code is structured, however, rather than a fact. The processing for static routes is contained in the RIB code, but that just means the code for determining which route out of a pair wins, etc., is all part of the RIB code itself, rather than residing in a separate process. The source of the routing information is different—a human configuring the device—but the actual processing is no different, even if that processing is mixed into the RIB code.
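
In terms of the toy RIB sketched earlier (reusing its invented Route, Protocol, and RIB classes), the point is easy to demonstrate: a static route can be modeled as just another source feeding the same install path, distinguished only by its administrative distance.

```python
# Extending the toy RIB sketch above: the static "process" is just
# another route source going through the same install path. Names and
# distances remain invented for illustration.

class Static(Protocol):
    name = "static"

static = Static()
rib = RIB()
# A human typing something like "ip route 10.1.1.0/24 192.0.2.9"
# ultimately lands here, like any other route source:
rib.install(Route("10.1.1.0/24", "192.0.2.9", static, distance=1))
# A later IS-IS route for the same prefix loses on distance, through
# exactly the same owner-callback comparison as any other pair:
rib.install(Route("10.1.1.0/24", "192.0.2.2", isis, distance=115))
print(rib.routes["10.1.1.0/24"].owner.name)  # "static"
```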

What about a controller that screen scrapes the CLI for link status to discover reachable destinations, calculates a set of best paths through the network based on this information, and then uses static routes to build a routing table in each device? This can be argued to be a “form” of SDN, but the key element is the centralized calculation of loop free paths, rather than the static routes.
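
As a sketch of that distinction, the toy “controller” below computes first hops centrally over an invented three-router topology and emits one static route per device; the topology, addresses, and command syntax are all hypothetical. The SDN-like element is the central path computation, and the static routes are merely the delivery mechanism.

```python
# A toy version of the "controller pushing static routes" design:
# compute loop-free first hops centrally, then emit per-device static
# route configuration. Topology and commands are invented.

from collections import deque

# adjacency: device -> {neighbor: the neighbor's interface address}
topology = {
    "r1": {"r2": "10.0.12.2", "r3": "10.0.13.3"},
    "r2": {"r1": "10.0.12.1", "r3": "10.0.23.3"},
    "r3": {"r1": "10.0.13.1", "r2": "10.0.23.2"},
}
prefixes = {"r3": "203.0.113.0/24"}  # prefixes attached to each device

def first_hops(src):
    """BFS from src, returning the first-hop neighbor toward each device."""
    hops = {}
    seen = {src} | set(topology[src])
    queue = deque((n, n) for n in topology[src])
    while queue:
        node, first = queue.popleft()
        hops[node] = first
        for neighbor in topology[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, first))
    return hops

# The "controller" emits one static route per device per remote prefix.
# Because all routes come from one shortest-path tree, they are loop free.
for device in topology:
    hops = first_hops(device)
    for dest, prefix in prefixes.items():
        if dest == device:
            continue
        next_hop = topology[device][hops[dest]]
        print(f"{device}: ip route {prefix} {next_hop}")
```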

The humble static route has caused a lot of confusion for network engineers, but clearly separating the function from the implementation can help you understand not only how and why static routes work, but also the different components of the routing system, and what differentiates SDNs from distributed control planes.

Weekend Reads 033018: Olympic Destroyer, SONiC, and all the rest

The crippling Olympic Destroyer attack that hit several systems supporting the Pyeongchang Winter Olympics last month may have forever changed the game of attack attribution: the sophisticated attackers created a convincing forgery of malware associated with the North Korean nation-state Lazarus Group, fooling several experts who initially pinned the blame for the attacks on the DPRK. —Kelly Jackson Higgins @Dark Reading

SONiC is the default switch OS powering Azure and many other parts of the Microsoft Cloud. Since last year’s Summit, we have grown its footprint substantially and are now also powering services such as our AI platform, making sure researchers have the very best experience when working on solving some of the world’s most pressing problems. —Yousef Khalidi @Azure

For decades, academics and technologists have sparred with the government over access to cryptographic technology. In the 1970s, when crypto started to become an academic discipline, the NSA was worried, fearing that they’d lose the ability to read other countries’ traffic. And they acted. For example, they exerted pressure to weaken DES. —Steve Bellovin @CircleID

Cybersecurity attacks have become a weekly occurrence in many news columns. One recent example was that of one of our customers, QIWI payment system, successfully mitigating a 480 Gbps memcached amplified UDP DDoS attack. —Artyom Gavrichenkov @APNIC

With less than 2 months before the General Data Protection Regulations (GDPR) come into force on the 25th May, tens of thousands of businesses are woefully underprepared, and many businesses outside of the EU do not realize that GDPR also applies to them. —Patti MacDonald @Web Designer Depot

Whereas zero-days are a class of vulnerability that is unknown to a software developer or hardware manufacturer, an N-day is a flaw that is already publicly known but may or may not have a security patch available. There are countless known vulnerabilities in existence today, and many large commercial and governmental entities will find they have significant exposure within their broad network footprints. —Ang Cui @Dark Reading

If you are reading this, then you probably know by now how microservices can serve as an elegant way to break the shackles of monolithic architectures when building or deploying applications. But aside from pioneers such as Netflix, which began exploring this territory a few years ago, chances are microservices are something relatively new for your organization. Protection against cyber attacks represents an even bigger unknown for many. —B. Cameron Gain @The New Stack