When you see a chart like this—
—you probably think: if I were staking my career on a technology, I would want to jump from the older technology to the new just at the point where the adoption curve starts to drive sharply upward.
Over at ACM Queue, Peter J. Denning has an article up on just this topic. He argues that if you understand the cost curve and tipping point of any technology, you can predict—with some level of accuracy—the point at which the adoption s-curve is going to begin its exponential growth phase.
Going back many years, I recognize this s-curve. It was used for FDDI, ATM, Banyan Vines, Novell NetWare, and just about every new technology that has ever entered the market.
- There are technology jump points where an entire market will move from one technology to another
- From a career perspective, it is sometimes wise to jump to a new technology when at the early stages of such a jump
- However, there are risks involved, such as hidden costs that prevent the jump from occurring
- Hence, you need to be cautious and thoughtful when considering jumping to a new technology
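The adoption s-curve these points describe is commonly modeled as a logistic function. A minimal sketch, with the caveat that the midpoint and growth rate below are purely illustrative assumptions, not data from any real market:

```python
import math

def adoption(t, midpoint=5.0, rate=1.2):
    """Logistic s-curve: fraction of the market that has adopted by time t.

    midpoint -- the "tipping point," where adoption crosses 50%
    rate     -- how steeply adoption accelerates (illustrative value)
    """
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Adoption is slow early on, accelerates sharply near the tipping
# point, then flattens out as the market saturates.
for year in range(0, 11, 2):
    print(f"year {year:2d}: {adoption(year):6.1%} adopted")
```

The "jump point" argument is a claim about where `midpoint` sits on the cost curve; the rest of this post is about why that parameter is much harder to estimate than the clean shape of the curve suggests.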
The problem with this curve, especially when applied to every new technology ever invented, is that it makes it seem inevitable that some new technology will replace an older, existing one. This, however, rests on a few assumptions that are not always warranted.
First, there is an underlying assumption that a current exponential reduction in technology costs will continue until the new technology is cheaper than the old. There are several problems in this neighborhood. Sometimes, for instance, the obvious or apparent costs are much lower, but the overall cost of adoption is not. To give one example, many people still heat their homes with some form of oil-based product. If electricity is so much less expensive—or at least seems to be at first glance—why is this so? I’m not an economist, but I can take some wild guesses at the answer.
For instance, electricity must be generated from heat. Someplace, then, heat must be converted to electricity, the electricity transported to the home, and then the electricity must be converted back to heat. A crucial question: is the cost of the double conversion and transportation more than the cost of simply transporting the original fuel to the home? If so, by how much? Many of these costs can be hidden—if every person in the world converted to electric heat, what would be the cost of upgrading and maintaining an electric grid that could support this massive increase in power usage?
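The double-conversion question can be made concrete with a back-of-envelope sketch. Every number here (plant efficiency, grid losses, furnace efficiency) is an illustrative assumption chosen only to show the shape of the comparison, not real market data:

```python
# Back-of-envelope: heat delivered per unit of fuel energy, for
# direct oil heat versus fuel -> electricity -> heat.
# All efficiency figures below are illustrative assumptions.

PLANT_EFFICIENCY = 0.40           # fuel energy -> electricity at the plant
GRID_EFFICIENCY = 0.93            # after transmission/distribution losses
RESISTIVE_HEAT_EFFICIENCY = 1.00  # electricity -> heat in the home
OIL_FURNACE_EFFICIENCY = 0.85     # fuel -> heat, burned on site

def heat_via_electricity(fuel_energy):
    """Fuel burned at a plant, converted to electricity, shipped, converted back to heat."""
    return (fuel_energy * PLANT_EFFICIENCY
            * GRID_EFFICIENCY * RESISTIVE_HEAT_EFFICIENCY)

def heat_via_oil(fuel_energy):
    """The same fuel energy burned directly in a home furnace."""
    return fuel_energy * OIL_FURNACE_EFFICIENCY

direct = heat_via_oil(100.0)          # 85 units of heat delivered
double = heat_via_electricity(100.0)  # about 37 units of heat delivered
print(f"direct oil: {direct:.0f} units, via the grid: {double:.0f} units")
```

Under these assumptions the double conversion delivers less than half the heat per unit of fuel, which hints at why the "obvious" price comparison can mislead. A heat pump, which moves heat rather than generating it resistively, would change this arithmetic entirely; that is exactly the kind of hidden variable the next paragraph is about.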
Hidden costs, and our inability to see the entire system at once, often make it more difficult than it might seem to predict the actual “landing spot” on the cost curve of a technology. Nor is it always possible to assume that once a technology has reached a “landing spot,” it will stay there. Major advances in some new technology may actually cross over into the older technology, so that both cost curves are driven down at the same time.
Second, there is the problem of “good enough.” Why are there no supersonic jets flying regularly across the Atlantic Ocean? Because people who fly, as much as they might complain (like me!), have ultimately decided with their wallets that the current technology is “good enough” to solve the problem at hand; increasing the speed of flight just is not worth the risks and the costs.
Third, as Mike Bushong recently pointed out in a member’s Q&A at The Network Collective, many times a company (startup) will fail because it is too early in the cycle, rather than too late. I will posit that technologies can go the same way; a lot of people can invest in a technology really early and find it just does not work. The idea, no matter how good, will then go on the back burner for many years—perhaps forever—until someone else tries it again.
The Bottom Line
The bottom line is this: just because the curves seem to be converging does not mean a technology is going to follow the s-curve up and to the right. If you are thinking in terms of career growth, you have to ask hard questions, think about the underlying principles, and think about what the failure scenarios might look like for this particular technology.
Another point to remember is the tried and true rule 11. What problem does this solve, and how does it solve it? How is this solution like solutions attempted in the past? If those solutions failed, what will cause the result to be different this time? Think also in terms of complexity—is the added complexity driving real value?
I am not saying you should not bet on a new technology for your future. Rather—think like an engineer, rather than a cheerleader.
Did the passage of GDPR impact the amount of spam on the ‘net, or not? It depends on who you ask.
The folks at the Recorded Future blog examined the volume of spam and the number of registrations for domains used in phishing activity, and determined the volume of spam was not impacted by the implementation of Europe’s new privacy laws.
There were many concerns that after the European Union’s General Data Protection Regulation (GDPR) went into effect on May 25, 2018, there would be an uptick in spam. While it has only been three months since the GDPR went into effect, based on our research, not only has there not been an increase in spam, but the volume of spam and new registrations in spam-heavy generic top-level domains (gTLDs) has been on the decline.
To understand the effect of GDPR, the relevant question is: is GDPR enabling damage because it makes detection, blocking, and mitigation of spam harder?
Note that the CircleID article only addresses the domain registration question, and does not address the question of spam volume directly.
I would normally download a paper like this and post a synopsis of it as a research post later on, but the synopsis provided by Monday Note is good enough just to read directly.
Acoustic side channels are being discovered all the time; this new one uses the “whine” from electronic components in a monitor, picked up by a nearby microphone, to determine what someone is looking at. While this might not seem like a big deal at first, consider this: anyone on a web conference could use this technique to determine what is on your screen.
Daniel Genkin of the University of Michigan, Mihir Pattani of the University of Pennsylvania, Roei Schuster of Cornell Tech and Tel Aviv University, and Eran Tromer of Tel Aviv University and Columbia University investigated a potential new avenue of remote surveillance that they have dubbed “Synesthesia”: a side-channel attack that can reveal the contents of a remote screen, providing access to potentially sensitive information based solely on “content-dependent acoustic leakage from LCD screens.”