
The Revenge of the Ancillaries

Have you ever tried to make water flow in a specific direction? Maybe you have some particularly muddy spot in your yard, so you dig a small ditch and think, “the water will now flow from here to there, and the muddy spot won’t be so muddy the next time it rains.” Then it rains, and the water goes a completely different direction, or overflows the little channel you’ve dug, making things worse. The most effective way to channel water, of course, is to put it in pipes—but this doesn’t always seem to work, either.

The next time you think about shadow IT in your organization, think of these pipes, and how the entire system of IT must look to a user in your organization. For instance, I have had corporate laptops where you must enter two or three passwords to boot the laptop, provided by departments that require you to use your corporate laptop for everything, with security rules forbidding the use of any personal software on it. I have even had company-issued laptops on which you could not move the icons on the desktop, change the menu items in any piece of software, or modify the software in any way. Why? Because… information security… making the job of the help desk easier (so they can close cases faster)… getting you to focus on your job instead of social media…

One of two things is going to happen in this kind of situation: people are going to find a way around the rules, or they are going to minimize the amount of time they spend working. The pipe is either going to drain or burst.

This is what Sumit Rana has called the revenge of the ancillaries:

In building a given function—say, an order form for a brain MRI—the design choices were more political than technical: administrative staff and doctors had different views about what should be included. The doctors were used to having all the votes. But Epic had arranged meetings to try to adjudicate these differences. Now the staff had a say (and sometimes the doctors didn’t even show), and they added questions that made their jobs easier but other jobs more time-consuming. Questions that doctors had routinely skipped now stopped them short, with “field required” alerts.

This is a form of the tragedy of the commons. It seems fine for you to put requirements on someone else that make your life easier; it only takes them a few more seconds, and the requirement seems quite reasonable. But if no one is looking at the complete system, the system itself becomes too complex to use, and people start saying things like, “I’d really like to do this for you, but the system won’t let me.” Ever heard that one?

Now let’s apply this to networking. Suppose you have some process for connecting servers to the network. This process involves going to in-house security, who impose a long checklist on the connection; then to budgeting, who want to know precisely what the server will be used for, why, and for how long; then to someone in O/S compliance, who wants to know what operating system will be used, and why; and then to DevOps, who want to ensure the deployment of these servers is properly automated, and…

No single requirement is a big deal. None of them really take a lot of time. But combined, the process is so difficult that the user finally just pulls out a credit card and expenses a virtual machine on some public cloud service. Then you end up with production stuff running in a public cloud service with no controls at all.

Underlying some of this is the problem of complexity. If you have ten different monitoring platforms, pushing new hardware and software into place becomes a gauntlet no one wants to run. If, on the other hand, you have one centralized data store, coupled with a myriad of tools to push information into, and retrieve information from, that one data store, you can allow system developers to choose whatever method works best to push and pull information. Marshaling the data becomes the largest issue, and the APIs into and out of the data store become the biggest decision to make, rather than the selection of the suite of applications used to run telemetry.
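As a sketch of this “one data store, many tools” idea, consider a minimal in-memory version. All names here (`TelemetryStore`, `push`, `pull`) are hypothetical, not taken from any particular product; the point is only that producers and consumers agree on one API, not on one tool.

```python
import time
from collections import defaultdict


class TelemetryStore:
    """One centralized telemetry store. Producers push records through a
    single API; consumers pull through another. The tools on either side
    can vary freely, because they only have to agree on this interface."""

    def __init__(self):
        # device name -> list of (timestamp, metric, value) tuples
        self._records = defaultdict(list)

    def push(self, device, metric, value, timestamp=None):
        """Any collector (CLI scraper, gNMI agent, SNMP poller) calls this."""
        self._records[device].append((timestamp or time.time(), metric, value))

    def pull(self, device, metric=None):
        """Any consumer (dashboard, alerting, capacity planning) reads here;
        filter by metric name, or pass None to get everything for a device."""
        return [r for r in self._records[device]
                if metric is None or r[1] == metric]


store = TelemetryStore()
store.push("core-rtr-1", "ifInOctets", 1024, timestamp=100.0)
store.push("core-rtr-1", "cpu", 37, timestamp=101.0)
print(store.pull("core-rtr-1", metric="cpu"))  # [(101.0, 'cpu', 37)]
```

A real deployment would put a time-series database behind this interface, but the design choice is the same: the hard decisions move to the data model and the APIs, while the collectors and consumers remain interchangeable.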

Having an internal cloud model, with clear rules about when a virtual server will be deactivated and archived in some way, perhaps with a manual review process on objection, might be a good idea. One of the nice things about virtualization is that it allows many of the security, usage, and other rules to simply be implemented, without any sort of process. If you want people to build applications that use IP addresses as their primary point of contact, rather than Ethernet addresses, make IP addresses easy to get, and Layer 2 connections harder to get. Channeling works; containment does not.
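Such a lifecycle rule is simple enough to state as code. The sketch below is a hypothetical policy, not any vendor’s API: the lease length, the names (`VirtualServer`, `lifecycle_action`), and the three outcomes are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy: a VM idle past its lease is archived automatically,
# unless the owner has filed an objection, which routes to manual review.
LEASE = timedelta(days=90)


@dataclass
class VirtualServer:
    name: str
    last_used: datetime
    objection_filed: bool = False


def lifecycle_action(vm: VirtualServer, now: datetime) -> str:
    """Decide what the internal cloud should do with this VM right now."""
    if now - vm.last_used <= LEASE:
        return "keep"
    return "manual-review" if vm.objection_filed else "archive"


now = datetime(2024, 6, 1)
stale = VirtualServer("lab-web-1", last_used=datetime(2024, 1, 1))
print(lifecycle_action(stale, now))  # archive
```

The rule enforces itself on a schedule instead of through a checklist, which is exactly the channeling the paragraph above describes: the default path does the right thing, and the exceptional path (an objection) is the only one that involves a human.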

Let me repeat this one more time for emphasis: you can channel users, but you cannot contain them.

Rules need to be truly reasonable, with an eye to the system as a whole, rather than focusing on individual snippets. Documentation must be easy to find, with a clear, well-explained process for working around any rule. Rules need to be examined from time to time to see what percentage of the population is simply ignoring or working around them, why, and how things might be changed for the better.

Ultimately, people cannot be contained in a pipe. Not that you really want to contain them; people in pipes don’t produce or create. It’s not a good place to be.
