
Responding to Readers: Automated Design?

Deepak responded to my video on network commoditization with a question:

What are your thoughts on how network design itself can be automated and validated? Also, with intent-based networking, at some stage the network should re-look into itself and adjust to meet design goals or best practices, or alternatively suggest the design itself in a greenfield situation, for example. Apstra seems to be moving in this direction.

The answer to this question, as always, is—how many balloons fit in a bag? 🙂 I think it depends on what you mean when you use the term design. If we are talking about the overlay, or traffic engineering, or even quality of service, I think we will see a rising trend towards using machine learning in network environments to help solve those problems. I am not convinced machine learning can solve these problems, in the sense of leaving humans out of the loop, but humans could set the parameters up, let the neural network learn the flows, and then let the machine adjust things over time. I tend to think this kind of work will be pretty narrow for a long time to come.
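
To make the kind of loop I have in mind concrete, here is a minimal sketch, assuming a single link whose routing weight is tuned from observed utilization. The class names, the EWMA standing in for a real learned model, and the specific numbers are all my own illustrative assumptions, not anything a vendor actually ships:

# A toy "humans set the envelope, the machine tunes inside it" controller.
# An EWMA of observed utilization stands in for a real learned model;
# this is illustrative, not a production traffic-engineering system.

from dataclasses import dataclass


@dataclass
class Guardrail:
    """Human-set bounds the automation is never allowed to cross."""
    min_weight: float
    max_weight: float


class LinkTuner:
    """Adjusts one link's routing weight based on observed utilization."""

    def __init__(self, guardrail: Guardrail, weight: float, alpha: float = 0.2):
        self.guardrail = guardrail
        self.weight = weight
        self.alpha = alpha          # EWMA smoothing factor
        self.demand_estimate = 0.5  # learned view of offered load, neutral prior

    def observe(self, utilization: float) -> None:
        # The "learning" step: fold the new sample into the running estimate.
        self.demand_estimate = (
            self.alpha * utilization + (1 - self.alpha) * self.demand_estimate
        )

    def adjust(self) -> float:
        # Raise the weight on hot links to push traffic away, lower it on
        # cold ones, but clamp the result to the operator's envelope: the
        # machine proposes, the human-set guardrail disposes.
        proposed = self.weight * (1 + (self.demand_estimate - 0.5))
        self.weight = max(self.guardrail.min_weight,
                          min(self.guardrail.max_weight, proposed))
        return self.weight


if __name__ == "__main__":
    tuner = LinkTuner(Guardrail(min_weight=1.0, max_weight=64.0), weight=10.0)
    for sample in (0.55, 0.70, 0.90, 0.95):  # link running hotter and hotter
        tuner.observe(sample)
        print(f"utilization={sample:.2f} -> weight={tuner.adjust():.2f}")

The clamp is the interesting part: you can swap the learner out for something much smarter, and the blast radius of a bad decision is still bounded by the envelope the human set.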

There are stumbling blocks here that will need to be overcome. For instance, if you introduce a new application into the network, do you need to re-teach the machine learning system? Or can you somehow make incremental adjustments? Or are you willing to let the new application underperform while the neural network adjusts? There are no clear answers to these questions yet, and we are going to need clear answers before we can really start counting on machine learning in this way.

If, on the other hand, you think of design as figuring out what the network topology should look like in the first place, or what kind of bandwidth you might need to build into the physical topology and where, I think machine learning can provide hints, but it is not going to be able to “design” a network in this way. There is too much intent involved here. For instance, in your original question, you noted the network can “look into itself” and “make adjustments” to better “meet the original design goals.” I’m not certain those “original design goals” are ever going to come from machine learning.

If this sounds like a wishy-washy answer, that's because it is, in the end. It is always hard to make predictions of this kind; I am just working from what I know of machine learning today, compared against what I understand of the multi-variable problem of network design, which is then mushed into the almost infinite possibilities of business requirements.

Beware the network without an operator

A lot of people seem to be looking forward to the day we build a network without an operator; to wit—

Containerized solutions and machine learning may soon be more than tangentially related. Containerized solutions will usher in an era of operations that don’t require human intervention. Once humans are taken out of operations, we will be free to apply machine learning techniques to what is left. —The New Stack

I hope not, because machines are more brittle than humans. Totally automated security fails much more often than security that uses a blend of people and algorithms. Machines do well at repetitive tasks; humans do well at catching the things that don't fit into the algorithm's state machine. Taking the person out of the network just means there's no one there to see when the state machine fails.
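
To illustrate the blend I mean, here is a toy sketch. The event names and playbook entries are invented for this example, and a real system would be far richer, but the shape is the point: the machine handles the states it was programmed for, and anything outside the state machine lands in front of a person.

# Toy event handler: known states get automated remediation, anything the
# state machine does not recognize is escalated to a human operator.
# Events and playbook entries are invented for illustration.

PLAYBOOK = {
    "bgp_session_down": "restart the BGP session",
    "interface_flap": "shut/no-shut the interface",
    "high_cpu": "collect diagnostics and rate-limit the control plane",
}


def handle(event: str) -> str:
    action = PLAYBOOK.get(event)
    if action is not None:
        return f"automated: {action}"
    # The state machine has no transition for this input; a person needs
    # to look at it rather than the algorithm guessing.
    return f"escalate to operator: unrecognized event '{event}'"


if __name__ == "__main__":
    for event in ("interface_flap", "optics_degrading_slowly"):
        print(handle(event))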

And it will fail, at some point. I know we like to believe that machines break less often, but I'm pretty certain there's a counterpoint to this: when machines break, the failure is more likely to be catastrophic. I'm not convinced replacing people with algorithms reduces damage so much as moves the potential damage around.

I hope not, because machines separate the decision from the decision maker. Butting your head against reality means making decisions in the face of tradeoffs. Allowing machines to make the decision doesn't really reduce the tradeoffs; it just pushes the decision back to the algorithm designer rather than the operator. Taking the decision out of the hands of a person who sees the actual situation, and handing it to a person who can whiteboard a decision long before the situation occurs, just means you've pre-decided; it doesn't mean you've decided correctly. When a self-driving car faces the trolley problem, what will it do? Of course, a data center isn't a car, but does that mean there will never be moral choices involved in running a data center?

I hope not, because when it does crash, someone still needs to know how to work on it. When machines become so complex that we can build them but not understand them, then maybe it’s time to rethink whether or not building the machine is the right thing to do in the first place.

When should humans be taken out of operations?

Never.