Speaker 0 00:00:00 Join us as we gather around the hedge, where we dig into technology, business, and culture with the finest minds in computer networking. Speaker 2 00:00:18 Where does self-healing come into that? Because I think that's really the goal for me a lot of the time: to get to the self-healing point. And again, I'll bring up something I've brought up before, which is that you're actually dealing with a system of systems, not a single system. Sometimes when we look at automation on the industrial side, we go, oh, they've done so much more automation than we have. But let me explain something: it's a lot easier to automate a robot arm to make certain movements than it is to automate a fleet of robots that have to work together to accomplish a goal, right? And where we are in the networking world is that we're not just automating robot arms individually; the individual device is not by itself. There are routing protocols that lie underneath this and on top of this, and things that interact. So it's really a system of systems. It's a lot more complex than what we're thinking of a lot of the time. Speaker 3 00:01:19 We can't disagree with you there, but I could make a, I don't want to call it a counter-argument, but maybe add the nuance: isn't everything a system of systems at some level? Speaker 2 00:01:32 To some extent, but, Speaker 3 00:01:34 Yeah, there are atomic things we're solving for in all sorts of contexts, and we need to bring those atoms together to form molecules, et cetera. Here's an example that has been on my mind a lot lately. Think about just IP network automation and two really close adjacencies; you can't see on the Zoom recording that I'm holding my hands really close together. There's the optical layer, which obviously has significant interactions with the IP layer, and there are network firewalls, a security function that we don't normally think of in the context of network automation, at least many of us don't. I think those are three tech domains that are close enough together and have enough boundary sharing and boundary crossing that, boy, if I could even just coordinate those three tech domains, that would be progress. Speaker 2 00:02:39 Okay. But what I'm thinking, more specifically, is that I want to automate my configuration of IS-IS and automate my configuration of BGP. And the problem is those can't be separated; they work together, they interact with one another, right? And that makes the job much more complex. Speaker 4 00:03:00 It's almost like orchestration versus automation. It's two domains of automation that need to be orchestrated. Speaker 2 00:03:08 Yes, exactly. And I think that's something we often miss when we look at this stuff. When you're talking about firewalls, the same thing kind of applies, right? I've got to make sure the packet gets through; that makes the network run. But if me turning something off can make BGP stop working, well, now there's a problem, right? And so the unintended consequences and the adjacent systems are much more complex than in many other cases, and we don't have the control over them that other people do. So it does make it harder in some ways.
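[To make the IS-IS/BGP point concrete: a minimal orchestration sketch in Python. Every function here is a hypothetical placeholder, not any real automation library; the point is only that per-protocol automation needs an ordering and a validation gate between domains, because BGP depends on the IGP underneath it.]

```python
# Hypothetical sketch: two automation domains (IS-IS and BGP) that must be
# sequenced and validated together. None of these functions is a real API.

def configure_isis(device):
    print(f"{device}: pushing IS-IS config")        # placeholder config push

def verify_isis_adjacencies(device):
    print(f"{device}: checking IS-IS adjacencies")  # placeholder state check
    return True

def configure_bgp(device):
    print(f"{device}: pushing BGP config")

def orchestrate(device):
    # BGP next-hop reachability depends on the IGP, so IS-IS must converge
    # before BGP is touched. This ordering is the "orchestration" layer
    # sitting on top of the per-protocol automation.
    configure_isis(device)
    if not verify_isis_adjacencies(device):
        raise RuntimeError(f"{device}: IS-IS not converged; aborting BGP step")
    configure_bgp(device)

orchestrate("edge-router-1")
```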
Speaker 3 00:03:45 And so, to your point, Russ, the cognitive load imposed by the complexity of these different subsystems needing to be orchestrated, I think this is going to be a really interesting area for research in AI, right? How AI can help provide the brain boost, to use highly technical terms, to make those more tractable problems. Speaker 2 00:04:11 Yeah. Speaker 4 00:04:13 And I think part of the AI initiative there is that AI can automate the examination of way more data than you can intellectually handle, so you don't wind up with brain overload. Speaker 3 00:04:28 Right. Speaker 2 00:04:29 Yes. And I think that's really important, actually. Speaker 4 00:04:35 So I think that's a fundamental requirement for what Russ is asking for here. Speaker 2 00:04:41 So the next thing is lack of commitment from the top of the organization down. And my impression is that people seem to think that when I walk into my manager's office and he or she says, yeah, go automate this, I'm done; I have my commitment. In my experience, that is absolutely untrue. When you know you have commitment is when the fires are burning and the manager, or the VP, or whoever, still says: no, do it the right way. The fires are going to burn another hour, but you need to do it the right way. We need to set the standard and do it the right way. That's when the real commitment hits. I think we're very, I don't know what the right word is, soft on what we mean by commitment in that space, right? Speaker 5 00:05:42 And it's pretty easy for a leader to say yes to an engineer who says, hey, I want to be able to do my job faster and with fewer errors, and here are all the benefits of automation. Who wouldn't say yes to that? Nobody. But, like you said, Russ, that's not the same thing as commitment. Those are individual technical behaviors they're saying yes to, and they might not even know that's what they're saying yes to. It's on the person who's enthusiastic about automation to secure commitment. And, as I've argued before, it's not just the grassroots people at the bottom of the org chart who are going to drive this. And it's not just the top either; I've seen things fail when the top is committed and nobody at the bottom is. I think there has to be somebody in the middle who's willing to offer direction. But yeah, I think a lot of leaders do sign off on automation, and commitment isn't necessarily part of it. Speaker 3 00:06:41 One of the threads I think is really important here is putting the right focus on the business case for automation. That's something most of us in our circles don't enjoy, because it's not, you know, hand-jamming CLI information. But there are quantifiable things that we can and should call out, so the right support is given up and down the management stack. There are cases where I can say, look, if I take an architecture, a bigger plan, and I do this with my network, I will be able to achieve cost savings.
I will be able to increase resiliency, and it might even make the lives of most of my network operators a little better. And that's just a start. It's something I want to pursue through the Network Automation Forum. We didn't get all the right items lined up for it to be a topic of discussion at the November event, but it's something I definitely want NAF to put some focus on. Speaker 4 00:07:59 Yeah. I have a couple of specific examples of both successes and failures in this area, commitment from the top down to the bottom. In one case, an organization needed to roll out a big QoS update across 400 devices, and they decided they were going to automate the entire process. They went out and acquired a tool. They spent the money, they acquired the tool, they did the training, they started implementation, and in about four months they actually had something working and operational that they could use. They had buy-in from management to buy the stuff, train the people, make it happen, and continue using that system. Very successful project. In another case, an organization put together a massive deployment of automation, spent millions of dollars building this automation system, but only about 10% of the people at the lower levels of implementation actually started using it. The top of the organization didn't support the continued use of the project they had built, so it wound up falling into disuse, and basically it was a waste of money. Speaker 4 00:09:27 So you have to have the support at the top saying: are you using the system we just put in place? They have to stay on it. It's not just support for hiring the right people and getting the right tools in place; it's, are you actually using this in our day-to-day operations? Speaker 5 00:09:47 And I think it's also really important for people to understand that you will be tested once you start building these systems. This whole idea is going to be tested. I think sometimes we think, yeah, there will be problems in implementation, for sure; there will be bugs. And sometimes we think that's the end of our troubles. No, the bugs are just the beginning of your troubles. The real troubles are, like what we were talking about, when everything's on fire and you have to decide how much you really want to do this. That's going to happen for every meaningful automation deployment, I think. Speaker 3 00:10:25 Yeah. To take a really simple example: when people could create webpages for the first time and everybody wanted to put up their own webpage, well, you can stand up a page, but information changes over time. You need to have a function that will update the information you're advertising via your webpage. There's an analogy here, right? It's commitment over a lifecycle. It's not: we do it once and it's good. And there are all sorts of workflow and process implications that touch multiple organizations, which all go back to culture change and doing things according to your architecture, right?
It's not just Python scripts, doing it all once, and being done. Speaker 2 00:11:19 Yeah. And I think that walks into the next thing, which is that part of the commitment is committing to a set of tools. Because, as I've said many times before, here in network engineering there's a million ways of doing things. How do you decide? Like Terry has written down here: Nornir, Ansible, something else? Which libraries do you use? How do you set all that up? Part of that, I think, comes back to architecture. Part of it comes back to someone at the upper level saying: if we're going to make this work, we've got to bite off whatever we have to bite off and just go do this. Speaker 5 00:12:02 I think architecture partially answers that, but not necessarily. In fact, you could say that if it's a good architecture, it will explicitly not answer which tool you should use. You still have choices: do you want declarative or imperative behavior? If you want imperative, you go do this; if you want declarative, well, you've got a couple of choices over here. Now, I'm not a fan of extremely abstract architectures that are full of fluffy stuff that never touches the ground; I don't think that's practical. But I think a good architecture would allow choices of tools to be made without necessarily prescribing them. I don't know, maybe I'm wrong about that. What do you guys think? Speaker 2 00:12:42 Well, no, I think you're right, to a degree. It shouldn't talk about tools, but I think it sets the limits. Speaker 5 00:12:49 Yeah, that's a good way of putting it. Speaker 3 00:12:52 Yes. And so, which is the cart and which is the horse? Should architecture come first and then the tool choices, or do the tool choices influence the architecture? I would say yes, because the tools you have available at the time you think through an architecture are going to influence that architecture. But there are those once-every-ten-years changes in tooling that might enable new architectures, right? Like the relative availability of lots of compute at low cost enables things we couldn't do ten years ago, or even twenty years ago. That could inform architecture in a new way and enable new architectures. I'm thinking really abstractly here, but there's a membrane, and the pressure can go either way, I think, between architecture and tooling. Speaker 2 00:13:50 Yeah. Speaker 4 00:13:51 And it's probably something that in implementation goes through cycles. So, based on our understanding today, this is the kind of operation, to get back to what Tom is saying: do you want declarative or imperative? Okay, so you make a decision that's going to influence which tools you get. And then maybe something else comes along and you decide, gee, that sounds like a really interesting addition to the architecture, or a replacement for parts of the architecture, and that drives the acquisition of a different set of tools. Speaker 5 00:14:28 Yeah. And hopefully that does happen, because it shows that the organization is learning.
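[To ground Tom's declarative-versus-imperative distinction from a moment ago: a toy Python sketch, not any real tool's API. The imperative style specifies each step; the declarative style specifies the desired end state and lets a reconciliation function compute the steps.]

```python
# Toy device state; a real tool would talk to actual gear.
device = {"vlans": [10, 20]}

# Imperative: the operator spells out each change as an explicit step.
def add_vlan(dev, vlan):
    dev["vlans"].append(vlan)

# Declarative: the operator states the desired end state; reconciliation
# computes whatever adds and removes are needed to get there.
def reconcile_vlans(dev, desired):
    current = set(dev["vlans"])
    for vlan in sorted(desired - current):
        print(f"adding vlan {vlan}")
    for vlan in sorted(current - desired):
        print(f"removing vlan {vlan}")
    dev["vlans"] = sorted(desired)

add_vlan(device, 30)                   # imperative: "do this step"
reconcile_vlans(device, {10, 30, 40})  # declarative: "make it look like this"
print(device)                          # {'vlans': [10, 30, 40]}
```

A good architecture, in the sense being discussed here, would constrain where state like the desired set lives and how it is validated, without mandating which of the two styles, or which tool, implements it.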
You want to be learning from your systems as you're building them, and you want to be making decisions differently in five years, or two years, than you are today. Speaker 4 00:14:42 Yeah, exactly. And that goes back to commitment throughout the organization: okay, it's time to switch. Sometimes you just have to throw it away and start over. Speaker 2 00:14:52 And a proper architecture will have places in it where pieces can be pulled out and replaced. If it doesn't, it's not a proper architecture. Part of it is modularization, and we don't do very well with modularization in our world either. We play games with it. Speaker 5 00:15:12 I think we also often confuse high-level designs and low-level designs, and we get to thinking that one is the other. And I think that hurts what you're saying, Russ. Speaker 2 00:15:21 Yeah, it does. So I want to skip down a few, because there was an argument here that you made, Terry, about good sandbox functionality. And I find this very interesting because, again, I'm laughing right now: I'm trying to build up some stuff I need for work, and, wow, it's not easy to take something as it exists today and have a lab where you can modify it. You don't necessarily want to replicate the whole thing, but you don't know which pieces need to be replicated to test the behavior you're trying to change. And that's a really hard thing right now. I think this impacts automation a lot, because another fear factor is: I'm going to put automation in, I'm going to make a change, and then bam, the whole world falls apart. And I'm not happy about that, and I have no way of testing it. Speaker 4 00:16:20 Yeah. Having a good sandbox is very important. And I've done some investigation into what are called digital twins, but most of these digital twins, specific to networking anyway, are making a copy of the current state of the network and analyzing that, and that's their digital twin. No, that's not what I meant by a digital twin. I meant a real digital twin, with data flows and Speaker 5 00:16:51 Real code running. Speaker 4 00:16:53 Where I can do what-if analysis. I can take a link down; I can say, well, this link just had a backhoe go through it, how did my network react to that? I'm going to deploy this automation script on my network: did it fall over dead, or did it succeed in doing what I wanted it to? Speaker 2 00:17:10 But I think that's a hard ask. That's the problem. Speaker 4 00:17:15 Yes, it is. Speaker 2 00:17:16 And I'm not even sure. Speaker 3 00:17:17 So, you know, this is an early lesson learned in my career. I came out of grad school with an applied math degree and a focus on modeling and simulation. And as I got into networking, I just scratched my head: why don't we do more modeling and simulation? Instead of always trying to get gear in the lab and testing things on small pieces of the network, case by case, or architectural component by architectural component.
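[The kind of what-if analysis being described here can be sketched even at the pure topology level. A minimal example with an invented topology, using networkx as the graph library; it deliberately ignores everything below topology, which is exactly the limitation raised next.]

```python
# What-if analysis on a toy topology: fail each link in turn and check
# whether any branch loses reachability to the data center.
import networkx as nx

topo = nx.Graph()
topo.add_edges_from([
    ("branch-1", "core-1"), ("branch-1", "core-2"),  # dual-homed branch
    ("branch-2", "core-1"),                          # single-homed branch
    ("core-1", "core-2"), ("core-1", "dc"), ("core-2", "dc"),
])

for failed_link in list(topo.edges):
    trial = topo.copy()
    trial.remove_edge(*failed_link)                  # the backhoe scenario
    for branch in ("branch-1", "branch-2"):
        if not nx.has_path(trial, branch, "dc"):
            print(f"link {failed_link} down isolates {branch}")
```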
And I think one of the blockers there is that hardware forwarding behavior, especially when it breaks, is something that makes network engineers very nervous, and they just have an inherent distrust of anything other than: I'm running the box from vendor A and the box from vendor B and the box from vendor C, with the code I've decided to use, the very specific build. They want to see it in the lab, and labs are small because they cost money. So I can only do pieces at a time, and very rarely do I actually model my network in a physical lab, because of those limitations. Does that make sense? Speaker 4 00:18:44 Yeah, I'll agree with that. And there's another nuance: driving it with network traffic loads that mirror the real world. Collecting that data and doing a real model like that is very, very difficult. It's very time-consuming, and most organizations are not willing to go try to tackle that. Speaker 5 00:19:10 I think part of the difficulty is that this is a large distributed system, and if you end up with a digital twin that's not really a twin, then you draw conclusions that are false, and the first time it leads you astray, you'll never trust it again. And I think some of this comes back to the software architecture of the network operating system itself. A NOS can be built in such a way that the abstractions are in the right places: in this virtual router you have a virtual ASIC, and in this physical router you have a physical ASIC, and yes, they're different, but the abstractions can be built such that the rest of the code north of them behaves the same way. It is possible, but not all network operating systems have that lineage. Many of them have many years of investment behind them, and totally re-architecting them to make them properly abstracted is just never going to happen. And you probably can't choose the network vendor you're going to buy from based on how their virtual router stacks up. But I think that's part of the problem too: the architecture of the NOS itself. Speaker 4 00:20:16 Totally agree. Speaker 3 00:20:18 Russ, are we, as former TAC engineers, going to respond in unison to Tom on that? Speaker 4 00:20:25 I don't know. Go ahead. Speaker 3 00:20:27 I'm going to say I mostly agree. Here's the problem: you often find hardware forwarding anomalies and forwarding behavior in live networks that you wouldn't know how to abstract. Twenty-plus years ago, I found a bug where a T1 line running at line rate kept an OC-3 from operating at line rate because of some very specific ASIC behavior. Finding a way to abstract that is hard, because you don't want to abstract bugs you might run into in the future, right? You see what I'm saying, Tom? Speaker 5 00:21:18 Yeah, absolutely, there are going to be hardware things that you could never do virtually. But there is a ton of control plane code that can be virtualized, no problem. And I think, because it's a hard problem to solve, we find a lot of network vendors have just said: you want a VM? Here's a VM.
It's not anything like what's running on the physical gear, but it's okay, because the real thing is too hard. But if you could examine the control plane; yes, the data plane will always be there, and it's always physical, but if there's a bug in your BGP implementation, it does not matter what the ASIC is doing, for Speaker 2 00:21:52 The most part. Well, actually, yes and no. Speaker 3 00:21:54 Agreed. Speaker 2 00:21:54 And that's probably because it does run into timing problems. Sometimes you run into race conditions that you face in the real world, around memory usage and interaction with the kernel, that you don't get to in a lab regardless. Speaker 5 00:22:11 Yeah. I'm not saying that hardware bugs don't exist. Speaker 2 00:22:17 Well, we're going to treat you like you said that. Speaker 5 00:22:19 There are software bugs that can be found. But again, none of this matters, because how many vendors actually have a NOS that would even make it practical? Not that many. Speaker 4 00:22:29 Yeah. You have to watch out for perfect being the enemy of good, or good enough, right? Speaker 3 00:22:34 Yeah. Speaker 2 00:22:36 So, one last thing I wanted to talk about, because we're almost at the end of your document here, Terry, and I've been skipping a few things here and there: OpenFlow. Speaker 4 00:22:47 Oh, darn. I was hoping you'd go to snowflake networks. Speaker 2 00:22:51 Oh, snowflakes. Well, we can talk about snowflakes too. So we'll talk about two things: let's cover OpenFlow real fast, and then we'll talk about snowflakes. I see what you're saying, that OpenFlow was tailored to switching; you can do layer 3 switching with it, but I think the heritage was layer 2. And honestly, if I'm to say why OpenFlow failed, it's not because it was layer 2 or layer 3; it's because it was based on a reactive control plane, and they never figured that out in time. We reinvent reactive control planes all the time: Token Ring explorers, the original versions of LISP, OpenFlow switching. We reinvent these things constantly, and we still have some in our networks; bridge learning is still a reactive control plane. But there's a scope and scale beyond which reactive control planes just don't work. You've got to have a proactive control plane, and that's all there is to it. So I think the main reason OpenFlow failed is that they didn't realize that until it was too late; its reputation was already nailed down before they realized, oh no, we need to be able to do proactive control planes to be able to scale. And by then the market had already decided that OpenFlow was not going to work, and it was too late. Speaker 5 00:24:18 You could argue the interface could be used to build a proactive control plane; in fact, it can be. It's just not how the software was built at the time. Speaker 2 00:24:25 Exactly right. And eventually, like I said, they came to that conclusion, but it was too late.
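[The reactive-versus-proactive distinction is easy to show in miniature. A toy Python sketch, not the actual OpenFlow protocol or any controller's API:]

```python
# Controller's computed forwarding state.
ROUTES = {"10.1.0.0/16": "port1", "10.2.0.0/16": "port2"}

# Reactive: the switch starts empty and punts unknown destinations to the
# controller, which installs an entry on demand. Every first packet pays a
# controller round trip, which is what breaks at scale.
flow_table = {}

def reactive_forward(prefix):
    if prefix not in flow_table:                # "table miss"
        flow_table[prefix] = ROUTES[prefix]     # packet-in, then flow-mod
    return flow_table[prefix]

# Proactive: the controller pushes the full table before traffic arrives,
# so the data plane never has to ask.
proactive_table = dict(ROUTES)

print(reactive_forward("10.1.0.0/16"))   # triggers an install, then forwards
print(proactive_table["10.2.0.0/16"])    # already installed
```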
Speaker 4 00:24:31 My point in what I wrote up between us was that OpenFlow didn't have the opportunity of a long enough gestation period to work out a bunch of these nits, and the market made a decision, and it just died right there. Speaker 2 00:24:49 That's exactly right. Speaker 3 00:24:51 Let me take a moment to remember OpenFlow fondly and pay respect to some of the contributions it did make, since I've gone on record as not being really excited about OpenFlow, ever. Just the idea of being able to program the FIB, opening that up, and giving people outside the router and switch vendor space exposure to the idea that, oh, there are other ways I could enter FIB entries: that's a good thing, right? So even though OpenFlow obviously hasn't gone on to receive wide deployment, the ideas out there are important. And Terry, you've mentioned too that you've seen some of the important lessons learned make it back into some ASIC designs. We can see that with lots of other technological developments too: even though they didn't enjoy commercial success, they introduced important ideas into the ecosystem. Speaker 2 00:26:00 I actually had high hopes for I2RS for a long time, but it fell apart as well, on many of the same shores: people set up different expectations than what I thought was the right thing, and it just fell apart because of that. It became a sort of digital twinning and automation system, when it was originally designed to be just what it sounds like: an interface to the RIB. So you could create new routing protocols, or write a service that installed entries directly into the RIB, and that gets around the OpenFlow abstraction problems of different ASICs, and the P4 abstraction problems, and everything else. I don't really care what's southbound under the RIB; I'm installing into the RIB, thank you, I'm done. So it's another one of those things where we kind of didn't let it gestate. All right, snowflakes. Terry, have your say. Speaker 4 00:26:59 Part of the problem, and actually ChatGPT came up with this as well: legacy infrastructure. Networks have infrastructure that lacks the necessary programmability and automation capabilities; that's what ChatGPT had to say. And it's basically that we have these networks that are unique, and that makes automation really hard, because you have to handle all the corner cases it might run into. A very simple example: you have 500 branches, and the uplink is never consistent across them. Speaker 2 00:27:41 Yeah. Speaker 4 00:27:43 We have a standard branch design, and you pull the covers back and start looking closely at it, and you find out there are four or five different variations of the standard branch design, right? Because you happened to have router X on the shelf, and you could deploy that branch quickly instead of having to wait for an order to get turned around. Speaker 2 00:28:05 Yeah. And I think, you know, when Ethan talks about snowflakes and worries about them, he thinks about it more like: all branches in the world should be designed the same. Like there should be five branch designs in the world.
What you're talking about, Terry, is more within a single organization. Speaker 4 00:28:25 Right. Speaker 2 00:28:26 And I think that's perfectly valid. I'm not sure how realistic it is that we'll ever get to a single branch design across the entire world, but within a single company, you should have a lifecycle. Part of your architecture should be a lifecycle, and it should say: thou shalt not have more than three branch designs deployed at a time. Speaker 4 00:28:50 That's it, exactly. I was just going to make the point that you can have multiple branch designs, because they'll be at different points in the maturity model for how you're deploying those branches. And you may have a different set of branch designs depending on whether it's a five-person branch or a fifty-person branch. Speaker 2 00:29:08 Right, correct. But you should have a lifecycle. You should say: I'm designing a new branch design, or a new data center fabric, or a new campus, whatever it is. This is a brand-new design. Great. I'm going to make the commitment right now that when I deploy this, I'm going to have a process and a plan, within X months (not years; months; weeks is better, but let's live with months) to take one out. So if I have four generations today and I design a new one, I'm taking out the first generation Speaker 4 00:29:47 Exactly. Speaker 2 00:29:47 within four months. Pre-commit to that. That's just the way things work. Speaker 3 00:29:53 There's a first principle here, right? I should only tolerate complexity when it provides value. Speaker 2 00:30:03 That's right. Speaker 3 00:30:05 And if increased complexity doesn't provide value, that means it's only adding cost. So take Occam's razor and cut out everything you can to make it as simple as you can. That's engineering elegance. It's not making it complex because it's really cool. Speaker 2 00:30:25 Yeah. Speaker 4 00:30:26 Exactly. I once worked on a financial network deployment. We brought in a bunch of people going out and deploying branches. We had stock configurations; we had everything designed: here are the building blocks, here are the configurations that go in them. And we went back and started taking a look at the configs, and it's like, where did this config come from? And the CCIE who worked on that particular branch said, but the hardware does that, and had completely changed the config so it was no longer a standard config, just because the hardware could do some function. It was pretty funny. Speaker 5 00:31:05 Yeah. Speaker 4 00:31:06 At the time. Speaker 5 00:31:08 When I think snowflake, I think more like what Russ was saying, at a higher level, at an industry level. I wouldn't describe this as snowflake; I would describe it as internal inconsistency and lack of discipline in deployment. And I totally agree: you can't automate that. I mean, you can; if you automate it, what you end up doing is taking the complexity and pushing it into your data model. Your data model becomes just totally trashed with every exception.
And you can still do it: you can still write your templates, you can still push config. But look at the data model, and it becomes impossible to use for anything other than rendering configs. And that's really hard. Speaker 2 00:31:47 Yeah. Speaker 4 00:31:48 So there are really two levels of snowflake here. There's the higher, abstract snowflake you're talking about, which covers an entire organization and how they have architected their network, and then there's the smaller snowflake of this branch being different from that branch for no obvious reason. Speaker 2 00:32:04 There's actually a third level, which is industry-wide. And I'm not sure we'll ever get to anything less than snowflake at the industry level. Speaker 5 00:32:13 Nor should we, in my opinion. Speaker 3 00:32:16 Agreed. But this could be justification for Russ's comment on modularization, right? So let's say you need nine months or a year to get down to two or three standard branch designs. You could design things such that every branch has an uplink, and uplink comes in different flavors, right? You could separate that out from the branch implementation and deprecate a class of uplink when you've finally gotten rid of it in the network. I know that's abstract for this example. Speaker 2 00:32:56 No, no. Even more so: if you add a fourth type of uplink, commit yourself to getting rid of one of the other ones within a certain period of time, right? I mean, just say: I'm only going to have three uplink styles for my branches. That's it. That's what I'm having. And if somebody says, well, that takes this branch offline, then we need to reconsider adding this other one. We need to think in terms of: how many kinds of uplink do I want in my network? I don't want a different uplink for every region when I have a hundred regions. You're just not gaining anything with that kind of complexity. Oh, but in this area I can get Metro Ethernet, and in that area I can only get DSL, well, not DSL anymore, since it's going away, but I can only get GPON from the local cable provider, and in the other area I can get GPON from the local provider, but this new Metro Ethernet offering is so much better; I should go deploy that in that area. Well, wait a minute: you're gaining bandwidth for one particular location against the complexity of now supporting two different kinds of network. Speaker 4 00:34:11 And what happens there, Russ, is that a lot of times that's driven by financials, because that Metro Ethernet costs less than the uplink type it's supplanting, right? And so implementation costs drive the decision, not total cost of ownership. Speaker 2 00:34:31 Yep, that's right. And that's where having an architecture comes in, and again, whether or not you have commitment from upper-level management. If they commit to paying a little more to make the architecture work, then you've got a real commitment. But otherwise you're just playing games with what the word commitment means.
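[What "pushing the complexity into your data model" looks like in practice, as a sketch: a config template (Jinja2 here) with invented data. Each non-standard site grows another exception flag that the template, and everything else that reads the model, has to honor.]

```python
from jinja2 import Template

branches = [
    {"name": "branch-1", "uplink": "metro-e"},
    {"name": "branch-2", "uplink": "metro-e"},
    # The snowflake: one more exception flag, and the data model starts
    # becoming useful only for rendering configs.
    {"name": "branch-3", "uplink": "gpon", "legacy_qos": True},
]

template = Template(
    "hostname {{ name }}\n"
    "interface uplink\n"
    " service-type {{ uplink }}\n"
    "{% if legacy_qos %} service-policy legacy-qos\n{% endif %}"
)

for site in branches:
    print(template.render(**site))
```

Multiply that flag by a hundred regions and four uplink types, and the model carries the cost being described here: the automation still runs, but the data becomes too irregular to reason about.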
Speaker 5 00:34:53 The thing I think is interesting about culture, especially on the finance side of automation, is that when the organization does commit to this, you have all these costs with tension between them. You have the cost of implementation. As soon as software becomes a significant part of your automation strategy, it's going to cost something to push the complexity of that fourth uplink type into the software. It's going to go somewhere. So yes, it's a valid business decision to say we're going to take the cheaper uplink over here, but it's not free. And I think we have to educate the business about this: yes, you're saving money there, but you're going to take all that money and put it into the complexity of the software, probably multiples of it, if you look at operational impact and reliability and things like that. Speaker 2 00:35:44 And troubleshooting, and mean time to repair, and all that other stuff that's very difficult to measure. Speaker 4 00:35:51 Exactly. And you touched on the key one there that a lot of people overlook, Russ, and that's the troubleshooting part of it: automating testing so that you can quickly and easily determine what went wrong. Speaker 2 00:36:03 Yeah. Again, something we don't often think about. So I think we've covered your list, Terry, unless you want to go back and cover anything else. Speaker 4 00:36:13 I'm fine with what we've covered. Speaker 3 00:36:17 Good list. Speaker 2 00:36:18 Yeah, good list. Thank you. So we should wrap up; I guess we've been at this for an hour. Wow. Speaker 5 00:36:27 Yeah. Speaker 2 00:36:28 Great. Speaker 4 00:36:31 We covered a lot of ground. Speaker 2 00:36:33 We did. So Terry, you're retired, but if people want to get in touch with you; and do you still write? I think you still write for various places, right? Speaker 4 00:36:42 Not anymore. Speaker 2 00:36:43 Okay. Speaker 5 00:36:44 He's living the good life. Speaker 4 00:36:46 That's right. Like I said... Speaker 3 00:36:48 Taking network automation classes for fun, right? Speaker 2 00:36:51 Right. Yes. We may have to have a talk about what the good life is there, Terry. Speaker 4 00:36:58 Sure, we can do that. So I am on LinkedIn, and that's the only social thing that I do. I don't do, what's it called now? X? Speaker 2 00:37:08 Do you? Speaker 4 00:37:08 X? Speaker 3 00:37:10 I'm going with Twin X. Speaker 3 00:37:12 I'm going to start a movement right here. Speaker 2 00:37:16 Okay. So LinkedIn, if people want to get in touch with you, right, Terry? Yes. All right. And Scott, I know you're on LinkedIn. Where else? Speaker 3 00:37:25 LinkedIn is really the best place to get me. I'm also Rob on X. Those are the places I pay attention to. Speaker 2 00:37:37 And Tom? Speaker 5 00:37:38 I'm with Terry: if you want to find me, LinkedIn is your best bet. Speaker 2 00:37:41 All right, awesome. I'm Russ White; you can find me here at the Hedge, at rule11.tech, and on LinkedIn, and I do log into the service formerly known as Twitter every now and again, because I've discovered some things I need to do there.
But other than that, you can always email me; I'm pretty easy to find. Well, we know that you live in a busy world: you're a network person, and therefore you have more that you can do than you have time to do it in. So we appreciate you taking the time to listen to us here at the Hedge and talk about all things networking. Thank you very much for listening to this episode, and we will catch you next time.