
Update:  From Fred, who was the guy referenced in the first paragraph:

Actually it was a white button with a router icon on it and “make cli great again”, I know this because it was me. It was June 2016. Needless to say in my view that did not age well.

When I attended Cisco Live sometime around the election of Donald Trump, there was a fellow walking around with a red hat with white lettering on it:  MAKE CLI GREAT AGAIN.  Ha!  I love Cisco Live.  These are my people.

I remember back when I worked at Juniper, one exec looked at me working on CLI and said, “you know that’s going to be gone soon.  It’ll all be GUI.”  That was 8 years ago…how’s that going?  When I work on CLI (and I still do!), or programming, my wife always says, “how can you stare at that cryptic black screen for hours?”  Hey, I’ve been doing it since I was a kid.

The black screen won’t go away, I’m afraid.  I’ve recently been learning iOS app development for fun (not profit).  It’s surprisingly hard given the number of successful app developers out there.  I may be too used to Python to program in Swift, and my hatred of object-oriented programming doesn’t help me when there is no way to avoid it in Swift.  Anyways, it took me about a week to sort out the different UI frameworks used in iOS.  There are basically three:

  • Storyboards.  Storyboards are a graphical design framework for UI layout.  Using storyboards, you drag and drop UI elements like buttons and textfields onto a miniature iPhone screen.
  • UIKit.  (Technically storyboards use UIKit, but I don’t know what else to call this.)  Most high-end app developers will delete the storyboard in their project and write the UI as code.  They actually type in code to tell iOS what UI elements they want, how to position them, and what to do in the event they are selected.  Positioning is fairly manual and is done relative to other UI elements.
  • SwiftUI.  Apple is pushing towards this model and will eventually deprecate the other two.  SwiftUI is also a UI-as-code model, but it’s declarative instead of imperative.  You tell SwiftUI what you want and roughly how you want to position things, and Swift does it for you.

Did you catch my point?  The GUI-based layout tool is going away in favor of UI-as-code!  The black screen always comes back!

The difference between computer people and non-computer-computer-people (many industry MBAs, analysts, etc.) is that computer people understand that text-based interaction is far more efficient, even if the learning curve is steeper.

Andrew Tanenbaum, author of the classic Computer Networks, typeset his massive work in troff.  Troff is a text-based typesetting tool where you enter input like this:

.ll 3i
.mk a
.ce
Preamble
.sp
We, the people of the United States, in order
to form a more perfect Union...

Why doesn’t he just use Word?  I’ll let Dr. Tanenbaum speak for himself:

All my typesetting is done using troff. I don’t have any need to see what the output will look like. I am quite convinced that troff will follow my instructions dutifully. If I give it the macro to insert a second-level heading, it will do that in the correct font and size, with the correct spacing, adding extra space to align facing pages down to the pixel if need be. Why should I worry about that? WYSIWYG is a step backwards. Human labor is used to do that which the computer can do better.  (Emphasis added.)

I myself am not quite enough of a cyborg to use troff (though I use vi), but I have used LaTeX with far better results than Word.  (Dr. Tanenbaum says “real authors use troff,” however.)

One of my more obscure interests (I have many) is Gregorian Chant.  Chant uses a musical notation which is markedly different from modern music notation, and occasionally I need to typeset it.  I use a tool called Gregorio, where I enter the chant like this:

(cb3) Ad(d)ór(f’)o(h) te(h’) de(h)vó(hi)te,(h.) (,) la(g)tens(f) Dé(e’)i(d)tas,(d.)

The letters in parentheses represent the different musical notes.  I once tried typesetting the chant graphically, and it was far more tedious than the above.  Why not enter what I want and let the typesetting system do the work?

Aside from the mere efficiency, text files can be easily version controlled and diff’d.  Try that with your GUI tool!

It’s very ironic that many of my customers who use controllers like DNAC or vManage are actually accessing the tool through APIs.  They bought a GUI tool, but they prefer the black screen.  The controller in this case becomes a point of aggregation for them, a system which at least does discovery and allows some level of abstraction.
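
For what it’s worth, that API-first usage typically looks something like the Python sketch below: authenticate to the controller’s northbound REST API and pull a device inventory.  This is only an illustration; the hostname, endpoint path, token handling, and response fields are placeholders, not the actual DNAC or vManage API.

import requests

CONTROLLER = "https://controller.example.com"   # placeholder controller address
TOKEN = "my-auth-token"                         # assume we already authenticated

# Hypothetical inventory endpoint; a real controller documents its own paths.
resp = requests.get(
    f"{CONTROLLER}/api/v1/network-devices",
    headers={"X-Auth-Token": TOKEN},
    timeout=30,
)
resp.raise_for_status()

# The response shape is assumed; adjust to whatever the controller actually returns.
for device in resp.json().get("response", []):
    print(device.get("hostname"), device.get("softwareVersion"))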

The non-computer-computer-people look at SwiftUI, network device CLI, troff, Gregorio, APIs, and rend their garments, crying out to heaven, “why, oh why?!”  Some may even remember the days of text-based editing systems on their DOS machines, which they could never learn, and the great joy that WYSIWYG brought them.  It reminds me of a highly incompetent sales guy I worked with at the Gold partner back in the day.  He once saw me configuring a router and said:  “Wow, you still use DOS to configure routers!”

“It’s actually IOS CLI, not DOS.”

“That’s DOS!” he densely replied.  “I remember DOS.  I can’t believe you still use DOS!”

It’s funny that no matter how hard we try to get away from code, we always come back to it.  We’re hearing a lot about “low code” environments these days.  It tells you something when the first three Google hits on “low code” just come back to Gartner reports.  Gee, have we been down this path before?  Visual Basic was invented in 1991.  If low code is so great, why is Apple moving from storyboards to SwiftUI?

In my last post I wrote about the war on expertise.  This is one of the fronts in the war.  The non-computer-computer-people cannot understand the black screen, and are convinced they can eliminate it.  They learned about “innovation” in business school, and read case studies about Windows 95 and the end of DOS.  They read about how companies like Sun Microsystems went belly-up because they were not “disruptive.”  They did not, however, read about all the failed attempts to eliminate the black screen, spanning decades.  I believe it was George Santayana who said, “If you don’t remember computer history, you’re doomed to repeat it.”

A couple of years back I purchased an AI-powered energy monitoring system for my home.  It clips onto the power mains and monitors amperage/wattage.  I can view a graph showing energy usage over time, which is really quite helpful to keep tabs on my electricity consumption at a time when electricity is expensive.

The AI part identifies what devices are drawing power in my house.  Based simply on wattage patterns, so they claim, the app will tell me this device is a light, that device is an air conditioner, and so on.  An electric oven, for example, consumes so much power and switches itself on and off in such a pattern that AI can identify it.  The company has a large database of all of the sorts of products that can be plugged into an outlet, and it uses its database to figure out what you have connected.

So far my AI energy monitor has identified ten different heaters in my house.  That’s really cool, except for the fact that I have exactly one heater.  When the message popped up saying “We’ve identified a new device!  Heater #10!”, I must admit I wasn’t surprised.  It did raise an eyebrow, however, given that it was summer and over 100 degrees (38 C) outside.  At the very least, you’d think the algorithm could correlate location and weather data with its guesses.

Many “futurists” who lurk around Silicon Valley believe that in a few years we’ll live forever by merging our brains with AI.  I’ve noticed that most of these “futurists” have no technological expertise at all.  Usually they’re journalists or marketing experts.  I, on the other hand, deal with technology every day, and it leaves me more than a little skeptical of the “AI” wave that’s been sweeping over the Valley for a few years.

Of course, once the “analysts” identify a trend, all of us vendors need to move on it.  (“SASE was hot last fall, but this season SSE is in!”)  A part of that involves labeling things with the latest buzzword even when they have nothing to do with it.  (Don’t get me started on “controllers”…)  One vendor has a tool that opens a TAC case after detecting a problem.  They call this something like “AI-driven issue resolution.”  Never mind that a human being gets the TAC case and has to troubleshoot it–this is the exact opposite of AI.  We can broaden the term to mean a computer doing anything on its own, in this case calling a human.  Hey, is there a better indicator of intelligence than asking for help?

Dynamic baselines are neat.  I remember finding the threshold-alerting capabilities in NMS tools useless back in the 90’s.  Do I set it at 50% of bandwidth?  60%?  80%?  Dynamic baselining determines the normal traffic (or whatever) level at a given time, and sets a variable threshold based on historical data.  It’s AI, I suppose, but it’s basically just pattern analysis.
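
To make the idea concrete, here’s a minimal sketch of dynamic baselining in Python.  It assumes we already have historical utilization samples bucketed by hour of day; real NMS baselining is far more sophisticated, but the principle is the same: the threshold comes from the data rather than from a guess.

from statistics import mean, stdev

def build_baseline(history):
    # history maps hour-of-day -> list of past utilization samples (percent).
    # The baseline is just a per-hour mean and standard deviation.
    return {hour: (mean(samples), stdev(samples)) for hour, samples in history.items()}

def is_anomalous(baseline, hour, value, n_sigma=3):
    # Flag a reading that sits more than n_sigma deviations above the norm for that hour.
    avg, sd = baseline[hour]
    return value > avg + n_sigma * sd

# 60% utilization is an anomaly at 2 a.m. but perfectly normal at 2 p.m.
history = {2: [5, 7, 6, 8, 5, 6], 14: [55, 60, 58, 62, 57, 59]}
baseline = build_baseline(history)
print(is_anomalous(baseline, 2, 60))   # True
print(is_anomalous(baseline, 14, 60))  # False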

True issue resolution is a remarkably harder problem.  I once sat in a product meeting where we had been asked to determine all of the different scenarios the tool we were developing would be able to troubleshoot.  Then we were to determine the steps the “AI” would take (i.e., what CLI to execute.)  We built slide after slide, racking our brains for all the ways networks fail and how we’d troubleshoot them.

The problem with this approach is that if you think of 100 ways networks fail, when a customer deploys the product it will fail in the 101st way.  Networks are large distributed systems, running multiple protocols, connecting multiple operating systems over different media types, and they have ways of failing, sometimes spectacularly, that nobody ever thinks about.  A human being can think adaptively and dynamically in a way that a computer cannot.  Troubleshooting an outage involves collecting data from multiple sources, and then thinking through the problem until a resolution is found.  How many times, when I was in TAC, did I grab two or three other engineers to sit around a whiteboard and debate what the problem could be?  Using our collective knowledge and experience, bouncing ideas off of one another, we would often come up with creative approaches to the problem at hand and solve it.  I just don’t see AI doing that.  So, maybe it’s a good thing it phones home for help.

I do see a role for AI and its analysis capabilities in providing troubleshooting information on common problems.  Also, data can be a problem for humans to process.  We’re inundated by numbers and often cannot easily find patterns in what we are presented.  AI-type tools can help to aggregate and analyze data from numerous sources in a single place.  So, I’m by no means saying we should be stuck in 1995 for our NMS tools.  But I don’t see AI tools replacing network operations teams any time soon, despite what may be sold.

And I certainly have no plans to live forever by fusing my brain with a computer.  We can leave that to science fiction writers, and their more respectable colleagues, the futurists.

“Progress might have been alright once, but it has gone on too long.”
–  Ogden Nash

The book The Innovator’s Dilemma appears on the desk of a lot of Silicon Valley executives.  Its author, Clayton Christensen, is famous for having coined the term “disruptive innovation.”  The term has always bothered me, and I keep waiting for the word “disruption” to die a quiet death.  I have the disadvantage of having studied Latin quite a bit.  The word “disrupt” comes from the Latin verb rumpere, which means to “break up”, “tear”, “rend”, “break into pieces.”  The word, as does our English derivative, connotes something quite bad.  If you think “disruption” is good, what would you think if I disrupted a presentation you were giving?  What if I disrupted the electrical system of your heart?

Side note:  I’m fascinated with the tendency of modern English to use “bad” words to connote something good.  In the 1980’s the word “bad” actually came to mean its opposite.  “Wow, that dude is really bad!” meant he was good.  Cool people use the word “sick” in this way.  “That’s a sick chopper” does not mean the motorcycle is broken.

The point, then, of disruption is to break up something that already exists, and this is what lies beneath the b-school usage of it.  If you innovate, in a disruptive way, then you are destroying something that came before you–an industry, a way of working, a technology.  We instantly assume this is a good thing, but what if it’s not?  Beneath any industry, way of working, or technology are people, and disruption is disruption of them, personally.

The word “innovate” also has a Latin root.  It comes from the word novus, which means “new”.  In industry in general, but particularly the tech industry, we positively worship the “new”.  We are constantly told we have to always be innovating.  The second a technology is invented and gets established, we need to replace it.  Frame Relay gave way to MPLS, MPLS is giving way to SD-WAN, and now we’re told SD-WAN has to give way…  The life of a technology professional, trying to understand all of this, is like a man trying to walk on quicksand.  How do you progress when you cannot get a firm footing?

We seem to have forgotten that a journey is worthless unless you set out on it with an end in mind.  One cannot simply worship the “new” because it is new–this is self-referential pointlessness.  There has to be a goal, or an end–a purpose beyond simply cooking up new things every couple of years.

Most tech people and b-school people have little philosophical education outside of, perhaps (and unfortunately), Atlas Shrugged.  Thus, some of them, realizing the pointlessness of endless innovation cycles, have cooked up ludicrous ideas about the purpose of it all.  Now we have transhumanists telling us we’ll merge our brains with computers and evolve into some sort of new God-species, without apparently realizing how ridiculous they sound.  COVID-19 should disabuse us of any notion that we’re not actually human beings, constrained by human limitations.

On a practical level, the furious pace of innovation, or at least what is passed off as such, has made the careers of technology people challenging.  Lawyers and accountants can master their profession and then worry only about incremental changes.  New laws are passed every year, but fundamentally the practice of their profession remains the same.  We, however, seem to face radical disruption every couple of years.  Suddenly, our knowledge is out-of-date.  Technologies and techniques we understood well are yesterday’s news, and we have to re-invent ourselves yet again.

The innovation imperative is driven by several factors.  Wall Street constantly pushes public companies to “grow”, thus disparaging companies that simply figure out how to do something and do it well.  Companies are pressured into expanding to new industries, or into expanding their share of existing industries, and hence need to come up with ways to differentiate themselves.  On an individual level, many technologists are enamored of innovation, and constantly seek to invent things for personal satisfaction or for professional gain.

Wall Street seems to have forgotten the natural law of growth.  Name one thing in nature that can grow forever.  Trees, animals, stars…nothing can keep growing indefinitely.  Why should a company be any different?  Will Amazon simply take over every industry and then take over governing the planet?  Then what?

This may seem a strange article coming from a leader of a team in a tech company that is handling bleeding edge technologies.  And indeed it would seem to be a heresy for someone like me to say these things.  But I’m not calling for an end to inventing new products or technologies.  Having banged out CLI for thousands of hours, I can tell you that automating our networks is a good thing.  Overlays do make sense in that they can abstract complexity out of networks.  TrustSec/Scalable Group Tags are quite helpful, and something like this should have been in IP from the beginning.

What I am saying is that innovation needs a purpose other than just…innovation.  Executives need to stop waxing eloquent about “disrupting” this or that, or our future of fusing our brains with an AI Borg.  Wall Street needs to stop promoting growth at all costs.  And engineers need time to absorb and learn new things, so that they can be true professionals and not spend their time chasing ephemera.

Am I optimistic?  Well, it’s not in my nature, I’m afraid.  As I write this we are in the midst of the Coronavirus crisis.  I don’t know what the world will look like a year from now.  Business as usual, with COVID a forgotten memory?  Perhaps.  Great Depression due to economic shutdown?  Perhaps.  Total societal, governmental, and economic collapse, with rioting in the streets?  I hope not, but perhaps.  Whatever happens, I do hope we remember that the word “novel”, as in “novel Coronavirus”, comes from the same Latin root as the word “innovation”.  New isn’t always the best.

I was doing well on the blog for a few months but have lately fallen behind.  With (now) 12 people reporting to me, and three major areas of responsibility (SD-Access, Assurance, and Programmability), it’s not easy to find time to write up a blog post.  I have about five drafts needing work but I cannot seem to find the will to finish them.  Sometimes, however, it just takes a spark to get me going.  That spark came in my inbox from Ivan Pepelnjak.  I like Ivan’s blog posts, which, while often not favorable to Cisco, are nonetheless fair and balanced and raise some very important points.

“Why Is Every SDN Vendor Bashing Networking Engineers?” asks Ivan in the form email I received.  “[T]he vendors know they wouldn’t be able to sell their latest concoctions to people who actually understand how networking works and why some architectures have no chance of ever working in real life,” answers Ivan.  “The only way to sell the warez is to try to convince everyone else how to get rid of the pesky ossified CLI jockeys.”

Now I work for a vendor, and since I deal with the aforementioned products, I guess I am an SDN vendor.  That would seem to qualify me to speak on this subject.  (With, of course, the usual disclaimer that the opinions here are my own and do not represent Cisco officially.)

Selling Concoctions

I must admit, I do want to sell our products.  Everyone at Cisco should want our products to sell.  Just about all of us have a personal financial stake in the matter, whether we have stock grants or ESPP.  We would be insane not to want people to buy our products.  I am, like most of my co-workers, driven by far more than money, however.  We all want to know that our work means something, and that we are coming up with innovative solutions to problems.  Otherwise, why show up in the office every day?

We operate in a highly competitive environment, which means if we are not constantly innovating and coming up with better ways to do things, we will all suffer.  You can complain about the macroeconomic system, and believe me, I’m not a Randian, objectivist believer in unbridled capitalism.  But, at the end of the day, a public company needs to create the perception of future value in the eyes of the stock market, and that’s a motivating factor for all of us.

These things being said, I’ve been in product management for a few years now and I have never heard anyone, ever, talk about trying to put one over on our customers.  I’m not saying that’s what Ivan means here, but it’s an accusation I’ve heard before.  In the first place, our customers are network engineers who are quite smart.  Whenever I present to a customer and am not crystal clear about what I’m talking about and what advantage it brings them, they let me know it.  We’re constantly trying to find ways to do things better and make our customers’ lives easier.  Having spent more years in IT than in product management, I’m very interested in this subject: plenty of things frustrated me in that world, and I want to fix the ones that used to annoy me.  You can argue about whether we’ve come up with the right ideas, but I hope nobody questions our motivations.

CLI Jockeys

Do I bash CLI jockeys in order to sell my products?  I should hope not, given that most of my customers are CLI jockeys, as I am myself!  I have two CCIEs and a JNCIE.  I spent a couple years in routing protocols TAC and many years in IT.  I spent a long time learning my trade and I have a lot of respect for those who have put the time and effort into learning it as well.  It’s not easy.

However, I don’t operate under the delusion that network engineers do a good job of configuring and managing CLI.  When I was at Juniper, I had designed a new NGMVPN system for our WAN.  I handed it off to the implementation team with some sample configs and asked them to come back to me with their plan.  I think we were touching about 20 devices the first go around.  The engineer came back with 20 Word documents.  He took my sample config and copied and pasted it into Word, and then modified the config in a separate Word doc for each CE/PE he was touching.  CLI itself isn’t the problem; the way we manage it is.  This is where programmability and automation tools come in.  At the very least, Ansible templating would have made this easier.  Software-Defined Networking (a very loose term, for what it’s worth) is not about replacing ossified CLI jockeys but about getting them to focus on what they should be doing (network engineering) and avoiding what they should not (pasting stuff in Word docs.)
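
To show what I mean, here’s a rough sketch of the templating idea in Python with Jinja2 (the same engine Ansible uses).  The device names, interfaces, and config snippet are made up, and a real deployment would layer inventory and a push mechanism on top of this, but the point stands: one template plus one set of variables per device, instead of twenty Word documents.

from jinja2 import Template

# A deliberately simplified, vendor-agnostic config template.
TEMPLATE = Template("""\
hostname {{ hostname }}
interface {{ wan_if }}
 description Uplink to {{ peer }}
 ip address {{ wan_ip }} 255.255.255.252
""")

# One dictionary of variables per device.
devices = [
    {"hostname": "pe-east-1", "wan_if": "GigabitEthernet0/0", "wan_ip": "10.0.0.1", "peer": "core-1"},
    {"hostname": "pe-west-1", "wan_if": "GigabitEthernet0/1", "wan_ip": "10.0.0.5", "peer": "core-2"},
]

for dev in devices:
    # In real life you would push the rendered config with Ansible or NETCONF;
    # here we just write one file per device.
    with open(f"{dev['hostname']}.cfg", "w") as f:
        f.write(TEMPLATE.render(**dev))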

SD-Access takes this quite a bit further than Ansible, NETCONF, and other device-level tools.  Rather than saying “I want this device to be a LISP MS/MR” and so forth, you just say “I want this device to be a control plane node” and the system figures out what you need.  Theoretically we could change from LISP to some other protocol and the end-user shouldn’t even notice.  The idea here is somewhat like a fly-by-wire system.  An airplane’s controls used to be directly coupled to the control surfaces via hydraulics.  Now, the pilot is operating what is essentially a joystick, providing control inputs to a computer, which then computes the best way to move the control surfaces given the conditions.  This is then relayed to servo motors in the wings, tail, etc.  The complexity of a fly-by-wire system is much higher than that of an old hydraulic system, but the complexity is hidden from the pilot in order to provide a better experience.  Likewise, with SD-Access, we’ve made the details more complex in order to deliver a better experience (TrustSec, layer 3 routed backbone, etc.) while hiding the complexity from the user.  It’s a different approach, for sure, but the idea is to allow engineers to focus on the right problems, like how to design their network, and not worry so much about configuration.

A New Era?

I’ve written extensively (see, for example, here, and here) about the role for CLI-jockey network engineers in the future.  When airplanes switched from the old dials and gauges to sleek, modern computerized (glass) cockpits, I’m sure some old timers threw up their hands, retired, and got their old Piper Super Cubs out of the hangar to do some “real” flying.  But most adapted, and in the end, saw how the new automation systems helped them do their jobs better.  That’s an era I’m looking forward to.  And as I always, always say, the pilots who fly the new cockpits still need to understand weather systems, engines, navigation, etc.  We still need network engineers who know how networks operate.

Meanwhile, I won’t bash any CLI jockeys and I hope nobody else here does either.

An old networking friend whom I mentored for his CCIE a long time ago wrote me an email:  I’ve been a CCIE for 10 years now, he said, and I’m feeling like a dinosaur.  Everyone wants people who know AWS and automation and they don’t want old-school CLI guys.

It takes me back to a moment in my career that has always stuck with me.  I was in my early twenties at my first job as a full-time network engineer.  I was working at the San Francisco Chronicle, at the time (early 2000’s) a large newspaper with a wide circulation.  The company had a large newsroom, a huge advertising call center, three printing plants, and numerous circulation offices across the Bay Area.  We had IP, IPX, AppleTalk and SNA on the network, typical of the multi-protocol environments of the time.

My colleague Tony and I were up in the MIS area on the second floor of the old Chronicle building on 5th and Mission St. in downtown San Francisco.  The area we were in contained armies of mainframe programmers, looking at the black screens of COBOL code that were the backbone of the newspaper systems in those days.  Most of the programmers were in their fifties, with gray hair and beards.  Tony and I were young, and TCP/IP networking was new to these guys.

I was telling Tony how I always wanted to be technical.  I loved CLI, and I was good at it.  I was working on my first CCIE.  I was at the top of my game, and if any weird problem cropped up on our network I dove in and got it fixed, no matter how hard.  As I explained to Tony, this was all I wanted to do in my career, to be a CLI guy, working with Cisco routers and switches.

Tony gestured at the mainframe programmers, sitting in their cubes typing their COBOL.  “Is this what you want to be when you’re in your fifties,” he said under his breath, “a dinosaur?  Do you just want to be typing obscure code into systems that are probably going to be one step away from being shut down?  How long do you think these guys will have their jobs anyways?”

Well, I haven’t been to the Chronicle in a while but those jobs are almost certainly gone.  Fortunately for the COBOL guys, they’re all retirement age anyways.

We live in a world and an industry that worships the young and the new.  If you’re in your twenties, and totally current on the latest DevOps tools, be warned:  someday you’ll be in your forties and people will think DevOps is for dinosaurs.  The tech industry is under constant pressure to innovate, and innovating usually means getting machines to do things people used to do.  This is why some tech titans are pushing for universal basic income.  They realize that their innovations eliminate jobs at such a rate that people won’t be able to afford to live anymore.  I think it’s a terrible idea, but that’s a subject for another post.  The point is, in this industry, when you think you’ve mastered something and are relevant, be ready:  your obsolescence cometh.

This is an inversion of the natural respect for age and experience we’ve had throughout human history.  I don’t say this as a 40-something feeling some bitterness for the changes to his industry;  in fact, I actually had this thought when I was much younger.  In the West, at least, in the 1960’s there developed a sense that, to paraphrase Hunter Thompson, old is evil.  This was of course born from legitimately bad things perpetrated by previous generations, but it’s interesting to see how the attitude has taken hold in every aspect of our culture.  If you look at medieval guilds, the idea was that the young spent years going through apprentice and journeyman stages before being considered a master of their craft.  This system is still in place in many trades that do not experience innovation at the rate of our industry, and there is a lot to be said for it.  The older members of the trade get security and the younger get experience.

I’ve written a bit about the relevance of the CCIE, and of networking skills in general, in the new age.  Are we becoming the COBOL programmers of the early 2000’s?  Is investing in networking skills about the same as studying mainframe programming back then, a waste of cycles on dying systems?

I’ve made the point many times on this blog that I don’t think that’s (yet) the case.  At the end of the day, we still need to move packets around, and we’re still doing it in much the same way as we did in 1995.  Most of the protocols are the same, and even the newer ones like VXLAN are not that different from the old ones.  Silicon improves, speeds increase, but fundamentally we’re still doing the same thing.  What’s changing is how we manage those systems, and as I say in my presentations, that’s not a bad thing.  Using Notepad to copy/paste across a large number of devices is not a good use of network engineers’ time.  Automation can indeed help us do things better and focus on what matters.

I’ve often used the example of airline pilots.  A modern airplane cockpit looks totally different from a cockpit in the 1980’s or even 1990’s.  The old dials and switches have been replaced by LCD panels and much greater automation.  And yet we still have pilots, and the pilot today still needs to understand engine systems, weather, aerodynamics, and navigation.  What’s changed is how that pilot interacts with the machine.  As a pilot myself, I can tell you how much better a glass cockpit is than the old dials.  I get better information presented in a much more useful way and don’t have to waste my time on unnecessary tasks.  This is how network automation should work.

When I raised this point to some customer execs at a recent briefing, one of them said that the pilots could be eliminated since automation is so good now.  I’m skeptical we will ever reach that level of automation, despite the futurists’ love of making such predictions.  The pilots aren’t there for the 99% of the time when things work as expected, but for the 1% when they don’t, and it will be a long time, if ever, before AI can make judgement calls like a human can.  And in order to make those 1% of calls, the pilots need to be flying the 99% of the time when it’s routine, so they know what to do.

So, are we dinosaurs?  Are we the COBOL programmers of the late 2010’s, ready to be eliminated in the next wave of layoffs?  I don’t think so, but we have to adapt.  We need to learn the glass cockpit.  We need to stay on top of developments, and learn how those developments help us manage the systems we already know so well.  Mainframes and operating systems will come and go, but interconnecting those systems will still be relevant for a long time.

Meanwhile, an SVP at Cisco told me he saw someone with a ballcap at Cisco Live:  “Make CLI Great Again”.  Gotta love that.  Some dinosaurs don’t want to go extinct.

Since I finished my series of articles on the CCIE, I thought I would kick off a new series on my current area of focus:  network programmability.  The past year at Cisco, programmability and automation have been my focus, first on Nexus and now on Catalyst switches.  I did do a two-part post on DCNM, a product which I am no longer covering, but it’s well worth a read if you are interested in learning the value of automation.

One thing I’ve noticed about this topic is that many of the people working on and explaining programmability have a background in software engineering.  I, on the other hand, approach the subject from the perspective of a network engineer.  I did do some programming when I was younger, in Pascal (showing my age here) and C.  I also did a tiny bit of C++ but not enough to really get comfortable with object-oriented programming.  Regardless, I left programming (now known as “coding”) behind for a long time, and the field has advanced in the meantime.  Because of this, when I explain these concepts I don’t bring the assumptions of a professional software engineer, but assume you, the reader, know nothing either.

Thus, it seems logical that in starting out this series, I need to explain what exactly programmability means in the context of network engineering, and what it means to do something programmatically.

Programmability simply means the capacity for a network device to be configured and managed by a computer program, as opposed to being configured and managed directly by humans.  This is a broad definition, but technically using an interface like Expect (really just CLI) or SNMP qualifies as a type of programmability.  Thus, we can qualify this by saying that programmability in today’s world includes the design of interfaces that are optimized for machine-to-machine control.

To manage a network device programmatically really just means using a computer program to control that network device.  However, when we consider a computer program, it has certain characteristics over and above simply controlling a device.  Essential to programming is the design of control structures that make decisions based on certain pieces of information.

Thus, we could use NETCONF to simply push configuration to a router or switch, but this isn’t the best reason to use it.  It would be a far more effective use of NETCONF if we read some piece of data from the device (say, interface errors) and took an action based on that data (say, shutting the interface down when the counters got too high.)  The other advantage of programmability is the ability to tie together multiple systems.  For example, we could read a device inventory out of APIC-EM, and then push config to devices based on the device type.  In other words, it is the decision-making capability that makes programmability most valuable.
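
Here’s a rough sketch of that read-then-decide loop using the ncclient library and the standard ietf-interfaces YANG model.  Treat it as an illustration only: the host, credentials, and threshold are placeholders, model support varies by platform, many devices require a candidate datastore and a commit rather than a write to the running config, and you would obviously want more safeguards before letting a script shut interfaces down.

import xml.etree.ElementTree as ET
from ncclient import manager

NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

# Subtree filter asking only for per-interface input error counters.
STATS_FILTER = f"""
<interfaces-state xmlns="{NS}">
  <interface>
    <name/>
    <statistics><in-errors/></statistics>
  </interface>
</interfaces-state>
"""

ERROR_THRESHOLD = 10000  # arbitrary, for the sake of the example

# Connection details are placeholders.
with manager.connect(host="198.51.100.1", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    reply = m.get(filter=("subtree", STATS_FILTER))
    root = ET.fromstring(reply.xml)

    for intf in root.iter(f"{{{NS}}}interface"):
        name = intf.findtext(f"{{{NS}}}name")
        errors = int(intf.findtext(f"{{{NS}}}statistics/{{{NS}}}in-errors") or 0)

        if errors > ERROR_THRESHOLD:
            # The decision: disable the interface by setting enabled=false.
            config = f"""
            <config>
              <interfaces xmlns="{NS}">
                <interface>
                  <name>{name}</name>
                  <enabled>false</enabled>
                </interface>
              </interfaces>
            </config>
            """
            m.edit_config(target="running", config=config)
            print(f"{name}: {errors} input errors, shutting it down")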

Network programmability encompasses a number of technologies:

  • Day 0 technologies to bring network devices up with an appropriate software version and configuration, with little to no human intervention.  Examples:  ZTP, PoAP, PnP.
  • Technologies to push and read configuration and operational data from devices.  Examples:  SNMP, NETCONF.
  • Automation systems such as Puppet, Chef, and Ansible, which are not strictly programming languages, but allow for configuration of numerous devices based on the role of the device.
  • The use of external programming languages, such as Ruby and Python, to interact with network devices.  (A short sketch of this follows the list.)
  • The use of on-box programming technologies, such as on-box Python and EEM, to control network devices.
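
As a small illustration of the “external programming languages” bullet, here’s a hedged sketch using the Netmiko library to pull operational data over SSH.  The device details are placeholders and the parsing is deliberately naive; the point is only that a few lines of Python can replace a lot of manual terminal work.

from netmiko import ConnectHandler

# Placeholder device details.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "admin",
    "password": "secret",
}

conn = ConnectHandler(**device)
output = conn.send_command("show ip interface brief")
conn.disconnect()

# Naive parsing: report interfaces that are administratively down.
for line in output.splitlines()[1:]:
    fields = line.split()
    if len(fields) >= 6 and fields[4] == "administratively":
        print(f"{fields[0]} is administratively down")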

In this series of articles we will cover all of these topics as well as the mysteries of data models, YANG, YAML, JSON, XML, etc., all within the context of network engineering.  I know when I first encountered YANG and data models I was quite confused, and I hope to clear up some of that confusion here.
