Perspectives

It’s funny that I remember a time when Cisco Live used to be a privilege, before it became a chore.  I’ve been to every Cisco Live US and every Cisco Live Europe since I started here in 2015.  I enjoy the show primarily because I enjoy meeting with fellow network engineers, and there is still a lot of energy every show.  But I’ve seen all the booths, the flashy keynotes, and the marketing sessions many times over.  I’ve eaten the bad food and stayed in a few bad hotels.

Speaking was one of my favorite activities.  The audiences are intimidating, and as someone who used to have a terrible fear of public speaking, I always feel a bit of jitters until my session(s) are over.  But I enjoy communicating technical concepts in a clear and understandable way.  That’s why I got into speaking.  I just wish I were teaching a college course, where I grade the students instead of the students grading me.

After I transitioned from being an individual contributor to a manager, I held onto my speaking slots for a while, but the last Cisco Live where I spoke was Las Vegas 2022.  I attended, but didn’t speak at, Cisco Live Amsterdam 2023 and Cisco Live Las Vegas 2023.  It was nice not to have to prepare much, but I missed the “speaker” tab on my badge.  It just wasn’t the same.  Yet this is what happens when you move into leadership.  The sessions at Cisco Live are meant to be technical, and the SGMs (session group managers) who control the sessions want technical speakers, not manager-types.  When you lead a team, you also don’t want to take sessions away from your staff.

In Vegas, I swore I would not return to Cisco Live without a speaker badge.  If I didn’t get a slot, I would skip CL for the first time since I came back here in 2015.  I applied for two sessions for Europe in February 2024, trying to cash in on my “Hall of Fame” status.  But the SGMs want relevant topics and good speakers.  I didn’t get any emails and assumed I wasn’t going to Amsterdam.  Had I finally jumped the shark?

Then I got a funny email.  “Do you want your Amsterdam session considered for Las Vegas?”  Huh?  Only speakers get that email.  I checked the Cisco Live Speaker Resource Center, and lo and behold, I saw one DevNet session there.

The session I submitted is called “Time Machine! A 1980s bulletin board system, revived”.  What??  I submitted this as a total long shot, nearly positive it would be denied.  As detailed in a couple of posts on this blog, I ran a bulletin board system in the 1980s.  I proposed to do a DevNet session on how BBSs worked and how I had revived my own in an emulator.  And they accepted it!  Ha!  Now I have to come up with the content.  I remember that where the abstract asked for the “business value of this session,” I wrote something like “There is absolutely no business value to this session.”  I guess the SGMs have a sense of humor.

I guess I’m going to Amsterdam.  I know for a fact one of my three regular readers goes to Cisco Live Europe.  So I look forward to seeing you there.  Meanwhile, I realized at this point in my career I need to submit my sessions in the “IT Leadership” track, so I’m writing some Las Vegas abstracts for that.  Can’t keep an old dog down!


In a recent article (paywall), Elon Musk has once again turned his wrath on remote workers.  Elon has a lot of good ideas, but also many bad ones, such as naming his child “X AE A-XII”.  That alone is proof that we don’t need to take everything he says seriously.

Elon has said that those who want to work remote are “detached from reality” and give off “Marie Antoinette vibes” (I will forgive his apparent misunderstanding of history).  His argument, to the extent he even articulates it, seems to have two angles:

  • First, it is “morally wrong” to work remotely because so many people in the world cannot do their jobs from home.
  • Second, remote workers are not as productive as their in-office counterparts.  (I’m extrapolating a bit here.)

I don’t think the first argument is terribly serious.  Just because some people (food service workers, factory workers, etc.) cannot work from home does not mean I should not.  Long before remote work, some people worked in clean offices while others worked in filthy coal mines.  We can debate the injustices of life, but I’m not convinced this disparity should guide office policies.

As to the second:  well, as a manager of many large teams, I can say that some of my most productive workers are fully remote, i.e., they work entirely from home.  I can think of two or three of my most respected and productive employees who have this arrangement.  Because they don’t live near a Cisco office, they have no choice.  I recently promoted one of them to principal, a title that is not given out lightly.  So, we cannot say that working from home automatically means a lack of productivity.

On the other hand, I’ve had some very poor performers who worked from home.  This was particularly the case during the lockdowns.  I remember one engineer who seemed to be doing nothing;  when I checked her calendar, it was empty.  Eventually she took a job elsewhere.  But I, as any good boss should, was well aware of her lack of contribution and would have done something had she not taken the initiative herself.

Does being in the office guarantee productivity?  Not at all!  I can sit around and watch YouTube videos at work just the same as I can from home.  I remember a guy who sat near me many years ago, and had a rearview mirror on his monitor.  He was always playing Solitaire and every time I, or anyone else, walked by his desk he would glance in the mirror and minimize the game.  He wasn’t fooling anyone.

For me, the noise and distractions of the office often make productive work difficult.  Thankfully, post-lockdown, several Cisco buildings are virtually empty, and I decamp to one of them if I need to get actual work done.  Pre-COVID I used to head out to a coffee shop, put in earphones, and get productive work done there.  Open offices are the worst for this.  They make serious work nearly impossible.

Then there’s this…  Let me be open about it.  I never agreed with the lockdowns.  When they were first implemented, I wrote every congressman, city councilman, county supervisor, the health department, the governor, the president, and pretty much anyone else I could think of with my opposition to what seemed to me a lunatic and totally unneeded idea.  Now, you can disagree with me vehemently, you can think I’m a jerk, and that’s fine.  But here’s the point:  Almost all the large corporations bought into it.  They could have fought these mandates, but they went along with them, shut their doors, and embraced remote work.  Many started marketing campaigns (and still have them) around hybrid work.  You cannot go 100% in for a thing and then make a 180-degree shift a few years later because you regret your decision.  The outcome of the lockdowns–a lot of people unwilling to return to the office–was entirely predictable.  I think corporations need to embrace the world they created, and live with the consequences of their choices.  If your workers want to be remote, let them be remote.  Sure, give them incentives to come into the office and be together.  Encourage them to do so.  But accept the reality of the new world.

Elon Musk reopened his factories mid-lockdown.  He may not know how to name a child, but I’ll at least give him points for consistency.

I’ve been reluctant to comment on the latest fad in our industry, generative AI, simply because everybody has weighed in on it.  I also try to avoid commenting on subjects outside my scope of authority.  Increasingly, though, people are coming to me at work and asking how we can incorporate this technology into our products, how our competitors are using it, and what our AI strategy is.  So I guess I am an authority.

To be honest, I didn’t play with ChatGPT until this week.  When I first looked at it, it wanted my email address and phone number, and I wasn’t sure I wanted to provide those to our new AI overlords.  So I passed on it.  Then Cisco released an internal-only version, which is supposedly anonymous, so I decided to try it out.

My first impression was, as they say, “meh.”  Obviously its ability to interpret and generate natural language is amazing.  Having it recite details of its data set in the style of Faulkner was cool.  But overall, the responses seemed like warmed-over search-engine results.  I asked it if AI is environmentally irresponsible, since it will require so much computing power.  The response was middle-of-the-road:  “no, AI is not environmentally irresponsible,” but “we need to do more to protect the environment.”  Blah, blah.  Non-committal, playing both sides.  Like almost all of its answers.

Then I decided to dive a bit deeper into a subject I know well:  Ancient Greek.  How accurate would ChatGPT be on a relatively obscure subject (and yet one with thousands of years of data)?

Even if you have no interest, bear with me.  I asked ChatGPT if it knew the difference between the Ionic dialect of Herodotus and the more common dialect of classical Athens.  (Our version, at least, does not allow proper names, so I had to refer to Herodotus somewhat elliptically.)  It assured me it did.  I asked it to write “the men are arriving at Athens” in the dialect of Herodotus.  It wrote, “Ἀφίκοντο οἱ ἄνδρες εἰς Ἀθήνας,” which is obviously wrong.  The first word there, “aphikonto“, would actually be “apikonto” in the dialect of Herodotus, who was well known for dropping aspirations.  The version ChatGPT gave me is the classical Attic one.

I let ChatGPT know it was wrong, and it dutifully apologized.  Then I asked it to summarize the differences in the dialects.  It said to me:

Herodotus and Ionic writers typically removed initial aspirations, while the Attic dialect retained them. For example, “Ἀφίκοντο” (Herodotus) vs. “ἔφικοντο” (Attic)

Uh, you don’t need to know the Greek alphabet to see that it made exactly the same mistake again.  It should have said that Herodotus would use “Ἀπίκοντο” (apikonto), whereas in Attic the word would be “Ἀφίκοντο” (aphikonto).

Here’s a more computer-person-friendly example.  I decided to ask about my other favorite subject, retro computing.  I asked, “What is the difference between the Apple IIGS ProDOS 8, ProDOS 16, and GS/OS operating systems?”  Among other things, ChatGPT responded that GS/OS “introduced features such as a desktop metaphor, drop-down menus, and a hierarchical file system.”  Which happens to be wrong.  ProDOS 16 actually introduced those features.  (In fact, ProDOS 8 introduced the hierarchical file system.)  I pointed this out.  “You are correct, and I apologize for the oversight in my previous response. I misspoke in stating that GS/OS introduced [those] features.”  Hmmm.

I’m testing it, and I know the right answers.  The problem would arise if I were trusting ChatGPT to provide me with a correct answer.  There have been plenty of examples of mistakes made by ChatGPT, such as adding a “permit any” to the top of access-lists (which, since ACLs are evaluated top-down, lets all traffic through and renders every entry below it meaningless).

The issue is, ChatGPT sounds authoritative when it responds.  Because it is a computer speaking in natural language, we have a tendency to trust it.  And yet it has consistently proven it can be quite wrong on even simple subjects.  In fact, our own version carries the caveat “Cisco Enterprise Chat AI may produce inaccurate information about people, places, or facts” at the bottom of the page, and I’m sure most implementations of ChatGPT carry a similar warning.

Search engines place the burden of determining truth or fiction upon the user.  I get hundreds or thousands of results, and I have to decide which is credible based on the authority of the source, how convincing it sounds, and so on.  AI provides one answer.  It has done the work for you.  Sure, you can probe further, but in many cases you won’t even know that the answer served back is not trustworthy.  For that reason, I see AI tools as potentially very misleading, and even harmful in some circumstances.

That aside, I do like the fact that I can converse with it in Latin and ancient Greek, even if it makes mistakes.  It’s a good way to kill time in boring meetings.

As I mentioned in my last post, I like modeling networks using tools like Cisco Modeling Labs or GNS3.  I recalled how, back in TAC, I had access to a Cisco-internal (at the time) tool called IOS on Unix, or IOU.  This enabled me to recreate customer environments in minutes, with no need to hunt down hardware.  Obviously IOU didn’t work for every case.  Oftentimes, the issue the customer raised was very hardware-specific, even when it was a “routing protocol” issue.  However, if I could avoid hardware, I would do the recreate virtually.

When I worked at Juniper (in IT), we did a huge project to refresh the WAN.  This was just before SD-WAN came about.  We sourced VPLS from two different service providers, and then ran our own layer 3 MPLS on top of it.  The VPLS just gave us layer 2 connectivity, like a giant switch.  We had two POPs in each region, which acted as aggregation points for smaller sites.  Those sites had CE routers deployed on prem, connecting to PE routers in the POPs.  This is a basic service provider configuration, with us as the service provider.  Larger sites had PE routers on site, with the campus core routers acting as CEs.

We got all the advantages of layer 3 MPLS (traffic engineering, segmentation via VRF) without the headaches (peering at layer 3 with your SP, yuck!)

As the “network architect” for IT, I needed a way to model and test changes to the network.  I used a tool called VMM, which was similar to IOU.  Using a text file, I could define a topology of routers and their interconnections, and a Python script started it all up.  I then had a fully functional network model running under a hypervisor, and I could test stuff out.
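I no longer remember VMM’s actual file format, so the topology syntax and launcher below are invented; this Python sketch is just meant to convey the flavor of the workflow, not the real tool.

# Hypothetical reconstruction of the workflow: define devices and links in a
# small text format, parse it, and boot each node under the hypervisor.
TOPOLOGY = """
# name      image    links (peer:local-interface)
pop1-pe     vmx      pop2-pe:ge-0/0/0 site1-pe:ge-0/0/1
pop2-pe     vmx      pop1-pe:ge-0/0/0 site2-ce:ge-0/0/1
site1-pe    vmx      pop1-pe:ge-0/0/0
site2-ce    vmx      pop2-pe:ge-0/0/0
"""

def parse(text):
    """Yield (name, image, links) for each device line in the topology."""
    for line in text.strip().splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        name, image, *links = line.split()
        yield name, image, links

for name, image, links in parse(TOPOLOGY):
    print(f"booting {name} ({image}) with {len(links)} links")
    # The real script invoked the hypervisor here to start each VM.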

I never recreated the entire network–it wasn’t necessary.  I created a virtual version with two simulated POPs, a tier 1 site (PE on prem), and a tier 2 site (PE in POP).  I don’t fully remember the details;  there may have been one or two other sites in my model.

For testing pure routing issues on normally functioning devices, my VMM-based model was a dream.  Before we rolled out changes, we could test them in my virtual lab.  We could apply the configuration exactly as it would be entered on the real devices, to see what effect it would have on the network.  I just didn’t have the cool marketing term “digital twin,” because it didn’t exist yet.

I remember working on a project to roll out multicast on the WAN using Next Generation Multicast VPN (NGMVPN).  NGMVPN was (is?) a complex beast, and as I designed the network and sorted out things like RP placement, I used my virtual lab.  I even filed bugs against Juniper’s NGMVPN code, bugs I found while using my virtual devices.  I remember the night we did a pilot rollout to two sites.  Our Boston office dropped off the network entirely.  Luckily we had out-of-band access and rolled back the config.  I SSH’d into my virtual lab, applied the config, and spent a short amount of time diagnosing the problem (a duplicate loopback address), and did so without the stress of troubleshooting a live network.
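That failure is also the kind of thing a dumb pre-flight script can catch before the maintenance window.  A minimal sketch, with device names and Junos-style config lines invented for illustration:

import re
from collections import defaultdict

# Flag any loopback address assigned to more than one device.
configs = {
    "boston-pe":  "set interfaces lo0 unit 0 family inet address 10.255.0.7/32",
    "denver-pe":  "set interfaces lo0 unit 0 family inet address 10.255.0.7/32",
    "seattle-pe": "set interfaces lo0 unit 0 family inet address 10.255.0.9/32",
}

seen = defaultdict(list)
for device, config in configs.items():
    for addr in re.findall(r"address (\S+/32)", config):
        seen[addr].append(device)

for addr, devices in sorted(seen.items()):
    if len(devices) > 1:
        print(f"DUPLICATE loopback {addr} on: {', '.join(devices)}")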

I’ve always been a bit skeptical of the network simulation/modeling approach, where some software intelligence layer tries to “think through” the consequences of applied changes.  The problem is the variability of networks.  So many things can happen in so many ways.  Actual devices running actual NOS code in a virtual environment will behave exactly the way real devices will, within their constraints (such as not emulating the hardware precisely, or not emulating all the different interface types).  I may be entirely wrong on this one;  I’ve spent virtually no time with these products.

The problems I was modeling were protocol issues amongst a friendly group of routers.  When you add in campus networking, the complexity increases quite dramatically.  Aside from wireless being in the mix, you also have hundreds or thousands of non-network devices–laptops, printers, phones–which often cause networks to behave unpredictably.  I don’t think our AI models are yet at the point where they can predict what comes with that complexity.

Of course, the problem you have is always the one you don’t predict.  In TAC, most of the cases I took were bugs.  Hardware and software behave unexpectedly.  As in the NGMVPN case, if a software bug is strictly protocol-related, you might catch it in an emulation.  But many bugs exist only on certain hardware platforms, or in versions of software that don’t run virtually, and so on.

As for digital twins, I do think learning to use CML (of course I’m Cisco-centric) or similar tools is very worthwhile.  Rehearsing major changes offline in a virtual environment is a fantastic way to prep for the real thing.  Don’t forget, though, that things never go as planned–and thank goodness for that, as it gives us all job security.

We all have to make a decision, at some point in our careers, about whether or not to get onto the management track.  At Cisco, there is a very strong path for individual contributors (ICs).  You can become a principal (director-level), a distinguished engineer (senior director-level), and a fellow (VP-level) as an IC, never having to manage a soul.  When I married my wife, I told her:  “Never expect me to get into management.  I’m a technical guy, I love being a technical guy, and I have zero interest in managing people.”

Thus, I surprised myself back in 2016 when my boss asked me, out of the blue, to step into management and I said yes.  Partly it was my love of the Technical Marketing Engineer role, partly my desire to have some authority behind my ideas.  At one point my team grew to fifty TMEs.

All technical people know that when you go that route, your technical skills will atrophy as you get less and less hands-on experience.  This is very true.  For the first couple of years, I kept up my formidable lab;  then over time it sat in Cisco building 23, unused and consuming OpEx.  I almost surrendered it numerous times.

Through attrition and corporate shenanigans, my team is considerably smaller (25 or so) and is run by a very strong management team.  Last week, I decided to bring the lab back up.  I’ve been spending a lot of time sorting through servers and devices, figuring out which to scrap and which to keep.  (Many of my old servers require Flash to access the CIMC, which is not feasible going forward.)  I haven’t used ESXi in years, and finding out I can now access vSphere in a browser–from my Mac!!–was a pleasant surprise.  Getting the CIMCs upgraded, ESXi installed, and a functional Ubuntu server running was a bit of a pain, but this is oddly the sort of pain I miss.

I have several Cat 9k switches in my lab, but I installed Cisco Modeling Labs on one of my servers.  (The nice thing about working for Cisco is the license is free.)  I used VIRL many years ago, which I absolutely hated.  CML is quite slick.  It was simple to install, and within a short time I had a lab up and running with a CSR1kv, a Cat 8k, and a virtual Cat 9k.

When I was in TAC I discovered IOS on Unix, or IOU.  Back then, TAC agents were each given a Sun SPARCstation, and I used mine almost exclusively to run IOU.  (I thought it was so cool back then to have a Sun box on my desk.  And those of you who remember them will know what I mean when I say I miss the keyboard.)  IOU allowed me to define a topology in a text file, and then spin up several virtual IOS devices on the SPARCstation in that topology.  It only supported simulated Ethernet links, but for pure routing-protocol cases, IOU was more than adequate to recreate a customer environment.  In 15 minutes I could have my recreate up and running.  Other engineers would open a case to have a recreate built by our lab team, which could take days.  I never figured out why they wouldn’t use IOU.

When I left Cisco I had to resort to GNS3, which was a pretty helpful piece of software.  Then, when I went to Juniper I used Junosphere, or actually an internal version of it called VMM, to spin up topologies.  VMM was awesome.  Juniper produced a virtual version of its MX router that was so faithful to the real thing that I could pass the JNCIE Service Provider exam without ever having logged into a real one, at least until exam day.

It’ll be interesting to see what I can do on virtual 9ks in CML–I hear there are some limitations.  But I do plan to spend as much time as possible using the virtual version over the real thing.

One thing I think I lost sight of as I (slightly) climbed the corporate ladder was the necessity of technical leadership.  We have plenty of people managers and MBAs.  What we badly need are leaders who understand the technology.  And while I have a lot of legacy knowledge in my mental database, it’s in need of a refresh.  It’s hard to stay sharp technically when you read about new technologies in PowerPoint.

The other side of this is that, as engineers, we love the technology.  I love making stuff work.  My wife is not technical at all, and cannot understand why I get a thrill from five little exclamation points when a ping goes through.  I don’t love doing budgets and handling HR cases, although I’ve come to learn why I need to do those things.  I need to do them so my people can function optimally.  And I’m happy to do them for my team.

On the other hand, I’m glad to be in the frigid, loud, harsh lighting of a massive Cisco lab again. It’s very cool to have all this stuff.   Ain’t life grand!

Update:  From Fred, who was the guy referenced in the first paragraph below:

Actually it was a white button with a router icon on it and “make cli great again”, I know this because it was me. It was June 2016. Needless to say in my view that did not age well.

When I attended Cisco Live sometime around the election of Donald Trump, there was a fellow walking around with a red hat with white lettering on it:  MAKE CLI GREAT AGAIN.  Ha!  I love Cisco Live.  These are my people.

I remember back when I worked at Juniper, one exec looked at me working on CLI and said, “you know that’s going to be gone soon.  It’ll all be GUI.”  That was 8 years ago…how’s that going?  When I work on CLI (and I still do!), or programming, my wife always says, “how can you stare at that cryptic black screen for hours?”  Hey, I’ve been doing it since I was a kid.

The black screen won’t go away, I’m afraid.  I’ve recently been learning iOS app development for fun (not profit).  It’s surprisingly hard given the number of successful app developers out there.  I may be too used to Python to program in Swift, and my hatred of object-oriented programming doesn’t help me when there is no way to avoid it in Swift.  Anyways, it took me about a week to sort out the different UI frameworks used in iOS.  There are basically three:

  • Storyboards.  Storyboards are a graphical design framework for UI layout.  Using storyboards, you drag and drop UI elements like buttons and text fields onto a miniature iPhone screen.
  • UIKit.  (Technically storyboards use UIKit, but I don’t know what else to call this.)  Most high-end app developers will delete the storyboard from their project and write the UI as code.  They literally type in code to tell iOS what UI elements they want, how to position them, and what to do when they are selected.  Positioning is fairly manual and is done relative to other UI elements.
  • SwiftUI.  Apple is pushing towards this model and will eventually deprecate the other two.  SwiftUI is also a UI-as-code model, but it’s declarative instead of imperative.  You tell SwiftUI what you want and roughly how you want to position things, and SwiftUI does it for you.

Did you catch my point?  The GUI-based layout tool is going away in favor of UI-as-code!  The black screen always comes back!

The difference between computer people and non-computer-computer-people (many industry MBAs, analysts, etc.) is that computer people understand that text-based interaction is far more efficient, even if the learning curve is steeper.

Andrew Tanenbaum, author of the classic Computer Networks, typeset his massive work in troff.  Troff is a text-based typesetting tool where you enter input like this:

.ll 3i
.mk a
.ce
Preamble
.sp
We, the people of the United States, in order
to form a more perfect Union...

Why doesn’t he just use Word?  I’ll let Dr. Tanenbaum speak for himself:

All my typesetting is done using troff. I don’t have any need to see what the output will look like. I am quite convinced that troff will follow my instructions dutifully. If I give it the macro to insert a second-level heading, it will do that in the correct font and size, with the correct spacing, adding extra space to align facing pages down to the pixel if need be. Why should I worry about that? WYSIWYG is a step backwards. Human labor is used to do that which the computer can do better.  (Emphasis added.)

I myself am not quite enough of a cyborg to use troff (though I use vi), but I have used LaTeX with far better results than Word.  (Dr. Tanenbaum says “real authors use troff,” however.)

One of my more obscure interests (I have many) is Gregorian Chant.  Chant uses a musical notation which is markedly different from modern music notation, and occasionally I need to typeset it.  I use a tool called Gregorio, where I enter the chant like this:

(cb3) Ad(d)ór(f’)o(h) te(h’) de(h)vó(hi)te,(h.) (,) la(g)tens(f) Dé(e’)i(d)tas,(d.)

The letters in parentheses represent the different musical notes.  I once tried typesetting the chant graphically, and it was far more tedious than the above.  Why not enter what I want and let the typesetting system do the work?

Aside from mere efficiency, text files can be easily version-controlled and diffed.  Try that with your GUI tool!

It’s very ironic that many of my customers who use controllers like DNAC or vManage are actually accessing these tools through their APIs.  They bought a GUI tool, but they prefer the black screen.  The controller in this case becomes a point of aggregation for them, a system which at least does discovery and allows some level of abstraction.
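For instance, pulling the device inventory out of DNAC from Python is faster than clicking through the GUI.  A minimal sketch against the published DNA Center intent API–the host and credentials are placeholders, and details vary by release:

import requests

# Placeholder host and credentials, for illustration only.
DNAC = "https://dnac.example.com"
AUTH = ("admin", "password")

# Trade basic-auth credentials for a token, then list the device inventory.
token = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                      auth=AUTH, verify=False).json()["Token"]
inventory = requests.get(f"{DNAC}/dna/intent/api/v1/network-device",
                         headers={"X-Auth-Token": token}, verify=False).json()

for device in inventory["response"]:
    print(device["hostname"], device["managementIpAddress"])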

The non-computer-computer-people look at SwiftUI, network device CLI, troff, Gregorio, and APIs, and rend their garments, crying out to heaven, “why, oh why?!”  Some may even remember the days of text-based editing systems on their DOS machines, which they could never learn, and the great joy that WYSIWYG brought them.  It reminds me of a highly incompetent sales guy I worked with at a Gold partner back in the day.  He once saw me configuring a router and said:  “Wow, you still use DOS to configure routers!”

“It’s actually IOS CLI, not DOS.”

“That’s DOS!” he densely replied.  “I remember DOS.  I can’t believe you still use DOS!”

It’s funny that no matter how hard we try to get away from code, we always come back to it.  We’re hearing a lot about “low code” environments these days.  It tells you something when the first three Google hits on “low code” just come back to Gartner reports.  Gee, have we been down this path before?  Visual Basic was invented in 1991.  If low code is so great, why is Apple moving from storyboards to SwiftUI?

In my last post I wrote about the war on expertise.  This is one of the fronts in that war.  The non-computer-computer-people cannot understand the black screen, and are convinced they can eliminate it.  They learned about “innovation” in business school, and read case studies about Windows 95 and the end of DOS.  They read about how companies like Sun Microsystems went belly-up because they were not “disruptive.”  They did not, however, read about all the failed attempts to eliminate the black screen, spanning decades.  I believe it was George Santayana who said, “If you don’t remember computer history, you’re doomed to repeat it.”

It’s impossible to count how many people at my college wanted to be “writers”.  So many early-twenty-somethings here in the US think they are going to spend their lives as screenwriters or novelists.  My colleagues from India tell me most people there want to be doctors or engineers, which tells you something about the decline of the United States.

Back in the mid-2000s, a popular buddy-comedy came out about a novelist and an actor and their adventures in the “California wine country”.  The author of the film is an LA novelist.  The only people he knew, and the only characters he could create, were writers and actors.  I read that his first novel was about a screenwriter.  The movie was popular, but I found the characters utterly boring.  Who cares about a novelist and his romantic adventures?  Herman Melville spent years at sea, giving him the material to write Moby Dick.  Fyodor Dostoevsky wanted to be a writer from an early age, but he spent years in a prison camp, followed by years of forced military service, which gave him a view into nihilism and its effect on the human soul.  The point is, these great writers earned the right to talk about something;  they didn’t just go to college and come out geniuses with brilliant things to say.

I’ve been hearing a lot about “product management” lately.  I work in product management, in fact, and I’ve worked with product managers for many years.  However, I didn’t realize until recently that product management is the hot new field.  Everyone wants to major in PM in business school.  As one VP I know told me, “people want to be PMs because that’s where CEOs come from.”  Well, like 19-year-olds feeling entitled to be great novelists, b-school students are apparently expecting to become CEOs.  Missing somewhere in this sense of entitlement is the fact that achievement has to be earned, and that it has to be earned by developing specific expertise.  A college student who wants to be a novelist thinks he or she simply deserves to be one by virtue of his or her brilliance;  a b-school PM student apparently thinks the same way about becoming a CEO.

Back when I worked in TAC, one of my mentors was a TAC engineer who had previously been a product manager for GSR (12000-series) line cards.  He went back to TAC because he wanted to get into the new CRS-1 router and felt it was the best place to learn the new product quickly.  It made sense at the time, but it is inconceivable now that a PM would go to TAC.  The product manager career path is directed towards managing business, not technology, and it would be a step down for product managers to become technical again.

If you don’t work for a tech company, you may not know a lot about product management, but PMs are very important to the development of the products you use.  They decide what products are brought to market and what features those products will have;  they prioritize the roadmaps.  They are held accountable for the revenue (or lack thereof) of a product.

Imagine, now, that somebody with that responsibility for, say, a router has no direct experience as a network engineer, but instead has an MBA from Kellogg or Haas or Wharton.  They’ve studied product management as a discipline, but know nothing about the technology that they own.  Suppose this person has no particular interest in or passion for their field–they just want to succeed in business and be a CEO some day.  What do you think the roadmap will look like?  Do you think the product will take into account the needs of the customer?  When various technologists come to such a PM, will he be able to rationally sort through their competing proposals and select the correct technology?

To be clear, I am not criticizing any individual or my current employer here.  This problem extends industry-wide and explains why so many badly conceived products exist.  The problem of corporatism, which I’ve written about often, extends beyond product management too.  How often are decisions in IT departments made by business people who have little to no experience in the field they are responsible for?  I got into network engineering because I was fascinated by it and loved it.  I’m not the best engineer out there–I’ve worked with some brilliant people–but I do care about the industry and the products we make.  And most importantly, I care about network engineers because I’ve been one.

Corporatists believe generic management principles can be learned which apply to any business, and that they don’t really need domain-specific expertise.  They know business, so why would they?  True, there are some business-specific tasks, like finance, where generic business knowledge is really all that’s needed.  But the notion that generic business knowledge qualifies one to be authoritative on technical topics is simply mistaken.  This is how tech CEOs end up as CEOs of coffee companies–it’s just business, right?

I don’t mean to denigrate product management as a discipline.  PMs have an important role to play, and product management is the art of deciding between alternatives with constrained resources.  I am saying this:  if you want to become a product manager, spend the time to learn not just the business, but the actual thing you are product managing.  You’d be better off spending a couple of years in TAC out of business school than going straight into PM.  Not that many CEO-aspiring PMs would ever do that these days.

Now off to write my first novel.

A couple of years back I purchased an AI-powered energy monitoring system for my home.  It clips onto the power mains and monitors amperage/wattage.  I can view a graph showing energy usage over time, which is really quite helpful for keeping tabs on my electricity consumption at a time when electricity is expensive.

The AI part identifies what devices are drawing power in my house.  Based simply on wattage patterns, so they claim, the app will tell me this device is a light, that device is an air conditioner, and so on.  An electric oven, for example, consumes so much power, and switches itself on and off in such a pattern, that the AI can identify it.  The company has a large database of the sorts of products that can be plugged into an outlet, and it uses that database to figure out what you have connected.

So far my AI energy monitor has identified ten different heaters in my house.  That’s really cool, except for the fact that I have exactly one heater.  When the message popped up saying “We’ve identified a new device!  Heater #10!”, I must admit I wasn’t surprised.  It did raise an eyebrow, however, given that it was summer and over 100 degrees (38 C) outside.  At the very least, you’d think the algorithm could correlate location and weather data with its guesses.
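My guess–and it is only a guess–is that many resistive loads look nearly identical from the panel:  a space heater, an oven element, a hair dryer, and a toaster all draw one big flat chunk of wattage.  A deliberately naive matcher, with every number invented for illustration, shows the failure mode:

# Match an observed wattage step against a signature database.  Real
# disaggregation systems are far more sophisticated than this.
SIGNATURES = {
    "refrigerator": 150,   # typical step size in watts
    "heater":       1500,
    "oven":         2400,
}

def identify(step_watts):
    """Return the appliance whose signature is closest to the observed step."""
    return min(SIGNATURES, key=lambda name: abs(SIGNATURES[name] - step_watts))

# A space heater, a hair dryer, and a toaster all step by roughly 1500W,
# so every one of them comes back as a "heater."
for observed in (1480, 1520, 1495):
    print(observed, "->", identify(observed))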

Many “futurists” who lurk around Silicon Valley believe that in a few years we’ll live forever by merging our brains with AI.  I’ve noticed that most of these “futurists” have no technological expertise at all.  Usually they’re journalists or marketing experts.  I, on the other hand, deal with technology every day, and it leaves me more than a little skeptical of the “AI” wave that’s been sweeping over the Valley for a few years.

Of course, once the “analysts” identify a trend, all of us vendors need to move on it.  (“SASE was hot last fall, but this season SSE is in!”)  A part of that involves labeling things with the latest buzzword even when they have nothing to do with it.  (Don’t get me started on “controllers”…)  One vendor has a tool that opens a TAC case after detecting a problem.  They call this something like “AI-driven issue resolution.”  Never mind that a human being gets the TAC case and has to troubleshoot it–this is the exact opposite of AI.  Apparently we can broaden the term to mean a computer doing anything on its own, in this case calling a human.  Hey, is there a better indicator of intelligence than asking for help?

Dynamic baselines are neat.  I remember finding the threshold-alerting capabilities in NMS tools useless back in the ’90s.  Do I set the alert at 50% of bandwidth?  60%?  80%?  Dynamic baselining determines the normal traffic (or whatever) level at a given time of day, and sets a variable threshold based on historical data.  It’s AI, I suppose, but it’s basically just pattern analysis.
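The idea is simple enough to sketch.  This toy version uses made-up utilization numbers and a three-sigma threshold;  it’s the concept, not any vendor’s actual algorithm:

import statistics

# Made-up history: seven daily samples of link utilization (%) for each hour
# of the day, busier during business hours.
def week_of(base):
    return [base + delta for delta in (-6, -3, -1, 0, 2, 4, 7)]

history = {hour: week_of(75 if 9 <= hour <= 17 else 30) for hour in range(24)}

def threshold(hour, k=3.0):
    """Variable threshold: that hour's historical mean plus k standard deviations."""
    samples = history[hour]
    return statistics.mean(samples) + k * statistics.stdev(samples)

# 85% utilization is alarming at 3 a.m. but unremarkable at 2 p.m.
for hour, observed in ((3, 85.0), (14, 85.0)):
    status = "ALERT" if observed > threshold(hour) else "ok"
    print(f"{hour:02d}:00  observed {observed:.0f}%  threshold {threshold(hour):.1f}%  {status}")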

True issue resolution is a remarkably harder problem.  I once sat in a product meeting where we had been asked to determine all of the different scenarios the tool we were developing would be able to troubleshoot.  Then we were to determine the steps the “AI” would take (i.e., what CLI to execute).  We built slide after slide, racking our brains for all the ways networks fail and how we’d troubleshoot them.

The problem with this approach is that if you think of 100 ways networks fail, when a customer deploys the product it will fail in the 101st way.  Networks are large distributed systems, running multiple protocols, connecting multiple operating systems over different media types, and they have ways of failing, sometimes spectacularly, that nobody ever thinks about.  A human being can think adaptively and dynamically in a way that a computer cannot.  Troubleshooting an outage involves collecting data from multiple sources and then thinking through the problem until a resolution is found.  How many times, when I was in TAC, did I grab two or three other engineers to sit around a whiteboard and debate what the problem could be?  Using our collective knowledge and experience, bouncing ideas off one another, we would often come up with creative approaches to the problem at hand and solve it.  I just don’t see AI doing that.  So maybe it’s a good thing it phones home for help.

I do see a role for AI and its analysis capabilities in providing troubleshooting information on common problems.  Data volume is also a real problem for humans:  we’re inundated by numbers and often cannot easily find patterns in what we’re presented with.  AI-type tools can help aggregate and analyze data from numerous sources in a single place.  So I’m by no means saying we should be stuck in 1995 with our NMS tools.  But I don’t see AI tools replacing network operations teams any time soon, despite what the marketing says.

And I certainly have no plans to live forever by fusing my brain with a computer.  We can leave that to science fiction writers, and their more respectable colleagues, the futurists.

I haven’t posted in a while, for the simple reason that writing a blog is a challenge.  What the heck am I going to write about?  Sometimes ideas come easily, sometimes not.  Of course, I have a day job, and part of that day job involves Cisco Live, which is next week, in person, for the first time in two years.  Getting myself ready, as well as coordinating a team of almost fifty technical marketing engineers, does not leave a lot of free time.

For the last several in-person Cisco Lives, I did a two-hour breakout on programmability and scripting.  The meat of the presentation was NETCONF/RESTCONF/YANG, and how to use Python to configure and operate devices using those protocols.  I don’t really work on this anymore, and a very competent colleague has taken it over.  I kept delivering the session because I loved doing it.  But all good things have to come to an end.  At the last in-person Cisco Live (Barcelona 2020), I had just wrapped up delivering the session for what I assumed would be the last time.  A couple of attendees approached me afterwards.  “We love your session, we come to it every year!” they told me.

I was surprised.  “But I deliver almost the same content every year,” I replied.  “I even use the same jokes.”

“Well, it’s our favorite session,” they said.

At that point I resolved to keep doing it, even if my experience was diminishing.  Then, COVID.
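For the curious, here is the flavor of that session in miniature:  pulling configuration off a device over NETCONF with Python’s ncclient library.  A minimal sketch–the host, credentials, and YANG filter are placeholders, not the session’s actual demo code:

from ncclient import manager

# Retrieve just the hostname from an IOS-XE device's running configuration
# over NETCONF (port 830).  Host and credentials are placeholders.
FILTER = """
<filter>
  <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
    <hostname/>
  </native>
</filter>
"""

with manager.connect(host="192.0.2.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    reply = m.get_config(source="running", filter=FILTER)
    print(reply.data_xml)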

I had one other session which was also a lot of fun, called “The CCIE in an SDN world.”  Because it was in the certification track, I wasn’t taking a session away from my team by doing it.  There is a bit about the CCIE certification, its history, and its current form, but the thrust of it is this:  network engineers are still relevant, even today, with SDN and APIs supposedly taking over everything.  There is so much marketing fluff around SDN and its offshoots, and while there may be good ideas in there (and a lot of bad ones), we still need engineers who study how to manage and operate data networks, just like we did in the past.

I will be delivering that session.  I have 50 registered attendees, which is a far cry from the 500 I used to pack in at the height of the programmability gig.  As a Senior Director, you end up in limbo:  too junior for keynotes, too senior for breakouts.  But the cert guys were gracious enough to let me speak to my audience of 50.

Cisco Live is really the highlight of the TME role, and I’m happy to finally be back.  Let’s just hope I’m still over my stage fright–I haven’t had an audience in years!

I shall avoid naming names, but when I worked for Juniper we had a certain CEO who pumped us up as the next $10 billion company.  It never happened, and he left and became the CEO of Starbucks.  Starbucks has nothing to do with computer networking at all.  Why was he hired by Starbucks?  How did his (supposed) knowledge of technology translate into coffee?

Apparently it didn’t.  Howard Schultz, Starbucks’ former CEO, is back at the helm.  “I wasn’t here the last four years, but I’m here now,” he said, according to an article in the Wall Street Journal (paywall).  “I am not in business, as a shareholder of Starbucks, to make every single decision based on the stock price for the quarter…Those days, ladies and gentlemen, are over.”  Which, of course, implies that that was exactly what the previous CEO was doing.

What happened under the old CEO?  “Workers noticed an increasing focus on speed metrics, including the average time to prepare an order, by store.”  Ah, metrics, my old enemy.  There’s a reason one of my favorite books is The Tyranny of Metrics, and a reason I wrote a TAC Tales piece just about the use of metrics in TAC.  More on that in a bit.

As I look at what I refer to as “corporatism” and its effect on our industry, it often becomes apparent that the damage of this ethos extends beyond tech.  The central tenet of corporatism, as I define it, is that organizations are best run by people who have no particular expertise other than management itself.  That is, these individuals are trained and experienced in generic management principles, and this is what qualifies them to run businesses.  The generic management skills are transferable, meaning that if you become an expert in managing a company that makes paper clips, you can successfully use your management skills to run a company that makes, say, medical-device software.  Or pharmaceuticals.  Or airplanes.  Or whatever.  You are, after all, a manager, maybe even a leader, and you just know what to do without any deep expertise or hard-acquired industry-specific knowledge.

Those of us who spend years, even decades acquiring deep technical knowledge of our fields are, according to this ethos, the least qualified to manage and lead.  That’s because we are stuck in our old ways of doing things, and therefore we don’t innovate, and we probably make things complex, using funny acronyms like EIGRP, OSPF, BGP, STP, MPLS, L2VNI, etc., to confuse the real leaders.

Corporatists simply love metrics.  They may not understand, say, L2VNIs, but they can look at graphs all day long.  Everything has to be measured in their world, because once it’s measured it can be graphed, and once it’s graphed it’s simply a matter of making the line go in the right direction.  Anyone can do that!

Sadly, as Starbucks seems to be discovering, life is messier than a few graphs.  Management by metric usually leads to unintended consequences, and frequently those who operate in such systems resort to metric-gaming.  As I mentioned in the TAC Tale, measuring TAC agents on create-to-close numbers led to many engineers avoiding complex cases and sticking with RMAs to get their numbers looking good.  Tony Hsieh at Zappos, whatever problems he may have had, was totally right when he had his customer service reps stay on the phone as long as needed with customers, hours if necessary, to resolve an issue with a $20 pair of shoes.  That would never fly with the corporatists.  But he understood that customer satisfaction would make or break his business, and it’s often hard to put a number on that.

Corporatism of various sorts has been present in every company I’ve worked for.  The best, and most successful, leadership teams I’ve worked for have avoided it by employing leaders who grew up within the industry.  This doesn’t make them immune from mistakes, of course, but it allows them to understand their customers, something corporatists have a hard time doing.

Unfortunately, we work in an industry (like many) in which the stock value of companies is determined by an army of non-technical “analysts” who couldn’t configure a static route, let alone explain what one is.  And yet somehow, their opinions on (e.g.) the router business move the industry.  They of course adhere to the ethos of corporatism.  And I’m sure they get paid better than I do.

Starbucks seems to be correcting a mistake by hiring back someone who actually knows its business.  Would that all corporations learned from Starbucks’ mistake and ensured their leaders know at least something about what they are leading.