Perspectives

I’ve been reluctant to comment on the latest fad in our industry, generative AI, simply because everybody has weighed in on it.  I do also try to avoid commenting on subjects outside of my scope of authority.  Increasingly, though, people are coming to me at work and asking how we can incorporate this technology into our products, how our competitors are doing it, and what our AI strategy is.  So I guess I am an authority.

To be honest, I didn’t play with ChatGPT until this week.  When I first looked at it, it wanted my email address and phone number and I wasn’t sure I wanted to provide that to our new AI overlords.  So I passed on it.  Then Cisco released an internal-only version, which is supposedly anonymous, so I decided to try it out.

My first impression was, as they say, “meh.”  Obviously its ability to interpret and generate natural language is amazing.  Having it recite details of its data set in the style of Faulkner was cool.  But overall, the responses seemed like warmed-over search-engine results.  I asked it if AI is environmentally irresponsible, since it will require so much computing power.  The response was middle-of-the-road: “no, AI is not environmentally irresponsible,” but “we need to do more to protect the environment.”  Blah, blah.  Non-committal, playing both sides of the coin.  Like almost all of its answers.

Then I decided to dive a bit deeper into a subject I know well:  Ancient Greek.  How accurate would ChatGPT be on a relatively obscure subject (and yet one with thousands of years of data)?

Even if you have no interest, bear with me.  I asked ChatGPT if it knew the difference between the Ionic dialect of Herodotus and the more common dialect of classical Athens.  (Our version, at least, does not allow proper names so I had to refer to Herodotus somewhat elliptically.)  It assured me it did.  I asked it to write “the men are arriving at Athens” in the dialect of Herodotus.  It wrote, “Ἀφίκοντο οἱ ἄνδρες εἰς Ἀθήνας,” which is obviously wrong.  The first word there, “aphikonto“, would actually be “apikonto” in the dialect of Herodotus.  He was well known for dropping aspirations.  The version ChatGPT gave me would be the classical Attic version.

I let ChatGPT know it was wrong, and it dutifully apologized.  Then I asked it to summarize the differences in the dialects.  It said to me:

Herodotus and Ionic writers typically removed initial aspirations, while the Attic dialect retained them. For example, “Ἀφίκοντο” (Herodotus) vs. “ἔφικοντο” (Attic)

Uh, you don’t need to know the Greek alphabet to see it made exactly the same mistake, again.  It should have said that Herodotus would use “Ἀπίκοντο” (apikonto) whereas in Attic the word would be “Ἀφίκοντο” (aphikonto).

Here’s a little more computer-person-friendly example.  I decided to ask about my other favorite subject, retro computing.  I asked, “What is the difference between the Apple IIGS ProDOS 8, ProDOS 16, and GS/OS operating systems?”  Among other things, ChatGPT responded that GS/OS “introduced features such as a desktop metaphor, drop-down menus, and a hierarchical file system.”  Which happens to be wrong.  ProDOS 16 actually introduced those features.  (In fact, ProDOS 8 introduced the hierarchy.)  I pointed this out.  “You are correct, and I apologize for the oversight in my previous response. I misspoke in stating that GS/OS introduced [those] features.”  Hmmm.

I’m testing it and I know the right answers.   The problem would arise if I were trusting ChatGPT to provide me with a correct answer.  There have been plenty of examples of mistakes made by ChatGPT, such as adding a “permit any” to the top of access-lists.

The issue is, ChatGPT sounds authoritative when it responds.  Because it is a computer and speaking in natural language, we have a tendency to trust it.  And yet it has consistently proven it can be quite wrong on even simple subjects.  In fact, our own version has the caveat “Cisco Enterprise Chat AI may produce inaccurate information about people, places, or facts” at the bottom of the page, and I’m sure most implementations of ChatGPT carry a similar warning.

Search engines place the burden of determining truth or fiction upon the user.  I get hundreds or thousands of results, and I have to decide which is credible based on the authority of the source, how convincing it sounds, etc.  AI provides one answer.  It has done the work for you.  Sure, you can probe further, but in many cases you won’t even know the answer served back is not trustworthy.  For that reason, I see AI tools as potentially very misleading, and even harmful, in some circumstances.

That aside, I do like the fact I can dialog with it in ancient Latin and Greek, even if it makes mistakes.  It’s a good way to kill time in boring meetings.

As I mentioned in my last post, I like modeling networks using tools like Cisco Modeling Labs or GNS3.  I recalled how, back in TAC, I had access to a Cisco-internal (at the time) tool called IOS on Unix, or IOU.  This enabled me to recreate customer environments in minutes, with no need to hunt down hardware.  Obviously IOU didn’t work for every case.  Often, the issue the customer raised was very hardware-specific, even when it was a “routing protocol” issue.  However, if I could avoid hardware, I would do the recreate virtually.

When I worked at Juniper (in IT), we did a huge project to refresh the WAN.  This was just before SD-WAN came about.  We sourced VPLS from two different service providers, and then ran our own layer 3 MPLS on top of it.  The VPLS just gave us layer 2 connectivity, like a giant switch.  We had two POPs in each region which acted as aggregation points for smaller sites.  For these sites we had CE routers deployed on prem, connecting to PE routers in the POPs.  This is a basic service provider configuration, with us as a service provider.  Larger sites had PE routers on site, with the campus core routers acting as CEs.

We got all the advantages of layer 3 MPLS (traffic engineering, segmentation via VRF) without the headaches (peering at layer 3 with your SP, yuck!)

As the “network architect” for IT, I needed a way to model and test changes to the network.  I used a tool called VMM, which was similar to IOU.  Using a text file I could define a topology of routers, and their interconnections.  Then I used a Python script to start it up.  I then had a fully functional network model running under a hypervisor, and I could test stuff out.
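The idea behind a tool like VMM can be sketched in a few lines.  To be clear, this is a hypothetical illustration, not VMM’s actual file format or launcher: a plain-text file names each router and its neighbors, and a small Python script parses it before spinning up the virtual devices.

```python
# Hypothetical sketch only -- VMM's real format and startup machinery
# differed.  The idea: describe the topology in plain text, parse it,
# then hand the result to whatever spins up the virtual routers.

TOPOLOGY = """\
# router-name: neighbor1 neighbor2 ...
pop1-pe: pop2-pe site1-ce
pop2-pe: pop1-pe site2-ce
site1-ce: pop1-pe
site2-ce: pop2-pe
"""

def parse_topology(text):
    """Return {router: [neighbors]} from the text definition."""
    links = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # strip comments and whitespace
        if not line:
            continue
        name, peers = line.split(":")
        links[name.strip()] = peers.split()
    return links

if __name__ == "__main__":
    for router, peers in parse_topology(TOPOLOGY).items():
        print(f"{router} -> {', '.join(peers)}")
```

Because the definition is just text, the whole lab topology can live in version control alongside the configs it tests.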

I never recreated the entire network–it wasn’t necessary.  I created a virtual version with two simulated POPs, a tier 1 site (PE on prem), and a tier 2 site (PE in POP).  I don’t fully remember the details, there may have been one or two other sites in my model.

For strictly testing routing issues assuming normally functioning devices, my VMM-based model was a dream.  Before we rolled out changes we could test them in my virtual lab.  We could apply the configuration exactly as entered into the real device, to see what effect there would be in the network.  I just didn’t have the cool marketing term “digital twin” as it didn’t exist yet.

I remember working on a project to roll out multicast on the WAN using Next Generation Multicast VPN (NGMVPN).  NGMVPN was (is?) a complex beast, and as I designed the network and sorted out things like RP placement, I used my virtual lab.  I even filed bugs against Juniper’s NGMVPN code, bugs I found while using my virtual devices.  I remember the night we did a pilot rollout to two sites.  Our Boston office dropped off the network entirely.  Luckily we had out-of-band access and rolled back the config.  I SSHd into my virtual lab, applied the config, and spent a short amount of time diagnosing the problem (a duplicate loopback address), and did so without the stress of troubleshooting a live network.

I’ve always been a bit skeptical of the network simulation/modeling approach.  This is where you have some software intelligence layer that tries to “think through” the consequences of applied changes.  The problem is the variability of networks.  So many things can happen in so many ways.  Actual devices running actual NOS code in a virtual environment will behave exactly the way real devices will, given their constraints.  (Such as:  not emulating the hardware precisely, not emulating all the different interface types, etc.)  I may be entirely wrong on this one, as I’ve spent virtually no time with these products.

The problems I was modeling were protocol issues amongst a friendly group of routers.  When you add in campus networking, the complexity increases quite dramatically.  Aside from wireless being in the mix, you also have hundreds or thousands of non-network devices like laptops, printers, and phones which often cause networks to behave unpredictably.  I don’t think our AI models are yet at the point where they can predict what comes with that complexity.

Of course, the problem you have is always the one you don’t predict.  In TAC, most of the cases I took were bugs.  Hardware and software behave unexpectedly.  As in the NGMVPN case, if there is a bug in software that is strictly protocol-related, you might catch it in an emulation.  But many bugs exist only on certain hardware platforms, or in versions of software that don’t run virtually, etc.

As for digital twins, I do think learning to use CML (of course I’m Cisco-centric) or similar tools is very worthwhile.  Preparing for major changes offline in a virtual environment is a fantastic way to prep for the real thing.  Don’t forget, though, that things never go as planned, and thank goodness for that, as it gives us all job security.

We all have to make a decision, at some point in our career, about whether or not we get into the management track.  At Cisco, there is a very strong path for individual contributors (IC).  You can become a principal (director-level), a distinguished (senior director-level), and a fellow (VP-level) as an IC, never having to manage a soul.  When I married my wife, I told her:  “Never expect me to get into management, I’m a technical guy and I love being a technical guy, and I have zero interest in managing people.”

Thus, I surprised myself back in 2016 when my boss asked me, out of the blue, to step into management and I said yes.  Partly it was my love of the Technical Marketing Engineer role, partly my desire to have some authority behind my ideas.  At one point my team grew to fifty TMEs.

All technical people know that, when you go that route, your technical skills will atrophy as you will have less and less hands-on experience.  This is very true.  In the first couple of years, I kept up my formidable lab, then over time it sat in Cisco building 23, unused and consuming OpEx.  I almost surrendered it numerous times.

Through attrition and corporate shenanigans, my team is considerably smaller (25 or so) and run by a very strong management team.  Last week, I decided to bring the lab back up.  I’ve been spending a lot of time sorting through servers and devices, figuring out which to scrap and which to keep.  (Many of my old servers require Flash to access the CIMC, which is not feasible going forward.)  I haven’t used ESXi in years, and finding out I can now access vSphere in a browser–from my Mac!!–was a pleasant surprise.  Getting CIMCs upgraded, ESXi installed, and a functional Ubuntu server was a bit of a pain, but this is oddly the sort of pain I miss.

I have several Cat 9k switches in my lab, but I installed Cisco Modeling Labs on one of my servers.  (The nice thing about working for Cisco is the license is free.)  I used VIRL many years ago, which I absolutely hated.  CML is quite slick.  It was simple to install, and within a short time I had a lab up and running with a CSR1kv, a Cat 8k, and a virtual Cat 9k.

When I was in TAC I discovered IOS on Unix, or IOU.  Back then, TAC agents were each given a Sun Sparc station, and I used mine almost exclusively to run IOU.  (I thought it was so cool back then to have a Sun box on my desk.  And those of you who remember them will know what I mean when I say I miss the keyboard.)  IOU allowed me to define a topology in a text file, and then spin up several virtual IOS devices on the Sparc station in that topology.  It only supported simulated Ethernet links, but for pure routing protocols cases, IOU was more than adequate to recreate a customer environment.  In 15 minutes I could have my recreate up and running.  Other engineers would open a case to have a recreate built by our lab team, which could take days.  I never figured out why they wouldn’t use IOU.

When I left Cisco I had to resort to GNS3, which was a pretty helpful piece of software.  Then, when I went to Juniper I used Junosphere, or actually an internal version of it called VMM, to spin up topologies.  VMM was awesome.  Juniper produced a virtual version of its MX router that was so faithful to the real thing that I could pass the JNCIE Service Provider exam without ever having logged into a real one, at least until exam day.

It’ll be interesting to see what I can do on virtual 9ks in CML–I hear there are some limitations.  But I do plan to spend as much time as possible using the virtual version over the real thing.

One thing I think I lost sight of as I (slightly) climbed the corporate ladder was the necessity of technical leadership.  We have plenty of people managers and MBAs.  We badly need leaders who understand the technology.  And while I have a lot of legacy knowledge in my mental database, it’s in need of a refresh.  It’s hard to stay sharp technically when reading about new technologies in PowerPoint.

The other side of this is that, as engineers, we love the technology.  I love making stuff work.  My wife is not technical at all, and cannot understand why I get a thrill from five little exclamation points when a ping goes through.  I don’t love budgets and handling HR cases, although I’ve come to learn why I need to do those things.  I need to do them so my people can function optimally.  And I’m happy to do them for my team.

On the other hand, I’m glad to be in the frigid, loud, harsh lighting of a massive Cisco lab again. It’s very cool to have all this stuff.   Ain’t life grand!

Update:  From Fred, who was the guy referenced in the first paragraph:

Actually it was a white button with a router icon on it and “make cli great again”, I know this because it was me. It was June 2016. Needless to say in my view that did not age well.

When I attended Cisco Live sometime around the election of Donald Trump, there was a fellow walking around with a red hat with white lettering on it:  MAKE CLI GREAT AGAIN.  Ha!  I love Cisco Live.  These are my people.

I remember back when I worked at Juniper, one exec looked at me working on CLI and said, “you know that’s going to be gone soon.  It’ll all be GUI.”  That was 8 years ago…how’s that going?  When I work on CLI (and I still do!), or programming, my wife always says, “how can you stare at that cryptic black screen for hours?”  Hey, I’ve been doing it since I was a kid.

The black screen won’t go away, I’m afraid.  I’ve recently been learning iOS app development for fun (not profit).  It’s surprisingly hard given the number of successful app developers out there.  I may be too used to Python to program in Swift, and my hatred of object-oriented programming doesn’t help me when there is no way to avoid it in Swift.  Anyways, it took me about a week to sort out the different UI frameworks used in iOS.  There are basically three:

  • Storyboards.  Storyboards are a graphical design framework for UI layout.  Using storyboards, you drag and drop UI elements like buttons and textfields onto a miniature iPhone screen.
  • UIKit.  (Technically storyboards use UIKit, but I don’t know what else to call this.)  Most high-end app developers will delete the storyboard in their project and write the UI as code.  They actually type in code to tell iOS what UI elements they want, how to position them, and what to do in the event they are selected.  Positioning is fairly manual and is done relative to other UI elements.
  • SwiftUI.  Apple is pushing towards this model and will eventually deprecate the other two.  SwiftUI is also a UI-as-code model, but it’s declarative instead of imperative.  You tell SwiftUI what you want and roughly how you want to position things, and Swift does it for you.

Did you catch my point?  The GUI-based layout tool is going away in favor of UI-as-code!  The black screen always comes back!

The difference between computer people and non-computer-computer-people (many industry MBAs, analysts, etc.) is that computer people understand that text-based interaction is far more efficient, even if the learning curve is steeper.

Andrew Tanenbaum, author of the classic Computer Networks, typeset his massive work in troff.  Troff is a text-based typesetting tool where you enter input like this:

.ll 3i
.mk a
.ce
Preamble
.sp
We, the people of the United States, in order
to form a more perfect Union...

Why doesn’t he just use Word?  I’ll let Dr. Tanenbaum speak for himself:

All my typesetting is done using troff. I don’t have any need to see what the output will look like. I am quite convinced that troff will follow my instructions dutifully. If I give it the macro to insert a second-level heading, it will do that in the correct font and size, with the correct spacing, adding extra space to align facing pages down to the pixel if need be. Why should I worry about that? WYSIWYG is a step backwards. Human labor is used to do that which the computer can do better.  (Emphasis added.)

I myself am not quite enough of a cyborg to use troff (though I use vi), but I have used LaTeX with far better results than Word.  (Dr. Tanenbaum says “real authors use troff,” however.)

One of my more obscure interests (I have many) is Gregorian Chant.  Chant uses a musical notation which is markedly different from modern music notation, and occasionally I need to typeset it.  I use a tool called Gregorio, where I enter the chant like this:

(cb3) Ad(d)ór(f’)o(h) te(h’) de(h)vó(hi)te,(h.) (,) la(g)tens(f) Dé(e’)i(d)tas,(d.)

The letters in parentheses represent the different musical notes.  I once tried typesetting the chant graphically, and it was far more tedious than the above.  Why not enter what I want and let the typesetting system do the work?

Aside from the mere efficiency, text files can be easily version controlled and diff’d.  Try that with your GUI tool!
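The diff point is easy to demonstrate.  Here is a minimal sketch using Python’s standard `difflib` to compare two versions of a hypothetical router config kept as plain text; the config content is made up for illustration.

```python
import difflib

# Two versions of a (hypothetical) config fragment stored as plain text.
before = """interface Loopback0
 ip address 10.0.0.1 255.255.255.255
""".splitlines(keepends=True)

after = """interface Loopback0
 ip address 10.0.0.2 255.255.255.255
""".splitlines(keepends=True)

# unified_diff produces the familiar -/+ output, ready for review
# or a commit message.
diff = difflib.unified_diff(before, after,
                            fromfile="router.cfg.v1", tofile="router.cfg.v2")
print("".join(diff))
```

One glance at the `-`/`+` lines shows exactly what changed between the two versions.  A GUI tool’s saved state, by contrast, usually diffs as an opaque binary blob.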

It’s very ironic that many of my customers who use controllers like DNAC or vManage are actually accessing the tool through APIs.  They bought a GUI tool, but they prefer the black screen.  The controller in this case becomes a point of aggregation for them, a system which at least does discovery and allows some level of abstraction.

The non-computer-computer-people look at SwiftUI, network device CLI, troff, Gregorio, APIs, and rend their garments, crying out to heaven, “why, oh why?!”  Some may even remember the days of text-based editing systems on their DOS machines, which they could never learn, and the great joy that WYSIWYG brought them.  It reminds me of a highly incompetent sales guy I worked with at the Gold partner back in the day.  He once saw me configuring a router and said:  “Wow, you still use DOS to configure routers!”

“It’s actually IOS CLI, not DOS.”

“That’s DOS!” he densely replied.  “I remember DOS.  I can’t believe you still use DOS!”

It’s funny that no matter how hard we try to get away from code, we always come back to it.  We’re hearing a lot about “low code” environments these days.  It tells you something when the first three Google hits on “low code” just come back to Gartner reports.  Gee, have we been down this path before?  Visual Basic was invented in 1991.  If low code is so great, why is Apple moving from storyboards to SwiftUI?

In my last post I wrote about the war on expertise.  This is one of the fronts in the war.  The non-computer-computer-people cannot understand the black screen, and are convinced they can eliminate it.  They learned about “innovation” in business school, and read case studies about Windows 95 and the end of DOS.  They read about how companies like Sun Microsystems went belly-up because they are not “disruptive.”  They did not, however, read about all the failed attempts to eliminate the black screen, spanning decades.  I believe it was George Santayana who said, “If you don’t remember computer history, you’re doomed to repeat it.”

It’s impossible to count how many people at my college wanted to be “writers”.  So many early-twenty-somethings here in the US think they are going to spend their lives as screenwriters or novelists.  My colleagues from India tell me most people there want to be doctors or engineers, which tells you something about the decline of the United States.

Back in the mid-2000’s, a popular buddy-comedy came out about a novelist and an actor and their adventures in the “California wine country”.  The author of the film is an LA novelist.  The only people he knew, and the only characters he could create, were writers and actors.  I read that his first novel was about a screenwriter.  The movie was popular, but I found the characters utterly boring.  Who cares about a novelist and his romantic adventures?  Herman Melville spent years at sea, giving him the material to write Moby Dick.  Fyodor Dostoevsky wanted to be a writer from an early age, but he spent years in a prison camp followed by years of forced military service, to give him a view into nihilism and its effect on the human soul.  The point is, these great writers earned the right to talk about something, they didn’t just go to college and come out a genius with brilliant things to say.

I’ve been hearing a lot about “product management” lately.  I work in product management, in fact, and I’ve worked with product managers for many years.  However, I didn’t realize until recently that product management is the hot new field.  Everyone wants to major in PM in business school.  As one VP I know told me, “people want to be PMs because that’s where CEOs come from.”  Well, like 19-year-olds feeling entitled to be great novelists, b-school students are apparently expecting to become CEOs.  Somewhere missing in this sense of entitlement is that achievement has to be earned, and that it has to be earned by developing specific expertise.  A college student who wants to be a novelist thinks he or she simply deserves to be a novelist by virtue of his or her brilliance;  a b-school PM student apparently thinks the same way about being a CEO.

Back when I worked in TAC, one of my mentors was a TAC engineer who had previously been a product manager for GSR (12000-series) line cards.  He went back to TAC because he wanted to get into the new CRS-1 router and felt it was the best place to learn the new product quickly.  It made sense at the time, but it is inconceivable now that a PM would go to TAC.  The product manager career path is directed towards managing business, not technology, and it would be a step down for product managers to become technical again.

If you don’t work for a tech company, you may not know a lot about product management, but PMs are very important to the development of the products you use.  They decide what products are brought to market and what features they will have, and they prioritize product roadmaps.  They are held accountable for the revenue (or lack thereof) for a product.

Imagine, now, that somebody with that responsibility for, say, a router has no direct experience as a network engineer, but instead has an MBA from Kellogg or Haas or Wharton.  They’ve studied product management as a discipline, but know nothing about the technology that they own.  Suppose this person has no particular interest in or passion for their field–they just want to succeed in business and be a CEO some day.  What do you think the roadmap will look like?  Do you think the product will take into account the needs of the customer?  When various technologists come to such a PM, will he be able to rationally sort through their competing proposals and select the correct technology?

To be clear, I am not criticizing any individual or my current employer here.  This problem extends industry-wide and explains why so many badly conceived products exist.  The problem of corporatism, which I’ve written about often, extends beyond product management too.  How often are decisions in IT departments made by business people who have little to no experience in the field they are responsible for?  I got into network engineering because I was fascinated by it and loved it.  I’m not the best engineer out there–I’ve worked with some brilliant people–but I do care about the industry and the products we make.  And most importantly, I care about network engineers because I’ve been one.

Corporatists believe generic management principles can be learned which apply to any business, and that they don’t really need domain-specific expertise.  They know business, so why would they?  True, there are some business-specific tasks, like finance, where generic business knowledge is really all that’s needed.  But the notion that generic business knowledge qualifies one to be authoritative on technical topics is mistaken.  This is how tech CEOs end up CEOs of coffee companies–it’s just business, right?

I don’t mean to denigrate product management as a discipline.  PMs have an important role to play, and product management is the art of making decisions between different alternatives with constrained resources.  I am saying this:  that if you want to become a product manager, spend the time to learn not just the business, but the actual thing you are product managing.  You’d be better off spending a couple years in TAC out of business school than going straight into PM.  Not that many CEO-aspiring PMs would ever do that, these days.

Now off to write my first novel.

A couple of years back I purchased an AI-powered energy monitoring system for my home.  It clips on to the power mains and monitors amperage/wattage.  I can view a graph showing energy usage over time, which is really quite helpful to keep tabs on my electricity consumption at a time when electricity is expensive.

The AI part identifies what devices are drawing power in my house.  Based simply on wattage patterns, so they claim, the app will tell me this device is a light, that device is an air conditioner, and so on.  An electric oven, for example, consumes so much power and switches itself on and off in such a pattern that AI can identify it.  The company has a large database of all of the sorts of products that can be plugged into an outlet, and it uses its database to figure out what you have connected.

So far my AI energy monitor has identified ten different heaters in my house.  That’s really cool, except for the fact that I have exactly one heater.  When the message popped up saying “We’ve identified a new device!  Heater #10!”, I must admit I wasn’t surprised.  It did raise an eyebrow, however, given that it was summer and over 100 degrees (38 C) outside.  At the very least, you’d think the algorithm could correlate location and weather data with its guesses.
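To see why the heater keeps multiplying, consider a toy version of signature matching.  This is purely my own illustration, not the vendor’s actual algorithm: match an observed wattage/run-time pattern to the nearest entry in a small device database.  Anything resistive pulling around 1,500 watts lands on “heater,” which is roughly how you get to Heater #10 in July.

```python
# Toy illustration (NOT the vendor's algorithm): classify a device by
# finding the nearest signature in a database of (watts, on-minutes).

SIGNATURES = {  # device: (typical watts, typical on-time in minutes)
    "heater":          (1500, 30),
    "oven":            (3000, 15),
    "air conditioner": (2500, 20),
    "light":           (60, 180),
}

def classify(watts, on_minutes):
    """Return the device whose signature is nearest (relative distance)."""
    def distance(sig):
        w, t = sig
        return abs(watts - w) / w + abs(on_minutes - t) / t
    return min(SIGNATURES, key=lambda d: distance(SIGNATURES[d]))

# A hair dryer and a toaster both look like a heater to this approach:
print(classify(1400, 10))   # hair dryer -> heater
print(classify(1600, 5))    # toaster    -> heater
```

With no location, season, or weather signal in the model, a 1,400-watt hair dryer on a 100-degree day is still, confidently, a new heater.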

Many “futurists” who lurk around Silicon Valley believe in a few years we’ll live forever when we merge our brains with AI.  I’ve noticed that most of these “futurists” have no technological expertise at all.  Usually they’re journalists or marketing experts.  I, on the other hand, deal with technology every day, and it leaves me more than a little skeptical of the “AI” wave that’s been sweeping over the Valley for a few years.

Of course, once the “analysts” identify a trend, all of us vendors need to move on it.  (“SASE was hot last fall, but this season SSE is in!”)  A part of that involves labeling things with the latest buzzword even when they have nothing to do with it.  (Don’t get me started on “controllers”…)  One vendor has a tool that opens a TAC case after detecting a problem.  They call this something like “AI-driven issue resolution.”  Never mind that a human being gets the TAC case and has to troubleshoot it–this is the exact opposite of AI.  We can broaden the term to mean a computer doing anything on its own, in this case calling a human.  Hey, is there a better indicator of intelligence than asking for help?

Dynamic baselines are neat.  I remember finding the threshold-alerting capabilities in NMS tools useless back in the ’90s.  Do I set it at 50% of bandwidth?  60%?  80%?  Dynamic baselining determines the normal traffic (or whatever) level at a given time, and sets a variable threshold based on historical data.  It’s AI, I suppose, but it’s basically just pattern analysis.
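A minimal sketch of that idea, with made-up traffic numbers: instead of one fixed threshold, learn the normal level for each hour of the day from history, and alert only when traffic departs from that hour’s baseline.

```python
# Minimal dynamic-baselining sketch: per-hour mean plus k standard
# deviations, learned from (toy) historical samples.

from statistics import mean, stdev

# history[hour] = observed Mbps samples for that hour of day (toy data)
history = {
    3:  [20, 25, 22, 18, 24],       # quiet overnight
    14: [480, 510, 495, 505, 490],  # busy afternoon
}

def threshold(hour, k=3):
    """Baseline mean plus k standard deviations for the given hour."""
    samples = history[hour]
    return mean(samples) + k * stdev(samples)

def is_anomalous(hour, mbps):
    return mbps > threshold(hour)

# 400 Mbps is routine at 2 PM but wildly abnormal at 3 AM:
print(is_anomalous(14, 400))  # False
print(is_anomalous(3, 400))   # True
```

The same 400 Mbps reading is fine or alarming depending on when it happens, which is exactly what a single static threshold can’t express.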

True issue resolution is a remarkably harder problem.  I once sat in a product meeting where we had been asked to determine all of the different scenarios the tool we were developing would be able to troubleshoot.  Then we were to determine the steps the “AI” would take (i.e., what CLI to execute.)  We built slide after slide, racking our brains for all the ways networks fail and how we’d troubleshoot them.

The problem with this approach is that if you think of 100 ways networks fail, when a customer deploys the product it will fail in the 101st way.  Networks are large distributed systems, running multiple protocols, connecting multiple operating systems, with different media types, and they have ways of failing, sometimes spectacularly, that nobody ever thinks about.  A human being can think adaptively and dynamically in a way that a computer cannot.  Troubleshooting an outage involves collecting data from multiple sources, and then thinking through the problem until a resolution is found.  How many times, when I was in TAC, did I grab two or three other engineers to sit around a whiteboard and debate what the problem could be?  Using our collective knowledge and experience, bouncing ideas off of one another, we would often come up with creative approaches to the problem at hand and solve it.  I just don’t see AI doing that.  So, maybe it’s a good thing it phones home for help.

I do see a role for AI and its analysis capabilities in providing troubleshooting information on common problems.  Also, data can be a problem for humans to process.  We’re inundated by numbers and often cannot easily find patterns in what we are presented.  AI-type tools can help to aggregate and analyze data from numerous sources in a single place.  So, I’m by no means saying we should be stuck in 1995 for our NMS tools.  But I don’t see AI tools replacing network operations teams any time soon, despite what may be sold.

And I certainly have no plans to live forever by fusing my brain with a computer.  We can leave that to science fiction writers, and their more respectable colleagues, the futurists.

I haven’t posted in a while, for the simple reason that writing a blog is a challenge.  What the heck am I going to write about?  Sometimes ideas come easily, sometimes not.  Of course, I have a day job, and part of that day job involves Cisco Live, which is next week, in person, for the first time in two years.  Getting myself ready, as well as coordinating with a team of almost fifty technical marketing engineers, does not leave a lot of free time.

For the last several in-person Cisco Lives, I did a two-hour breakout on programmability and scripting.  The meat of the presentation was NETCONF/RESTCONF/YANG, and how to use Python to configure/operate devices using those protocols.  I don’t really work on this anymore, and I have a very competent colleague who has taken over.  I kept delivering the session because I loved doing it.  But good things have to come to an end.  At the last in-person Cisco Live (Barcelona 2020), I had just wrapped up delivering the session for what I assumed would be the last time.  A couple of attendees approached me afterwards.  “We love your session, we come to it every year!” they told me.

I was surprised.  “But I deliver almost the same content every year,” I replied.  “I even use the same jokes.”

“Well, it’s our favorite session,” they said.

At that point I resolved to keep doing it, even as my hands-on experience with the material faded.  Then, COVID.
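For the curious, the flavor of that programmability session can be sketched in a few lines of Python.  This is a minimal illustration, not material from the actual session: it builds a NETCONF <edit-config> payload against the standard ietf-interfaces YANG model using only the standard library, and the comment shows how a library like ncclient would typically send it (the interface name and description are placeholders).

```python
# Build a NETCONF <edit-config> config body that sets an interface
# description, modeled on the standard ietf-interfaces YANG module.
# In practice you would hand the result to a NETCONF client library,
# e.g. ncclient:
#   manager.connect(host=..., port=830, ...).edit_config(
#       target="running", config=payload)
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
IF_NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def build_edit_config(if_name: str, description: str) -> str:
    """Return the <config> element of an <edit-config> as an XML string."""
    config = ET.Element(f"{{{NC_NS}}}config")
    interfaces = ET.SubElement(config, f"{{{IF_NS}}}interfaces")
    interface = ET.SubElement(interfaces, f"{{{IF_NS}}}interface")
    ET.SubElement(interface, f"{{{IF_NS}}}name").text = if_name
    ET.SubElement(interface, f"{{{IF_NS}}}description").text = description
    return ET.tostring(config, encoding="unicode")

payload = build_edit_config("GigabitEthernet1", "uplink to core")
print(payload)
```

The point the session made is visible even in this toy: the payload is generated from a data model rather than typed at a CLI, which is what makes it scriptable and testable.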

I had one other session which was also a lot of fun, called “The CCIE in an SDN world.”  Because it was in the certification track, I wasn’t taking a session away from my team by doing it.  There is a bit about the CCIE certification, its history, and its current form, but the thrust of it is this:  network engineers are still relevant, even today with SDN and APIs supposedly taking over everything.  There is so much marketing fluff around SDN and its offshoots, and while there may be good ideas in there (and a lot of bad ones), nevertheless we still need engineers who study how to manage and operate data networks, just like we did in the past.

I will be delivering that session.  I have 50 registered attendees, which is a far cry from the 500 I used to pack in at the height of the programmability gig.  Being a Senior Director, you end up in limbo between keynotes (too junior) and breakouts (too senior).  But the cert guys were gracious enough to let me speak to my audience of 50.

Cisco Live is really the highlight of the TME role, and I’m happy to finally be back.  Let’s just hope I’m still over my stage fright; I haven’t had an audience in years!

I shall avoid naming names, but when I worked for Juniper we had a certain CEO who pumped us up as the next $10 billion company.  It never happened, and he left and became the CEO of Starbucks.  Starbucks has nothing to do with computer networking at all.  Why was he hired by Starbucks?  How did his (supposed) knowledge of technology translate into coffee?

Apparently it didn’t.  Howard Schultz, Starbucks’ former CEO, is back at the helm.  “I wasn’t here the last four years, but I’m here now,” he said, according to an article in the Wall Street Journal (paywall).  “I am not in business, as a shareholder of Starbucks, to make every single decision based on the stock price for the quarter…Those days, ladies and gentlemen, are over.”  Which of course, implies that that was exactly what the previous CEO was doing.

What happened under the old CEO?  “Workers noticed an increasing focus on speed metrics, including the average time to prepare an order, by store.”  Ah, metrics, my old enemy.  There’s a reason one of my favorite books is called The Tyranny of Metrics and why I wrote a TAC Tales piece just about the use of metrics in TAC.  More on that in a bit.

As I look at what I refer to as “corporatism” and its effect on our industry, it often becomes apparent that the damage of this ethos extends beyond tech.  The central tenet of corporatism, as I define it, is that organizations are best run by people who have no particular expertise other than management itself.  That is, these individuals are trained and experienced in generic management principles, and this is what qualifies them to run businesses.  These generic management skills are transferable, meaning that if you become an expert in managing a company that makes paper clips, you can successfully use your management skills to run a company that makes, say, medical-device software.  Or pharmaceuticals.  Or airplanes.  Or whatever.  You are, after all, a manager, maybe even a leader, and you just know what to do without any deep expertise or hard-acquired industry-specific knowledge.

Those of us who spend years, even decades, acquiring deep technical knowledge of our fields are, according to this ethos, the least qualified to manage and lead.  That’s because we are stuck in our old ways of doing things, and therefore we don’t innovate, and we probably make things complex, using funny acronyms like EIGRP, OSPF, BGP, STP, MPLS, L2VNI, etc., to confuse the real leaders.

Corporatists simply love metrics.  They may not understand, say, L2VNIs, but they look at graphs all day long.  Everything has to be measured in their world, because once it’s measured it can be graphed, and once it’s graphed it’s simply a matter of making the line go in the right direction.  Anyone can do that!

Sadly, as Starbucks seems to be discovering, life is messier than a few graphs.  Management by metric usually leads to unintended consequences, and frequently those who operate in such systems resort to metric-gaming.  As I mentioned in the TAC Tale, measuring TAC agents on create-to-close numbers led to many engineers avoiding complex cases and sticking with RMAs to get their numbers looking good.  Tony Hsieh at Zappos, whatever problems he may have had, was totally right when he had his customer service reps stay on the phone as long as needed with customers, hours if necessary, to resolve an issue with a $20 pair of shoes.  That would never fly with the corporatists.  But he understood that customer satisfaction would make or break his business, and it’s often hard to put a number on that.

Corporatism of various sorts has been present in every company I’ve worked for.  The best, and most successful, leadership teams I’ve worked for have avoided it by employing leaders who grew up within the industry.  This doesn’t make them immune from mistakes, of course, but it allows them to understand their customers, something corporatists have a hard time with.

Unfortunately, we work in an industry (like many) in which the stock value of companies is determined by an army of non-technical “analysts” who couldn’t configure a static route, let alone explain what one is.  And yet somehow, their opinions on (e.g.) the router business move the industry.  They of course adhere to the ethos of corporatism.  And I’m sure they get paid better than I do.

Starbucks seems to be correcting a mistake by hiring back someone who actually knows their business.  Would that all corporations learned from Starbucks’ mistake, and ensured their leaders know at least something about what they are leading.

I must admit, I’m a huge fan of Ivan Pepelnjak.  This despite the fact that he is a major thorn in the side of product management at Cisco.  It is, of course, his job to be a thorn in our side and Ivan is too smart to ignore.  He has a long history with Cisco and with networking, and his opinions are well thought out and highly technical.  He is a true network engineer.  The fact that I like Ivan does not mean he gave me an easy time a few years back when I did a podcast with him on NETCONF at Cisco Live Berlin.

Ivan had an interesting post recently entitled “Keep Blogging, Some of Us Still Read”.  It reminds me of my own tongue-in-cheek FAQ for this blog, in which I said I wouldn’t use a lot of graphics because I intended my blog for “people who can read”.  As a blogger, I think I quite literally have about 3 regular readers, which occasionally makes me wonder why I do it at all.  I could probably build a bigger readership if I worked at it, but I really don’t work at it.  I think part of the reason I do it is simply that I find it therapeutic.

Anyhow, the main claim Ivan is responding to is that video seems to be dominant these days and blogging is becoming less rewarding.  There is no question video creation has risen dramatically, and in many ways it’s easier to get noticed on YouTube than on some random blog like mine.  Then again, with the popularity of Substack I think people are actually still reading.

Ivan says “Smart people READ technical content.”  Well, perhaps.  I remember learning MPLS back in 2006 when I worked at TAC.  I took a week off to run through a video series someone had produced and it was one of the best courses I’ve taken.  Sometimes a technical person doesn’t want to learn by reading content.  Sometimes listening to new concepts explained well at a conversational pace and in a conversational style is more conducive to actually understanding the material.  This is why people go to trade shows like Cisco Live.  They want to hear it.

I’ve spent a lot of time on video lately, developing a series on technical public speaking as well as technical videos for Cisco.  In the process I’ve had to learn Final Cut Pro and DaVinci Resolve.  Both have, frankly, horrendous user interfaces that are hard to master.  Nine times out of ten I turn to a YouTube video when I’m stuck trying to do something.  Especially with GUI-based tools, video teaches me something much faster than screenshots do.

On the other hand, it’s much harder to produce video.  I can make a blog post in 15 minutes.  YouTube videos take hours and hours to produce, even simple ones like my Coffee with TMEs series.

The bottom line is I’m somewhere in the middle here.  Ivan’s right, technical documentation in video format is much harder to search and to use for reference.  That said, I think video is often much better for learning, that is, for being guided through an unfamiliar concept or technology.

If you are one of my 3 regular readers and you would prefer to have my blogs delivered to your inbox, please subscribe at https://subnetzero.substack.com/ where I am cross-posting content!

A post recently showed up in my LinkedIn feed.  It was a video showing a talk by Steve Jobs and claiming to be the “best marketing video ever”.  I disagree.  I think it is the worst ever.  I hate it.  I wish it would go away.  I have deep respect for Jobs, but on this one, he ruined everything and we’re still dealing with the damage.

A little context:  In the 1990s, Apple was in its “beige box” era.  I was actively involved in desktop support for Macs at the time.  Most of my clients were advertising agencies, and one of them was TBWA Chiat Day, which had recently been hired by Apple.  Macs, once a brilliant product line, had languished, and had an out-of-date operating system.  The GUI was no longer unique to them as Microsoft had unleashed Windows 95.  Apple was dying, and there were even rumors that Microsoft would acquire it.

In came Steve Jobs.  Jobs was what every technology company needs–a visionary.  Apple was afflicted with corporatism, and Jobs was going to have none of it.

One of his most famous moves was working with Chiat Day to create the “Think Different” ad campaign.  When it came out, I hated it immediately.  First, there was the cheap grammatical trick to get attention.  “Think” is a verb, so it’s modified by an adverb (“differently”).  By using poor grammar, Apple got press beyond their purchased ad runs.  Newspapers devoted whole articles to whether Apple was teaching children bad grammar.

The ads featured various geniuses like Albert Einstein and Gandhi and proclaimed various trite sentiments about “misfits” and “round pegs in square holes”.  But the ads said nothing about technology at all.

If you watch the video you can see Jobs’ logic here.  He said that ad campaigns should not be about product but about “values”.  The ads need to say something about “who we are”.

I certainly knew who Chiat Day was since I worked there.  I can tell you that the advertising copywriters who think up pabulum like “Think Different” couldn’t write technical ads because they could barely turn on their computers without me.  They had zero technological knowledge or capability.  They were creating “vision” and “values” about something they didn’t understand, so they did it cheaply with recycled images of dead celebrities.

Unfortunately, the tech industry seems to have forgotten something.  Jobs didn’t just create this “brilliant” ad campaign with Chiat Day.  He dramatically improved the product.  He got the Mac off the dated OS it was running and introduced OS X.  He simplified the product line.  He killed the Apple clone market.  He moved to new chips like the G3.  He made the computers look cool.  He turned Macs from a dying product into a really good computing platform.

Many tech companies think they can just do the vision thing without the product.  And so they release stupid ad campaigns with hired actors talking about “connecting all of humanity” or whatever their ad agency can come up with.  They push their inane “values” and “mission” down the throats of employees.  But they never fix their products.  They ship the same crappy products they always shipped but with fancy advertising on top.

The thing about Steve Jobs is that everybody admires his worst characteristics and forgets his best.  Some leaders and execs act like complete jerks because Steve Jobs was reputed to be a complete jerk.  They focus on “values” and slick ad campaigns, thinking Jobs succeeded because of these things.  Instead, he succeeded in spite of them.  At the end of the day, Apple was all about the product and they made brilliant products.

The problem with modern corporatism is the army of non-specialized business types who rule over everything.  They don’t understand the products, they don’t understand those who use them, they don’t understand technology, but…Steve Jobs!  So, they create strategies, missions, values, and ad campaigns meant to inspire but actually meaningless and insipid, and they don’t build good products.  And then they send old Jobs videos around on LinkedIn to make the problem worse.