Perspectives

I must admit, I’m a huge fan of Ivan Pepelnjak.  This despite the fact that he is a major thorn in the side of product management at Cisco.  It is, of course, his job to be a thorn in our side, and Ivan is too smart to ignore.  He has a long history with Cisco and with networking, and his opinions are well thought out and highly technical.  He is a true network engineer.  The fact that I like Ivan does not mean he gave me an easy time a few years back when I did a podcast with him on NETCONF at Cisco Live Berlin.

Ivan had an interesting post recently entitled “Keep Blogging, Some of Us Still Read”.  It reminds me of my own tongue-in-cheek FAQ for this blog, in which I said I wouldn’t use a lot of graphics because I intended my blog for “people who can read”.  As a blogger, I think I quite literally have about 3 regular readers, which occasionally makes me wonder why I do it at all.  I could probably build a bigger readership if I worked at it, but I really don’t work at it.  I think part of the reason I do it is simply that I find it therapeutic.

Anyhow, the main claim Ivan is responding to is that video seems to be dominant these days and blogging is becoming less rewarding.  There is no question video creation has risen dramatically, and in many ways it’s easier to get noticed on YouTube than on some random blog like mine.  Then again, with the popularity of Substack I think people are actually still reading.

Ivan says “Smart people READ technical content.”  Well, perhaps.  I remember learning MPLS back in 2006 when I worked at TAC.  I took a week off to run through a video series someone had produced and it was one of the best courses I’ve taken.  Sometimes a technical person doesn’t want to learn by reading content.  Sometimes listening to new concepts explained well at a conversational pace and in a conversational style is more conducive to actually understanding the material.  This is why people go to trade shows like Cisco Live.  They want to hear it.

I’ve spent a lot of time on video lately, developing a series on technical public speaking as well as technical videos for Cisco.  In the process I’ve had to learn Final Cut Pro and DaVinci Resolve.  Both have, frankly, horrendous user interfaces that are hard to master.  Nine times out of ten I turn to a YouTube video when I’m stuck trying to do something.  Especially with GUI-based tools, video is a much faster way for me to learn than screenshots.

On the other hand, it’s much harder to produce video.  I can make a blog post in 15 minutes.  YouTube videos take hours and hours to produce, even simple ones like my Coffee with TMEs series.

The bottom line is I’m somewhere in the middle here.  Ivan’s right that technical documentation in video format is much harder to search and to use for reference.  That said, I think video is often much better for learning, that is, for being guided through an unfamiliar concept or technology.

If you are one of my 3 regular readers and you would prefer to have my blogs delivered to your inbox, please subscribe at https://subnetzero.substack.com/ where I am cross-posting content!

A post recently showed up in my LinkedIn feed.  It was a video showing a talk by Steve Jobs and claiming to be the “best marketing video ever”.  I disagree.  I think it is the worst ever.  I hate it.  I wish it would go away.  I have deep respect for Jobs, but on this one, he ruined everything and we’re still dealing with the damage.

A little context:  In the 1990’s, Apple was in its “beige box” era.  I was actively involved in desktop support for Macs at the time.  Most of my clients were advertising agencies, and one of them was TBWA Chiat Day, which had recently been hired by Apple.  Macs, once a brilliant product line, had languished, and had an out-of-date operating system.  The GUI was no longer unique to them as Microsoft had unleashed Windows 95.  Apple was dying, and there were even rumors that Microsoft would acquire it.

In came Steve Jobs.  Jobs was what every technology company needs–a visionary.  Apple was afflicted with corporatism, and Jobs was going to have none of it.

One of his most famous moves was working with Chiat Day to create the “Think Different” ad campaign.  When it came out, I hated it immediately.  First, there was the cheap grammatical trick to get attention.  “Think” is a verb, so it should be modified by an adverb (“differently”).  By using poor grammar, Apple got press beyond their purchased ad runs.  Newspapers devoted whole articles to whether Apple was teaching children bad grammar.

The ads featured geniuses like Albert Einstein and Gandhi and proclaimed trite sentiments about “misfits” and “round pegs in square holes”.  But the ads said nothing about technology at all.

If you watch the video you can see Jobs’ logic here.  He said that ad campaigns should not be about product but about “values”.  The ads need to say something about “who we are”.

I certainly knew who Chiat Day was since I worked there.  I can tell you that the advertising copywriters who think up pabulum like “Think Different” couldn’t write technical ads because they could barely turn on their computers without me.  They had zero technological knowledge or capability.  They were creating “vision” and “values” about something they didn’t understand, so they did it cheaply with recycled images of dead celebrities.

Unfortunately, the tech industry seems to have forgotten something.  Jobs didn’t just create this “brilliant” ad campaign with Chiat Day.  He dramatically improved the product.  He got Mac off the dated OS it was running and introduced OS X.  He simplified the product line.  He killed the Apple clone market.  He developed new chips like the G3.  He made the computers look cool.  He turned Macs from a dying product into a really good computing platform.

Many tech companies think they can just do the vision thing without the product.  And so they release stupid ad campaigns with hired actors talking about “connecting all of humanity” or whatever their ad agency can come up with.  They push their inane “values” and “mission” down the throats of employees.  But they never fix their products.  They ship the same crappy products they always shipped but with fancy advertising on top.

The thing about Steve Jobs is that everybody admires his worst characteristics and forgets his best.  Some leaders and execs act like complete jerks because Steve Jobs was reputed to be a complete jerk.  They focus on “values” and slick ad campaigns, thinking Jobs succeeded because of these things.  Instead, he succeeded in spite of them.  At the end of the day, Apple was all about the product and they made brilliant products.

The problem with modern corporatism is the army of non-specialized business types who rule over everything.  They don’t understand the products, they don’t understand those who use them, they don’t understand technology, but…Steve Jobs!  So, they create strategy, mission, values, meaningless and inspiring but insipid ad campaigns, and they don’t build good products.  And then they send old Jobs videos around on LinkedIn to make the problem worse.

In my years in the corporate world, I’ve attended many corporate self-help type sessions on how to increase leadership, creativity, and innovation.  There are many young consultants who are starting their careers off by helping us to develop new skills in these areas, so I thought I would provide some helpful tips to get started.  Enjoy!

  1. If you are going to do consulting or presentations on innovation and leadership, it’s very important that you have never led anyone or invented anything.  Rather, you simply need to interview people who have done those things.  A lot of them.  Two or three thousand.  Actually, even if it’s only been 10 or 11, just say you’ve interviewed two or three thousand innovators or leaders.  This is called “research.”
  2. It’s especially important, if you are teaching career technology people how to innovate, that you loathe technology and cannot even upgrade your iPhone without help.  Remember, they may understand technology, but you understand how to innovate!  Two different things.
  3. You’re going to be making claims that are either wrong, or so obvious they don’t bear repeating.  Remember that you need to do several things to make those statements credible:
    • Begin by citing unverifiable claims from evolutionary biology as the basis for your statements.  Be sure to mention that we used to live out on open plains where we were at risk of being eaten.  Also be sure to mention “fight or flight.”  Bad:  “Strong leaders need to cultivate loyalty.” Good: “Evolutionary biology has shown us that, back when we lived on the plain and were vulnerable to getting eaten by lions, our brains developed a need to be loyal to a leader.”
    • Next, cite the latest neuroscience to substantiate your claims.  In fact, it doesn’t have to be real neuroscience.  Remember, nobody will ever check!  Just say “the latest research on the brain has shown…” and leave it at that.  Bad:  “To be innovative we need time to think.”  Good:  “The latest neuroscience has shown that our brains can’t innovate when they are overwhelmed and don’t have time to reason properly.”
    • Remember, if you’re going to be hired by corporations and paid thousands of dollars in speaking fees, you need to state obvious truths in a technical way that makes you seem smart.  Invent new terminology so when you regurgitate to people what they already know, you sound authoritative.  For example, instead of saying, “criticism hurts people’s feelings and can cause them to leave,” invent a “criticism-despair cycle.”  Make a diagram with arrows showing “criticism->rejection->despair->attrition”.  See how much more impressive you sound already?
  4. It really helps if you are a “Doctor”.  There are many unaccredited diploma mills that will send you a Ph.D. based on your “life experience.”  Or better yet, just start calling yourself “doctor”.  Do you really think anyone will call and verify your doctorate?

Remember, the most lucrative careers don’t involve building skills through years of hard work, but telling people who know better than you how to do their jobs.  I hope you have a rewarding career as a consultant!

I have to give AWS credit for posting a fairly detailed technical description of the cause of their recent outage.  Many companies rely on crisis PR people to phrase vague and uninformative announcements that do little to inform customers and put their minds at ease.  I must admit, having read the AWS post-mortem a couple times, I don’t fully understand what happened, but it seems my previous article on automation running wild was not far off.  Of course, the point of the article was not to criticize automation.  An operation the size of AWS would be simply impossible without it.  The point was to illustrate the unintended consequences of automation systems.  As a pilot and aviation buff, I can think of several examples of airplanes crashing due to out-of-control automation as well.

AWS tells us that “an automated activity to scale capacity of one of the AWS services hosted in the main AWS network triggered an unexpected behavior from a large number of clients inside the internal network.”  What’s interesting here is that the automation event was not itself a provisioning of network devices.  Rather, the capacity increase caused “a large surge of connection activity that overwhelmed the networking devices between the internal network and the main AWS network…”  This is just the old problem of overwhelming link capacity.  I remember one time at Juniper when a lab device started sending a flood of traffic to the Internet, crushing the Internet-facing firewalls.  It’s nice to know that an operation like Amazon faces the same challenges.  At the end of the day, bandwidth is finite, and enough traffic will ruin any network engineer’s day.

“This congestion immediately impacted the availability of real-time monitoring data for our internal operations teams, which impaired their ability to find the source of congestion and resolve it.”  This is the age-old problem, isn’t it?  Monitoring our networks requires network connectivity.  How else do we get logs, telemetry, traps, and other information from our devices?  And yet, when our network is down, we can’t get this data.  Most large-scale customers do maintain a separate out-of-band network just for monitoring.  I would assume Amazon does the same, but perhaps somehow this got crushed too?  Or perhaps what they refer to as their “internal network” was the OOB network?  I can’t tell from the post.

“Operators continued working on a set of remediation actions to reduce congestion on the internal network including identifying the top sources of traffic to isolate to dedicated network devices, disabling some heavy network traffic services, and bringing additional networking capacity online. This progressed slowly…”  I don’t want to take pleasure in others’ pain, but this makes me smile.  I’ve spent years telling networking engineers that no matter how good their tooling, they are still needed, and they need to keep their skills sharp.  Here is Amazon, with presumably the best automation and monitoring capabilities of any network operator, and they were trying to figure out top talkers and shut them down.  This reminds me of the first broadcast storm I faced, in the mid-1990’s.  I had to walk around the office unplugging things until I found the source.  Hopefully it wasn’t that bad for AWS!

Outages happen, and Amazon has maintained a high level of service with AWS since the beginning.  The resiliency of such a complex environment should be astounding to anyone who has built and managed complex systems.  Still, at the end of the day, no matter how much you automate (and you should), no matter how much you assure (and you should), sometimes you have to dust off the packet sniffer and figure out what’s actually going down the wire.  For network engineers, that should be a reminder that you’re still relevant in a software-defined world.

As I write this, a number of sites out on the Internet are down because of an outage at Amazon Web Services.  Delta Air Lines is suffering a major outage.  On a personal note, my wife’s favorite radio app and my Lutron lighting system are not operating correctly.  Of course, this outage is a reminder of the simple principle of not putting all of one’s eggs in a single basket.  AWS became the dominant cloud provider early on, but there are multiple viable alternatives now.  Long before the modern cloud emerged, I regularly ran disaster recovery exercises to ensure business continuity when a data center or service provider failed.  Everyone who uses a cloud provider had better have a backup, and had better figure out a way to test that backup periodically.  A few startups have emerged to make this easier.

While the cause of the outage is yet unknown, there was an interesting comment in a Newsweek article on the outage.  Doug Madory, director of internet analysis at Kentik Inc., said:  “More and more these outages end up being the product of automation and centralization of administration…”  I’ve been involved in automation in some form or another for my entire six years at Cisco, and one aspect of automation is not talked about enough:  automation gone wild.  Let me give a non-computer example.

Back when I worked at the San Francisco Chronicle, the production department installed a new machine in our Union City printing plant.  The Sunday paper, back then, had a large number of inserts with advertisements and circulars that needed to be stuffed into the paper.  They were doing this manually, if you can believe it.

The new machine had several components.  One part of the process involved grabbing the inserts and carrying them in a conveyor system high above the plant floor, before dropping them down into the inserter.  It’s hard to visualize, so I’ve included a picture of a similar machine.

You can see the inserts coming in via the conveyor, hanging vertically.  This conveyor extended quite far.  One day I was in the plant, working on some networking thing or other, and the insert machine was running.  I looked back and saw the conveyor glitch somehow, and then a giant ball of paper started to form in the corner of the room, before finally exploding and raining paper down on the floor of the plant.  There was a commotion and one of the workers had to shut the machine down.

The point is, automation is great until it doesn’t work.  When it fails, it fails big.  You don’t just get a single problem, but a compounding problem.  It wasn’t just a single insert that got hit by the glitch, but dozens of them, if not more.  When you use manual processes, failures are contained.

Let’s tie this back to networking.  Say you need to configure hundreds of devices with some new configuration, perhaps adding a new routing protocol.  If you do it by hand on one device, and suddenly routes start dropping out of the routing table, chances are you won’t proceed with the other devices.  You’ll check your config to see what happened and why.  But if you set up, say, a Python script to run around and do this via NETCONF to 100 devices, suddenly you might have a massive outage on your hands.  The same could happen using a tool like Ansible, or even a vendor network management platform.
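To make that concrete, here is a minimal sketch of what such a mass change might look like in Python with the ncclient library.  The device list, credentials, and configuration payload are hypothetical placeholders, and the error handling is deliberately naive, because that is exactly the point:

from ncclient import manager

devices = ["10.0.0.%d" % i for i in range(1, 101)]              # hypothetical list of 100 devices
new_config = "<config>...routing protocol change...</config>"   # placeholder payload

for host in devices:
    # Push the same change to every device in sequence, with no sanity check in between.
    with manager.connect(host=host, port=830, username="admin",
                         password="admin", hostkey_verify=False) as conn:
        conn.edit_config(target="running", config=new_config)
    print("Pushed change to " + host)

Nothing in that loop stops when the first device starts dropping routes; by the time a human notices, the change is already everywhere.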

There are ways to combat this problem, of course.  Automated checks and validation after changes are an important one, but the problem with this approach is that you cannot predict every failure.  If you program 10 checks, it’s going to fail in way #11, and you’re out of luck.
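Even so, a simple post-change check can at least contain the blast radius by halting the rollout at the first sign of trouble.  Here is a rough sketch of the idea; get_route_count, push_change, and rollback are hypothetical helpers a real deployment would have to supply:

def routes_look_healthy(host, baseline, tolerance=0.10):
    # Compare the post-change route count to the pre-change baseline.
    return get_route_count(host) >= baseline * (1 - tolerance)

for host in devices:
    baseline = get_route_count(host)   # snapshot before touching the device
    push_change(host)                  # hypothetical wrapper around the NETCONF edit above
    if not routes_look_healthy(host, baseline):
        rollback(host)                 # hypothetical rollback helper
        break                          # stop the rollout rather than repeat the failure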

As I said, I’ve spent years promoting automation.  You simply couldn’t build a network like Amazon’s without it.  And it’s critical for network engineers to continue developing skills in this area.  We, as vendors and promoters of automation tools, need to be careful how we build and sell these tools to limit customer risk.

Eventually they got the inserter running again.  Whatever the cause of Amazon’s outage, let’s hope it’s not automation gone wild.

I have written more than once (here and here, for example) about my belief that technological progression cannot always be considered a good thing.  We are surrounded in the media by a form of technological optimism which I find disconcerting.  “Tech” will solve everything from world hunger to cancer, and the Peter Thiels of the world would have us believe that we can even solve the problem of death.  I don’t see a lot of movies these days, but movies used to show a healthy skepticism of technological progress, treating it as a potential threat to the human race.  Some movies that come to mind:

  • Demon Seed (1977), a movie I found profoundly disturbing when I was taken to see it as a child.  I have no idea why anyone took a child to this movie, by the way.  In it, a scientist invents a powerful supercomputer and uses it to automate his house.  Eventually, the computer forms a prosthesis and uses it to inseminate the scientist’s wife to produce a hybrid human-computer being.
  • The Terminator (1984) presented a world in which humans were at war with computers and robots.
  • The Matrix (1999), a movie I actually don’t like, nevertheless presents us with a world in which, again, computers rule humans, this time to the point where we have become fuel cells.

Most readers have certainly heard of the latter two, but I’m guessing almost none have heard of the first.  I could go on.  From Westworld (1973) to RoboCop (1987), there has been movie after movie presenting “tech” not as the key to human progress, but as a potential destroyer of the human race.  I suspect the advent of nuclear weapons had much to do with this view, and with the receding of the Cold War and the ever-present nuclear threat, maybe we are less concerned about the destruction our inventions are capable of.

The other day I was thinking about my own resistance to Apple AirPods.  It then occurred to me that they are only one step away from the “Cerebral Communicator” in another movie you probably haven’t heard of, The President’s Analyst.

Produced in 1967, (spoilers follow!) the film features James Coburn as a psychoanalyst who is recruited to provide therapy to the President of the United States.  At first excited by the prospect, Coburn himself quickly becomes overwhelmed by the stress of his assignment, and decides to flee.  He is pursued by agents of the CIA and FBI (referred to as CEA and FBR in the movie), the KGB, and the mysterious organization called “TPC”.  Filmed in the psychedelic style of the time, with numerous double-crossings and double-double-crossings, the movie is hard to follow.  But, in the end, we learn that this mysterious agency called “TPC” is actually The Phone Company.

1967 was long before de-regulation, and there was a single phone company in the United States controlling all telephone communications.  It was quasi-governmental in size and scope, and thus a suitable villain for a movie like The President’s Analyst.  The ultimate goal of TPC is to implant mini-telephones into people’s brains.  From Wikipedia:

TPC has developed a “modern electronic miracle”, the Cerebrum Communicator (CC), a microelectronic device that can communicate wirelessly with any other CC in the world. Once implanted in the brain, the user need only think of the phone number of the person they wish to reach, and they are instantly connected, thus eliminating the need for The Phone Company’s massive and expensive-to-maintain wired infrastructure.

I’ve only seen the movie a couple times, but perhaps the AirPods stuck in people’s ears remind me too much of the CC.  Already we have seen the remnants of the phone company pushing us toward ever more connectivity, to the point where our phones are with us constantly and we stick earbuds in our heads.  Tech companies love to tell us that being constantly connected to one another is the great path forward for humanity.  Meanwhile, we live in a time as divided as any in history.  If connecting humanity were the solution to the world’s problems, why do we seem to be in a state of bitter conflict?  I wonder if we’ve forgotten the lesson of the Babel fish in Douglas Adams’ science fiction book, The Hitchhiker’s Guide to the Galaxy.  The Babel fish is a convenient device for Adams to explain a dilemma in all science fiction:  how do people from different planets somehow understand each other?  In most science fiction, the aliens just speak English (or whatever) and we never come to know how they could have learned it.  But Adams uses his fictional device to make an amusing point:

The Babel fish is small, yellow, leech-like, and probably the oddest thing in the Universe…if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish…[T]he poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.


I’ve been revising my Cisco Live session on IOS XE programmability, and it’s made me think about programming in general, and a particular idea I’ve been embarrassed to admit I loathe: Object Oriented Programming.

Some context:  I started programming on the Apple II+ in BASIC, which shows my age.  Back then programs were input with line numbers and program control was quite simple, consisting of GOTO and GOSUB statements that jumped around the lines of code.  So, you might have something that looked like this:

10 INPUT "Would you like to [C]opy a File or [D]elete a file?"; A$
20 IF A$ = "C" THEN GOTO 100
30 IF A$ = "D" THEN GOTO 200

This was not really an elegant way to build programs, but it was fairly clear.  Given that code was entered directly into the DOS CLI with only line-by-line editing functionality, it could certainly get confusing to track what happened where when you had a lot of branches in your code.

In college I took one programming course in Pascal.  Pascal was similar in structure to C, just far more verbose.  Using it required a shift to procedural-style thinking, and while I was able to get a lot of code to work in Pascal, my professor was always dinging me for style mistakes.  I tended to revert to AppleSoft BASIC style coding, using global variables and not breaking things down into procedures/functions enough.  (In Pascal, a procedure is simply a function that returns no value.)  Over time I got used to the new way of thinking, and grew to appreciate it.  BASIC just couldn’t scale, but Pascal could.  I picked up C after college for fun, and then attempted C++, the object-oriented version of C.  I found I had an intense dislike for shifting my programming paradigm yet again, and I felt that the code produced by Object Oriented Programming (OOP) was confusing and hard to read.  At that point I was a full-time network engineer and left programming behind.

When I returned to Cisco in 2015, I was assigned to programmability and had to learn the basics of coding again.  What I teach is really basic coding, scripting really, and I would have no idea how to build or contribute to a large project like many of those being done here at Cisco.  I picked up Python, and I generally like it for scripting.  However, Python has OO functionality, and I was once again annoyed by confusing OO code.

In case you’re not familiar with OOP, here’s how it works.  Instead of writing down (quite intuitively) your program according to the operations it needs to perform, you create objects that have operations associated with them according to the type of object.  An example in C-like pseudocode can help clarify:

Procedural:

rectangle_type: my_rect(20, 10)
print get_area(my_rect)

OOP:

rectangle_type: my_rect(20, 10)
print my_rect.get_area()

Note that in OOP, we create an object of type “rectangle_type” and then it has certain attributes associated with it, including functions.  In the procedural example, we just create a variable and pass it to a function which is not in any way associated with the variable we created.

The problem is, this is counter-intuitive.  Procedural programming follows a logical and clear flow.  It’s easy to read.  It doesn’t have complex class inheritance issues.  It’s just far easier to work with.

I was always ashamed to say this, as I’m more of a scripter than a coder, and a dabbler in the world of programming.  But recently I came across a collection of quotes from people who really do know what they are talking about, and I see many people agree with me.

How should a network engineer and programming neophyte approach this?  My advice is this:  Learn functional/procedural style programming first.  Avoid any course that moves quickly into OOP.

That said, you’ll unfortunately need to learn the fundamentals of OOP.  That’s because many, if not most, Python libraries you’ll be using are object oriented.  Even built-in Python data types like strings have a number of OO methods associated with them.  (Want to make a string called “name” lower-case?  Call “name.lower()”.)  You’ll at least need to understand how to invoke a method associated with a particular object class.
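For what it’s worth, here is the rectangle example from above in actual Python, in both styles.  This is just an illustrative sketch, not code from any particular library:

# Procedural style: plain data plus a free-standing function.
def get_area(rect):
    width, height = rect
    return width * height

my_rect = (20, 10)
print(get_area(my_rect))       # 200

# OO style: the data and the behavior live together in a class.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def get_area(self):
        return self.width * self.height

my_rect = Rectangle(20, 10)
print(my_rect.get_area())      # 200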

Meanwhile I’ve been programming in AppleSoft quite a bit in my Apple II emulator, and the GOTO’s are so refreshing!

“Progress might have been alright once, but it has gone on too long.”
–  Ogden Nash

The book The Innovator’s Dilemma appears on the desk of a lot of Silicon Valley executives.  Its author, Clayton Christensen, is famous for having coined the term “disruptive innovation.”  The term has always bothered me, and I keep waiting for the word “disruption” to die a quiet death.  I have the disadvantage of having studied Latin quite a bit.  The word “disrupt” comes from the Latin verb rumpere, which means to “break up”, “tear”, “rend”, “break into pieces.”  The word, as does our English derivative, connotes something quite bad.  If you think “disruption” is good, what would you think if I disrupted a presentation you were giving?  What if I disrupted the electrical system of your heart?

Side note:  I’m fascinated with the tendency of modern English to use “bad” words to connote something good.  In the 1980’s the word “bad” actually came to mean its opposite.  “Wow, that dude is really bad!” meant he was good.  Cool people use the word “sick” in this way.  “That’s a sick chopper” does not mean the motorcycle is broken.

The point, then, of disruption is to break up something that already exists, and this is what lies beneath the b-school usage of it.  If you innovate, in a disruptive way, then you are destroying something that came before you–an industry, a way of working, a technology.  We instantly assume this is a good thing, but what if it’s not?  Beneath any industry, way of working, or technology are people, and disruption is disruption of them, personally.

The word “innovate” also has a Latin root.  It comes from the word novus, which means “new”.  In industry in general, but particularly in the tech industry, we positively worship the “new”.  We are constantly told we have to always be innovating.  The second a technology is invented and gets established, we need to replace it.  Frame Relay gave way to MPLS, MPLS is giving way to SD-WAN, and now we’re told SD-WAN has to give way…  The life of a technology professional, trying to understand all of this, is like a man trying to walk on quicksand.  How do you progress when you cannot get a firm footing?

We seem to have forgotten that a journey is worthless unless you set out on it with an end in mind.  One cannot simply worship the “new” because it is new–this is self-referential pointlessness.  There has to be a goal, or an end–a purpose, beyond simply just cooking up new things every couple years.

Most tech people and b-school people have little philosophical education outside of, perhaps (and unfortunately) Atlas Shrugged.  Thus, some of them, realizing the pointlessness of endless innovation cycles, have cooked up ludicrous ideas about the purpose of it all.  Now we have transhumanists telling us we’ll merge our brains with computers and evolve into some sort of new God-species, without apparently realizing how ridiculous they sound.  COVID-19 should disabuse us of any notion that we’re not actually human beings, constrained by human limitations.

On a practical level, the furious pace of innovation, or at least what is passed off as such, has made the careers of technology people challenging.  Lawyers and accountants can master their profession and then worry only about incremental changes.  New laws are passed every year, but fundamentally the practice of their profession remains the same.  For us, however, we seem to face radical disruption every couple of years.  Suddenly, our knowledge is out-of-date.  Technologies and techniques we understood well are yesterday’s news, and we have to re-invent ourselves yet again.

The innovation imperative is driven by several factors:  Wall Street constantly pushes public companies to “grow”, thus disparaging companies that simply figure out how to do something and do it well.  Companies are pressured into expanding to new industries, or into expanding their share of existing industries, and hence need to come up with ways to differentiate themselves.  On an individual level, many technologists are enamored of innovation, and constantly seek to invent things for personal satisfaction or for professional gain.  Wall Street seems to have forgotten the natural law of growth.  Name one thing in nature that can grow forever.  Trees, animals, stars…nothing can keep growing indefinitely.  Why should a company be any different?  Will Amazon simply take over every industry and then take over governing the planet?  Then what?

This may seem a strange article coming from a leader of a team in a tech company that is handling bleeding edge technologies.  And indeed it would seem to be a heresy for someone like me to say these things.  But I’m not calling for an end to inventing new products or technologies.  Having banged out CLI for thousands of hours, I can tell you that automating our networks is a good thing.  Overlays do make sense in that they can abstract complexity out of networks.  TrustSec/Scalable Group Tags are quite helpful, and something like this should have been in IP from the beginning.

What I am saying is that innovation needs a purpose other than just…innovation.  Executives need to stop waxing eloquent about “disrupting” this or that, or our future of fusing our brains with an AI Borg.  Wall Street needs to stop promoting growth at all costs.  And engineers need time to absorb and learn new things, so that they can be true professionals and not spend their time chasing ephemera.

Am I optimistic?  Well, it’s not in my nature, I’m afraid.  As I write this we are in the midst of the Coronavirus crisis.  I don’t know what the world will look like a year from now.  Business as usual, with COVID a forgotten memory?  Perhaps.  Great Depression due to economic shutdown?  Perhaps.  Total societal, governmental, and economic collapse, with rioting in the streets?  I hope not, but perhaps.  Whatever happens, I do hope we remember that the word “novel”, as in “novel Coronavirus”, comes from the same Latin root as the word “innovation”.  New isn’t always the best.

Two things can almost go without saying:

  1. If you start a blog, you need to commit time to writing it.
  2. When you move up in the corporate world, time becomes a precious commodity.

When I started this blog several years ago, I was a network architect at Juniper with a fair amount of time on my hands.  Then I came to Cisco as a Principal TME, with a lot less time on my hands.  Then I took over a team of TMEs.  And now I have nearly 40 people reporting to me, and responsibility for technical marketing for Cisco’s entire enterprise software portfolio.  That includes ISE, Cisco DNA Center, SD-Access, SD-WAN (Viptela), and more.  With that kind of responsibility and that many people depending on me, writing TAC Tales becomes a lower priority.

In addition, when you advance in the corporate hierarchy, expressing your opinions freely becomes more dangerous.  What if I say something I shouldn’t?  Or, do I really want to bare my soul on a blog when an employee is reading it?  Might they be offended, or afraid I would post something about them?  Such concerns don’t exist when you’re an individual contributor, even at the director level, which I was.

I can take some comfort in the fact that this blog is not widely read.  The handful of people who stumble across it probably will not cause me problems at work.  And, as for baring my soul, well, my team knows I am transparent.  But time is not something I have much of these days, and I cannot sacrifice work obligations for personal fulfillment.  And that’s definitely what the blog is.  I do miss writing it.

Is this a goodbye piece?  By no means.  The blog will stay, and if I can eke out 10 minutes here or there to write or polish an old piece, I will.  Meanwhile, be warned about corporate ladder climbing–it has a way of chewing up your time.

I think it’s fair to say that all technical marketing engineers are excited for Cisco Live, and happy when it’s over.  Cisco Live is always a lot of fun–I heard one person say “it’s like a family reunion except I like everyone!”  It’s a great chance to see a lot of folks you don’t get to see very often, to discuss technology that you’re passionate about with other like-minded people, to see and learn new things, and, for us TMEs, an opportunity to get up in front of a room full of hundreds of people and teach them something.  We all now wait anxiously for our scores, which are used to judge how well we did, and even whether we get invited back.

It always amazes me that it comes together at all.  In my last post, I mentioned all the work we do to pull together our sessions.  A lot of my TMEs did not do sessions, instead spending their Cisco Live on their feet at demo booths.  I’m also always amazed that World of Solutions comes together at all.  Here is a shot of what it looked like at 5:30 PM the night before it opened (at 10 AM).  How the staff managed to clear out the garbage and get the booths together in that time I can’t imagine, but they did.

The WoS mess…

My boss, Carl Solder, got to do a demo in the main keynote.  There were something like 20,000 people in the room and the CEO was sitting there.  I think I would have been nervous, but Carl is ever-smooth and managed it without looking the least bit uncomfortable.

My boss (left) on the big stage!

The CCIE party was at the Air and Space Museum, a great location for aviation lovers such as myself.  A highlight was seeing an actual Apollo capsule.  It seemed a lot smaller than I would have imagined.  I don’t think I would ever have gotten in that thing to go to the moon.  The party was also a great chance to see some of the legends of the CCIE program, such as Bruce Caslow, who wrote the first major book on passing the CCIE exam, and Terry Slattery, the first person to actually pass it.

CCIE Party

I delivered two breakouts this year:  The CCIE in an SDN World, and Scripting the Catalyst.  The first one was a lot of fun because it was on Monday and the crowd was rowdy, but also because the changes to the program were just announced and folks were interested in knowing what was going on.  The second session was a bit more focused and deeper, but the audience was attentive and seemed to like it.  If you want to know what it feels like to be a Cisco Live presenter, see the photo below.

My view from the stage

I closed out my week with another interview with David Bombal, as well as the famous Network Chuck.  This was my first time meeting Chuck, who is a bit of a celebrity around Cisco Live and stands out because of his beard.  David and I had already done a two-part interview (part 1, part 2) when he was in San Jose visiting Cisco a couple months back.  We had a good chat about what is going on with the CCIE, and it should be out soon.

As I said, we love CL but we’re happy when it’s over.  This will be the first weekend in a long time I haven’t worked on CL slides.  I can relax, and then…Cisco Live Barcelona!