

I asked our ChatGPT to create a dialog between two network engineers in the style of the absurdist playwright Eugene Ionesco, author of The Bald Soprano.  It’s not even close to the original, but I do think it captures our existence quite nicely.  “We are the clowns of the digital realm” is perhaps my favorite line:

Engineer 1: Good evening. Have you ever stopped to wonder why networks are like mazes?

Engineer 2: Ah, the eternal question! Networks are like mazes because, well, because mazes are like networks!

Engineer 1: But what purpose do they serve? These intricate paths leading us nowhere. It’s all so absurd!

Engineer 2: Absurd indeed! We tirelessly configure routers and switches, creating connections that resemble a tangled web. Yet, do we ever truly reach our destination?

Engineer 1: No, my friend, we simply pass through endless loops, like Sisyphus rolling the boulder up the hill, only to have it roll back down.

Engineer 2: And what about those cryptic error messages we encounter? They mock us, like whispers of an elusive truth we can never grasp.

Engineer 1: Indeed, they are the whispers of a madman, taunting us with their indecipherable codes. “Error 404: Page not found.” How fitting!

Engineer 2: Ah, the irony! We seek connectivity, yet we are trapped in a world of disconnections and network outages.

Engineer 1: Our lives revolve around troubleshooting, forever trapped in an endless cycle of pinging and tracert-ing.

Engineer 2: It’s a tragic comedy, my friend. We are the clowns of the digital realm, forever chasing packets that vanish into thin air.

Engineer 1: And yet, we persist, day in and day out, our minds slowly unraveling like a faulty Ethernet cable.

Engineer 2: We are but players in this grand theater of network engineering, destined to forever dance to the rhythm of broken connections and elusive solutions.

Engineer 1: So, my friend, let us raise our glasses to the absurdity of it all. To the networks that confound us and the mysteries we strive to unravel!

Engineer 2: Cheers to that, my comrade! May we forever find solace in the chaos and laughter in the face of the inexplicable!

(They clink their glasses together, lost in the absurdity of their network engineer existence.)

I’ve been reluctant to comment on the latest fad in our industry, generative AI, simply because everybody has weighed in on it.  I do also try to avoid commenting on subjects outside of my scope of authority.  Increasingly, though, people are coming to me at work and asking how we can incorporate this technology into our products, how our competitors are doing it, and what our AI strategy is.  So I guess I am an authority.

To be honest, I didn’t play with ChatGPT until this week.  When I first looked at it, it wanted my email address and phone number, and I wasn’t sure I wanted to provide those to our new AI overlords.  So I passed on it.  Then Cisco released an internal-only version, which is supposedly anonymous, so I decided to try it out.

My first impression was, as they say, “meh.”  Obviously its ability to interpret and generate natural language is amazing.  Having it recite details of its data set in the style of Faulkner was cool.  But overall, the responses seemed like warmed-over search-engine results.  I asked it if AI is environmentally irresponsible, since it will require so much computing power.  The response was middle-of-the-road: “no, AI is not environmentally irresponsible,” but “we need to do more to protect the environment.”  Blah, blah.  Non-committal, playing both sides.  Like almost all of its answers.

Then I decided to dive a bit deeper into a subject I know well:  Ancient Greek.  How accurate would ChatGPT be on a relatively obscure subject (and yet one with thousands of years of data behind it)?

Even if you have no interest, bear with me.  I asked ChatGPT if it knew the difference between the Ionic dialect of Herodotus and the more common dialect of classical Athens.  (Our version, at least, does not allow proper names so I had to refer to Herodotus somewhat elliptically.)  It assured me it did.  I asked it to write “the men are arriving at Athens” in the dialect of Herodotus.  It wrote, “Ἀφίκοντο οἱ ἄνδρες εἰς Ἀθήνας,” which is obviously wrong.  The first word there, “aphikonto“, would actually be “apikonto” in the dialect of Herodotus.  He was well known for dropping aspirations.  The version ChatGPT gave me would be the classical Attic version.

I let ChatGPT know it was wrong, and it dutifully apologized.  Then I asked it to summarize the differences in the dialects.  It said to me:

Herodotus and Ionic writers typically removed initial aspirations, while the Attic dialect retained them. For example, “Ἀφίκοντο” (Herodotus) vs. “ἔφικοντο” (Attic)

Uh, you don’t need to know the Greek alphabet to see it made exactly the same mistake, again.  It should have said that Herodotus would use “Ἀπίκοντο” (apikonto) whereas in Attic the word would be “Ἀφίκοντο” (aphikonto.)

Here’s a little more computer-person-friendly example.  I decided to ask about my other favorite subject, retro computing.  I asked, “What is the difference between the Apple IIGS ProDOS 8, ProDOS 16, and GS/OS operating systems?”  Among other things, ChatGPT responded that GS/OS “introduced features such as a desktop metaphor, drop-down menus, and a hierarchical file system.”  Which happens to be wrong.  ProDOS 16 actually introduced those features.  (In fact, ProDOS 8 introduced the hierarchical file system.)  I pointed this out.  “You are correct, and I apologize for the oversight in my previous response. I misspoke in stating that GS/OS introduced [those] features.”  Hmmm.

I’m testing it and I know the right answers.   The problem would arise if I were trusting ChatGPT to provide me with a correct answer.  There have been plenty of examples of mistakes made by ChatGPT, such as adding a “permit any” to the top of access-lists.
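To see why that particular hallucination is so bad, remember that an access-list is evaluated top-down, first match wins.  Here’s a toy evaluator in Python, not anything Cisco ships, purely to illustrate the point: with a “permit any” hoisted to the top, every entry below it is dead code.

    # Toy first-match ACL evaluator -- purely illustrative, not real IOS.
    # Matching here is exact-string or "any"; real ACLs use wildcard masks.
    def evaluate_acl(acl, src):
        for action, source in acl:
            if source in ("any", src):
                return action
        return "deny"  # the implicit deny at the end of every ACL

    # The list as ChatGPT might write it, with "permit any" at the top:
    broken_acl = [
        ("permit", "any"),       # matches every packet...
        ("deny",   "10.0.0.5"),  # ...so this entry can never fire
        ("permit", "10.0.0.99"), # ...and neither can this one
    ]

    print(evaluate_acl(broken_acl, "10.0.0.5"))  # "permit" -- the deny is dead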

The issue is, ChatGPT sounds authoritative when it responds.  Because it is a computer and speaking in natural language, we have a tendency to trust it.  And yet it has consistently proven it can be quite wrong on even simple subjects.  In fact, our own version has the caveat “Cisco Enterprise Chat AI may produce inaccurate information about people, places, or facts” at the bottom of the page, and I’m sure most implementations of ChatGPT carry a similar warning.

Search engines place the burden of determining truth from fiction on the user.  I get hundreds or thousands of results, and I have to decide which are credible based on the authority of the source, how convincing it sounds, and so on.  AI provides one answer.  It has done the work for you.  Sure, you can probe further, but in many cases you won’t even know that the answer served back is untrustworthy.  For that reason, I see AI tools as potentially very misleading, and in some circumstances harmful.

That aside, I do like the fact that I can converse with it in Latin and Ancient Greek, even if it makes mistakes.  It’s a good way to kill time in boring meetings.

A couple of years back I purchased an AI-powered energy monitoring system for my home.  It clips onto the power mains and monitors amperage and wattage.  I can view a graph showing energy usage over time, which is really quite helpful for keeping tabs on my electricity consumption at a time when electricity is expensive.

The AI part identifies what devices are drawing power in my house.  Based simply on wattage patterns, so they claim, the app will tell me this device is a light, that device is an air conditioner, and so on.  An electric oven, for example, consumes so much power and switches itself on and off in such a pattern that AI can identify it.  The company has a large database of all of the sorts of products that can be plugged into an outlet, and it uses its database to figure out what you have connected.

So far my AI energy monitor has identified ten different heaters in my house.  That’s really cool, except for the fact that I have exactly one heater.  When the message popped up saying “We’ve identified a new device!  Heater #10!”, I must admit I wasn’t surprised.  It did raise an eyebrow, however, given that it was summer and over 100 degrees (38 C) outside.  At the very least, you’d think the algorithm could correlate location and weather data with its guesses.
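For fun, here’s a toy sketch of what I imagine the signature matching looks like, plus the sanity check I wish it performed.  Every signature and threshold below is invented for illustration; I have no idea how the vendor actually does it.

    # Toy appliance classifier: match observed wattage and duty cycle against
    # made-up signatures, then cross-check the guess against the weather.
    SIGNATURES = {
        "heater": {"watts": (1200, 1800), "cycle_min": (5, 15)},
        "oven":   {"watts": (2000, 3500), "cycle_min": (3, 10)},
        "light":  {"watts": (8, 100),     "cycle_min": (30, 600)},
    }

    def classify(watts, cycle_min):
        for name, sig in SIGNATURES.items():
            (lo_w, hi_w), (lo_c, hi_c) = sig["watts"], sig["cycle_min"]
            if lo_w <= watts <= hi_w and lo_c <= cycle_min <= hi_c:
                return name
        return "unknown"

    def plausible(device, outside_temp_f):
        # The check I wish it made: no new "heaters" during a heat wave.
        return not (device == "heater" and outside_temp_f > 85)

    guess = classify(watts=1500, cycle_min=10)
    print(guess, plausible(guess, outside_temp_f=102))  # heater False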

Many “futurists” who lurk around Silicon Valley believe that in a few years we’ll live forever by merging our brains with AI.  I’ve noticed that most of these “futurists” have no technological expertise at all.  Usually they’re journalists or marketing experts.  I, on the other hand, deal with technology every day, and it leaves me more than a little skeptical of the “AI” wave that’s been sweeping over the Valley for a few years.

Of course, once the “analysts” identify a trend, all of us vendors need to move on it.  (“SASE was hot last fall, but this season SSE is in!”)  A part of that involves labeling things with the latest buzzword even when they have nothing to do with it.  (Don’t get me started on “controllers”…)  One vendor has a tool that opens a TAC case after detecting a problem.  They call this something like “AI-driven issue resolution.”  Never mind that a human being gets the TAC case and has to troubleshoot it; this is the exact opposite of AI.  Apparently we can broaden the term to mean a computer doing anything on its own, in this case calling a human.  Hey, is there a better indicator of intelligence than asking for help?

Dynamic baselines are neat.  I remember finding the threshold-alerting capabilities in NMS tools useless back in the ’90s.  Do I set the alert at 50% of bandwidth?  60%?  80%?  Dynamic baselining determines the normal traffic (or whatever) level at a given time, and sets a variable threshold based on historical data.  It’s AI, I suppose, but it’s basically just pattern analysis.
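A rough sketch of the idea, assuming nothing about any particular NMS: keep historical utilization samples per hour of day, and alert when the current reading strays more than a few standard deviations from that hour’s norm.

    # Toy dynamic baseline: per-hour mean + k * stddev over historical samples.
    from statistics import mean, stdev

    def anomalous(history, hour, current, k=3):
        # history maps hour-of-day (0-23) to past utilization samples (%)
        samples = history[hour]
        return current > mean(samples) + k * stdev(samples)

    # Fake history: quiet overnight, busy during the day.
    history = {h: [20, 22, 25, 21, 23] if h < 8 else [60, 65, 70, 62, 68]
               for h in range(24)}

    print(anomalous(history, hour=3,  current=55))  # True: 55% at 3 AM is odd
    print(anomalous(history, hour=14, current=55))  # False: a normal afternoon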

True issue resolution is a remarkably harder problem.  I once sat in a product meeting where we had been asked to determine all of the different scenarios the tool we were developing would be able to troubleshoot.  Then we were to determine the steps the “AI” would take (i.e., which CLI commands to execute).  We built slide after slide, racking our brains for all the ways networks fail and how we’d troubleshoot them.

The problem with this approach is that if you think of 100 ways networks fail, when a customer deploys the product it will fail in the 101st way.  Networks are large distributed systems, running multiple protocols, connecting multiple operating systems over different media types, and they have ways of failing, sometimes spectacularly, that nobody ever anticipates.  A human being can think adaptively and dynamically in a way that a computer cannot.  Troubleshooting an outage involves collecting data from multiple sources, and then thinking through the problem until a resolution is found.  How many times, when I was in TAC, did I grab two or three other engineers to sit around a whiteboard and debate what the problem could be?  Using our collective knowledge and experience, bouncing ideas off one another, we would often come up with creative approaches to the problem at hand and solve it.  I just don’t see AI doing that.  So maybe it’s a good thing it phones home for help.

I do see a role for AI and its analysis capabilities in providing troubleshooting information on common problems.  Sheer data volume is also a problem for humans: we’re inundated with numbers and often cannot easily find patterns in what we’re presented.  AI-type tools can help aggregate and analyze data from numerous sources in a single place.  So I’m by no means saying we should be stuck in 1995 with our NMS tools.  But I don’t see AI tools replacing network operations teams any time soon, despite what the sales pitches promise.

And I certainly have no plans to live forever by fusing my brain with a computer.  We can leave that to science fiction writers, and their more respectable colleagues, the futurists.