Topological Qubits

I’ve been very unconvinced about the reality and benefits of quantum computing.  Sure, a lot of people with fancy degrees from fancy places say it will work miracles.  Sure, they make really impressive machines with cooling units that look like they’ll beam you into the movie Tron.  Sure, Microsoft just released a new chip with a Hollywood-grade video, B-roll of high-end oscilloscopes included.  All of it is based on some very-theoretical theoretical physics, and I’m not sure that these machines will deliver what they promise.  Sure, I may be wrong.  I’m often wrong with technology predictions.

Satya Nadella, the CEO of Microsoft, announced their Majorana 1 chip, their leap forward in quantum computing, by saying “Imagine a chip that can fit in the palm of your hand yet is capable of solving problems that even all the computers on Earth today combined could not!”

Is it just me or does anyone else find that statement, I dunno…maybe…a little fucking concerning?

More computing power than ALL of the computers on Earth combined?  Do we ever stop to think if this sort of thing is really a good idea?

Oh sure, their video, with slick and well-spoken physicists, extols the ability of their topological qubits to invent medicines, develop new materials, run EV batteries forever, and all but solve world hunger.

On the other hand, nuclear physics gave us both nuclear power and nuclear weapons.  Science fiction movies have been warning about world domination by machines for decades.  If we unleash ChatGPT powered by more horsepower than all computers combined, what the hell is it going to invent?  Why would this be confined to materials scientists in a lab?  Wouldn’t the machine start doing whatever it wants?  Why wouldn’t it invent a fatal virus, unleash it, and rule the world itself?  Or at least, wouldn’t nefarious human beings try to use it to cook up a weapon that could hold the planet hostage, like in a James Bond movie?  Could it enable mass-scale spying and privacy invasion by governments?  Will it be smart enough to warn us of the negative consequences of the materials it invents, or will we be inundated by worse than the microplastics in our brains which weigh as much as a spoon?

I’ve been sick of the tech industry’s worship of technological progress for a long time.  All the hype assumes that technological progress is always and everywhere good.  But that’s been proven false, time and again.

Meanwhile, Nadella makes the asinine statement:  “When productivity rises, economies grow faster, benefiting every sector and every corner of the globe.”  I assume a computer that is more powerful than all computers put together will eliminate a hell of a lot of jobs.  It might even render human beings redundant.  Technological innovations always tend to replace human labor (except where it depends on third-world exploitation).  Wouldn’t a computer this ridiculously powerful destroy entire industries and career paths?

I’ve seen enough marketing to suspect Microsoft is exaggerating here.  They’re more than likely less interested in selling quantum computers, and more interested in selling quantum-ready products and services.  Anyways, I sure hope so.


Agentic AI for networking

As I’ve pointed out in several posts, and as you’ve certainly noticed, there is a teeny bit of hype surrounding AI these days. We’re told network engineers will be obsolete as our AI buddies take over our jobs. Want to roll out a network? No problem, your receptionist will do it for you while sipping a latte, pushing a button to call her AI agent. Some of us take a more realistic view. AI is a tool. An amazing one in many ways, but a tool nonetheless. And the big question is: how useful is it, really?

Most of the folks making the wild claims are marketers, “analysts”, and others who know little to nothing about networking. John Capobianco is certainly not one of these. John knows a lot about networking, and is a truly technical guy. (I’ve never met him in person, so he could be an AI agent, but I hope not.) John recently posted a detailed video about running an AI agent per device, and aggregating these up to a sort of agent-of-agents. His POC demonstrates that his AoA can go out and perform operations on multiple network devices. Cool.

That said, it brings me back to my early days in programmability, 2015 or so. One of the demos we used to run used Alexa. You could give voice commands to Alexa to go configure your routers, or collect data from them. Very cool! How many network engineers to date use Alexa to configure their networks? Approximately zero.

Of course, we weren’t really expecting to kick off a craze of Alexa-network-engineers. The demo was, as my peer Jason Frazier liked to say, about the “art of the possible.” Our model-driven interfaces made it that much easier to connect things in ways that weren’t previously possible.

As I mentioned in a recent post, programmability didn’t always click with network engineers. We would do a demo where we configured VLAN 100 using NETCONF. We just wrapped a giant block of XML in a giant block of Python, and–voilà–VLAN 100! The only problem was, every network engineer who saw the demo rolled his eyes and said, “I could do that with one line of CLI.”
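If you’re curious what that demo boiled down to, here’s a rough sketch in Python with ncclient. To be clear, the YANG namespace, credentials, and address below are placeholders of my own, not the actual model from the demo:

```python
# A rough sketch of the old "VLAN 100 over NETCONF" demo: a block of
# XML wrapped in a block of Python. The YANG namespace here is
# illustrative only; a real device would use its vendor's model.

def vlan_payload(vlan_id: int, name: str) -> str:
    """Build the <config> blob for creating a VLAN."""
    return f"""
    <config>
      <vlans xmlns="urn:example:vlan">
        <vlan>
          <id>{vlan_id}</id>
          <name>{name}</name>
        </vlan>
      </vlans>
    </config>"""

if __name__ == "__main__":
    payload = vlan_payload(100, "DEMO")
    print(payload)
    # With a real device, the push is only a few more lines:
    # from ncclient import manager
    # with manager.connect(host="10.0.0.1", port=830, username="admin",
    #                      password="admin", hostkey_verify=False) as m:
    #     m.edit_config(target="running", config=payload)
```

All of that, versus one line of CLI. You can see why the eyes rolled.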

Here’s the question for Capobianco: Is typing “Please configure an NTP server of 192.168.100.100 on all four devices” easier than, say, configuring an NTP server on all four devices? Or even using an Ansible script to do so? Is typing “What is the IP address of Ethernet 0/1 on R1 and R2” better than just going to R1 and R2 and looking at the “sh ip int brief” output?
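For reference, the non-agentic version of that NTP task is already tiny. Here’s a hedged sketch using Netmiko; the hostnames and credentials are made up, and you’d need reachable devices to actually run the push:

```python
# The conventional way to push an NTP server to four devices.
# Hostnames and credentials are placeholders; running push_ntp()
# requires netmiko (pip install netmiko) and real devices.

DEVICES = ["R1", "R2", "R3", "R4"]

def ntp_commands(server: str) -> list[str]:
    """The one-line config we want on every device."""
    return [f"ntp server {server}"]

def push_ntp(server: str) -> None:
    from netmiko import ConnectHandler  # imported lazily: needs real gear
    for host in DEVICES:
        conn = ConnectHandler(device_type="cisco_ios", host=host,
                              username="admin", password="admin")
        conn.send_config_set(ntp_commands(server))
        conn.disconnect()

if __name__ == "__main__":
    print(ntp_commands("192.168.100.100"))
```

Is typing an English sentence to an agent meaningfully easier than this? That’s the question.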

We must also bear in mind that AI, still, has trouble with more complex tasks. In an earlier post I found it could not configure VRRP for IPv6 correctly. Even after providing it the operational output of the failed config, it still couldn’t provide the correct config. So, NTP servers are fine, but when we get into really complex stuff, will AI shine or fail?

I’ve been finding ChatGPT incredibly helpful working in my lab lately. I needed to do a few things on Proxmox, and it saved me a lot of searching through StackExchange. But if we want to go from useful tool to fully agentic network management, well, we have a long way to go. Right now the agentic demos feel a bit like the Alexa ones–the art of the possible, but not necessarily the probable.

Are network engineers obsolete?

Continuing on the theme of AI:

I can tell you what the MBAs are saying.  Remember, the MBAs know more about network engineering than you, despite your training and experience, because, well, they’re MBAs!  They went to Stanford!  Or Kellogg!  Or San Jose State!

The MBAs are sure you’re going to be replaced by AI.  I’ve personally seen comments from “analysts” saying that’s the case.  We’re not going to need network engineers anymore!  So all that stuff you’re learning about BGP or OSPF or whatever.  Don’t learn it!  AI will do it for you.  Poof!  Away with network engineers!  After all, they speak in a weird language we didn’t learn in MBA school!  (But it’s ok to speak in our own weird language.)

I happen to live in a world called “reality”, a mythical fairy-tale land to B-school professors.  In “reality”, life is a bit more complicated than B-school types believe it to be.  And those of us who actually have a bit of experience building networks and making them work see AI for what it is:  a tool.  Could it replace some network engineers?  Yes.  All of them?  Count me a skeptic.  But then again, I don’t have an MBA.

I use a note-taking app called Obsidian.  This is becoming more important to me the older I get, as my memory seems to be functioning less optimally.  I was thinking I needed to see a doctor until I talked to other folks my age who have the same problem.  Despite billionaires’ attempts to live forever, the human body has a shelf life and the clock starts ticking the moment you’re born.

Obsidian is great because it stores notes in text files, you write your notes in markdown, and you can generate tags.  I’m terrible at filing things, so I hate programs like OneNote which place the burden on the user.  You have to create notebooks and tabs, and so forth.  With Obsidian, I just add tags where I need them and I can find anything later on.

Anyways, I had taken extensive notes on Hypershield after talking for quite a while with a TME who is working on it.  In Obsidian, I work off of a “daily note” which is like a scratch pad for the day, and if something is important enough, I can select it and extract it to a new note.  In this case, I already had a note on Hypershield and I wanted to extract the text to this existing note.  I couldn’t find a way to do it.

Enter ChatGPT.  Obsidian is highly extensible, and ChatGPT wrote me a JavaScript snippet to extract text and append it to an existing note.  Fantastic!  I put it into a plug-in, selected the text, gave it the name of the note, and bam!  My text vanished, nowhere to be found.  Not on the daily note, not on the scratchpad, not anywhere in the directory with the text files.  Since the deletion happened from a script, and not a keyboard/menu command, I couldn’t undo.  The text was lost.

ChatGPT was quite apologetic about the mixup and offered several ways to recover the text, none of which worked.

Hallucinations and errors from GenAI are nothing new.  We’re all aware of the problem, and I pointed it out in another context early on in the GenAI days.  But this is a huge problem for the predicted future of networks being totally automated by AI.  Despite the name, AI is not intelligent.  It is a system which generates language, human or machine, by ingesting a lot of content and trying to predict what the correct output should be.  And it can be wildly wrong.

Sure, we can implement checks and balances.  It would have been wise to copy my text to the clipboard before running the script.  Still, whenever we automate, we risk runaway automation, and in environments where the network is absolutely critical, trusting AI is going to be difficult.

I’ve heard “analysts” say that their customers are complaining they cannot find qualified network engineers.  By their incredible logic, the solution is to replace them with AI!  Brilliant.  How about asking why it’s so hard to find network engineers?  Is it a great career choice when you are young and reading that network engineers will be replaced by AI?

I’ve written about the “war on expertise” which has been going on in society in general, and in our field in particular.  I got into networking (a) because it fascinated me and (b) because it was a very promising career field.  When you have an interest and a passion for something, acquiring knowledge in that area becomes an end in and of itself.  I remember reading an article about NBA basketball referees, and how, when they travel, they love to challenge each other on obscure rules.  It was like this with network engineers.  We were always trying to one-up each other, learn a new protocol or knob, figure out the mysteries of multicast, or new ways to design networks.  When I go to Cisco Live (and for the second year in a row I will not be going to CL Europe), these are the sort of people who show up.  They are hungry to learn, expand their skills, and up their game.

Now you go to a trade show and all you hear about is how you will be replaced by AI.  Why the hell am I going to devote time to mastering a field if I’ll be obsolete in a couple years?

I’m worried for the future because we’re headed down a concerning path.  Nobody will be entering network engineering anymore, and AI won’t be able to replace the humans who operate networks.  Then what?

It would be nice to see networking vendors (and I mean all of them) encouraging people to learn networking, showing people this is a viable and lucrative career path, and that AI is a tool that they might use to learn networking and augment their capabilities.  And it would be nice to see MBAs stick to finance.  Or can we at least outsource the job of the MBAs to AI?  Yeah, that’d be nice.

The Next Big Thing, TLDR version

At the last Cisco Live in June, I was asked by marketing to do a “center stage” presentation.  My days of getting normal sessions at Cisco Live seem to be over.  Perhaps I’m too far into the management track (although that’s changing) to impress the Cisco Live Session Group Managers.  Eager to speak again, I accepted the proposal.

The abstract was provided for me.  I don’t remember the title, but it was something about AI and the campus.  So, I did my best to craft a set of slides that would be interesting.  When I ran them by marketing, I was told I couldn’t use my own slides.  I had to use theirs.  One of my secrets to success at Cisco Live is that I always build my own slides.  Rarely do I use a single slide from someone else.

Still, I did my best to build a story that would work.  Then I was told I’d be co-presenting with another PM, and we’d also have a customer on stage with us for an Oprah-style panel interview.  Even with these constraints, I spent a lot of time in my hotel room in Vegas practicing to be sure I nailed it.

The center stage is on the show floor (World of Solutions), and presenters there are broadcast onto a series of TVs scattered around the Mandalay Bay convention center.  They walk around the stage like they’re performing King Lear, but nobody watches the TVs or can even hear them.  It’s very performative, but a part of trade shows.

We had a rehearsal with marketing people, stage managers, cameramen, audio technicians, and an army of other people.  On the day of, there were marketing people, stage managers, cameramen, audio technicians, and an army of other people.  There was also a lady there who did intros for all the speakers, to get the audience pumped up.  I’m sure she showed up in Las Vegas decades ago to be a showgirl or something, but now in her 40s she was doing corporate gigs at Mandalay.  As I got mic’d up and ready to go, I looked out at my audience.  Of the 50 or so chairs, 5 were occupied.  Four of them were friends of the customer presenting.  I looked at the intro lady and said: “I hope you can handle the stage fright.”  She laughed.

I did my shtick, and it all went well enough.  At the very end, the one attendee who was not with the customer, who seemed to have shown up because it was a good spot for a nap, arose like Lazarus, raised his hand and asked:  “Could you guys please stop talking about AI at Cisco Live?”


If you’ve watched the Art of Network Engineering podcast, you probably are familiar with Lexie Cooper, one of the hosts.  I was on the podcast a while back and had a nice talk with her and Andy Lapteff.  The other day, Lexie showed up in my LinkedIn feed in clownish makeup and a bodysuit.  With the audio off, I looked at it and thought, “wow she’s really desperate for attention.”  Then I unmuted it.  I nearly died.

“Have you ever considered…using AI?”  she begins.

“Manage your network devices…with AI!”

“Manage your IOT stuff…with AI!”

“Design a PCB…with AI!”

“Automate your vegetable garden…with AI!”

“Ethernet cables?  Nope…AI!”

“Every vendor in the World of Solutions…with AI!”

…and so forth.

In a minute and thirteen seconds, Lexie captured the Zeitgeist of the current networking world perfectly and hilariously.  It seems that all of the protocols and technologies that make up the “Art of Network Engineering” have been single-handedly wiped away by AI.  Nobody talks about networking anymore, it’s all just AI.


Of course, those protocols and technologies are necessary for AI, for the Internet, and for the modern world to function.  Why do all vendors suddenly have a single-minded focus on AI, and seem to have stopped investing in actual networking technology?

It comes down to the culture of Silicon Valley, the corporate world dominated by Wall Street, and the quest for the “Next Big Thing”.  As network engineers we love acronyms, so I’ll coin a new one:  the NBT.  (With all due respect to NetBIOS over TCP.)

Technology executives are terrified of missing the NBT, and they spend their careers chasing the NBT.  It’s not entirely their fault.  If a technology company is not investing in the NBT, then the industry “analysts” will write somber reports criticizing the company and hurting the stock value.  Because the industry “analysts” have MBAs in topics like marketing and finance, they are experts at technology, and “analyzing” what networking companies should sell to network engineers.  In fact, because they are MBAs, they are experts in anything, really, and far more so than people who actually study and learn their specific fields.

There have indeed been some real NBTs.  Wireless is a good example.  When I started in networking, pretty much everything was hard wired.  Wireless was a major transformation in networking, and a new and different technology domain.  (I’m still not great at understanding it, admittedly.)  Mobile devices and smartphones radically changed the world, and nobody can argue that.

Cloud computing is an interesting one.  First of all, it was (and is) a marketing term.  It refers to several things, but in a broad definition we could say it refers to using someone else’s computing resources instead of your own.  In the case of SaaS, someone else is hosting the application and giving you access to it, whereas in the case of IaaS, they merely host the computing power and you manage the app.  Either way, it was not a new idea.  The idea of shared computing resources has been around since the advent of computing.  In the early days, all computing was done on shared systems.  At the dawn of the Internet, I got my email and other services through an ISP.  I telnetted into their system to check my email.  And in the mid-90’s, I worked at a company that offered a SaaS-based timecard service, before anyone even used the term “SaaS”.

Cloud computing in 1999

Still, we could say Cloud was an NBT.  I used to go to auctions during the dot-bomb of the early 2000’s, and even a small dotcom company had to purchase servers and network gear and host them in rented rack space in a colo.  AWS drastically changed that.

Of course, there have been many potential NBTs that turned out not to be.  The “Metaverse” was one of these.  After 2 years in COVID lockdown, nobody was interested in slapping on a VR headset and meeting their friends using a unicorn avatar floating around a fake version of Mars.


Watch out when an exec begins a presentation with this apocryphal Henry Ford quote:  “If I had asked people what they wanted, they’d have said faster horses.”

Aside from the fact Ford never said it, this quote is recited ad nauseam to inspire people to disruptive innovation.  Nobody ever seems to notice the obvious, however.  The automobile was popularized by Henry Ford over 110 years ago.  It hasn’t changed much since.  Sure, your Subaru is a lot different from a Model T, but the basic idea and design are the same.  The changes to automobiles–fuel injection systems, automatic transmissions–have been major, but nonetheless incremental improvements on the base design.  Once the NBT happened and spawned an industry, things reached a steady state.

From a corporate/investor perspective, this is problematic.  Stock prices are an indicator of future value, and investors demand “growth”.  (Hypothetical question:  is there an end-state to “growth?”  I.e., is a company ever done growing, and if so, when?  Related:  is there anything in nature which can grow indefinitely?)  Steady-state is not good for Wall Street.  So, execs need to go hunting for the NBT.

“Now wait,” many MBAs will correct me.  “The EV is a major disruptor in the automotive industry.”

Leaving aside the fact that EVs have existed in the past, and their questionable future, it just proves my point.  It took 100 years for the Tesla to exist.  But let’s circle back to that in a minute.


Recently I saw a LinkedIn post from a woman, Debbie Gomez, who is making a career change to become a network engineer.  She was joking about the contents of a woman’s purse, comparing it to the books she has in her car.  One of those books was Internet Routing Architectures by Sam Halabi.

When I was studying for my CCIE R/S in 2004, I used Halabi’s book.  It’s clearly visible in a picture of the stack of books I used to study for the infamous exam.  Debbie is studying the same content I was 20 years ago.

This is because, like the automobile, once networking was invented, change became incremental.  BGP hasn’t changed much because it can’t change much.  It’s run across multiple providers and multiple vendors, and it’s not easy to make changes.  Sure, it’s been extended since Halabi’s day, but it’s close enough to the original that his book is still totally relevant.

I’ve written in the past about how non-technical executives view the complexity of networking as a creation of engineers who “revel in complexity”.  In their view, the NBT in networking is to just have “simplicity”, where you don’t need all the fancy BGP, OSPF, EIGRP, ISIS, EVPN, VXLAN, STP stuff.  Just like the Tesla is so much simpler than a traditional car.


I recently started working on cars, because I always like to do things with my hands.  My 2011 BMW 328i is probably the wrong car to start working on.  It’s complex, and designed so that simple tasks require disassembling large parts of the engine.  I recently replaced the valve cover, successfully, but man was it a nightmare of carefully removing various parts.  To even get the thing out took about 30 minutes of me standing on the engine and my brother-in-law working it from the side.  If I learned one thing, it’s how complex a modern car is.

I have a Tesla as well.  There’s no question it’s simple.  There’s hardly an engine to speak of.  There is no gear shifting when you drive it.  You don’t even turn it on.  There is no maintenance required except for tires and brakes. The only fluid required is for the windshield washer.

Many technology executives feel this transformation needs to happen for networking as well.  The problem–they don’t seem to realize–is that the underlying complexity of networking, the protocols, cannot go away.  They exist for a reason.  Can they be improved?  Sure.  Can they be eliminated?  No.

That’s not to say much of the mess of networking cannot be improved.  Vendors have created a lot of that mess, and all are guilty to some degree.  We can distinguish unnecessary complexity from necessary complexity.  A lot of it is unnecessary, but even if you remove that, you’re left with the necessary complexity.

The only option for simplicity when you cannot really simplify is to abstract.  That is, you hide the complexity.  It’s still there, but it’s easier to deal with.  Take a modern airplane.  It’s just as complex a machine as a plane built in the 1970s, perhaps more so.  But the cockpit is thoroughly automated, and the systems thoroughly instrumented.  It’s much easier to manage than a 1970’s plane.  And yet, someone still needs to know how it all works.


This brings us back to our starting point, AI.

Why is AI driving Lexie to the point of putting on garish makeup and screaming into the camera?  Of course, everyone thinks it’s the NBT.  But is it?

It’s easy to understate the importance of GenAI and the significance of the technological advancement.  It’s nothing short of astounding.  ChatGPT makes a great search engine, but apart from that, its ability to interpret and generate language and code in creative ways is incredible.

Even though I worked on programmability, my knowledge of Python is pretty poor.  If there’s one programming language I feel absolutely comfortable in, it’s Applesoft BASIC from the 1980s.  I’ve found I can have ChatGPT explain some of the more challenging Python concepts by translating them to BASIC.  It’s crazy.  Computers haven’t been able to do anything like that before.

I’ve asked it to generate NETCONF code blocks for configuring IOS XE, with less success.  It gave me an operational data model to configure an IP address on an interface.  These errors can and will be corrected, however.
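The underlying gotcha: operational models expose read-only state, and configuration has to go through a config model. Here is a sketch of what a correct payload looks like using the standard ietf-interfaces/ietf-ip models (which IOS XE supports); the interface name and address are just examples:

```python
# Configuring an interface IPv4 address over NETCONF uses a *config*
# data model -- here the standard ietf-interfaces and ietf-ip models.
# Operational models are read-only state and cannot be the target of
# an <edit-config>, which is exactly the mistake ChatGPT made.

def ip_config_payload(ifname: str, ip: str, prefix_len: int) -> str:
    return f"""
    <config>
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
        <interface>
          <name>{ifname}</name>
          <ipv4 xmlns="urn:ietf:params:xml:ns:yang:ietf-ip">
            <address>
              <ip>{ip}</ip>
              <prefix-length>{prefix_len}</prefix-length>
            </address>
          </ipv4>
        </interface>
      </interfaces>
    </config>"""

if __name__ == "__main__":
    print(ip_config_payload("GigabitEthernet1", "10.1.1.1", 24))
```

A human who knows YANG spots the config/operational mix-up instantly; the AI confidently did not.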

And yet, even if AI reaches the point of being able to configure and operate network devices, it will still be an abstraction layer.  I cannot fathom AI somehow doing away with networking.  At most, it would be like the automation systems on the plane, not like a Tesla.

I asked ChatGPT to design a networking system that does not use protocols.  It responded:  “Designing a data networking system that does not use protocols is a challenging idea because protocols are fundamental to networking—they define the rules for data exchange.”  It then dutifully attempted to frame out a protocol-free system, but the result was unimpressive, and the AI admitted that it would have a lot of problems.


I am among those working on AI projects at Cisco, both out of interest and out of necessity.  Working at a vendor, I’m caught up in the NBT just like we all are.  While I cannot talk about the specifics of any of the projects, I do see potential for its use beyond the current applications of AI.  (Mainly analyzing operational data.)

Is it really the NBT?  Is it really a “disruptor” on the level of wireless or smartphones?  Or are we tilting at windmills as with the Metaverse?

Time will tell.  But I’m sure Lexie will have plenty of content for more videos.

Meanwhile, keep reading Halabi.  We still need him.

Finally, a use for generative AI?

I asked our ChatGPT to create a dialog between two network engineers in the style of the absurdist playwright Eugene Ionesco, author of The Bald Soprano.  It’s not even close to the original, but I do think it captures our existence quite nicely.  “We are the clowns of the digital realm” is perhaps my favorite line:

Engineer 1: Good evening. Have you ever stopped to wonder why networks are like mazes?

Engineer 2: Ah, the eternal question! Networks are like mazes because, well, because mazes are like networks!

Engineer 1: But what purpose do they serve? These intricate paths leading us nowhere. It’s all so absurd!

Engineer 2: Absurd indeed! We tirelessly configure routers and switches, creating connections that resemble a tangled web. Yet, do we ever truly reach our destination?

Engineer 1: No, my friend, we simply pass through endless loops, like Sisyphus rolling the boulder up the hill, only to have it roll back down.

Engineer 2: And what about those cryptic error messages we encounter? They mock us, like whispers of an elusive truth we can never grasp.

Engineer 1: Indeed, they are the whispers of a madman, taunting us with their indecipherable codes. “Error 404: Page not found.” How fitting!

Engineer 2: Ah, the irony! We seek connectivity, yet we are trapped in a world of disconnections and network outages.

Engineer 1: Our lives revolve around troubleshooting, forever trapped in an endless cycle of pinging and tracert-ing.

Engineer 2: It’s a tragic comedy, my friend. We are the clowns of the digital realm, forever chasing packets that vanish into thin air.

Engineer 1: And yet, we persist, day in and day out, our minds slowly unraveling like a faulty Ethernet cable.

Engineer 2: We are but players in this grand theater of network engineering, destined to forever dance to the rhythm of broken connections and elusive solutions.

Engineer 1: So, my friend, let us raise our glasses to the absurdity of it all. To the networks that confound us and the mysteries we strive to unravel!

Engineer 2: Cheers to that, my comrade! May we forever find solace in the chaos and laughter in the face of the inexplicable!

(They clink their glasses together, lost in the absurdity of their network engineer existence.)

ChatGPT and the problem of authority

I’ve been reluctant to comment on the latest fad in our industry, generative AI, simply because everybody has weighed in on it.  I do also try to avoid commenting on subjects outside of my scope of authority.  Increasingly, though, people are coming to me at work and asking how we can incorporate this technology into our products, how our competitors are doing it, and what our AI strategy is.  So I guess I am an authority.

To be honest, I didn’t play with ChatGPT until this week.  When I first looked at it, it wanted my email address and phone number, and I wasn’t sure I wanted to provide that to our new AI overlords.  So I passed on it.  Then Cisco released an internal-only version, which is supposedly anonymous, so I decided to try it out.

My first impression was, as they say, “meh.”  Obviously its ability to interpret and generate natural language is amazing.  Having it recite details of its data set in the style of Faulkner was cool.  But overall, the responses seemed like warmed-over search-engine results.  I asked it if AI is environmentally irresponsible since it will require so much computing power.  The response was middle-of-the-road, “no, AI is not environmentally irresponsible” but “we need to do more to protect the environment.”  Blah, blah.  Non-committal, playing both sides of the coin.  Like almost all of its answers.

Then I decided to dive a bit deeper into a subject I know well:  Ancient Greek.  How accurate would ChatGPT be on a relatively obscure subject (and yet one with thousands of years of data!)?

Even if you have no interest, bear with me.  I asked ChatGPT if it knew the difference between the Ionic dialect of Herodotus and the more common dialect of classical Athens.  (Our version, at least, does not allow proper names so I had to refer to Herodotus somewhat elliptically.)  It assured me it did.  I asked it to write “the men are arriving at Athens” in the dialect of Herodotus.  It wrote, “Ἀφίκοντο οἱ ἄνδρες εἰς Ἀθήνας,” which is obviously wrong.  The first word there, “aphikonto“, would actually be “apikonto” in the dialect of Herodotus.  He was well known for dropping aspirations.  The version ChatGPT gave me would be the classical Attic version.

I let ChatGPT know it was wrong, and it dutifully apologized.  Then I asked it to summarize the differences in the dialects.  It said to me:

Herodotus and Ionic writers typically removed initial aspirations, while the Attic dialect retained them. For example, “Ἀφίκοντο” (Herodotus) vs. “ἔφικοντο” (Attic)

Uh, you don’t need to know the Greek alphabet to see it made exactly the same mistake, again.  It should have said that Herodotus would use “Ἀπίκοντο” (apikonto) whereas in Attic the word would be “Ἀφίκοντο” (aphikonto.)

Here’s a little more computer-person-friendly example.  I decided to ask about my other favorite subject, retro computing.  I asked, “What is the difference between the Apple IIGS ProDOS 8, ProDOS 16, and GS/OS operating systems?”  Among other things, ChatGPT responded that GS/OS “introduced features such as a desktop metaphor, drop-down menus, and a hierarchical file system.”  Which happens to be wrong.  ProDOS 16 actually introduced those features.  (In fact, ProDOS 8 introduced the hierarchy.)  I pointed this out.  “You are correct, and I apologize for the oversight in my previous response. I misspoke in stating that GS/OS introduced [those] features.”  Hmmm.

I’m testing it and I know the right answers.   The problem would arise if I were trusting ChatGPT to provide me with a correct answer.  There have been plenty of examples of mistakes made by ChatGPT, such as adding a “permit any” to the top of access-lists.

The issue is, ChatGPT sounds authoritative when it responds.  Because it is a computer and speaking in natural language, we have a tendency to trust it.  And yet it has consistently proven it can be quite wrong on even simple subjects.  In fact, our own version has the caveat “Cisco Enterprise Chat AI may produce inaccurate information about people, places, or facts” at the bottom of the page, and I’m sure most implementations of ChatGPT carry a similar warning.

Search engines place the burden of determining truth or fiction upon the user.  I get hundreds or thousands of results, and I have to decide which is credible based on the authority of the source, how convincing it sounds, and so on.  AI provides one answer.  It has done the work for you.  Sure, you can probe further, but in many cases you won’t even know the answer served back is not trustworthy.  For that reason, I see AI tools as potentially very misleading, and even harmful in some circumstances.

That aside, I do like the fact I can dialog with it in ancient Latin and Greek, even if it makes mistakes.  It’s a good way to kill time in boring meetings.

Computers with Brains

A couple of years back I purchased an AI-powered energy monitoring system for my home.  It clips on to the power mains and monitors amperage/wattage.  I can view a graph showing energy usage over time, which is really quite helpful for keeping tabs on my electricity consumption at a time when electricity is expensive.

The AI part identifies what devices are drawing power in my house.  Based simply on wattage patterns, so they claim, the app will tell me this device is a light, that device is an air conditioner, and so on.  An electric oven, for example, consumes so much power and switches itself on and off in such a pattern that AI can identify it.  The company has a large database of all of the sorts of products that can be plugged into an outlet, and it uses its database to figure out what you have connected.
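As I understand the claim, the core idea is to match the size and shape of a wattage step against a database of known appliance profiles.  Here’s a deliberately crude sketch of that idea; the device names and wattage figures are all invented for illustration, not taken from any real product:

```python
# Toy signature-based device identification: match the size of an observed
# wattage step against a database of known appliance profiles.
# All names and numbers below are made up for illustration.

SIGNATURES = {
    "light":  (60, 25),      # (typical watts, tolerance)
    "heater": (1500, 300),
    "oven":   (3000, 500),
}

def classify_step(delta_watts):
    """Guess which device caused an observed change in total load."""
    best, best_err = None, float("inf")
    for name, (typical, tol) in SIGNATURES.items():
        err = abs(abs(delta_watts) - typical)   # works for on (+) and off (-) steps
        if err <= tol and err < best_err:
            best, best_err = name, err
    return best  # None if nothing in the database matches

print(classify_step(1450))   # a 1450 W step looks like a "heater"
```

A crude matcher like this also shows why the app keeps finding “heaters”: kettles, hair dryers, space heaters, and vacuum cleaners all live in the same 1–2 kW band, so without extra signals (duty cycle, season, weather) they’re indistinguishable.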

So far my AI energy monitor has identified ten different heaters in my house.  That’s really cool, except for the fact that I have exactly one heater.  When the message popped up saying “We’ve identified a new device!  Heater #10!”, I must admit I wasn’t surprised.  It did raise an eyebrow, however, given that it was summer and over 100 degrees (38 C) outside.  At the very least, you’d think the algorithm could correlate location and weather data with its guesses.

Many “futurists” who lurk around Silicon Valley believe that in a few years we’ll live forever when we merge our brains with AI.  I’ve noticed that most of these “futurists” have no technological expertise at all.  Usually they’re journalists or marketing experts.  I, on the other hand, deal with technology every day, and it leaves me more than a little skeptical of the “AI” wave that’s been sweeping over the Valley for a few years.

Of course, once the “analysts” identify a trend, all of us vendors need to move on it.  (“SASE was hot last fall, but this season SSE is in!”)  A part of that involves labeling things with the latest buzzword even when they have nothing to do with it.  (Don’t get me started on “controllers”…)  One vendor has a tool that opens a TAC case after detecting a problem.  They call this something like “AI-driven issue resolution.”  Never mind that a human being gets the TAC case and has to troubleshoot it; this is the exact opposite of AI, unless we broaden the term to mean a computer doing anything on its own, in this case calling a human.  Hey, is there a better indicator of intelligence than asking for help?

Dynamic baselines are neat.  I remember finding the threshold-alerting capabilities in NMS tools useless back in the ’90s.  Do I set the threshold at 50% of bandwidth?  60%?  80%?  Dynamic baselining determines the normal traffic (or whatever) level at a given time and sets a variable threshold based on historical data.  It’s AI, I suppose, but it’s basically just pattern analysis.
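The pattern analysis really is that simple at its core.  A minimal sketch, assuming a per-hour-of-day baseline and a standard-deviation band (real tools use fancier models, and all the numbers here are invented):

```python
# Toy dynamic baselining: instead of one fixed threshold ("alert above 80%"),
# learn a per-hour-of-day baseline from history and flag readings that
# deviate by more than k standard deviations.
import statistics
from collections import defaultdict

history = defaultdict(list)   # hour-of-day -> observed utilization samples

def record(hour, utilization):
    history[hour].append(utilization)

def is_anomalous(hour, utilization, k=3.0):
    samples = history[hour]
    if len(samples) < 2:
        return False                      # not enough history to judge
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return abs(utilization - mean) > k * max(stdev, 1e-9)

# Simulate a month of history: busy afternoons (~70%), quiet nights (~5%).
for day in range(30):
    record(14, 70 + (day % 3))
    record(2, 5 + (day % 2))

print(is_anomalous(14, 72))   # False: normal for 2 p.m.
print(is_anomalous(2, 40))    # True: 40% at 2 a.m. is way off baseline
```

The same 40% utilization that would be unremarkable at 2 p.m. triggers an alert at 2 a.m., which is exactly what a fixed “alert above 80%” threshold can never express.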

True issue resolution is a remarkably harder problem.  I once sat in a product meeting where we had been asked to determine all of the different scenarios the tool we were developing would be able to troubleshoot.  Then we were to determine the steps the “AI” would take (i.e., which CLI commands to execute).  We built slide after slide, racking our brains for all the ways networks fail and how we’d troubleshoot them.

The problem with this approach is that if you think of 100 ways networks fail, when a customer deploys the product it will fail in the 101st way.  Networks are large distributed systems, running multiple protocols, connecting multiple operating systems, across different media types, and they have ways of failing, sometimes spectacularly, that nobody ever thinks about.  A human being can think adaptively and dynamically in a way that a computer cannot.  Troubleshooting an outage involves collecting data from multiple sources, and then thinking through the problem until a resolution is found.  How many times, when I was in TAC, did I grab two or three other engineers to sit around a whiteboard and debate what the problem could be?  Using our collective knowledge and experience, bouncing ideas off of one another, we would often come up with creative approaches to the problem at hand and solve it.  I just don’t see AI doing that.  So, maybe it’s a good thing it phones home for help.

I do see a role for AI and its analysis capabilities in providing troubleshooting information on common problems.  Sheer data volume is also a problem for humans: we’re inundated by numbers and often cannot easily find patterns in what we are presented.  AI-type tools can help aggregate and analyze data from numerous sources in a single place.  So, I’m by no means saying we should be stuck in 1995 with our NMS tools.  But I don’t see AI tools replacing network operations teams any time soon, despite what may be sold.

And I certainly have no plans to live forever by fusing my brain with a computer.  We can leave that to science fiction writers, and their more respectable colleagues, the futurists.