A common approach for TAC engineers and customers working on a tough case is to just “throw hardware at it.” Sometimes this can be laziness: why troubleshoot a complex problem when you can send an RMA, swap out a line card, and hope it works? Other times it’s a legitimate step in a complex process of elimination. RMA the card and if the problem still happens, well, you’ve eliminated the card as one source of the problem.
So it was not unusual when, one day, I got a P1 case from a major service provider, requeued (reassigned) after multiple RMAs. The customer had a 12000-series GSR, top of the line back then, and was frustrated because ISIS wasn’t working.
“We just upgraded the GRP to a PRP to speed the router up,” he said, “but now it’s taking 4 hours for ISIS to converge. Why did we pay all this money for a new route processor when it just slowed our box way down?!”
The GSR is a chassis-based router: multiple line cards with ports of different types, a fabric interconnecting them, and a management module (the route processor, or RP) acting as the brains of the device. The original RP was called a GRP, but Cisco had released an improved version called the PRP.
The customer seemed to think the new PRP had performance issues, but this didn’t make sense. Performance issues might cause some small delays or possibly packet loss for packets destined to the RP, but not delays of four hours. Something else was amiss. I asked the customer to send me the ISIS database, and it was full of LSPs like this:
#sh isis database

IS-IS Level-2 Link State Database
LSPID                 LSP Seq Num  LSP Checksum  LSP Holdtime
0651.8412.7001.00-00  0x00000000   0x0000        193           0/0/0
ISIS routers periodically send CSNPs, or Complete Sequence Number PDUs, which contain a list of all the link state packets (LSPs) in the router database. In this case, the GSR was directly attached to a Juniper router which was its sole ISIS adjacency. It was receiving the entire ISIS database from this router. Normally an ISIS database entry looks like this:
#sh isis database

IS-IS Level-2 Link State Database
LSPID                 LSP Seq Num  LSP Checksum  LSP Holdtime
bb1-sjc.00-00         0x0000041E   0xF97D        65365         0/0/0
Note that instead of a router ID, we actually have a router name. Note also that we have a sequence number and a checksum for each LSP. As the previous output shows, something was wrong with the LSPs we were receiving. Not only was the name not resolving, the sequence and checksum were zero. How can we possibly have an LSP which has no sequence number at all?
Even weirder, as I refreshed the ISIS outputs, the LSPs started resolving, suddenly popping up with names and non-zero sequences and checksums. I stayed on the phone with the customer for several hours until finally every LSP had resolved and the customer had full reachability. “Don’t do anything to the router until I get back to you,” I said before hanging up. If only he had listened.
I was about to pack up for the day when I got a call from our hotline. The customer had called in and escalated to a P1 after reloading the router. The entire link state database was zeroed out again, and the network was down. He had only a short maintenance window in which to work, and now he had an outage. It was 6pm. I knew I wasn’t going home for a while.
Whatever was happening was well beyond my ISIS expertise. Even in the routing protocols team, it was hard to find deep knowledge of ISIS. I needed an expert, and Abe Martey, who sat across from me, literally wrote the book on ISIS. The Cisco Press book, that is. The only issue: Abe had decided to take PTO that week. Of course. I pinged a protocols escalation engineer, one of our best BGP guys. He didn’t want anything to do with it. Finally I reached out to the duty manager and asked for help. I also emailed our internal mailers for ISIS, but after 6pm I wasn’t too optimistic.
Why were we seeing what appeared to be invalid LSPs? How could an LSP even have a zero checksum or sequence number? Why did they seem to clear out, and why so slowly? Did the upgrade to the PRP have anything to do with it? Was it hardware? A bug? As a TAC engineer, you have to consider every single possibility, from A to Z.
The duty manager finally got Sanjeev, an “ISIS expert” from Australia, on the call. The customer may not realize this while a case is being handled, but if it’s complex and high priority, there is often a flurry of instant messaging going on behind the scenes. We had a chat room up, and as the “expert” listened to the description of the problem and looked at the notes, he typed in the window: “This is way over my head.” Great, so much for expertise. Our conversation with the customer was getting heated as his frustration with the lack of progress escalated. The so-called expert asked him to run a command, which another TAC engineer had suggested.
“Fantastic,” said the customer, “Sanjeev wants us to run a command. Sanjeev, tell us, why do you want to run this command? What’s it going to do?”
“Uh, I’m not sure,” said Sanjeev, “I’ll have to get back to you on that.”
Not a good answer.
By 8:30 PM we also had a senior routing protocols engineer in the chat window. He seemed to think it was a hardware issue and was scraping the error counters on the line cards. The dedicated Advanced Services NCE for the account also signed on and was looking at the errors. It’s a painful feeling knowing you and the customer are stranded, but we honestly had no idea what to do. Because the other end of the problem was a Juniper router, JTAC came on board as well. We may have been competitors, but we were professionals and put it aside to best help the customer.
Looking at the chat transcript, which I saved, is painful. One person suggests physically cleaning the fiber connection. Another thinks it’s memory corruption. Another believes it is packet corruption. We schedule a circuit test with the customer to look for transmission errors.
All the while, the 0x0000 LSPs were re-populating with legitimate information until, by 9pm, the ISIS database was fully converged and routing was working again. “This time,” I said, “DO NOT touch the router.” The customer agreed. I headed home at 9:12pm, secretly hoping they would reload the router so the case would get requeued to night shift and taken off my hands.
In the morning we got on our scheduled update call with the customer. I was tired, and not happy to make the call. We had gotten nowhere in the night, and had not gotten helpful responses to our emails. I wasn’t sure what I was going to say. I was surprised to hear the customer in a chipper mood. “I’m happy to report Juniper has reproduced the problem in their lab and has identified the problem.”
There was a little bit of wounded pride knowing they found the fix before we did, but also a sense of relief to know I could close the case.
It turns out that the customer, around the same time they installed the PRP, had attempted to normalize the configs between the Juniper and Cisco devices. They had mistakenly configured a timer called the “LSP pacing interval” on the Juniper side. This controls the rate at which the Juniper box sends out LSPs. They had thought they were configuring the same timer as the LSP refresh interval on the Cisco side, but they were two different things. By cranking it way up, they ensured that the hundreds of LSPs in the database would trickle in, taking hours to converge.
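For illustration, the two timers being confused look roughly like this. The syntax is from memory and simplified, and the interface name is a placeholder; on IOS the refresh interval (seconds) controls how often a router re-originates its own LSPs, while on Junos the per-interface lsp-interval (milliseconds) paces how quickly LSPs are transmitted to a neighbor:

```
! Cisco IOS -- refresh interval: how often we re-originate our own LSPs (seconds)
router isis
 lsp-refresh-interval 1800

# Junos -- pacing interval: gap between successive LSP transmissions (milliseconds)
set protocols isis interface ge-0/0/0 lsp-interval 100
```

Crank the Junos pacing value high enough and a database of hundreds of LSPs trickles across the link one LSP at a time, which is exactly the hours-long "convergence" the customer saw.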
Why the 0x0000 entries then? It turns out that in the initial exchange, the ISIS routers share with each other what LSPs they have, without sending the full LSP. Thus, in Cisco ISIS databases, the 0x0000 entry acts as a placeholder until complete LSP data is received. Normally this period is short and you don’t see the entry. We probably would have found the person who knew that eventually, but we didn’t find him that night and our database of cases, newsgroup postings, and bugs turned up nothing to point us in the right direction.
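The placeholder behavior above can be sketched in a few lines. This is a hypothetical illustration, not IOS code; the class and method names are my own invention. The idea: a CSNP lists the LSP IDs the neighbor holds, and for any ID not yet in the local database, a zeroed stub is stored until the full LSP arrives.

```python
# Hypothetical sketch of the 0x0000 placeholder mechanism (not real IOS code).
class IsisDatabase:
    def __init__(self):
        self.lsps = {}  # lsp_id -> (seq_num, checksum)

    def receive_csnp(self, advertised_ids):
        """Process a Complete Sequence Number PDU: note every LSP the
        neighbor claims to have, and return the IDs we still need."""
        missing = []
        for lsp_id in advertised_ids:
            if lsp_id not in self.lsps:
                # Placeholder: seq 0x00000000, checksum 0x0000 -- exactly
                # what "show isis database" displayed during the outage.
                self.lsps[lsp_id] = (0x00000000, 0x0000)
                missing.append(lsp_id)
        return missing  # in real ISIS these would be requested via PSNPs

    def receive_lsp(self, lsp_id, seq_num, checksum):
        """A full LSP arrives; the placeholder is overwritten."""
        self.lsps[lsp_id] = (seq_num, checksum)

    def unresolved(self):
        return [i for i, (seq, _) in self.lsps.items() if seq == 0]

db = IsisDatabase()
need = db.receive_csnp(["0651.8412.7001.00-00", "bb1-sjc.00-00"])
db.receive_lsp("bb1-sjc.00-00", 0x0000041E, 0xF97D)
print(db.unresolved())  # only the still-pending LSP shows the zeroed stub
```

With aggressive pacing on the sending side, the full LSPs arrive very slowly, so the zeroed stubs linger for hours instead of the usual fraction of a second.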
I touched a couple thousand cases in my time at TAC, but this case I remember even 10 years later because of the seeming complexity, the simplicity of the resolution, the weirdness of the symptoms, and the distractors like the PRP upgrade. Often a major outage sends you in a lot of directions and down many rat holes. I don’t think we could have done much differently, since the config error was totally invisible to us. Anyway, if Juniper and Cisco can work together to solve a customer issue, maybe we should have hope for world peace.
Thanks for the new article – I love reading these!
Thanks Matt, I appreciate it!
Thanks for the articles. I’ve gone through almost all of the tales and they are excellent examples!
As someone currently working in a technology I had never seen before (the UC part), I can say this job teaches you a lot and is challenging, but at the same time it is very stressful, at least for me.
When a priority case comes in, you don’t even have time to open a doc or do research, which is mandatory if you are a completely new guy, since you still do not know how to walk.
Thanks Nikolay! It’s a tough job, especially when working on a new technology. I experienced that myself when I moved to the service provider side, since I had never touched the GSR, MPLS, etc. Good luck!
I read your article “In Praise of Vendor Lock-In” before this one, and now I understand why vendor lock-in can be a blessing both for a TAC engineer and for the customer!
The line that struck me: “Often a major outage sends you in a lot of directions and down many rat holes.” Love it.