
I feel a bit of guilt for letting this blog languish for a while. I can see from the response to my articles explaining confusing Juniper features that my work had some benefit beyond my own edification, so I hate to leave unfinished articles that might have been helpful. In addition, WordPress is not easy to maintain, and I keep losing comment notifications, which means that when I am not logging in regularly, I miss the opportunity to respond to kind words and questions.

As it is, my work explaining Juniper to the masses will have to be put on hold, as I have left Juniper after six years and returned to my old employer, Cisco! I worked at Juniper longer than I had anywhere else, and it’s amazing to consider that I just closed the door on more than half a decade. But even after attaining my JNCIE, I always felt like a Cisco guy at heart, and so here I am again. A few random thoughts, then:

1. I interviewed for a number of jobs, and now that I am hired I can say that I really hate interviewing. My interviews at Cisco were very fair and reasonable. Just for the heck of it I did a phone screen with Google and completely bombed it. I’m not ashamed to admit that. I’m not supposed to reveal their questions, and I won’t, but they were mostly basic questions about TCP functionality and MAC/ARP behavior, and it’s amazing how you forget some of the basics over the years. I wasn’t really interested in working there, so I did no preparation, even though the recruiter warned me to brush up on the basics. I just figured my work and blog showed that I am at least somewhat technical. I plan to write some posts on the art of technical interviewing, but I was certainly underwhelmed by Google’s screening process, as I’m sure they were by my performance. I really wanted the Cisco job, and what a difference attitude makes! (Oh, and I completely munged an MPLS FRR node-and-link-protection question less than a year after passing the JNCIE-SP. Uh, whoops.)

2. I bear Juniper no ill will. It was an interesting six years. When I came on board, during the Kevin Johnson years, it was all rah-rah pep talks about how we were going to be the next $10 billion company (errr, no…), followed by a plethora of product disasters. Killing off NetScreen handed the firewall market to Palo Alto and Fortinet, and amazingly resuscitated Check Point. Junos Space was a disaster, and Pulse only slightly less so. QFabric was not a bad idea, but it was far too complex: you had to buy a professional services contract with the product because it was too complicated to install on your own. And yet it supposedly simplified the data center? There was a fiasco with our load balancer product. And then came the activist investors with their Integrated Operating Plan. I will permanently loathe activist investors. Juniper was hurting, and they just magnified the hurt. There’s nothing worse than a bunch of generic business types who wouldn’t know a router if they saw one trying to tell a router company how to run its business. They thought they could apply the same formula you learn in B-school to any company, no matter what it makes or does. Then we had the CEO revolving door.

Despite all of this, as I said, I like Juniper. I did ok there, and there are a lot of people I respect working there. Rami Rahim is a good choice for CEO. I left for personal reasons. They still have some good products and good ideas, and I think competition is always good for the marketplace. For the sake of my friends there, I hope Juniper does well.

3. If you read my bio, you will see that I was THE network architect for Juniper IT, meaning I covered everything. This included (in theory at least) campus LAN, WAN, data center, wireless, network security, etc. I did something in all of these spaces. It was broad knowledge, but not deep. That’s why I did my JNCIE-SP: I was hungering to go deep on something. My new job at Cisco is principal technical strategy engineer for data center. It is an opportunity to go deep rather than broad, and I’m happy to be doing that. The data center space is where it’s at these days, and I can’t wait to get deeper into it.

4. Coming back to Cisco after an eight-year hiatus was bizarre. It was cool to pull up all my old bugs and postings to internal aliases to see what I was doing back then. Heck, I actually sounded like I knew a thing or two. I was thrilled to find out I am on the same team as Tim Stevenson, whose work as a Cat 6K TME I admired when I worked in TAC. Just for fun I walked through my old building and floor (K, floor 2) and nearly fell over when I saw that it looked identical to the day I left. Not only the cubes, but the giant signs for the different teams (e.g. “HTTS AT&T TEAM”) were still hanging there, as though the intervening eight years had never happened.

Unfortunately, I have to leave a few in-progress articles in the dustbin. First, I shouldn’t really be promoting Juniper now that I am working for Cisco. And second, I’ve lost access to VMM, the internal Juniper tool I used to spin up VM versions of Juniper routers. However, I hope to start posting on Cisco topics now that I have access to that gear. Cisco’s products are generally better documented than Juniper’s, but I promise to fill any gaps I find. And I will leave my previous articles up in the hope that they will benefit future engineers who struggle with Junos.

Onwards!

The case came in as a P1, and I knew it would be a bad one. One thing you learn as a TAC engineer is that P1 cases are often the easiest: a router is down, send an RMA. But I knew this P1 would be tough, because it had been requeued three times. The last engineer who had it was good, very good, and it still wasn’t solved. Our hotline gave me a bridge number and I dialed in.

The customer explained to me that he had a 7513 and a 7206 connected by a multilink PPP (MLPPP) bundle of eight T1 lines. The MLPPP interface had mysteriously gone down/down and they couldn’t get it back, and the member links were all up/down. Why they were connecting the routers this way was not a question an HTTS engineer was allowed to ask; we were just there to troubleshoot. While I was on the bridge, they were systematically taking each T1 out of the bundle, putting HDLC encapsulation on it, pinging across, and then putting it back into the MLPPP bundle. This bought me time to look over the case notes.

There were multiple RMAs in the notes. They had RMA’d the line cards and the entire chassis. The replacement 7513 they were shipped had problems, so they RMA’d it a second time; RMA’ing an entire 7513 chassis is a real pain. I perused the configs to see if authentication was configured on the PPP interfaces, but it wasn’t. The up/down state pointed to a PPP problem (the physical layer was fine, but line-protocol negotiation was failing), yet the interface config was plain-vanilla MLPPP.
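For context, a plain-vanilla MLPPP config of that era looks roughly like this. This is a sketch from memory, and the interface numbers and addressing are hypothetical, not from the case:

interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial1/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1

The bundle interface carries the IP address; each member T1 carries no address of its own and simply joins the group. Notice that nothing here asks for authentication, which made the up/down state all the more puzzling.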

They finished testing all of the T1s individually. One of the engineers said, “I think we need another RMA.” I told them to hang on. “Take all of the links out of the bundle and give me an MLPPP bundle with one T1,” I said. “But we tested them all individually!” they replied. “Yes, but you tested them with HDLC. I want to test one link with multilink PPP on it.” They agreed. And with a single link it was still down/down. Now we were getting somewhere. I had them switch which link was the active one. Same problem. Then I had them disable multilink and run straight PPP on a single link. Same thing.
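In IOS terms the distinction between their test and mine is small but crucial. Roughly, with hypothetical interface names again:

! Their per-link test: HDLC encapsulation, then ping across
interface Serial1/0:0
 encapsulation hdlc
 ip address 192.0.2.5 255.255.255.252
!
! My test: the same link as a one-member MLPPP bundle
interface Serial1/0:0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
! Final test: straight PPP, no multilink
interface Serial1/0:0
 encapsulation ppp
 no ppp multilink

HDLC exercises only the physical path, while PPP adds LCP negotiation on top of it. Since HDLC passed on every link and even bare PPP failed, the problem had to be in PPP negotiation itself, not in the T1s and not in the multilink machinery.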

“Can you turn on debug ppp with all options?” I asked. They were nervous about debugging the busy 7513 (debug output can hammer the CPU on a production router), but I convinced them to do it on the 7206. They sent me the logs, and this stood out:

AAA/AUTHOR/LCP: Denied
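(In practice, “debug ppp with all options” boils down to a handful of specific commands. For a link stuck in up/down, the ones I would reach for are roughly:

debug ppp negotiation
debug ppp authentication
debug aaa authorization

On a loaded production router you keep the set as narrow as possible, which is exactly why they balked at debugging the 7513.)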

Authorization failed. But why? Nothing was configured under the interface, so I looked at the top of the config, where the global AAA commands live, and saw this:

aaa authorization network default

And there it was. With network authorization enabled, the router asks AAA for permission to bring up each PPP session (hence the LCP denial in the debug), and evidently nothing was there to grant it. “Guys, could you remove this one line from the config?” I asked. They did. The single PPP link came up. “Let’s do this slowly. Add the single link back into multilink mode.” Up/up. “Now add all the links back.” It was working.
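For the record, the entire fix was one command in global configuration mode:

configure terminal
 no aaa authorization network default
end

One global line, inherited by every PPP session on the box, had been forcing an authorization check that kept getting denied, and no number of chassis RMAs was ever going to fix that.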

It turns out they had a project to standardize their configs across all their routers and had accidentally added that line. They had RMA’d an entire 7513 chassis, twice, over a single line of config. I still can’t believe it got that far.

Some lessons from this story: first, RMAs don’t always fix the problem. Second, even good engineers make stupid mistakes. Third, when troubleshooting, always narrow the scope of the problem and test the smallest configuration you can. And finally, even hard P1s can turn out easy.

Before I worked at TAC, I was pretty careless about how I filled out a TAC case online. For example, when I had to select the technology I was dealing with from the drop-down menu, if I didn’t see exactly what I had, I would pick something at random and figure TAC would sort it out. And then I would get frustrated when I didn’t get an answer on my case for hours. Working in TAC showed me why.

When you open a TAC case and pick a particular technology, your choice determines which queue the case is routed to. For example, if you pick Catalyst 6500, the case ends up in a queue monitored by engineers who are experts on that platform. Under TAC rules (assuming it is a priority 3 case), the engineers have 20 minutes to pick up the case. If they don’t, it turns blue in their display and their duty manager starts asking questions. (In high-touch TAC, where I worked, we didn’t have too many blue cases, but in backbone TAC it wasn’t uncommon to see a ton of blue and even black (over an hour old) cases sitting in a busy queue.)

If the customer categorized his case wrong, it sat in the wrong queue. Now an engineer had to notice the case, review it, determine where it should go, and “punt” it to the appropriate queue, at which point the timers reset and the case went back to waiting.

Imagine for a moment that you are an overworked TAC engineer with 30 minutes left in your shift. You are supposed to clear out your queue and take any remaining cases before the next crew comes on (at least that was the rule in HTTS). You don’t want to take any more cases, however. There is a case sitting in your queue which has turned blue, and your colleagues will not be happy to see it there when they come on shift. Well, you’re an experienced TAC engineer and you know what to do: punt the case to another queue, even if it’s the wrong one. If you pick a busy queue, it will take at least 30 minutes for the engineers on that queue to spot the mis-queue and punt the case back, at which point you are off shift and it’s the next crew’s problem.

My recommendation is to be very careful to select the right menu options when you open a case online with any tech support organization. Make sure you route the case to the right place the first time so you don’t have to wait for engineers and managers to look at it and re-categorize it.