

Sun Ultra 10

It was four o’clock on a Sunday morning in 2001.  I had been up all night sitting in our data center at the San Francisco Chronicle with our Unix guy.  He was handing off responsibility for managing the firewalls to the network team, and he was walking me through the setup.  He’d been trying all night to get failover working between the two firewalls, and so far nothing was going right.

We were using Checkpoint, which ran on Solaris.  Despite my desire to be Cisco-only, I was interested in security and happy to be managing the firewalls.  Still, looking at the setup our Unix guy had conceived, my enthusiasm was waning.

He drew a complex diagram on a piece of paper, showing the two Solaris servers.  There was no automatic failover, so any failure required manual intervention.  He had two levels of failover.  First, he was using RAID mirroring to duplicate the main hard disk over to a secondary hard disk.  If the main disk failed, we’d need to edit some text files with vi to somehow bring the Sparc Ultra 10 up on the second drive.  If the Ultra 10 failed entirely, we would have to edit some text files on the second Ultra 10 to bring it up with the configuration of the first.  With Unix guys, it’s always about editing text files in vi.

Aside from being cumbersome, it didn’t work.  We’d been at it for hours, and whatever disk targets he changed in whatever files, failover wasn’t happening.  At the newspaper, we had until 5am Sunday to do our work, after which everything had to be back online.  And we were getting concerned it wouldn’t come back at all.

Finally the Unix guy did manage to get the firewall booted up and running again.  On Monday I called Checkpoint and asked how we could get off Solaris.  They made a product called SecurePlatform, which installed a hardened Linux and Checkpoint all with one installer.  I ordered it at once, along with two IBM servers.

The software worked as promised, and I brought up a new system, imported our rules, and did interface and box failover with no problem.  I told the Unix guy to decommission his Ultra 10s.  He was furious that there was a *nix system on the network his team wasn’t managing.  I told him it was an appliance and there was no customization allowed.  The new system worked flawlessly and I didn’t even have to touch vi.

Network engineers are used to relatively simple devices that just work.  Routers and switches can be upgraded with a single image, and device and OS-level management is mostly under the hood.  While a lot of network engineers like Linux or Unix and have to work with these operating systems, at the end of the day when we want to do our job, we want systems that install and upgrade quickly, and fail over seamlessly.  As networking vendors move more into “software”, we need to keep that in mind.


With Coronavirus spreading, events shut down, the Dow crashing, and all the other bad news, how about a little distraction?  Time for some NetStalgia.

Back in the mid 1990’s, I worked at a computer consulting firm called Mann Consulting.  Mann’s clientele consisted primarily of small ad agencies, ranging from a dozen people to a couple hundred.  Most of my clients were on the small side, and I handled everything from desktop support to managing the small networks these customers had.  This was the time when the Internet took the world by storm–venture capitalists poured money into the early dotcoms, who in turn poured it into advertising.  San Francisco ad agencies were at the heart of this, and as they expanded they drew on companies like Mann to build out their IT infrastructure.

I didn’t particularly like doing desktop support.  For office workers, a computer is the primary tool they use to do their job.  Any time you touch their primary tool, you have the potential to mess something up, and then you are dealing with angry end users.  I loved working on networks, however small they were.  For some of these customers, the network consisted of a single hub (a real hub, not a switch!), but for others it was more complicated, with switches and a router connecting them to the Internet.

Two of my customers went through DDoS episodes.  To understand them, it helps to look at the networks of the time.

Both customers had roughly the same topology.  A stack of switches was connected together via back-stacking.  The entire company, because of its size, was in a single layer 2/layer 3 domain.  No VLANs, no subnetting.  To be honest, at the time I had heard of VLANs but didn’t really understand what they were.  Today we all use private, RFC 1918 addressing for end hosts, except for DMZs.  Back then, our ISP assigned us a block of addresses and we simply applied the public addresses directly to the end-stations themselves.  That’s right, your laptop had a public IP address on it.  We didn’t know a thing about security; both companies had routers connected directly to the Internet, without even a simple ACL.  I think most companies were figuring out the benefits of firewalls at the time, but we also had a false sense of security because we were Mac-based, and Macs were rarely hacked back then.
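
As a quick aside, the distinction is easy to check with Python’s standard `ipaddress` module; the public address below is drawn from one of Cisco’s reserved documentation blocks, not a real host:

```python
import ipaddress

# RFC 1918 private space -- what end hosts get today
print(ipaddress.ip_address("10.1.2.3").is_private)         # True

# A public address, like the ones we put directly on desktops back then
print(ipaddress.ip_address("209.165.200.230").is_private)  # False
```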

One day, I came into work at a now-defunct ad agency called Leagas Delaney.  Users were complaining that nothing was working–they couldn’t access the Internet and even local resources like printing were failing.  Macs didn’t even have ping available, so I tried hitting a few web sites and got the familiar hung browser.  Not good.

I went into Leagas’ server room.  The overhead lights were off, so the first thing I noticed was the lights on the switches.  Each port had a traffic light, and every one was solid, not blinking the way they usually did.  When they did occasionally blink, they all blinked in unison.  Not good either.  Something was amiss, but what?

Wireshark didn’t exist at the time.  There was a packet sniffer called Etherpeek available on the Mac, but it was pricey–very pricey.  Luckily, you could download it with a demo license.  It’s been over 20 years, so I don’t quite recall how I managed to acquire it with the Internet down and no cell phone tethering, but I did.  Plugging the laptop into one of the switches, I began a packet capture and immediately saw a problem.

The network was being aggressively inundated with packets destined to the subnet broadcast address.  For illustration, I’ll use one of Cisco’s reserved banks of public IP addresses.  If the subnet was 209.165.200.224/27, then the broadcast address would be 209.165.200.255.  A packet sent to this address is received by every host in the subnet, just like a packet sent to the generic broadcast address of 255.255.255.255.  Furthermore, because this address was not generic but carried the subnet prefix, a packet sent to it could be routed through the Internet to our site.  This is known as a directed broadcast.

Now, imagine you spoof the source address to be somebody else’s.  You send a single packet to a network with, say, 100 hosts, and those 100 hosts reply back to the source address, which is actually not yours but belongs to your attack target.  This was known as a smurf attack, and they were quite common at the time.  There is really no good reason to allow these directed broadcasts, so after I called my ISP, I learned how to shut them down with the “no ip directed-broadcast” command.  Nowadays this sort of traffic isn’t forwarded by default, most companies have firewalls, and end hosts don’t use public IP addresses, so the attack wouldn’t work anyhow.
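
The broadcast arithmetic is easy to verify with Python’s standard `ipaddress` module, using the same Cisco documentation subnet from the example:

```python
import ipaddress

# The illustrative /27 from the story: 32 addresses, .224 through .255
net = ipaddress.ip_network("209.165.200.224/27")

print(net.network_address)    # 209.165.200.224
print(net.broadcast_address)  # 209.165.200.255
print(net.num_addresses)      # 32

# One spoofed packet to the directed-broadcast address is answered by
# every live host, so with ~30 usable addresses per /27 the attacker's
# traffic is amplified roughly 30-fold back at the victim.
print(net.num_addresses - 2)  # 30 usable host addresses
```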

My second story is similar.  While still working for Mann, I was asked to fill in for one of our consultants who was permanently stationed at an ad agency as their in-house support guy.  He was going on vacation, and my job was to sit in the server room/IT office and hopefully not do anything at all.  Unfortunately, the day after he left, a panicked executive came into the server room complaining that the network was down.  So much for a quiet week.

As I walked around trying to assess the problem, of course I overheard people saying “see, Jon leaves, they send a substitute, and look what happens!”  People started asking me if I had “done” anything.

A similar emergency download of a packet sniffer immediately led me to the source of the problem.  The network was flooded with broadcast traffic from a single host, a large-format printer.  I tracked it down, unplugged it, and everything started working again.  And yet several employees still seemed suspicious I had “done” something.
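
Finding a broadcast talker in a capture boils down to tallying frames per source address.  Here’s a minimal sketch of that kind of counting, over a few hypothetical capture records (the MAC addresses are made up for illustration):

```python
from collections import Counter

BROADCAST = "ff:ff:ff:ff:ff:ff"

# Hypothetical (source MAC, destination MAC) pairs from a capture
frames = [
    ("00:a0:c9:11:22:33", BROADCAST),            # the misbehaving printer
    ("00:a0:c9:11:22:33", BROADCAST),
    ("08:00:07:aa:bb:cc", "00:05:02:de:ad:01"),  # normal unicast traffic
    ("00:a0:c9:11:22:33", BROADCAST),
]

# Count broadcast frames by source; the top talker is the culprit
broadcasters = Counter(src for src, dst in frames if dst == BROADCAST)
print(broadcasters.most_common(1))  # [('00:a0:c9:11:22:33', 3)]
```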

Problems such as these led to the invention of new technologies to stop directed broadcasts and contain broadcast storms.  It’s good to remember that there was a time before these things existed, and before we even had free packet sniffers.  We had to improvise a lot back then, but we got the job done.