
There’s a lot of talk about networking simplicity these days. In fact, there’s been a lot of talk about networking simplicity for as long as I can remember. The drive to simplify networking has certainly been the catalyst for many new products, most (but not all) unsuccessful. Sometimes we forget that while networking has some inherent complexities (it’s a large distributed system with multiple OSes, protocols, and media types), much of the complexity can be attributed to humans and their choices. IPv4 is a good example of this.

When I got into network engineering, I had assumed that network protocols were handed down from God and were immaculate in their perfection.  Reading Radia Perlman’s classic book Interconnections changed my understanding.  Aside from her ability to explain complex topics with utter clarity, Perlman also exposed the human side of protocol development.  Protocols are the result of committees, power politics, and the limitations of human personality.  Some protocols are obviously flawed.  Some flaws get fixed, but widely deployed protocols, like IPv4, are hard to fix.  Of course, v6 does remedy many of the problems of v4, but it’s still IP.

My vote for simplest protocol goes to AppleTalk. When I was a young network guy, I mostly worked on Mac networks. This was in the beige-box era before Jobs made Apple “cool” again. The computers may have been lame, but Apple really had the best networking available in the 1990s. I’ve written in the past about my love for LocalTalk and its eminently flexible alternative, PhoneNet. But the AppleTalk protocol suite was phenomenal as well.

N.B.  My description of AppleTalk protocol mechanics is largely from memory.  Even the Wikipedia article is a bit sparse on details.  So please don’t shoot me if I misremember something.

In the first place, you didn’t need to do anything to set up an AppleTalk network. You just connected the computers together and switched either the printer or modem port into a network port. Auto-configuration was flawless. Without any DHCP server, AppleTalk devices figured out what network they were on and acquired an address. This was done by first probing for a router on the network, and then randomly grabbing an address. The host then broadcast its address, and if another host was already using it, it would back off and try another one. AppleTalk addresses consisted of a two-byte network number, equivalent to the “network” portion of an IP subnet, and a one-byte host address (equivalent to the “host” portion of an IP subnet). If the host portion of the address is only one byte, aren’t you limited to 255 (or so) addresses? No! AppleTalk (Phase 2) allowed aggregation of contiguous networks into “cable ranges”. So I could have a cable range of 60001-60011, eleven networks on the same media, and with roughly 253 usable host addresses per network number I could now have around 2,800 end stations, at least in theory.
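
To make the probe-and-retry idea concrete, here’s a minimal Python sketch of the conflict-detection loop. This is illustrative only: the function name and the shared set are made up, and the real exchange (AARP, if memory serves) worked with broadcast probes on the wire, not a shared data structure.

```python
import random

# Toy simulation of AppleTalk-style address grabbing: pick a random node ID,
# probe for a conflict, and retry on collision. The "wire" set stands in for
# broadcast probes on the actual cable; everything here is illustrative.
def acquire_address(wire):
    while True:
        candidate = random.randint(1, 253)  # 0 and 255 (and, if memory serves, 254) were reserved
        if candidate in wire:               # a probe "answered": address in use
            continue                        # back off and try another
        wire.add(candidate)                 # claim it and announce ourselves
        return candidate

in_use = {7, 42}                # node IDs already claimed on this cable
print(acquire_address(in_use))  # e.g. 118
```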

Routers did need some minimal configuration, and support for dynamic routing protocols was a bit light. Once the router was up and running, it would advertise “zones,” which showed up on the end user’s computer in an application called the Chooser. They might see “1st floor”, “2nd floor”, “3rd floor”, for example, or “finance”, “HR”, “accounting”; however you chose to divide things. If they clicked on a zone, they would see all of the AppleTalk file shares and printers in it. You didn’t need to point end stations at a “default gateway”; they simply discovered their router by broadcasting for it at startup.
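
The Chooser’s flow maps to something like the sketch below. To be clear, this is a toy stand-in: zones came from ZIP (the Zone Information Protocol) and the per-zone service lookups from NBP (the Name Binding Protocol), and every name and data structure here is invented for illustration.

```python
# Toy stand-in for what ZIP (zone lists) and NBP (name lookups) provided.
SERVICES = {
    "1st floor": [("Finance LaserWriter", "LaserWriter"),
                  ("Shared Files", "AFPServer")],
    "2nd floor": [("Design LaserWriter", "LaserWriter")],
}

def list_zones():
    # A real host asked its router for the zone list (ZIP).
    return sorted(SERVICES)

def lookup(zone, service_type):
    # A real host broadcast an NBP lookup, roughly "=:LaserWriter@zone".
    return [name for name, kind in SERVICES.get(zone, []) if kind == service_type]

print(list_zones())                        # ['1st floor', '2nd floor']
print(lookup("1st floor", "LaserWriter"))  # ['Finance LaserWriter']
```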

AppleTalk networks were a breeze to set up and simple to administer. Were there downsides? The biggest one was the chattiness of the protocols. Auto-configuration was accomplished with a lot of broadcast traffic, and in those days bandwidth was at a premium. (I believe LocalTalk, and PhoneNet with it, ran at around 230 Kbps.) Still, I administered several large AppleTalk networks and was never able to quantify any performance hit from the broadcasts. Like any network, it required at least some thinking to contain network (cable range) sizes.

AppleTalk was done away with as the Internet arose and IP became the dominant protocol. For hosts on LocalTalk/PhoneNet networks, which did not support IP natively, we initially tunneled IP over AppleTalk. Ethernet-connected Macs had a native IP stack. The worst thing about AppleTalk was the flaky protocol stack (Open Transport) in System 7.5, but that was a flaw in the implementation, not the protocol design.

I’ll end with my favorite Radia Perlman quote:  “We need more people in this industry who hate computers.”  If we did, more protocols might look like AppleTalk, and industry MBAs would need something else to talk about.

I’m thinking of doing some video blogging and kicking it off with a series giving my thoughts on technical certifications. Are they valuable or just a vendor racket? Should you bother to invest time in them? Why do the questions sometimes seem plain wrong?

Meanwhile, a little NetStalgia about the first technical certification I (almost) attempted: the Apple Certified Server Engineer.

Back in the 1990s, I worked for a small company doing desktop and network support. When I say small, I mean small. We had 60 employees, and 30 of them had computers. Still, it was where I first got into the computer industry, and I learned a surprising amount as networking was just starting to take off.

I administered a single AppleShare file server for the company, and I even set up my very first router, a Dayna Pathfinder.  I was looking for more, however, and I didn’t have much of a resume.  A year and a half of desktop support for 30 users was not impressing recruiters.  I felt I needed some sort of credential to prove my worth.

At the time, Microsoft certifications, in particular the MCSE, were a hot commodity. Apple decided to introduce its own program, the ACSE. Bear in mind, this was back before Steve Jobs returned to Apple. In the “beige-box” era of Apple, their products were not particularly popular, especially with corporations. Nonetheless, I saw the ACSE as my ticket out of my pathetic little job, and I set to work on preparing for it. If memory serves (and I can find little in the Wayback Machine), the certification consisted of four exams covering AppleTalk networking, AppleShare file servers, and backup.

Apple outsourced the certification development to a company called Network Frontiers, and its colorful leader, Dorian Cougias.  I had seen Dorian present at Macworld Expo once, and he clearly was very knowledgeable.  (He asked the room “what’s the difference between a switch and a bridge?” and then answered his own question.  “Marketing.”  Good answer.)  Dorian wrote all of the textbooks required for the program.  He may have known his stuff, but I found his writing style insufferable.  The books were written in an overly conversational tone, and laced with constant bad jokes.  (“To remove the jacketing of the cable you need a special tool…  I’d call it a ‘stripper’ but my mother is reading this.”  Ugh…)  A little levity in technical documentation is nice, but this got annoying fast.

This was in the era before Google, and despite my annoyance I did scour the books for scarce information on how networking actually worked. I didn’t really study them, however, which you need to do if you want to pass a test. I downloaded the practice exam onto my PowerBook 140 laptop and fired it up. I assumed that, with my day-to-day work and having read the books, I’d pass the sample exam and be ready for the real deal.

Instead, I scored 40%.  I used to be a bit dramatic back in my twenties, and went into a severe depression.  “40%???  I know this stuff!  I do it every day!  I read the book!  I’ll never get out of this stupid job!!!”  I had my first ocular migraine the next day.

In reality, it doesn’t matter how good or bad, easy or hard an exam is. You’re not going to pass it on the first go without even studying. And this was a practice exam. I should have taken it four or five times, as I eventually learned to do with the Boson practice exams for my CCNP.

Instead, I gave up. And, shortly after, Apple cancelled the program due to a lack of interest. Good thing I didn’t waste a lot of time on it. Of course, I managed to get another job, and passed a few tests along the way.

I learned a few things about technical certifications from that. In the first place, they often involve learning a lot of material that may not be practical. Next, you can’t pass them without studying for them. Also, the value and long-term viability of the exams are largely up to the whims of the vendors. And finally, don’t trust a certification when the author of the study materials thinks he’s Jerry Seinfeld.

 

I was hoping to do a few technical posts, but my lab is currently being moved, so I decided to kick off another series of posts I call “NetStalgia”. The TAC tales continue to be popular, but I only spent two years in TAC, and most cases are pretty mundane and not worthy of a blog post. What about all those other stories I have from various times and places working on networks? I think there is some value in those stories, not least because they show where we’ve come from, but also because I think there are some universal themes. So, allow me to take you back to 1995, to a now-defunct company where I first ventured to work on a computer network.

I graduated college with a liberal arts degree, and like most liberal arts majors, I ended up working as an administrative assistant. I was hired on at a company that both designed and built museum exhibits. It was a small company of around 60 people: half worked as fabricators, building the exhibits, while the other half were designers and office personnel. The fabricators consisted of carpenters, muralists, large- and small-scale model builders, and a number of support staff. The designers were architects, graphic designers, and museum design specialists. Only the office workers and designers had their own computers, so it was quite a small network of 30 machines, all Macs.

When the lead designer was spending too much time maintaining the computer network, the VP of ops called me in and asked me to take over, since I seemed to be pretty good with computers and technical stuff, like fixing the fax machine.

Back then, believe it or not, PCs did not come with networking capabilities built in. You had to install a NIC if you wanted to connect to a network. Macs, however, did come with an Apple-proprietary interface called LocalTalk. The LocalTalk interface consisted of a round serial port, and with the appropriate connectors and cables, you could connect your Macs in a daisy-chain topology. Thick serial cables in short lengths were a big limitation for networking an office, so an enterprising company named Farallon came up with a better solution, called PhoneNet. PhoneNet plugged into the rear LocalTalk port, but instead of using serial cables it converted the LocalTalk signal so that it ran on a single twisted pair of wires. The brilliance of this was that most offices had phone jacks at every desk, and PhoneNet could use the spare wires in the jacks to carry its signal. In our case, we had a digital phone system that consumed two pairs of our four-pair Cat 3 cables, so we could dedicate one pair to PhoneNet/LocalTalk and call it good.

PhoneNet connector with resistor

We used an internal email system called SnapMail, from Cassady &amp; Greene. SnapMail was great for small companies because it could run in a peer-to-peer mode, without the need for an expensive server. In this mode, an email you sent to a colleague went directly to their machine. The obvious problem with this is that if I work the day shift and you work the night shift, our computers will never be on at the same time, and you won’t get my email. Thankfully, C&amp;G also offered a server option for store-and-forward messaging, but even with the server enabled, it would still attempt peer-to-peer delivery if both sender and receiver were online.
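
The delivery logic boiled down to something like the following sketch (hypothetical; the class and function names are mine, not SnapMail’s):

```python
class Peer:
    def __init__(self, name, online=False):
        self.name, self.online, self.inbox = name, online, []

class Server:
    def __init__(self):
        self.held = []  # messages stored for offline recipients

def send_message(msg, recipient, server=None):
    """Try direct peer-to-peer delivery first; fall back to the server."""
    if recipient.online:
        recipient.inbox.append(msg)                # direct delivery
    elif server is not None:
        server.held.append((recipient.name, msg))  # store-and-forward
    else:
        print(f"{recipient.name} is offline and there is no server: message lost")

night_owl = Peer("night-shift designer")
send_message("exhibit drawings attached", night_owl)            # lost!
send_message("exhibit drawings attached", night_owl, Server())  # queued for later
```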

One day I started getting complaints about the reliability of the email system. Messages were being sent but not delivered. Looking at some of the troublesome devices, I could see that they were only partially communicating with each other, and the failed messages were not being queued on the server. Each peer seemed to think the other was online, when in fact there was some communication breakdown.

Determining a cause for the problem was tough.  Our network used the AppleTalk protocol suite and not IP.  There was no ping to test connectivity.  I had little idea what to do.

As I mentioned, PhoneNet used a single pair of phone wiring, and as we expanded, the way I added new users was as follows: when a new hire came on board, I would connect a new phone jack for him, and then go to the 66 punch-down block in a closet in the cafeteria and tie the wires into another operative jack. Then I would plug a little RJ11 with a terminating resistor into the empty port of the LocalTalk dongle, because the dongle had a second port for daisy-chaining, and this is what we were supposed to do when it was not in use. This was a supported configuration known in PhoneNet terminology as a “passive star”: passive, because there was nothing in between the stations. This being before Google, I didn’t know that Farallon only supported four branches on a passive star. I had 30. Not only did we have too many stations and too much cable length, but the electrical load on this giant circuit was huge, because all of those terminating resistors sat in parallel across the same pair, as the quick calculation below shows.
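
Here’s the back-of-the-envelope math. I’m assuming terminators on the order of 120 Ω (the exact PhoneNet value is an assumption from memory); the point is what happens when dozens of them end up in parallel on one circuit:

```python
# Parallel resistance: 1/R_total = sum(1/R_i), so N identical terminators
# give R/N. The 120-ohm value is an assumption from memory, not a spec.
R_TERM = 120.0  # ohms (assumed)

def parallel(r, n):
    return r / n

print(parallel(R_TERM, 4))   # 30.0 ohms: a supported 4-branch passive star
print(parallel(R_TERM, 30))  # 4.0 ohms: our 30-branch monster, loading the line badly
```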

I had a walkthrough with our incredulous “systems integrator”, who refused to believe we had connected so many devices without a hub, which was called a “Star Controller” in Farallon terminology.  When he figured out what I had done, we came up with a plan to remove some of the resistors and migrate the designers off of the LocalTalk network.

Some differences between now and then:

  • Networking capability wasn’t built in on PCs, but it was on Macs.
  • I was directly wiring together computers on a punch-down block.
  • There was no Google to figure out why things weren’t working.
  • We used peer-to-peer email systems.

Some lessons that stay the same:

  • Understand thoroughly the limitations of your system.
  • Call an expert when you need help.
  • And of course:  don’t put resistors on your network unless you really need to!

 

There is one really nice thing about having a blog whose readership consists mainly of car insurance spambots: I don’t have to feel guilty when I don’t post anything for a while. I had started a series on programmability, but I managed to get sidetracked by the inevitable runup to Cisco Live that consumes Cisco TMEs, and so that thread got a bit neglected.

Meanwhile, an old article by the great Ivan Pepelnjak got me out of post-CL recuperation and back onto the blog. Ivan’s article talks about how vendor lock-in is inevitable. Thank you, Ivan. Allow me to go further and write a paean in praise of vendor lock-in. Now, this might seem predictable given that I work at Cisco, and previously worked at Juniper. Of course, lock-in is very good for the vendor who gets the lock. However, I also spent many years in IT and worked at a partner, and I can say from experience that I prefer to manage single-vendor networks. At least, as single-vendor as is possible in a modern network. Two stories will help to illustrate this.

In my first full-fledged network engineering job, I managed the network for a large metropolitan newspaper (back when such a thing existed). The previous network team had installed a bunch of Foundry gear. They also had a fair amount of Cisco. It was all first-generation, and the network was totally unstable. Foundry actually had some decent hardware, but their early focus was IP. We were running a typical 1990s multi-protocol network, with AppleTalk, IPX, SNA, and a few other things thrown in. The AppleTalk/IPX stack on the Foundry gear was particularly bad, and when it interacted with Cisco devices, we had a real mess.

We ended up tossing the Foundry and going 100% Cisco.  We managed to stabilize the network, and now we were dealing with a single vendor instead of two.  This made support and maintenance contract management far easier.

Second story: when I worked for the partner, I had to do a complete retrofit of the network for a small school district. They had a ton of old HP gear and were upgrading their PBX to a Cisco VoIP solution. This was in the late 2000s. I did the data network, and my partner did the voice setup. The customer didn’t have enough money to replace all their switches, so a couple of classrooms were left with HP.

Well, guess what. In all the Cisco-based rooms, I plugged in the phones and they came up fine. The computers hanging off the phones’ internal switch ports also came up fine, on the correct data VLAN. But in the classrooms with the HP switches, I spent hours trying to get the phones and switches working together.

There is a point here which is obvious but needs restating. If Cisco makes the switches, the routers, the firewalls, and the phones, the chances of them all working together are much higher than if several vendors are in the mix. Even with Cisco’s internal BU structure, it is far easier to call a meeting between different departments within one company than to fix problems that occur between vendors. Working on Software-Defined Access, I learned very quickly how well we can pull together a team from different product groups, since our product involves switching (my BU), ISE, wireless, and APIC-EM.

As I mentioned above, the other advantage is easier management of the non-technical side of things.  Managing support contracts, and simply having one throat to choke when things go wrong are big advantages of a single-vendor environment.

All this being said, from a programmability perspective we are committed to open standards. We realize that many customers want a multi-vendor environment and tools like OpenConfig with which to manage it. Despite Cisco’s reputation, we’re here to make our customers happy, not to force them into anything. From my point of view, however, if I ever go back to managing a network, I hope it is a single-vendor network and not a Franken-network.

Meanwhile, if you’d like to hear my podcast with Ivan, click here.

In the second article in the “Ten Years a CCIE” series, I discuss the Routing and Switching written exam, and the changes to the CCIE exam in the early 2000s.

Passing the written

As was most common in the early 2000s, I attempted my Routing and Switching exam first. Having passed the CCNP exams, my first order of business was to study for the CCIE written exam. Even though I had heard the CCIE written exam was quite easy, I approached it much like I approached a graduate school research project. I didn’t care how easy it was; I wanted to learn the subject material well. I also didn’t want to fail. I had not yet failed a Cisco exam, and I made a pledge to myself: the first Cisco exam I would allow myself to fail would be the CCIE lab.