When I attended Cisco Live sometime around the election of Donald Trump, there was a fellow walking around with a red hat with white lettering on it: MAKE CLI GREAT AGAIN. Ha! I love Cisco Live. These are my people.
I remember back when I worked at Juniper, one exec looked at me working on CLI and said, “You know that’s going to be gone soon. It’ll all be GUI.” That was 8 years ago…how’s that going? When I work on CLI (and I still do!), or programming, my wife always says, “How can you stare at that cryptic black screen for hours?” Hey, I’ve been doing it since I was a kid.
The black screen won’t go away, I’m afraid. I’ve recently been learning iOS app development for fun (not profit). It’s surprisingly hard given the number of successful app developers out there. I may be too used to Python to program in Swift, and my hatred of object-oriented programming doesn’t help me when there is no way to avoid it in Swift. Anyways, it took me about a week to sort out the different UI frameworks used in iOS. There are basically three:
- Storyboards. Storyboards are a graphical design framework for UI layout. Using storyboards, you drag and drop UI elements like buttons and text fields onto a miniature iPhone screen.
- UIKit. (Technically storyboards use UIKit, but I don’t know what else to call this.) Most high-end app developers will delete the storyboard in their project and write the UI as code. They actually type in code to tell iOS what UI elements they want, how to position them, and what to do in the event they are selected. Positioning is fairly manual and is done relative to other UI elements.
- SwiftUI. Apple is pushing towards this model and will eventually deprecate the other two. SwiftUI is also a UI-as-code model, but it’s declarative instead of imperative. You tell SwiftUI what you want and roughly how you want to position things, and SwiftUI does the layout for you.
Did you catch my point? The GUI-based layout tool is going away in favor of UI-as-code! The black screen always comes back!
The difference between computer people and non-computer-computer-people (many industry MBAs, analysts, etc.) is that computer people understand that text-based interaction is far more efficient, even if the learning curve is steeper.
Andrew Tanenbaum, author of the classic Computer Networks, typeset his massive work in troff. Troff is a text-based typesetting tool where you enter input like this:
```troff
.ll 3i
.mk a
.ce
Preamble
.sp
We, the people of the United States, in order to form a more perfect Union...
```
Why doesn’t he just use Word? I’ll let Dr. Tanenbaum speak for himself:
All my typesetting is done using troff. I don’t have any need to see what the output will look like. I am quite convinced that troff will follow my instructions dutifully. If I give it the macro to insert a second-level heading, it will do that in the correct font and size, with the correct spacing, adding extra space to align facing pages down to the pixel if need be. Why should I worry about that? WYSIWYG is a step backwards. Human labor is used to do that which the computer can do better. (Emphasis added.)
I myself am not quite enough of a cyborg to use troff (though I use vi), but I have used LaTeX with far better results than Word. (Dr. Tanenbaum says “real authors use troff,” however.)
One of my more obscure interests (I have many) is Gregorian Chant. Chant uses a musical notation which is markedly different from modern music notation, and occasionally I need to typeset it. I use a tool called Gregorio, where I enter the chant like this:
```
(cb3) Ad(d)ór(f’)o(h) te(h’) de(h)vó(hi)te,(h.) (,) la(g)tens(f) Dé(e’)i(d)tas,(d.)
```
The letters in parentheses represent the different musical notes. I once tried typesetting the chant graphically, and it was far more tedious than the above. Why not enter what I want and let the typesetting system do the work?
Aside from mere efficiency, text files can easily be version controlled and diffed. Try that with your GUI tool!
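To make that concrete, here is a minimal sketch in Python of what a diff between two revisions of a plain-text file looks like, using the standard library’s `difflib`. (The router-config lines are invented for illustration; any text file works the same way.)

```python
import difflib

# Two revisions of a hypothetical plain-text config.
old = """hostname edge-router
interface GigabitEthernet0/0
 ip address 10.0.0.1 255.255.255.0
""".splitlines(keepends=True)

new = """hostname edge-router
interface GigabitEthernet0/0
 ip address 10.0.0.2 255.255.255.0
 no shutdown
""".splitlines(keepends=True)

# unified_diff produces the same +/- hunk format that `git diff` shows.
diff = list(difflib.unified_diff(old, new, fromfile="before", tofile="after"))
print("".join(diff))
```

The output pinpoints exactly which lines changed, something no screenshot of a GUI can give you.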
It’s very ironic that many of my customers who use controllers like DNAC or vManage are actually accessing the tool through APIs. They bought a GUI tool, but they prefer the black screen. The controller in this case becomes a point of aggregation for them, a system which at least does discovery and allows some level of abstraction.
The non-computer-computer-people look at SwiftUI, network device CLI, troff, Gregorio, APIs, and rend their garments, crying out to heaven, “why, oh why?!” Some may even remember the days of text-based editing systems on their DOS machines, which they could never learn, and the great joy that WYSIWYG brought them. It reminds me of a highly incompetent sales guy I worked with at the Gold partner back in the day. He once saw me configuring a router and said: “Wow, you still use DOS to configure routers!”
“It’s actually IOS CLI, not DOS.”
“That’s DOS!” he densely replied. “I remember DOS. I can’t believe you still use DOS!”
It’s funny that no matter how hard we try to get away from code, we always come back to it. We’re hearing a lot about “low code” environments these days. It tells you something when the first three Google hits on “low code” just come back to Gartner reports. Gee, have we been down this path before? Visual Basic was invented in 1991. If low code is so great, why is Apple moving from storyboards to SwiftUI?
In my last post I wrote about the war on expertise. This is one of the fronts in the war. The non-computer-computer-people cannot understand the black screen, and are convinced they can eliminate it. They learned about “innovation” in business school, and read case studies about Windows 95 and the end of DOS. They read about how companies like Sun Microsystems went belly-up because they were not “disruptive.” They did not, however, read about all the failed attempts to eliminate the black screen, spanning decades. I believe it was George Santayana who said, “If you don’t remember computer history, you’re doomed to repeat it.”