Grace Hopper – Lecture for the NSA (Part 1)

The U.S. National Security Agency (NSA) released a lecture given by Rear Admiral Grace Hopper in 1982. Its title is “Future Possibilities: Data, Hardware, Software, and People”.

The future as Hopper saw it in 1982 is now our present. It’s 2024, and the core problems of software and hardware engineering remain unchanged.

I decided to quote fragments from Hopper’s lecture and add my own commentary. If you’re interested, I recommend watching both parts of the recording—it’s simply outstanding content. Hopper explains complex problems in a vivid, storytelling manner that is understandable even for non-technical audiences. The link to the lecture is below; the second part is available in the YouTube feed.

The Value of Information

Nobody is looking at the value of information or comparative value of different pieces of information, and we’ve got to look at it because it is cutting up our online systems.

In the era of big data, AI, and IoT, this sentence resonates very loudly in my head. Sure, we have tons of data. We can process it quickly, transport it, and share it anywhere we want—something that simply wasn’t possible in the 1980s. But the problem itself hasn’t changed, because we often don’t know which information is actually relevant to us.

So we end up collecting a lot of junk. And expensive junk at that, because data operations and storage are not free—although cloud providers try to convince us otherwise, at least until all our free trials run out.

In the end, in the universe of digital data there is always a trade-off between the amount of available information and the speed at which we can retrieve the specific piece we actually care about. The cost of memory is tied to its access speed: the faster the memory, the more expensive it is.

Nowadays people also pay attention to the ecological aspect of working with data, something that was not considered in Hopper’s time. Cloud hyperscalers promise to progressively reduce their carbon footprint, and Google even aims to become “net-zero emissions across all of our operations and value chain by 2030”. We’ll see.

In the context of the value of information, however, it is worth asking whether we really need to store absolutely all the data we currently collect.

Mental Ruts

I think the saddest phrase I ever hear in a computer installation is that horrible one: “but we’ve always done it that way.” That’s a forbidden phrase in my office.

“Because that’s how we’ve always done it.” This is a ready-made recipe not only for technical debt but also for the mental stagnation of an entire team. After all, teams are made of people, right?

And here, in my opinion, there is a direct relationship: the kind of people you have determines the kind of technology you end up with. Organizations need people who can step back, look at existing arrangements from a distance, and question whether they still make sense, or propose improvements.

[Photo: Grace Hopper]

Horizontal Scaling

Now, back in the early days of this country, when they moved heavy objects around, they didn’t have any Caterpillar tractors, they didn’t have any big cranes. They used oxen. And when they got a great big log on the ground, and one ox couldn’t budge the darn thing, they did not try to grow a bigger ox. They used two oxen. And I think they’re trying to tell us something. When we need greater computer power, the answer is not get a bigger computer – it’s get another computer. Which, of course, is what common sense would have told us to begin with. Incidentally, common sense is a legitimate scientific technique.

That gigantic ox from Hopper’s analogy is grazing in almost every server room today. The first example that comes to my mind is the classic SQL Server. It is easy to scale vertically by adding more resources, but with large databases this quickly becomes very expensive, and maintaining and managing the database itself becomes difficult.

Ideally, one should think about this in advance and design a database structure—or choose a database type—that has horizontal scaling built into its architecture. After all, the idea behind Kubernetes is based on this type of scaling.
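The “two oxen” idea can be sketched in a few lines: instead of one big worker, split the load across several smaller ones. A minimal Python sketch using only the standard library (the squared-sum workload and the `haul`/`split` names are illustrative, not a real system):

```python
from multiprocessing import Pool

def haul(chunk):
    """One 'ox': process its share of the workload."""
    return sum(n * n for n in chunk)

def split(work, parts):
    """Divide the log into roughly equal pieces."""
    size = (len(work) + parts - 1) // parts
    return [work[i:i + size] for i in range(0, len(work), size)]

if __name__ == "__main__":
    work = list(range(1_000_000))
    # Scale horizontally: add oxen (processes), not a bigger ox.
    with Pool(processes=4) as pool:
        partials = pool.map(haul, split(work, 4))
    print(sum(partials))
```

The same pattern, partition the work, fan it out to identical workers, merge the partial results, is what database sharding and Kubernetes replica sets do at a much larger scale.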

However, the ox analogy—although very illustrative—is not the answer to every IT problem. Many problems cannot be solved simply by parallelizing operations. Still, the issue identified by Hopper eventually found its continuation in multi-processor and multi-core systems as well as in cloud architecture.
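That limit is usually quantified with Amdahl’s law: if a fraction of a task is inherently serial, no number of extra oxen helps beyond the reciprocal of that fraction. A quick illustration:

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Maximum speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Even with 1000 workers, a 10% serial part caps the speedup near 10x.
print(amdahl_speedup(0.10, 1000))  # ~9.91
```

This is why throwing more machines at a problem with a large sequential component quickly stops paying off.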

The approach she proposed has become a standard in the industry, although many applications—for example games—are not able to use this advantage optimally (or at all).

Microservices

It means you write the system in independent modules. Modules have one entrance point and one exit point. And no module ever accesses the interior of any other module. Never touches it. And the way you exchange data between the modules is through a series of interfaces. This module put something down, another module picks it up.

In this 1982 lecture, Hopper was already proposing decoupling (today mainly associated with microservices) as an approach to computer system architecture, roughly three decades before these terms became fashionable buzzwords.
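Hopper’s rule, modules that never touch each other’s interiors and exchange data only through an interface, can be sketched with a plain queue standing in for “this module puts something down, another module picks it up” (the module names and data are illustrative):

```python
from queue import Queue

def producer(out: Queue) -> None:
    """Module A: one entrance, one exit; puts data down on the interface."""
    for record in ("alpha", "beta", "gamma"):
        out.put(record)
    out.put(None)  # sentinel: nothing more to pick up

def consumer(inp: Queue) -> list:
    """Module B: picks data up; never reaches into module A's internals."""
    results = []
    while (item := inp.get()) is not None:
        results.append(item.upper())
    return results

interface = Queue()  # the only thing the two modules share
producer(interface)
print(consumer(interface))  # ['ALPHA', 'BETA', 'GAMMA']
```

Swap the in-process queue for a message broker or an HTTP API and you have, in miniature, the contract-based communication that microservices rely on today.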

We’re talking about a time when REST APIs, SOAP, and gRPC did not yet exist. In the year of this lecture, the TCP/IP protocol stack was only just becoming the standard on ARPANET (the official cutover took place on January 1, 1983), and the HTTP protocol would not appear for almost another decade.

I suspect many of us have been—if not the authors, then at least witnesses—of situations where a small change in code we created or maintained triggered a cascade of errors that destabilized an entire application or integration.

I certainly have, and perhaps that’s why Hopper’s statement still resonates so strongly in my mind. And if you haven’t experienced that yourself, I recommend reading about the famous Death Star architecture.

Technical Debt

That was a case where most people totally failed to look at the cost of not doing something.

Once again, we return to the topic of neglect and technological inertia, which in development teams often blossom into technical debt measured in piles of money unnecessarily burned.

After her first retirement from the Navy, Hopper was recalled to active duty, and her team developed software for validating COBOL. At that time, the language had many dialects tailored to the specific needs of individual computer installations. Let’s remember that computers themselves were not only huge but also astronomically expensive, affordable only to government institutions, the military, and the largest enterprises.

The result of Hopper’s team’s work was a validation suite that checked COBOL compilers against the standard, making it possible to run the same COBOL code on any conforming machine.

This validation work became an important part of the program to standardize the language and promote its wider adoption. It translated into measurable savings, hundreds of millions of dollars, for both the state budget and private organizations.

Maintaining incompatible COBOL dialects within government institutions alone was that expensive. Hopper mentions the numbers in the lecture, but you can also find information about it here.

For me, the analogy to modern times is obvious. We have a complicated stack because we were (and still are) Agile, so we write in whatever language and on whatever platform happens to be convenient—usually quickly and somewhat carelessly—just to deliver something ASAP, and after us, the flood.

These kinds of omissions—often included in the TCO of an entire team, platform, or service—disappear somewhere in spreadsheets prepared for the business side and upper management.

Their representatives simply don’t know that this cost could have been avoided if only reasonable Tech Leaders had the vision, time, and resources to mitigate the consequences of technical debt before it revealed itself in full.

One can dream.

With this somewhat pessimistic note, I would like to conclude the first part of this post about Hopper’s rediscovered lecture.

Part Two