The “Bug-Free Software” fallacy

[Image caption: For patching the unpatchable]

About 20 years ago, I worked with a fellow who proudly told me that he had once written a flawless piece of software. He kept its inch-thick line printer listing as a shrine in his cubicle. I never asked him for details, because he got angry when people questioned his judgement on computing. After all, he had once been in a panel discussion with Grace Hopper!

I have my own Grace Hopper stories, but today’s story concerns an interesting panel discussion that took place earlier in December at the 2013 ACSAC in New Orleans. Roger Schell, a luminary in the annals of cyber security, declared that 1980s techniques had indeed created “bug-free software.”

Roger Schell is wrong.

So-called “bug-free software” is simply “too hard to patch” software. Instead of being bulletproof, the software is like a fragile gift padded for shipment. We protect such things by adjusting the world outside: physical security, connection facilities, procedures, and so on. We use boxes, bubble wrap, and duct tape to secure the software.

Setting the Scene

I was a panelist on ACSAC’s “Classic Book Panel,” which discussed the “Trusted Computer System Evaluation Criteria,” also known as the TCSEC, or just the “Orange Book.” The panel included Olin Sibert and was chaired by Daniel Faigin. I was the relative newbie – I started working with the Orange Book in the early 1990s, while the others had started much earlier.

Roger Schell was in the audience. Roger is named as one of the half-dozen original architects of the Orange Book. He could have sat on the panel in place of one of us lesser lights, but ACSAC had already honored him in a “Classic Book” event to recognize his 1974 report on Multics security, co-authored with Paul Karger.

My comments as a panelist focused on my experiences working with government-endorsed security evaluations. These started with the LOCK program, which followed the TCSEC’s “A1” evaluation requirements. Later on, I did some research on the Common Criteria, which replaced the TCSEC. I reported on the impact of security evaluations on software development costs and flaw detection. I also presented old – but never contradicted – statistics to indicate a well-known truth: only a tiny fraction of security vendors bother with these expensive security evaluations.

The Dispute

In the discussion following the panel presentations, Roger argued that an evaluated high-assurance system should never require security patching. He had two specific examples: the BLACKER front end system (evaluated A1) and Multics (evaluated B2).

A couple of ex-NSA people in the audience supported his argument about Multics: the NSA ran at least one Multics system unpatched for several years until it was taken out of service. I tried to capture these comments accurately in some notes at the end of my panel presentation.

Some Contrary Evidence

I have contrary evidence for both the Multics claim and the BLACKER claim.

Personally, I’ve worked with two Multics systems: Honeywell’s Computer Network Operations (CNO) machine and NSA’s Dockmaster system. The CNO system was updated occasionally, so I know it wasn’t installed and switched on bug-free. The Dockmaster system might never have been patched, but the NSA modified it before deployment to add extra security (the Watchword system).

In other words, neither Multics system was considered bug-free despite its TCSEC evaluation.

As for BLACKER, I must rely on hearsay. I learned a lot about the TCSEC from Earl Boebert, who wrote “The Annotated TCSEC,” served as technical director of the LOCK program, and was the architect of the Sidewinder firewall. Earl made the following observation about BLACKER back in the early ’90s:

They asked if BLACKER had passed its TCSEC A1 evaluation: the answer was YES.

They asked if BLACKER was safe to deploy: the answer was NO.

In fact, there are reports that BLACKER was deployed during Desert Storm. The military will take security risks during wartime that they won’t take during peacetime. But since BLACKER was used to protect live military traffic, questions about its security flaws (or lack thereof) were blanketed with secrecy.

My Assessment

When I hear people say “The software remained bug-free” they really mean “We couldn’t patch the software.”

Instead of fixing what was wrong, they insulate their fragile, unpatchable code against surprises or attacks. Such insulation is just another form of patching.

Real-world software systems are never so perfect that they won’t require patching. We can build toy software that doesn’t need patching, but anything that serves a serious purpose requires patching over time, or it gets discarded before it reaches its full lifetime. This is inevitable, because the surrounding environment changes.

There are often pieces of software that are just too fragile to patch. When this happens, we patch the system by adjusting things outside the software: physical security, connection facilities, procedures, and so on. In other words, we use duct tape to fix the software.