by J.M. Porup (@toholdaquill)
imperfect human beings are incapable of creating perfectly secure computers
When the bridge began to jiggle, the Army Corps of Engineers assured everyone it would be fine. The pillars were made of reinforced strawberry gelatin, the roadway paved with hardened marshmallow, and the suspension cables of black licorice ropes. What could go wrong? The finest engineers had designed and built the bridge. Just trust us, they said. We’re professionals.
Software engineers, unlike civil engineers, build fortresses of syntactic sugar and call them secure. If bridges were built like software, our infrastructure would collapse around our ears, and the civil engineers responsible would find themselves on trial for gross negligence.
People’s lives are at stake. The code we write today is infrastructure every bit as much as concrete poured over rebar.
But it’s not fair to call out software engineers for negligence. At least, not most of the time. Once you’ve written code for a living, you realize the problem: making perfectly secure software is impossible.
No matter how many of the world’s greatest software engineers you throw at a project, they will always make mistakes. To err is human.
Degrees of quality exist, yes – on a sliding scale between a billion-dollar Silicon Valley software company and a random tinker pushing code to GitHub in their spare time, the former will tend to be better made, and more secure, than the latter. As a general rule, the more money and time you throw at a project, the better it will be.
The problem is that software breaks in uniquely awful ways. If a single suspension cable on the Golden Gate Bridge snaps, the bridge won’t collapse. Civil engineers build redundancy into their work precisely for this reason – to prevent catastrophic failure. But a single bungled line of code is enough to break the security of an encryption app, a web browser, even an entire operating system – and by extension, the security of every single person who uses that software.
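This is not hypothetical. Here is a minimal C sketch, loosely modeled on Apple’s 2014 “goto fail” bug (CVE-2014-1266), in which one duplicated line silently skipped TLS signature verification. The function names below are illustrative stand-ins, not the real Secure Transport code:

```c
#include <stdio.h>

/* Illustrative stand-ins for real cryptographic checks. */
static int check_certificate(void) { return 0; /* pretend: certificate OK */ }
static int check_signature(void)   { return 1; /* pretend: signature BAD  */ }

static int verify_connection(void)
{
    int err = 0;

    if ((err = check_certificate()) != 0)
        goto fail;
        goto fail;  /* the bungled line: always taken, if or no if */

    if ((err = check_signature()) != 0)  /* never reached */
        goto fail;

fail:
    return err;  /* err is still 0: the broken check reports success */
}

int main(void)
{
    /* Prints 0 ("trusted") even though the signature check would fail. */
    printf("verify_connection() = %d\n", verify_connection());
    return 0;
}
```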
One tiny imperfection, and the bridge falls down.
Can you imagine building bridges under such conditions? Knowing that a single misplaced rivet could cause thousands of people to die? Yet software engineers write code under just such conditions every day.
Worse: civil engineers do not typically worry about the constant threat of sabotage. Defending bridges against random ninjas creeping around with explosives in their backpacks and suction cups on their hands and feet is not the sort of thing that keeps a bridge designer up nights.
Accidents and adversaries. Civil engineers worry about accidents. Software engineers worry about accidents and adversaries – and the constant threat of catastrophic failure.
Evil people who want to do evil things are looking for ways to sabotage and destroy the castles of code software engineers build in the air. With enough resources, these attackers will always succeed.
But how would you know? How would a software engineer, or the users, or society at large – how would we know something was wrong?
You wouldn’t know.
If a foreign spy or gangster blows up your bridge, you’re going to know about it. You’re not going to work today. But if the same saboteur breaks your code, it is invisible. You won’t know. You may never know…until it’s too late.
This creates a unique set of challenges for software engineers, but also for software companies, lawmakers, and really, all of us. We live in the cyber domain now, but the solid ground we think we stand on is quicksand: haphazard code glued together with twine and chewing gum and launched with a silent prayer.
Prayers ain’t gonna help. This problem has no solution: Imperfect creatures create imperfect things. In art, we celebrate imperfection – the flaw makes the work a masterpiece. In engineering, imperfections are life-threatening. Humanity’s new home on a cyber domain that jiggles underfoot like strawberry gelatin ought to give us pause.
The siren song of formal methods lulls some with a distant, chimerical promise of provably secure code that cannot be broken. Formal methods are cute academic playthings that do not scale in the real world and offer no hope of solving the problems we face today in the cyber domain. Perhaps a future generation will enjoy the fruits of research into formal methods…assuming humanity survives long enough to do so.
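For the curious, here is what that siren song promises, in miniature: a machine-checked guarantee instead of a tested guess. A toy sketch in Lean 4, assuming only Lean’s standard library (Nat.add_comm is one of its lemmas) – an illustration of the concept, not of secure code:

```lean
-- The proof checker accepts this theorem only if the claim holds for
-- *every* pair of natural numbers, not just the inputs someone tested.
theorem add_commutes (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```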
Everything is broken. Always. Forever. For all practical purposes, anyway. And security flaws give their possessors power that casts a shadow across the entire planet.
Quinn Norton, “Everything is Broken”