Adversarial Thinking

The Power and Danger of Abstraction


I’m at the supermarket for a quick grocery run. I can carry everything I need in my hands, but I almost always grab a basket: why? It doesn’t increase my maximum carry weight — you might even argue that the basket decreases it by forcing everything to hang from a flimsy handle. No, a supermarket basket gives a different advantage: by allowing me to treat multiple separate items as a singular item, it makes it easier to carry several objects at once. The basket is like an abstraction for its contents: rather than worrying about how I’ll juggle a dozen eggs, a bottle of corn syrup, and a gallon of milk, I only need to carry the one basket. This is the power of abstraction: in simplifying, it allows us to manipulate more complex items than we otherwise could.

Imagine describing a supermarket basket to someone who’d never seen one. You’d probably say something like: “An open-topped box, about seventy by thirty-five by thirty-five centimeters, with a carrying handle, capable of holding about ten kilograms.” It’s a serviceable description and certainly matches the baskets you find in your local market the world over, yet it crucially underspecifies the actual object. Sure, that spec works fine — right up until the point that the user tries to fill it with hydrochloric acid, lava, fine sand, or superheated steel ingots. This is the danger of abstraction: the map is not the territory. In simplifying, abstractions elide detail.

This may seem like an odd example — who’d possibly want to do that? Surely a glance could tell you that supermarket baskets are unsuitable for holding such dangerous materials. Yet across the field of engineering, one constantly finds situations where publicly described abstractions hide potentially dangerous limitations. To name a few examples: NoSQL stores that impose arbitrary limits on data (advertising themselves as Big Data, for example, as long as your Data isn’t Bigger than 100GB), “authentication” that isn’t (a simple URL parameter check, or worse), APIs that break down on hitting unspecified limits, and so on. It’s not that the designers are deliberately malicious; rather, they constructed their systems around what were, at the time, reasonable tradeoffs, then failed to communicate those tradeoffs as their systems scaled. An API with a 10,000 requests-per-second limit might seem like enough for anyone, but if someone builds a bestselling iOS app on top of it, suddenly you both have problems on your hands. Building on such abstractions is like building one’s house, if not on sand, then on terrain with unstable geology and a propensity for developing sinkholes. It all works fine, right until your kitchen floor vanishes into a sucking Chthonian hellpit.

Part of the job of security researchers is to discover situations like these: the marginal cases where designers’ implicit trust in abstractions breaks down. It’s truly shocking how many things are “secured” by nothing more than “why would anyone want to do that?” Take any government process involving a fax machine — I once watched a fascinating conference presentation describing how easy it is to “steal” a Florida LLC through nothing more than a faxed-in form. No authentication whatsoever, because hey, who’d go to that trouble? Another great example is the implicit trust companies place in their domain name registrars. A company can dedicate as much attention as it pleases to its network security — and still be “hacked” through a registrar compromise. These problems are solvable, but they’re only visible — never mind tractable — if designers and decision-makers view these processes as something other than pure black-box abstractions.

How can we improve this situation? Here are some ideas:

Communicate the limitations of your systems clearly, even if you think no one will ever reach them. If your electronic part stops working at 300 degrees Fahrenheit, communicate that fact — or one day someone may build it into a supposedly fireproof system.

Abstractions you design should fail visibly, quickly, and gracefully. This goes especially for users who might not read all the documentation (think: public APIs). It’s much better to return a clear, obvious error than it is to start dropping, say, one in a hundred requests.
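
To make this concrete, here’s a minimal sketch (with hypothetical names and limits, not drawn from any particular API) of the difference: when the service is over capacity, it returns an explicit, documented error rather than silently discarding a fraction of requests.

```python
# Minimal sketch (hypothetical names and limits): an API endpoint that fails
# visibly when over capacity instead of silently dropping requests.

import time
from collections import deque

MAX_REQUESTS_PER_SECOND = 100  # the documented limit
_recent = deque()              # timestamps of requests seen in the last second

def handle(request):
    now = time.monotonic()

    # Slide the window: forget requests older than one second.
    while _recent and now - _recent[0] > 1.0:
        _recent.popleft()

    if len(_recent) >= MAX_REQUESTS_PER_SECOND:
        # Fail visibly: an explicit, documented error the caller can act on,
        # rather than quietly discarding one request in a hundred.
        return {"status": 429,
                "error": "rate limit exceeded",
                "retry_after_seconds": 1}

    _recent.append(now)
    return {"status": 200, "body": do_work(request)}

def do_work(request):
    # Stand-in for the real request handling.
    return "ok"
```

The caller gets a signal it can act on (back off and retry) instead of a mysterious, intermittent failure it can only discover through monitoring.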

Follow the Robustness Principle: “Be conservative in what you do; be liberal in what you accept from others.”
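
As a small illustration (a sketch, with hypothetical flag spellings): a configuration parser that accepts several common forms of a boolean on input, while only ever writing back one canonical form on output.

```python
# Minimal sketch of the Robustness Principle applied to a hypothetical
# boolean configuration flag.

TRUTHY = {"true", "yes", "1", "on"}
FALSY = {"false", "no", "0", "off"}

def parse_flag(raw: str) -> bool:
    """Be liberal in what you accept: tolerate case, whitespace, synonyms."""
    value = raw.strip().lower()
    if value in TRUTHY:
        return True
    if value in FALSY:
        return False
    raise ValueError(f"unrecognized flag value: {raw!r}")

def emit_flag(flag: bool) -> str:
    """Be conservative in what you do: only ever produce 'true' or 'false'."""
    return "true" if flag else "false"

print(emit_flag(parse_flag("  YES ")))  # -> "true"
```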

Follow the corollary of that principle as well: understand that those designing the abstractions you use may not have taken the same steps. This is more than just liberal acceptance of potentially flawed communication. I’d be very cautious about building a business totally dependent on someone else’s API (Facebook or Twitter, for example) lest they do exactly what those companies have done: radically shift the underlying terrain and render my previously thriving business unsustainable. Twitter is more than an abstraction: it’s a company, run by real people, that will occasionally make business decisions you disagree with. The only question is: how much risk does this expose you to?

Once in a while, take the covers off the abstractions you rely on, especially those you implicitly trust. Ask yourself what things would have to fail for an unacceptable loss event to take place. How do you know those things aren’t going to fail? What can you do about that?

Don’t put lava in your supermarket basket. It’s just not a good idea.
