
Infosec's Jerk Problem


Put bluntly: to others, we’re jerks.

If you don’t think this is a problem, you can stop reading here.

The dysfunctional tale of Bob and Alice

Imagine this. Developer Bob just received an email from your Infosec department, subject “Important Security Update.” He sighs, thinking of the possibilities: a request to rotate his password? A new rule? Maybe it’s a dressing-down for having violated some policy, a demand for extra work to patch a system, or yet another hair-on-fire security update he doesn’t really see the need for. His manager is on his case: he’s been putting in long hours on the next rev of the backend, but library incompatibilities and inconsistent APIs have ruined his week, and he’s way behind schedule. He shelves the security update — he doesn’t have time to deal with it, and most things coming out of Infosec are just sound and fury anyway — and, thinking how nice it would be if his team actually got the resources it needed, continues to code. He’ll get to it later. Promise.

Meanwhile, you, Security Researcher Alice, are trying not to panic. You’ve seen the latest Rails vulnerability disclosure, and you know it’s just a matter of hours before your exposed system gets hit. You remember what happened to GitHub and Heroku, and you’re not anxious to make the front page of Hacker News (again?!). If only Bob would answer his email! You know he’s at work — what’s happening? The look on your boss’s face the last time your software got exploited flashes through your mind, and you cringe, dreading an unpleasant meeting ahead. You fume for several minutes, cursing all developers everywhere, but no response is forthcoming. Angrily, you stand up and march over to his cube, ready to give him a piece of your mind.

Pause. What’s going on here, and what’s about to happen?

Interlude: we are the watchers on the walls

Many in the Infosec community are fond of casting the security world as “us versus them,” where “they” aren’t external, malicious actors but unaware users, clueless managers, and bumbling executives within our own organizations. We like to see ourselves as the Night’s Watch of the tech world: out in the cold with little love or support, putting in long nights protecting the realm against the real threats (which the pampered never take seriously) so everyone else can get on with their lives in comfort. We develop a jaundiced attitude: only we understand the real danger, we think, and while we’re doing our best to stave off outsider threats, when the long night comes we need fast and unquestioning cooperation from the rest of the organization lest (hopefully metaphorical) frozen undead kill us all.

The rest of the organization doesn’t see it that way. To them, we’re Chicken Little crossed with the IRS crossed with their least favorite elementary-school teacher: always yelling about the sky falling, demanding unquestioning obedience to a laundry list of arcane, seemingly arbitrary rules (password complexity requirements, anyone?) that appear to be of little consequence, and condescendingly remonstrating with anyone who steps out of line. Once in a while a visionary (often an Infosec expat) who truly understands the threat tries to help others see the value, but most of “them” don’t get it. Users are stupid. Managers are idiots. Executives are out of touch. So it goes.

Back to the story

From Bob’s perspective, he’s making a reasonable risk/reward tradeoff: not dealing with the email right now might get him yelled at, but judging from history, probably not — he gets lots of “urgent” security emails that turn out to be Windows patches, admonitions to change his password, policy reminders and so on. From your perspective, Bob is being completely irresponsible: you told him it was important; it was right there in the subject line!

You storm into Bob’s cubicle. Images of mocking Hacker News articles dancing in your head, you accuse Bob of flagrant negligence (perhaps letting out some anger over the last security incident; Bob works on that team, doesn’t he?) and demand that he drop whatever he’s doing and fix this, now. This mood of righteous indignation doesn’t lend itself to patient explanations, and Bob’s demand that you explain the vulnerability is met with your impatient demand to “just do it.” There isn’t time for that — someone could be dumping your database as you speak!

Bob, already running out of patience due to his looming deadline, fires back that he can’t deal with this now, he’s too busy, it’s not his problem (there are other devs, right?) and you should take it up with his manager. Even if he could, he wouldn’t: didn’t the last few Infosec red alerts turn out to be nothing? Why are you trying to waste his time? Don’t you understand he has real work to do, work he’ll get fired for not doing?

Bob considers this a horrendous distraction from his critical dev work, and you see Bob as dragging his feet while the building’s on fire. You both walk away angry. Regardless of whether the vulnerability eventually gets closed, serious harm has been done.

A common, less overtly contentious version of this exchange involves FUD on both ends, with vague yet ominous threats coming from Infosec and a haze of scheduling delays, configuration problems, and blaming other teams (QA being a favorite) from the dev team. This usually gets management involved and everyone has a bad day.

There are two problems here. The first is a lack of understanding; the second, a lack of empathy.

Understanding is a three-edged sword: our side, their side, and the truth

Mankiw’s Principles of Economics apply here, particularly the first and fourth: “People face tradeoffs” and “People respond to incentives.” Hanlon’s Razor says “Never attribute to malice that which can be adequately explained by incompetence,” but I would add, “Never attribute to incompetence that which can be explained by differing incentive structures.”

For example:

  • Tradeoffs: if you give someone two tasks, both of which could take up 75% of their time, then tell them they will be fired if they don’t do Task A, don’t be surprised when Task B doesn’t get done.
  • Positive incentives: if you measure QA performance by number of bugs found, expect dozens of spurious bugs to be filed against the system.
  • Negative incentives: measure developer performance by number of bugs generated and watch as devs pressure QA to not consider problems “bugs.”
  • “Never argue with a man whose job depends on not being convinced.” — H.L. Mencken

These problems are not amenable to the sort of frontal assault described above. That approach assumes the target doesn’t understand or isn’t aware of the problem, when in many cases they’re fully aware of it. While yelling at someone may occasionally achieve the result you want, it doesn’t come without collateral damage, including a massive loss of goodwill. Sometimes hard authority (executive fiat) is the only way to get the job done, but usually there’s a better way.

Imagine a group of people invite you and your friends to a local football field to play a friendly game. You show up and are quickly bewildered: the others are mocking you, nothing seems to be going the way it should, and when one of them shouts “go!” some of your friends are injured in the ensuing confusion. You could shout at your newfound acquaintances for hurting your friends, complain privately about how stupid they are for not understanding the rules… or pause for a moment, collect the available facts, and realize that they had actually invited you to play rugby. Disregarding your goals and charging off in another direction is not necessarily an indicator of malice or stupidity. People are always playing a different game — it just occasionally has similar rules to yours.

Developers are measured on their ability to get software out the door. QA teams are often measured on speed. Managers are responsible for the performance of their team. Executives worry about the overall direction of the business. You need to show how security aligns with these goals. Security can be more than just a hedge against long-term downside risk: it can be a way for everyone to produce better software.

What you need to understand is others’ value calculus: what factors go into what they consider important? Given that, how can you decide together on some security goals that are in line with both their calculus and yours?

Empathy

The jaundiced attitude among Infosec mentioned above, coupled with differing incentive structures, has an unfortunate tendency to spill over into external interactions. If 90% of lunch conversations are complaints about how terrible users are, how management doesn’t get it, and how the dev team on Project Foo are a bunch of incompetent turd-burglars, then the next time you have to meet with Project Foo’s team, you’ll be hard-pressed to give them a fair hearing as they explain how their lack of proper resources and mountain of technical debt prevent them from addressing problems properly.

When we go for the easy answers:

This {system, product, device, network} is {insecure, vulnerable, unsafe, slow, broken, unprofitable, incomplete, poorly designed, ugly} because the {designer, manager, dev team, executives, QA, sales} {is incompetent, is lazy, doesn’t care about security, is an asshat}

we erode our ability to evaluate the true cause of a situation. (Social psychology refers to this as the Fundamental Attribution Error — the tendency to attribute others’ mistakes to their inherent failings, while attributing our own mistakes to the situation at hand.) We damage our reputation (and that of Infosec as a field), make ourselves unpleasant to deal with, and generally make the world a worse place.

We also get used to thinking of people and teams in that way. We genuinely become less kind people.

What’s the alternative?

Practice active kindness. Go out of your way to do kind things for people, especially people who may not deserve it. If you wait for them to make the first move, you’ll be waiting a while — but extend a hand to someone who expects a kick in the teeth and watch as you gain a new friend. Smile.

Don’t go for the easy, wrong answers. That team isn’t incompetent, they just have too much work to do; how can we work with them to get our thing done? That manager isn’t stonewalling, he just has a different incentive structure — how can we understand what it is?

Seek to understand and make this clear. When asking someone to do something, try to understand their current situation first. Perhaps the request isn’t as urgent as all that — but suppose it is. “I know you have a lot on your plate, with the X deadline and the Y update, but public-facing system Z could be compromised.” Ask questions and listen to the answers.

Be flexible. Recalibrate “urgent.” Think of the worst possible thing that could happen to your organization. Now try to make it worse. I’ve worked in places where the worst-case scenario involves “loss of multiple human lives.” Will the world end if this minor security patch isn’t applied today? Think of the automated OS updates popping up in the corner of your screen: how often are those more important than what you’re doing? If you practice this and do it well, people will start to feel you understand their value calculus, and this makes them much more likely to take your advice.

Create stakeholders and spread security knowledge. One thing our Infosec team tries to do is have people create their own security goals. The answer to “what does it mean to be safe?” is ultimately up to them; we just guide the process. This means they’re invested in security: the more they’ve thought about the safety of their own product, the more likely they are to value it as a goal.

Conclusion

Fixing Infosec’s jerk problem benefits everyone: us, the people we deal with, and ultimately the security of the system — and since that’s our long-term goal, we should actively seek to fix the problem. Be kind, and the rest will follow.

What do you think? Let me know on Twitter; I’m @ternus.
