Adversarial Thinking

Breaking Down Tech Privilege From the Inside


I’ve kept this post as jargon-free as possible on purpose. If you know what ‘intersectionality’ means, you probably don’t need this post. Feel free to read anyway!

I work in the tech industry. I’m also a white guy, which makes many things silently, invisibly easier.

It took me a while to truly internalize this. To the extent that I naturally notice disparate treatment at all, my instincts often tell me that this is the way things ought to be, and that any deviation from this “default” is noticeable, uncomfortable and frightening. For many years I’ve struggled with this; for example,

“Why is there a special conference/meetup/tutorial/panel/workshop/scholarship/internship just for women/people of color? Why single out a particular race/gender for special treatment? Shouldn’t we admit just on merit? Isn’t doing otherwise unfair?”

Well, isn’t it? Many people see programs like these as violating the principle of equality: if the goal is equal treatment for everyone, don’t such actions explicitly fly in the face of it? As a white guy, you might, for example, see a higher acceptance rate for women and minorities — whether at elite universities, prestigious conferences, or sought-after jobs — as evidence of a system biased against you. Such explicit discrimination feels unfair, and it makes those proposing it look like they’re simply seeking advantage for themselves, not a more equal system for everyone.

I recently came across a fantastic analogy that puts such things in perspective:

If you have a basketball team that’s played most of the game with 5 players and another that’s played most of the game with 3 players, it’s not unfair to give the other team 1 more player, even if in a vacuum, getting extra players seems unfair. Also, even with the extra player, they’re still down 1 player, and the score is still vastly unbalanced due to the game having been played 5 to 3 in the past.

If the system is already unfair in many little ways, many of which are effectively impossible to correct, the best we can do is attempt to balance the scales in other ways.

But my life is difficult too, and nobody’s lining up to give me special benefits!

A few years back I read Peggy McIntosh’s essay, “Unpacking the Invisible Knapsack”. The essay argues, in short, that the individual merits — hard-earned as they may be — of specific white people have absolutely nothing to do with the barriers minorities face day-to-day. I can be a driven-to-succeed hardworking professional or a lazy good-for-nothing slob, but I’m still less likely to be stopped-and-frisked than an otherwise-identical-looking black person. In my world, I can work my fingers to the bone learning deep secrets of the programming arts or I can fake it and coast, but I still don’t have to deal with common, overt sexual harassment at hacker cons the way an otherwise-identical woman does. That’s what people mean by “privilege”: lots of little, invisible ways in which your life is easier — not necessarily easy.

Only by acknowledging that it exists — and working to compensate for it — can those who benefit from it, like me, make it possible for even an approximately merit-based system to exist at all.

I was deeply moved by the format of the Invisible Knapsack essay. By laying out clear and simple facts — facts that, in large part, are still applicable today, nearly a quarter-century after they were written — McIntosh made her point far more effectively than any number of impassioned rants.

I recently compiled a list of my own, specifically dealing with the tech field. I asked several friends for their own perspectives, and the Etherpad I was using was quickly filled with a heartfelt outpouring of personal experience, much of which caused me to stop and stare. I edited it and let it percolate for a while, pondering the right way to post it. My friend Liz’s recent essay, “I’ve been programming since I was 10, but I don’t feel like a hacker” and Professor Philip Guo’s essay, “Silent Technical Privilege” motivated me to edit, rearrange, and post.

I don’t claim this is exhaustive, sensical, or even 100% right; there are blind spots, both in my own experience and in what I chose to post. In particular, this doesn’t touch on many significant categories of privilege (LGBT-vs-not, able-vs-not, etc.) which may remain entirely invisible to me. My hope is that nonetheless it will prove as illuminating to some as similar writings have been for me.

White Guy Privilege in Tech

Being a white guy in the tech industry makes a lot of things silently, invisibly easier.

Looking for work or making a career move

  • If I’m turned down for a speaker slot or job, it’s probably not because of my race or gender. If I’m hired or accepted, no one will think it’s tokenism or a “diversity hire.”
  • If I see a job post for a “Python guy” or similar, I know that means me. (Related: Is “guys” gender-neutral? and Tech Companies That Only Hire Men)
  • If I’m encouraged to go into management, I won’t wonder whether people are stereotyping me, either presuming lack of technical competence or “good people skills.” Nor will I wonder if I’m just being encouraged to fill someone’s “diversity in high-paid positions” checklist.
  • Being white, I don’t worry about potential employers finding me suspicious and trying especially hard to find something in a criminal background check, or judging me harshly over something minor in my past. Indeed, a minor crime (though I’ve never been arrested) might be looked on as a funny youthful misadventure instead of a disqualifier.
  • I’ll never look at career stats and see that people like me predominantly leave the field well before retirement, and wonder whether I ought to do the same even if I have no reason to change.
  • I never wonder whether I’m being underpaid because of my gender or race.
  • I don’t have to audit ‘exciting career opportunities’ in small companies for possible harassment. If I do encounter harassment and leave, it will be (has been!) treated as a weird story, not evidence of my lack of humor.
  • I’m much more likely to see perks in job descriptions that are designed to appeal to me.
  • If I choose to have a child, people won’t assume I’m taking a break from my career. If I take time off from a job to raise my child, be with my partner at the hospital to deliver the child, or such, I probably won’t lose my job. In general, I can care about my family without people assuming that I don’t care about my job and my career. (I may even be treated with additional respect for doing so, rather than having it seen as unprofessional.)
  • I never hear that hiring more people like me necessarily means hiring less qualified people.

At work

  • I’ll never be assumed to be the non-technical person in the room.
  • If I’m a manager, my managerial style is unlikely to be attributed to my gender (“maternal” or “bitchy,” pick one, and isn’t that a great choice?)
  • My clothing is unlikely to be used to judge my character or abilities. My office dress choices will never be viewed as a sexual come-on or invitation¹.
  • If I don’t do my part to promote a more equitable culture in my workplace, I’m unlikely to be called out for failing to do so, or to have my silence taken as an argument for the status quo (“she’s never said anything; she must be OK with it!”).
  • If I provide someone (a coworker or customer) a technical solution, the person I’m helping won’t look at my race or gender and ask me to find someone who really knows what they’re doing.
  • If I express concerns about racism or sexism in my company’s public-facing image, people won’t think I’m seeking personal advantage.
  • I’m less likely to experience body-image problems at work or be judged by how my body looks. If I’m good looking, people will not assume I’m unintelligent or that I was hired for that reason.
  • It’s unlikely that I’ll have to deal with a coworker’s unsolicited, repeated sexual interest at work. If this happens, their behavior towards me will be judged more harshly than my decision to reject them, and doing so is unlikely to affect my prospects for promotion or my future career trajectory.
  • I can, if I choose, push for subtly exclusionary policies and attitudes and have these accepted. I can ignore the perspectives of people unlike me and not have this significantly impact my or my company’s bottom line.

At conferences or meetups

  • At hacker meetups, professional conferences, or anything in between, I’m confident of finding other white guys. As such, I don’t automatically stand out just for showing up. I’m extremely unlikely to be the only white guy in any gathering of tech people, even overseas. No one will ever comment on my finding other white guys to hang out with at conferences.
  • I don’t worry about a conference being a safe space. I can go to a conference and head home late at night without worrying about my safety or about being creeped on. I don’t need to rely on codes of conduct or harassment policies (if they exist at all) to ensure my safety.
  • I can spend time in hackerspaces and other public tech contexts without encountering sexual or racist jokes about people like me.

After work/socially

  • I can be confident that a coworker inviting me to a social event (like after-work drinks) is not a sexual overture. I’m far less likely to experience repeated harassment if I turn down said invitation (though it does happen, it happens a lot less).
  • Other tech people naturally assume I share their technical or social background and don’t feel the need to explain the basics to me. (“There’s this thing called Python…” “There’s this thing called Star Trek…” Yes. They know.) I won’t be assumed to need special help above and beyond the usual, because I am the usual.
  • My coworkers do not automatically assume I’m uninterested in downtime activities like beer, video games, or foosball.
  • I can get excited about technology in a casual, non-work setting without worrying that my conversation partners will think that I’m sexually interested in them (or doing so in order to seem attractive to them), or have them question my credentials or expertise without reason.

Online

  • If I publish a photo of myself (on Twitter, for example) I’ll almost certainly not receive uncomfortable and unwelcome sexual advances or threats.
  • Similarly, if I express an unpopular opinion, I almost certainly won’t receive rape threats.
  • Articles and blog posts in and about my field are almost always written for people like me. I very rarely start reading an article only to realize halfway through that the mental picture the author had of their audience did not include someone like me.
  • If I choose a gender-neutral pseudonym in an online tech forum (or even Twitter), others’ default assumptions about my race and gender are likely to be correct.
  • It’s likely that tutorials, teaching methods, and practices are designed with someone of my race, gender, and cultural/social background in mind.

Everywhere

  • The accomplishments and history of technology have been told to me as primarily the accomplishments of people who look like me. I’m not fighting the wind of history. (How many unsung Ada Lovelaces and Grace Hoppers had their stories lost to history because they didn’t fit the prevalent narrative?)
  • I’ve never been, and will likely never be, expected to speak for all white men on something, like the “[people like me] experience.” (At this point you may be thinking, “What a silly idea, to think that one white guy can speak for all white guys! Are the experiences of a pasty, besuited COBOL programmer from Illinois somehow representative of all white men in tech? That’s crazy.” And you’d be right.) Those I interact with will not judge the whole of my gender or race based on my actions.
  • It’s much easier for me to find people with similar life trajectories and problems to look up to, or to find a mentor who resembles me.
  • Similarly, I’ll never have tech people question my right to be anywhere, at a meetup, conference or prestigious company, based solely on my race or gender.
  • In a room full of people, I’m a lot less likely to be referred to by some physical feature: “the girl” or “the black guy” or whatever.
  • If I’m having a bad day, it will never be attributed to hormones or “that time of the month.” If I’m angry, people will not look at my skin color and wonder if I’m violent.
  • People will not look at my skin color and make assumptions about my traditions or religious beliefs. (“You’re Hispanic, so you must be Catholic…”)
  • When someone is trying to market products to tech companies or technical people, they tailor their marketing to people like me. I rarely encounter marketing that makes me uncomfortable because that underlying assumption is wrong. (As one webcomic put it, “Welcome to the background radiation of my life.”)
  • When I talk about my job, people think of me as smart and career-oriented instead of an oddball or outcast. I can discuss my job online normally and never encounter disbelief (can you imagine hearing “you can’t be a programmer, you’re a guy”?).
  • Fellow American programmers will never ask me where I learned to speak English, which country I’m from, or tell me I’m not a ‘real’ American.
  • I can make jokes that subtly put down other races and genders and be considered part of the crowd. If someone reacts negatively, their reaction will often be seen as worse than my joke.
  • I can write a post like this, confident that it will not be dismissed as simply my race or gender’s obvious, self-serving opinion.

Whoa

Acknowledging these points of privilege often makes me deeply uncomfortable. It feels as though I’m somehow confessing a crime. I’m often tempted towards emotional backlash — “what, are you expecting me to apologize for my skin color or the fact that I’m male? What am I supposed to do? I didn’t have a choice!”

To be clear, nobody’s asking you to apologize for the circumstances of your birth (and if they are, that’s unfair and silly and you should disregard it), just that you acknowledge that you may be benefiting from it and work — for the sake of a fairer and kinder world — towards making the privileges you enjoy available to everyone.

It’s hard. As McIntosh wrote,

The pressure to avoid it is great, for in facing it I must give up the myth of meritocracy. If these things are true, this is not such a free country; one’s life is not what one makes it; many doors open for certain people through no virtues of their own.

“So what can I do about it?”

See what changes you can make to level the playing field. None of these suggestions require you to give up your rights, just to extend them to others wherever possible.

  • Your startup’s hiring? Give your qualified female/minority friends a call.
  • Your company’s reviewing their benefits policies? Ask how they cover maternity leave.
  • You’re organizing a conference? Aim for diversity in your lineup. Ask for help if you need it.
  • Someone’s making a sexist/racist joke? Call them out on it.
  • Everyone’s making sexist/racist jokes? Think about leaving.
  • Someone in your group being constantly overlooked for their technical accomplishments? Make a special point of recognizing them, even if you’re their peer or employee.
  • Someone’s organizing a meetup for women, girls, or minorities in tech? See if you can help. Understand if you can’t.
  • Someone in your life complaining about being marginalized, stereotyped, or put down in any of the ways mentioned above? Help correct the situation if you can and they want you to. Support them. In any case, don’t be a dick about it.

Sidenote: How To Not Be a Dick About It

I often see discussions, especially among rationalist tech-types, that go something like this:

Alice: “Charlie did $FOO, and it made me upset.”

Bob: “It was probably unintentional, and besides, what about extenuating circumstances $BAR and $BAZ? I might even do $FOO completely by accident.”

Bob is genuinely trying to help — from Bob’s perspective, he’s pointing out facts that Alice might not have considered and attempting to make Alice’s view of the world more complete. This is especially likely to happen if Alice is calling out what she sees as sexist or racist behavior — Bob doesn’t like to see people condemned unfairly. But if you’ve ever been in Alice’s shoes, you’ll notice that (unsurprisingly, perhaps) this doesn’t make you feel better. In fact, you might be seriously bothered that Bob is “taking Charlie’s side” — what Bob’s communicating is “you’re wrong to feel that way.” And that sucks. That’s Being A Dick About It².

The fact that Alice is upset is a problem separate from the specific details of $FOO. It’s especially a problem if $FOO has happened to Alice six times today, and every time it’s happened people try to explain why Alice shouldn’t be upset. Key point: It is entirely possible for Bob or Charlie to be completely well-intentioned and nonetheless harm Alice. Ignoring or attempting to marginalize that harm is Dickish.

Consider this situation: Bob trips over a sidewalk crack and, in his arm-flailing descent to the pavement, punches Charlie in the nose. Charlie gets angry. Bob hastily explains that he didn’t intend violence, that it was completely an accident. Charlie’s anger may thus be mollified… but it doesn’t make the pain in his nose go away. What is Bob most likely to hear? “Hey, watch where you’re walking!”

I believe, as conscientious, self-aware people, we have an obligation to understand the effects of our speech on others and factor in others’ feelings when choosing our words. I’m not advocating for pernicious self-censorship or lying, nor am I attempting to silence anyone; I’m simply pointing out that a little attention to choice of words — and choice of statements — combined with empathy and kindness is the path to Not Being A Dick.

But what about my feelings? What about this angry feminist rant I just read?

Your feelings are important. I’ve certainly felt marginalized, even hated, before. It will happen; there are many books, essays, rants, and cris de coeur on the Internet from people who have been on the losing end of the aforementioned sociocultural garbage chute for their entire lives. Much of it is directed at white people and/or men, and much of it may come off as shocking, offensive, and/or insane. Separating the wrong from the merely painful-to-read can take great effort, and it is very easy to take the extreme examples of this genre as representative of the whole and thereby invalidate the lot.

I urge you to empathize; understand that much of this writing is coming from a place of great pain, frustration, and sorrow and that it conveys the spirit of a real problem, one that needs everyone’s help to solve. Don’t write it off — look at it as a personal growth opportunity. Look for the truth under the anger and pain. Calm does not mean correct; angry does not mean wrong. If you can’t do that, close the browser (or book) and walk away. Everyone will be better off.

If a personal friend (someone you know in real life) hurt you through their rhetoric, here’s the best thing to do, in my experience: mention it to them calmly, privately, and if at all possible in person and allow them to talk. From what I’ve seen, if you respond in anger online, you will not get the response you want; people can easily misinterpret your tone (and, depending on the forum, you may become the target of white-hot focused rage). If you do so publicly, you — regardless of your intentions — become a representative of the privileged system I just described. If you talk to them personally, but you’re huffy and indignant, you’ll either hurt them or provoke their ire or both, and neither of you will learn from the experience. Please learn from my mistakes!

Conclusion, and thanks

I don’t have all the answers. I can and will get it wrong and have done so many times. But the state of the world — that giant list above? That sucks. We can make it better.

Thanks for reading. I’d love to hear your comments; I’m @ternus on Twitter.

Acknowledgements

My deepest gratitude goes to the friends who helped me compile this list, contributing their own experiences and frustrations: Stephanie Bachar, Liz Denys, Jacky Chang, Ekate K, Amy Hailes, Ian Smith, Preeya Phadnis, Paul Baranay, J.C., M.E., and others who wished to remain uncredited. This subject often leads me into unfamiliar waters; you are the stars by which I navigate. Thank you all.


  1. My friend says: “It goes beyond this; there really is no default female tech-industry professional clothing I’ve found that will not send one of the following messages: ‘Pretty’ (and therefore obsessed with fashion and/or a ditz); ‘Dowdy’ (and therefore old/not taking care of herself/uninterested in promotion); ‘Tomboy’ (and therefore assumed to want to be just one of the guys and not caring about any of the issues listed here); ‘Suit’ (and therefore management and not technical). Getting out of these boxes, in my experience, requires serious style sense to pull off ‘stylish but not pretty but not so quirky as to be weird’, especially if you don’t want to drift into suit territory. I can’t just go to the store and buy Work Tops and Work Pants the way guys can. And if I try, there’s no guarantee they’ll fit…”

  2. The technical term here is invalidation.

The Power and Danger of Abstraction


I’m at the supermarket for a quick grocery run. I can carry everything I need in my hands, but I almost always grab a basket: why? It doesn’t increase my maximum carry weight — you might even argue that the basket decreases it by forcing everything to hang from a flimsy handle. No, a supermarket basket gives a different advantage: by allowing me to treat multiple separate items as a singular item, it makes it easier to carry several objects at once. The basket is like an abstraction for its contents: rather than worrying about how I’ll juggle a dozen eggs, a bottle of corn syrup, and a gallon of milk, I only need to carry the one basket. This is the power of abstraction: in simplifying, it allows us to manipulate more complex items than we otherwise could.

Imagine describing a supermarket basket to someone who’d never seen one. You’d probably say something like: “An open-topped box, about seventy by thirty-five by thirty-five centimeters, with a carrying handle, capable of holding about ten kilograms.” It’s a serviceable description and certainly matches the baskets you find in your local market the world over, yet it crucially underspecifies the actual object. Sure, that spec works fine — right up until the point that the user tries to fill it with hydrochloric acid, lava, fine sand, or superheated steel ingots. This is the danger of abstraction: the map is not the territory. In simplifying, abstractions elide detail.

This may seem like an odd example — who’d possibly want to do that? Surely a glance could tell you that supermarket baskets are unsuitable for holding such dangerous materials. Yet across the field of engineering, one constantly finds situations where publicly-described abstractions hide potentially-dangerous limitations. To name a few examples, NoSQL stores that impose arbitrary limits on data (for example, advertising themselves as Big Data, as long as your Data isn’t Bigger than 100GB), “authentication” that isn’t (a simple URL parameter check, or worse), APIs that break down on hitting unspecified limits, and so on. It’s not that the designers are deliberately malicious; rather, they constructed their systems around what were at the time reasonable tradeoffs, then failed to communicate them as their systems scaled. An API that offers a 10,000 requests-per-second limit might seem enough for anyone, but if someone builds a bestselling iOS app on top of it, suddenly you both have problems on your hands. Building on such abstractions is constructing one’s house, if not on sand, then on terrain with unstable geology and a propensity for developing sinkholes. It all works fine, right until your kitchen floor vanishes into a sucking Chthonian hellpit.

Part of the job of security researchers is to discover situations like these: the marginal cases where designers’ implicit trust in abstractions breaks down. It’s truly shocking how many things are “secured” by nothing more than “why would anyone want to do that?” Any government process involving a fax machine, for example — I once watched a fascinating conference presentation describing how easy it is to “steal” a Florida LLC through nothing more than a faxed-in form. No authentication whatsoever, because hey, who’d go to that trouble? Another great example is the implicit trust companies place in their domain name registrars. A company can dedicate as much attention as they please to their network security — and still be “hacked” through a registrar compromise. These problems are solvable, but they’re only visible — never mind tractable — if designers and decision-makers view these processes as something other than pure black-box abstractions.

How can we improve this situation? Here are some ideas:

Communicate the limitations of your systems clearly, even if you think no one will ever reach them. If your electronic part stops working at 300 degrees Fahrenheit, communicate that fact — or one day someone may build it into a supposedly-fireproof system.

Abstractions you design should fail visibly, quickly, and gracefully. This especially goes for people who might not read the whole documentation (think: public APIs). It’s much better to display a clear, obvious error than it is to start dropping, say, one in a hundred requests.
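As a rough sketch of what failing visibly might look like in practice (the class names and limits here are hypothetical, not drawn from any particular API), a rate limiter can refuse excess requests with an explicit, descriptive error instead of silently dropping them:

```python
import time


class RateLimitExceeded(Exception):
    """Raised immediately and loudly when a caller exceeds the limit."""


class RateLimiter:
    """Token-bucket limiter that refuses visibly rather than dropping silently."""

    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.allowance = float(max_per_second)  # start with a full bucket
        self.last_check = time.monotonic()

    def check(self):
        # Refill the bucket in proportion to elapsed time, capped at the limit.
        now = time.monotonic()
        self.allowance = min(
            self.max_per_second,
            self.allowance + (now - self.last_check) * self.max_per_second,
        )
        self.last_check = now
        if self.allowance < 1:
            # Fail visibly: tell the caller exactly what happened and why,
            # rather than quietly dropping one request in a hundred.
            raise RateLimitExceeded(
                f"limit of {self.max_per_second} requests/sec exceeded"
            )
        self.allowance -= 1
```

A caller who hits the limit gets an immediate, self-describing exception, so the limitation shows up in their logs on day one instead of as mysteriously missing requests once their app takes off.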

Follow the Robustness Principle: “Be conservative in what you do; be liberal in what you accept from others.”

Follow the corollary as well. Understand that those designing the abstractions you use may not have taken the same steps. This is more than just liberal acceptance of potentially-flawed communication. I’d be very cautious about building a business totally dependent on someone else’s API (Facebook or Twitter, for example) lest they do exactly what those companies have done: radically shift the underlying terrain and render my previously-thriving business unsustainable. Twitter is more than an abstraction: it’s a company, run by real people, that will occasionally make business decisions you disagree with. The only question is: how much risk does this expose you to?

Once in a while, take the covers off the abstractions you rely on, especially those you implicitly trust. Ask yourself what things would have to fail for an unacceptable loss event to take place. How do you know those things aren’t going to fail? What can you do about that?

Don’t put lava in your supermarket basket. It’s just not a good idea.

Slouching Towards the Panopticon



Flickr photo from mlibrarianus

Abstract

Cryptanalysis always gets better. It never gets worse.

Many in the public policy, defense, technology, and security communities have long known, and Edward Snowden gave up his freedom to remind us, that the United States government is progressively building a comprehensive, intrusive intelligence-gathering system: a Panopticon with capabilities the average citizen can neither comprehend nor resist, a system with unprecedented potential for tyranny and repression, and one that will be far harder to dismantle than it was to build. There are ways legislators, technologists, and the public can push back, and we must do so as soon as possible to avert a terrible future.

Preface

A few key facts to set the scene:

  • 9/11 was publicly labeled a failure of intelligence by a wide cross-section of the American media and political establishment.
  • We have a cultural need to place blame on individuals or groups for bad events, even those that result from systemic failure.
  • The United States — hereafter “we”, though much of this applies to the “five eyes” (viz. Australia, Canada, New Zealand, the UK, and the US) as well — has made massive investments in “homeland security” signals intelligence for a variety of purposes, including but not limited to counter-terrorism.
  • The United States National Security Agency (NSA) probably spends more on cryptography research than everyone else put together. They publish very little research. Thus, the cutting edge of public research is likely an inaccurate predictor of the NSA’s cryptologic capabilities.
  • The prospect of collateral damage does not, in practice, deter the American intelligence apparatus from using its capabilities to identify, capture, and kill terrorists.
  • Through Edward Snowden and others, we have significant evidence that the NSA has spent significant effort building a comprehensive apparatus for surveilling both Americans and foreigners. Through the PRISM program and other systems, they have acquired unprecedented access to the private information of citizens worldwide. They have made significant investment in systems to automatically analyze this data. The public does not know the full extent of their capability in this regard.
  • The head of the NSA, Gen. Keith Alexander, believes that only by collecting all available data can the NSA effectively thwart terrorism.
  • The public state of the art in data storage and analysis — storage capacity, processing speed, statistical techniques, available software, automation capability — is several orders of magnitude better than it was 20 years ago. We have no reason to expect this trend will not continue.

Where we could go

Imagine you’re the director of US national intelligence. It’s a few years from now, and you’ve been given an enormous budget and an effectively unlimited supply of smart people to end the terrorism problem. With these resources you build Deep Thought — a machine with access to an unfathomably deep database, capable of answering arbitrarily-formed questions about the population of the US.

Billions of dollars and thousands of man-years later, with the President and the Joint Chiefs watching, you stand in front of this machine and say,

Show me the terrorists.

Obligingly, Deep Thought displays a list of Islamist radicals, anarchist cranks, would-be Unabombers, and so on. The dedicated men and women of our national security agencies round up these dangerous criminals and lock them away. The nation rejoices. The TSA lets you fly with shampoo again. Everyone’s happy.

There is just one catch: Deep Thought is still sitting there, waiting to accept input. The nation has already sunk the cost of building it, and it’s no more about to shut it off than it is to, say, unilaterally destroy its entire strategic nuclear arsenal. But since we have it:

Show me the murderers.

Show me the child molesters.

Show me the kidnappers.

The New York Times has a field day. Fox News runs a slide show of missing little girls and their rabid killers, finally brought to justice. Government approval soars.

And Deep Thought is still sitting there, waiting for the next question.

Show me the drug dealers.

Hundreds of thousands, ranging from die-hard methheads to small-time marijuana dealers to club kids to ancient hippies passing out LSD. In one swift stroke, the drug problem in the United States is over. Despite the collateral damage (a few politicians’ sons, how embarrassing) support is broad.

Show me anyone with the capability and intent to bring down the United States government.

Show me the leakers.

Show me the hackers.

It is still pretty easy to justify. These are dangerous people — traitors, even. “Crazies.” Nobody really complains.

Who is left? Anyone you want.

Show me the dissidents.

Show me the activists.

Show me…

Frequently exclaimed “but!”s

That system you described is impossible.

I agree — there’s no way we could ever build a system with a zero error rate. Either we would miss some terrorists by flagging only those where P(terrorist)=1 (and if you think we would take this road, I have a deterministic random bit generator to sell you), or we would end up including innocents in the results, like dolphins in a net of tuna. See: this guy.

A 100%-accurate system like this is impossible with the tech we have today. Much more likely is a system — much like ones we have today — that gives a probability of being a terrorist. Unfortunately, decision-makers aren’t often big on fuzzy probabilities:

“I don’t need this,” Harris reports that a senior CIA officer working on the agency’s drone program once told an NSA analyst who showed up with a big, nebulous graph. “I just need you to tell me whose ass to put a Hellfire missile on.”

They might be able to do this in the future, but not now.
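The base-rate problem is worth making concrete. Even a very accurate classifier, run against an entire population, flags vastly more innocents than genuine targets. A quick back-of-the-envelope sketch — every number here is an illustrative assumption, not a real figure:

```python
# Base-rate sketch: a very accurate classifier applied to an entire
# population still flags far more innocents than actual targets.
# All numbers below are illustrative assumptions.

population = 300_000_000      # rough U.S. population
actual_terrorists = 1_000     # assumed number of real targets
true_positive_rate = 0.99     # classifier catches 99% of real targets
false_positive_rate = 0.01    # and wrongly flags 1% of innocents

true_positives = actual_terrorists * true_positive_rate
false_positives = (population - actual_terrorists) * false_positive_rate
flagged = true_positives + false_positives

print(f"People flagged: {flagged:,.0f}")
print(f"Chance a flagged person is a real target: {true_positives / flagged:.4%}")
```

Under these assumptions, fewer than one in three thousand flagged people would be an actual target — the dolphins-in-a-tuna-net problem, in numbers.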

Public companies, especially marketers and advertisers, are experts at this sort of behavior. Web advertising companies jumping on the big-data bandwagon have discovered Target et al. already there. What can the NSA accomplish with Google/Facebook/Comcast/Verizon/AT&T’s knowledge of your online habits, plus their own sources? What profile could they build of you? In theory, they can suborn your laptop, your smartphone (a remotely-operable listening device with a location tracker built in, great!), your landline, every Internet connection in your house and business, your physical mail, your email, your OnStar-enabled car, every online profile you have unless you adopt the life of a complete paranoiac — and even then it only takes one misstep for them to get you. Ask John McAfee.

Certainly there are technical capabilities that are out of their reach at this moment; that’s why they’re storing all the data for future analysis. Data that currently seems inconsequential might be far less so later. Currently-unbreakable crypto protocols (and remember, they spend more on crypto research than the entire rest of the world put together) may not be so unbreakable later. Good thing for them it’s all on disk.

This is unconstitutional.

Probably, but that hasn’t stopped them. Their interpretation of the law and the Constitution most likely doesn’t match yours.

The NSA doesn’t consider automated interception and bulk analysis a “collection”, meaning it can build as large a profile of you as it wants, no approval required, as long as no human intentionally instructs it to do so. Should it ever decide it needs approval, the Foreign Intelligence Surveillance Court’s approval rate is 99.97%.

We would know if this were happening.

The NSA, DEA, and FBI have demonstrated willingness to lie to the courts (through the method of “parallel construction”, also known as intelligence laundering) about the source of their information. They have repeatedly misled Congress about their activities. They have redefined words like “incidental,” “relevant,” and “targeted” in positively Clintonian fashion. They believe that the disclosure of the existence and operation of this technical apparatus is tantamount to treason against the United States.

You are free to believe, as President Obama claims, that we would be having this debate even if Snowden hadn’t done what he did. I do not believe this.

My life is not interesting enough to be surveilled (often rendered as “I’m too boring.”)

The background assumptions there:

  • it costs the government some nontrivial amount to watch me, and
  • they don’t gain anything of worth by doing so (after all, I am not a terrorist!), so
  • it’s not worth it to them to do so.

The first assumption no longer holds.

Before the age of fast automated analysis, surveillance agencies were limited by money and very expensive human time in the scope of their surveillance. If they wanted to watch you, an actual human had to spend time listening to you, following you, reading your physical mail, and so on — with the associated Fourth Amendment implications, to the degree the government followed them.

This is dramatically less true in an era of supercomputers and data centers, where the entire metadata output of, say, Verizon can be recorded by a few racks of servers. In other words, once the infrastructure is in place, the marginal cost to the government to surveil you is effectively zero, and the downside risk for failing to surveil the right people is high. They may have more to lose by not watching you than the other way around.

What harm does it do? If you’re doing nothing wrong, you have nothing to hide.

No. If I have done nothing wrong, you have no reason to watch me.

Just because Google, Facebook, Skype, Verizon and other companies are routinely monitored by the CIA doesn’t mean that somebody is watching you every time you order groceries online or voice-chat your sister in Seoul. It just means that they could if you gave them a reason to do so. That means you can relax – right up until the time when you want to go to a protest, or your sister does, or you support the fact that several thousand complete strangers did.

If you believe you are being watched (or are uncertain but think you might be), you change your behavior. You take fewer risks: the fear of appearing guilty, even with the most innocent of motives, is enough to deter you from actions that might look suspicious to a hostile audience. You watch what you Google; you watch what you say on the phone. In small ways, your life begins to resemble an airport security line (“don’t joke about bombs — they might be listening!”). Our democracy requires open discourse in order to thrive. We need the ability to hold discussions current (or future!) administrations might not like. We need to be able to hold opinions others might detest. We need privacy to flourish, and I do not believe we can be fully human, fully free, without it.

I don’t care. If the government says they need it to catch terrorists, I trust them.

Even with several very generous assumptions about the honesty, integrity, and competence of the current administration (for whom I voted twice, and about which enough has been said) — governments change. We should be deeply cautious of such a powerful tool, potentially the most powerful instrument of tyranny and oppression a government has ever possessed, in the hands of any administration, no matter how benign. All it takes is one election. If the transition from the Bush to Obama administrations has taught us anything, it’s that a change in party in no way necessitates a lessening of executive power.

What we can do

In a very real way, we, the American people, did this to ourselves. We built this machine out of fear and now, at least in the short term, we are stuck with it. By demanding that no terrorist attacks ever happen, and placing the blame on our intelligence apparatus when they do, we have strongly incentivized the creation of a system exactly like this.

In the short term, we can demand radical transparency and oversight for our clandestine security agencies, starting with a top-to-bottom audit from Congress’ loudest complainers (Senators Ron Wyden and Mark Udall come to mind). We can make this a campaign issue: as a society, we must accept this may make them less effective (though who wholeheartedly accepts the doomsaying coming from the NSA in response to the Snowden leaks?) and that we are willing to make that tradeoff. Remember, as easy as it may be for them to forget, our government works for us.

In the medium term, engineers and technologists must refuse to build or support systems that engage in privacy- and liberty-destroying mass surveillance. Transparency reports are a good start, but those with the clout and resources to fight these orders must stand up and do so — large companies, mostly, but individuals can as well. We must establish a strong ethical basis to guide those in my profession as we develop the technology of the future.

In the long term, we must dismantle the current architecture of secret courts, secret orders, and secret law enforcement. As a society we must stand up and say: this is not who we are. Though it may come at a price, we demand freedom of thought and conscience and association. We demand that our messages to our loved ones, our health, our finances, our lives, retain the privacy the Constitution specifically affords them. We demand what Justice Brandeis called “the right most valued by civilized men”: the right to be let alone.

I would love to hear your comments; I am @ternus on Twitter.

Low-overhead Paranoid Browsing for Fun and Profit

| Comments

Companies are investing more effort than ever before in tracking you online, and not just on their own sites. They use the data your browser provides to build a profile of you that can follow you across the web. In this post, I’ll describe a suite of defensive techniques that push back against this tracking while still letting you enjoy the modern web experience.

Goals

We have two goals:

  • Leak as little information as possible to the sites you visit, while
  • minimizing the impact on your regular browsing behavior.

Techniques

Use an anonymizing VPN

I use Private Internet Access, which is $40/year, but there are a number of great options, including self-setup VPNs. All my web traffic flows through the VPN, denying the sites I visit the opportunity to discriminate based on my IP address — perhaps the easiest way for sites to track you.

Your IP address can be used to geolocate you with an astonishing degree of precision. If you’re serious about privacy, be sure to use a VPN. You’ll also want to enable the option that disconnects your network when the VPN isn’t connected to prevent accidentally using an unencrypted connection.

Use Firefox

I love Google Chrome, and it’s been very difficult for me to switch — but using a browser provided by a company whose business model centers around compiling as precise a profile of you as possible is no longer compatible with the goals I outlined above. An excellent example of this is how the default configuration reports a heap of data to Google, while burying the “do not track” setting (which Chrome was the last browser to implement) in “advanced settings” and giving you a warning if you enable it.

I hadn’t used Firefox in years before I switched, but it’s gotten a lot better. In particular, I’m using the UX build (warning: very beta), which has a Chrome-like interface.

Firefox Settings

It’s easy to tune Firefox to maximize your privacy without diminishing your experience.

Privacy

Under the “Privacy” tab in the preferences page, select “Tell sites I do not want to be tracked.” Of course, sites are under no obligation to obey this flag, but it doesn’t hurt to make our preferences clear.

Set the history preference to “Use custom settings for history,” and change “Accept third-party cookies” to “Never.” I’ll let CNET explain why:

So if third-party cookies offer no direct benefit to users and can potentially be a threat, why do all the major browser makers default to allowing sites to leave all the cookies they want on your machine? Because the advertisers are their customers and are at least as important to them as users are.

Continue accepting cookies in general, though. We’ll get to why that’s OK later.

Security

Uncheck “Remember passwords for sites.” We have a better option.

Search Engine

I tried switching to DuckDuckGo, but after nearly fifteen years of using Google I found the search quality much poorer. (For example, when searching for the UX build link above, I tried “firefox UX”, “firefox UX build”, “firefox UX nightly”, etc. on DuckDuckGo, none of which produced the result I wanted. One search on Google and the first result was correct.)

If you want a more privacy-respecting engine than Google but can’t give up (most of) Google’s search quality, use Startpage. This site uses Google as a backend for search results.

Block Referer [sic] Header

Type “about:config” in the address bar, then type “referer” in the search bar. You want to set the network.http.sendRefererHeader value to 0, which will prevent the browser from sending Referer: headers. When clicking on a link from page A to B, this will prevent B from seeing the URL of A.
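If you’d rather not click through about:config by hand, this preference and the ones from the previous sections can be collected in a user.js file in your Firefox profile directory, which Firefox applies at every startup. A sketch — the pref names and values below are what I believe to be current, but verify them in about:config before relying on them:

```js
// user.js — Firefox applies these prefs on every startup.
user_pref("privacy.donottrackheader.enabled", true); // send the Do Not Track header
user_pref("network.cookie.cookieBehavior", 1);       // 1 = reject third-party cookies
user_pref("signon.rememberSignons", false);          // don't remember passwords
user_pref("network.http.sendRefererHeader", 0);      // 0 = never send Referer:
```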

Firefox Addons

Here are some low-impact extensions that can drastically improve your privacy.

LastPass

Fun fact: I don’t actually know the vast majority of my passwords. LastPass is a cross-platform (Windows/Mac/Linux/Android/iOS) password manager that generates and securely stores passwords and form-fill info. It uses client-side encryption to protect your data and allows auto-login and auto-fill. Highly recommended.

HTTPS Everywhere

Automatically and silently redirects you to HTTPS on many popular sites. Easy. This is how the web should work.

Self-Destructing Cookies

Deletes cookies for a site as soon as you close the tab. You’ll want to whitelist sites you want to stay logged into, but this is a one-time operation per site. This can go a long, long way towards preventing sites from building a profile of you. This gets you most of the benefit of disabling cookies while still allowing you to log in.

Adblock Plus

Blocks ads, and more importantly, the request to third-party sites those ads generate.

BetterPrivacy

Has just one purpose: to disable LSOs (locally stored objects), a type of “super-cookie” used by Flash.

Ghostery

Blocks a wide range of tracking methods 99% non-intrusively. The default configuration gives you a little popup at the upper right of your browser, which is occasionally interesting (“I had no idea this page felt the need to track me in 12 different ways”). Will occasionally break a page in a non-obvious way (“I feel like there should be comments here…”) but it’s easy to disable temporarily if reading internet comments is your thing. cough

Secret Agent

Cookies aren’t the only way sites fingerprint your browser — one of the most common methods is the user-agent string. This extension randomizes the user-agent string your browser sends with every request, spreading your information across a number of different profiles and making you harder to track. It’ll also randomize the HTTP Accept headers.

Unfortunately, the default user-agent list Secret Agent ships with causes problems with sites as their capability-detection mechanisms go haywire (Lynx?!). I’ve created my own list of modern user agents consisting of the most popular browser strings from Firefox, Chrome, and Safari.
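Conceptually, what Secret Agent does amounts to picking fresh headers for every request. A toy Python sketch of the idea — this is not the extension’s actual code, and the user-agent strings are just examples:

```python
import random

# A small pool of plausible user-agent strings (examples only).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:121.0) Gecko/20100101 Firefox/121.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]
ACCEPT_HEADERS = [
    "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
]

def randomized_headers():
    """Pick a fresh User-Agent and Accept header for each request,
    spreading your traffic across several apparent 'browsers'."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept": random.choice(ACCEPT_HEADERS),
    }
```

The design choice matters: randomizing from a pool of common, modern strings blends you into the crowd, while an exotic or outdated string (Lynx?!) makes you stand out and breaks capability detection.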

Flashblock

Disallows Flash from running by default, replacing Flash applets with a placeholder that runs the applet when clicked. Not only does this protect against malicious (or annoying!) Flash apps, it’ll prevent browsers from using Flash-based fingerprinting methods (such as loading the list of installed fonts). I’ve used click-to-run on Google Chrome for ages and it’s been great.

Out of Scope

There are a number of extensions and options not included here, largely because they have too significant an impact on regular day-to-day browsing. Tor slows down browsing far too much. NoScript and RequestPolicy require user intervention on virtually every page in order to load properly.

Conclusion

You can see the impact of some of these techniques using Panopticlick. If your browser is unique, don’t worry — just check how it changes on each reload, which Secret Agent should help with.
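To see why randomization helps against Panopticlick-style tracking: a fingerprint is essentially a hash over everything your browser reveals, so changing even one input yields a different fingerprint. A toy illustration — the attribute list here is a drastic simplification of what real fingerprinters measure:

```python
import hashlib

def fingerprint(user_agent, accept, fonts, plugins):
    """Toy fingerprint: hash the concatenation of browser-revealed attributes."""
    blob = "|".join([user_agent, accept, fonts, plugins])
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

stable = fingerprint("Firefox/25", "text/html", "Arial,Helvetica", "Flash")

# Rotating even one attribute (here, the user-agent) yields a different
# fingerprint, so a tracker can't link one session to the next.
rotated = fingerprint("Chrome/31", "text/html", "Arial,Helvetica", "Flash")
print(stable != rotated)  # True
```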

These techniques should allow you to browse the web with greater confidence that companies aren’t tracking you everywhere you go.

Got any other suggestions for low-impact privacy improvements? Tweet me at @ternus.

Infosec’s Jerk Problem

| Comments

Put bluntly: to others, we’re jerks.

If you don’t think this is a problem, you can stop reading here.

The dysfunctional tale of Bob and Alice

Imagine this. Developer Bob just received an email from your Infosec department, subject Important Security Update. He sighs, thinking of the possibilities: a request to rotate his password, or a new rule? Maybe it’s a dressing-down for having violated some policy, a demand for extra work to patch a system, or yet another hair-on-fire security update he doesn’t really see the need for. His manager is on his case: he’s been putting in long hours on the next rev of the backend but library incompatibilities and inconsistent APIs have ruined his week, and he’s way behind schedule. He shelves the security update — he doesn’t have time to deal with it, and most things coming out of Infosec are just sound and fury anyway — and, thinking how nice it would be if his team actually got the resources it needed, continues to code. He’ll get to it later. Promise.

Meanwhile, you, Security Researcher Alice, are trying not to panic. You’ve seen the latest Rails vulnerability disclosure, and you know it’s just a matter of hours before your exposed system gets hit. You remember what happened to Github and Heroku, and you’re not anxious to make the front page of Hacker News (again?!). If only Bob would answer his email! You know he’s at work — what’s happening? The face of your boss the last time your software got exploited appears in your mind, and you cringe, dreading an unpleasant meeting ahead. You fume for several minutes, cursing all developers everywhere, but no response is forthcoming. Angrily, you stand up and march over to his cube, ready to give him a piece of your mind.

Pause. What’s going on here, and what’s about to happen?

Interlude: we are the watchers on the walls

Many in the Infosec community are fond of casting the security world as “us versus them,” where “they” aren’t external, malicious actors but unaware users, clueless managers, and bumbling executives within our own organizations. We like to see ourselves as the Night’s Watch of the tech world: out in the cold with little love or support, putting in long nights protecting the realm against the real threats (which the pampered never take seriously) so everyone else can get on with their lives in comfort. We develop a jaundiced attitude: only we understand the real danger, we think, and while we’re doing our best to stave off outsider threats, when the long night comes we need fast and unquestioning cooperation from the rest of the organization lest (hopefully metaphorical) frozen undead kill us all.

The rest of the organization doesn’t see it that way. To them, we’re Chicken Little crossed with the IRS crossed with their least favorite elementary-school teacher: always yelling about the sky falling, demanding unquestioning obedience to a laundry list of arcane, seemingly arbitrary rules (password complexity requirements, anyone?) that seem of little consequence, and condescendingly remonstrating anyone who steps out of line. Once in a while a visionary (often an Infosec expat) who truly understands the threat tries to help others see the value, but most of “them” don’t get it. Users are stupid. Managers are idiots. Executives are out of touch. So it goes.

Back to the story

From Bob’s perspective, he’s making a reasonable risk/reward tradeoff: not dealing with the email right now might get him yelled at, but judging from history, probably not — he gets lots of “urgent” security emails that turn out to be Windows patches, admonitions to change his password, policy reminders and so on. From your perspective, Bob is being completely irresponsible: you told him it was important; it was right there in the subject line!

You storm into Bob’s cubicle. Images of mocking Hacker News articles dancing in your head, you accuse Bob of flagrant negligence (perhaps letting out some anger over the last security incident; Bob works on that team, doesn’t he?) and demand that he drop whatever he’s doing and fix this, now. This mood of righteous indignation doesn’t lend itself to patient explanations, and Bob’s demand that you explain the vulnerability is met with your impatient demand to “just do it.” There isn’t time for that — someone could be dumping your database as you speak!

Bob, already running out of patience due to his looming deadline, fires back that he can’t deal with this now, he’s too busy, it’s not his problem (there are other devs, right?) and you should take it up with his manager. Even if he could, he wouldn’t: didn’t the last few Infosec red alerts turn out to be nothing? Why are you trying to waste his time? Don’t you understand he has real work to do, work he’ll get fired for not doing?

Bob considers this a horrendous distraction from his critical dev work, and you see Bob as dragging his feet while the building’s on fire. You both walk away angry. Regardless of whether the vulnerability eventually gets closed, serious harm has been done.

A common, less overtly contentious version of this exchange involves FUD on both ends, with vague yet ominous threats coming from Infosec and a haze of scheduling delays, configuration problems, and blaming other teams (QA being a favorite) from the dev team. This usually gets management involved and everyone has a bad day.

There are two problems here. The first is a lack of understanding, the second, a lack of empathy.

Understanding is a three-edged sword: our side, their side, and the truth

Mankiw’s Principles of Economics apply here, particularly the first and fourth: “People face tradeoffs” and “People respond to incentives.” Hanlon’s Razor says “Never attribute to malice that which can be adequately explained by incompetence,” but I would add, “Never attribute to incompetence that which can be explained by differing incentive structures.”

For example:

  • Tradeoffs: if you give someone two tasks, both of which could take up 75% of their time, then tell them they will be fired if they don’t do Task A, don’t be surprised when Task B doesn’t get done.
  • Positive incentives: if you measure QA performance by number of bugs found, you’ll find dozens of spurious bugs in the system.
  • Negative incentives: measure developer performance by number of bugs generated and watch as devs pressure QA to not consider problems “bugs.”
  • “Never argue with a man whose job depends on not being convinced.” — H.L. Mencken

These problems are not amenable to the sort of frontal assault described above. That approach assumes the target doesn’t understand or isn’t aware of the problem, while in many cases they fully are. While yelling at someone may occasionally achieve the result you want, it doesn’t come without collateral damage, including massive loss of goodwill. Sometimes hard authority (executive fiat) is the only way to get the job done, but usually there’s a better way.

Imagine a group of people invite you and your friends to a local football field to play a friendly game. You show up and are quickly bewildered: the others are mocking you, nothing seems to be going the way it should, and when one of them shouts “go!” some of your friends are injured in the ensuing confusion. You could shout at your newfound acquaintances for hurting your friends, complain privately about how stupid they are for not understanding the rules… or pause for a moment, collect the available facts, and realize that they had actually invited you to play rugby. Disregarding your goals and charging off in another direction is not necessarily an indicator of malice or stupidity. People are always playing a different game — it just occasionally has similar rules to yours.

Developers are measured on their ability to get software out the door. QA teams are often measured on speed. Managers are responsible for the performance of their team. Executives worry about the overall direction of the business. You need to show how security aligns with these goals. Security can be more than just a hedge against long-term downside risk: it can be a way for everyone to produce better software.

What you need to understand is others’ value calculus: what factors go into what they consider important? Given that, how can you both decide on some security goals that are in line with both their calculus and yours?

Empathy

The jaundiced attitude among Infosec mentioned above, coupled with differing incentive structures, has an unfortunate tendency to spill over into external interactions. If 90% of lunch conversations are complaints about how terrible users are, how management doesn’t get it, and how the dev team on Project Foo are a bunch of incompetent turd-burglars — the next time you have to meet with Project Foo’s team, you’ll be hard-pressed to give them a fair hearing as they explain how their lack of proper resources and mountain of technical debt prevent them from addressing problems properly.

When we go for the easy answers:

This {system, product, device, network} is {insecure, vulnerable, unsafe, slow, broken, unprofitable, incomplete, poorly designed, ugly} because the {designer, manager, dev team, executives, QA, sales} {is incompetent, is lazy, doesn’t care about security, is an asshat}

we erode our ability to evaluate the true cause of a situation. (Social psychology refers to this as the Fundamental Attribution Error — the tendency to attribute others’ mistakes to their inherent failings, while attributing our own mistakes to the situation at hand.) We damage our reputation (and that of Infosec as a field), make ourselves unpleasant to deal with, and generally make the world a worse place.

We also get used to thinking of people and teams in that way. We genuinely become less kind people.

What’s the alternative?

Practice active kindness. Go out of your way to do kind things for people, especially people who may not deserve it. If you wait for them to make the first move, you’ll be waiting a while — but extend a hand to someone who expects a kick in the teeth and watch as you gain a new friend. Smile.

Don’t go for the easy, wrong answers. That team isn’t incompetent, they just have too much work to do; how can we work with them to get our thing done? That manager isn’t stonewalling, he just has a different incentive structure — how can we understand what it is?

Seek to understand and make this clear. When asking someone to do something, try to understand their current situation first. Perhaps the request isn’t as urgent as all that — but say it is. “I know you have a lot on your plate, with the X deadline and the Y update, but public-facing system Z could be compromised.” Ask questions and listen to the answers.

Be flexible. Recalibrate “urgent.” Think of the worst possible thing that could happen to your organization. Now try to make it worse. I’ve worked in places where the worst-case scenario involves “loss of multiple human lives.” Will the world end if this minor security patch isn’t applied today? Think of the automated OS updates popping up in the corner of your screen: how often are those more important than what you’re doing? If you practice this and do it well, people will start to feel you understand their value calculus, and this makes them much more likely to take your advice.

Create stakeholders and spread security knowledge. One thing our Infosec team tries to do is have people create their own security goals. The answer to “what does it mean to be safe” ultimately is up to them; we just guide the process. This means they’re invested in security – the more they’ve thought about the safety of their own product, the more likely they are to value it as a goal.

Conclusion

Fixing Infosec’s jerk problem benefits everyone: us, the people we deal with, and ultimately the security of the system — and since that’s our long-term goal, we should actively seek to fix the problem. Be kind, and the rest will follow.

What do you think? Let me know on Twitter; I’m @ternus.