Security is mostly an accountability function

This is one of those "I made this post because I keep explaining the same small concept repeatedly" things. Both to save time repeating it, and (mostly) to try to clean up the concept.


Companies decide how secure they want to be twice*

The first time is when the exec team decides, in their own terms, just how much security they want to purchase. That purchase is made in people, software, process costs, and especially the meta-costs paid by every other org/effort in the company. The second time is when the exec team decides how much they want to deviate from the previous decision just this one time, for this particularly important thing. That second decision gets remade over and over, however many times necessary. You (hopefully) have a security executive because that first decision only matters if someone at the exec tier is holding their peers accountable for their strategic intents. A quick, naive primer on multi-level exec politics:

  • Bosses hold reports accountable for failing to hit goals (your security exec yells at you from here)
  • Peers hold peers accountable for failing to meet shared commitments (your security exec does most of their job here, w/ other execs)
  • Reports hold bosses accountable for failing to provide resources (your security exec does their job here, once or twice a year, with their boss. You do your job with your security exec here on a frequent basis)

In a perfectly aligned company, where no one ever tries to shirk parts of the agreed-to big picture that are temporarily inconvenient, the security team could define what the risk looks like, the execs (or more likely, delegates) could continuously make decisions aligned with their security goals, and everyone would move forward together. Nobody actually does that. So let's dismiss that potential out of hand for at least another decade or so. 

That all means that a well-functioning security team and security executive are, from the perspective of the rest of your company, largely accountability minders. Their role is to point out things that damage that first central security decision and say "you're lying to yourself if you think you can do this without it damaging your other goals," or to drive requirements into the rest of the execs/orgs that are foundational, required elements of meeting the shared security goal. You are rarely actually arguing with other people about the specific topic. You're arguing with them about meeting their commitment to the overall shared security goal.

Constantly re-centering conversations on given risk appetites is incredibly useful. With honest actors, it's fantastic -- establishing "wait, this was our goal, right?" with people who are dealing with you on the level is amazing. Building this habit into your security engineers will proactively prevent a noticeable percentage of escalations, and will also help your junior security engineers not make requests of partners that aren't reasonable for those shared goals. (There's a much later-stage maturity version of this where you've built an agreed-to method for estimating risk, and people are generally OK with your impacts on their timelines because they know that when you warn them off spending their risk budget here, they get to do something riskier later. Don't rush to this as a goal. If you get there well before the rest of the company is ready to think about these things big-picture, you will accidentally make security a 1000lb gorilla in every "a or b" decision conversation, and you'll strangle the company's ability to take informed guesses at risk decisions -- which is what it absolutely should still be doing early on.)

Unfortunately, it's also often less useful than you'd hope. You'll frequently run into people who habitually think they're the only pragmatist in the conversation. You'll run into people who discount security goals out of years of poor experiences with them. You'll also run into people acting with direct malicious intent, but they'll be indistinguishable from the "only I think pragmatically" type unless they're extremely bad at it. If you've got a choice, it's much more efficient to just choose to not work with assholes (or people who accidentally approximate assholes) who put their personal goals well out ahead of company goals, but it is an imperfect world.

If you're living in this particular intersection of startup, maturity, and (hopefully un-)intentional bad actors, it can be very difficult; even demoralizing. Guiding conversations back to common security goals will sometimes seem like it isn't helping. It still is. Even when it isn't useful for the listener, it at least puts your head in the right place for the argument. Make it a habit. Don't make it your only one, of course. If you're having these types of discussions, you need a bag full of de-escalation and empathy strategies to keep a line of communication open, even if the net result of them is "we can respect each other's position, but I'm hard blocking you." Security engagement has a non-trivial number of typical conversations that end with one of the parties extremely dissatisfied. At worst, re-centering on shared goals will at least help keep it impersonal. If you find that you're using multiple conversation strategies successfully, including this one, and still have people who refuse to accept the shared strategy, it's time to decide whether you're fighting an independent bad actor, or whether that person is doing what the company's culture has shaped them to do. Depending on the answer, you either help them find somewhere else to work, or do so yourself. Sometimes the right answer may be that you have to change the core culture of the company to do your job, but that is a much trickier proposition.

[A note to the reader from future Graham: I quit that job and ranted about the isolated but deeply ingrained toxic asshole culture in one particular org that had forced me to quit a great job, because infinite gaslighting drives everyone crazy eventually.]

[A note to Graham from this reality's Graham, should other realities' Grahams stumble upon this, or should time travel become widely available in this reality: That first time you thought "I shouldn't have to put up with this, and if I can't convince anyone that it's happening deliberately I should just get out now," you were right and should have gotten out then. It took you almost an entire year to get over the burnout that staying caused.]




*Companies decide, in the trenches, how secure they want to be dozens of times a day, and inside the security team's prioritization decisions, non-stop.