Everyone’s Fault, Nobody’s Fault


Someone pushes a new feature to prod the same day you go on-call. Hours later, your phone goes off - not a gentle buzz, but a full-blown siren that could wake up the entire neighborhood.

You open the alert, and it’s for a feature you didn’t even touch. Maybe it’s unhandled NPEs, maybe something else. Doesn’t matter. You’re the one on-call, so it’s your problem now.


When Things Break

In those moments, it’s usually faster to just debug and fix it - even without full context.

I’m pretty good at debugging (unless it’s a latency issue; race conditions are somehow easier).

By the time someone else sees my message, figures out what’s going on, and proposes a fix, more customers will have been affected. So I’d rather just handle it myself.

Sometimes I roll back. Sometimes I don’t - if the issue’s isolated and the rest of the system keeps running fine, I’ll patch it directly.

But this newsletter isn’t about debugging, or when to roll back.

It’s about what happens after.


At AWS, We Don’t Point Fingers

Even if it’s your code that caused the issue, you’re not alone in it. Every change has at least one reviewer, multiple analyzers, and automated checks. If something still slips through, it’s not just on you.

When something significant happens, we write a COE (Correction of Error). No names. Just “the engineer.”

Then we go through the five “whys.”

Why did it happen?

Why did that happen?

Why did that happen?

And so on, until you reach the real root cause - maybe a missing test, weak automation, or a blind spot in monitoring.

It’s never just one mistake. It’s a chain.
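
A made-up example of what that chain might look like (not from a real COE): the service threw an NPE → the new field was never validated → no test covered the missing-value case → the template we copy for new endpoints doesn’t include one. Fix that last link and you’ve prevented a whole class of future pages, not just this one.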


The Netflix Culture Thing

A friend keeps trying to convince me to apply to Netflix. I’ve read about their culture, though - supposedly, if you cause an outage, you present your mistake to the entire company.

Maybe that’s exaggerated, but still. I know myself. I’d always have that tiny voice in the back of my head: don’t screw up.

We have global ops meetings too, where the more interesting incidents get ~~roasted~~ reviewed. But the difference is - no one’s being judged.


Mistakes Aren’t Failures

Nobody gets fired for one mistake. As IBM’s former CEO Thomas J. Watson supposedly said:

“Recently, I was asked if I was going to fire an employee who made a mistake that cost the company $600,000. No, I replied, I just spent $600,000 training him.”

If you’re causing weekly outages, sure, that’s a problem. But even then, it’s rarely just your problem - it’s the team’s.

So if your company has a blaming culture, be the one who changes it.

It’ll make everyone around you - and the product - better.


P.S. Once, I got paged at a casino while I wasn’t even on-call. Apparently, I had a setting enabled that triggered alerts whenever I had no service. The bug? It didn’t care whether you were on-call or not. My friend thought someone had just hit a jackpot. Turns out, it was just AWS yelling at me.

When I asked around, someone said it happened to them too - during a church service. So yeah, nobody’s safe from bad alert settings.

Cheers!

Evgeny Urubkov (@codevev)
