
Can We Stop With the LeetCode for DevOps Roles?


I came across a post from someone who had just left an interview for a platform engineering role after being asked to reverse a binary tree on a whiteboard.

That reaction made sense to me immediately.

If I am hiring for DevOps, SRE, or platform work, I do not learn much from watching somebody perform a memorized algorithm exercise that has nothing to do with the way the team actually operates. It does not tell me how they debug a failing deployment, how they think about blast radius, or whether they can keep their head when production starts behaving strangely.

That is the part that frustrates people. It is not just that LeetCode can be annoying. It is that the signal is often weak for the work the role is supposed to cover.

The problem is not “coding bad”

I want to be careful here, because this conversation usually swings too far in one direction.

Some DevOps and platform roles absolutely require coding. Sometimes a lot of it. Internal tooling, controllers, automation, migration scripts, CI helpers, incident tooling, cloud integrations, policy checks. That work is real. I do not buy the idea that infrastructure roles never need depth in code.

What I do object to is testing the wrong kind of coding.

If the role involves writing operational tooling in Go or Python, then ask a candidate to read and reason through a small Go or Python problem that looks like the work. Ask them to debug a script. Ask them to clean up a messy piece of automation. Ask them how they would make a tool safe to run twice. That tells you something useful.
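A minimal sketch of what "safe to run twice" can mean in practice. The scenario (archiving a log file) and the function name are hypothetical, chosen only to illustrate the idempotence instinct: a second run should be a no-op, not an error.

```python
import os
import shutil

def archive_log(src: str, dest_dir: str) -> str:
    """Move a log file into dest_dir; safe to run twice.

    A second run is a no-op instead of an error: the destination
    already existing is treated as success, and a missing source
    after a successful first run is not treated as a failure.
    """
    os.makedirs(dest_dir, exist_ok=True)       # no error if the dir is already there
    dest = os.path.join(dest_dir, os.path.basename(src))
    if os.path.exists(dest):                   # a previous run already did the work
        return dest
    if not os.path.exists(src):
        raise FileNotFoundError(src)           # genuinely missing, not idempotence
    shutil.move(src, dest)
    return dest
```

A candidate who reaches for `exist_ok=True` and the early return unprompted is telling you something that no binary tree question will.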

Asking them to invert a binary tree because “we want to see how they think” is usually a lazy proxy.

Relevant difficulty beats irrelevant difficulty

There is a huge difference between a hard interview and a relevant one.

A relevant interview can still be challenging. In fact, it probably should be. But the difficulty should come from the same category of thinking the person will need on the job.

For example:

  • A broken Docker Compose stack that will not come up, and the candidate has to talk through how they would approach it.
  • A Terraform change with one subtle but dangerous mistake, and the candidate has to explain what looks risky.
  • A CI pipeline that is suddenly taking forty minutes instead of ten, and the candidate has to describe where they would start.
  • A practical scripting task, like reorganizing objects in S3 based on a naming rule, where the real question is how they structure the work safely and verify the result.
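
The S3 task above is really a question about structure: plan, apply, verify. A sketch under an invented naming rule (date-prefixed reports moved under year/month folders); the actual S3 copy and delete calls are deliberately left out, because the interesting part is that the plan is computed and reviewable before anything moves.

```python
import re

# Hypothetical rule: "report-2024-06-03.csv" -> "2024/06/report-2024-06-03.csv".
KEY_RULE = re.compile(r"^report-(\d{4})-(\d{2})-\d{2}\.csv$")

def plan_moves(keys):
    """Return (old_key, new_key) pairs; non-matching keys are skipped."""
    plan = []
    for key in keys:
        m = KEY_RULE.match(key)
        if m:
            plan.append((key, f"{m.group(1)}/{m.group(2)}/{key}"))
    return plan

def verify(plan, keys_after):
    """After applying the plan, every new key should exist and no old key should."""
    new_keys = {new for _, new in plan}
    old_keys = {old for old, _ in plan}
    return new_keys <= set(keys_after) and not (old_keys & set(keys_after))
```

Printing the plan as a dry run by default, applying it only behind an explicit flag, and verifying the listing afterwards is the shape of the answer you are hoping to hear.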

Those are still interviews. The person can still struggle. But at least the struggle is connected to the job.

If the role is about operating systems under change, the interview should look like operating systems under change.

What good DevOps interviews usually reveal

The strongest infrastructure interviews I have seen tend to expose a few things pretty quickly.

First, can the candidate read a messy situation without panicking? A lot of real work starts with incomplete context and a system that is already misbehaving.

Second, do they check the obvious things before inventing a dramatic explanation? Good operators do not start with the cleverest theory. They start by narrowing the problem.

Third, can they explain trade-offs in plain language? It is one thing to know what a service mesh is. It is another thing to explain when the complexity is worth it and when it is just extra surface area.

Fourth, do they think about safety? Rollback. Blast radius. Observability. Idempotence. Permissions. Recovery time. These are the instincts that matter when your code touches production.

That is a lot closer to the actual shape of the work than memorizing interview puzzles.

What I would ask instead

If I wanted a practical interview loop for a DevOps or platform role, I would rather ask questions like these:

  • Walk me through how you would debug a DNS issue in a multi-region cluster.
  • Here is a failing deploy and the last few logs. What do you check first?
  • A developer keeps bypassing the CI/CD path because it is slower than deploying manually. How do you handle that?
  • We want to introduce a service mesh. Where do you think the real complexity will show up?
  • Here is a small script that works, but it is fragile. How would you make it safer to run in production?
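
One concrete shape the "make it safer" answer can take. The example below is a hypothetical config-writing helper: the fragile version would be a bare `open(path, "w").write(contents)`, which leaves a half-written file if the process dies mid-write. The hardened version writes to a temp file in the same directory and renames it into place, so readers only ever see the old file or the complete new one.

```python
import os
import tempfile

def write_config(path: str, contents: str) -> None:
    """Replace a config file atomically instead of truncating it in place."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".tmp-config-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(contents)
            f.flush()
            os.fsync(f.fileno())     # data is on disk before the rename
        os.replace(tmp, path)        # atomic replacement of the old file
    except BaseException:
        os.unlink(tmp)               # never leave temp debris behind
        raise
```

The specific trick matters less than whether the candidate thinks to ask what happens when the script dies halfway through.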

None of these questions require somebody to perform like a circus act. They still require judgment. They still require technical depth. They just test the kind of depth the role is actually buying.

The “we just want to see how you think” defense

This is the part I hear most often.

People will say the binary tree is not really about the binary tree. It is about seeing how the candidate behaves in an unfamiliar situation.

Fine. Then just use an unfamiliar situation from the actual job.

Give them a broken system. Give them a noisy dashboard. Give them a half-working script. Give them a suspicious Terraform plan. If what you care about is how they reason under uncertainty, infrastructure work gives you endless ways to test that without borrowing somebody else’s software engineering interview ritual.

That is what makes the defense feel weak. The replacement is obvious. It is not even harder to design. It just requires the interviewer to know what the job actually involves.

Candidates are evaluating the company too

One thing I liked in that thread was how many people said they now treat these interviews as a signal about the company, not just as a hurdle.

I think that is fair.

An interview process tells you what the company respects. If they are hiring for platform work but can only imagine evaluating it through generic algorithm puzzles, that says something. Maybe the hiring loop was inherited from another team. Maybe nobody involved understands the role particularly well. Maybe the company wants a software engineer who also carries infra, but without saying that plainly.

Sometimes that mismatch is survivable. Sometimes it is the first warning.

The better standard

The better standard is not “never ask hard questions.” It is “ask questions with transfer.”

If the role needs scripting, test scripting. If the role needs incident response judgment, test judgment. If the role needs deep systems understanding, put the candidate in a realistic systems scenario and see how they move.

And if the role really does require heavier software engineering work, say that clearly and design the loop around the actual code the team writes, not around whatever whiteboard question has survived the longest in tech interview folklore.

Key takeaway

DevOps interviews should test debugging, trade-offs, safety, and relevant coding. Difficulty is fine. Irrelevance is the problem.
