Mapping the Ocean Floor: Interviewing at the Right Resolution
A Video About the Ocean
I was watching a Cleo Abram video about ocean mapping, and it completely reframed how I think about something that has nothing to do with the ocean: hiring.
Here’s the problem she describes. We’ve mapped the surface of Mars in higher resolution than we’ve mapped our own ocean floor. For decades, the best maps we had of the deep ocean came from satellite-based measurements. Broad, global coverage, but terrible resolution. You could see the general shape of things, but the picture was blurry. Too blurry to actually navigate by.
The breakthrough wasn’t better satellites. It was narrower, more focused instruments. Sonar vessels that could map small areas in extraordinary detail. The trade-off was obvious: you couldn’t cover everything. You had to choose where to point the beam. But the result was that a small, well-chosen area mapped in high resolution told you more than the entire ocean mapped in low resolution.
That trade-off, between breadth and resolution, is the exact same problem we face every time we interview someone.
The Resolution Paradox
Here’s the paradox at the heart of every interview process: the more you try to cover, the less you actually learn.
I once had four yeses on a panel where none of the scorecards had real conviction. Surface-level project descriptions, no depth beyond the headlines. Four yeses, zero confidence. That can happen when an interview loop tries to cover eight dimensions in a 45-minute conversation. Each dimension only gets a few minutes of airtime. Enough to fill a scorecard, but not enough to produce real signal. You end up with a satellite view where you can’t differentiate a ridge from a trench.
Interviewing is a noisy prediction problem: each assessment is a low-fidelity measurement, and spreading your attention across more of them doesn’t make any single one sharper. You have to choose which parts of the ocean floor matter most for this role, on this team, at this moment, and accept that other areas will stay in low resolution.
Start With What You’re Actually Looking For
I’ve found many interview loops are assembled bottom-up. Someone pulls out their favorite system design prompt. Another interviewer brings in a behavioral question they’ve been asking for three years. The loop gets stitched together from existing parts, and the implicit assumption is that if you cover enough topics, you’ll learn enough to decide. But that’s not a first-principles approach. If you don’t map your questions back to your criteria, how do you know you’re assessing for the right things?
Instead, before writing a single question, I ask myself: what are the three or four things that, if we had strong signal on them, would give us confidence in a decision? Not a wish list of everything I’d love to know. The specific, prioritized signals that matter most for this role.
A senior engineer joining a team that’s rewriting a core service needs different signal than one joining a steady-state team. A staff role might demand high resolution on how someone handles ambiguity and influences without authority, and only surface-level confirmation that they can write decent code. A management role might have different priorities entirely.
Once we have the rubric, the questions get much easier to write. Now we’re looking for situations that surface the behaviors and competencies we care about. We can also catch design flaws early, like assessing “empathy” and “making tough calls” as separate dimensions and missing the managers who live their values precisely when they’re having tough performance conversations.
The result is an interview loop with a clear separation of concerns: each interviewer is set up to go deep on their assigned signals, and everyone walks into the debrief knowing what they were supposed to be mapping.
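To make that concrete, here’s a minimal sketch of what a written-down loop plan can look like. The signal names, interview slots, and the coverage check are all hypothetical, not a prescription; the point is only that every priority signal gets at least one deep pass, and any overlap is deliberate rather than accidental.

```python
# A hypothetical loop plan: each interview owns a small number of signals
# at high depth, with light overlap so the debrief can cross-check stories.
LOOP_PLAN = {
    "system_design": {
        "deep": ["technical_leadership", "handling_ambiguity"],
        "surface": ["coding_fundamentals"],
    },
    "leadership_stories": {
        "deep": ["influence_without_authority", "handling_ambiguity"],
        "surface": ["technical_leadership"],
    },
    "hiring_manager": {
        "deep": ["motivation_and_fit"],
        "surface": ["influence_without_authority"],
    },
}

PRIORITY_SIGNALS = {
    "technical_leadership",
    "handling_ambiguity",
    "influence_without_authority",
    "motivation_and_fit",
}

def uncovered(plan, priorities):
    """Return priority signals that never get a deep pass anywhere in the loop."""
    deep = {signal for slots in plan.values() for signal in slots["deep"]}
    return priorities - deep

print(uncovered(LOOP_PLAN, PRIORITY_SIGNALS))  # -> set(), every priority is covered
```

The format matters far less than the exercise: writing the plan down forces the question of which signals are actually the priorities, and makes it obvious when a priority has no deep pass at all.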
Choosing Where to Point the Beam
Now that we’ve defined our signals, we can start thinking about the skills involved in conducting the interview itself. As an interviewer, I have a library of questions I can use to direct the conversation, but there’s no way I can go deep on all of them. In a system design interview spanning architecture, domain modeling, error handling, monitoring, scalability, and more, I still have to direct my focus. So, maybe the candidate’s approach to error handling raises a question. I spend ten minutes there, while their monitoring answer stays at the surface level. Making these trade-offs in real time requires three distinct skills:
The broad scan. Early conversations (or the opening of each interview) establish a baseline: where does this person’s experience cluster, what’s their general depth, how do they talk about their work? I’m doing risk assessment — building a low-resolution picture to spot where the terrain looks odd. A candidate’s summary of how they led a migration might sound perfectly reasonable — but is that because they drove the technical decisions, or because they’re narrating someone else’s work? The broad scan tells me where to go deeper.
The deep dives. Some are predetermined. If technical leadership is my top signal, I planned for it. Others emerge from the broad scan. The candidate mentions a project that sounds like it went sideways, then pivots quickly. A claim on their resume doesn’t match the depth of their answers. A description of cross-team work sounds polished. Maybe too polished. These are the oddly shaped features on my map, and spending a few extra minutes at higher resolution might change my read entirely.
The pivots. The hardest skill, and one that requires the most judgment. Mid-conversation, something shifts — I came in planning to probe technical design, but the candidate’s answer opens a more interesting question about how they handle ambiguity. Do I follow that thread or stick to my plan? If I can redirect depth to the most valuable signal, I’m making much better use of my time.
I’ve definitely gotten this wrong before: plodding along with my planned questions, collecting low-resolution information across the board instead of committing to depth anywhere, and walking out thinking “did I get any real signal?” These days, I focus. I might only get two or three pieces of signal in an interview, but I’ll feel confident in each one.
Designing for Signal
With signals defined and depth allocated, a couple of habits have made a big difference for me in increasing signal output.
Challenge Your Signal
While I set interviewers up with different conversations, I like having some overlapping territory: the same competencies examined from different perspectives. Different interviewers will pull different threads from the same story; one focuses on technical decisions, another on leadership dynamics, another on how the candidate handles the parts that didn’t go well.
This applies within a single interview too. When I form an early impression, my instinct is to spend the rest of the conversation confirming it. I’ve learned to fight that. A positive impression that survives a genuine search for counter-evidence is worth far more than one I never tested. And a negative impression that dissolves under scrutiny has saved me from passing on the right person more than once.
Make the Conversation Flow
I feel very strongly about this: making people feel at ease isn’t just something I do to be polite. People give you far better signal when they’re not anxious, defensive, or confused about what you’re asking. An interview where topics build on each other, where I’m genuinely curious rather than checking boxes, gets the candidate to stop performing and start communicating.
The Debrief Is Where It All Lands
Everything upstream exists in service of this moment: the signal definitions, the depth allocation, the deliberate choices about where to point the beam.
“I liked them” and “something felt off” have come up at one point or another with every group of interviewers I’ve worked with. They’re fine as starting points. But I’ve learned to push past them fast: what specifically did you observe? “They struggled with system design” isn’t super useful. “They defaulted to a monolithic approach, didn’t consider failure modes until I prompted them, but course-corrected quickly with strong first-principles reasoning” is a high-resolution data point a group can work with.
The best debriefs I’ve been in usually have one thing in common: people are honest about where they have depth and where they don’t. In one, an interviewer flagged that she hadn’t gotten signal on metrics-driven decision-making for an engineering manager candidate. Rather than treat that as a gap to worry about, we mapped back to what we actually needed: logical consistency, tough calls with data, structured problem-solving. That signal was scattered across other interviews — ownership from the leadership round, goal decomposition from the process round. We composited it and made a confident hire, one of the best managers I’ve brought on.
What I’m after is reproducible decisions — if I ran the same candidate through again with different interviewers, would we land in a similar place? That’s only possible if my interviewers are calibrated. I’ve seen “strong signal on system design” mean completely different things to different people in the same debrief. The only fix I’ve found is ongoing: shadow each other, compare notes on the same candidate, talk about what you’re seeing and why. Not to make everyone agree — different perspectives are the whole point — but so that when we composite the picture, we’re actually working from commensurable data.
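As a loose illustration of what compositing can look like in practice (the data shape and field names here are mine, not a standard tool), keying scorecard observations by signal rather than by interviewer lets the debrief read the evidence per signal:

```python
from collections import defaultdict

# Hypothetical scorecard entries: each observation names the signal it speaks to
# and carries the concrete evidence, not just a rating.
scorecards = [
    {"interview": "system_design", "signal": "handling_ambiguity", "depth": "deep",
     "evidence": "Reframed the vague prompt into two options and argued for one."},
    {"interview": "leadership_stories", "signal": "handling_ambiguity", "depth": "surface",
     "evidence": "Mentioned ambiguity on the migration but stayed at the summary level."},
    {"interview": "leadership_stories", "signal": "technical_leadership", "depth": "deep",
     "evidence": "Walked through the migration decisions they personally drove."},
]

# Composite by signal so the debrief argues about observations, not impressions.
by_signal = defaultdict(list)
for entry in scorecards:
    by_signal[entry["signal"]].append(entry)

for signal, observations in by_signal.items():
    print(signal)
    for obs in observations:
        print(f"  [{obs['depth']}] {obs['interview']}: {obs['evidence']}")
```

Again, the structure is incidental; the habit is evidence attached to a named signal, grouped across interviews, so the group can see where it has depth and where it doesn’t.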
Back to the Ship
There’s one more step most people skip: checking the map against reality.
When a hire works out, I look back at what the process predicted. When one doesn’t, I do the same. The feedback cycle is long, the data is messy, and clean attribution is rare. But even rough pattern-matching over time has taught me which of my signals were actually predictive and which were noise that felt like signal.
I’ll never have complete information about a candidate. Nobody will. But I’ve gotten better at being honest about that uncertainty — building processes that make the best use of limited data, and closing the loop so each decision sharpens the next one. Not perfection. Repeatability.