Mapping the Ocean Floor: Interviewing at the Right Resolution

A Video About the Ocean

I was watching a Cleo Abram video about ocean mapping, and it completely reframed how I think about something that has nothing to do with the ocean: hiring.

Here’s the problem she describes. We’ve mapped the surface of Mars in higher resolution than we’ve mapped our own ocean floor. For decades, the best maps we had of the deep ocean came from satellite-based measurements. Broad, global coverage, but terrible resolution. You could see the general shape of things, but the picture was blurry. Too blurry to actually navigate by.

The breakthrough wasn’t better satellites. It was narrower, more focused instruments. Sonar vessels that could map small areas in extraordinary detail. The trade-off was obvious: you couldn’t cover everything. You had to choose where to point the beam. But the result was that a small, well-chosen area mapped in high resolution told you more than the entire ocean mapped in low resolution.

That trade-off, between breadth and resolution, is the exact same problem we face every time we interview someone.


The Resolution Paradox

Here’s the paradox at the heart of every interview process: the more you try to cover, the less you actually learn.

An interview loop that tries to assess system design, coding ability, leadership, communication, collaboration, domain knowledge, culture fit, and strategic thinking across four 45-minute conversations isn’t being thorough. It’s being shallow everywhere. Each of those dimensions gets a few minutes of surface-level probing, enough to produce a score on a rubric but not enough to produce real signal. You end up with a map that looks complete. It has numbers in every box. But it’s the satellite view: blurry enough that you can’t actually tell a ridge from a trench.

And this is where it gets dangerous, because a complete scorecard feels like strong data. Every dimension has a score, every interviewer submitted feedback, the rubric is filled out. It looks like rigor. But if each of those scores is backed by five minutes of surface-level conversation, what you actually have is a collection of first impressions with a professional format. I once had four yeses on a panel where none of the scorecards had strong evidence or real conviction. Surface-level project descriptions, no depth beyond the high-level. Four yeses, zero confidence.

The data backs this up. Interviewing is a noisy prediction problem: each conversation is a low-fidelity measurement, and piling on more of them yields rapidly diminishing returns. Google’s People Analytics team found that four well-structured interviews predicted hiring outcomes with 86% confidence; each additional interview beyond that added less than 1%. More isn’t better. Better-aimed is better.
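You can see why the returns fall off so fast with a back-of-the-envelope model. If you treat each interview as an independent, equally noisy measurement of the same underlying quality (a simplifying assumption, not how Google ran its study), the error of the averaged read shrinks with the square root of the number of interviews, so each extra conversation buys less precision than the one before it:

```python
import math

# Assume each interview is an independent noisy measurement with the
# same standard deviation sigma. Averaging n of them gives a standard
# error of sigma / sqrt(n).
sigma = 1.0
errors = [sigma / math.sqrt(n) for n in range(1, 9)]

# Marginal improvement from each additional interview.
marginal = [errors[i - 1] - errors[i] for i in range(1, len(errors))]

for n, gain in enumerate(marginal, start=2):
    print(f"interview {n}: error drops by {gain:.3f}")
```

Going from one interview to two cuts the error by almost 30% of sigma; going from seven to eight cuts it by about 2%. The model is crude, but the shape of the curve is the point.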

The alternative is uncomfortable but more honest. You have to choose which parts of the ocean floor matter most for this role, on this team, at this moment, and accept that other areas will stay in low resolution.


Start With What You’re Actually Looking For

So if coverage is the problem, where do you actually start?

Most interview loops are assembled bottom-up. Someone pulls out their favorite system design prompt. Another interviewer dusts off the behavioral question they’ve been asking for three years. The loop gets stitched together from existing parts, and the implicit assumption is that if you cover enough topics, you’ll learn enough to decide. I once reviewed a loop where three interviews all independently assessed how candidates managed their projects, communicated technical impact, and mentored other engineers. Three hours of conversation, all covering the same ground at surface level, none of them going deep. A full scorecard, no real signal anywhere.

Instead, before you write a single question, ask: what are the three or four things that, if we had strong signal on them, would give us confidence in a decision? Not a wish list of everything you’d love to know. The specific, prioritized signals that matter most for this role.

A senior engineer joining a team that’s rewriting a core service needs different signal than one joining a steady-state team. A tech lead role might demand high resolution on how someone handles ambiguity and influences without authority, and only surface-level confirmation that they can write decent code. A management role might have different priorities entirely.

When you define your signals upfront, the rest of the process follows: interviewers get assigned to specific areas, questions probe the dimensions you actually care about, and everyone walks into the debrief knowing what they were supposed to be mapping.


Choosing Where to Point the Beam

Once you’ve defined your signals, you have to design the process to match. The resolution trade-off doesn’t just apply at the loop level. It applies all the way down. A system design interview spans architecture, domain modeling, error handling, monitoring, scalability, and more. You can’t go deep on all of them. Maybe the candidate’s approach to error handling raises a question, so you spend ten minutes there while their monitoring answer stays at surface level. Making these trade-offs in real time requires three distinct skills:

The broad scan. Early conversations (or the opening of each interview) establish a baseline: where does this person’s experience cluster, what’s their general depth, how do they talk about their work? You’re not just getting oriented. You’re doing risk assessment. You’re building a low-resolution picture specifically to spot where the terrain looks odd. A candidate’s summary of how they led a migration might sound perfectly reasonable — but is that because they drove the technical decisions, or because they’re narrating someone else’s work? The broad scan tells you where to go deeper.

The deep dives. Some are predetermined. If technical leadership is your top signal, you planned for it. Others emerge from the broad scan. The candidate mentions a project that sounds like it went sideways, then pivots quickly. A claim on their resume doesn’t match the depth of their answers. A description of cross-team work sounds polished. Maybe too polished. These are the oddly shaped features on your map, and spending a few extra minutes at higher resolution can change your read entirely.

The pivots. The hardest skill, and the one that separates a good interviewer from someone running a script. Mid-conversation, something shifts — you came in planning to probe technical design, but the candidate’s answer opens a more interesting question about how they handle ambiguity. Do you follow that thread or stick to your plan? The interviewer who can redirect depth toward the most valuable signal as the conversation reveals it will produce better reads.

I’ve gotten this wrong myself: walked out of interviews wondering whether I got any real signal, because I plodded along with my planned questions, collecting low-resolution information across the board instead of committing to depth anywhere. These days, I might only get two or three pieces of signal in an interview, but I’ll feel confident in each one.


Designing for Signal

With your signals defined and your depth allocated, a few principles make the difference between a process that surfaces real signal and one that produces noise.

Challenge Your Signal

Two interviewers covering partially overlapping territory looks like inefficiency, but it’s calibration: two independent reads on the same dimension. It’s the same principle that makes multiple sonar passes valuable: each pass reduces measurement error. Different interviewers will pull different threads from the same story; one focuses on technical decisions, another on leadership dynamics, another on how the candidate handles the parts that didn’t go well.

This applies within a single interview too. When you form an early impression, the natural instinct is to spend the rest of the conversation confirming it. Fight that. A positive impression that survives a genuine search for counter-evidence is worth far more than one that was never tested. And a negative impression that dissolves under scrutiny saves you from passing on the right person.

Make the Conversation Flow

This sounds like a “nice to have” but it’s structural. People give you far better signal when they’re not anxious, defensive, or confused about what you’re asking. An interview where topics build on each other, where the interviewer is genuinely curious rather than checking boxes, gets the candidate to stop performing and start communicating. Ask open questions, then follow where the candidate takes you. The unexpected paths are where the real signal lives.

Without that intentionality, interviews drift. I’ve seen rounds try to assess “empathy” in isolation from “makes tough calls”, which will never surface a manager who leads with empathy but can’t speak to how that shapes their performance management. You can hear the cost in debriefs: “They scored high on empathy but I’m not sure they can manage underperformance.” “They talked about caring for their team but didn’t have examples of hard conversations” — when a more curious interviewer, one who followed a story about a struggling report and asked what happened next, might have found both signals in the same conversation.


The Debrief Is Where It All Lands

Everything upstream exists in service of this moment: the signal definitions, the depth allocation, the deliberate choices about where to point the beam.

“I liked them” and “something felt off” might be valid starting points, but they’re not decisions. A good debrief is structured synthesis: each interviewer presents what they observed (specific examples, not feelings) and the group constructs a composite picture from those inputs.

“They struggled with the system design question” is not useful. “When I asked them to design a notification system, they defaulted to a monolithic approach and didn’t consider failure modes until I prompted them. But once prompted, they course-corrected quickly and their reasoning showed strong first principles.” That’s a high-resolution data point the group can work with.

The fix is simple: ground every claim in observation, and be willing to say “I don’t have enough depth here to have a strong read.” A gap in your map is more useful than false confidence. In one debrief, an interviewer flagged that she hadn’t gotten signal on metrics-driven decision-making for an engineering manager candidate. Rather than treat that as a gap to worry about, we mapped back to what we actually needed: logical consistency, tough calls with data, structured problem-solving. That signal was scattered across other interviews. Ownership from the leadership round, goal decomposition from the process round. We composited it and made a confident hire, one of the best managers I’ve brought on.

What you’re after is reproducible decisions. If you ran the same candidate through your process again with different interviewers, would you reach a similar conclusion? That requires calibration. If “strong signal on system design” means something different to each person in the room, no amount of process will make the synthesis work. Have interviewers shadow each other. Compare notes on the same candidate. Debrief the debrief. You don’t want uniformity (different perspectives are the whole point) but you need enough shared understanding that the data you’re combining is actually commensurable.
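One lightweight way to check whether your interviewers are commensurable is to measure how often pairs of them land close together when they score the same candidates. This is a minimal sketch with made-up interviewer names, candidates, and a 1–4 rubric scale; it is not a prescribed process, just a way to make “calibration” concrete:

```python
from itertools import combinations

# Hypothetical data: interviewer -> {candidate: rubric score, 1-4}.
scores = {
    "alice": {"c1": 4, "c2": 2, "c3": 3},
    "bob":   {"c1": 4, "c2": 3, "c3": 3},
    "carol": {"c1": 2, "c2": 2, "c3": 3},
}

def agreement(a, b, tolerance=0):
    """Fraction of shared candidates where two interviewers' scores
    fall within `tolerance` points of each other."""
    shared = scores[a].keys() & scores[b].keys()
    close = sum(abs(scores[a][c] - scores[b][c]) <= tolerance for c in shared)
    return close / len(shared)

for a, b in combinations(scores, 2):
    print(f"{a} vs {b}: {agreement(a, b):.2f} exact, "
          f"{agreement(a, b, tolerance=1):.2f} within one point")
```

A pair that rarely lands within a point of each other on the same candidates is reading the rubric differently, and that’s the conversation to have when you debrief the debrief.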


Back to the Ship

There’s one more step most people skip: checking the map against reality.

When a hire works out, look back at what your process predicted. When one doesn’t, do the same. The feedback cycle is long, the data is messy, and clean attribution is rare. But even rough pattern-matching over time teaches you which signals were actually predictive and which were noise that felt like signal.

You will never have complete information about a candidate. Be honest about that uncertainty, build structures that make the best use of limited data, and close the loop so that each decision sharpens the next one. Repeatability, not perfection.