I failed my Anthropic interview and came to tell you all about it so you don't have to
Anthropic – the folks behind Claude, an alternative to OpenAI’s ChatGPT, backed by some seriously massive investment from Amazon. Rumor has it they’ve got a model even more powerful than OpenAI’s o3, but they’re so obsessed with safety (which is a great thing!) that they still keep it private.
What I applied for: Research Fellowship.
And here’s something unusual: they asked for reference contacts—people you’ve worked with who can vouch for you. Yup, that’s not just an academic thing anymore!
The interview was split into several stages:
- Online Coding (1.5 hours).
You had to hack together a class that exposed a public API exactly matching the spec. The challenge had four levels: a new level unlocked once you passed all the tests for the current one, and each new level typically forced you to refactor your code.
I said “hack together” for a reason. To pull it off in 90 minutes, you have to code at breakneck speed and completely forget about Big O. Forget about heaps, binary searches, and the like.
I barely got everything passing just two minutes before the timer went off.
There was no human interviewer—only an impersonal, automated system.
How to prepare?
Who the heck knows, honestly. Just code. You don’t need fancy algorithms—in fact, they might even trip you up.
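To give a feel for the format: here is a hypothetical sketch of what a staged task like this might look like. The actual Anthropic spec is unknown to me; the in-memory key-value store below is purely an invented illustration of how a later level can force you back into code you wrote for an earlier one.

```python
# Hypothetical example of a multi-level assessment task (NOT the real spec):
# an in-memory key-value store whose public API grows level by level.

class InMemoryDB:
    def __init__(self):
        self._data = {}

    # Level 1: basic set/get.
    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        # Returns None for missing keys.
        return self._data.get(key)

    # Level 2: deletion and prefix scan. A level like this often makes you
    # reconsider choices from level 1 (storage layout, return conventions)
    # instead of just bolting methods on.
    def delete(self, key):
        # True if the key existed and was removed.
        return self._data.pop(key, None) is not None

    def scan_prefix(self, prefix):
        # Sorted list of keys starting with the given prefix.
        return sorted(k for k in self._data if k.startswith(prefix))
```

The point of the format is exactly this pressure: simple data structures, no clever algorithms, but a spec that keeps shifting under your feet while the tests keep you honest.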
- Face-to-Face Coding with a Human (1 hour).
This round featured a LeetCode medium-level question with a twist. It felt easier than the typical FAANG coding session, where you usually get two different medium-level LeetCode problems.
Preparation? Just the usual grind on LeetCode. Nothing new under the sun.
Around the same time, they also reached out to the people you listed as references and asked them for some written feedback.
- Virtual Onsite (Three Parts).
A marathon stage divided into three segments: a research brainstorm (15 minutes), a take-home assignment with review (5 hours), and a culture fit session (1 hour). You could split it over several days if you felt like it.
At the same time, your references would get an email saying that Anthropic wanted to hop on a call with them! I’d never seen anything like that before—but in this era of all-out scams, who knows what to expect.
3.1. Research Brainstorm (15 minutes).
I hopped on a call with the head of alignment. After a quick intro, he posed two open-ended questions designed to elicit ideas. They didn’t require deep insider knowledge of LLMs, just some experience observing them as black boxes and a dash of creativity.
Alas, my creative juices were nowhere to be found that day. I got stuck on the first question, sitting in silence for about three minutes while mentally sifting through my modest math knowledge. Instead of trying to impress him with my shaky grasp of linear algebra, I should have just been throwing out ideas! Easier said than done—especially with the clock ticking. A flurry of ideas did come… but only after I’d taken a stroll around the pond. I needed them right then and there.
By the end of the call the interviewer looked bored, and it was clear the situation was deteriorating. About an hour later, my references got an automated message saying there was no need for further conversation, and my access to the candidate portal vanished. The next day, the recruiter wished me luck and canceled the rest of the virtual onsite.