There is perhaps no aspect of the field of software engineering that is more disliked than algorithmic whiteboard interviews. You know what I'm talking about-- sometimes they're called "algorithms and data structures", other times "leetcode-style" questions.
People dislike, and I mean really dislike, these interviews. Here's a sampling of quotes I randomly pulled from Hacker News. First, there are concerns about their effectiveness on a per-interview basis.
Most interviewers don't ask enough technical questions to have any idea what a candidate knows or doesn't know. If their one or two questions happen to be something the candidate knows well, they'll call them a genius. If they happen to not know, they'll label them an idiot.
We've already covered why companies use such interview questions: they may be hit-or-miss for any individual candidate, but they're incredibly useful when hiring at scale. Other comments lament the interviewers themselves:
I find most interviewers unprepared to evaluate the person for the role, and instead exercise their own biases, stroke their egos, etc. It's largely a voodoo practice that we'll look back and laugh at as a civilization at some point.
This next one is what we'll be discussing. I've heard over and over again that studying algorithms and data structures isn't really applicable in day-to-day life.
There are companies that will pay near FANG levels (maybe 10-15% haircut) without subjecting yourself to onerous completion of hundreds of LeetCode problems. There are other options, and we should be aware of them. You'll still need to prove you can code, but the questions we ask have practical application to our codebases: we use recursion and trees, for example, so we ask questions about those.
The above commenter seems to be arguing that the top companies focus more on algorithms and data structures that don't appear frequently in real life. He cites recursion and trees as examples of "more applicable" concepts in day-to-day software engineering. This is true to an extent.
No, I have never had to implement a linked list at work. (For some reason, people always point to this as the classic example.)
No, I've never encountered a red-black tree in my career thus far, and I hope I never will.
Yes, I've tended to lean on some pre-built abstraction (database, library, framework, or other) to get my work done, without implementing fundamental algorithms.
However, has studying for them, and knowing these problem-solving patterns, made me a better engineer? I would argue a resounding yes. Here are some ways:
With these kinds of interviews, you're often nervous and coding in front of a stranger. When this happens, it's really easy to lose track of where you are in a problem.
The thing that helped the most with keeping a problem's "state" in my head, whether on game day or while practicing at home, was improved variable and function naming. There are two important constraints in this unique situation: names have to be quick to write, and they have to make each variable's purpose unmistakable. The second constraint matters because you'll often have multiple pointers, nodes, or counters to keep track of.
Names like `pointer1` and `pointer2` don't really work when they do completely different things, or are conceptually moving in different directions. You eventually realize `startingPtr` and `endingPtr` help you remember their purpose, and eventually graduate to `start` and `end`-- or `parent` and `child`.
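To make this concrete, here's a minimal sketch (the palindrome check is just an illustrative example, not any particular interview question): the same two-pointer walk reads very differently once the names carry their purpose.

```typescript
// Illustrative two-pointer sketch: `start` and `end` are far easier to keep
// track of mid-interview than `pointer1` and `pointer2`.
function isPalindrome(word: string): boolean {
  let start = 0;
  let end = word.length - 1;
  while (start < end) {
    if (word[start] !== word[end]) return false;
    start++; // start walks forward from the beginning
    end--;   // end walks backward from the end
  }
  return true;
}
```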
The pragmatic David Heinemeier Hansson once gave a keynote about Conceptual Compression. It's basically the idea of embracing abstractions-- where increasingly, a developer doesn't need to know the often messy or complex details of some particular technology to be effective with it. Studying algorithms and data structures for interviews actually taught me to see the light on this.
New developers tend to think that solving algorithmic challenges is all about holding all the details in your head. In the beginning, when you haven't learned any patterns, that's the whole point. You haven't yet learned the tools of the trade.
These patterns and "tricks" are abstractions that, once understood, let you concentrate on solving the problem. When you know how `sliding window`, `BFS` and `DFS`, or `backtracking` work, you suddenly have a filter for evaluating possible solutions. You're a level up, that is-- your focus can now be on how to use `BFS` in this problem, rather than on figuring out the exact steps to get to the shortest path.
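For instance, here's a rough sketch of what "using BFS" looks like once the pattern is internalized (the graph shape and names here are hypothetical). The queue-and-visited-set mechanics are boilerplate you no longer have to re-derive, so your attention goes to modeling the problem as a graph.

```typescript
// A generic BFS shortest-path sketch over an unweighted graph.
type Graph = Map<string, string[]>; // node -> neighbors

function shortestPath(graph: Graph, start: string, goal: string): number {
  const queue: Array<[string, number]> = [[start, 0]]; // [node, distance]
  const visited = new Set<string>([start]);

  while (queue.length > 0) {
    const [node, distance] = queue.shift()!;
    if (node === goal) return distance;
    for (const neighbor of graph.get(node) ?? []) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push([neighbor, distance + 1]);
      }
    }
  }
  return -1; // goal unreachable
}
```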
It is true that, without knowledge of these patterns, it becomes more difficult to solve these kinds of challenges. However, the majority of companies that employ this style of interview are upfront about the expectation that you study for them. People still dismiss the whole exercise, because outside of interviews we supposedly never use these abstractions-- right?
Nope, these patterns actually come up all the time! Don't believe me? Here are a few examples.
I was working with a massive dataset of payments for a company spanning decades. My team had written an `ETL` process that would pull records in chunks of 100,000 rows and transform them based on attributes of the payment. When the `ETL` was shipped to production, I noticed the process was taking days instead of the hours we had expected.

Upon inspection, one of the methods in the `ETL` was doing some de-duping, by performing a routine roughly like the following:
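(A simplified, hypothetical reconstruction-- the real routine operated on actual payment records, but the shape was the same: a linear lookup inside a loop, mutating the chunk as it went.)

```typescript
// Hypothetical sketch of the original de-duping pass. For each row it scans
// the chunk for an earlier row with the same id and splices duplicates out:
// an O(n) lookup inside an O(n) loop, over chunks of 100,000 rows.
interface Payment {
  id: string;
  amountCents: number;
}

function dedupeInPlace(payments: Payment[]): Payment[] {
  for (let i = payments.length - 1; i >= 0; i--) {
    const firstIndex = payments.findIndex((p) => p.id === payments[i].id);
    if (firstIndex !== i) {
      payments.splice(i, 1); // duplicate of an earlier row; mutates the chunk
    }
  }
  return payments;
}
```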
Besides the issue of mutating the original array, the lookup time was just killing our efficiency. The great thing about tech is that most engineering cultures are blameless, so as a team we quickly changed it to the equivalent of this:
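(Again a sketch, using the same hypothetical `Payment` shape.)

```typescript
// The replacement: track seen ids in a Set for O(1) membership checks, and
// build a fresh array instead of mutating the original chunk.
function dedupe(payments: Payment[]): Payment[] {
  const seenIds = new Set<string>();
  const deduped: Payment[] = [];
  for (const payment of payments) {
    if (!seenIds.has(payment.id)) {
      seenIds.add(payment.id);
      deduped.push(payment);
    }
  }
  return deduped;
}
```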
There was a literal 10x speed increase from this change.
Some readers might laugh and say that code like that would never make its way to production. Those people have never been under pressure to ship a feature while working with a team of varying experience levels and managing PRs that regularly span hundreds or thousands of lines.
Knowing to use `set`s is certainly something that could be picked up just by reading code and gaining more experience. But a concerted effort to learn more of these "tricks" can save you when you're in a pickle in production.
I've certainly used `hash map`s professionally quite a bit, even on this site. On AlgoDaily, we have the concept of `Challenge`s. Each time a user finishes a problem, we add a `Completion` record that references that `Challenge`'s ID. Note also that `Challenge`s are tagged with multiple `Category`s through a `Taggings` join table.
When I was building out the dashboard, I wanted to display how many `Completion`s each user had per `Category`. This was surprisingly difficult to make efficient with the built-in ORM. Even with eager loading, `ActiveRecord` would make several `n+1` calls to properly group each `Completion` by `Challenge`, `Category`, and `user`.
The fastest solution ended up being outside of any framework. I simply made a single database call to get all `Completion`s for a given user by joining them to `Challenge`s. Then, server-side, I used a good old `hash map` to group the `Completion`s by `Category` and used it to look up counts when rendering the UI.
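Roughly, the in-memory grouping looked like the sketch below (the real code lived in a Rails app; the row shape and names here are hypothetical):

```typescript
// One row per Completion from the single joined query, carrying the
// Challenge's Category name.
interface CompletionRow {
  completionId: number;
  categoryName: string;
}

// Group in memory with a hash map instead of issuing a query per Category.
function countCompletionsByCategory(rows: CompletionRow[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const row of rows) {
    counts.set(row.categoryName, (counts.get(row.categoryName) ?? 0) + 1);
  }
  return counts;
}

// Rendering the dashboard is then just O(1) lookups per Category:
// counts.get("Arrays") ?? 0
```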
In this case, knowing that multiple network calls would be significantly slower than performing in-memory operations reduced the load time of the dashboard by over 70%.
People always say that 90% of software engineering nowadays is just data-in, data-out-- in other words, `CRUD` apps. That may be where things start, but what happens when you want to actually do something with the data and analyze it? Or debug it?
Modeling things is one aspect, but operating on what you've modeled is usually the next step. As an example, a friend of mine was recently working on a directory of company employees. Nothing fancy, most companies have something like this.
The difference with this one was that it would just display an employee profile and let you click through to their manager's page. Then, from the manager's profile, you could navigate to their manager-- and on, and on, up the chain. Nearly every company has an org chart/directory like that.
Of course, for those who've studied data structures, this is clearly a `tree`, or `graph`, in the making. And it was implemented as such: each employee record in the database had a `manager_id` column that pointed to another employee.
This "shape" was incredibly helpful when doing renderings/visualizations of org charts, and when shaping the payload to populate the HTML (which itself is a tree!). Knowledge of these data structures was especially handy when the team needed to audit the departments, their headcounts, and the reporting structure.
You may read these examples and assume that these are all things that come with experience. Yet that is precisely my point: those who've heavily studied Big O notation, Data Structures, Algorithms, and Systems Design will always have efficiency in the back of their minds.
These are fundamentals that have been painstakingly uncovered over decades, and they're incredibly useful even in day-to-day work!