25 Years of the Software Engineering Interview
How has interviewing changed in the last twenty-five years, and what can these changes tell us about the next generation of software products? And if that next generation is the market trend, what can we say about developer education going forward?
Like most CS graduates, when I was finishing my degree, I was also preparing for interviews to find the software company I wanted to work for and the products I wanted to work on. I was reading books on interviewing, working through the kind of sample problems you'd now find on LeetCode, and above all practicing my code. I graduated in 2005 - before Facebook was available to everyone, when Gmail was in invite-only beta, and Amazon's Simple Queue Service (the first AWS product) had only been available for six months. I joined Microsoft in 2006 after a year with Palmchip, and seven years later moved on to Facebook. My interviews at each stage, and the interviews I help conduct now as the CEO of Educative, reflect different trends in software development and in what we ask of engineers.
From the 1970s through the early 1990s, computing was a business practice first. People with computers at home were mostly hobbyists; prior to the mass adoption of the graphical user interface, those with personal computers were likely to be coders of some sort themselves - the command line interface demanded it. Microsoft was building its consumer-friendly, GUI-based operating system on top of DOS: first Windows 3.1, and then, to truly go mainstream, Windows 95. Concurrent with Windows 9x development - still based on DOS - Microsoft was developing Windows NT, a completely new kernel and file system for workstations and servers that would eventually underpin all of Microsoft's OS development. Linus Torvalds published the first version of the Linux kernel in 1991, modeling it on Unix and building on the efforts of Richard Stallman's GNU project.
With multiple competing operating systems (*nix-like, DOS-compatible, NT, Mac OS) and competing instruction sets (x86 hadn't yet taken over the world), these early interviews focused on low-level programming and kernel development, and relied heavily on lower-level languages like Assembly and C. The need for programmers was growing steadily in what, just ten years earlier, had been a very niche field. Jon Bentley's Programming Pearls taught an entire generation how to think like a programmer, and did so in the popular languages of the day - C and C++.
Dot-Coms and Brainteasers
At the same time, the browser wars started to take shape as Netscape distributed a consumer web browser and America Online entered with its gateway model. As consumer demand for computers rose and the web became the primary platform for consumption, companies had a gold rush on their hands. The dot-com bubble gave birth to future giants, along with dozens of companies that enjoyed smaller successes well into the following decade - and just as many busts. All of them needed talent, and Microsoft set the standard for finding it. Compared to hardware and services companies like IBM or Apple, Microsoft was unique, building only software for others' hardware - and even software for others' operating platforms (Microsoft Office was ported to Mac OS in 1997, along with Internet Explorer).
For the next 5-10 years, the massive rush for developer talent followed Microsoft's model. Particular focus was given to string processing, still heavily invested in C and C++ and less so in Assembly. Of course, as the web took hold, many developers found paths in HTML, CSS, JavaScript, and Java, the dominant languages of the web. But beyond the coding stage of the interview, companies focused on brain teasers. Drawing on Microsoft's culture of the time, books like How Would You Move Mount Fuji? captured the belief that creative thinking and generic problem-solving map to coding success. (Full disclosure: when I was preparing for interviews in 2004 and 2005, brain teasers gave me the most stress - I was never very good at them.)
But here's the thing: a reasonably intelligent person can figure out why manhole covers are round; someone who is familiar with urban living and the size of American cities can reasonably estimate the number of window washers in New York City; and anyone who passed high school geometry can approximate the volume of a 747 and the number of tennis balls that could fit onboard. These questions reward general reasoning and familiarity with real-world scenarios, but they don't reflect real programming problems.
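To see how little programming is actually involved, take the tennis-ball question: it reduces to a few multiplications once you pick some dimensions. Here's a minimal sketch - every number in it is a rough assumption for illustration, not a real spec:

```cpp
#include <cmath>
#include <iostream>

// Back-of-envelope estimate: tennis balls in a 747.
// All dimensions are rough assumptions, not Boeing specifications.
int main() {
    const double pi = 3.14159265358979;

    // Approximate the fuselage as a cylinder: ~70 m long, ~3 m interior radius.
    double cabin_volume = pi * 3.0 * 3.0 * 70.0;  // ~1,980 m^3

    // A tennis ball is ~6.7 cm in diameter, so ~0.0335 m radius.
    double ball_volume = (4.0 / 3.0) * pi * std::pow(0.0335, 3);

    // Spheres don't pack perfectly; random packing fills ~64% of space.
    double packing_efficiency = 0.64;

    double estimate = cabin_volume * packing_efficiency / ball_volume;
    std::cout << "Roughly " << estimate << " tennis balls\n";  // ~8 million
}
```

The geometry is the whole exercise; nothing about it tests whether you can design or ship software.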
When the dot-com bubble burst, there were survivors who would go on to become monoliths in their own right - Google and Amazon - and survivors who would limp along as shells of their former glory - Yahoo! and AOL, for instance. What did not go away was the awareness that the Internet was the next great leap in human communication and value creation. In the mid-2000s, the question was just what the world wide web could be if it weren't merely a digital mall for all the dot-com storefronts of the world. Enter what commentators called "Web 2.0" and the rise of social media.
Buffer Overruns
With the Internet entering mainstream use, the connectivity of global networks laid the foundation for emerging large-scale systems (more on this later) while exposing millions of home users to malicious software that could spread through a simple email. Michael Howard and David LeBlanc addressed the emergent security issues in their book Writing Secure Code. Microsoft had released Windows XP on the NT kernel, and Apple had switched from Mac OS to OS X, a fork of BSD Unix - leaving essentially two mainstream OS families: *nix-like and Windows NT. Though the iPhone was still a few years away, consumer-facing development was moving more and more toward the web and cross-platform operation, exemplified by the explosion of Java, Adobe Flash applications, and Microsoft's .NET framework.
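The emblematic bug of the era - the one this section's title refers to - was the unchecked copy into a fixed-size buffer. A minimal sketch of the problem and two standard mitigations (the buffer size is arbitrary and the function names are mine):

```cpp
#include <cstring>
#include <string>

// The classic vulnerability: copying externally controlled input into a
// fixed-size stack buffer with no length check.
void unsafe(const char* input) {
    char buffer[16];
    std::strcpy(buffer, input);  // writes past buffer if input > 15 chars
}

// Two standard fixes: bound the copy explicitly, or let a growable
// string type manage its own memory.
void safer(const char* input) {
    char buffer[16];
    std::strncpy(buffer, input, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';  // strncpy may not null-terminate

    std::string s(input);  // no fixed-size buffer to overrun at all
}
```

An overrun like the first function's is exactly how injected executable code got a foothold on home machines, which is why interviews of the period drilled on it.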
I entered the workforce as part of this wave of hiring. Joel Spolsky's Guerrilla Guide to Interviewing blog post had taught a new generation of technical leads to focus on hiring smart people who get things done; he'd later publish a book focusing on these hires. Google had killed brain teasers in favor of engineering practice - actually solving problems with code - an approach, of course, still in use today. Candidates were countering with Programming Interviews Exposed, which walked developers through comparing strings, handling buffer overruns and injected executable code, and working through real-world problems like converting between decimal and hex. You can try problems like these with short lists of interview questions like the Blind 75. But this was a transitional time more than anything: a lull between the dot-com bubble's burst and the explosion of scalable services.
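To give a flavor of those exercises, here's one possible take on the decimal-to-hex problem - written by hand rather than with printf's %x, since doing the conversion yourself is the point. This is my sketch, not the book's solution:

```cpp
#include <string>

// Convert a non-negative integer to its hex representation by repeatedly
// peeling off the lowest four bits (the value mod 16).
std::string to_hex(unsigned int n) {
    if (n == 0) return "0";
    const char digits[] = "0123456789abcdef";
    std::string result;
    while (n > 0) {
        result.insert(result.begin(), digits[n % 16]);  // lowest nibble first
        n /= 16;
    }
    return result;
}

// Usage: to_hex(255) == "ff", to_hex(4096) == "1000"
```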
When I was entering the field, distributed systems existed in various organizations, but there wasn't a body of work establishing best practices. Someone at Amazon was working on an Amazon solution; someone at Google was working on a Google solution; Facebook was scaling rapidly and figuring out how to handle the addition of millions of users (effectively network nodes) on an almost daily basis. Scaling issues were handled organically, not algorithmically, and "system design" as an engineering discipline had only just begun entering software circles.
Move Fast and Break Things
That all had to change during the mid-2000s. Amazon launched SQS in 2004, and AWS formally launched with S3 in 2006. Google had started publishing papers in 2004 - first introducing MapReduce, now a standard programming model in data engineering, then BigTable and scalable data management - to establish practices for large systems. Microsoft, who had predicted the web as the next platform in the early 90s, scrambled to catch up with Azure, originally Project Red Dog, an effort that traces back to Microsoft's 2005 purchase of Groove Networks. I worked on the distributed data solution when Azure was still Red Dog and saw the product from its 2008 launch through the addition of virtual machines, locally redundant storage, and SQL Reporting in 2012.
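The idea behind MapReduce is compact enough to sketch: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase folds each group down to a result. Here's a toy, single-process word count - the real model distributes each phase across many machines, and the input documents here are purely illustrative:

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Toy, single-process MapReduce word count. In the real model, map()
// runs on many machines, the shuffle routes pairs by key, and reduce()
// runs per key on yet more machines.
int main() {
    std::vector<std::string> documents = {"the quick brown fox",
                                          "the lazy dog"};

    // Map phase: emit a (word, 1) pair for every word in every document.
    std::vector<std::pair<std::string, int>> pairs;
    for (const auto& doc : documents) {
        std::istringstream words(doc);
        std::string word;
        while (words >> word) pairs.emplace_back(word, 1);
    }

    // Shuffle + reduce phase: group pairs by key and sum each group.
    std::map<std::string, int> counts;
    for (const auto& [word, one] : pairs) counts[word] += one;

    for (const auto& [word, count] : counts)
        std::cout << word << ": " << count << "\n";  // e.g. "the: 2"
}
```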
As my own career scaled, so did the demands of the coding interview: "toy" problems started entering the mix. Questions would be carved out of actual issues faced in building solutions, scoped so a candidate could solve them in an hour or two. Rapid growth and the advent of mobile computing, where "big data" lived in the cloud - along with cloud-side processing and shared uses for that data - led Facebook to add a system design interview to its cycle in 2012. As the various platforms faced similar problems scaling, these toy problems became almost standardized - to the point that Cracking the Coding Interview, first published in 2008, helped thousands of software engineers land jobs.
This has led to our present age of software engineering interviews. While Cracking the Coding Interview has been updated multiple times - the sixth edition was published in 2015, only seven years after the original 2008 release - software companies and interviewers are constantly updating their questions to weed out rote answers and easy prep work.
Where Are We Going?
Like the software field itself, interviewing is a living practice, constantly evolving in response to market and product needs. With more complex systems comes a significant focus on code efficiency: complexity analysis is a key way to keep operational costs down, especially for companies that buy their cloud processing from providers like AWS or Azure.
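A typical example of what that focus looks like in an interview: checking a list for duplicates. The quadratic version compares every pair, while trading some memory for a hash set drops the runtime to linear - exactly the kind of improvement that shows up on a cloud bill. The function names here are mine:

```cpp
#include <unordered_set>
#include <vector>

// O(n^2) time: compare every pair of elements.
bool has_duplicate_quadratic(const std::vector<int>& v) {
    for (size_t i = 0; i < v.size(); ++i)
        for (size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n) time, O(n) extra space: remember what we've already seen.
bool has_duplicate_linear(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second) return true;  // insert failed: duplicate
    return false;
}
```

At interview scale the difference is academic; across billions of requests, it's the difference between one server and a fleet.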
If you head over to Quora or Stack Overflow, you'll find large online communities built around the interview cycles of major tech companies. It's also a (rather large) niche business to offer interview prep and career assistance to developers: LeetCode has been cataloging coding interview exercises from real interviews for years; HackerRank lets software engineers show their skills compared to other coders; and, yes, Educative has interview help so that you can take the languages and frameworks you learn from our courses and put them into action.
But the reality is that the LeetCode library is in an arms race with tech interviewers. The hiring organizations at FAANG companies and Microsoft know these exercises are out there, and they create new questions all the time to test developer gumption. These changes aren't only to stay one step ahead of the interview prep curve; they're also because the demands on software engineers shift as the market evolves and product architectures change with the times and customer demand.
It is important today for candidates to go back to the basics of software engineering and interviewing: understand the underlying data structures and algorithms, build general-purpose skills that can be applied to unique problems, and understand how components relate to each other in object-oriented programming and multi-entity systems.
The Right Person at the Right Company
When you're a small company like Educative, company culture happens organically; our US office is on only one floor, and there are 40 of us. But when you're running thousands of engineers across tens of offices, the way to maintain corporate culture is to hire engineers who have already bought into what you're doing.
Amazon evaluates candidates against its leadership principles throughout the interview cycle. Facebook hires for talent, but every new hire goes through its engineering "bootcamp" to learn processes and products and find a strong fit within various teams. Microsoft had cultural questions in its interview process when I first joined, and Google has been focusing on culture fit for the last several years.
CodingInterview.com has a breakdown of what to expect from twenty different firms, but what all of them have in common today is a need for scalable systems, competent code, problem-solving, and cultural fit. These components all rely on each other. Competent code produces fewer bugs; in a local product, a few bugs might be fine, but when a product runs across thousands (or, in the case of Big Tech firms, billions) of instances, both locally and in the cloud, those few bugs add up to a lot. So scalable systems rely on quality code and problem-solving, which in turn require teams that work well together - if people are part of your company's system, you need to prevent conflict and bugs within your teams.