What AI Means To The Developer Interview

In light of tools like ChatGPT and GitHub's "Copilot," we need to rethink the way we conduct software developer interviews. These tools are about to fundamentally reshape the way that we approach software development, and while there are a lot of problems with technical interviews across the industry, those problems are about to become considerably more profound.

Let's start at the beginning which is, as the song says, a very good place to start. Technical interviews are already terrible, though an understandable and perhaps necessary evil. Software developers make a lot of money; their spin-up time to full productivity is often measured in months; and no one wants to shell out $30,000 plus recruitment fees only to find out that a developer can't actually deliver. That's why we conduct these interviews... even if they're not very effective.

A Brief History of Tech Interviews

The real problem is that being a successful software developer involves a completely different skill set than being good at software developer interviews. This is an old problem.

Back in the 1990s or early oughts we'd give developers some tricky puzzle to work out on a whiteboard. "Reverse a string in place" was always a fun one (someone once asked me to do that in Java, which has immutable strings). Then, for a while in the late oughts and early 2010s, take-home tests were in vogue. We thought nothing of handing applicants a 6+ hour homework assignment, and some companies even farmed out work on their production code base as interview questions. These days a lot of companies prefer services like HackerRank, which can facilitate a paired coding session.

But all of these systems have problems. Few real-world problems really call for the kind of on-your-feet, hyper-efficient design that whiteboard tests emphasize. Take-home tests try to create realistic scope but still dramatically over-simplify. And, of course, HackerRank-style tests still boil down to contrived, tightly time-constrained development which rewards bad practices and rigid design.

To make matters worse, many of these processes are a Diversity and Inclusion nightmare - biasing the interview process against caregivers and other folks who can't find time after hours to complete a take-home assignment or train on the kind of short-form questions that replace them.

What we want -- what we've wanted all these years, really -- is a means to determine which applicants have the technical chops to produce good work and which are going to require more training than they're worth.

And AI is about to flip that on its head.

The AI Development Experience

AI Generated Images of "Migrating Salmon" -- Magical and Absurd

If you're in engineering leadership you probably remember a lot of those interview trends but find yourself on the other side of the hiring desk more often than not. You probably also find that you write a good bit less code than you used to.

First, let me encourage you to pick a project and really take some of these tools for a spin. ChatGPT is withering under load but, as I write this, it's free to use. GitHub Copilot has a free trial and plugs into many IDEs. Words can't do justice to the experience of playing with these systems; there's an element of magic about them that has to be felt to be understood.

My current project is in Python and makes use of Pandas. Prior to taking my current job I had no real experience with either, so most of the coding I've been doing has leaned back on a quarter-century of object-oriented development experience. That means I spend a lot of time kind of knowing what I want to do but not the language idioms, syntax, or built-in structures I should leverage to do it.

Enter the AI toolkits.

For small, well-defined problems, they're simply amazing. I can use natural language to express the kind of data transformations I want to implement in Pandas and ChatGPT will spit out the syntax. I can read that syntax, pick out where the AI went wrong, and ask for corrections. Since I have working syntax as a starting point, all of the Google-based guesswork is gone. It turns hours of scouring the internet looking for an appropriately shaped example into minutes or even seconds.
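To make that concrete, here's a minimal sketch of the kind of Pandas transformation an AI assistant handles well from a one-sentence prompt like "total revenue per region, sorted descending." The data and column names here are my own illustration, not taken from any real project.

```python
import pandas as pd

# Hypothetical sales data; columns and values are illustrative only.
df = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "units": [10, 15, 7, 12],
    "price": [2.0, 2.0, 3.5, 3.5],
})

# The shape of answer a prompt like "total revenue per region,
# sorted descending" tends to produce:
revenue = (
    df.assign(revenue=df["units"] * df["price"])  # per-row revenue
      .groupby("region", as_index=False)["revenue"]
      .sum()                                      # total per region
      .sort_values("revenue", ascending=False)
)
print(revenue)
```

The value isn't that this is hard to write; it's that getting the exact `assign`/`groupby`/`sort_values` chain right normally takes a newcomer a round of searching, and the AI hands you a working starting point to verify and correct.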

When it comes to larger projects, however, AI tools struggle. They make mistakes, misconstrue ideas, and -- unlike a human developer -- lack any sense of being overwhelmed or unsure of themselves. Use AI to write a small function and you'll probably be fine; use it to write a small application and you're going to be in a heap of trouble.

Recruiting for AI

What you don't want to happen...

But while AI may not be ready to generate entire software products for us -- and may never be ready for that -- it seems safe to say that it's going to be a part of our development toolkit going forward. The efficiency gains are just too great. All other things being equal, developers who don't make strategic use of AI are going to dramatically under-perform their AI-using peers.

Making judicious use of AI is therefore about to be its own development skill. We're still working out what that skill looks like. It certainly means that proficiency in reading and understanding code is going to become more important, but there will be other changes as well. For example, AI benefits from reduced scope. The more succinctly a prompt can be phrased the more likely the AI-generated solution is to be correct. This might mean that we want to place a renewed emphasis on developers who understand software design principles like SOLID, since small, single purpose classes with inverted dependencies are much easier to clearly and accurately describe to an AI.
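As a sketch of what that SOLID-friendly shape looks like in practice, here are two small, single-purpose classes with an inverted dependency. Every name here is an invented example; the point is only that each class can be described to an AI in a single sentence.

```python
from typing import Protocol

class RateSource(Protocol):
    """Abstraction the converter depends on (dependency inversion)."""
    def rate_for(self, currency: str) -> float: ...

class FixedRates:
    """One job: look up an exchange rate from a fixed table."""
    def __init__(self, rates: dict[str, float]) -> None:
        self._rates = rates

    def rate_for(self, currency: str) -> float:
        return self._rates[currency]

class PriceConverter:
    """One job: convert an amount using an injected rate source.

    That docstring is also a complete, prompt-sized spec -- exactly
    the kind of succinct description an AI assistant handles well.
    """
    def __init__(self, source: RateSource) -> None:
        self._source = source

    def convert(self, amount: float, currency: str) -> float:
        return amount * self._source.rate_for(currency)

converter = PriceConverter(FixedRates({"EUR": 0.9}))
print(converter.convert(100.0, "EUR"))
```

A god-class that mixes rate lookup, conversion, caching, and formatting would take a paragraph to describe accurately; each of these pieces takes a sentence, which is the scope where AI-generated code is most likely to be correct.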

It almost certainly means that we care a lot less about developers who can crank out the shortest possible solution to some canned sorting problem. If a reasonably clever AI can generate that function it seems far more effective to let it do so and recruit for skill sets that AIs struggle with.

Interviewing for AI

Given that we explicitly want developers who can leverage AI and given the huge edge that leveraging AI would provide in a standard, human-centric development test, we should eliminate short-form "prove you can code" style questions from our technical interview process entirely. We don't need humans to prove they can code; we need them to prove that they can undertake the software engineering work that surrounds the coding process.

The easiest way to test for this probably won't work. Tempting as it might be to just throw an AI engine into the mix of your next interview, they're just not reliable or available enough yet. Neither you nor your candidates will be amused if ChatGPT falls over mid-interview and derails the whole process. Further complicating matters is the fact that, save for some limited IDE integrations, there really isn't a streamlined way to give applicants access to an AI assistant without fundamentally compromising the security of the development test.

But that doesn't mean we can't start planning and interviewing for an AI assisted workforce. We can create an AI-centric interview process without actually doing AI assisted coding.

  • Challenge candidates to describe AI-friendly design principles. This showcases both a critical understanding of the process of software development and the shortcomings of AI-based tools.
  • Give them a messy class to refactor and especially to decompose. This shows their ability to understand code someone else has written and their ability to demarcate logical boundaries in code. Smaller classes with tight encapsulation will more effectively leverage AI assistance.
  • Talk about the Single Responsibility Principle, information hiding, and dependency inversion. Their answers here will show you how they'd handle a greenfield project with AI assistance.
  • Talk about testing and especially how unit testing goes hand in hand with AI assisted development. Watch out for a tendency to ask an AI to generate tests for an AI generated function. Those tests will create a tautological false sense of security.
  • Ask them to discuss any of their own experiences developing with AI.
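The refactoring exercise above can be sketched in miniature. The messy class and its decomposition below are invented for illustration; a real interview version would be larger, but the shape of a good answer is the same: separate the responsibilities so each piece can be named, tested, and described in one sentence.

```python
class ReportTool:
    """Messy: parsing, math, and formatting tangled in one method."""
    def run(self, raw: str) -> str:
        nums = [float(x) for x in raw.split(",")]
        avg = sum(nums) / len(nums)
        return f"average={avg:.2f}"

# Decomposed: three single-purpose functions with clear boundaries.

def parse_csv_numbers(raw: str) -> list[float]:
    """Parse a comma-separated string into floats."""
    return [float(x) for x in raw.split(",")]

def average(nums: list[float]) -> float:
    """Arithmetic mean of a non-empty list."""
    return sum(nums) / len(nums)

def format_average(avg: float) -> str:
    """Render the result for the report."""
    return f"average={avg:.2f}"

print(format_average(average(parse_csv_numbers("1,2,3,4"))))
```

Note that each extracted function is also independently unit-testable by a human, which guards against the tautology trap above: the tests check the candidate's stated intent, not whatever an AI happened to generate.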

But above all, have them read and discuss code. Ask them to balance the trade-offs between different implementations of the same basic process. Even if the code is correct, and even if an AI "understands" that efficiency often comes at the expense of readability, determining which is appropriate under what circumstances is -- for now -- a human decision.

Conclusions


The simple truth is that the software development industry is about to change. The question is no longer if it will happen but how much and how fast. There may well come a time when you can ask a chatbot to just "code a containerized social media application in which users can securely share short-lived video content" and it just DOES... but that time is not now.

For now, AI is a solidly competent junior developer with exceptional search skills but no experience with application architecture, no intuition about dangerous edge cases, and no knowledge of the pitfalls of future maintainability. But what it lacks in those critical areas it more than makes up for in speed and familiarity with the minutiae of languages and frameworks.

AI is not about to replace developers one to one -- but development teams that can effectively leverage it will get a lot more done with a lot less. The time to start training and hiring for that new reality is now.
