If AI "slop" scares you, your PR reviews are broken
One concern I have heard from AI naysayers is that AI slop will make code reviews nearly impossible. AI churns out so much code, documentation, etc. that it's just impossible for any reviewer to keep up with it... right?
Wrong!
If you have good PR review processes, then reviewing AI-assisted code shouldn't be any more onerous than reviewing any other code.
Here are some concerns I have heard, along with my responses. Note that I don't share all or even most of these concerns!
AI generates tons of code #
It's true that AI can sometimes generate tons of code—but your PR process shouldn't allow massive PRs in the first place! If I got a 100-file PR today, I wouldn't review it. I'd respectfully ask the author to break the PR down into smaller, atomic pieces of work that can be reviewed more carefully. I'd ask this whether or not AI helped generate the code.
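If you want to enforce this mechanically rather than by convention, here's a minimal sketch of a CI guard that fails when a PR's diff gets too big. The `origin/main` base branch and the thresholds are assumptions on my part—tune them to whatever your team actually considers reviewable.

```python
#!/usr/bin/env python3
"""Minimal sketch of a PR-size guard for CI.

Assumes the PR branch is checked out and origin/main is the base branch;
MAX_FILES and MAX_LINES are hypothetical thresholds, not recommendations.
"""
import subprocess
import sys

MAX_FILES = 20   # assumed threshold: files touched
MAX_LINES = 500  # assumed threshold: lines added + removed

def diff_stats(base: str = "origin/main") -> tuple[int, int]:
    """Return (files_changed, lines_changed) for the current branch vs. base."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    files, lines = 0, 0
    for row in out.splitlines():
        added, removed, _path = row.split("\t", 2)
        files += 1
        # Binary files show "-" in numstat; count them as a touched file only.
        if added.isdigit() and removed.isdigit():
            lines += int(added) + int(removed)
    return files, lines

if __name__ == "__main__":
    files, lines = diff_stats()
    if files > MAX_FILES or lines > MAX_LINES:
        print(f"PR too large to review comfortably: {files} files, {lines} lines changed.")
        print("Please split this into smaller, atomic PRs.")
        sys.exit(1)
    print(f"PR size OK: {files} files, {lines} lines changed.")
```

A check like this is a nudge, not a law—occasionally a mechanical rename legitimately touches hundreds of files, so you'd want an escape hatch for those cases.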
As an aside, AI can actually generate small, digestible diffs! You just need to prompt it to do so. I have found that methodically walking the AI through the problem step by step not only produces more digestible diffs but also higher-quality code.
AI generates low quality code #
I don't quite know what to say to this one! If you're not reviewing PRs for quality in the first place, then that's a problem. Just apply the same level of vetting to AI-assisted code as you would to any other code. If you don't currently review PRs closely, then the problem isn't the quality of the code—it's that you're phoning it in during PR reviews.
People just accept whatever AI outputs without understanding it #
I tend to review PRs pretty closely and ask questions about anything I don't understand or think may be wrong. If you author a PR and are unable to answer the questions I have about your code, then it's not making it into the codebase. Again, this is as true today as it was 10 years ago. AI or not, I am going to make sure your code makes sense!
AI can use outdated/vulnerable dependencies #
Adding third-party dependencies in a PR should be treated as a "bigger deal" than some folks treat it today (I'm looking at you, node ecosystem). Your PR review process should include evaluating which new dependencies are being added and reviewing the versions being installed. Ideally, there should also be some consideration of whether an external dependency is even needed.
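One way to make new dependencies hard to miss during review is to surface them automatically. Here's a minimal sketch that diffs the dependency map in `package.json` between the base branch and the PR; the `origin/main` base and the repo-root manifest location are assumptions, and other ecosystems would need their own manifest parsing.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag dependencies added in a PR by diffing package.json.

Assumes a Node project with package.json at the repo root and origin/main
as the base branch; adjust both for your setup.
"""
import json
import subprocess

def manifest_deps(ref: str) -> dict[str, str]:
    """Read the dependency map from package.json at a given git ref."""
    raw = subprocess.run(
        ["git", "show", f"{ref}:package.json"],
        capture_output=True, text=True, check=True,
    ).stdout
    pkg = json.loads(raw)
    # Merge runtime and dev dependencies into one name -> version map.
    return {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}

if __name__ == "__main__":
    base = manifest_deps("origin/main")
    head = manifest_deps("HEAD")
    added = {name: ver for name, ver in head.items() if name not in base}
    if added:
        print("New third-party dependencies in this PR (review deliberately):")
        for name, ver in sorted(added.items()):
            print(f"  {name} {ver}")
    else:
        print("No new dependencies introduced.")
```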
Outside of the PR review process, you should ideally have automated dependency scanning (SonarQube, Dependabot, etc.) that will detect vulnerable dependencies.
Conclusion #
If you're worried about AI "slop" making its way into your codebase, consider how you prevent human "slop" from making its way into your codebase. PR reviews are a critical tool for this—and should remain one as we explore this new AI-assisted world.
Enjoy this post? Please subscribe to my RSS feed on Feedly or add my RSS XML file to another reader!