Dan Faulkner’s Post


Chief Product & Technology Officer at SmartBear. Product & Technology Leader. SaaS, Software, AI

Generative AI coding assistants have been widely adopted for a while now, and data on their impact is starting to come in. A few thoughts based on the literature, what we’re hearing from our community, and what we’re seeing ourselves:

1. The promise of coding assistants to accelerate and democratize the development of good, resilient software quickly remains compelling. The current state of the art is that coding assistants are useful tools that should be deployed with care.
2. The world is still calibrating to coding assistants, while the assistants themselves are changing rapidly. Two moving targets make definitive assessment tricky.
3. Someone who is good at writing code may not be good at editing an assistant’s code. They’re different skills, and we should anticipate different outcomes (and enthusiasm).
4. Coding assistants are good at enriching unit tests and enhancing test coverage.
5. They’re helpful for explaining complex code, or code written in a language unfamiliar to the developer.
6. Coding assistant output can be functionally correct but still not good code (buggy, insecure, not following guidelines, discouraging reuse). The human in the loop needs to be skilled and diligent to maintain quality and security (a small illustration follows this post).
7. Due to lack of time, attention, experience, or confidence, a lot of not-good code is being accepted into the world’s repos.
8. The total cost of ownership of this sub-par code needs to be weighed against the upfront velocity gains the world is (too?) focused on.
9. There’s going to be a need for new approaches to software quality and security with the surge in code velocity and relative degradation in code quality.

We at SmartBear are using GitHub Copilot, and we believe it is a net benefit. We’re doing it thoughtfully, and we’re diligent and objective about its pros and cons. I'd love to read others’ experiences and thoughts.
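To make point 6 concrete, here is a minimal, hypothetical sketch (the function and table names are mine, not anything from SmartBear or Copilot). Both functions return the right rows for ordinary input, but the first, typical of unreviewed assistant output, builds SQL by string interpolation and is open to injection; the reviewed version parameterizes the query.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Functionally correct for well-behaved input, but a crafted username
    # such as "x' OR '1'='1" rewrites the query: classic SQL injection.
    cursor = conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    )
    return cursor.fetchall()

def find_user_reviewed(conn: sqlite3.Connection, username: str):
    # Same result for legitimate input, but the value is passed as a bound
    # parameter, which is what a diligent reviewer should insist on.
    cursor = conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    )
    return cursor.fetchall()
```

The fix costs nothing in velocity, but only if the human in the loop knows to look for it before the code is accepted.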

Laurent P.

Entrepreneur, GM and Product Leader, SaaS, Software, AI

3w

Couldn't agree more, Dan Faulkner. The significant increase in code churn is also a big issue compared to incremental refactoring. AI coding assistants don't behave like humans, and aligning these behaviors will be key. While we can expect significant improvements from the assistants themselves, it also means that we, as humans, will probably need to evolve our best practices and change a few habits when working with them to get the most out of them.

Would be keen to see the data behind "good at enriching unit tests and enhancing test coverage".

Sanat Patel

Advanced Healthcare Analytics and Insights Expert

3w

Agree, Dan. Are you able to leverage GenAI to enhance your product offering and create tests from API and client code? I know you already have automated test suites for various environments.

Mike Flaherty

Solutions Engineer, Functional Testing at SmartBear

3w

Great points, and good to hear the general dev community is being cautious but also creative with gen AI tech, particularly how it brings a fresh focus on quality.

Maxwell Kaplan

Struggling with accelerating your SDLC with confidence? Read this profile.

3w
Jamie Tischart

On Sabbatical. Founder & CEO of EzJack Apps - coming soon in 2024! Technology Executive & Advisor, including CTO, CIO, CISO & CPO roles; former CTO @ BetterCloud, ex-Twilio/SendGrid, Intel, McAfee, MxLogic, Openwave, Corel

3w

Well said, Dan!


