Oh shit: AI works!


I recently started managing a junior software engineer - let's call him Sam. Smart guy, but green. He could code, sure, but he needed a lot of hand-holding to stay on track.

To help Sam out, I started writing super detailed tickets for him. I'd break each task down into bite-sized pieces: what we're trying to do, how we might do it, what could go wrong - you name it. It was like giving him a roadmap for each task. Sure, it took me a bit more time upfront, but it paid off. Sam's work improved, and I ended up saving hours in the long run.

One night, while wrapping up a detailed ticket, a crazy thought hit me: "Hey, why not toss this whole ticket into ChatGPT and see what happens?"

And holy cow, it actually worked! The AI churned out the entire solution. Sure, there was a tiny hiccup - a small bug - but it was a breeze to fix.

I completed the entire task in just 30 minutes, including the time spent writing the ticket. By comparison, I estimate it would have taken Sam at least 4 hours, potentially up to 8 hours if the pull request required revisions. The efficiency gain was significant.

This was my "Oh shit, AI works" moment. And I'm not alone. I've heard similar stories from some big names in software engineering. Django co-creator Simon Willison had his own "wow" moment. So did Erik Meijer, the brain behind LINQ and VB .NET. It seems we're all starting to realize just how powerful this tech can be.

AI works... but it depends on the task...

Since then, I've been treating AI like a new junior dev on my team. I give it tasks and review its work. When it works, it's great. But it doesn't always work.

Generally speaking, the AI performs best the more information you give it about the problem it needs to solve.

Sometimes the task can be described in "common knowledge" programming terms, so you don't have to prompt much. But sometimes describing the task requires language specific to your codebase. That means you have to explain those concepts to the AI first, and that gets tedious very quickly.

...until the tooling matures?

At this point I've realised it's just a matter of time before tooling matures to automatically generate these contextual prompts for you.

I can imagine a tool you could point at a few files: it would traverse the dependency tree up to N levels, summarise those files, prepend that summary to your actual prompt, and voilà! You'd have a capable coding agent.

I won't lie, I find this future a bit scary.

If you liked this article, follow me on Twitter. I write about once a month about software engineering practices and programming in general.



© Fernando Hurtado Cardenas.