Coding with AI is not all that

I am a long-time software engineer and, of late, a college professor. I’ve done a lot of programming and recently have been using AI (I will use that term generically, but for me it is Claude.ai) to assist. I have thought of it as a super-duper Stack Overflow which, instead of returning a series of semi-useful posts, actually gives me working code. It is truly remarkable and I use it every single day.

As a teacher, I have encouraged students to think of AI as a power tool which they need to learn to use. Some other teachers treat the use of AI as “cheating”. For me it is no more cheating than using Google to search. However…

Of late I have discerned a dark pattern which I had not identified before, and it applies not just to students but to me as well. AI in coding gives a nice sugar high. Instead of thinking and understanding how my code works or how a new API works, it is so easy to get the code from AI and just try it. I recently dug into async in Python, which I had not used before, and that was the trap I fell into.

Now we all know that the code that AI generates is not necessarily correct. But it may look almost correct, in that it is syntactically right and usually sort of works.

The problem is that while it may be an almost correct piece of code, it is often the totally wrong solution to your problem. In other words, it often sets you on the wrong path. A dead end. The trap is that you spend your time debugging an approach that is totally wrong.
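Here is a minimal sketch of the kind of trap I mean, drawn from my async adventure (the function names and timings are made up for illustration). The first version is syntactically fine and “works”, in that it returns the right answers. But it quietly blocks the event loop, so nothing actually runs concurrently; no amount of debugging fixes it, because the approach itself is wrong:

```python
import asyncio
import time

# The "almost correct" version: it compiles, it runs, it even returns
# the right answers -- but time.sleep() blocks the event loop, so the
# coroutines run one after another instead of concurrently.
async def fetch_page(url: str) -> str:
    time.sleep(1)  # blocking call hiding inside a coroutine
    return f"contents of {url}"

# The right solution is a different approach, not a patched-up version
# of the one above: use an awaitable sleep (or a real async HTTP client)
# so the event loop can interleave the waits.
async def fetch_page_async(url: str) -> str:
    await asyncio.sleep(1)  # yields control back to the event loop
    return f"contents of {url}"

async def main() -> None:
    start = time.perf_counter()
    await asyncio.gather(*(fetch_page(u) for u in ["a", "b", "c"]))
    print(f"blocking version: {time.perf_counter() - start:.1f}s")  # ~3s

    start = time.perf_counter()
    await asyncio.gather(*(fetch_page_async(u) for u in ["a", "b", "c"]))
    print(f"async version:    {time.perf_counter() - start:.1f}s")  # ~1s

asyncio.run(main())
```

If you do not understand the event loop, the first version looks like working async code, and you can burn hours tuning it before realizing the whole structure has to go.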

When does this happen? I find that it happens when I am tackling something I don’t really understand and want a shortcut to my sugar high. Upon reflection I have found, more than once, that I spent hours debugging an approach which, once it was working, I realized was all wrong, and I had to start over.

So my lesson is: beware of code generated by AI, not just because it may be “wrong” or “buggy”, but because it can send you in the wrong direction, leaving you with a cul-de-sac solution that is suboptimal and that, in the end, you will have to rewrite.