Negatives of coding with AI
I have been using Claude Code for several months now, and I am quite impressed by its abilities. But I have a growing sense that using LLMs for code generation is a double-edged sword, where a swing in the wrong direction is easy to miss and highly probable.
- Harder to release new frameworks. New frameworks and languages would need seeding: the AI has to be taught about them first. The inertia against adopting a new framework or language will become much higher, because you would need to invest a lot of time explaining to the AI how to use it.
- The margin of error is too high. It is almost impossible to predict whether your prompt will work or not.
- Possibility of generating a Rube Goldberg machine. Instead of a single coherent backend, you end up with multiple scripts or multiple services talking to each other, resulting in a Rube Goldberg-like apparatus. 1
- Uncontrollable behaviour, like in Fantasia. When I asked Claude to write a unit test, it replied that it had found some bugs but was not going to fix them, since I had only asked it to write the unit tests. 2
- Arthur C. Clarke's prediction that at some point people will stop understanding how technology really works. This is the worst risk of all, because soon enough we will reach a point where everything just works. People used to talk about making programming as basic as mathematics and English, but vibe coding seems to be becoming the norm.
Clarke’s Third, and probably most well-known, Law was that: ‘Any sufficiently advanced technology is indistinguishable from magic.’ 3