Negatives of coding with AI
I have been using Claude Code for several months now, and I am quite impressed by its abilities. But I have a growing sense that using LLMs for code generation is a double-edged sword that can easily swing in the wrong direction.
Here are some of the negatives of using AI for code generation at a large scale.
- It gets harder to launch new languages and frameworks. Seeding would now be needed: teaching LLMs about a new language or framework requires upfront effort from its creators. Developers' inertia against learning new frameworks or languages will also grow, because you would first need to invest a lot of time explaining the code to the LLMs. It remains to be seen how fast LLMs can learn totally new things.
- The margin of error is too high. It is almost impossible to predict whether your prompt will work, which is a huge challenge when you have no way of your own to judge the correctness of the output (see the minimal correctness harness sketched after this list).
- The possibility of generating a Rube Goldberg machine. LLMs cannot handle large codebases, so instead of a single coherent backend you end up with multiple scripts or services talking to each other, a Rube Goldberg-like apparatus. As dependence on LLMs increases, developers will give in to smaller codebases with a microservices architecture. The risk is going too micro. 1
- Uncontrollable behaviour, like the sorcerer's apprentice in the classic Mickey Mouse cartoon Fantasia. When I asked Claude to write a unit test, it replied that it had found some bugs but was not going to fix them, since I had only asked for unit tests. It then went ahead and wrote the tests without fixing the bugs it had found. Without a human in the loop, LLMs can be quite literal about what you ask them to do. 2
- Arthur C. Clarke's prediction that at some point people will stop understanding how technology really works. This is the worst risk of all, because soon enough we will reach a point where everything just works. People used to talk about making programming as basic a skill as mathematics and English, but now vibing seems to be the norm. Clarke's third, and probably most well-known, law: "Any sufficiently advanced technology is indistinguishable from magic." 3
- Future LLMs might be trained on the vibe code written by other LLMs. This will lead to serious issues once a large portion of the code published online is LLM-written: the patterns in these codebases could deteriorate to the point of nonsense. Selecting the training data would require a way to verify the correctness of code, which sounds like a P = NP-level problem (in general it is even worse: deciding the correctness of arbitrary code is undecidable). Detecting whether code was written by an AI does not help either, since rejecting it would contradict the whole LLM code generation effort. We do not know what garbage or incorrect data LLMs are already being trained on.
- Don't ever let Claude access your deployment keys or the db password. Letting LLMs access everything is a huge security risk; they bring so many new risks that we won't even know what hit us. LLMs can mistakenly install malware, publish your keys, delete your db, deploy bugs, and more. There have already been cases where prompt injection, phishing, and spoofing caused real harm. One cheap safeguard is to strip secrets from the agent's environment, as sketched after this list.
- Privacy and IP risk is very high when you are sharing all your code with LLMs. You are just one toggle away from letting LLM companies use your code for training.
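
On judging correctness (the margin-of-error point above): the only reliable defence I know of is to pin generated code down with checks you write yourself before trusting it. Below is a minimal sketch; `generated_slugify` is a hypothetical stand-in for whatever function the LLM produced, and the expected outputs are assumptions I chose for illustration.

```python
# Minimal harness for sanity-checking LLM-generated code before trusting it.

def generated_slugify(title: str) -> str:
    # Pretend this body came back from an LLM.
    return "-".join(title.lower().split())

def check(fn) -> bool:
    # Hand-written expectations: the human, not the LLM, defines correctness.
    cases = {
        "Hello World": "hello-world",
        "  spaces   everywhere  ": "spaces-everywhere",
        "already-slugged": "already-slugged",
    }
    ok = True
    for inp, want in cases.items():
        got = fn(inp)
        if got != want:
            print(f"FAIL {inp!r}: got {got!r}, want {want!r}")
            ok = False
    return ok

if __name__ == "__main__":
    print("looks sane" if check(generated_slugify) else "do not ship")
```

This does not prove the code correct; it only turns "I have no idea" into "it passes the cases I actually care about".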
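
On the keys-and-passwords point: one cheap safeguard is to launch the agent with secrets stripped from its environment, so even a confused or prompt-injected model cannot read them. A minimal sketch, assuming the secrets live in environment variables (the names here are illustrative):

```python
import os
import subprocess

# Illustrative names; substitute whatever your pipeline actually uses.
SECRET_VARS = {"DATABASE_URL", "DB_PASSWORD", "DEPLOY_KEY", "AWS_SECRET_ACCESS_KEY"}

def scrubbed_env() -> dict[str, str]:
    """Copy of the current environment with known secrets removed."""
    return {k: v for k, v in os.environ.items() if k not in SECRET_VARS}

# Launch the coding agent (here, the Claude Code CLI) without the secrets.
subprocess.run(["claude"], env=scrubbed_env())
```

This only narrows the blast radius; real isolation (a separate machine, read-only credentials, a sandboxed database) is still the better answer.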