The dangers of Vibe Coding
The advances in Large Language Models (LLMs) and "AI" have started to transform how we do things. Instead of spending hours figuring out how to word an email, for example, you can just ask an AI tool to do it for you. The same technology is having an impact on the software development world as well. With tools like Copilot, ChatGPT and Cursor, there is a new buzzword: 'Vibe Coding'. But what is Vibe Coding, and why am I saying that it is dangerous?
What is Vibe Coding?
Vibe Coding is where, instead of solving problems yourself, you get AI to solve them for you. By crafting prompts for one of the many AI engines out there, you get the tool to generate the code needed to build an application, website or game. It gives you the illusion of rapid application development, without the work or understanding that it would usually take, and it can get something up and working in a very short time-span. Sounds good, right? Wrong!
OK, what is wrong with AI LLMs anyway?
To fully understand why I consider it to be one of the most dangerous innovations to happen to software development, first you need to understand something. There is no intelligence in any of the artificial intelligence solutions on the market. None, zip, zero, zilch. 'AI' is marketing speak for a different technology: Large Language Models, or LLMs. LLMs give the impression of intelligence, but each one is really a very sophisticated algorithm that emulates intelligence by analysing the entered text and interpreting it, using the data it has been trained with, to generate a suitable response. It doesn't really understand what is being said, and it has no real intelligence. LLMs are trained on lots of data, and the more data you give one, the more accurate-sounding the responses it can give.
A lot of these models were trained on data gathered from the internet, and, as I hope you already know, not everything you read on the internet is in fact true. As humans, we mostly have the intelligence to filter out the nonsense from the facts (although with movements like flat earth, it is easy to think otherwise). But the algorithms used in LLMs work on statistics and probability. If enough false information is fed in during training, that is what the model draws on and what it will return in response to certain questions.
For software development, these models have been trained on publicly shared code. And like the "facts" on the internet, not all the code shared is good code. There are lots of examples of code that are trimmed down from production quality so as to convey an idea or methodology in the simplest way. This code is not something you would ever put into production, but it is useful to share and consume as a learning exercise. The LLM, though, is not able to tell the difference, so it may recommend this code as a complete solution. The vast majority of the code shared on the internet is like this; the only real exceptions are open source projects where actual production code is available.
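To make that concrete, here is a small made-up illustration (the function names and the scenario are mine, not taken from any real tutorial): a trimmed-down snippet of the kind commonly shared online, next to roughly the sort of validation and error handling the same code would need before you could put it into production.

```python
import json
from pathlib import Path

# Tutorial-style snippet: fine for explaining an idea,
# but with no error handling it is not production code.
def load_config(path):
    with open(path) as f:
        return json.loads(f.read())

# Roughly what the same function needs before you would ship it:
# input checks, explicit failure modes, and clear error messages.
def load_config_safely(path: str) -> dict:
    config_file = Path(path)
    if not config_file.is_file():
        raise FileNotFoundError(f"Config file not found: {path}")
    try:
        data = json.loads(config_file.read_text(encoding="utf-8"))
    except json.JSONDecodeError as exc:
        raise ValueError(f"Config file is not valid JSON: {path}") from exc
    if not isinstance(data, dict):
        raise ValueError(f"Expected a JSON object in {path}")
    return data
```

A model trained on mostly the first kind of snippet has no way of knowing that the second kind is what you actually wanted.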
So why is it so dangerous?
Well, as we have established, LLMs don't really understand or use intelligence; what they produce is an approximation. The tool will present something that sounds accurate without necessarily understanding the ramifications of what it has come up with. That approximation will be good enough to convince those who do not fully understand the generated code that it is complete. So it is possible to create a lot of code that may look fine on the surface, but contains significant bugs. And because the person using the tool has not spent the time to fully understand the problem or the solution, finding and correcting these bugs becomes a large challenge. Not only that, but the more people stop using their own brains to solve problems and rely on tools, the less capable they become at actual problem solving.
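As a hedged, invented example of what "looks fine but contains a significant bug" can mean (the table and functions here are purely illustrative): a user lookup of the kind an AI tool might plausibly generate, which works in a quick demo but builds its SQL by string interpolation, leaving it wide open to injection.

```python
import sqlite3

# Plausible-looking generated code: it runs and the happy path works,
# but interpolating user input into the SQL string means an attacker
# can rewrite the query (classic SQL injection).
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# The safe version uses a parameterised query, so the input is treated
# as data rather than as part of the SQL statement.
def find_user(conn: sqlite3.Connection, username: str):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()

# With the unsafe version, an input like "x' OR '1'='1" matches every row;
# with the parameterised version it matches nothing, as it should.
```

If you wrote the function yourself you would (hopefully) know why the second form matters; if you vibe-coded it, the difference is invisible until someone exploits it.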
If Vibe Coding catches on, we could be left with a generation of software 'developers' who don't actually know how to code. That will be bad for the quality of software applications, great for hackers who want to abuse the systems those developers have created, and incredibly hard for actual software developers to fix after the fact.