Just as you probably don’t grow and grind wheat into flour for bread, most programmers don’t write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security problems than it solves. Instead, developers rely on existing libraries – often open source projects – to implement core software components.
While this approach is effective, it can expose software to vulnerabilities and reduce visibility into what it contains. Increasingly, vibe coding is being used in a similar way, allowing developers to quickly generate code that they can simply customize rather than write from scratch. But security researchers warn that this new breed of plug-and-play code makes software supply chain security even more complicated and risky.
“We are now reaching a point where AI will soon run out of its security grace period,” says Alex Zenla, chief technology officer at cloud security company Edera. “AI is its own worst enemy when it comes to generating insecure code. If an AI is trained partly on old, vulnerable, or low-quality software that’s out there, then all of those existing vulnerabilities can recur and be reintroduced, not to mention new problems.”
Beyond drawing on potentially insecure training data, the reality of vibe coding is that it produces a rough draft of code that may not fully account for the specific context and considerations of a given product or service. In other words, even if a company trains a local model on its own source code and natural-language descriptions of its goals, the process still depends on human reviewers to catch any errors or inconsistencies in the code the AI originally generated.
“Engineering groups need to think about the software lifecycle in the era of vibe coding,” says Eran Kinsbruner, a researcher at application security company Checkmarx. “If you ask the exact same LLM to write specific source code, it will produce a slightly different result each time. One developer on the team will generate one result, and another will get a different one. That introduces an additional complexity beyond open source.”
In a Checkmarx survey of chief information security officers, application security managers, and heads of development, a third of respondents said that more than 60 percent of their organization’s code was generated by AI in 2024. Yet only 18 percent of respondents said their organization had a list of approved vibe coding tools. Checkmarx surveyed thousands of professionals and published the results in August, also highlighting that the rise of AI-generated code makes it difficult to track “ownership” of code.
