Ars Technica has an intriguing post on vibe coding with two different models that raises an interesting question about what would happen if this were done on production data. The article, “Two major AI coding tools wiped out user data after making cascading mistakes”, describes two models generating code and then deleting data or code. Both models made errors and then tried to cover them up before coming clean.
An AI model cannot introspect on itself or its training, which is perhaps where a human operator can step in, along with dealing with hallucinations. As Jason Lemkin points out in a blog post, vibe coding may not clear production-level barriers, but it can produce some interesting prototypes. Maybe this is where the concept of vibe coding sits: it creates prototypes but still requires engineering to explore some of the less obvious corners and uses.
Yet a question still remains. If a vibe-coded application led to catastrophic data deletion, who is liable? The platform that provides the model? The party that trains the model, if that is separate? The user? The user’s company?
Update: As an aside, the intersection of vibe coding and liability also comes up in a 404 Media story about Amazon Q and malicious commands. It may be a small model and tool in terms of use, but the concept raises additional eyebrows (in my case) about the provision of tools and the supply chain.