Stephen Ramsay recently wrote a great post with a self-evident thesis: "If You're Going to Vibe Code, Why Not Do It in C?". His logic: programming languages exist for human convenience, AI doesn't need that convenience, so why not cut out the middleman?
Go read it, it's a truly excellent post: https://stephenramsay.net/posts/vibe-coding.html
Also he says this:
Or hell, why not do it in x86 assembly?
and not super snarkily, which I think is hilarious. I want to try it.
He mentions thinking that vibe coding feels "dirty" - he and I strongly disagree on the "vibe" part of vibe coding. I love vibe coding. It is a beautiful thing! I can actually build things at the speed at which I have ideas, which is way too fast. I can finally design frontends that don't suck (I apologize to previous employers who had to experience this when they were desperate enough to pull me out of the backend). I can pick and prod at architecture designs endlessly so I don't miss things or make bad assumptions.
So yeah, I love coding! I love writing code, and even more I love designing systems, but mostly I love turning ideas into reality, and vibe coding has massively amplified my ability to do that. It's never been about the code; it's about what you can do with it.
Ignoring these opinion differences, I want to focus my disagreement on the substantive part of the post: the safety features in modern languages aren't just for humans. They catch the mistakes that AI makes constantly. The two best things about Rust, its borrow checker and its compiler errors, aren't overhead when you're vibe coding. They make LLMs better coders, too.
So if you're going to vibe code, why not do it in Rust?
Note: I don't know C. My first language was C++ in a couple of intro CS electives, but I learned coding later via ML: Python, R, then Go and some JS/TS, and finally Rust.
On vibe-oriented languages
First I want to discuss the conclusion of his post. Ramsay goes further than just "why C?" He speculates about what a VOPL (vibe-oriented programming language) might look like: executable pseudocode that secretly writes assembly, literate programming's final form, something closer to natural language with learned idioms that guide AI toward solutions. "Concurrency slang" instead of goroutines.
It's fun. But it's backwards.
Programming languages have always moved toward more human-centric expression. LLMs work through human communication. Though I'll talk a lot about how great Rust is, it isn't the destination. It's a starting point. It's the best current verification layer for an increasingly natural-language front end. The eventual VOPL is English plus a verification backend that doesn't exist yet.
What does that look like? Maybe a semantic compiler. Describe behavior, it figures out what has to be true, asks where you're ambiguous, refuses to generate code until the spec is coherent. A borrow checker for intent. And maybe then it knows to build a semantic validation layer before writing code, so that any code it writes is checked against that layer, verifying that the user's intent is upheld as a (probabilistically) provable part of the coding path.
I think Rust is a good model for what tooling should look like when you're coding with LLMs. Don't assume that the LLM is omniscient; rather, create tools that a) find the bugs it will inevitably create and b) tell it how to fix them.
So, with the more philosophical part out of the way, let's talk about C and Rust.
Why C (Python, Java, ...) is the wrong answer
Ramsay suggests C because AI doesn't need safety rails. But AI makes mistakes constantly. Subtle ones. Off-by-one errors, use-after-free, buffer overflows, uninitialized memory, integer overflow, null pointer dereference. (I had Claude come up with that list, yes)
In C, these compile fine and fail at runtime. Or worse, they don't fail, they just corrupt memory and produce wrong results. Debugging AI-generated C means hunting through code you didn't write for bugs you can't see. This is even worse if you don't know the language (me).
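A minimal sketch of the difference (my example, not Ramsay's): the classic off-by-one that naive C happily compiles, and that Rust's checked APIs turn into a case the caller must handle.

```rust
/// Returns the last `n` items of a slice, or None if `n` is too large.
/// In C, the equivalent pointer arithmetic can read past the buffer and
/// still "work"; here the checked APIs force the miss to be explicit.
fn last_n(items: &[i32], n: usize) -> Option<&[i32]> {
    // `checked_sub` refuses to underflow. In C, unsigned `len - n`
    // silently wraps to a huge number and the read corrupts or crashes.
    let start = items.len().checked_sub(n)?;
    items.get(start..)
}

fn main() {
    let data = [1, 2, 3];
    assert_eq!(last_n(&data, 2), Some(&data[1..]));
    assert_eq!(last_n(&data, 5), None); // a wild read in careless C
}
```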
And the C code that LLMs are trained on spans many C versions, offshoots (C++), ways of doing things (famously endless), and levels of skill (decades of every CS major's Gists in C). What is "correct" to an LLM is potentially incorrect, even when the training corpus is very large.
These issues are not unique to C. They are arguably worse in LLM-generated Python code. Python doesn't have even C's static type system, so on top of being less efficient, skipping compilation entirely, and having challenging dependency management (much better with uv now, thanks Astral, y'all rock), LLM mistakes can't be isolated to the business logic. They are often in the basic details.
It's easy to hope that, for C/Python (and Java, etc.), this is mitigated by the massive corpus of code LLMs are trained on, probably an order of magnitude larger than Rust's.
...but isn't that an advantage for Rust? I'd speculate that its public code is modern, fairly small, and rarely written by the type of person who generates horrible code. The language has gone through major changes, especially around concurrency (tokio etc.), but Rust is often used to replace existing codebases with best-practice patterns. See: uv.
This section can end by noting that Ramsay doesn't think LLMs make basic mistakes. I actually believe he will become more and more correct on this point over the next few years as LLMs level up. But he uses it as a core argument, and if it were true right now, why are there so many errors in LLM-generated C code? https://www.reddit.com/r/ClaudeCode/comments/1nkz8j2/help_sometimes_claude_code_is_great_but_too_often/
The "human affordances" Ramsay wants to skip are exactly what catch AI errors. Remove them and you're trusting an LLM to get memory management right every time.
The issue isn't "most of the time, LLMs are doing pretty well at not making fundamental errors inherent to this language". It's "what happens when they do?"
What vibe coding in Rust actually feels like
When Claude generates Rust code, it either compiles or it doesn't. No silent memory corruption waiting to bite me at 2am. No segfault in production from a double-free the AI introduced three prompts ago.
The borrow checker is brutal, which is exactly what I want when I'm moving fast and trusting generated code. Every lifetime error, every move violation, every data race that rustc catches is a bug I never have to debug.
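A small stdlib-only sketch of what that feels like in practice. The only reason this shared counter compiles is that the sharing is spelled out with Arc and Mutex; hand the threads a bare mutable reference instead and rustc rejects the program before it ever runs, which is exactly the data race C would let you ship.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// A counter incremented from several threads. The compiler accepts this
// only because the shared state is wrapped in Arc<Mutex<_>>; a plain
// `&mut u64` captured by multiple threads is a compile error, not a
// 2am production incident.
fn count_to(n: u64, threads: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..n {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(count_to(1000, 4), 4000); // no lost updates, by construction
}
```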
In Python or JavaScript, AI-generated code can "work" while hiding subtle issues. In Rust, the compiler forces correctness at the boundary between my intent and Claude's interpretation.
When I ship Rust, I'm not worried about what I missed like I am when vibe coding with Python or Javascript (TS is better, yes). The type system caught the nulls. The borrow checker caught the data races. The compiler caught the edge cases Claude forgot to handle. I used to spend mental energy on "what could go wrong here?" Now the compiler answers that question. I can focus on what I'm building instead of what might break.
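A minimal sketch of how the compiler catches the nulls (the config-lookup names here are illustrative, not a real API): in Rust, "maybe absent" is Option, and the code won't compile until every absent case has an answer.

```rust
use std::collections::HashMap;

// A hypothetical config lookup. Both failure modes, missing key and
// malformed value, are visible in the types, and the compiler forces
// us to pick a behavior (here, a default) before this compiles.
fn port_from(config: &HashMap<String, String>) -> u16 {
    config
        .get("port")                  // Option<&String>: key may be missing
        .and_then(|p| p.parse().ok()) // Option<u16>: value may be malformed
        .unwrap_or(8080)              // the compiler made us choose a default
}

fn main() {
    let mut cfg = HashMap::new();
    assert_eq!(port_from(&cfg), 8080); // missing: handled, not a crash
    cfg.insert("port".to_string(), "9090".to_string());
    assert_eq!(port_from(&cfg), 9090);
}
```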
When Claude changes a type or function signature, the compiler shows every call site that needs updating. Claude fixes each one. I review the diff. Done.
The compiler becomes the prompt engineer for Claude.
"Rename this field and update all usages." I can say this once and be confident that it's implemented all over. The compiler tells Claude exactly what broke. Claude fixes each location. I review. In dynamic languages, refactoring is scary because you never know what you missed. In Rust, the compiler is Claude's checklist.
For broader changes like "let's make this concurrent", Rust's supply chain has clean solutions for them. It's been really easy to let Claude add the dependencies, and they generally integrate cleanly. No virtualenv confusion. No npm peer dependency hell. Cargo build either succeeds or tells Claude exactly what's wrong. I've had only a couple dependency issues in 6+ months of using Rust every day, and that sounds impossible to Python engineers.
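To sketch the "make this concurrent" case with only the standard library (crates like rayon reduce this to a one-line `par_iter` change, which is usually what I'd let Claude reach for): scoped threads let the code borrow local data directly, and the compiler proves the borrows end before the data does.

```rust
use std::thread;

// A serial sum split across scoped threads. `thread::scope` lets the
// spawned threads borrow `data` from the stack, and rustc verifies the
// borrows can't outlive it; no Arc needed, no chance of a dangling read.
fn parallel_sum(data: &[u64], chunks: usize) -> u64 {
    let chunk_size = ((data.len() + chunks - 1) / chunks).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_size)
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=100).collect();
    assert_eq!(parallel_sum(&data, 4), 5050);
}
```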
Python still matters, though. I come from that land of vipers; it is still close to my heart. And it remains very relevant for distribution. Most major AI projects release in Python. The AI revolution happened in Python, starting with ML/NN on numpy/pandas. Modern Python tools frequently use Rust backends (polars, e.g.), but the distribution is still pip install. Claude can write performance-critical code in Rust and I can ship it to Python users via PyPI. Users can pip install and never know it's Rust underneath.
A note on efficiency
C is very efficient.
Part of the architecture of my startup requires running a lot of small processes. I started with Python for the prototype because that was fastest to iterate. Then I built the hardened systems in Go. Now it's all Rust. Why?
The memory savings over Go add up when you're running thousands of instances. We're talking 2-5x less memory per (small) process compared to Go, 10-50x less than Python or Node. At scale, that's real money, especially with the recent 2-5x increase in RAM prices and the subsequent tightening of memory costs in most public clouds. Rust isn't just "fast enough." It's the efficient choice when you're paying per megabyte.
Most vibe coded projects will never make it that far, but the smaller version of this is: what if you could put all your projects on a tiny VPS instead of a large one?
I actually still use Go, specifically for a couple large, centralized, high-concurrency pieces. It's really hard to beat goroutines for some tasks. Also, the backend of this website is in Go, because I like writing Go.
But the rest? It's Rust. Rust is great.
I'm not saying "learn Rust, then vibe code." I'm saying vibe code in Rust without knowing much Rust. You don't need to write it. You need to review it. Prompt it after development to check for locking issues, overengineering, and unnecessary complexity. LLMs are getting really good at that review step, and they can do it endlessly. Paste the code back with "review this for concurrency issues" and it might find problems. Knowing Rust helps for debugging, sure. But the barrier to entry is lower than people think. Describe what you want, let the AI write it, let the compiler verify it.
Note: if you're building production systems with vibe code, please learn to read the language your systems use. In fact, I learned Rust by having Claude teach me and walk me through practice.
Ramsay's argument assumes vibe coding removes humans from the equation. But the best vibe coding is collaborative: you steering, AI generating, compiler verifying.
Languages designed to protect humans from themselves turn out to protect humans from AI mistakes too. That's not overhead. That's leverage.
So, again: if you're going to vibe code, why not do it in Rust?