Building a Programming Language for AI to Write
Every general-purpose programming language in existence was designed for humans to write.
C was designed for humans who wanted to talk directly to hardware. Python was designed for humans who wanted to think less about syntax. Rust was designed for humans who wanted memory safety without garbage collection. Even Zig and Odin — the modern systems languages I admire most — were designed for human brains, human fingers, human reading patterns.
But here's the thing: AI is writing more and more of our code. And the languages it's writing in were never designed for it.
I've been building games with Claude Code for over a year now. It writes Zig, GDScript, TypeScript — whatever I need. And it's good at it. But I keep watching it struggle with the same things: implicit conversions it can't see, context-dependent syntax it has to guess at, overloaded operators that mean different things in different scopes.
So I started building Klar — a programming language designed from the ground up so that AI can write correct code on the first attempt.
The Problem with Existing Languages and AI Code Generation
When an AI generates code, it's predicting tokens. It doesn't "understand" your codebase the way you do. It's making statistical guesses about what comes next, informed by everything it's been trained on.
That works surprisingly well — until the language introduces ambiguity.
Consider this Rust:
let x = foo();
What type is x? You don't know. I don't know. The AI doesn't know without reading foo's signature, which might be in another file, behind a trait implementation, or generated by a macro. The AI has to carry that context — or guess.
Or this Go:
result := process(data)
Is result an error? A value? Both? Go's multiple return values and := operator mean the AI needs surrounding context to know what's happening. And "surrounding context" costs tokens, which costs money, which costs time.
These aren't flaws in Rust or Go. They're features designed for humans — humans who have IDEs, who can hover over a variable, who build mental models of a codebase over weeks. AI agents don't have those luxuries. They see what you show them, and they have to work with that. If you've tried agentic coding, you've seen this firsthand.
Klar's Design: No Ambiguity, No Surprises
Klar's core principle is that every line of code should be parseable and understandable without reading any other line of code. That's a radical constraint, and it shapes everything about the language.
Here's the same variable declaration in Klar:
let x: i32 = foo()
The type is right there. Always. No inference, no guessing, no reading another file. An AI sees this line and knows exactly what x is, what it can do with it, and what will happen if it tries to use it wrong.
This might seem verbose if you're a human who writes code all day. But if you're an AI that generates thousands of lines across dozens of files, explicit types are not verbosity — they're guardrails.
Here's how that principle plays out across the language:
Explicit returns. Every function has a visible return statement. No implicit last-expression returns, no hidden control flow.
fn max(a: i32, b: i32) -> i32 {
if a > b {
return a
}
return b
}
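For contrast, here is a sketch of the same function in Rust, which permits both styles. The implicit version is idiomatic Rust, but it is exactly the hidden control flow Klar rules out:

```rust
// Implicit last-expression return: the `if` expression's value is the
// function's result, with no `return` keyword anywhere.
fn max_implicit(a: i32, b: i32) -> i32 {
    if a > b { a } else { b }
}

// The explicit style Klar mandates, written in Rust.
fn max_explicit(a: i32, b: i32) -> i32 {
    if a > b {
        return a;
    }
    return b;
}

fn main() {
    assert_eq!(max_implicit(3, 7), 7);
    assert_eq!(max_explicit(3, 7), 7);
}
```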
Explicit conversions. There are no implicit type coercions. Ever. If you want to widen an i32 to an i64, you say so:
let x: i32 = 42
let y: i64 = x.as#[i64] // safe widening
let z: i16 = x.to#[i16] // checked narrowing (traps on overflow)
let w: i8 = x.trunc#[i8] // truncating (you asked for it)
Three different conversion operators, three different safety guarantees. An AI never has to wonder whether a conversion will silently lose data.
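Rust offers roughly the same three behaviors, but spread across different mechanisms rather than a uniform set of operators. A quick sketch of the correspondence:

```rust
fn main() {
    let x: i32 = 42;

    // Safe widening: an i32 always fits in an i64, so From is infallible.
    let y: i64 = i64::from(x);
    assert_eq!(y, 42);

    // Checked narrowing: TryInto fails at runtime if the value doesn't fit.
    let z: i16 = x.try_into().expect("value out of range for i16");
    assert_eq!(z, 42);

    // Truncating: `as` silently keeps the low bits, no questions asked.
    let w: i8 = 300_i32 as i8; // 300 = 0x12C; low byte 0x2C = 44
    assert_eq!(w, 44);
}
```

The point of Klar's design is that these three intents look syntactically parallel and are therefore easy for a code generator to pick between, instead of being split across a trait, a fallible trait, and a cast keyword.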
Explicit overflow. The + operator traps on overflow by default. If you want wrapping or saturating arithmetic, you ask for it:
let a: i32 = x + y // traps on overflow (safe default)
let b: i32 = x +% y // wrapping (you opted in)
let c: i32 = x +| y // saturating (you opted in)
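Rust exposes the same three overflow behaviors, though through methods rather than operators (and its plain + only panics on overflow in debug builds). A minimal comparison:

```rust
fn main() {
    let x: i32 = i32::MAX;
    let y: i32 = 1;

    // checked_add returns None on overflow instead of failing silently.
    assert_eq!(x.checked_add(y), None);

    // wrapping_add: two's-complement wraparound, explicitly opted into.
    assert_eq!(x.wrapping_add(y), i32::MIN);

    // saturating_add: clamps at the type's bounds.
    assert_eq!(x.saturating_add(y), i32::MAX);
}
```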
Words over symbols. and, or, not instead of &&, ||, !. Whole-word keywords are unambiguous and map cleanly onto the natural-language patterns AI models are trained on; a single mistyped & changes meaning silently, while a mistyped keyword is a compile error.
The theme is always the same: make the right thing obvious, make the wrong thing impossible, and never force the reader — human or AI — to look somewhere else to understand what's happening here.
The Meta Layer: Code That Explains Why
Here's where Klar goes beyond just being explicit about what the code does. The meta layer lets you embed why directly in the source, where the compiler validates it:
meta decision("i32 not usize -- app-level ergonomics over systems safety")
fn len(self) -> i32 {
return self.count
}
This isn't a comment. It's a compiler-validated annotation that explains a design decision. An AI agent working on this codebase can query meta annotations to understand architectural intent without reading entire files:
klar meta --tag "performance-critical"
klar meta --related Path
klar meta --json
Think about what this means for agentic coding. Instead of dumping 500 lines of context into a prompt, the AI can ask the compiler: "What design decisions affect this module?" and get structured, machine-readable answers. This is the same philosophy behind rethinking how tools communicate with AI — minimize noise, maximize signal.
The meta layer replaces the tribal knowledge that human teams carry in their heads — the stuff that's never in the docs, never in the comments, and always the reason the new developer's first PR gets rejected.
What Kind of Language Is This?
Klar targets application-level programming — the same space as C#, Go, or TypeScript. It's not competing with Rust for embedded systems or Zig for kernel development. It's designed for the 95% of code that is application logic: game systems, web services, tools, CLIs.
That means practical tradeoffs:
- .len() returns i32, not usize, because loop counters shouldn't need casts
- Ownership is simpler than Rust's: no lifetime annotations
- Array access is bounds-checked by default
- There's a REPL for fast iteration and AI self-verification
That last point matters more than it sounds. When an AI generates Klar code, it can test it in the REPL before presenting it to you. Generate, verify, fix internally, deliver working code. The fast startup time of the REPL makes that loop practical rather than theoretical.
Where Klar Is Today
The compiler is implemented in Zig with three backends: a tree-walking interpreter, a bytecode VM, and an LLVM native compiler. The native backend delivers real performance — 250x faster than the bytecode VM on benchmarks.
The standard library covers the essentials: collections, file I/O, networking, JSON, async/await, and an HTTP server. You can build real things with it today.
My current focus is self-hosting — getting the Klar compiler to compile itself. It's the traditional rite of passage for a new language, and it's also the ultimate test of whether the language is actually practical: if you can't build a compiler in it, you can't build anything serious in it.
Why This Matters for Game Development
I'm building Klar because I'm building games with AI. Stellar Throne, my 4X strategy game, is developed almost entirely through agentic coding. Every day I watch Claude Code generate hundreds of lines of Zig and GDScript, and every day I see it stumble on the same class of problems: implicit behavior, ambiguous syntax, missing context.
The game development world is heading toward a future where AI writes most of the code and humans direct the architecture. When that future arrives, the languages we use will matter as much as the AI coding tools we choose. A language designed for human fingers and human reading patterns will always be a bottleneck for AI code generation.
Klar is my bet on what comes after that bottleneck.
Klar is open source. If you're interested in a programming language designed for AI code generation, check out klarlang.dev, the GitHub repository, or follow along on this blog.