This is a technical write-up of an experimental programming language I've been building, EZ. It's a walkthrough of the actual decisions, the architecture that emerged from them, and the places where I'm still genuinely unsure.
Modern development projects are fragmented by default. You write business logic in one language, reach for C or C++ when you need performance, wire things together with Python or Bash, and then manage a completely separate layer of environment tooling to make any of it reproducible across machines. None of these layers know about each other at the language level; they only connect through build scripts, subprocess calls, and a lot of ambient convention.
The frustration isn't that any individual language is bad. It's that the coordination layer between them is almost always improvised. You end up with subprocess.run(["./mylib"]), hand-written FFI headers, and a Makefile held together by institutional memory.
I wanted to experiment with a language where the coordination layer is a first-class citizen of the language itself, not a build system bolted on top.
The central design bet in EZ is something I'm calling friend modules. The declaration looks like this:
```
friend native_math: cpp as m;
```

That single line says: there is a C++ source file called `native_math.cpp` somewhere in the project, I want to call functions from it, and I'll refer to it inside EZ as `m`. After that, you can write:

```
int x = 1 + 2 * 3;
print("sum:", m.add(x, 2));
```

EZ takes responsibility for finding the source file, compiling it into a dynamic library (`libm.dylib`), loading it at runtime, marshalling the arguments, and calling the symbol. None of that is manual. You don't write a header. You don't call `dlopen`. You don't manage a Makefile target.
The same syntax works for Python:
```
friend pymath: python as p;
print(p.multiply(3, 4));
```

EZ generates a C shim that initializes the CPython runtime, imports the module, and bridges the call, including type marshalling from EZ's type system into Python objects and back.
The thing I'm most uncertain about with friend modules is whether the abstraction holds at scale. Right now it works cleanly for simple function calls with primitive arguments. But what happens when you need to pass a struct, a buffer, or a callback? That's where FFI systems always get complicated, and I haven't designed that surface yet.
The parser is built on ANTLR4. This was a deliberate choice rather than a default one.
I considered writing a recursive descent parser by hand. For a language this small, it would have been tractable. But I specifically wanted the grammar to be an explicit, readable artifact: something I could show to someone and have them understand the language structure without reading C++ code. ANTLR gives you that: the grammar is a .g4 file that doubles as documentation.
The grammar generates both a Listener and a Visitor interface, and EZ uses both, for different passes.
One thing that surprised me: using ANTLR's generated parser means your parse tree is a concrete syntax tree, not an AST. The interpreter walks the CST directly. This is fine for the current scale, but it's a real design debt: as the language gets more complex, having an explicit AST with a proper lowering pass becomes important. Right now, the interpreter and code generator are both walking the same raw parse tree, which means structural concerns are tangled with execution concerns.
EZ currently has two separate execution paths, and keeping them aligned is one of the main ongoing challenges.
Path 1: The Interpreter (`--run`)
The interpreter walks the ANTLR parse tree directly and evaluates the program node by node. It handles the full language: all control flow (if/else if/else, while, C-style for, break, continue), all types (int, float, boolean, string, void), user-defined functions with return values, and friend module calls via dlopen/dlsym at runtime.
Friend call dispatch goes through a component called FriendTranslation, which selects a typed invoker based on the argument types. The invoker table supports up to 16 arguments. If any argument is a float, the entire call uses a float-mode invoker. This is a simplification with real consequences: you can't mix int and float in a single friend call and get independent type handling per argument. It was the right trade-off to get something working; it's not the right long-term design.
Path 2: The C Code Generator (`--emit-c`, `--build-native`, `--run-native`)
The C backend emits a C source file from EZ source, then hands it to clang to produce a native binary. This path exists to validate that the language is compilable in principle and to benchmark interpreter vs. compiled performance on the same inputs.
The C backend currently covers a strict subset: integer arithmetic, boolean logic, basic control flow, and print statements. It deliberately does not support friend calls yet: a friend call in the C backend path produces a diagnostic and stops. That's an intentional boundary while the C emission infrastructure matures.
There's a script (scripts/compare_backends.sh) that runs the same .ez file through both paths and diffs the output. It's a lightweight form of backend parity testing. The test suite currently shows 26 passing tests, 0 failures, and 1 expected failure.
The `env native;` declaration at the top of an EZ file isn't just metadata. It's the mechanism that tells the runtime which environment contract applies to the current program. Right now `native` is the only environment type, meaning "build and run against the host system using the configured compilers." The roadmap includes environments that map directly to Nix expressions, so `env python39;` would describe and reproduce the exact Python environment your program needs.
The Nix integration is currently opt-in and shallow. When a Nix environment file is detected, EZ can re-exec itself inside nix-shell before running. The resolved environment metadata (compilers, standards, output directories) is persisted to .ezenv/resolved.json so subsequent runs don't have to re-resolve everything from scratch.
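For concreteness, here is the kind of shape such a resolved file might have. This is a hypothetical example for illustration; I'm not asserting the actual schema of `.ezenv/resolved.json`:

```json
{
  "environment": "native",
  "cpp_compiler": "clang++",
  "cpp_standard": "c++17",
  "python": "/usr/bin/python3",
  "output_dir": ".ezenv/out"
}
```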
The design question I keep returning to: how tightly should the language be coupled to the environment layer? Right now they're fairly coupled: the same binary handles environment resolution and program execution. An alternative design would push environment concerns entirely into the tooling layer and treat the language as environment-agnostic. I don't think that alternative is right for the goals of EZ, but I'm not fully certain.
Between parsing and execution, EZ runs a semantic analysis pass (`semantic.cpp`) that builds a `SemanticModel`: a record of all global variable declarations and function signatures. The checker then validates, among other things, that friend call arguments use marshalable types (only `int`, `float`, and `boolean` are currently marshalable). This pass runs before any code executes. If it produces any diagnostics, the process exits with a non-zero code. This makes EZ fail-fast on type errors rather than discovering them mid-execution.
The semantic model currently only handles global scope. Function-level scope exists in the interpreter's runtime frame model, but it's not yet reflected in static analysis. This means the semantic checker can miss some errors that the interpreter would catch at runtime.
**The interpreter walks the CST directly.** I already mentioned this. The right path is an explicit lowering from parse tree to AST before any analysis or execution. Not doing this earlier is the biggest architectural debt in the codebase.
**The typed invoker table for friend calls.** The current approach, selecting a float-mode or int-mode invoker based on whether any argument is a float, is a hack that happens to work. The correct design is a proper type-aware argument marshalling layer that handles each argument independently. I'll have to rebuild this when I add struct or array support for friend arguments.
**`foreach` and classes are in the grammar but not in the runtime.** These are in EZLanguage.g4 and they parse correctly, but the interpreter doesn't execute them yet. That's visible debt: a user could write `for x in myList {}` and the parser will accept it, but at runtime it silently does nothing (or crashes, depending on context). This should either be properly implemented or produce a clear "not yet supported" diagnostic.
**The C backend is narrower than documented.** The README and wiki accurately note this, but it's worth being explicit: the C backend is more of a proof-of-concept for a simple subset than a production code path. It can't compile friend calls. It has limited type support. I sometimes caught myself framing it as "the compiled path" when it's really "a minimal compiled demonstration path."
How far should friend modules go before complexity explodes?
The current design handles primitive types cleanly. Once you need to pass non-primitive data across language boundaries (structs, heap-allocated objects, callbacks), you're in FFI territory and the "simplicity" story breaks down. I've seen this happen in every interop layer I've studied. The question is whether there's a principled place to draw the line, or whether the language should just be honest that complex interop requires explicit bridge code.
Should Nix integration stay deeply coupled or become optional/pluggable?
Right now it's in the binary. An alternative is a plugin model where the EZ binary calls out to an external resolver, and Nix is just one implementation of that resolver interface. This would make EZ work cleanly in non-Nix environments without the --no-env workaround flag it currently needs.
How much belongs to the language vs. the tooling layer?
`env native;` as a language keyword is a strong claim: environment is a first-class concern of the program, not a build system concern. I believe this is right. But there are counterarguments: most of what `env` does today could be expressed in a config file or a CLI flag without requiring a grammar change. The bet is that making it a language-level declaration makes the environment part of the contract of the program, not an implementation detail. I'm still testing that intuition.
The test suite passes at 26/26 (plus 1 expected failure for a known unsupported edge case). The interpreter handles the full core language. The C backend demonstrates compilation of a working subset. The friend module system works end-to-end for C, C++, and Python on macOS.
The project is genuinely experimental. Large parts of the grammar exist ahead of their implementation. The architecture has known debt. AI-assisted scaffolding was used in places. I'm documenting all of this explicitly because I think the useful thing to share isn't the finished product; it's the process and the open problems.
If you're into language design, compilers, or multi-language tooling, the repository is at github.com/Dead-Down-Studio/EZ-Language. Architectural criticism is more useful to me than praise right now.
Key source modules for anyone reading the code: src/main.cpp (orchestration), src/core/bootstrap_listener.cpp (first-pass extraction), src/core/semantic.cpp (type checking), src/core/interpreter.cpp (runtime), src/core/friend_translation.cpp (FFI dispatch), src/friends/friends.cpp (build planning), src/codegen/codegen_c.cpp (C emission).
Thank you for reading.