I started Guillotine in Rust, but quickly switched to Zig. This piece explains why, in hindsight, that was the right call.
- Zig: fanatically explicit, zero hidden control flow/allocations, first-class C interop, powerful comptime for safety + customization, simple mental model for size/perf.
- Rust: superb compiler and ecosystem, strong safety model, great tooling; but hidden control flow and panic/unwind mechanics make size/perf reasoning harder, and FFI ergonomics add ceremony.
- Result: For a browser-grade, size-sensitive, performance-critical EVM, Zig’s explicitness improves human reasoning, making it easier to hit strict size and perf targets while keeping strong safety guarantees.
Zig is low-level, C-like, close to the metal, and aggressively explicit:
- No hidden allocations or control flow.
- Explicit I/O and allocator dependencies.
- Minimal but well-chosen features (tagged enums, option/error types, defer/errdefer, bounds checks in debug, ReleaseSafe mode).
- comptime lets you write ordinary Zig that runs at compile time for checks, specialization, and codegen (see the sketch below).
You get the minimalism of Go with the control of C.
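To make that concrete, here’s a toy sketch (not Guillotine code; the opcodes and gas numbers are just illustrative) of a tagged enum, an exhaustive switch, and a comptime-built table written as ordinary Zig:

const std = @import("std");

const Op = enum(u8) { stop = 0x00, add = 0x01, mul = 0x02 };

// Ordinary Zig, evaluated at compile time because it initializes a
// container-level constant: build a 256-entry gas table once, with no
// runtime initialization cost.
fn buildGasTable() [256]u8 {
    var table = [_]u8{0} ** 256;
    table[@intFromEnum(Op.add)] = 3;
    table[@intFromEnum(Op.mul)] = 5;
    return table;
}

const gas_table = buildGasTable();

fn gasCost(op: Op) u8 {
    return switch (op) { // the compiler rejects a non-exhaustive switch
        .stop => 0,
        .add, .mul => gas_table[@intFromEnum(op)],
    };
}

test "comptime table, exhaustive switch" {
    try std.testing.expectEqual(@as(u8, 3), gasCost(.add));
    try std.testing.expectEqual(@as(u8, 0), gasCost(.stop));
}

Nothing here allocates and every branch is written out; the same pattern scales up to real jump tables and gas schedules.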
Rust is a high-level language with a very powerful compiler:
- Idiomatic Rust focuses on memory safety via ownership/borrowing.
- You can still drop to raw pointers/unsafe when needed.
- Great language features: traits, pattern matching, Option/Result.
- Best-in-class package ecosystem and build tooling (Cargo).
Rust is fantastic for robust systems. The question is whether it’s optimal for this problem.
- Smaller ecosystem vs. Rust → Zig treats C as first-class; you can import headers directly and ship a built-in C toolchain. Many mature C libs are one @cImport away.
- Language churn → True, but manageable with CI. In practice, breakage has been minor for us compared to the benefits we get.
- “Less safe than Rust” → Zig ships a lot of safety (debug checks, ReleaseSafe, optional/error types, strict pointer types, defer/errdefer, comptime invariants). It’s differently safe—and more explicit.
- Missing high-level features like traits → comptime fills this role with simpler, more transparent mechanics.
Rust’s FFI is robust but intentionally inconvenient: crossing the boundary forfeits guarantees and forces wrapper code. That friction nudges you toward the Rust ecosystem even when a C lib would do.
Zig’s model is the opposite:
const c = @cImport({
    @cInclude("clib.h");
});

With Zig’s build system, using C (and even Rust via C-ABI shims) is straightforward. Guillotine depends on C for EIP-4844 KZG and on audited Rust crypto (arkworks) exposed through stable C interfaces. The integration is painless.
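For completeness, here’s a minimal build.zig sketch of what that wiring can look like (the paths and names are placeholders, and the builder API shifts slightly between Zig releases):

const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Compile the vendored C library with Zig's built-in C toolchain and make
    // its headers visible to @cImport.
    exe.addIncludePath(b.path("vendor/clib/include"));
    exe.addCSourceFile(.{ .file = b.path("vendor/clib/clib.c"), .flags = &.{"-O2"} });
    exe.linkLibC();

    b.installArtifact(exe);
}

From there, @cImport in application code picks up the headers with no separate bindings layer.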
Guillotine targets the browser, so size matters. Prior EVMs ship ~400 KB+; over mobile connections that’s painful.
In like-for-like tests tuned for size, Zig builds were ~20% smaller than Rust for the same logic targeting Wasm (observed across multiple examples). Some of this is compiler/runtime differences, but a lot stems from Zig’s “no hidden control flow” philosophy—you don’t accidentally pull in large panic/unwind or iterator machinery.
Example
Rust (pleasant, high-level):
use std::fs::File;
use std::io::{self, BufRead, BufReader};

fn main() -> io::Result<()> {
    let file = File::open("numbers.txt")?;
    let sum: i32 = BufReader::new(file)
        .lines()
        .filter_map(|line| line.ok()?.parse::<i32>().ok())
        .sum();
    println!("Sum: {}", sum);
    Ok(())
}

Zig (more verbose, but explicit and predictable):
const std = @import("std");
pub fn main() !void {
const allocator = std.heap.page_allocator;
var file = try std.fs.cwd().openFile("numbers.txt", .{});
defer file.close();
var reader = std.io.bufferedReader(file.reader());
var stream = reader.reader();
var total: i32 = 0;
while (true) {
const bytes = stream.readUntilDelimiterOrEofAlloc(allocator, '\n', 1024) catch |err| {
if (err == error.EndOfStream) break;
return err;
} orelse break;
defer allocator.free(bytes);
const n = std.fmt.parseInt(i32, bytes, 10) catch null;
if (n) |v| total += v;
}
std.debug.print("Sum: {}\n", .{ total });
}Why this matters: in Zig, lines map to real work and real size. In Rust, seemingly tiny changes (e.g., a panic!) can balloon code via panic/unwind paths, trait/iterator glue, etc. You can write tiny Rust, but Zig makes tiny the default.
Zig vs. Rust vs. C often trade wins in microbenches depending on compilers and versions. The headline speed isn’t the point.
The point: Zig makes performance reasoning easy. No hidden work. You see the allocations, control flow, and pointer math. That feedback loop shortens the path to a fast EVM.
Rust (revm excerpt):
pub struct Stack {
    data: Vec<U256>,
}

impl Stack {
    #[inline]
    pub fn push(&mut self, value: U256) -> bool {
        // STACK_LIMIT is the EVM's 1024-entry stack depth limit.
        debug_assert!(self.data.capacity() >= STACK_LIMIT);
        if self.data.len() == STACK_LIMIT { return false; }
        self.data.push(value);
        true
    }
}

Where/when does it allocate? You need to know how Vec grows, when it re-allocates, and how OOM behaves (usually abort/panic). You can engineer arenas and pools in Rust, but the ergonomics fight you.
Zig (explicit allocator + failure as a value):
const std = @import("std");
const WordType = u256; // example
pub const Stack = struct {
    const Self = @This();

    buf_ptr: [*]align(64) WordType,
    stack_ptr: [*]WordType,

    pub const Error = error{ AllocationError };

    pub fn init(allocator: std.mem.Allocator, stack_capacity: usize) Error!Stack {
        const memory = allocator.alignedAlloc(WordType, 64, stack_capacity)
            catch return Error.AllocationError;
        errdefer allocator.free(memory);
        const base_ptr: [*]align(64) WordType = memory.ptr;
        return .{
            .buf_ptr = base_ptr,
            .stack_ptr = base_ptr + stack_capacity,
        };
    }

    // assert() and stack_limit() are small helpers elided from this excerpt.
    pub inline fn push(self: *Self, value: WordType) void {
        @branchHint(.likely);
        self.assert(@intFromPtr(self.stack_ptr) > @intFromPtr(self.stack_limit()), "Stack overflow in push_unsafe");
        self.stack_ptr -= 1;
        self.stack_ptr[0] = value;
    }
};

Benefits:
- Allocation sites are explicit (readers see them immediately).
- Failures are values, not panics—so you can recover/propagate.
- Allocator injection makes arenas/slabs/custom growth strategies trivial to plug in and benchmark.
For Guillotine, we aim for near-zero runtime allocation; where unavoidable (e.g., large words, stack growth), we pre-size via arenas and use growth policies tuned from real traces.
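As a rough sketch of what that looks like at the call site (reusing the Stack type above; the 1024-word capacity is just illustrative), swapping the page allocator for an arena or a fixed, preallocated buffer is a one-line change, which is what makes growth strategies cheap to benchmark:

const std = @import("std");

test "stack backed by an arena" {
    // Everything the stack allocates comes from one arena and is released in one call.
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena.deinit();

    const stack = try Stack.init(arena.allocator(), 1024);
    _ = stack; // pushes/pops elided; the point is where the memory comes from
}

test "stack backed by a static buffer (no heap allocation)" {
    var buffer: [1024 * @sizeOf(u256)]u8 align(64) = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);

    const stack = try Stack.init(fba.allocator(), 1024);
    _ = stack;
}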
Zig has substantial, programmable safety:
- Comptime checks (write ordinary Zig that runs at compile time).
- Strict pointer and optional types, including optional pointers.
- First-class error and ?T option types with language syntax.
- Bounds checks & leak detection in debug/tests.
- ReleaseSafe mode keeps guards in production builds.
- Deterministic cleanup with defer/errdefer (see the sketch below).
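Here is a small sketch of the runtime-facing pieces (the names are illustrative, not Guillotine’s API): optionals and error unions are handled in the open, errdefer ties cleanup to the failure path, and the test allocator doubles as a leak detector:

const std = @import("std");

const Account = struct {
    code: []u8,
};

fn createAccount(allocator: std.mem.Allocator, code_hex: ?[]const u8) !Account {
    const src = code_hex orelse return error.NoCode; // the "absent" case is explicit
    const code = try allocator.alloc(u8, src.len / 2);
    errdefer allocator.free(code); // runs only if a later step fails
    _ = try std.fmt.hexToBytes(code, src);
    return .{ .code = code };
}

test "failures are values, cleanup is declared" {
    const gpa = std.testing.allocator; // reports leaks when the test ends
    const acct = try createAccount(gpa, "6001");
    defer gpa.free(acct.code);
    try std.testing.expectEqual(@as(u8, 0x60), acct.code[0]);
    // The error paths free their allocation via errdefer, so nothing leaks here.
    try std.testing.expectError(error.NoCode, createAccount(gpa, null));
    try std.testing.expectError(error.InvalidCharacter, createAccount(gpa, "zz"));
}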
Example (compile-time invariant for synthetic opcodes):
const std = @import("std");
const Opcode = enum { /* ... */ };
const OpcodeSynthetic = enum { /* ... */ };
comptime {
for (@typeInfo(OpcodeSynthetic).@"enum".fields) |syn_field| {
if (std.meta.intToEnum(Opcode, syn_field.value) catch null) |conflict| {
@compileError(std.fmt.comptimePrint(
"Synthetic opcode {s} (0x{X}) conflicts with normal opcode {s}",
.{ syn_field.name, syn_field.value, @tagName(conflict) },
));
}
}
}This is safer than relying on runtime asserts and clearer than sprinkling macros or tricky trait bounds. If you do drop to pointer casts/arithmetic, Zig’s type system still gives you sharp, opt-in safety rails.
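For instance, a hypothetical helper (not Guillotine code) that reinterprets bytes as 256-bit words has to spell out both the alignment claim and the element-type change, and @alignCast keeps a runtime check in Debug and ReleaseSafe builds:

const std = @import("std");

// Reinterpret a byte buffer as u256 words. The cast cannot happen by accident:
// @alignCast asserts (and, in safe builds, checks) the alignment, and @ptrCast
// changes the element type in plain sight.
fn asWords(bytes: []const u8) []const u256 {
    const aligned: [*]align(@alignOf(u256)) const u8 = @alignCast(bytes.ptr);
    const words: [*]const u256 = @ptrCast(aligned);
    return words[0 .. bytes.len / @sizeOf(u256)];
}

test "explicit, checked pointer casts" {
    var buf: [64]u8 align(@alignOf(u256)) = [_]u8{0} ** 64;
    std.mem.writeInt(u256, buf[0..32], 7, .little);
    const words = asWords(&buf);
    try std.testing.expectEqual(@as(usize, 2), words.len);
    try std.testing.expectEqual(@as(u256, 7), words[0]); // little-endian target assumed
}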
Revm is very customizable through traits/generics. Zig achieves the same outcome with simpler mechanics using comptime configs.
Rust-style (schematic):
let mut evm = Evm::<Context<...>, (), EthInstructions<...>, EthPrecompiles::default()>::default();

Zig-style (configuration object):
const MyEvm = evm.Evm(.{
    .eips = .{ .hardfork = .cancun },
    .max_call_depth = 1024,
    .stack_size = 500,
    .DatabaseType = Database,
    .opcode_overrides = MY_OPCODE_OVERRIDES,
    .precompile_overrides = precompile_overrides,
    .tracer_config = .disabled,
});

var vm: MyEvm = undefined; // an instance of the specialized type (initialization elided)

Under the hood, Evm returns a type specialized by the config:
pub fn Evm(comptime config: EvmConfig) type {
    return struct {
        const Self = @This();

        pub const Frame = @import("frame/frame.zig").Frame(config.frame_config());
        pub const Bytecode = @import("bytecode/bytecode.zig").Bytecode(.{
            .max_bytecode_size = config.max_bytecode_size,
            .max_initcode_size = config.max_initcode_size,
            .fusions_enabled = config.enable_fusion,
        });
        // ...
    };
}

You can even pick minimum-width integer types at compile time to avoid over-wide counters:
pub fn PcType(comptime self: Self) type {
    return if (self.max_bytecode_size <= std.math.maxInt(u8)) u8
        else if (self.max_bytecode_size <= std.math.maxInt(u12)) u12
        else if (self.max_bytecode_size <= std.math.maxInt(u16)) u16
        else if (self.max_bytecode_size <= std.math.maxInt(u32)) u32
        else @compileError("Bytecode size too large (must fit in u32).");
}

This kind of specialization is trivial and transparent in Zig.
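A self-contained usage sketch of the same idea (Config and Frame here are illustrative stand-ins, not Guillotine’s actual types): the comptime-selected width flows straight into the specialized struct’s fields.

const std = @import("std");

const Config = struct {
    max_bytecode_size: usize = 24_576, // the EVM contract-size cap as the default

    pub fn PcType(comptime self: @This()) type {
        return if (self.max_bytecode_size <= std.math.maxInt(u8)) u8
            else if (self.max_bytecode_size <= std.math.maxInt(u12)) u12
            else if (self.max_bytecode_size <= std.math.maxInt(u16)) u16
            else if (self.max_bytecode_size <= std.math.maxInt(u32)) u32
            else @compileError("Bytecode size too large (must fit in u32).");
    }
};

fn Frame(comptime config: Config) type {
    return struct {
        pc: config.PcType() = 0, // width chosen at compile time from the config
    };
}

test "pc width follows the configured bytecode cap" {
    const Small = Frame(.{ .max_bytecode_size = 4095 });
    const Default = Frame(.{});
    try std.testing.expect(@TypeOf(@as(Small, .{}).pc) == u12);
    try std.testing.expect(@TypeOf(@as(Default, .{}).pc) == u16);
}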
Zig gives you C-level control with better defaults:
- Manual control when you want it; debug/time-safe modes when you need them.
- No macros, no metaprogramming gotchas—just code you can read and reason about.
- Easy to audit: what you write is what runs.
For a browser-targeted EVM where bundle size and predictability matter as much as raw speed, that explicitness is a superpower.
Rust is excellent.
But for Guillotine—an EVM meant to be tiny, fast, and predictable in the browser—Zig wins. Its explicit control over allocations and control flow, simple FFI, and comptime-driven safety/customization reduce both code size and the human time needed to reason about performance. That combination is exactly what this project demands.