
March 1, 2026 · 3 min read

Hakai: I Wrote a Rust + Bun Hybrid to Delete node_modules and I Have No Regrets

npkill was too slow. So I built a parallel directory destroyer in Rust with a Bun TUI because apparently I hate free time.

Dev Story · Rust · Bun · CLI · Performance


"Throughout the filesystem and the disk, I alone am the honored one."

It started, as all terrible ideas do, with a minor inconvenience.

I ran npkill on my projects folder -- five years of side projects, abandoned tutorials, and repos I swore I'd finish. It found 50,000+ directories. Twelve seconds just to scan. Sequential size calculation. And when I hit delete on a 5GB node_modules? The UI froze while Node.js heroically attempted thousands of blocking I/O calls.

Unacceptable.

So naturally, I chose the hardest possible approach.

The Architecture Nobody Asked For

I call it hakai (破壊) -- "destruction" in Japanese -- because naming things is the second hardest problem in computer science, and for once, this one was easy.

The core insight: JavaScript is not built for high-performance filesystem work. Rust is. So the engine -- scanning, size calculation, deletion, risk analysis -- is compiled Rust. rayon gives me a work-stealing thread pool that distributes directory traversal across every CPU core. It's embarrassingly parallel and I'm not embarrassed at all.

But here's where the yak-shaving gets beautiful.

Building TUIs in Rust is... an experience. So the frontend -- keybinds, diff-based rendering, regex search, multi-select -- runs in Bun. Sub-50ms startup, 60fps rendering, and I already knew TypeScript.

The two halves talk over a newline-delimited JSON IPC protocol on stdin/stdout pipes. Rust fires scan results, sizes, and deletion progress down stdout. Bun reads them on stdin, parses each line, and renders without blocking. One JSON object per line, trivially debuggable with jq.
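The consumer side of a protocol like this is small enough to sketch. Here's a minimal, hypothetical version of the Bun-side NDJSON parser (message shapes and names are my invention, not hakai's actual API); the one subtlety is that a stdout chunk can end mid-line, so the incomplete tail has to be buffered until the next chunk arrives.

```typescript
// Hypothetical message shapes for illustration; hakai's real ones may differ.
type EngineMessage =
  | { type: "found"; path: string }
  | { type: "size"; path: string; bytes: number }
  | { type: "deleted"; path: string };

let buffer = "";

function parseChunk(chunk: string): EngineMessage[] {
  buffer += chunk;
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep the incomplete tail for the next chunk
  return lines
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as EngineMessage);
}

// Wiring it to the Rust child process might look like (Bun API):
// const proc = Bun.spawn(["hakai-engine", "--scan", "."], { stdout: "pipe" });
// for await (const chunk of proc.stdout) {
//   for (const msg of parseChunk(new TextDecoder().decode(chunk))) render(msg);
// }
```

Because each message is a single line, you can also tee the engine's stdout through `jq` and watch the protocol live.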

A chaotic marriage that infuriates purists on both sides. Perfect.

The Benchmarks of Spite

Here's what happens when you massively over-engineer a file deletion tool:

| Operation | npkill (Node) | hakai (Rust+Bun) | Gain |
| --- | --- | --- | --- |
| Scan 50k dirs | ~8-12s | <1s | 10-15x |
| Size calculation | Sequential | Parallel (rayon) | 8-12x |
| Delete 5GB folder | ~45s (blocks UI) | Async, non-blocking | 4-6x |
| Cold startup | ~400ms | <50ms | 8x |

When you multi-select 100 folders and hit enter, hakai fires up to 8 concurrent deletion tasks via tokio. The UI never blocks. The scan never pauses. Folders just vanish. It finishes before your hand reaches the mouse.
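In hakai the concurrency cap lives on the Rust side as tokio tasks, but the bounded-worker pattern itself is language-agnostic. Here's the same idea sketched in TypeScript -- `mapWithLimit` is a hypothetical helper, not hakai's code -- where N workers pull from a shared index until the queue drains:

```typescript
// Bounded-concurrency map: at most `limit` tasks are in flight at once.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each worker grabs the next index until everything is claimed.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  }

  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}

// e.g. delete up to 8 folders at once without blocking the render loop:
// await mapWithLimit(selectedPaths, 8, (p) =>
//   fs.promises.rm(p, { recursive: true, force: true }));
```

The fixed cap matters: unbounded `Promise.all` over 100 recursive deletes would thrash the disk, while a pool of 8 keeps throughput high without starving the UI's event loop.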

The Weekend That Kept Growing

Let me be transparent about the scope creep.

What started as "make npkill but faster" turned into: handling Windows' 260-character MAX_PATH limit with \\?\ extended-length path prefixes; fixing TTY detection for Windows Git Bash, which is somehow still broken in 2026; adding risk analysis that flags an orphaned node_modules with no sibling package.json so you don't nuke a monorepo root; and building a headless mode with --json output for CI pipelines, because scope creep doesn't stop until the PR is merged.
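Two of those checks are simple enough to sketch. These are illustrative TypeScript versions with invented names, not hakai's real (Rust) implementations: the first applies the Windows \\?\ prefix that lifts the 260-character MAX_PATH limit for absolute drive-letter paths, and the second flags a node_modules whose parent has no package.json.

```typescript
import { existsSync } from "node:fs";
import path from "node:path";

// Prefix absolute drive-letter paths (C:\...) so Win32 APIs accept
// paths longer than 260 characters. Already-prefixed paths pass through.
function toExtendedLengthPath(p: string): string {
  if (p.startsWith("\\\\?\\")) return p;
  return /^[A-Za-z]:\\/.test(p) ? "\\\\?\\" + p : p;
}

// A node_modules whose parent has no package.json is probably debris --
// or, dangerously, something you should not delete blindly. Flag it.
function looksOrphaned(nodeModulesDir: string): boolean {
  const parent = path.dirname(nodeModulesDir);
  return !existsSync(path.join(parent, "package.json"));
}
```

On POSIX paths `toExtendedLengthPath` is a no-op, which is exactly what you want from a cross-platform code path.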

"It works on my machine" wasn't good enough. It had to work on everyone's machine.

Was It Worth It?

I spent a full weekend configuring Rust cross-compilation targets, debugging IPC buffer edge cases, and fighting tokio task cancellation semantics -- all to save roughly 10 seconds of my day once a month.

Let's do the math. If I use hakai twice a month and save 12 seconds each time, that's 24 seconds a month. Call the weekend 16 hours of development -- 57,600 seconds -- and I'll break even in approximately... 200 years.

But that benchmark table looks really good in a README.

So yes. Obviously worth it.

Grab hakai on GitHub. Your node_modules had it coming.