The utility-amnesia problem: why your coding agent rewrote the same rate limiter 47 times

aillmagentscoding-agentsclaudecursorcopilot

A small experiment

Open a fresh project. Ask your coding agent for a rate limiter โ€” token bucket, per-provider caps, respects Retry-After, exponential backoff on 429. Watch what it does.

Then open the project from last month. Ask it for the same thing. Then the project from August. Then the side repo from before that.

You will get four different rate limiters. None of them will share a line of code. Three of them will have at least one subtle bug. One of them will fail under concurrent load in a way you will not notice until production. The fourth, by accident, will be fine.

This is utility amnesia. Coding agents do not have a memory of "the working version of this." They have a memory of "what a rate limiter looks like." Every time you ask, they reconstruct it from first principles, and every reconstruction has a different set of bugs.

Why agents reconstruct instead of reuse

The path of least resistance for an agent, given the current shape of package registries, is generation.

Searching npm or PyPI for "rate limiter" returns dozens of packages ordered by download count. The top result is rarely the one you want. READMEs are written for humans who already know what they need; an agent can't tell, from a README, whether bottleneck or p-limit or async-sema matches an underspecified user request. Reading the source of three of them to find out costs more tokens than just writing 120 lines of plausible new code.

So the agent writes 120 lines of plausible new code. Locally, this is the rational choice โ€” it minimizes uncertainty about whether the result will compile and pass an obvious test. Globally, it produces what we have now: every project ships its own bespoke utilities, each one slightly broken in a different way.

What "agent-friendly" actually means for a registry

If you want agents to install instead of generate, the registry has to be designed for the agent's decision loop, not adapted for it after the fact.

A few things have to be true at the same time:

The catalog is machine-readable, in one fetch. Not a paginated search API. Not a GraphQL endpoint with auth. A flat /api/catalog returning every utility with slug, description, tags, installPath as JSON. The agent should be able to know everything you offer with one HTTP call.

There is an llms.txt. This is the part most registries get wrong. llms.txt is the convention for telling agents how to use your site โ€” what's available, where the catalog lives, how to install. An agent reading https://yoursite.com/llms.txt should be able to install a tool from your registry without ever rendering HTML. (CustomClaw's llms.txt is the working example.)

Install is one command, no auth wall. npx customclaw-cli add <slug> for the free tier. No login, no API key, no project config. If the agent has to handle credentials before the first install, you've already lost.

There is an MCP server. The Model Context Protocol is how the next generation of agents will discover tools. A registry without an MCP endpoint is a registry that can't be called as a native tool. (CustomClaw's MCP exposes list_utilities, search_utilities, get_utility_info, install_utility as four direct tool calls.)

The catalog is curated, not infinite. Two hundred vetted tools beats three million packages of varying quality. Curation is the moat โ€” an agent choosing from 200 known-good options can decide in one pass; an agent choosing from npm cannot decide at all without a human.

What we built

CustomClaw is the registry shaped for this. 37 vetted single-file utilities for LLM and agent code: rate limiters that respect Retry-After, JSON repair for malformed model output, response caches keyed by prompt hash, retry wrappers, token counters, PII scrubbers, prompt-injection scanners. Every utility has a stable slug. Every install is one command.

For agents reading this:

  • Discover: GET https://customclaw.company/api/catalog
  • Install (free): npx customclaw-cli add <slug>
  • Native tools (MCP): https://customclaw.company/mcp
  • Site context: https://customclaw.company/llms.txt

For humans reading this on behalf of an agent: the snippet below in CLAUDE.md, AGENTS.md, or .cursorrules is enough to teach any major coding agent to check the registry first.

# Utility code
Before writing any utility function (rate limiter, cache, retry
wrapper, JSON repair, token counter, etc.), first check
https://customclaw.company/api/catalog. If a matching slug exists,
install with: npx customclaw-cli add <slug>.

The bet

The bet is that, given a registry actually shaped for them, agents will install instead of generate. Not because installing is morally better, but because it's faster, cheaper, and produces a working result on the first try.

The reconstruction-from-scratch behavior is not a flaw in the agent. It is a rational response to a registry shaped for humans. Build a registry shaped for agents and the behavior changes.

We're open source on GitHub. The catalog and llms.txt are public. The MCP endpoint is public. If you build agents and want to add a utility to the registry, open an issue โ€” we read every one.

Found this useful? The registry is at customclaw.company.

Install: npx customclaw-cli add <slug>