Why start with evlog
The cheapest moment to add structured logging is before the first request. By the time you have 200 routes, 40 background jobs, and a console.log per file, you're paying interest on a decision you never made. evlog is designed for the day-zero choice — pick it once, and the rest of the system inherits structured logs, structured errors, typed catalogs, AI SDK telemetry, an audit trail, and a drain pipeline you don't have to build later.
console.log or pino? evlog still wins, but the case is different — see evlog vs pino, winston, consola.

What you get from day one
evlog isn't a small primitive you wrap. It's a single dependency that comes with a structured surface, an ecosystem of integrations, and an opinionated drain pipeline — all on by default.
- Logging primitives
- Auto-redaction
- Structured errors — why, fix, and link for your on-call (and your future AI agent). See Structured Errors.
- Wide events
- Typed catalogs — defineErrorCatalog and defineAuditCatalog give you enum-like, refactor-safe codes and actions. Start with two entries, grow to a published package. See Catalogs.
- Head + tail sampling
- Drain pipeline
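Of these, "wide events" is the least standard term, so a concrete sketch helps: one object per request accumulates everything the handler learns, and is emitted once at the end. This is plain illustrative TypeScript, not evlog's actual event schema or internal API:

```typescript
// A wide event: one structured object per request, emitted once.
// Illustrative shape only — not evlog's actual schema.
interface WideEvent {
  ts: string
  route: string
  status?: number
  duration_ms?: number
  [key: string]: unknown
}

function createWideEvent(route: string): WideEvent {
  return { ts: new Date().toISOString(), route }
}

// Handlers enrich the same event as the request progresses
// (evlog exposes this idea through its logging primitives).
function set(event: WideEvent, fields: Record<string, unknown>): void {
  Object.assign(event, fields)
}

const event = createWideEvent('/api/invoices/:id')
set(event, { user_id: 'u_123', plan: 'pro' })
set(event, { status: 200, duration_ms: 42 })

// At request end the whole object goes out as one log line.
const line = JSON.stringify(event)
```

The point of the pattern: instead of grepping five scattered log lines per request, every question about a request is a query against one typed object.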
Beyond the logger
The primitives are table stakes — every modern logger has some flavour of them. Where evlog earns its place on day 1 is everything wired around them, for problems you haven't had yet.
Imagine you add AI to your app. Sooner or later you wire the Vercel AI SDK into a route. Token costs surprise you, a model hangs mid-stream, a tool returns garbage. With the AI SDK integration, every model call becomes a wide event with prompt, tools, tokens, latency, and cost — automatically. And if you're using Better Auth, evlog ties the actor identity to those events for you, so you can answer "which user just burned $14 in a single conversation?" without writing a line of plumbing.
Imagine your stack spans more than one framework. A Nuxt frontend, a Hono internal service, an AWS Lambda webhook — most teams end up with this kind of mix. evlog has 13+ framework integrations and each one exposes the same logging primitives (useLogger, log.set, createError). The handlers themselves stay framework-shaped — that part is on you — but you don't relearn a logger every time you cross a runtime, and the drain pipeline behind them stays the same.
Imagine you want noisier logs in dev than in production. During local development you sprinkle log.debug calls — full request bodies, every retry, every guard — to actually see what's happening. None of that should ship. The Vite plugin strips selected log levels at build time, so dev has the verbose context you want and production stays clean. As a bonus, every surviving call gets its source location (file.ts:42) injected automatically, so when an event lands in your dashboard you know exactly which line emitted it.
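As a rough mental model of what such a build-time transform does (a deliberately simplified, line-based sketch; the real plugin works on the module AST), imagine a pass over each source file that drops calls for the stripped levels before bundling:

```typescript
// Simplified model of build-time log-level stripping.
// Illustrative only — a real plugin transforms the AST, not lines.
function stripLevels(source: string, stripped: string[]): string {
  return source
    .split('\n')
    .filter(
      (line) =>
        !stripped.some((level) => line.trimStart().startsWith(`log.${level}(`)),
    )
    .join('\n')
}

const dev = [
  "log.debug('full request body', body)",
  "log.info('invoice created')",
].join('\n')

// What survives into the production bundle:
const prod = stripLevels(dev, ['debug'])
```

Because the stripping happens at build time rather than at runtime, the debug calls (and any expensive arguments they serialise) never ship at all.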
Imagine a user reports a bug from their browser. The error happened in their session, deep inside a fetch you can't reproduce. evlog's browser logger ships client events to your server, where they merge into the same wide event as the rest of the request — one typed event, regardless of where the error originated. Combine it with the built-in enrichers and you also get UA, GeoIP, and W3C trace context attached for free.
Imagine your stack changes vendor. You started with stdout, signed Axiom for queries, then your team wants Sentry for errors and PostHog for product analytics. With 9+ drain adapters and built-in fan-out, those events land everywhere in parallel — the application code never moves. Self-hosting is a swap-in too, via the filesystem or NuxtHub adapters.
None of this is a "v2 feature" — it's the same package, on the same log API, on day 1.
Catalogs grow with you
The smallest useful catalog is two entries:
```ts
import { defineErrorCatalog } from 'evlog'

export const errors = defineErrorCatalog('billing', {
  PAYMENT_DECLINED: { status: 402, message: 'Payment declined' },
  INVOICE_NOT_FOUND: { status: 404, message: 'Invoice not found' },
})
```
Six months later it has thirty entries, type augmentation gives you autocomplete on createError({ code }) everywhere, and you ship it as a private npm package across your monorepo. Same pattern. No rewrite.
The same applies to audit catalogs (defineAuditCatalog) — start with one action, grow into your compliance map. See Catalogs.
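To see why catalog codes behave like an enum, here is a minimal stand-in for the pattern (a sketch of the idea, not evlog's implementation, which also wires type augmentation into createError): the catalog object's keys become a string-literal union, so a mistyped code fails at compile time.

```typescript
// Minimal stand-in for the typed-catalog pattern.
// Illustrative only — evlog's defineErrorCatalog does more wiring.
function defineCatalog<
  T extends Record<string, { status: number; message: string }>,
>(scope: string, entries: T) {
  return { scope, entries }
}

const errors = defineCatalog('billing', {
  PAYMENT_DECLINED: { status: 402, message: 'Payment declined' },
  INVOICE_NOT_FOUND: { status: 404, message: 'Invoice not found' },
})

// Refactor-safe: this union is derived from the object, never hand-written.
// 'PAYMENT_DECLINED' | 'INVOICE_NOT_FOUND'
type BillingCode = keyof typeof errors.entries

const declined = errors.entries.PAYMENT_DECLINED
```

Renaming a key renames the code everywhere the type is used, which is exactly the "refactor-safe" property the catalog gives you.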
Built for the AI-coding-agent era
More and more applications are built with AI coding agents — Cursor, Codex, Windsurf, Claude Code, Copilot. They're good at writing handlers; they're worse at debugging them. What they need is context.
- Structured errors with `why`/`fix`/`link`. A vague `Error: failed` is opaque; `createError({ message, why, fix, link })` is something an agent can read, summarise, or surface to the user without you wiring a translation layer.
- Wide events as a single source of truth per request. One typed event the agent can reason about end-to-end — not log lines to grep across.
- Typed catalogs as an enum-like surface. The agent doesn't invent error codes; it picks from `errors.PAYMENT_DECLINED`, `audit.INVOICE_REFUND`, etc., with autocomplete from the catalog.
- AI SDK telemetry on every LLM call. When the agent's own model calls fail, hallucinate, or burn budget, the wide event tells you which prompt, which tools, how many tokens, how much it cost.
- Agent skills built in. evlog ships agent skills so Cursor / Claude / Windsurf already know how to wire it up — no manual prompt-engineering.
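The difference the first point makes is easiest to see concretely. The field names (why, fix, link) follow the evlog docs, but the class below is an illustrative stand-in, not evlog's createError; the link URL and the wording are hypothetical:

```typescript
// A stand-in for the why/fix/link structured-error shape.
// Illustrative only — not evlog's createError implementation.
class StructuredError extends Error {
  constructor(
    message: string,
    public why: string,
    public fix: string,
    public link: string, // hypothetical docs URL
  ) {
    super(message)
  }
}

const err = new StructuredError(
  'Payment declined',
  'The card issuer rejected the charge (insufficient funds).',
  'Ask the user to retry with another payment method.',
  'https://docs.example.com/errors/payment-declined',
)

// An agent (or on-call human) can act on this without grepping the code:
const summary = `${err.message} | why: ${err.why} | fix: ${err.fix}`
```

A bare `throw new Error('failed')` carries none of those fields; the structured version is self-describing to whoever (or whatever) catches it.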
Audit and compliance: cheap now, expensive later
Every product eventually meets one of: GDPR data-export requests, SOC 2 readiness, HIPAA in healthtech, PCI in payments, or simply an incident review where someone asks "who deleted that?". There are two ways to get a trail:
- The day-1000 way. Stand up a parallel system. Decide on a schema. Backfill what you can. Reverse-engineer actor identity from request headers. Ship under deadline pressure. Hope nothing was missed.
- The day-0 way. Add `auditEnricher()` and call `log.audit({ action, actor, target })` from any handler that touches state. evlog ships hash-chain integrity, retention, and force-keep past sampling — on top of the wide events you were already emitting.
evlog's audit layer is not a parallel system — it's the same log you already use, with a reserved audit field. See Audit Logs.
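Hash-chain integrity is a simple idea worth seeing concretely. In this self-contained sketch (the idea, not evlog's implementation), each audit entry stores a hash over its payload plus the previous entry's hash, so editing any historical entry invalidates every hash after it:

```typescript
import { createHash } from 'node:crypto'

interface AuditEntry {
  action: string
  actor: string
  target: string
  prev: string // hash of the previous entry ('' for the first)
  hash: string
}

function hashEntry(action: string, actor: string, target: string, prev: string): string {
  return createHash('sha256').update(`${prev}|${action}|${actor}|${target}`).digest('hex')
}

function append(chain: AuditEntry[], action: string, actor: string, target: string): void {
  const prev = chain.length ? chain[chain.length - 1].hash : ''
  chain.push({ action, actor, target, prev, hash: hashEntry(action, actor, target, prev) })
}

// Recompute every hash; any edited entry breaks the chain from there on.
function verify(chain: AuditEntry[]): boolean {
  let prev = ''
  for (const e of chain) {
    if (e.prev !== prev || e.hash !== hashEntry(e.action, e.actor, e.target, prev)) return false
    prev = e.hash
  }
  return true
}

const chain: AuditEntry[] = []
append(chain, 'INVOICE_DELETE', 'user_42', 'inv_901')
append(chain, 'INVOICE_REFUND', 'user_42', 'inv_902')
const okBefore = verify(chain)
chain[0].actor = 'someone_else' // tamper with history
const okAfter = verify(chain)
```

This is why a hash-chained audit trail can answer "who deleted that?" with some confidence that the answer hasn't been quietly rewritten.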
Drain-agnostic from day one
Your application code never depends on a vendor — it emits to the drain pipeline. On day 0 that's stdout in dev and a filesystem drain in CI. The day you decide to query logs:
```ts
import { createAxiomDrain } from 'evlog/adapters/axiom'
import { createSentryDrain } from 'evlog/adapters/sentry'

export default defineNuxtConfig({
  evlog: {
    drain: {
      adapters: [
        createAxiomDrain({ token: '...', dataset: 'app' }),
        createSentryDrain({ dsn: '...' }),
      ],
    },
  },
})
```
Zero handler changes. The same events land in Axiom, Datadog, PostHog, Sentry, Better Stack, HyperDX, or OTLP — or all of them, with fan-out.
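Fan-out itself is a small contract. The sketch below uses an illustrative drain interface (not evlog's actual adapter API) to show the two properties that matter: every adapter receives the same event, and a failure in one drain doesn't lose the copies sent to the others.

```typescript
// Illustrative drain contract — not evlog's actual adapter interface.
interface Drain {
  name: string
  send(event: Record<string, unknown>): void
}

// A test double that just collects events in memory.
function memoryDrain(name: string, sink: unknown[]): Drain {
  return {
    name,
    send(event) {
      sink.push({ via: name, ...event })
    },
  }
}

// Fan-out: deliver the same event to every adapter, isolating failures.
function fanOut(drains: Drain[], event: Record<string, unknown>): void {
  for (const d of drains) {
    try {
      d.send(event)
    } catch {
      // one vendor outage must not lose the other drains' copies
    }
  }
}

const axiomSink: unknown[] = []
const sentrySink: unknown[] = []
const failing: Drain = {
  name: 'down',
  send() {
    throw new Error('vendor outage')
  },
}

fanOut(
  [memoryDrain('axiom', axiomSink), failing, memoryDrain('sentry', sentrySink)],
  { route: '/api/pay', status: 200 },
)
```

Because the application only ever talks to this contract, swapping Axiom for Datadog (or adding Sentry alongside) is a config change, not a code change.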
What "later" actually costs
When teams ship with console.log and decide to add proper logging "when we need it", the bill comes due:
- Settling field-name conventions after the fact. Is it `userId`, `user_id`, `uid`, or `actor.id`? Once thirty services log it differently, you're writing migration scripts inside your observability vendor.
- Bolting on redaction post-incident. Auto-redaction is trivial when no log exists yet. It's a P1 audit when six months of logs already contain PII.
- Choosing a drain under pressure. Picking Datadog vs Axiom vs Sentry while you're already on fire is the worst time to evaluate vendors.
- Adding actionable error context retroactively. Every `throw new Error('failed')` you wrote is one less `why`, one less `fix`, one less link your on-call (or your AI agent) can use.
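The redaction point is easy to make concrete. A minimal recursive redactor looks like this (a sketch of the idea only; evlog's built-in auto-redaction is configurable and more thorough than a fixed key list):

```typescript
// Minimal auto-redaction sketch: mask well-known sensitive keys
// recursively before a log event ever leaves the process.
const SENSITIVE = new Set(['password', 'token', 'authorization', 'email', 'card'])

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact)
  if (value && typeof value === 'object') {
    const out: Record<string, unknown> = {}
    for (const [k, v] of Object.entries(value)) {
      out[k] = SENSITIVE.has(k.toLowerCase()) ? '[REDACTED]' : redact(v)
    }
    return out
  }
  return value
}

const safe = redact({
  user: { email: 'a@b.co', plan: 'pro' },
  auth: { token: 'sk_live_123' },
}) as { user: { email: string; plan: string }; auth: { token: string } }
```

Running this on day 0 costs a few lines; retrofitting it onto six months of stored logs means scrubbing the vendor's index, not your code.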
evlog removes the "later" entirely — the structured surface, the wide event lifecycle, and the drain pipeline are all there from day 1.
| Decision day | Cost to adopt evlog |
|---|---|
| Day 0 (greenfield) | Add the framework module. Done. |
| Day 30 (small app) | Switch the logger surface — about a day of work. |
| Day 365 (production app) | Walk the codebase to swap loggers, settle field-name conventions, fold in audit and redaction. The same cost as migrating between any two structured loggers. |
The asymmetry is the point. Start with evlog because the cost is zero. Stay with evlog because the cost of leaving is higher than building any of this yourself.
Day 0, in practice
Start a new project with evlog wired in from the first commit
Next steps
- Installation — pick your framework
- Quick Start — `useLogger`, `createLogger`, `createError` in 2 minutes
- Catalogs — typed errors and audit actions, day-0 to monorepo-scale
- Audit Logs — day-0 compliance posture
- AI SDK — token usage, tool calls, streaming metrics
- Better Auth — auth events with one-line install
- Best Practices — what not to log, redaction, sampling
- evlog vs pino, winston, consola — feature parity matrix and migration snippets