Learn
The mental model — three logging modes, the wide event lifecycle, sampling, typed fields, and redaction. Read this section in order if you're new; pick what you need if you're not.

This section is the mental model of evlog. By the end, you'll know exactly what evlog does, when each API fits, and how an event flows from your code to your drain.

If you're new, read it in order. If you've already shipped with evlog, jump to the page that matches your question.

All three modes coexist in the same logger. Pick per call — there's no upgrade path, no advanced mode, no toggle to flip. Same drains, same redaction, same types underneath.
Not running an HTTP framework? See Standalone TypeScript and Cloudflare Workers.

The three logging modes

Simple Logging

A fully-featured general-purpose logger. Replaces console.log, consola, pino, or winston with log.info, log.error, log.warn, log.debug — same level filtering, drain pipeline, redaction, and pretty/JSON output.

Wide Events

Accumulate context over a unit of work (a script, job, queue task, or request), then emit a single comprehensive event.

Request Logging

Auto-managed wide events scoped to HTTP requests. Framework middleware creates the logger and emits it for you.

Quick comparison

Simple Logging (log)

One event per call. No accumulation, no lifecycle management.

src/index.ts
import { log } from 'evlog'

log.info('auth', 'User logged in')
log.error({ action: 'payment', error: 'card_declined', userId: 42 })

Wide Events (createLogger / createRequestLogger)

One event per unit of work. Accumulate context progressively, emit when done.

import { createLogger } from 'evlog'

const log = createLogger({ jobId: 'sync-001', queue: 'emails' })
log.set({ batch: { size: 50, processed: 50 } })
log.emit()

createRequestLogger is a thin wrapper around createLogger that pre-populates method, path, and requestId.
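Under the hood, the accumulate-then-emit pattern is simple: merge fields into one in-flight record, serialize it exactly once. A minimal self-contained stand-in (illustrative only — evlog's real logger adds levels, drains, redaction, and typed fields on top of this):

```typescript
type Fields = Record<string, unknown>

// Tiny model of a wide-event logger: one mutable event, one emit.
function createWideLogger(base: Fields) {
  const event: Fields = { ...base }
  return {
    // Merge more context into the single in-flight event.
    set(fields: Fields) {
      Object.assign(event, fields)
    },
    // Emit the whole accumulated event exactly once.
    emit(): Fields {
      console.log(JSON.stringify(event))
      return { ...event }
    },
  }
}

const log = createWideLogger({ method: 'POST', path: '/checkout', requestId: 'req-1' })
log.set({ user: { id: 1, plan: 'pro' } })
const emitted = log.emit()
```

The payoff is that everything known about the unit of work lands in one record, instead of being scattered across many interleaved log lines.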

Request Logging (framework middleware)

Framework integrations create a wide event logger automatically on each request. useLogger(event) retrieves the logger that's already attached to the request context:

server/api/checkout.post.ts
import { useLogger } from 'evlog'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  log.set({ user: { id: 1, plan: 'pro' } })
  return { success: true }
  // auto-emitted on response end
})
useLogger(event) doesn't create a logger; it retrieves the one the framework middleware already attached to the event. Each framework has its own way to access it (useLogger, req.log, c.get('log'), etc.). In Nuxt, useLogger is auto-imported.
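The middleware pattern itself is easy to picture: create one logger per request, stash it on the request context, and emit when the response ends. A self-contained sketch of that flow (illustrative stand-ins, not evlog or any framework's internals):

```typescript
type Ctx = Map<string, unknown>

interface WideLogger {
  set(fields: Record<string, unknown>): void
  emit(): Record<string, unknown>
}

function makeLogger(base: Record<string, unknown>): WideLogger {
  const event = { ...base }
  return {
    set(fields) { Object.assign(event, fields) },
    emit() { return { ...event } },
  }
}

// "Middleware": attach a fresh logger to the context before the handler runs,
// then emit the single accumulated event when the handler returns.
function withRequestLogging(handler: (ctx: Ctx) => unknown) {
  return (method: string, path: string) => {
    const ctx: Ctx = new Map()
    ctx.set('log', makeLogger({ method, path }))
    const result = handler(ctx)
    const event = (ctx.get('log') as WideLogger).emit() // "response end"
    return { result, event }
  }
}

// "Handler": retrieve the already-attached logger, never create one.
const handler = withRequestLogging((ctx) => {
  const log = ctx.get('log') as WideLogger
  log.set({ user: { id: 1 } })
  return { success: true }
})

const { event } = handler('POST', '/checkout')
```

This is why the handler only ever calls set(): creation and emission are owned by the middleware, so every handler contributes to the same per-request event.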

When to use what

|            | log                  | createLogger / createRequestLogger                     | Framework middleware                   |
|------------|----------------------|--------------------------------------------------------|----------------------------------------|
| Use case   | Quick one-off events | Scripts, jobs, workers, queues, HTTP without a framework | API routes with a framework integration |
| Context    | Single call          | Accumulate with set()                                  | Accumulate with set()                  |
| Emit       | Immediate            | Manual emit()                                          | Automatic on response end              |
| Lifecycle  | None                 | You manage it                                          | Framework manages it                   |
| Output     | Console + drain      | Console + drain                                        | Console + drain + enrich               |

By context

| Context | Best fit | Why |
|---------|----------|-----|
| HTTP route in Nuxt / Next / Hono / Express / … | useLogger(event) via framework integration | One wide event per request, auto-emitted on response end |
| HTTP handler without a framework | createRequestLogger({ method, path }) | Same shape as framework middleware, manual emit |
| CLI tool / one-shot script | log.* for steps + createLogger for the run summary — see Standalone | Pretty in dev, structured in CI, one summary event for the whole run |
| Published library | createLogger only — never initLogger — see Standalone | Don't pollute the host app's global config or force a drain on consumers |
| Background job / queue worker / cron | createLogger({ jobId, queue }) per invocation — see Standalone | One wide event per job run, perfect for retry analysis |
| Cloudflare Worker / edge function | createWorkersLogger(req) or createRequestLogger — see Cloudflare Workers | Per-request event, no process globals required |
| AWS Lambda | initLogger once + createLogger per invocation — see AWS Lambda | Cold-start init, per-event scope, drain flush in the handler |
| Batch / pipeline step | createLogger({ step }) per stage | One event per stage with inputs and outputs side by side |
| AI agent / LLM call | createLogger + createAILogger | Token usage, tool calls, streaming metrics on the same wide event |
| Library function called inside a request | useLogger(event) from caller, or accept a logger as argument | Inherit the parent's request context, contribute to the same wide event |
| Shared workspace package | Treat it like a library — see Standalone | Host app owns initLogger / drain; packages use createLogger or accept a logger |
None of these is an "upgrade" of another. Use log and createLogger in the same file when it makes sense — they share the global drain, redaction, and types.

Shared foundation

All three modes share the same foundation:

  • Pretty output in development, JSON in production (default, no configuration needed)
  • Drain pipeline to send events to Axiom, Sentry, PostHog, and more — see Integrate / Adapters
  • Structured errors with why, fix, and link, plus optional backend-only internal for logs
  • Sampling (head + tail) to control log volume in production
  • Redaction that wipes secrets before they ever leave the process
  • Zero dependencies, ~6 kB gzip
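The sampling bullet above deserves a concrete picture. Tail sampling means the keep/drop decision happens after the event is complete, so it can look at the outcome. A self-contained sketch of the decision rule (assumed semantics for illustration — evlog's actual sampling API and thresholds are covered on the Sampling page):

```typescript
interface SampleEvent {
  level: string
  durationMs: number
}

// Tail-sampling rule: always keep failures and slow requests,
// keep only a fraction of healthy, fast events.
function shouldKeep(
  event: SampleEvent,
  healthyRate: number,
  rng: () => number = Math.random,
): boolean {
  if (event.level === 'error') return true   // never drop failures
  if (event.durationMs > 1000) return true   // never drop slow requests
  return rng() < healthyRate                 // sample the healthy noise
}

const keptError = shouldKeep({ level: 'error', durationMs: 12 }, 0.01)
const droppedHealthy = shouldKeep({ level: 'info', durationMs: 12 }, 0.01, () => 0.99)
```

The injectable rng makes the rule deterministic in tests; in production the default Math.random gives the probabilistic drop.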

The rest of this section

After the three modes, the rest of Learn covers the concepts that show up across every mode:

  • Structured Errors — why, fix, link, internal, and how createError differs from throw new Error
  • Catalogs — typed error / audit catalogs that survive refactors
  • Lifecycle — exactly what happens between emit() and your drain
  • Sampling — keep all errors and slow requests; drop healthy noise
  • Typed Fields — augment RequestLogger so log.set is autocompleted
  • Redaction — the rules that strip authorization, password, token, etc. before drain
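The redaction bullet describes a recursive, key-based wipe. A minimal self-contained sketch of that idea (the key list here is a hypothetical subset for illustration — evlog's real rule set is documented on the Redaction page):

```typescript
// Hypothetical sensitive-key list; evlog's actual defaults may differ.
const SENSITIVE = new Set(['authorization', 'password', 'token'])

// Walk the event depth-first and replace sensitive values
// before the event ever leaves the process.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact)
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {}
    for (const [key, v] of Object.entries(value as Record<string, unknown>)) {
      out[key] = SENSITIVE.has(key.toLowerCase()) ? '[REDACTED]' : redact(v)
    }
    return out
  }
  return value
}

const clean = redact({
  user: { id: 42, password: 'hunter2' },
  headers: { Authorization: 'Bearer abc123' },
}) as { user: { id: number; password: string }; headers: { Authorization: string } }
```

Matching case-insensitively on key names (rather than scanning values) keeps the wipe cheap and predictable, at the cost of missing secrets stored under unconventional keys.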

When you're done with Learn, head to Integrate to wire evlog into your stack.