Recipes
Real-world patterns that combine the in-process bus and stream server with the filesystem reader.
Build a custom evlog devtool / dashboard
1. Build a minimal devtool
A live event panel is essentially EventSource plus a list. The full wire format and discovery rules — .evlog/stream.url, /api/_evlog/stream-info, the { evlog: '1', type, data } envelope, and auth — are documented on the stream page. Each recipe below assumes you've already obtained the stream URL through one of those discovery mechanisms.
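If you'd rather type the envelope check once instead of inlining it in every snippet, a minimal guard can be sketched from the shape above. The `Envelope` type and `isEvlogEvent` name are ours, not evlog exports:

```typescript
// Shape of the stream envelope described above: { evlog: '1', type, data }.
// `Envelope` and `isEvlogEvent` are illustrative names, not part of the evlog API.
type Envelope = { evlog: '1'; type: string; data: unknown }

function isEvlogEvent(value: unknown): value is Envelope {
  if (typeof value !== 'object' || value === null) return false
  const env = value as Partial<Envelope>
  // Only `event` and `replay` envelopes carry a wide event payload
  return env.evlog === '1' && (env.type === 'event' || env.type === 'replay')
}
```

Each handler below then reduces to `const env = JSON.parse(e.data); if (isEvlogEvent(env)) handle(env.data)`.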
Vanilla HTML + JS (drop into any page)
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>evlog mini devtool</title>
<style>
body { font: 13px ui-sans-serif, system-ui; margin: 0; padding: 0; }
table { width: 100%; border-collapse: collapse; }
td, th { padding: 6px 10px; border-bottom: 1px solid #eee; text-align: left; }
.lvl-error { color: #ef4444 }
.lvl-warn { color: #f59e0b }
.lvl-info { color: #3b82f6 }
</style>
</head>
<body>
<table id="t">
<thead><tr><th>time</th><th>level</th><th>service</th><th>action</th></tr></thead>
<tbody></tbody>
</table>
<script>
// Replace with the URL printed at startup, or fetch it from /api/_evlog/stream-info
const STREAM_URL = 'http://127.0.0.1:51203'
const tbody = document.querySelector('#t tbody')
const es = new EventSource(STREAM_URL)
es.onmessage = (e) => {
const env = JSON.parse(e.data)
if (env.evlog !== '1') return
if (env.type !== 'event' && env.type !== 'replay') return
const w = env.data
const tr = document.createElement('tr')
tr.innerHTML = `
<td>${new Date(w.timestamp).toLocaleTimeString()}</td>
<td class="lvl-${w.level}">${w.level}</td>
<td>${w.service ?? ''}</td>
<td>${w.action ?? w.message ?? w.path ?? ''}</td>
`
tbody.prepend(tr)
while (tbody.children.length > 200) tbody.lastElementChild.remove()
}
</script>
</body>
</html>
Save this as devtool.html and open it in any browser tab while your evlog-instrumented dev server is running. That's the whole MVP.
Vue 3 component
<script setup lang="ts">
import { onBeforeUnmount, onMounted, ref } from 'vue'
import type { WideEvent } from 'evlog'
const events = ref<WideEvent[]>([])
let es: EventSource | null = null
onMounted(async () => {
// Discover URL via the same-origin info endpoint (Nuxt)
const { url } = await $fetch<{ url: string | null }>('/api/_evlog/stream-info')
if (!url) return
es = new EventSource(url)
es.onmessage = (e) => {
const env = JSON.parse(e.data)
if (env.evlog !== '1') return
if (env.type === 'event' || env.type === 'replay') {
events.value.unshift(env.data as WideEvent)
if (events.value.length > 500) events.value.length = 500
}
}
})
onBeforeUnmount(() => es?.close())
</script>
<template>
<ul>
<li v-for="(e, i) in events" :key="`${e.timestamp}-${i}`">
<code>{{ e.level }}</code>
<strong>{{ e.service }}</strong>
<span>{{ e.action ?? e.message ?? e.path }}</span>
</li>
</ul>
</template>
React hook
import { useEffect, useState } from 'react'
import type { WideEvent } from 'evlog'
export function useEvlogStream(url: string) {
const [events, setEvents] = useState<WideEvent[]>([])
useEffect(() => {
if (!url) return
const es = new EventSource(url)
es.onmessage = (e) => {
const env = JSON.parse(e.data)
if (env.evlog !== '1') return
if (env.type === 'event' || env.type === 'replay') {
setEvents(prev => [env.data, ...prev].slice(0, 500))
}
}
return () => es.close()
}, [url])
return events
}
That's the entire integration surface. No SDK, no special types beyond WideEvent exported from evlog.
2. Quick CLI inspection with curl + jq
The URL is in .evlog/stream.url:
URL=$(cat .evlog/stream.url)
curl -sN "$URL" | sed -nu 's/^data: //p' | jq -c 'select(.type == "event") | .data'
Filter on the client side as needed:
# Only errors
curl -sN "$URL" | sed -nu 's/^data: //p' | jq -c 'select(.type == "event" and .data.level == "error") | .data'
# Only one service
curl -sN "$URL" | sed -nu 's/^data: //p' | jq -c 'select(.type == "event" and .data.service == "checkout") | .data'
# Slow requests
curl -sN "$URL" | sed -nu 's/^data: //p' | jq -c 'select(.type == "event" and .data.duration > 500) | .data'
-N keeps curl in streaming mode (no output buffering) and -s hides the progress meter. The sed step strips the SSE data: prefix so jq sees one JSON envelope per line; its -u flag keeps sed unbuffered too (drop it if your sed lacks the flag — output will just lag slightly).
3. Replay history then go live
History on disk (filesystem drain) + live updates from the stream server = a full picture from any point in time.
import { readFsLogs } from 'evlog/fs'
import { readFile } from 'node:fs/promises'
import type { WideEvent } from 'evlog'
async function bootstrap(handle: (e: WideEvent) => void) {
// 1. Replay the last hour from `.evlog/logs/`
const since = new Date(Date.now() - 60 * 60 * 1000)
for await (const event of readFsLogs({ since })) {
handle(event)
}
// 2. Switch to the live SSE stream
// (a global EventSource needs a recent runtime — Node 22+, Bun, or a polyfill)
const url = (await readFile('.evlog/stream.url', 'utf-8')).trim()
const es = new EventSource(url)
es.onmessage = (e) => {
const env = JSON.parse(e.data)
if (env.evlog !== '1') return
if (env.type === 'event' || env.type === 'replay') {
handle(env.data)
}
}
return () => es.close()
}
readFsLogs skips files outside the date range, so the replay step is fast even if you keep weeks of history. For a tail-only mode without on-disk replay, hit the stream server with ?since=<iso> to reuse the in-process ring buffer instead.
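Building that ?since= URL is just a query parameter on the stream URL. A small sketch — the helper name is ours; only the parameter itself comes from the stream protocol described above:

```typescript
// Build a stream URL that asks the server to replay its in-process
// ring buffer from `since` before switching to live events.
function withSince(streamUrl: string, since: Date): string {
  const url = new URL(streamUrl)
  url.searchParams.set('since', since.toISOString())
  return url.toString()
}

// e.g. new EventSource(withSince(url, new Date(Date.now() - 5 * 60 * 1000)))
```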
4. Node / Bun client (fetch + ReadableStream)
Same protocol, no EventSource polyfill needed:
import { readFile } from 'node:fs/promises'
const url = (await readFile('.evlog/stream.url', 'utf-8')).trim()
const res = await fetch(url)
const reader = res.body!.getReader()
const decoder = new TextDecoder()
let buffer = ''
while (true) {
const { value, done } = await reader.read()
if (done) break
buffer += decoder.decode(value, { stream: true })
let idx
while ((idx = buffer.indexOf('\n\n')) !== -1) {
const frame = buffer.slice(0, idx)
buffer = buffer.slice(idx + 2)
const dataLine = frame.split('\n').find(l => l.startsWith('data:'))
if (!dataLine) continue
const env = JSON.parse(dataLine.slice(5).trim())
if (env.type === 'event') console.log(env.data)
}
}
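The frame-splitting part of that loop is easy to get subtly wrong, so it can help to factor it into a pure function you can unit test in isolation. A sketch — `drainFrames` is our name, not an evlog export:

```typescript
// Split an SSE buffer on blank-line frame boundaries. Returns the `data:`
// payloads of all complete frames plus the unconsumed remainder.
function drainFrames(buffer: string): { payloads: string[]; rest: string } {
  const payloads: string[] = []
  let idx: number
  while ((idx = buffer.indexOf('\n\n')) !== -1) {
    const frame = buffer.slice(0, idx)
    buffer = buffer.slice(idx + 2)
    const dataLine = frame.split('\n').find(l => l.startsWith('data:'))
    if (dataLine) payloads.push(dataLine.slice(5).trim())
  }
  return { payloads, rest: buffer }
}
```

The read loop then shrinks to: append the decoded chunk, call `drainFrames`, `JSON.parse` each payload, and keep `rest` as the new buffer.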
5. Filter, transform, aggregate on the consumer
Keep the server dumb — every consumer picks what it cares about:
// Just errors
const errors = events.filter(e => e.level === 'error')
// Slow requests
const slowReqs = events.filter(e => typeof e.duration === 'number' && e.duration > 500)
// Group by service (Object.groupBy needs Node 21+ / a modern browser)
const byService = Object.groupBy(events, e => e.service ?? 'unknown')
// Rolling error rate (last 100 events)
const last100 = events.slice(0, 100)
const errorRate = last100.filter(e => e.level === 'error').length / last100.length
// Ad-hoc cost analytics — works because evlog/ai writes ai.* fields on every AI call
const totalCost = events
.filter(e => typeof e.ai?.estimatedCost === 'number')
.reduce((sum, e) => sum + (e.ai?.estimatedCost as number), 0)
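Latency aggregation follows the same pattern — for example a nearest-rank percentile over event durations. A hypothetical helper, not part of evlog:

```typescript
// Nearest-rank p-th percentile of `duration` across a batch of events.
// Events without a numeric duration are ignored; returns null if none remain.
function percentileDuration(events: Array<{ duration?: number }>, p: number): number | null {
  const durations = events
    .map(e => e.duration)
    .filter((d): d is number => typeof d === 'number')
    .sort((a, b) => a - b)
  if (durations.length === 0) return null
  const rank = Math.ceil((p / 100) * durations.length)
  return durations[Math.min(rank, durations.length) - 1]
}
```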
6. Self-hosted "tail -f" replacement
Skip the network entirely if the consumer runs on the same machine:
import { tailFsLogs } from 'evlog/fs'
const ac = new AbortController()
process.on('SIGINT', () => ac.abort())
for await (const event of tailFsLogs({ signal: ac.signal })) {
if (event.level === 'error') notifyOps(event)
}
Works without instrumenting the running app — useful for sidecar / observer processes that watch a directory.
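If notifyOps pages a human, you'll want to de-duplicate error bursts. One simple approach is a per-key cooldown window — a sketch; `createDeduper` and the injectable clock are ours, not evlog API:

```typescript
// Returns a predicate that allows at most one notification per key
// within `windowMs`. The clock is injectable to make it testable.
function createDeduper(windowMs: number, now: () => number = Date.now) {
  const lastSent = new Map<string, number>()
  return (key: string): boolean => {
    const t = now()
    const prev = lastSent.get(key)
    if (prev !== undefined && t - prev < windowMs) return false // still cooling down
    lastSent.set(key, t)
    return true
  }
}
```

Inside the tail loop: `if (event.level === 'error' && shouldNotify(event.service ?? 'unknown')) notifyOps(event)`.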
What not to do
- Don't run the stream server on Vercel Functions / Cloudflare Workers / Lambda. Each invocation is a separate isolate; subscribers in one isolate never see events emitted by other isolates. Use a real broker (Redis Streams, NATS, Pub/Sub) for cross-instance fan-out.
- Don't put auth-sensitive data in wide events unless your evlog config redacts them. The server relays exactly what your app emitted — including any unredacted PII.
- Don't filter at the server ("only error events please"). The server is purpose-built to be transparent. Filter on the consumer side, so one consumer's filter can never hide events from another.
FS reader
Replay and tail the local NDJSON drain with readFsLogs and tailFsLogs — works in-process or from any external Node tool, survives restarts.
Plugins
definePlugin is the canonical extension point for evlog — opt into any subset of setup, onRequestStart, enrich, keep, drain, onRequestFinish, onClientLog, extendLogger from a single cohesive object.