Build on top of evlog

Two ways to build on top of evlog — observe the events flowing through the pipeline, or extend the pipeline itself with custom plugins, drains, enrichers, and shareable catalogs.

evlog is designed to be extensible from both ends. There are two distinct angles to "build on top" — and they answer different questions:

- **Observe**: *"I want to read what flows through the pipeline."* Plug a subscriber into evlog and react to wide events live: a devtool, a dashboard, a CLI tail, an analytics console. The events come from your app; you decide what to do with them.
- **Extend**: *"I want to plug something into the pipeline."* Add custom logic that runs as events flow through evlog itself: enrichers, drains, plugins, error/audit catalogs you can publish as packages. Your code is part of the pipeline.
```mermaid
flowchart LR
    App["Your app"] --> Pipeline["evlog drain pipeline"]
    Pipeline --> Drain["Drains<br/>(Axiom, Datadog, fs, ...)"]

    subgraph extend [Extend]
      direction TB
      Plugins["Plugins"]
      Enrichers["Enrichers"]
      CustomDrains["Custom drains"]
      Catalogs["Error / audit<br/>catalogs"]
    end

    extend -.->|"hook into"| Pipeline

    Pipeline -->|"events"| Stream["evlog/stream<br/>in-process pub/sub"]
    Stream --> StreamServer["mini HTTP server<br/>(SSE bridge)"]

    subgraph observe [Observe]
      direction TB
      InProc["Sync subscribers<br/>(scripts, tests)"]
      Browser["Browser tab<br/>(devtool)"]
      Cli["CLI / curl"]
      FsReader["readFsLogs<br/>tailFsLogs"]
    end

    Stream -.->|"in-process"| InProc
    StreamServer --> Browser
    StreamServer --> Cli
    Drain -->|"fs adapter"| FsReader
```

Observe — pages here

| What | When you want it |
| --- | --- |
| **Stream API**: `createStreamDrain()`, `getDefaultStream()` — in-process subscribe / iterate | A consumer lives in the same Node process as your app |
| **Stream server**: mini HTTP server on its own port that exposes the stream over SSE | A browser tab, a CLI, or an external devtool needs to subscribe |
| **Reading FS logs**: `readFsLogs()` / `tailFsLogs()` — replay or follow the NDJSON drain | You want history (replay yesterday's errors, post-incident triage) |
| **Identity headers**: `User-Agent: evlog/<version>` + `X-Evlog-Source` on every drain request | You want receivers (Axiom, Datadog, ...) to identify evlog traffic |
| **Recipes**: copy-paste patterns — build a devtool, curl + jq, replay-then-live, aggregate | You want a starting point |
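
To make the in-process rows concrete, here is a minimal, self-contained sketch of what an in-process pub/sub stream with subscribe and replay looks like. It does not import evlog; every name (`createMiniStream`, `publish`, `subscribe`, `replay`) is illustrative and only mimics the shape the table above describes, not evlog's actual exports.

```typescript
// A wide event: one structured record per unit of work (shape assumed).
type WideEvent = { name: string; level: string; [key: string]: unknown };

// Hypothetical stand-in for an in-process stream primitive.
function createMiniStream() {
  const subscribers = new Set<(e: WideEvent) => void>();
  const history: WideEvent[] = [];
  return {
    // Called by the pipeline side: fan the event out to live subscribers.
    publish(event: WideEvent) {
      history.push(event);
      for (const fn of subscribers) fn(event);
    },
    // Called by the observer side: react to events as they flow.
    subscribe(fn: (e: WideEvent) => void) {
      subscribers.add(fn);
      return () => {
        subscribers.delete(fn); // unsubscribe handle
      };
    },
    // Replay-then-live pattern starts from the buffered history.
    replay: () => [...history],
  };
}

// Usage: a sync subscriber in the same process (a script, a test).
const stream = createMiniStream();
const seen: string[] = [];
const unsubscribe = stream.subscribe((e) => seen.push(e.name));
stream.publish({ name: "user.login", level: "info" });
unsubscribe();
stream.publish({ name: "user.logout", level: "info" });
console.log(seen); // ["user.login"] — the second event arrived after unsubscribe
```

The real Stream API adds async iteration and backpressure concerns on top of this core shape; the point here is only that observers live in the same process and receive events synchronously.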

Extend — where the docs live

These surfaces predate this section; the table links to their canonical pages:

| What | Doc |
| --- | --- |
| **Plugins**: `definePlugin()` — opt into any subset of evlog's lifecycle hooks | Custom adapter / plugin |
| **Custom drains**: `defineDrain()` / `defineHttpDrain()` — ship events anywhere | Building blocks: pipeline, HTTP drain |
| **Custom enrichers**: `defineEnricher()` — derive context (geo, deploy id, tenant, ...) | Custom enrichers |
| **Error catalogs**: `defineErrorCatalog()` — typed error factories with module augmentation | Catalogs |
| **Audit catalogs**: `defineAuditCatalog()` — typed audit actions | Audit overview |
| **Framework integrations**: `createMiddlewareLogger()` + helpers — bring evlog to any HTTP framework | Custom integration |
| **Catalogs as packages**: publish a catalog as a reusable npm package (Stripe errors, AWS audit, ...) | Catalogs as packages |
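
The common thread on the Extend axis is that each surface is a small typed object the pipeline calls at a well-defined point. The sketch below shows the custom-drain pattern in that spirit: a drain is a named object with a `send()` that ships batches of events. It is self-contained and the `defineMemoryDrain()` helper is hypothetical — the real signatures live in the docs linked above.

```typescript
// Shape assumed for a wide event and a drain; not evlog's actual types.
type WideEvent = { name: string; level: string; [key: string]: unknown };
type Drain = { name: string; send: (events: WideEvent[]) => Promise<void> };

// Hypothetical helper mirroring the defineDrain() pattern: here the
// "destination" is an in-memory buffer; a real drain would POST the
// batch to Axiom, Datadog, a file, etc.
function defineMemoryDrain(name: string): Drain & { buffer: WideEvent[] } {
  const buffer: WideEvent[] = [];
  return {
    name,
    buffer,
    async send(events) {
      // Runs synchronously before any await, so the buffer is filled
      // by the time send() resolves.
      buffer.push(...events);
    },
  };
}

// Usage: the pipeline would call send() with each flushed batch.
const drain = defineMemoryDrain("memory");
void drain.send([{ name: "order.created", level: "info" }]);
console.log(drain.buffer.length); // 1
```

Plugins, enrichers, and catalogs follow the same recipe — define a typed object, hand it to evlog, and the pipeline invokes it at the matching hook — which is also what makes them publishable as standalone npm packages.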

A note on serverless

Both observe-side streaming features (the stream server for live subscription and the in-process stream primitive) work anywhere a long-lived Node-like process runs: local dev, self-hosted servers, containers (Fly, Railway, Coolify), VMs.

They do not work on serverless platforms (Vercel Functions, Cloudflare Workers, AWS Lambda) because each invocation is an isolated process. Use a real broker (Redis Streams, NATS, Pub/Sub) for cross-instance fan-out in those environments.

The fs reader, identity headers, and the entire Extend axis work everywhere — they are not bound to a long-lived process.
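
As one example of that portability, the fs reader only needs a file on disk. The sketch below shows the core of what an NDJSON log reader does — one JSON-encoded wide event per line — without importing evlog; `readNdjsonLogs()` is an illustrative stand-in for `readFsLogs()`, not its actual implementation.

```typescript
import { writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Shape assumed for a wide event.
type WideEvent = { name: string; level: string; [key: string]: unknown };

// Replay an NDJSON drain file: each non-empty line is one event.
function readNdjsonLogs(path: string): WideEvent[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as WideEvent);
}

// Usage: replay yesterday's errors for post-incident triage.
const file = join(tmpdir(), "evlog-demo.ndjson");
writeFileSync(
  file,
  ['{"name":"db.error","level":"error"}', '{"name":"req","level":"info"}'].join("\n"),
);
const errors = readNdjsonLogs(file).filter((e) => e.level === "error");
console.log(errors.length); // 1
```

Because this is plain file I/O, it works in a one-shot script, a CI job, or a serverless function just as well as on a long-lived server — which is why the Extend axis and the fs reader are exempt from the serverless caveat above.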