Deep Dive

TypeScript Analytics Events: Type-Safe Tracking with Code Generation

Analytics is the least-typed code in most codebases — and that is why dashboards break. Here is how type-safe event tracking works, what good looks like, and how to migrate an existing codebase.

Apr 17, 2026·11 min read

Your analytics code is the least-typed code in your codebase. Every other call goes through a typed signature: database queries, API clients, internal services, UI components. Then someone writes track("cart_item_added", { price: "29.99" }) and the string-typed abyss swallows it whole. The dashboard shows revenue figures that are wrong by an order of magnitude, and nobody notices for a week.

Type-safe analytics tracking fixes this. Instead of writing event names and properties as free-form strings and objects, you define them in a schema, generate typed tracking functions from that schema, and let the compiler reject bad events before they ship. The rest of this post covers how that works, what it looks like in practice, and the tradeoffs.

Why untyped events break analytics

Most analytics SDKs accept a string event name and an arbitrary object of properties. That signature is the root of every common data quality bug:

  • Typos in event names. Someone writes "cart_items_added" instead of "cart_item_added". The event fires, the data warehouse records it as a new event, and the funnel quietly drops that cohort.
  • Wrong property types. The schema expects price: number, a developer sends price: "29.99". String concatenation produces nonsense averages.
  • Missing required properties. The subscription_started event needs plan_tier to make the funnel work. It is missing from 12% of calls. Nobody notices until the quarterly audit.
  • Renamed properties drifting. The schema gets updated from userId to user_id. Three files still send the old key.

These are all type errors. The compiler catches type errors every day — but only when the code is actually typed. Analytics has historically been the one place where strings and any-typed objects slip through.
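The permissive surface that lets all four bugs through is a single untyped signature. A minimal sketch of that signature, with a stubbed delivery queue standing in for a real SDK so it runs standalone:

```typescript
type Properties = Record<string, unknown>;

// Stand-in for the SDK's delivery queue, so the sketch is runnable.
export const sent: Array<{ event: string; properties: Properties }> = [];

// The shape most analytics SDKs expose: nothing constrains the event
// name or the property values.
export function track(event: string, properties: Properties): void {
  sent.push({ event, properties });
}

// All four bug classes above compile without complaint:
track("cart_items_added", { price: "29.99" }); // typo in name, string price
track("subscription_started", {});             // missing required plan_tier
track("cart_item_added", { userId: "u_1" });   // renamed key still compiles
```

Every call above is a data quality incident waiting to happen, and the compiler has no information with which to object.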

What type-safe tracking looks like

The goal: the compiler rejects analytics events that do not match your event schema. The developer cannot send cart_items_added instead of cart_item_added, cannot pass a string where a number is expected, and cannot forget a required property — because TypeScript stops the build.

The shape of the generated code looks like this:

// generated from the tracking plan
export interface CartItemAddedProperties {
  item_id: string;
  item_name: string;
  price: number;
  currency: "USD" | "EUR" | "GBP";
  quantity: number;
}

export function trackCartItemAdded(
  properties: CartItemAddedProperties,
): void {
  analytics.track("cart_item_added", properties);
}

And how it is consumed in application code:

import { trackCartItemAdded } from "@/generated/analytics";

trackCartItemAdded({
  item_id: product.id,
  item_name: product.name,
  price: product.price,        // number ✓
  currency: "USD",             // literal union ✓
  quantity: 1,
});

If a developer writes price: product.price.toString(), the build fails. If they try currency: "usd" (wrong case), the build fails. If they omit quantity, the build fails. The entire class of property-shape bugs is gone.

The three layers of type safety

There are three places you can enforce types on analytics events. Strong implementations use all three; most teams start with the first.

Layer 1: Compile-time (TypeScript)

Generated types and tracking functions make the compiler enforce your schema. This catches the 90% case: property name typos, wrong types, missing required fields, invalid enum values. It runs on every save and every CI build.

Limitations: compile-time checks cover only the code you compile, not what actually fires in production. A developer can still pass a correctly-typed value that is semantically wrong (e.g. sending price: 0 when they meant the discounted price).

Layer 2: CI scanning (codebase vs plan)

Static analysis that parses your codebase looking for analytics.track(...) calls and compares them against the tracking plan. This catches drift: events that were removed from the plan but still fire, events that fire with undocumented properties, and deprecated events that should have been removed.

Compile-time types prevent bad events from being added. CI scanning catches events that were correct once and rotted.

Layer 3: Runtime validation

The generated tracking functions can validate property values at the moment the event fires — not just the shape, but the values. A quantity that must be positive. A price that must be in a reasonable range. A user_id that must match a UUID format.

Runtime validation catches things the compiler cannot: values from API responses, user input, or feature flags that pass the type check but are semantically invalid. Log or drop invalid events instead of polluting the warehouse.
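A hand-rolled sketch of what a validating tracker might look like for one critical event. The bounds are illustrative assumptions, not the output of any particular tool:

```typescript
export interface CartItemAddedProperties {
  item_id: string;
  price: number;
  quantity: number;
}

// Value-level checks the compiler cannot express.
export function validateCartItemAdded(p: CartItemAddedProperties): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(p.quantity) || p.quantity <= 0) {
    errors.push("quantity must be a positive integer");
  }
  if (!(p.price >= 0) || p.price > 100_000) {
    errors.push("price out of plausible range");
  }
  return errors;
}

// Drop and log invalid events instead of polluting the warehouse.
export function trackCartItemAddedSafe(p: CartItemAddedProperties): boolean {
  const errors = validateCartItemAdded(p);
  if (errors.length > 0) {
    console.warn("dropped cart_item_added:", errors);
    return false;
  }
  // analytics.track("cart_item_added", p);  // real delivery goes here
  return true;
}
```

Reserve this for revenue-bearing and funnel-critical events; validating everything at runtime adds cost with diminishing returns.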

How code generation actually works

The input to code generation is your event schema — typically a JSON or YAML document that defines every event, its properties, and the type of each property. A minimal schema entry looks like this:

{
  "events": [
    {
      "name": "cart_item_added",
      "description": "Fired when a user adds an item to their cart.",
      "properties": [
        { "name": "item_id",   "type": "string",  "required": true },
        { "name": "item_name", "type": "string",  "required": true },
        { "name": "price",     "type": "number",  "required": true },
        { "name": "currency",  "type": "enum",    "values": ["USD","EUR","GBP"], "required": true },
        { "name": "quantity",  "type": "number",  "required": true }
      ]
    }
  ]
}

A generator reads that schema and emits one file per target: a TypeScript module for the web app, a Swift file for iOS, a Kotlin file for Android. Every platform gets the same event names and property shapes, derived from the same source of truth.

The generated file should be committed to the repo, not generated at build time in CI. Committing makes the diff visible: when the schema changes, the generated code changes, and both show up in the same pull request. Reviewers can see exactly what the change does to the tracking surface.
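The generator itself does not need to be elaborate. A minimal sketch that turns one schema entry (matching the JSON above) into the interface and function shown earlier; the naming helpers are assumptions:

```typescript
interface PropertyDef {
  name: string;
  type: "string" | "number" | "enum";
  values?: string[];
  required: boolean;
}
interface EventDef {
  name: string;
  properties: PropertyDef[];
}

// snake_case event name -> PascalCase identifier.
const pascal = (s: string) =>
  s.split("_").map((w) => w[0].toUpperCase() + w.slice(1)).join("");

// Map a schema property to its TypeScript type annotation.
const tsType = (p: PropertyDef) =>
  p.type === "enum"
    ? p.values!.map((v) => JSON.stringify(v)).join(" | ")
    : p.type;

export function generateEvent(event: EventDef): string {
  const name = pascal(event.name);
  const fields = event.properties
    .map((p) => `  ${p.name}${p.required ? "" : "?"}: ${tsType(p)};`)
    .join("\n");
  return [
    `export interface ${name}Properties {`,
    fields,
    `}`,
    ``,
    `export function track${name}(properties: ${name}Properties): void {`,
    `  analytics.track(${JSON.stringify(event.name)}, properties);`,
    `}`,
  ].join("\n");
}
```

A real generator adds imports, JSDoc from the description field, and per-platform emitters, but the core is just this mapping from schema to source text.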

Migrating an existing codebase

Type safety on a greenfield project is easy. The hard part is migrating an existing codebase with hundreds of analytics.track(...) calls scattered across it. A workable migration path:

  1. Extract the implicit schema. Run a codebase audit to enumerate every event that fires. Document the properties each one uses. This is your starting tracking plan, derived from reality instead of aspiration.
  2. Clean up first, generate second. Resolve duplicate events, standardize property names, and decide on a naming convention before you freeze the schema into generated code. Migrating later is painful.
  3. Introduce generated functions alongside the old API. Ship trackCartItemAdded(...) and leave analytics.track("cart_item_added", ...) working. New code uses the typed function; old code keeps compiling.
  4. Migrate call sites incrementally. Set up an ESLint rule or a simple grep-based CI check that fails on new string-literal track() calls. Existing ones stay until a developer touches that area of code.
  5. Delete the string-based API when usage hits zero. Remove the untyped entry point so drift cannot reintroduce itself.
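The ratchet check from step 4 can be sketched as a small function: count string-literal track() calls and fail CI when the count rises above a recorded baseline. How the baseline is stored is an assumption:

```typescript
// Count string-literal analytics.track("...") calls in one source file.
export function countStringTrackCalls(source: string): number {
  return [...source.matchAll(/analytics\.track\(\s*["']/g)].length;
}

// Existing calls may stay (count <= baseline); new ones fail the build.
export function ratchetCheck(
  sources: string[],
  baseline: number,
): { count: number; ok: boolean } {
  const count = sources.reduce(
    (total, src) => total + countStringTrackCalls(src),
    0,
  );
  return { count, ok: count <= baseline };
}
```

Lower the baseline as call sites migrate; when it reaches zero, step 5 becomes safe.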

Common objections (and honest answers)

"This is a lot of boilerplate."

It is — for every event you add. But boilerplate is worth it when it eliminates an entire category of silent bugs. And the boilerplate is generated, not hand-written. You add one entry to the schema; the generator produces the interface, the function, and the call site signature for you.

"Schema changes require a build step."

True. Changing price: number to price_cents: number means updating the schema, regenerating, and updating call sites. But that work happens anyway — type safety just forces it to happen atomically, in one pull request, instead of gradually across weeks of production drift.

"What about server-side events?"

Generate types for server languages too. If your backend is in Node, use the same TypeScript output. If it is Go, Python, or Ruby, generate language-appropriate bindings. The schema is the source of truth; each consumer gets a strongly-typed client for its stack.

"We ship experimental events all the time. Typing them slows us down."

Have an escape hatch: a lightly-typed trackExperimental() function that accepts a prefixed event name (e.g. exp_*) and an open property bag. Experiments stay fast; production events stay rigorous. When an experiment graduates, move it into the schema.
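The prefix constraint can be enforced by the compiler itself with a template literal type. A sketch, with a stub SDK so it runs standalone:

```typescript
export const delivered: string[] = [];

// Stub SDK so the sketch is self-contained; swap in the real client.
const analytics = {
  track(name: string, properties: Record<string, unknown>): void {
    delivered.push(name);
  },
};

// The template literal type rejects any name without the exp_ prefix.
export type ExperimentalEventName = `exp_${string}`;

export function trackExperimental(
  name: ExperimentalEventName,
  properties: Record<string, unknown> = {},
): void {
  analytics.track(name, properties);
}

trackExperimental("exp_new_checkout_banner", { variant: "b" }); // ok
// trackExperimental("new_checkout_banner", {}); // compile error: no prefix
```

Graduating an experiment then means deleting the trackExperimental call and adding a schema entry, which the generator turns into a fully typed function.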

Tooling options

You have four realistic paths to type-safe analytics:

  • Roll your own generator. A JSON Schema or a simple YAML file, a small script that emits TypeScript. Feasible for a single platform; painful once you add iOS, Android, and a backend.
  • JSON Schema + code generators. Define events in JSON Schema, use tools like json-schema-to-typescript to produce types. Works but does not handle multi-platform output or tracking-plan-specific concerns like required-at-event-level properties.
  • Dedicated analytics tooling. Platforms like Avo, Iteratively, or Ordaze generate typed SDKs for web, iOS, Android, and backends from a single tracking plan. Ordaze additionally scans your codebase in CI to detect drift between the generated types and the events actually firing in production.
  • Customer Data Platform Protocols. Segment Protocols validates events at the pipeline level, not in the client code. Catches drift post-hoc, not at compile time — weaker guarantee but no codebase changes required.

The right choice depends on scale. A single Next.js app with one analytics surface can roll its own generator in a few hundred lines. A team shipping to web, iOS, and Android with three analytics destinations needs dedicated tooling unless they want to build and maintain a pipeline internally.

What "good" looks like

A mature type-safe analytics setup has six properties:

  1. A tracking plan stored as structured data (JSON/YAML), not a spreadsheet.
  2. Typed tracking functions generated from the plan and committed to the repo.
  3. No string-literal analytics.track() calls left in production code (enforced by lint rule or CI check).
  4. CI that fails on schema drift between the plan and the codebase.
  5. Runtime validation on a small number of critical events (revenue-bearing, conversion-funnel, identity).
  6. Generated code reviewed alongside schema changes in the same pull request.

Teams at this bar stop having data quality incidents from bad instrumentation. They still have incidents from semantic bugs (wrong value, wrong timing, wrong user), but the entire class of shape bugs goes to zero.

Start small

You do not need to rewrite your entire analytics surface to benefit. Pick the five highest-value events — the ones your revenue and activation metrics depend on — and make those type-safe first. Ship them, catch a couple of bugs, and use that evidence to justify the broader migration.

If you want a shortcut: Ordaze generates typed analytics SDKs from your tracking plan and scans your codebase in CI for drift. It is designed for exactly this workflow. See how it works or read the complete guide to event tracking if you are earlier in the journey.

Ready to bring structure to your analytics events?

Try Ordaze free