TypeScript’s type system effectively manages much of JavaScript’s dynamism in useful ways, rather than eliminating it. Developers writing TypeScript code can use almost the full range of web technologies in a type-safe manner. However, when issues arise, they’re often the result of the developer’s choices, not the tools themselves.
Most developers follow well-established patterns in their day-to-day programming. Modern frameworks and tools provide solid structures to guide us, offering solutions and guidelines for nearly every question. However, the complexity and long history of the modern web platform ensure that surprises still occur and new, sometimes unsolvable, challenges continue to emerge.
This issue extends beyond people to their tools and machines. No one can do everything, and certainly not every tool is suited to every task. TypeScript is no exception: while it can accurately describe 99% of JavaScript features, one percent remains beyond its grasp. This gap doesn’t consist solely of objectionable anti-features: some JavaScript features that TypeScript doesn’t fully understand can still be useful, and for some others, TypeScript operates under assumptions that don’t always align with reality.
Like any tool, TypeScript isn’t perfect, and we should be aware of its blind spots. This article addresses three of these blind spots, offers possible workarounds, and explores the implications of encountering them in our code.
Blind Spot 1: Excluding subtypes in type parameters
The Liskov substitution principle requires that a program can handle subtypes of T wherever a type T is expected. The classic example of object orientation still serves as the best illustration of this principle (Listing 1).
Listing 1: The classic OOP example with animals
class Dog {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}

class Collie extends Dog {
  hair = "long";
}
let myDog: Dog = new Collie("Lassie");
// Works!
It makes perfect sense that a Collie instance is assigned to a variable of type Dog, because a Collie is a dog with long hair. The object that ends up in the myDog variable provides all the functions required by the Dog type annotation. The fact that the object can do more (for example, show off long hair) is irrelevant in this context. But what if that additional feature does matter?
Thanks to structural subtyping, TypeScript allows any object that fulfills a given API contract (or implements a given interface) to be treated as a “subtype” (Listing 2).
Listing 2: Structural subtyping in Action
class Dog {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}
type Cat = { name: string };
let myPet: Cat = new Dog("Lassie");
// Works!
In web development, where developers don’t have to manually create every object from a class constructor, this rule is very pragmatic. On one hand, it results in relatively minor semantic errors (Listing 3), but on the other, it can also lead to more significant pitfalls.
Listing 3: Structural subtyping triggers an error
type RGB = [number, number, number];
let green: RGB = [0, 100, 0];
type HSL = [number, number, number];
let red: HSL = [0, 100, 50];
red = green;
// Works! RGB and HSL have the same structure
// But is that OK at runtime?
Let’s look at a function that accepts a parameter of type WeakSet<any>:
function takesWeakSet(m: WeakSet<any>) {}
In JavaScript, weak sets are sets with special garbage-collection behavior. They hold only weak references to their contents and therefore can’t cause memory leaks. However, unlike normal sets, weak sets lack many features, most notably all iteration mechanisms. While normal sets can serve as universal lists as well as sets, weak sets can only tell us whether they contain a given value, something normal sets can do too. The WeakSet API is thus a subset of the Set API, which structurally makes Set a subtype of WeakSet (Listing 4).
Listing 4: WeakSets and Sets as subtypes
function takesWeakSet(m: WeakSet<any>) {}
// Works obviously
takesWeakSet(new WeakSet());
// Works too, Set is a subtype of WeakSet
takesWeakSet(new Set());
// But is that OK at runtime?
Depending on the function’s intent, this can either be a non-issue (as with Dog and Collie), an easily identifiable problem (as with RGB and HSL), or it can lead to subtle, undesired behavior in our program. When takesWeakSet() expects to receive a true WeakSet, it might store new values in the set and assume that it doesn’t need to worry about removing them later. After all, weak sets automatically prevent memory leaks. However, this assumption can be undermined if Set is considered a subtype of WeakSet.
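To make the hazard concrete, here is a hypothetical version of takesWeakSet() that relies on weak-reference semantics (the caching logic is invented for this sketch):

function takesWeakSet(m: WeakSet<any>) {
  // Assumption: no cleanup needed, because a real WeakSet
  // lets unreferenced objects be garbage-collected
  m.add({ payload: new Array(1_000_000).fill(0) });
}

takesWeakSet(new WeakSet()); // fine, the object stays collectable
takesWeakSet(new Set()); // type-checks, but the Set holds a strong
                         // reference forever: a memory leak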
So, while it’s often safe to accept subtypes of a given type, it’s not always so straightforward. In our case, a workaround is relatively easy to implement, but as we’ll see, it can’t be generalized.
Unfortunately, subtypes have to stay out
With type-level programming, it’s comparatively easy to construct a type that accepts another type but rejects its subtypes. The key tool for this is generic types, which we can consider to be type functions (Listing 5).
Listing 5: Generic Types as Type Functions
// Type function that wraps the parameter T
// in an array
type Wrap<T> = [T];
// Corresponding JS function that
// wraps the parameter t in an array
let wrap = (t) => [t];
In generic types, we can use conditional types, which work just like the ternary operator in JavaScript (Listing 6).
Listing 6: Conditional Types
// "A extends B" asks: is A assignable to B?
// In other words: is A a subtype of B?
type Test<T> = T extends number ? true : false;
type A = Test<42>; // true (42 is assignable to number)
type B = Test<[]>; // false ([] is not assignable to number)
Equipped with this knowledge, we can now formulate a type function that accepts two type parameters and determines whether the first parameter exactly matches the type of the second parameter. This is true only if the first parameter is assignable to the second and the second parameter is assignable to the first. If either of these conditions doesn’t apply, then either the first parameter must be a subtype of the second, the second must be a subtype of the first, or both parameters must be completely incompatible. In code, this is illustrated in Listing 7.
Listing 7: ExactType<Type, Base>
type ExactType<Type, Base> =
  Type extends Base
    ? Base extends Type
      ? Type
      : never
    : never;

type A = ExactType<WeakSet<any>, WeakSet<any>>;
// Result: WeakSet<any> - Type and Base are the same
type B = ExactType<Set<any>, WeakSet<any>>;
// Result: never - Type is a subtype of Base
type C = ExactType<WeakSet<any>, Set<any>>;
// Result: never - Base is a subtype of Type
type D = ExactType<WeakSet<any>, string>;
// Result: never - Type and Base are incompatible
The type never, used here to model the case where the type and base are different, is a type to which no value can be assigned. Each data type represents a set of possible values (e.g., number is the set of all numbers and Array<string> is the set of all arrays filled with strings), while never represents an empty set. No error, no exception: never simply stands for nothing.
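A two-line illustration of this emptiness:

let nothing: never;
// nothing = 42;        // Error: number is not assignable to never
// nothing = undefined; // Error: not even undefined fits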
We can now use ExactType<Type, Base> to modify takesWeakSet() so that it only accepts weak sets. We just have to make the function generic and then define the type for the value parameter m with ExactType (Listing 8).
Listing 8: ExactType<Type, Base> in action
type ExactType<Type, Base> =
  Type extends Base
    ? Base extends Type
      ? Type
      : never
    : never;
function takesWeakSet<T>(m: ExactType<T, WeakSet<any>>) {}
// Works obviously
takesWeakSet(new WeakSet());
// No longer works!
takesWeakSet(new Set());
The reason why the call with the normal set does not work is that ExactType<Type, Base> computes the type never as a result here, and since no value (and certainly no set object) fits into never, the TypeScript compiler complains at this point as desired. Problem solved?
The difficult subtype exclusion in type parameters
If we can treat generic types like functions, as we suggested earlier, then it should be possible to reproduce the behavior of the runtime function takesWeakSet() as a type function. After all, the runtime function now accepts only parameters of one exact type, with all subtypes excluded. The skeleton, a generic type with a type parameter, is easy to set up:
type TakesWeakSet<M> = {};
Since any arbitrary data type can currently be passed in, we first need a constraint for the type parameter M. Fortunately, this isn’t a problem, as extends clauses can be used both in conditional types and as restrictions on type parameters:
type TakesWeakSet<M extends WeakSet<any>> = {};
This puts the type function in the same state that takesWeakSet() was in initially: it’s a single-argument function with a type annotation that specifies a minimal requirement for the input. Subtypes are still accepted (Listing 9).
Listing 9: Subtypes as Input
type TakesWeakSet<M extends WeakSet<any>> = {};
// Obviously works
type A = TakesWeakSet<WeakSet<any>>;
// Also works, Set is a subtype of WeakSet
type B = TakesWeakSet<Set<any>>;
That in itself isn’t a problem; it’s exactly why we wrote ExactType. However, there is a fundamental difference between the type function TakesWeakSet<M> and the runtime function takesWeakSet(m): on closer inspection, the latter has one more parameter than the former (Listing 10).
Listing 10: Type and runtime function in comparison
// One type parameter "M"
type TakesWeakSet<M extends WeakSet<any>> = {};

// One value parameter "m" AND one type parameter "T"
function takesWeakSet<T>(m: ExactType<T, WeakSet<any>>) {}
A call to the runtime function takesWeakSet() passes two parameters: a type parameter and a value parameter. The type parameter is used to calculate the type of the value parameter, where an error occurs if ExactType returns never. The type function ExactType is key to excluding subtypes. This trick can’t be reproduced at the type level because self-referential type parameters aren’t allowed, except in a few special cases that aren’t relevant here (Listing 11).
Listing 11: No self-referential constraints in type parameters
// Error: type parameter "Type" has a
// circular constraint
type TakesExact<
  Type extends ExactType<Type, WeakSet<any>>
> = {};
What would work, however, is to move the logic from ExactType to TakesExact. This wouldn’t reject subtypes, but would instead translate them to never, resulting in no error, just a likely unhelpful result (Listing 12).
Listing 12: TakesExact with the logic of ExactType
type TakesExact<Type> = Type extends WeakSet<any>
  ? WeakSet<any> extends Type
    ? Type
    : never
  : never;
type R1 = TakesExact<WeakSet<{}>>;
// OK, R1 = WeakSet<{}>
type R2 = TakesExact<Set<string>>;
// OK, R2 = never (NO error)
type R3 = TakesExact<Array<string>>;
// OK, R3 = never (NO error)
Regardless of how you approach it, rejecting parameters that are subtypes of a given type or enforcing an exact type at the type level isn’t possible. TypeScript has a blind spot here. But is this truly a problem?
How do we deal with subtype constraints at the type level?
The golden rule of programming in statically typed languages is: “Make invalid states unrepresentable.” If developers can write code in such a way that it prevents the program from taking wrong paths (e.g., by using exact type annotations to eliminate invalid inputs), they can save a lot of time debugging unwanted states. In principle, this rule is invaluable and should be followed whenever possible. However, it isn’t always feasible in the unpredictable world of JavaScript and TypeScript development.
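As a reminder of what the rule looks like when it works, here is a classic sketch (the RequestState type is invented for this example): a discriminated union makes contradictory states impossible to even write down.

type RequestState =
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "error"; error: Error };

// There is no way to construct a value that is both
// successful and failed at the same time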
To summarize, our goal was to create a program where a variable of type T can only be assigned values of type T and not any of its subtypes. We’ve succeeded in doing this in the runtime code, but we’ve failed at the type-programming level. However, according to the Liskov substitution principle, this restriction may be unnecessary. After all, a subtype of T inherently has all the functions of T, so why do we need the restriction in the first place?
In our case, the key factor is that a Set and a WeakSet have very different semantics, even though the WeakSet API is a subset of the API of Set. In TypeScript’s type system, this means that Set is evaluated as a subtype of WeakSet, leading to the assumption of a relationship and substitutability where none exists. This blind spot in the type system leads us to solve a problem that isn’t actually a problem at all, and which we ultimately can’t resolve, especially at the type level.
Instead, we have to accept that the TypeScript type system doesn’t correctly model every detail of JavaScript objects and their relationships. Structural subtyping is a very pragmatic approach for a type system that attempts to describe JavaScript, but it’s not particularly selective. If we find ourselves in a situation where we want to ban subtypes from certain program parts despite TypeScript’s resistance, we should ask ourselves two questions:
- Do we really need to exclude subtypes, or could we rewrite the program so that, like most programs, it works fine with subtypes?
- Are we trying to exclude subtypes to compensate for blind spots in the TypeScript type system (as in the Set/WeakSet example)?
For the second case, the solution is simple: don’t use the type system for this task. Trying to exclude subtypes is essentially working against what TypeScript is designed to do (a type system based on structural subtyping) and attempting to compensate for a limitation within TypeScript itself. A more pragmatic approach would be to simply defer the distinction between two types that TypeScript has assessed incorrectly to the runtime. In the case of Set and WeakSet, this is particularly trivial because JavaScript knows that these two objects are unrelated (Listing 13).
Listing 13: Runtime distinction between Set and WeakSet
new WeakSet() instanceof WeakSet
// > true
new Set() instanceof WeakSet
// > false
“Make invalid states unrepresentable” is still a valuable guideline. However, in TypeScript, we sometimes need to use methods other than the type system to implement this, because the type system can’t accurately model every relationship between JavaScript objects. The type system only looks at the API surfaces of objects, and sometimes seemingly related surfaces hide entirely different semantics. In such cases, we shouldn’t use the type system to solve the problem but rather use a more appropriate solution.
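Applied to our example, that more appropriate solution can be a simple runtime guard (a sketch; how the error is reported is up to the use case):

function takesWeakSet(m: WeakSet<any>) {
  if (!(m instanceof WeakSet)) {
    throw new TypeError("Expected a genuine WeakSet");
  }
  // From here on, m is guaranteed to have weak-reference semantics
}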
Blind Spot 2: Non-Modelable Intermediate States
Combining an object from a list of keys and a parallel list of values of the same length is trivial in JavaScript, as we see in Listing 14.
Listing 14: JS function combines two lists into one object
function combine(keys, vals) {
  let obj = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}
let res = combine(["foo", "bar"], [1337, 9001]);
// res = { foo: 1337, bar: 9001 }
Imperative programming doesn’t get any easier than this: you take a bunch of variables and manipulate them until the program reaches the desired target state. But as we all know, this programming style can be error-prone; every for loop is an off-by-one error in the making. So it makes sense to secure this code snippet as thoroughly as possible with TypeScript.
First, we need to ensure that keys and vals are tuples (i.e. lists of fixed, known length). The content of keys should be restricted to valid object-key data types, while vals can contain arbitrary values but must have exactly the same length as keys. This isn’t particularly difficult: we can constrain the type variable K for keys to be a tuple with matching content, and then use K as a template to create a tuple of the same length filled with any, which is exactly the right restriction for vals (Listing 15).
Listing 15: Function signature restricted to two tuples of equal length
type AnyTuple<Template extends any[]> = {
  [I in keyof Template]: any;
};

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  // ... Rest ...
}
With a type K of all keys and a type V of all values, we can then construct an object type that describes the result of the combine() operation. This is a bit complex, but we can manage. First, we need the auxiliary type UnionToIntersection< T >, which, as the name suggests, turns the members of a union T into an intersection type. The syntax looks a bit weird, and the underlying mechanics of distributive conditional types are equally strange. Overall, I prefer not to dive into the details right now. The key takeaway is that UnionToIntersection< T > turns a union into an intersection (Listing 16).
Listing 16: UnionToIntersection< T >
type UnionToIntersection<T> =
  (T extends any ? (x: T) => any : never) extends
    (x: infer R) => any ? R : never;

type Test = UnionToIntersection<{ x: number } | { y: string }>;
// Test = { x: number } & { y: string }
With this tool, we can now model a type that, like the combine() function, combines two tuples into one object, provided we get a little creative. Step 1 is to write a generic type that accepts the same type parameters as combine() (Listing 17).
Listing 17: Type Combine<K, V>
type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = {}
Step 2: A new tuple is temporarily created with a mapped type that has the same number of positions as K and V (Listing 18).
Listing 18: Two tuples become one tuple
type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: any;
};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = [any, any]
At first glance, this maneuver seems to lead away from our actual goal: we want to turn the tuples into an object, not into just another tuple. However, it gives us access to the indices of the input tuples via the type variable Index. This allows us to replace the any on the right side of the mapped type with an object type that models one name-value pair of our target object (Listing 19).
Listing 19: Two tuples become one tuple of objects
type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = [{ foo: 1337 }, { bar: 9001 }]
Now we get a tuple that at least contains all the building blocks of our target object. To unpack it, we index the tuple with number, which leads us to a union of the tuple contents (Listing 20). We can then combine this union into an object type using UnionToIntersection< T > (Listing 21). Mission accomplished!
Listing 20: Two tuples become a union of objects
type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
}[number];

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = { foo: 1337 } | { bar: 9001 }
Listing 21: Two tuples become one object
type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = UnionToIntersection<{
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
}[number]>;

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = { foo: 1337, bar: 9001 }
// Strictly speaking: { foo: 1337 } & { bar: 9001 }
The result is syntactically a bit strange, but at the type level it does what the combine() function does at runtime: two tuples in, combined object out (Listing 22).
Listing 22: Combine Type vs. Combine Function
type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Type "Test" = { foo: 1337, bar: 9001 }

let test = combine(["foo", "bar"], [1337, 9001]);
// Value "test" = { foo: 1337, bar: 9001 }
And if we have a type that models the exact same operation as a runtime function, we can logically use the former to annotate the latter. Right?
The problem with the imperative iteration
Before we add Combine<K, V> to the signature of combine(keys, values), we should fire up TypeScript and ask what it thinks of the current state of our function (without return type annotation). The compiler is not impressed (Listing 23).
Listing 23: Current state of combine()
function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i]; // <- Error here
  }
  return obj;
}
The key part of the error message is “No index signature with a parameter of type ‘string’ was found on type ‘{}’”. The reference to the type {} comes from the initialization of the obj variable two lines earlier. Since there is no type annotation, the compiler activates its type inference and determines the type {} for obj based on its initial value: the empty object. Naturally, this means we can’t add any additional fields to this type. But is this type even correct? After all, the function is supposed to return Combine<K, V>. So we annotate the variable with the type it should have at the end (Listing 24).
Listing 24: combine() with Combine<K, V> as annotation
function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj: Combine<K, V> = {}; // <- Error here
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}
Another error appears. This time, TypeScript reports “Type ‘{}’ is not assignable to type ‘Combine<K, V>’”, which is also understandable. After all, we’re claiming that the variable obj contains the type Combine<K, V> but we’re initializing it with the incompatible value {}. That can’t be correct either. So, what is the correct approach?
The truth is, nothing is correct. The operation that combine(keys, values) performs is not describable with TypeScript in the way it’s implemented here. The problem is that the result object obj mutates from {} to Combine<K, V> in several intermediate steps during the for loop, and that TypeScript doesn’t understand such state transitions. The whole point of TypeScript is that a variable has exactly one type, and it can’t change types (unlike in vanilla JavaScript). However, such type changes are essential in scenarios where objects are iteratively assembled because each mutation represents a new intermediate state on the way from A to B. TypeScript can’t model these intermediate states, and there is no correct way to equip the combine(keys, values) function with type annotations.
What to do with intermediate states that can’t be modeled?
The TypeScript type system is a huge system of equations in which the compiler searches for contradictions. This always happens for the program as a whole and without executing the program. This means that, by design, TypeScript can’t fully understand various language constructs and features, no matter how hard we try. Under these circumstances, the question arises: if we can’t do it right, what should we do instead?
One option is to align the runtime code more closely with the limitations of the type system. After all, there are various means of functional programming in runtime JavaScript. Instead of writing types that are oriented towards runtime JavaScript, it’s often possible to write runtime JavaScript that is based on the types. However, this doesn’t always work and may not be feasible in some teams. Some developers may enjoy writing JavaScript code in such a way that every loop is replaced by recursion, while others would like to keep their imperative language constructs, especially async/await and try/catch.
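To illustrate the first option: combine() can be rewritten so that the result object is created in a single step, with no mutable intermediate states. The following sketch reuses AnyTuple and Combine from the earlier listings; the function name is invented, and a type assertion is still required because Object.fromEntries() deliberately returns a loosely typed object:

function combineFunctional<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {
  // Build all entries up front and create the object in one go
  return Object.fromEntries(
    keys.map((key, index) => [key, vals[index]] as const)
  ) as Combine<K, V>;
}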
The more pragmatic solution is to accept the possibilities and limitations of our tools and work with what we have. Unmodelable intermediate states are bound to occur when writing low-level imperative code. If the type system can’t represent them, we need to handle them in other ways. Unit tests can ensure that the affected functions do what they’re supposed to do, documentation and code comments are always helpful, and for an extra layer of safety, we can use runtime type-checking if needed.
For functions with an imperative core that TypeScript can’t see through, I’ve adapted a feature from the Rust programming language. Rust’s type system is stricter than TypeScript’s, enforcing much more granular rules about data and objects. However, there is a way out: code blocks marked with the unsafe keyword can (to some extent) perform operations that the type system would normally prevent (Listing 25).
Listing 25: unsafe in Rust
// This Rust program uses the C language's foreign
// function interface for the abs() function, about
// which the Rust compiler can guarantee nothing
extern "C" {
    fn abs(input: i32) -> i32;
}

// To be able to call the C function abs() at all,
// the corresponding code must be wrapped in "unsafe"
fn main() {
    unsafe {
        println!(
            "Absolute value of -3 according to C: {}",
            abs(-3)
        );
    }
}
In its core idea, unsafe is somewhat comparable to the TypeScript type any, as in both cases developers assume responsibility for what the type checker would normally do. The advantage of unsafe in Rust is that it directly signals that the compiler doesn’t guarantee type safety for the affected area and that maximum caution is required when using it. This is precisely what we want to express for our combine(keys, values) function. First, we have to get the function to work by typing the result object as any (Listing 26).
Listing 26: combine() with any
function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj: any = {}; // <- anything goes
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}
This makes the code in the function executable and the compiler no longer complains, since any allows everything. We can now use our type Combine<K, V> to annotate the return type (Listing 27).
Listing 27: combine() with any and Combine<K, V>
function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> { // <- works
  /* Rest */
}
This works because the type any may also be assigned to another, stricter type. The function now has a very well-defined interface with strict input and output types, but a core that isn’t protected by the type system. For trivial functions, it’s sufficient to ensure correct behavior with unit tests, and to make the character of the function even more obvious, you could add unsafe to its name (Listing 28).
Listing 28: unsafeCombine()
function unsafeCombine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {
  /* Rest */
}
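A minimal check, sketched here with Node’s built-in test runner (any other test framework works just as well):

import { test } from "node:test";
import assert from "node:assert";

test("unsafeCombine zips keys and values", () => {
  const result = unsafeCombine(["foo", "bar"], [1337, 9001]);
  assert.deepStrictEqual(result, { foo: 1337, bar: 9001 });
});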
Anyone who calls this function can tell from its name that special care is required. Reading the source code makes it clear that the any annotation on the return object wasn’t added out of desperation, time pressure, or inexperience by the developers, but rather as a workaround for a TypeScript blind spot based on careful consideration. No tool is perfect (especially not TypeScript), and dealing with a tool’s limitations confidently and pragmatically is the hallmark of true professionals.
Blind spot 3: Side effects of mix-in modules
For most developers, ECMAScript modules are synonymous with the keywords import and export, but these don’t determine whether a piece of JavaScript is considered a module. For a JS engine, “ECMAScript module” is primarily a separate loading and operating mechanism for JavaScript programs in which
- permanent strict mode applies, without opt-out
- programs are loaded asynchronously and, similar to scripts with the defer attribute, executed by browsers only once the document has been fully parsed (around the DOMContentLoaded event)
- import and export can be used
Thus, the following JavaScript program can be considered and treated as an ECMAScript module:
// hello.js
window.alert("Hello World!");
This mini-module contains no code that violates strict mode. It can handle contact with a fully processed DOM without crashing and can be easily loaded as a module by browsers:
<script type="module" src="hello.js"></script>
The presence of the keywords import and export indicates that a JavaScript program is intended to be a module and is only executable in module mode. However, their absence doesn’t mean a program can’t be a module. In most cases, using import and/or export in modules makes sense, but not always: for example, if you want to activate a global polyfill, you don’t have to export anything. Instead, you can directly modify the relevant global objects. This use case may seem a bit unusual (after all, who regularly writes new polyfills?), but the world might be a little better if it weren’t so rare.
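A sketch of such a side-effect-only module (the feature detection and the JSON-based fallback are purely illustrative, not a spec-compliant polyfill):

// clone-polyfill.ts: exports nothing, only patches a global
if (!("structuredClone" in globalThis)) {
  // Minimal stand-in with well-known limitations
  // (no functions, Dates become strings, etc.)
  (globalThis as any).structuredClone = (value: unknown) =>
    JSON.parse(JSON.stringify(value));
}
export {}; // boilerplate so TypeScript treats the file as a module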
Modularity vs. fluent interfaces
Zod is a popular and powerful data validation library for JavaScript and TypeScript. It offers a highly convenient, fluent interface for describing data schemas, validates data against those schemas, and, as a special treat, can derive TypeScript types from the schemas (Listing 29).
Listing 29: Zod in action
import { z } from "zod";

const User = z.object({
  name: z.string(),
  mail: z.string().email(),
});

User.parse({
  name: "Test",
  mail: "test@example.com",
}); // > OK, no error

type User = z.infer<typeof User>;
// > TS type { name: string, mail: string }
The fluent interface with the simple chaining of method calls makes Zod particularly attractive. However, this chaining comes at a price: the z object contains every schema validation feature of Zod at all times, even if, as in the example above, only the object, string, and email functions are used. The result is that, when compiled and minified with esbuild, the 14 lines of code shown above turn into a bundle of over 50 kB. For frontend applications where loading performance is an issue, the use of Zod is therefore out of the question.
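To verify the numbers, such a bundle can be produced with a single esbuild invocation (the file name is hypothetical, and the exact size depends on the Zod and esbuild versions):

esbuild schema.ts --bundle --minify --outfile=bundle.js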
This doesn’t mean there is anything wrong with Zod. The inclusion of the entire library’s code in the bundle, even when only one feature is used, is an unavoidable consequence of its highly convenient API design, and on the server side it’s no issue at all. For Zod to work, the z object must be a normal JavaScript object carrying all of the features, which means that bundler-based tree shaking (dead code elimination) can’t be applied. The Zod developers decided to accept a larger bundle in exchange for a better API, perhaps because the frontend use case wasn’t that important to them, or because they considered convenience and developer experience more important. And that’s perfectly fine.
For comparison, the self-proclaimed “<1-kB Zod alternative” Valibot uses a completely different API design from Zod in order to take up only a fraction of the bytes (Listing 30).
Listing 30: Valibot in action
import {
  type Output,
  parse,
  object,
  string,
  email,
} from "valibot";

const User = object({
  name: string(),
  mail: string([email()]),
});

parse(User, {
  name: "Test",
  mail: "test@example.com",
}); // > OK, no error

type User = Output<typeof User>;
// > TS type { name: string, mail: string }
We see the same feature set as Zod with one key difference: there is no fluent interface anymore. Chained conditions (e.g., “this must be a string and the string must be an email address”) are modeled with individually imported and manually composed functions. This makes tree shaking easy for module bundlers like esbuild, but the API is no longer as convenient.
In other words, fluent interfaces are nice, but they don’t always align with the optimizations that frontend performance demands. Or do they?
The Versatile Swiss Army Knife (in JavaScript)
A Zod-style fluent interface can be implemented in JavaScript as well as in TypeScript using a few object methods that return this (Listing 31).
Listing 31: A basic fluent interface
const fluent = {
  object() {
    return this;
  },
  string() {
    return this;
  },
  email() {
    return this;
  },
};

fluent.string().email(); // Runs!
If we step away from the constraints of type safety and delve into pure JavaScript, we can assemble the object behind the fluent interface piece by piece rather than declaring it centrally (Listing 32).
Listing 32: Piece-wise fluent interface
const fluent = {};

fluent.object = function () {
  return this;
};

fluent.string = function () {
  return this;
};

fluent.email = function () {
  return this;
};

fluent.string().email(); // Success!
In JavaScript, there’s no reason not to split this piecemeal assembly across individual modules. We just have to ensure that there is some kind of singleton for the fluent object, which could be implemented, for example, by a core module that is imported everywhere (Listing 33).
Listing 33: Modularized Fluent Interface
// Core module "fluent.js"
const fluent = {};
export { fluent };

// Module "object.js"
import { fluent } from "./fluent.js";
fluent.object = function () {
  return this;
};

// Module "string.js"
import { fluent } from "./fluent.js";
fluent.string = function () {
  return this;
};

// Module "email.js"
import { fluent } from "./fluent.js";
fluent.email = function () {
  return this;
};
The core module fluent.js initializes an object that is imported and extended by all feature modules. This means that only explicitly imported features can be used (and take up kilobytes), but we retain a fluent interface comparable to Zod (Listing 34).
Listing 34: Modularized Fluent Interface in Action
// main.js
import { fluent } from "./fluent.js";
import "./string.js"; // patches string into "fluent"
import "./email.js"; // patches email into "fluent"

fluent.string().email(); // Works!
fluent.object(); // Error: object.js not imported
This minimal implementation of the modular fluent pattern is clearly just a demonstrator of what could be possible in principle: modularity and method chaining peacefully united. Hardly anyone outside of polyfill authors writes modules that are pure side effects and patch arbitrary objects. But why not? After all, we could have fluent interfaces and tree shaking. Admittedly, there is a small detail known as “TypeScript” that complicates matters.
Declaration merging, but unconditionally
TypeScript is no stranger to the established JavaScript practice of patching arbitrary objects. It’s an official part of the language via declaration merging. If we create two interface declarations with identical names, this isn’t considered a naming collision, but a distributed declaration (Listing 35).
Listing 35: Declaration Merging
interface Foo {
  a: number;
}

interface Foo {
  b: string;
}

declare let x: Foo;
// x: { a: number; b: string }
TypeScript uses this mechanism primarily to support extensions of string-based DOM APIs, such as document.createElement(). As is well known, this function fabricates an instance of the appropriate type from an HTML tag name (Listing 36).
Listing 36: document.createElement() in action
let a = document.createElement("a");
// a = HTMLAnchorElement
let t = document.createElement("table");
// t = HTMLTableElement
let y = document.createElement("yolo");
// y = HTMLElement (base type)
From the tag a, document.createElement() creates an instance of HTMLAnchorElement, and it considers the tag table to stand for an HTMLTableElement. And, as of August 2024, no specified element <yolo> exists, so the fallback HTMLElement is used. But how does TypeScript know all this? The answer is simple: at the core of TypeScript’s DOM type definitions, there is a large interface declaration that maps HTML tags to subtypes of HTMLElement (Listing 37).
Listing 37: HTMLElementTagNameMap
interface HTMLElementTagNameMap {
  "a": HTMLAnchorElement;
  "abbr": HTMLElement;
  "address": HTMLElement;
  "area": HTMLAreaElement;
  "article": HTMLElement;
  "aside": HTMLElement;
  "audio": HTMLAudioElement;
  "b": HTMLElement;
  // ...
}
The type definition of document.createElement() uses type-level programming to derive from this interface the type of instance the function returns for a given HTML tag, with the basic HTMLElement as a fallback for unknown HTML tags. Simplified, the relevant overloads look roughly like this:
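(A simplified sketch; the real declarations in lib.dom.d.ts also accept an optional ElementCreationOptions argument, which is omitted here.)

interface Document {
  // Known tag: look up the matching element type in the map
  createElement<K extends keyof HTMLElementTagNameMap>(
    tagName: K
  ): HTMLElementTagNameMap[K];
  // Unknown tag: fall back to the HTMLElement base type
  createElement(tagName: string): HTMLElement;
}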
And what do we do when unknown HTML tags become known HTML tags through Web Components? We merge new fields into the interface (Listing 38).

Listing 38: Declaration merging for web components
export class MyElement extends HTMLElement {
  foo = 42;
}

window.customElements.define("my-el", MyElement);

declare global {
  interface HTMLElementTagNameMap {
    "my-el": MyElement;
  }
}

let el = document.createElement("my-el");
el.foo; // number - el is a MyElement
The class declaration or the call to customElements.define() tells the browser at runtime that a new HTML element with a matching tag now exists, while the global interface declaration informs the TypeScript compiler about this new element. It’s therefore possible to extend global objects and have TypeScript record them correctly, and it is not even particularly difficult.
What happens if we move the above web component into its own module in our TypeScript project, fail to import this module, and still call document.createElement("my-el") (Listing 39)?
Listing 39: Unconditional Declaration Merging
// Import disabled!
// import "./component";
const el = document.createElement("my-el");
el.foo; // number - el is still a MyElement
The commented-out component remains completely unknown to the browser, while TypeScript still assumes that the affected HTML tag can be used. This happens because TypeScript types are considered on a per-package basis. If a global type declaration is part of an imported package, it’s considered to be in effect. At the individual module level, TypeScript can’t understand that specific imports are needed to implement the effect of the declared types at runtime.
What to do about the side effects of mix-in modules?
In principle, the workaround follows from a somewhat blunt observation: since TypeScript considers types on a per-package rather than a per-module basis, we can convert the relevant modules into (quasi-)packages. Depending on the build setup, this can require more or less effort. The main step is to create a new folder for our module packages in the project and to use the exclude option to remove it from the view of the main project’s tsconfig.json. The modules can now be moved to this folder hidden from the compiler, meaning that TypeScript only processes the type declarations within them once the corresponding modules/packages are actually imported.
The tricky question now is what our project and build system will accept as a “package”. If we don’t run the TypeScript compiler tsc at all, or run it only with the options noEmit or emitDeclarationOnly (i.e. when the compiler doesn’t have to output JavaScript, but at most .d.ts files), we can activate the compiler option allowImportingTsExtensions. This allows us to directly import the .ts files from the module packages folder, and thus activate only those global declarations that are actually imported (Listing 40).
Listing 40: Conditional declaration merging through packages
// packages/foo/index.ts
declare global {
  interface Window {
    foo: number; // new: window.foo
  }
}
export {}; // Boilerplate, ignore!

// packages/bar/index.ts
declare global {
  interface Window {
    bar: string; // new: window.bar
  }
}
export {}; // Boilerplate, ignore!

// index.ts
import "./packages/foo/index.ts";
window.foo; // <- Imported, works!
window.bar; // <- Not imported, error!
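For this first variant, the matching compiler configuration could look roughly like the following sketch (the folder name packages is just the convention from this example):

// tsconfig.json (sketch)
{
  "compilerOptions": {
    "moduleResolution": "bundler",
    "noEmit": true,
    "allowImportingTsExtensions": true
  },
  // Excluded from the root file set: declarations in here only
  // take effect once a module actually imports them
  "exclude": ["packages"]
}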
If, on the other hand, we need the JavaScript output of tsc, it gets a bit more complicated. In this case, the compiler option allowImportingTsExtensions isn’t available and the module packages have to be upgraded to more or less “correct” packages, including their own package.json. Depending on how many such “packages” you want to end up with in your project, this additional effort can either remain manageable or escalate into something completely unacceptable.
Side effects of mix-in modules remain a blind spot of TypeScript because, by fundamental design, the types known to the compiler are determined at the package level, not at the level of individual ECMAScript modules. Any workaround we can come up with has major or minor drawbacks. We can either accept those drawbacks, try to minimize their effects by adjusting our project or build setup, or simply live with the blind spot. But is it really a problem if the type system thinks an API is available when it isn’t? For a module with a fluent interface, definitely. For web components, maybe not. And for other use cases? It depends on the circumstances.
Conclusion: TypeScript isn’t perfect
TypeScript aims to describe JavaScript’s behavior using a static type system, and it does this far better than we might expect. Almost every bizarre behavior of JavaScript can be partially managed by the type checker, with true blind spots existing only in a few peripheral aspects. However, as we’ve seen in this article, these fringe aspects are not without practical relevance, and as TypeScript professionals, we need to acknowledge that TypeScript is not perfect.
So how do we deal with these blind spots in our projects? Personally, I’m a big fan of a pragmatic approach to using tools of all kinds. Tools like TypeScript are machines that we developers operate, not the other way around. When in doubt, I prefer to occasionally accept an any or a few data types that are 99% correct. If an API can be significantly improved through elaborate type juggling, it may justify spending hours in the type-level coding rabbit hole.
However, fighting against the fundamental limitations of TypeScript is rarely worth the effort. There is no prize for the smallest possible number of anys, no bonus for particularly complex type constructions, and no promotion for a function definition that is 0.1% more watertight. What matters is a functioning product, maintainable code, and efficiency in execution and development, always considering what is possible given the current circumstances and available tools.