Asynchrony in JavaScript – a tool for every problem

Aug 10, 2021

The case for asynchrony in JavaScript seems clear: Callbacks are evil and Promises are the only correct solution. No, actually async/await is the solution – but async/await relies on Promises, which in turn involve callbacks. So not everything is as clear as it seems. In the following, we will clarify why asynchrony is necessary.

Why asynchrony? Whether we’re in a browser context or a server context, the term is ubiquitous. Mostly, this has to do with the fundamental architecture of JavaScript engines: nearly all implementations have a single process at their core that does all the work. Of course, there are worker processes, both in the browser and in Node.js. Nevertheless, most applications are executed in only one process. This works because that process spends most of its time idle, waiting for something to do. This waiting, or rather the reaction to a certain event, is handled through JavaScript’s language features.

Admittedly, this sounds a bit abstract at first, so let’s take a look at an example. In the frontend of our application, data is to be loaded from the server. We use the Fetch API of the browser and formulate the request. Calling the Fetch API causes the request to be sent. If the browser worked synchronously at this point, that would be the end of any interaction between the user and our application, at least for a short while. No button could be clicked and no input could be made – the browser would freeze. Not a very good idea. So we’d rather take the asynchronous approach: we submit the request to the server, but instead of waiting for the response, we register a function to be called as soon as the result is available. In the meantime, the browser can respond to the user’s interactions again or do any other task. Once the server’s response is available, which in the best case takes only a fraction of a second, our registered function is executed and the handling of the response begins. With this solution, we gain better application responsiveness. For the user, many operations feel much more fluid than they actually are. In this case, the browser moves some of the work to another place and creates room to respond.
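The scenario just described can be sketched in a few lines; the URL and the handler here are purely illustrative:

```javascript
// Hypothetical endpoint; fetch returns a Promise immediately
// instead of blocking until the server responds.
const request = fetch('https://example.com/api/data')
  .then(response => response.json())
  .then(data => {
    // runs only once the server's response has arrived
    console.log(data);
  })
  .catch(error => console.error(error));

// execution continues right away; the browser stays responsive
console.log('request sent, still responsive');
```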

How is the asynchrony implemented? Whether you use functions as we just outlined, via async/await, or a stream API, is not important at this point. However, in order to answer the next question, “When do I use which solution and where are their respective limits?”, we need to take a closer look at the whole thing.

Callbacks – are they the scourge of JavaScript?

Let’s start with the most unpopular variant: callbacks. Whoever solves asynchronous tasks with callbacks eats small children for breakfast. At least, that has long been said when it comes to implementing asynchronous solutions. However, if we take a closer look at any JavaScript application, we will find callback functions in abundance. And for good reason: they are a tool for solving a whole category of problems and a basic building block for more advanced architectural patterns, such as event-driven architecture.

Let’s stay on topic and look at a concrete use case: events. In an application, we need to react to these, whether they are simple click events or events on a data stream. As soon as an event of a certain type occurs, the registered callback function is executed. Here we have one of the most common uses: simple event handlers. A callback function in this role is a very lightweight solution. The source code remains clear as long as the function is kept concise and code that is not directly related to event handling is cleanly moved elsewhere. Another big advantage of this solution is that the callback function can be executed multiple times – quite unlike a Promise. So for all asynchronous operations that can be handled easily and may even occur multiple times, callbacks can be used with a clear conscience.

But what about the much-cited callback hell? Callback hell, or the “Pyramid of Doom”, refers to nested callbacks: if the source code is cleanly indented, it forms a pyramid pattern. These structures are hard to maintain, and debugging them isn’t any fun. Things get really bad when the pyramid builder adds branches and tries to control the asynchronous program flow. This is exactly where JavaScript’s Promise API comes in.
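As a sketch of the event-handler role described above, here is a callback registered on Node’s built-in EventEmitter, which stands in for a click target or a data stream:

```javascript
import {EventEmitter} from 'events';

const source = new EventEmitter();
const seen = [];

// the callback is executed every time a 'data' event occurs,
// quite unlike a Promise, which settles only once
source.on('data', value => seen.push(value));

source.emit('data', 1);
source.emit('data', 2);
console.log(seen); // [1, 2]
```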

Promises – away with the callbacks

In simple terms, a Promise is an assurance that an asynchronous operation will be completed. According to the motto: everything will be fine. A Promise can assume different states. Initially, it is pending: no one can say yet whether everything will turn out well or whether an error will occur. This is only clarified in the next step, when the Promise settles and is either resolved or rejected, depending on whether the operation succeeded or failed. For both cases, you can register a callback function, which is executed accordingly. You can accomplish this with the then method of the Promise object, to which you can pass both callbacks. What’s even better is using then for the success case and the catch method for the failure case.
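A minimal sketch of these states; loadData is a made-up stand-in for any asynchronous operation:

```javascript
function loadData(shouldFail) {
  // the Promise is pending until resolve or reject is called
  return new Promise((resolve, reject) => {
    if (shouldFail) {
      reject(new Error('operation failed'));
    } else {
      resolve('operation succeeded');
    }
  });
}

loadData(false)
  .then(result => console.log(result))   // success case: 'operation succeeded'
  .catch(error => console.error(error)); // failure case
```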

Let’s summarize: instead of one callback function, we now have two. Great. And if they start to form pyramids, we will be hopelessly lost. But that’s exactly the point: that won’t happen and shouldn’t happen. As soon as you find a then inside the callback function of another then, it is a clear sign of an antipattern. At that point, you start building Promise pyramids and nullify all the advantages of this interface. The biggest advantage is that instead of the aforementioned pyramid structure, you can create a chain of Promises that build on each other. This is much more readable and makes debugging easier. Another advantage is error handling: instead of doing success and error handling in one piece as with callbacks, you can cleanly separate the two parts with Promises.

A typical example of asynchronous operations building on each other is working with files: first, check if the file exists, then open the file descriptor. Next, read from the file, and finally, close the opened resource again. Each of these sequential actions is asynchronous and has the potential for a pyramid structure (Listing 1).

doesFileExist(filename, () => {
  openFd(filename, (fd) => {
    readFile(fd, (content) => {
      // do stuff with the content
      closeFd(fd, () => {
        // ready
      });
    });
  });
});

The individual functions accept the callback function as the last argument and execute it once the operation is done. With a few adjustments, this structure can be converted to a Promise chain (Listing 2). To do this, remove the callback function and instead return a Promise object that you either resolve or reject when the operation is complete.
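A hedged sketch of such a conversion: openFd here is a simulated callback-style function, and openFdPromise is what its Promise-based wrapper might look like.

```javascript
// simulated callback-style function; a real one would talk to the OS,
// and 42 is a made-up file descriptor
function openFd(filename, callback) {
  setTimeout(() => callback(42), 0);
}

// the wrapper returns a Promise instead of accepting a callback
function openFdPromise(filename) {
  return new Promise((resolve, reject) => {
    openFd(filename, fd => {
      if (fd === undefined) reject(new Error('open failed'));
      else resolve(fd); // settle the Promise once the operation is done
    });
  });
}

openFdPromise('data.txt').then(fd => console.log('fd:', fd)); // fd: 42
```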

let fd;
doesFileExist(filename)
  .then(() => openFd(filename))
  .then(openedFd => {
    fd = openedFd;
    return readFile(fd);
  })
  .then(content => {
    // do stuff with the content
  })
  .then(() => closeFd(fd))
  .then(() => {
    // ready
  })
  .catch(e => console.log(e));

As a little bonus, the second example also includes rudimentary error handling. This means that if an error occurs somewhere in the chain, it will be passed through to the catch at the end and the application won’t crash just because there was an exception while reading the contents of the file due to missing permissions. You can use async/await to make the source code a little nicer, without the annoying callbacks (Listing 3).

try {
  await doesFileExist(filename);
  const fd = await openFd(filename);
  const content = await readFile(fd);
  // do stuff with the content
  await closeFd(fd);
  // ready
} catch (e) {
  console.log(e);
}

Async/await is based on Promises and hides the asynchronous nature of operations from the developer. However, execution of the code is paused at each step until the operation is completed. In the meantime, the application remains responsive and can focus on other tasks.
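The following sketch, with a made-up delay helper, shows that await pauses only the surrounding async function, not the whole engine:

```javascript
const order = [];

function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function task() {
  order.push('before await');
  await delay(10); // task is suspended here, the engine is not
  order.push('after await');
  console.log(order); // ['before await', 'synchronous work', 'after await']
}

task();
order.push('synchronous work'); // runs while task is still awaiting
```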

Streams – another tool in our toolbox

Developers who have worked with Angular before will rightly interject, “Stop, there’s more!” Angular uses the RxJS library in many places to handle asynchronous operations, such as communication with a server. RxJS takes the idea of asynchronous data streams and models each piece of information in that stream as an event that can pass through a series of operators from source to destination. With these operators, you can model the data stream and adjust the event along the way – but you can also merge multiple streams and implement many other operations.

The idea of streams, and the principle of hooking different operators or functions between the source and the destination of a data stream, is not new and isn’t winning any innovation awards. Streams are a very powerful concept but are often underestimated. They are significantly heavier than callbacks or Promises, but they are an excellent solution for more complex problems. This is supported by the integration of RxJS into the core of Angular as well as by the stream module of Node.js. This core module takes a similar approach to RxJS; however, it doesn’t solve the issue as elegantly, nor does it bring with it as extensive a collection of tools as RxJS. In Node.js, you can stream almost anything: from input on the console, to streaming from files or into files, to network or database streams. As a small example, let’s look at copying a file while converting the file’s contents to uppercase (Listing 4).

import {createReadStream, createWriteStream} from 'fs';
import toUpperCase from './util';

// 'source.txt' and 'target.txt' are example file names
createReadStream('source.txt')
  .pipe(toUpperCase)
  .pipe(createWriteStream('target.txt'));

The createReadStream function creates a readable stream – a data stream that can only be read from. On the other side of the stream is createWriteStream, which creates the counterpart with a writable stream. You can insert any number of intermediate pieces between these two endpoints. These are readable and writable stream implementations implemented as transform streams that produce output from a given input – asynchronously.
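As a sketch of such an intermediate piece, here is what the toUpperCase transform imported from './util' might look like, built on Node’s stream core module:

```javascript
import {Transform} from 'stream';

// readable and writable at once: every chunk written in
// comes back out upper-cased
const toUpperCase = new Transform({
  transform(chunk, encoding, callback) {
    // first argument is an error (none here), second the output chunk
    callback(null, chunk.toString().toUpperCase());
  }
});

toUpperCase.on('data', chunk => console.log(chunk.toString())); // HELLO
toUpperCase.write('hello');
```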


The creators of JavaScript spared us multi-threading, so fortunately we don’t have to deal with concepts like thread safety. This keeps the complexity of our applications down, but it limits our options when it comes to local scaling. Of course, it’s possible to achieve true parallelization of tasks via the detour of worker processes. However, this is too cumbersome in most cases and is rarely necessary.

JavaScript generally fares very well with its “one process, one thread” architecture because it offloads many tasks to other places, which means that in our applications we often just wait for certain things to happen. Here, both the language and the ecosystem offer a whole range of approaches: from callbacks to Promises and streams, there is something for everyone. The only important thing is finding the right solution for the problem at hand. Implementing a click handler for a button that a user can click more than once with a Promise is probably not the best idea.




