What’s New in Next.js 14? https://javascript-conference.com/blog/whats-next-in-nextjs/ Wed, 17 Jan 2024 09:12:55 +0000 Without a doubt, the Next.js JavaScript framework is generating the most attention in the front-end world. It remains to be seen if this attention is entirely positive, but undeniable progress is currently unfolding in this domain. In this article, we’ll examine the newest version, Next.js 14.

The post What’s New in Next.js 14? appeared first on International JavaScript Conference.

Why is Next.js so popular?

 

Version 14 is the first major release since the Next team published the App Router as a stable part of the framework in version 13.4. Why is Next so popular? After all, Next is relatively old for a JavaScript framework, with its initial release back in October 2016.

 

Next has always aimed to simplify React application development, especially when it comes to interaction with the server side. Although React supports server-side rendering with _ReactDOMServer_, a custom implementation based on a Node.js application that renders React components on the server side and sends the generated HTML to the client is anything but convenient. Next made this and similar features easy to use and available to a wider audience. The current release adds many stabilizations and features to Next. In the following, we’ll take a closer look at what this means for us.

 

:::div {.box}

Next.js 14: Summarizing the new features

  • Static rendering of server components at build time significantly improves performance.
  • Server Actions enable write access to the server and can be triggered via _form_ elements or the _startTransition_ function.
  • Introduces Turbopack as a separate build tool in Rust for improved performance; up to 95% faster code updates with Fast Refresh.
  • New interface between static prerendering and dynamic on-demand rendering through partial prerendering.
  • React’s Suspense component and Next’s streaming feature for efficient rendering of static and dynamic content.
  • Planned replacement of Webpack for better developer experience.

:::

 


The App Router – Entering a new era

 

The Next team introduced the App Router as a beta extension in version 13, and by version 13.4, the feature achieved stability. The App Router marks a paradigm shift in working with Next, since it uses React Server Components by default. These components are rendered exclusively on the server side and no longer maintain local state or hook into the component lifecycle. Furthermore, React Server Components don’t allow any user interaction. These restrictions are offset by two crucial features: unlike Client Components, Server Components can handle asynchronous operations, and they can access server-side APIs. It might sound abstract at first, but it becomes clear with a specific example.

 

:::div {.codelisting}

Listing 1: Data fetching in a server component

```javascript
import { getAllTodos } from './lib/todo.api';

export default async function List() {
  const todos = await getAllTodos();
  return (
    <div>
      {todos.map((todo) => (
        <div key={todo.id}>{todo.title}</div>
      ))}
    </div>
  );
}
```

:::

 

You can implement a server component as an _async_ function and use the _await_ keyword in the component function. Access to server-side APIs means you can use Node.js’ entire functional scope: you can access the file system, databases, or web services, all from a React component. The advantage is that you don’t have to hide this data access behind a combination of _useEffect_ and _useState_. Next performs all of these operations on the server side before the rendered HTML is sent to the client.
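To make this concrete, here is a framework-free sketch of the data-access half of such a component: a plain async function using Node’s fs/promises. The function name is illustrative, and a real Server Component would render the result as JSX rather than logging it.

```typescript
import { readdir } from "node:fs/promises";

// The data-access half of a Server Component: because this code only
// ever runs on the server, it may touch the file system directly.
async function listEntries(dir: string): Promise<string[]> {
  const entries = await readdir(dir);
  return entries.sort();
}

// A Server Component would `await` this inside the component function.
listEntries(".").then((names) => {
  console.log(Array.isArray(names)); // → true
});
```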

 

Another special Next feature is that the server components are rendered by default at build time. This process, known as static rendering, prepares the server-side structures before a client request and sends only the finished HTML to the client. Performance is similar to a conventional static web server. 

 

But for dynamic content, this type of rendering is only available to a limited extent. In some cases, Next renders dynamically instead of statically, especially if you use dynamic functions such as cookies or headers, rely on search parameters, or deliberately switch off caching for HTTP requests. Regardless of whether you use static or dynamic rendering, Next.js optimizes read access to data in both cases.

 

Up until now, you were on your own when it came to writing data. Write access usually had to be handled via client components and the browser’s Fetch API or additional libraries. The requests could then be received by Next.js API routes and handled accordingly.

 

With the _revalidatePath_ and _revalidateTag_ functions, Next gives us the option of rebuilding static content when changes are made, combining the advantages of static rendering and dynamic content. In version 14, Next’s Server Actions are stable and there’s now another tool that supports write access in your application.


 

Server Actions in Next.js

 

Hardly any other Next feature has attracted as much recent attention as Server Actions. This is due to a presentation of Next Server Actions in which a writing SQL statement was placed directly into a button component. The concept of Server Actions allows write access to be triggered from a component. There are three ways to trigger a Server Action:

 

  • In the _action_ attribute of a _form_ element: The Server Action is executed when the form is submitted.
  • Alternatively, you can use the _formAction_ attribute on buttons or input elements.
  • The third option is the _startTransition_ function: Here, you’re independent of a form.

 

As the name suggests, Next executes Server Actions on the server side. The client sends a request to the server, and the server processes the message and responds accordingly. Listing 2 shows a simple example of how to mark a to-do as done in a to-do list with Server Actions.

 

:::div {.codelisting}

Listing 2: Writing operations with server actions

```javascript
import { revalidatePath } from 'next/cache';
import { getAllTodos } from './lib/todo.api';

export default async function List() {
  const todos = await getAllTodos();
  return (
    <div>
      {todos.map((todo) => (
        <form
          key={todo.id}
          action={async () => {
            'use server';
            await fetch(`http://localhost:3001/todos/${todo.id}`, {
              method: 'PUT',
              headers: {
                'Content-Type': 'application/json',
              },
              body: JSON.stringify({ ...todo, done: !todo.done }),
            });
            revalidatePath('/');
          }}
        >
          {todo.title}
          <button>{todo.done ? 'done' : 'todo'}</button>
        </form>
      ))}
    </div>
  );
}
```

:::

 

In the example, there is a list of to-dos. Each data record is enclosed in a form whose _action_ attribute contains a Server Action. In the simplest case, this is an asynchronous function starting with the string 'use server'. This directive lets Next handle the form correctly.

 

In the example, the code displays the data record’s title and a button. When the button is clicked, the form is submitted and the Server Action activates: the browser sends a request to the Next backend, which executes the Server Action. In the server-side function, you can access the data record and, as in the example, send it to a REST API to persist the change. Then the _revalidatePath_ function is called to update the data. This causes Next to rebuild the statically generated data on the server, and you see the updated data in the browser.
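The update payload built in Listing 2 simply flips the done flag with spread syntax. Isolated in a small sketch (the Todo shape mirrors the listing):

```typescript
interface Todo {
  id: number;
  title: string;
  done: boolean;
}

// Copy every field of the record and invert `done`, leaving the
// original object untouched – this is the body sent to the REST API.
function toggleDone(todo: Todo): Todo {
  return { ...todo, done: !todo.done };
}

const todo: Todo = { id: 1, title: "Write article", done: false };
console.log(toggleDone(todo).done); // → true
console.log(todo.done);             // → false (original unchanged)
```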

 

This Next feature closes the circle of read and write operations on the server. However, server components and Server Actions weren’t invented by the Next team; they’re actually React features. Next integrates these features into the framework so that both can be used without any extra effort.

 

Server Actions are a big topic in Next 14, but not the only one. Another innovation concerns the build process of the framework.

 

Turbopack in Next.js

 

Like many other frameworks, Next relied on Webpack as a build tool. For a long time, Webpack was the first choice for build tools in client-side JavaScript applications. But there’s been a recent surge of challengers such as Rollup or esbuild. 

 

Vercel, the company behind Next, is launching its own Webpack challenger: Turbopack. Turbopack follows the strategy of several modern JavaScript tools and is implemented not in JavaScript or TypeScript, but in Rust. This choice of programming language yields even better performance. According to Vercel, they saw significant improvements for vercel.com, a relatively extensive Next application:

 

  • Local server startup time is 53% faster.
  • Code updates with Fast Refresh are up to 95% faster.

 

Although Turbopack is currently still in beta, it can already be used in Next with the _--turbo_ option. The Turbopack team is actively working on passing all of Next’s automated tests and currently has a success rate of 90%. Once all tests pass, Turbopack will be considered stable. Unlike Webpack, which can be used with almost any library and framework, Turbopack development focuses on supporting Next. In the medium term, this tool will replace Webpack in Next and provide a better developer experience and faster builds.

 

Partial pre-rendering

 

Next supports both static pre-rendering of content and dynamic on-demand rendering. But there are gradations in between, where part of the displayed page is rendered statically and certain content is rendered on demand. These options could already be used, but in a less convenient way. With the new partial pre-rendering, Next provides an interface that fulfills this requirement without any extra adjustments in your application.

 

At the heart of this approach lies React’s Suspense component. Next renders the outer frame around the Suspense boundary statically. For the Suspense component, the framework first renders the fallback content and later inserts the dynamic content in its place. This transition is facilitated by Next’s streaming feature, which lets the server-generated content be streamed over the same HTTP connection, minimizing overhead.

 

What’s next for Next?

 

Next’s rapid development brought significant advancements, but it’s also been accompanied by occasional issues and instability. Despite these challenges, the Next team is working diligently to quickly address shortcomings. Moreover, the release notes for each iteration are remarkably detailed and informative.

 

Even minor releases contain a large number of bug fixes that improve the framework’s overall stability. Given Next’s pivotal role in advancing the React ecosystem and its position as a technology pioneer, it’s not surprising for occasional glitches to arise.

 

One of Next’s biggest advantages is the flexibility it gives developers to choose the right interface. While the new App Router offers enhanced features and capabilities, you don’t have to adopt it immediately. Developers can still rely on the tried-and-tested Pages Router, renowned for its stability. For example, even with the new App Router, you have the choice of using Server Actions. You can also send requests from the client to the server as usual instead.

 

The Next team has firmly established a consistent approach of releasing new features, gathering community feedback, and integrating the insights into updates. I highly recommend exploring and incorporating these new features to stay ahead of the curve and harness Next’s full potential.

What’s New In Angular 17? https://javascript-conference.com/blog/whats-new-in-angular-17/ Thu, 16 Nov 2023 09:59:07 +0000 In early 2023, Sarah Drasner, Google’s Engineering Director and head of the Angular team, coined the term “Angular Renaissance” to describe the renewed focus on the framework for developing modern JavaScript applications over the last seven years.

This renewal is incremental, backwards compatible, and takes current trends from front-end frameworks into account. Developer experience and performance are the primary goals of this renewal movement. Standalone components and signals are two well-known features that have already emerged as part of this effort.

Angular 17 adds to the Angular Renaissance in fall 2023 with a new syntax for control flow, deferred loading of page areas, improved SSR support, and a CLI that now relies on esbuild, significantly speeding up builds.

In this article, I will discuss these new features using an example application (Figure 1). The source code used can be found under Example.

 

Figure 1: Example application

New syntax for control flow in templates

Angular has used structural directives such as *ngIf or *ngFor for the control flow since its inception. Because the control flow needed to be extensively revised for Angular 16’s signals anyway, the Angular team decided to give it a complete overhaul. The result is a new built-in control flow that stands out clearly from the rendered markup (Listing 1).

Listing 1

@for (product of products(); track product.id) {
    <div class="card">
        <h2 class="card-title">{{product.productName}}</h2>
        […]
    </div>
}
@empty {
    <p class="text-lg">No Products found!</p>
}

It’s worth noting the new @empty block, which Angular renders if the list to be iterated is empty.

Even if signals were a driver for this new syntax, they’re not a prerequisite for its use. The new control flow blocks can also be used with classic variables or with observables in conjunction with the async pipe.


The mandatory track expression allows Angular to identify individual elements that have moved within the iterated collection. This drastically reduces rendering effort and allows existing DOM nodes to be reused. When iterating collections of primitive types, e.g. number or string, the Angular team recommends using track with the pseudo variable $index (Listing 2).

Listing 2

@for (group of groups(); track $index) {
    <a (click)="groupSelected(group)">{{group}}</a>
    @if (!$last) { 
        <span class="mr-5 ml-5">|</span> 
    }
}

In addition to $index, the other values known from *ngFor are also available via pseudo variables: $count, $first, $last, $even, $odd. If required, their values can be stored in template variables using expressions (Listing 3).

Listing 3

@for (group of groups(); track $index; let isLast = $last) {
    <a (click)="groupSelected(group)">{{group}}</a>
    @if (!isLast) { 
        <span class="mr-5 ml-5">|</span> 
    }
}
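What track buys can be sketched without Angular: with a stable key, a renderer can look up and reuse an existing node instead of creating a new one. The Map below stands in for Angular’s retained DOM nodes; all names are illustrative:

```typescript
interface Product {
  id: number;
  name: string;
}

// Stand-in for the DOM nodes Angular keeps per tracked key.
const rendered = new Map<number, { label: string }>();

// With a stable key (the tracked id), an existing node is reused;
// without one, every re-render would rebuild the node from scratch.
function render(p: Product): { label: string } {
  let node = rendered.get(p.id);
  if (!node) {
    node = { label: p.name };
    rendered.set(p.id, node);
  }
  return node;
}

const first = render({ id: 1, name: "Pencil" });
const again = render({ id: 1, name: "Pencil" });
console.log(first === again); // → true: the node was reused
```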

 

The new @if simplifies the formulation of else/else-if branches (Listing 4).

Listing 4

@if (product().discountedPrice && product().discountMinCount) {
    […]
}
@else if (product().discountedPrice && !product().discountMinCount) {
    […]
}
@else {
    […]
}

In addition, different cases can also be distinguished with a @switch (Listing 5).

Listing 5

@switch (mode) {
    @case ('full') {
      […]
    }
    @case ('small') {
      […]
    }
    @default {
      […]
    }
}

In contrast to ngSwitch and *ngSwitchCase, the new syntax is type-safe. In this example, the individual @case blocks must have string values, since the variable mode passed to @switch is also of type string.

The new control flow syntax reduces the need to use structural directives, which are powerful but sometimes unnecessarily complex. Nevertheless, the framework will continue to support structural directives. On the one hand, there are some valid use cases for this and, on the other hand, the framework must be backwards compatible despite the many exciting new features.

Box: Automatic migration to built-in control flow

If you want to migrate your program code automatically to the new control flow syntax, you can now find a schematic for this in the @angular/core package:

ng g @angular/core:control-flow

Delayed loading of page areas

Typically, not all areas of a page are equally important. On a product detail page, product suggestions are secondary to the product itself. However, this changes when the user scrolls the product suggestions into the visible part of the browser window, the viewport.

For performance-critical web applications like online stores, it’s advisable to delay loading less important page sections. This ensures that the most important elements are available more quickly. Previously, anyone who wanted to implement this idea in Angular had to do it manually. Angular 17 drastically simplifies this task with the new @defer block (Listing 6).

Listing 6

@defer (on viewport) {
    <app-recommentations [productGroup]="product().productGroup">
        </app-recommentations>
}
@placeholder {
    <app-ghost-products></app-ghost-products>
}

The use of @defer delays the loading of the specified component (specifically the loading of the specified page area) until a certain event occurs. As a replacement, it presents the placeholder specified under @placeholder. In the demo application used here, ghost elements for the product suggestions are initially presented in this way (Figure 2).

 

Figure 2: Ghost Elements as placeholder

After loading, @defer swaps the ghost elements for the actual suggestions (Figure 3).

 

Figure 3: @defer exchanges the placeholder for the delayed loaded component

In this example, the on viewport event is used. It occurs as soon as the placeholder has been scrolled into the visible area of the browser window. Other supported events can be found in Table 1.

| Trigger | Description |
| --- | --- |
| on idle | The browser reports that no critical tasks are currently pending (default). |
| on viewport | The placeholder is scrolled into the visible area of the page. |
| on interaction | The user begins to interact with the placeholder. |
| on hover | The mouse cursor is moved over the placeholder. |
| on immediate | As soon as possible after loading the page. |
| on timer(<duration>) | After a certain time, e.g. on timer(5s) to trigger loading after 5 seconds. |
| when <condition> | As soon as the specified condition is met, e.g. when (userName !== null) |

Table 1: Trigger for @defer

The triggers on viewport, on interaction, and on hover require the specification of a @placeholder block by default. Alternatively, they can also refer to other parts of the page, which are referenced via a template variable:

<h1 #recommentations>Recommentations</h1> 
@defer (on viewport(recommentations)) { <app-recommentations […] />} 

In addition, @defer can be instructed to preload the bundle at an earlier time. As with the preloading of routes, this procedure ensures that the bundles are available as soon as they are needed:

@defer(on viewport; prefetch on immediate) { […] }

In addition to @placeholder, @defer also offers two other blocks: @loading and @error. Angular displays the former while it’s loading the bundle and the latter in the event of an error. To avoid flickering, @placeholder and @loading can be configured with a minimum display duration. The minimum property defines the desired value:

@defer ( […] ) { […] } 
@loading (after 150ms; minimum 150ms) { […] } 
@placeholder (minimum 150ms) { […] }

The after property also specifies that the loading indicator should only be displayed if loading takes longer than 150 ms.

Build performance with esbuild

Originally, the Angular CLI used webpack to build bundles. However, webpack is a bit outdated and is currently being challenged by newer tools that are easier to use and much faster. One of these tools is esbuild [esbuild], which has a notable adoption rate of over 20,000 downloads per week.

The CLI team has been working on an esbuild integration for several releases. In Angular 16, this integration was already included in the developer preview stage. As of Angular 17, this implementation is stable and is used as standard for new Angular projects via the Application Builder described below.


For existing projects, it’s worth considering switching to esbuild. To do this, update the builder entry in angular.json:

"builder": "@angular-devkit/build-angular:browser-esbuild"

In other words, add -esbuild at the end. In most cases, ng serve and ng build should then behave as usual, but much faster. For acceleration, the former uses the vite dev server [vite], which builds npm packages only when required. The CLI team has also planned further performance optimizations.

The call to ng build can also be drastically accelerated using esbuild. A speedup factor of 2 to 4 is often quoted as the range.

Easily enable SSR with the new Application Builder

Angular 17 has also drastically simplified support for server-side rendering (SSR). When generating a new project with ng new, a --ssr switch is now available. If it is not used, the CLI asks whether it should set up SSR (Figure 4).

 

Figure 4: ng new sets up SSR on request

To activate SSR later, simply add the @angular/ssr package:

ng add @angular/ssr

As the scope @angular makes clear, this package comes directly from the Angular team, serving as the successor to the Angular Universal community project. The CLI team has added a new builder that integrates SSR into ng build and ng serve. This application builder uses the above-mentioned esbuild integration to create bundles that can be used both in the browser and on the server side.

A call to ng serve starts a development server that both renders on the server side and delivers the bundles for operation in the browser. A call to ng build --ssr creates bundles for both the browser and the server, as well as a simple Node.js-based server whose source code is generated by the above-mentioned schematics.

If you cannot or don’t want to run a Node.js server, you can use ng build --prerender to prerender the application’s individual routes during the build.

Further innovations

In addition to the innovations discussed so far, Angular 17 brings numerous other enhancements:

  • The router now supports the View Transitions API. This browser API allows transitions, e.g. from one route to another, to be animated with CSS. This optional feature must be activated when setting up the router using the withViewTransitions function. For demonstration purposes, the enclosed example uses CSS animations based on the View Transitions API.
  • Signals, introduced in version 16 as a developer preview, are now stable. One important change is that they are now designed to be used with immutable data structures by default. This makes it easier for Angular to track changes to data managed by signals. Signals can be updated with the set method, which assigns a new value, or the update method, which maps the existing value to a new one. The mutate method has been removed because it doesn’t match the semantics of immutable data.
  • There is now a diagnostic that issues a warning if the getter is not called when reading signals in templates (e.g. {{ products }} instead of {{ products() }}).
  • Animations can now be loaded lazily.
  • The Angular CLI generates standalone components, directives, and pipes by default. By default, ng new also bootstraps a standalone component. This behavior can be deactivated with the --standalone false switch.
  • The ng g interceptor command generates functional interceptors.

Summary

Angular’s renaissance continues with version 17, which introduces several new features and improvements. One of the most notable changes is the new control flow, which simplifies the structure of templates. Thanks to deferred loading, less important page areas can be reloaded at a later point in time, speeding up the initial page load. Other features include the use of esbuild, which makes the ng build and ng serve commands run noticeably faster. In addition, the CLI now directly supports SSR and prerendering.

Custom Standalone APIs for Angular https://javascript-conference.com/blog/custom-standalone-apis-angular/ Wed, 25 Oct 2023 11:23:51 +0000 Together with standalone components, the Angular team has introduced the so-called standalone APIs. They provide a simple solution for library setup and do not require Angular modules. Popular libraries that already implement this concept include the HttpClient, Router, and NgRx. These libraries are based on several patterns that we find beneficial in our own projects. They also provide our library users with familiar structures and behaviors. In this article, I show three such patterns that I derived from the libraries mentioned.

The source code and examples are available here.

Example

A simple logger library is used here to show the different patterns (Fig. 1). The LogFormatter formats the messages before the Logger publishes them. It is an abstract class that serves as a DI token. The consumers of the logger library can customize the formatting by providing their own implementation. Alternatively, they can settle for a default implementation provided by the library.


Fig. 1: Structure of an exemplary Logger library

The LogAppender is another replaceable concept that takes care of attaching the message to a log. The default implementation just writes the message to the console.


While there can be only one LogFormatter, the library supports multiple LogAppenders. For example, the first LogAppender might write the message to the console, while the second also sends it to the server. To make this possible, each LogAppender is registered via a multiprovider. The injector returns all registered LogAppenders in the form of an array. Since an array cannot be used as a DI token, the example uses an InjectionToken instead:

export const LOG_APPENDERS =
  new InjectionToken<LogAppender[]>("LOG_APPENDERS");
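Stripped of Angular’s DI machinery, the multi-provider idea is simply this: every registration appends one implementation to an array, and consumers receive the whole array. A framework-free sketch (names are illustrative):

```typescript
interface LogAppender {
  append(msg: string): void;
}

// Each multi:true registration contributes one appender; the
// "injector" hands out the collected array as a whole.
const appenders: LogAppender[] = [];
const output: string[] = [];

appenders.push({ append: (msg) => output.push(`console: ${msg}`) });
appenders.push({ append: (msg) => output.push(`server: ${msg}`) });

// A consumer such as the LoggerService iterates over all of them.
for (const a of appenders) {
  a.append("hello");
}
console.log(output); // → [ 'console: hello', 'server: hello' ]
```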

An abstract LoggerConfig, which also acts as a DI token, defines the possible configuration options (Listing 1).

Listing 1

export abstract class LoggerConfig {
  abstract level: LogLevel;
  abstract formatter: Type<LogFormatter>;
  abstract appenders: Type<LogAppender>[];
}
 
export const defaultConfig: LoggerConfig = {
  level: LogLevel.DEBUG,
  formatter: DefaultLogFormatter,
  appenders: [DefaultLogAppender],
};

The default values for these configuration options are in the defaultConfig constant. The LogLevel in the configuration is a filter for log messages. It is an enum and, for simplicity, has only the values DEBUG, INFO, and ERROR:

export enum LogLevel {
  DEBUG = 0,
  INFO = 1,
  ERROR = 2,
}

The Logger only publishes messages that have the LogLevel specified here or a higher LogLevel. The LoggerService itself receives the LoggerConfig, the LogFormatter and an array with LogAppender via DI and uses them to log the received messages (Listing 2).

Listing 2

@Injectable()
export class LoggerService {
  private config = inject(LoggerConfig);
  private formatter = inject(LogFormatter);
  private appenders = inject(LOG_APPENDERS);
 
  log(level: LogLevel, category: string, msg: string): void {
    if (level < this.config.level) {
      return;
    }
    const formatted = this.formatter.format(level, category, msg);
    for (const a of this.appenders) {
      a.append(level, category, formatted);
    }
  }
 
  error(category: string, msg: string): void {
    this.log(LogLevel.ERROR, category, msg);
  }
 
  info(category: string, msg: string): void {
    this.log(LogLevel.INFO, category, msg);
  }
 
  debug(category: string, msg: string): void {
    this.log(LogLevel.DEBUG, category, msg);
  }
}
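The guard at the top of log() is the whole filtering rule: a message passes only if its level is at least the configured one. Isolated as a plain function (using a const object instead of the enum so the sketch stays dependency-free):

```typescript
// Same values as the article's LogLevel enum.
const LogLevel = { DEBUG: 0, INFO: 1, ERROR: 2 } as const;
type Level = (typeof LogLevel)[keyof typeof LogLevel];

// Mirrors `if (level < this.config.level) return;` in LoggerService.
function shouldLog(level: Level, configLevel: Level): boolean {
  return level >= configLevel;
}

console.log(shouldLog(LogLevel.DEBUG, LogLevel.INFO)); // → false
console.log(shouldLog(LogLevel.ERROR, LogLevel.INFO)); // → true
```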

The golden rule

Before we take a look at the patterns, I want to mention my golden rule for registering services: use @Injectable({providedIn: 'root'}) whenever possible! Especially in applications, but also in numerous situations in libraries, this approach is perfectly sufficient. It is simple, treeshakable, and even works with lazy loading. The latter aspect is less a merit of Angular than of the underlying bundler: everything that is used only in a lazy bundle is also accommodated there.


Pattern: provider factory

A provider factory is a function that returns all services for a reusable library. It can also register configuration objects as services or exchange service implementations.

The returned services are in a provider array that the factory wraps in the EnvironmentProviders type. This approach, designed by the Angular team, ensures that an application can register the providers only with so-called environment injectors. These are primarily the injector for the root scope and the injectors that Angular sets up via the routing configuration. The provider factory in Listing 3 illustrates this. It takes a LoggerConfig and sets up the individual services for the Logger.

Listing 3

export function provideLogger(
  config: Partial<LoggerConfig>
): EnvironmentProviders {
  // using default values for missing properties
  const merged = { ...defaultConfig, ...config };
 
  return makeEnvironmentProviders([
    {
      provide: LoggerConfig,
      useValue: merged,
    },
    {
      provide: LogFormatter,
      useClass: merged.formatter,
    },
    merged.appenders.map((a) => ({
      provide: LOG_APPENDERS,
      useClass: a,
      multi: true,
    })),
  ]);
}

The factory takes missing configuration values from the default configuration. The makeEnvironmentProviders function provided by Angular wraps the provider array into an instance of EnvironmentProviders. This factory allows consumers to set up the logger similarly to how they set up the HttpClient or router (Listing 4).

Listing 4

bootstrapApplication(AppComponent, {
  providers: [
    provideHttpClient(),
    provideRouter(APP_ROUTES),
    [...]
    provideLogger(loggerConfig),
  ]
})
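The defaulting step in Listing 3 is a plain object spread: properties present in the partial config win, everything else falls back to the defaults. Isolated in a sketch (the config shape is simplified here):

```typescript
interface Config {
  level: number;
  formatter: string;
}

const defaultConfig: Config = { level: 0, formatter: "default" };

// Same merge as in provideLogger: later spread properties override
// earlier ones, so only the supplied fields are replaced.
function mergeConfig(partial: Partial<Config>): Config {
  return { ...defaultConfig, ...partial };
}

const merged = mergeConfig({ level: 2 });
console.log(merged.level);     // → 2
console.log(merged.formatter); // → default
```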

Pattern: feature

The feature pattern allows optional functionality to be enabled and configured. If this functionality is not used, the build process removes it via treeshaking. The optional feature is represented by an object with a providers array. In addition, the object has a kind property that assigns the feature to a certain category. This categorization enables validation of the jointly configured features; for example, features can be mutually exclusive. An example of this can be found in the HttpClient: it prohibits the use of a feature for configuring XSRF handling if the consumers have simultaneously activated a feature for disabling XSRF handling.
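The mutual-exclusion check described above reduces to counting the features of one kind and rejecting duplicates. A framework-free sketch (kind values and names are illustrative):

```typescript
interface Feature {
  kind: string;
  providers: unknown[];
}

// Mirrors the kind-based validation a provider factory can perform:
// at most one feature of a given kind may be configured.
function validateSingle(features: Feature[], kind: string): void {
  const count = features.filter((f) => f.kind === kind).length;
  if (count > 1) {
    throw new Error(`Only one ${kind} feature allowed!`);
  }
}

validateSingle([{ kind: "color", providers: [] }], "color"); // ok
try {
  validateSingle(
    [{ kind: "color", providers: [] }, { kind: "color", providers: [] }],
    "color",
  );
} catch (e) {
  console.log("duplicate rejected"); // → duplicate rejected
}
```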

The logger library used here uses a ColorFeature that allows messages to be output in different colors depending on the LoggerLevel (Fig. 2).

Fig. 2

Fig. 2: Structure of the ColorFeature

An enum is used to categorize features:

export enum LoggerFeatureKind {
  COLOR,
  OTHER_FEATURE,
  ADDITIONAL_FEATURE
}

Another factory is used to provide the ColorFeature (Listing 5).

Listing 5

export function withColor(config?: Partial<ColorConfig>): LoggerFeature {
  const internal = { ...defaultColorConfig, ...config };
 
  return {
    kind: LoggerFeatureKind.COLOR,
    providers: [
      {
        provide: ColorConfig,
        useValue: internal,
      },
      {
        provide: ColorService,
        useClass: DefaultColorService,
      },
    ],
  };
}

The updated provider factory provideLogger takes on several features via an optional second parameter defined as an array for rest parameters (Listing 6).

Listing 6

export function provideLogger(
  config: Partial<LoggerConfig>,
  ...features: LoggerFeature[]
): EnvironmentProviders {
  const merged = { ...defaultConfig, ...config };
 
  // Inspecting passed features
  const colorFeatures =
    features?.filter((f) => f.kind === LoggerFeatureKind.COLOR)?.length ?? 0;
 
  // Validating passed features
  if (colorFeatures > 1) {
    throw new Error("Only one color feature allowed for logger!");
  }
 
  return makeEnvironmentProviders([
    {
      provide: LoggerConfig,
      useValue: merged,
    },
    {
      provide: LogFormatter,
      useClass: merged.formatter,
    },
    merged.appenders.map((a) => ({
      provide: LOG_APPENDERS,
      useClass: a,
      multi: true,
    })),
 
    // Providing services for the features
    features?.map((f) => f.providers),
  ]);
}

The provider factory uses the kind property to examine and validate the passed features. If all is well, it includes the feature’s providers in the EnvironmentProviders object. The DefaultLogAppender fetches the ColorService provided by the ColorFeature via dependency injection (Listing 7).

Listing 7

export class DefaultLogAppender implements LogAppender {
  colorService = inject(ColorService, { optional: true });
 
  append(level: LogLevel, category: string, msg: string): void {
    if (this.colorService) {
      msg = this.colorService.apply(level, msg);
    }
    console.log(msg);
  }
}

Since features are optional, the DefaultLogAppender passes the {optional: true} option to inject. This prevents an exception in cases where the feature, and thus the ColorService, has not been provided. Accordingly, the DefaultLogAppender must also check for null values.

This pattern occurs in the router, e.g. to configure preloading or to enable tracing. The HttpClient uses it to provide interceptors, to configure JSONP and to configure/disable XSRF token handling.

Pattern: configuration factory

Configuration factories extend the behavior of existing services. They can provide additional configuration options, but also additional services. An extended version of our LoggerService will serve as an illustration. It allows an additional LogAppender to be defined for each log category:

@Injectable()
export class LoggerService {
  readonly categories: Record<string, LogAppender> = {};
  […]
}

To configure a LogAppender for a category, we introduce a configuration factory named provideCategory (Listing 8).

Listing 8

export function provideCategory(
  category: string,
  appender: Type<LogAppender>
): EnvironmentProviders {
  // Internal/local token for registering the service
  // and retrieving the resolved service instance
  // immediately after.
  const appenderToken = new InjectionToken<LogAppender>("APPENDER_" + category);
 
  return makeEnvironmentProviders([
    {
      provide: appenderToken,
      useClass: appender,
    },
    {
      provide: ENVIRONMENT_INITIALIZER,
      multi: true,
      useValue: () => {
        const appender = inject(appenderToken);
        const logger = inject(LoggerService);
 
        logger.categories[category] = appender;
      },
    },
  ]);
}

This factory creates a provider for the LogAppender class. The call to inject yields an instance of it with all its dependencies resolved. The ENVIRONMENT_INITIALIZER token points to a function that Angular triggers when initializing the respective environment injector; here, this function registers the LogAppender with the LoggerService. Listing 9 shows how provideCategory is used in a routing configuration.

Listing 9

export const FLIGHT_BOOKING_ROUTES: Routes = [
 
  {
    path: '',
    component: FlightBookingComponent,
    providers: [
      // Setting up an NgRx feature slice
      provideState(bookingFeature),
      provideEffects([BookingEffects]),
 
      // Provide LogAppender for logger category
      provideCategory('booking', DefaultLogAppender),
    ],
    children: [
      {
        path: 'flight-search',
        component: FlightSearchComponent,
      },
      [...]
    ],
  },
];

This pattern is found, for example, in NgRx to register feature slices. The feature withDebugTracing offered by the router also uses this pattern to subscribe to the observable events in the router service.
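Stripped of Angular specifics, the initializer mechanic can be sketched in plain JavaScript. All names below are illustrative, not Angular APIs: providers are wired up first, then any registered initializer callbacks run once against the freshly created scope, just as Angular runs ENVIRONMENT_INITIALIZER callbacks when an environment injector is created.

```javascript
// Framework-free sketch of the initializer idea (hypothetical names).
function createScope(providers) {
  const services = {};
  const initializers = [];

  for (const p of providers) {
    if (p.initializer) {
      initializers.push(p.initializer);
    } else {
      services[p.provide] = p.useValue;
    }
  }

  // Comparable to Angular triggering ENVIRONMENT_INITIALIZER callbacks:
  // side-effectful setup runs after all services are wired.
  for (const init of initializers) {
    init(services);
  }
  return services;
}
```

The key point is the ordering: by the time an initializer runs, every service it wants to look up already exists, so it can safely mutate one service to register another.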

Conclusion

Standalone APIs allow you to set up libraries without Angular modules. Getting started with them is easy for consumers: they just need to look for a provider factory named provideXYZ. If necessary, additional features can be enabled with functions that follow the withABC naming scheme.

However, the implementation of such APIs is not always trivial. This is exactly where the patterns presented here help. Since they are derived from libraries of the Angular and NgRx teams, they reflect first-hand experience and design decisions.

The post Custom Standalone APIs for Angular appeared first on International JavaScript Conference.

]]>
Making the web more powerful with Project Fugu https://javascript-conference.com/blog/making-the-web-more-powerful-with-project-fugu/ Tue, 31 Mar 2020 13:53:35 +0000 https://javascript-conference.com/?p=30293 The term Progressive Web App (PWA) was coined in 2015. Since 2018 PWAs can be installed on all relevant operating systems and can also be executed offline. However, there is still a certain difference in functionality between PWAs and their native counterparts. With Project Fugu, the gap should continue to shrink.

The post Making the web more powerful with Project Fugu appeared first on International JavaScript Conference.

]]>
When Steve Jobs introduced the iPhone in 2007, there was no mention of apps or the App Store. Instead, developers were asked to write web applications based on HTML to bring third-party applications to the new smartphone. The advantages are obvious: web applications run cross-platform, execute in a sandbox and have no arbitrary access to native interfaces. Probably not least because of the last two points, Jobs flirted with the web as an application platform. But the low performance of the initially weak hardware and the web's limited capabilities at the time ultimately led to a change of mind at Apple, which later offered a Software Development Kit (SDK) for native development.

However, a lot has happened on the web since then: Modern JavaScript and WebAssembly engines achieve almost native performance and CSS3 animations run very smoothly. HTML5 brought a lot of interfaces to the web, including local storage technologies and access to the user’s location. WebGL brought hardware-accelerated 3D visualizations and WebRTC brought peer-to-peer based real-time communication. Then along came Progressive Web Apps, which can be run offline with the help of Service Workers and, thanks to the Web App Manifest, also be present on the home screen or in the program list of the respective operating system. From there, the PWA can hardly be distinguished from native applications. Figure 1 shows Spotify’s Progressive Web App, which is very similar to its native counterpart. Only a few years ago, we would not have expected all these functions from the web. But despite all efforts, a certain gap between web applications and native apps is still visible today.


Fig. 1: Spotify as a Progressive Web App: Hardly any different from the native version.

Mission: A more powerful web

The three major Chromium contributors, Google, Microsoft and Intel, now want to change this and have joined forces to create the Web Capabilities Project, better known by its code name, Project Fugu. The aim of the project is to bring missing features to the web that might still prevent developers from implementing their application as a web solution today. Where appropriate, Project Fugu APIs should be suitable for cross-platform use. In particular, developers should not need to distinguish between platforms, as other cross-platform approaches currently require; instead, the web browser is in charge of calling the correct native interface. All three companies have an interest in powerful web applications: Google's own web browser, Chrome, and its operating system, Chrome OS, for which Progressive Web Apps are ideally suited, are based on the open-source browser Chromium. Microsoft recently gave up implementing its own browser engine, and the new version of Microsoft Edge is also based on Chromium. Since the Microsoft Store application marketplace never really took off, the company welcomes the additional range of applications that Progressive Web Apps bring. Intel, on the other hand, sells hardware, and a stronger web would increase demand on two sides at once: clients and servers.

Today, developers are often forced to develop applications natively or use wrapper approaches like Apache Cordova or GitHub’s Electron. These projects package a web application in a native application framework for mobile devices (Cordova) or desktop systems (Electron). The web application can then use this native application framework to access all interfaces that are also available to native applications. The APIs are then provided to the web application in the form of JavaScript interfaces. Conversely, these approaches also require the application to be deployed via the respective app store (iOS, macOS, Android) or an executable file (Windows, macOS, Linux). Stores often require a paid membership, and apps must also meet certain criteria. Developers are dependent on the goodwill of the store provider. In the case of GitHub’s Electron, the application includes not only the HTML, CSS, and JS source files, but also a copy of the Node.js runtime to call native functions and Chromium to display the web application. These dependencies are pretty large, so that even a Hello World application already requires several dozen megabytes. The additional browser and node processes also lead to an overhead in the use of the working memory. Above all, the application no longer runs without installation in the browser.

With Project Fugu, native wrappers could soon be a thing of the past. Functions that previously required the use of application frameworks will be available directly in the browser in the future. For example, Project Fugu plans to introduce the Native File System API, which will give developers access to just that: the native file system. This interface can bring entire categories of applications to the web that were previously dependent on Electron or Cordova: image or video editing programs, office applications and productivity apps.

Of course, the security and privacy of users must always be taken into consideration too; not only do web applications get access to the interfaces, but so do advertisers and providers of potentially harmful websites. The Native File System API therefore only allows limited access. For example, no system directories can be read or manipulated – write access requires the consent of the user. In accordance with that, applications should also be able to register as file handlers for a specific file extension. In the future, double-clicking on a file with this extension will open the web application stored for it. Also useful for productivity applications is the Raw Clipboard Access API, which is intended to give developers in-depth access to the clipboard. Going forward, applications should be able to work with any format on the clipboard. Currently this is only possible for text and in some browsers for image data.

In addition to these, many other interfaces are planned, such as the Badging API to display a notification badge on the icon of an installed PWA, comparable to email applications or messenger services. Also, the application menu should be able to customize PWAs in the future, and on macOS developers should also be able to influence the tools displayed in the touch bar. The complete list of all interfaces, sorted by priority, can be seen in the Fugu API Tracker [1] in Figure 2. The implementation progress of the respective interfaces is also shown. To stay up to date, developers can register for notifications about document changes.


Fig. 2: The Fugu API Tracker lists all interfaces that will be implemented.

How Fugu interfaces are created

Ideas for interfaces usually come from the companies involved in Fugu or their partners, but in principle anyone can submit proposals under [2]. The Fugu team reviews the proposals, determines the need and assigns a priority to the proposal. The idea is first roughly outlined in a so-called Explainer. The problem is explained, possible effects on the security and privacy of the users are examined, the current state of technology is described, and the intended interface is outlined. Feedback is then sought from web developers and the other browser manufacturers – primarily Mozilla and Apple – and an interface draft is prepared based on this. This will also be taken directly to the standardization path. As soon as the draft appears to be stable, the interface is implemented directly in Chromium. There, it is initially available behind a browser flag, and later it can be tested in the course of a trial phase (Origin Trial) on individual websites (Origins) for a limited audience – without users having to activate the flag. This procedure should lead to an overall consensus among browser manufacturers. If this is not the case, there is a risk that the interface will only be available in Chromium-based browsers such as Google Chrome, Microsoft Edge, Opera or Samsung Internet.

Web Share API: Sharing content over the web

The Web Share API is a good example of an interface where the Fugu concept has succeeded. With the help of this API, content can be shared with other installed applications via the operating system's native share dialog. The interface works cross-platform on mobile and desktop devices and is also implemented by third-party browser manufacturers. Apple was actually the first to adopt it, even though the company is otherwise rather reluctant to give new web interfaces native power. Listing 1 shows an example of how the API is used.

Listing 1: Web Share API

async shareUrl() {
  if ('share' in navigator) {
    await navigator.share({
      title: 'Ready for PWAConf?',
      text: 'Check out this awesome conference.',
      url: 'https://pwaconf.io'
    });
  } else {
    // Fallback method
  }
}

The Web Share API proves to be particularly easy to use. Technically speaking, this API is an extension of the navigator interface in JavaScript, which makes web browser-related information and actions available. To share content, the share() method is provided on the navigator object. It receives a configuration object to which a text to be shared (text), a URL (url), or a title (title) can be passed. All properties are optional, but at least one must be specified on the object. The native share dialog is then displayed with all applications that can receive the respective information. The share() method returns a promise. If the content was successfully shared with another application, the promise is resolved. In all other cases it is rejected, for example if no application can receive the content to be shared or the user cancels the operation.

Figure 3 shows the Web Share API in use under Safari on macOS: clicking the button shown in the top part of the image calls the shareUrl() method from Listing 1. The specified URL can be shared on this system via the Messages app, AirDrop or the Notes app, as well as to the Simulator or the Reminders app. The share dialog of the Messages app is shown at the bottom of the image. At the top, the contacts to whom the message is to be sent are specified. The message is prefilled with the text and the URL; the title does not fit here and was therefore discarded. If desired, the user can adjust the message before either sending it or cancelling the process. On mobile devices with iOS and Android, the interface works in exactly the same way, so developers do not have to make any changes to accommodate the specific platform.


Fig. 3: Web Share API as implemented in Safari.

The listing also illustrates the concept of progressive enhancement: developers should first check whether an interface is available, which is exactly what the branch in Listing 1 does. If the Web Share API is not available on a system, no error should occur; calling the method shown above on a system without support for this interface would throw one. If the API is not available, the function could either be hidden in the user interface or a fallback implementation could be used. For example, the mailto: pseudo-protocol could be used to open the user's email app.
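Such a feature check with fallback can be wrapped in a small helper. In this hypothetical sketch, the navigator-like object is passed in as a parameter purely so the logic can be exercised outside a browser; in real code you would use the global navigator directly, as in Listing 1:

```javascript
// Hypothetical helper sketching progressive enhancement around the Web Share API.
async function shareOrFallback(nav, data) {
  if (nav && 'share' in nav) {
    await nav.share(data); // native share sheet
    return 'shared';
  }
  // Fallback: build a mailto: link that an email app can pick up instead.
  const subject = encodeURIComponent(data.title || '');
  const body = encodeURIComponent(`${data.text || ''} ${data.url || ''}`.trim());
  return `mailto:?subject=${subject}&body=${body}`;
}
```

The caller never sees an exception on unsupported systems; it simply receives a usable fallback instead.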

To prevent intrusive advertisements and harmful websites from displaying a content sharing request without user intervention, the interface can only be accessed as a result of user interaction, such as a mouse click or keystroke. Furthermore, it can only be used if the website has been transferred via Hypertext Transfer Protocol Secure (HTTPS). This requirement now applies to virtually all web interfaces that allow access to native functions.

The Web Share API is available in Google Chrome on Android since version 61, under macOS from Safari 12.1 and iOS from version 13.0 (Safari 12.2). This year, implementations in the desktop version of Google Chrome and in Mozilla Firefox will follow.

The second design of the interface also allows the sharing of files via the additional property files. Using the Web Share Target API, Progressive Web Apps can also register as a target for a sharing operation. Both features were developed as part of Project Fugu and are currently only supported in Chromium.

Roadmap for developers

Web developers should strive to implement their application as a Progressive Web App, i.e. a pure web solution. Developers should only use additional or alternative wrapping solutions if a certain function is not available on the web. This is also the case if developers have an urgent reason to make their application available in the iOS App Store or as an executable file. The investment in a web application always pays off in the end. Cordova and Electron should be seen as a kind of polyfill, providing functions for a transitional period that are not yet available on the web today. As soon as the respective function is provided directly on the web, the wrapper can be dropped.

If developers reach a limit on the web, they should report the desired use case and the missing interface to the browser manufacturers, for example via the API request function for Fugu in [2] or the bug trackers of the respective engines.

Conclusion: Good prospects for web developers

Project Fugu strives to make Progressive Web Apps even better. Capabilities that today still require native applications or wrappers like Cordova or Electron, could in the future be provided directly in the browser. The list of planned capabilities promises undreamt-of possibilities for web applications – for web developers, these might shape a bright future.

Sources

[1] https://goo.gle/fugu-api-tracker

[2] https://bit.ly/new-fugu-request

The post Making the web more powerful with Project Fugu appeared first on International JavaScript Conference.

]]>
Why you should use date-fns for manipulating dates with JavaScript https://javascript-conference.com/blog/why-you-should-use-date-fns-for-manipulating-dates-with-javascript/ Wed, 27 Mar 2019 10:43:43 +0000 https://javascript-conference.com/?p=26964 Issues related to working with dates are as old of a problem as it gets with JavaScript. In theory it is possible to perform date calculations with JavaScript’s date object, if it weren’t for the many weaknesses of the API. Fortunately, there are helpful libraries that can save us a lot of work. One of them is date-fns.

The post Why you should use date-fns for manipulating dates with JavaScript appeared first on International JavaScript Conference.

]]>
One problem, for example, is the handling of different time zones with the date object, since JavaScript uses the system's current time zone as a basis. This can lead to difficulties, especially in applications that span multiple time zones. The representation of the month is another peculiarity of JavaScript's date object: January, for instance, is specified with the value 0. For days and years, however, JavaScript follows the expected convention, so the 5th of the month is represented by the number 5.
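The mixed convention is easy to trip over. A short example with the native Date constructor:

```javascript
// The month argument is zero-based; day of month and year are not.
const releaseDate = new Date(2019, 0, 5); // 5 January 2019, not 5 February

console.log(releaseDate.getFullYear()); // 2019
console.log(releaseDate.getMonth());    // 0 → January
console.log(releaseDate.getDate());     // 5 → the 5th
```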

When you implement an application that works with date values, you will often stumble upon the problem that you have to create, modify, and output them. With JavaScript's built-in tools, creating and outputting dates is easily done. Modifying a date, however, is another matter: if you want to subtract two days from a date, for example, there is no dedicated API for it. Of course, you can get the timestamp of the date and subtract the corresponding number of milliseconds to reach the target date, but this solution is neither easy to read and maintain nor particularly elegant. Because of this issue, and many more, numerous libraries have been created over the years to make handling date values in JavaScript easier. One of the most widespread solutions on the market is Moment.js. Some time ago, though, the top dog got a serious competitor: the date-fns project.
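The manual workaround just described looks like this: drop down to the timestamp, subtract milliseconds, and build a new Date from the result.

```javascript
// Subtracting two days with plain JavaScript via the timestamp.
const TWO_DAYS_IN_MS = 2 * 24 * 60 * 60 * 1000;

function twoDaysBefore(date) {
  return new Date(date.getTime() - TWO_DAYS_IN_MS);
}
```

It works, but the intent is buried in unit conversions; a call like date-fns' subDays(date, 2) reads considerably better.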

How does date-fns differ from Moment.js?

The first and one of the most important differences is already in the name of the project: fns stands for functions. date-fns is a collection of functions that allow you to work with date values. Moment.js, in contrast, takes an object-oriented approach: you create a Moment instance and work with the methods of this object. This affects the package size, of course. Moment.js ships its entire interface by default; you can optimize the package, but that requires additional steps. With date-fns, you only load the functions that you really need. In a backend application with Node.js this doesn't matter too much, since the package size is a minor concern there. But you can use date-fns, just like Moment.js, in the browser, and there the package size is decisive.

The developers of date-fns have not only made sure that the project is divided into many small and largely independent functions, but also that these functions are pure functions. For example, you pass a date object and the number of hours to add to the addHours function. As a result, you get a new date object that lies the specified number of hours after the input. There are no side effects, such as direct modification of the input.
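The pure-function contract is easy to demonstrate with a hypothetical re-implementation (this is not the library's actual code): a new Date is returned and the input stays untouched.

```javascript
// Hypothetical re-implementation illustrating the pure-function contract.
function addHours(date, hours) {
  const result = new Date(date.getTime()); // copy first, never mutate the input
  result.setHours(result.getHours() + hours);
  return result;
}
```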

How to install date-fns?

Like most other JavaScript libraries, date-fns is available as an npm package and can be installed via npm. Use the command npm install date-fns in your project to do so. The package will automatically be added to your package.json file as a dependency. Likewise, you can use Yarn with the yarn add date-fns command.

How to use it?

You can use the date-fns package with both the CommonJS module system and also with ES modules. In the following example, you use the format function to output the current date. Listing 1 shows you how to work with the CommonJS module system.

 

const { format } = require('date-fns');

const date = new Date();

console.log(`Today is: ${format(date, 'DD.MM.YYYY')}`);

Newer versions of Node.js also support the keywords import and export in order to import and export modules respectively. At this point you can either import the entire date-fns package and access the required functions, or you can take advantage of the fact that each function is available in a separate file, so you can import the format function individually. You can see how this works in Listing 2.

import format from 'date-fns/format';
const date = new Date();
console.log(`Today is: ${format(date, 'DD.MM.YYYY')}`);

 


Formatting date values

With format, you have already learned about the most important function for formatting date values. You use the format string to specify which parts of the date you want to output and how. A comprehensive reference of the individual tokens that you can use in the format string can be found at https://date-fns.org/docs/format.

In addition to this function, you have access to other auxiliary functions such as the distanceInWords function that outputs the difference between two date values in a readable form.

Date arithmetic

A shortcoming of JavaScript's date object already mentioned above is its lack of support for date arithmetic: it is not possible to perform addition or subtraction without further ado. date-fns provides a number of auxiliary functions for this. These functions follow a uniform naming scheme: first you specify the operation, followed by the unit you want to work with. This results in function names such as addMinutes or subYears. All functions in this category accept a date object as the first argument and, as the second, a number indicating how many units you want to add or subtract. For example, to add three quarters of an hour to the current date, you can use the code from Listing 3.


const { addMinutes, addHours, format } = require('date-fns');

const date = addMinutes(addHours(new Date(), 1), 45);

console.log(format(date, 'DD.MM.YYYY HH:mm'));

Comparisons

The comparison functions of date-fns are also very helpful. With them, you can determine whether one date lies before or after another, or whether a certain date lies in the future or in the past. Listing 4 uses the isAfter and isFuture functions to illustrate their use.

const { isAfter, isFuture, addHours } = require('date-fns');

const date1 = new Date();
const date2 = addHours(new Date(), 5);
console.log(`Date1 is ${isAfter(date1, date2) ? 'after' : 'before'} Date2`);
console.log(`Date2 is ${isFuture(date2) ? 'not ' : ''}in the past`);

Further operations

The date-fns package offers you not only simple operations such as addition, but also more complex operations such as the areRangesOverlapping function, which you can use to determine whether two time spans overlap.
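Under the hood, an overlap check like this boils down to the classic interval test. The following is a hypothetical re-implementation, not the library's code: two ranges overlap exactly when each one starts before the other one ends.

```javascript
// Classic interval-overlap test (hypothetical re-implementation).
function rangesOverlap(startA, endA, startB, endB) {
  // Date objects compare numerically via their timestamps.
  return startA < endB && startB < endA;
}
```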

With the min and max functions, you can find the earliest or latest date in a series of date values.
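Conceptually, this is a comparison of timestamps. A hypothetical sketch of what such functions do, not the library's actual implementation:

```javascript
// Find the earliest/latest Date by comparing timestamps (hypothetical sketch).
function minDate(dates) {
  return new Date(Math.min(...dates.map((d) => d.getTime())));
}

function maxDate(dates) {
  return new Date(Math.max(...dates.map((d) => d.getTime())));
}
```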

With the help of the compareAsc and compareDesc functions, you can also sort arrays of date values. These functions are passed to an array's sort method as comparison functions. Listing 5 shows an example.


const { compareAsc } = require('date-fns');

const sortedDates = [
  new Date(2001, 1, 1),
  new Date(2003, 3, 3),
  new Date(2002, 2, 2),
].sort(compareAsc);

console.log(sortedDates);

Conclusion

A lot of what packages like Moment.js or date-fns offer can also be achieved with native JavaScript. However, in these cases the readability of the source code suffers greatly. This is one of the most important arguments in favour of using these libraries, along with their smoothing over of the JavaScript date object's peculiarities.

The features of date-fns shown here represent only a small part of the library and are meant to give you a taste of its functional scope. With numerous extensions and very good support for internationalization, you should at least shortlist date-fns the next time you choose a date library for one of your applications.

The post Why you should use date-fns for manipulating dates with JavaScript appeared first on International JavaScript Conference.

]]>
5 simple Rules to implement Microservices Architecture https://javascript-conference.com/blog/five-rules-of-microservices/ Mon, 12 Jun 2017 08:30:17 +0000 https://javascript-conference.com/?p=23611 The software industry likes to create and follow hype. Unfortunately, the rate at which we can adapt to the hype is much smaller than the rate we generate it. It’s understandable, therefore, that we are sometimes tempted to take whatever solution we currently have at hand and simply rebrand it with minimal changes to suit the current fashion.

The post 5 simple Rules to implement Microservices Architecture appeared first on International JavaScript Conference.

]]>

Microservices is one example. It is an amazing opportunity to reshape how we build server software, but since it implies huge changes and players want to be first to market as "solution providers" who adopt microservices, they just rebrand their solutions without much change. In doing so, they miss the fundamental principles of microservices.

Microservices is about making software approachable — it’s about enabling. So as an industry, we should be doing our best to make microservices accessible to everyone. Microservices doesn’t require a huge infrastructure investment. It doesn’t require you to maintain several technologies just to run your app.

As long as you follow 5 simple rules, you will benefit from microservices regardless of the technologies you are using.

  1. Zero-configuration: Any microservices system will likely have hundreds of services. A manual configuration of IP addresses, ports and API capabilities is simply infeasible.
  2. Highly-redundant: Service failures are common in this scenario. So it should be very cheap to have copies of your services at your disposal with proper fail-over mechanisms.
  3. Fault-tolerant: The system should tolerate and gracefully handle miscommunication, errors in message processing, timeouts and more. Even if certain services are down, all the other unrelated services should still function.
  4. Self-healing: It’s normal for outages and failures to occur. The implementation should recover any lost service and functionality automatically.
  5. Auto-discovery: The services should automatically identify new services that are introduced to the system to start communication. This should require neither manual intervention nor downtime.

If your architecture demonstrates these capabilities and if you are breaking down the fulfillment of most of your API requests into several independent services, then, yes, you are doing microservices.
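Rule 3 in particular can be sketched in a few lines. The helper below is a hypothetical illustration, not part of any specific library: a flaky service call is retried a few times, and if it keeps failing, the caller gets a fallback value instead of a cascading crash.

```javascript
// Minimal sketch of fault tolerance (hypothetical helper): retry, then degrade.
async function callWithRetry(fn, { retries = 3, fallback = null } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // A real system would also log the error and back off between attempts.
    }
  }
  return fallback; // degrade gracefully instead of taking the caller down
}
```

Unrelated services keep functioning because a failing dependency is reduced to a degraded answer rather than an unhandled exception.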

Join me at my talk Zero-Configuration Microservices with Node.js and Docker at the International JavaScript Conference to learn more about the true properties of microservices and how you can realize such a system with only Node.js and a handy library called cote.
The microservices revolution is upon us. Let’s kick it off and be a part of the change.


The post 5 simple Rules to implement Microservices Architecture appeared first on International JavaScript Conference.

]]>