What’s New in Next.js 14?

Without a doubt, the Next.js JavaScript framework is generating the most attention in the front-end world. It remains to be seen if this attention is entirely positive, but undeniable progress is currently unfolding in this domain. In this article, we’ll examine the newest version, Next.js 14.

The post What’s New in Next.js 14? appeared first on International JavaScript Conference.

Why is Next.js so popular?

 

Version 14 is the first major release since the Next team published the App Router as a stable part of the framework in version 13.4. Why is Next so popular? After all, Next is relatively old for a JavaScript framework, initially released back in October 2016.

 

Next has always aimed to simplify React application development, especially when it comes to interaction with the server side. Although React supports server-side rendering with _ReactDOMServer_, a custom implementation based on a Node.js application that renders React components on the server side and sends the generated HTML to the client is anything but convenient. Next made this and similar features easy to use and available to a wider audience. The current release adds many stabilizations and features to Next. In the following, we’ll take a closer look at what this means for us.

 

:::div {.box}

Next.js 14: Summarizing the new features

  • Static rendering server components at build time significantly improves performance.
  • Server actions enable write access to the server and can be triggered with _form_ elements or _startTransition_ function.
  • Introduces Turbopack as a separate build tool in Rust for improved performance; up to 95% faster code updates with Fast Refresh.
  • New interface between static prerendering and dynamic on-demand rendering through partial prerendering.
  • React’s Suspense component and Next’s streaming feature for efficient rendering of static and dynamic content.
  • Planned replacement of Webpack for better developer experience.

:::

 

iJS Newsletter

Join the JavaScript community and keep up with the latest news!

The App Router – Entering a new era

 

The Next team introduced the App Router as a beta extension in version 13, and by version 13.4 the feature achieved stability. The App Router marks a paradigm shift in working with Next, since it uses React Server Components by default. These components are rendered exclusively on the server side and no longer maintain local state or influence the component lifecycle. Furthermore, React Server Components don’t allow any user interaction. These restrictions are offset by two crucial features: unlike Client Components, Server Components can handle asynchronous operations, and they can access server-side APIs. It might sound abstract at first, but it becomes clear with a specific example.

 

:::div {.codelisting}

Listing 1: Data fetching in a server component

```javascript
import { getAllTodos } from './lib/todo.api';

export default async function List() {
  const todos = await getAllTodos();

  return (
    <div>
      {todos.map((todo) => (
        <div key={todo.id}>{todo.title}</div>
      ))}
    </div>
  );
}
```

:::

 

You can implement a Server Component as an _async_ function and use the _await_ keyword in the component function. Access to server-side APIs means you can use Node.js’ entire functional scope: you can access the file system, databases, or web services, all from a React component. The advantage is that you don’t have to hide this behind a combination of _useEffect_ and _useState_. Next performs all of these operations on the server side as part of the rendering process.

 

Another special Next feature is that the server components are rendered by default at build time. This process, known as static rendering, prepares the server-side structures before a client request and sends only the finished HTML to the client. Performance is similar to a conventional static web server. 

 

But for dynamic content, this type of rendering is only available to a limited extent. There are some cases where you will render dynamically instead of statically. This is especially true if you use dynamic functions such as cookies, headers, or search parameters, or if you deliberately switch off caching for HTTP requests. Regardless of whether you use static or dynamic rendering, Next.js optimizes read access to data in both cases.
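Opting out of static rendering can also be expressed declaratively via the App Router’s route segment config. The following sketch shows this option; the file path and URL are illustrative:

```javascript
// Route segment config (e.g. in app/todos/page.js).
// Forces dynamic, on-demand rendering for this route:
export const dynamic = 'force-dynamic';

// Alternatively, opt out of caching for a single request
// inside a Server Component (URL is illustrative):
// await fetch('http://localhost:3001/todos', { cache: 'no-store' });
```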

 

Up until now, you were on your own when it came to writing data. Write operations usually had to be handled via Client Components and the browser’s Fetch API or additional libraries. These requests could then be received by Next.js API routes and handled accordingly.

 

With the _revalidatePath_ and _revalidateTag_ functions, Next gives us the option of rebuilding static content when changes are made, combining the advantages of static rendering and dynamic content. In version 14, Next’s Server Actions are stable and there’s now another tool that supports write access in your application.
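To build an intuition for _revalidateTag_, here is a toy model in plain JavaScript. This only illustrates the idea (invalidate every cached entry carrying a given tag), not Next’s actual implementation:

```javascript
// Toy model of tag-based cache invalidation -- an illustration of the
// idea behind revalidateTag, not Next.js internals.
const cache = new Map(); // key -> { value, tags }

function cacheSet(key, value, tags) {
  cache.set(key, { value, tags });
}

function revalidateTag(tag) {
  // Drop every cached entry that carries the given tag,
  // so the next read has to re-render/re-fetch it.
  for (const [key, entry] of cache) {
    if (entry.tags.includes(tag)) cache.delete(key);
  }
}

cacheSet('/todos', '<html>...</html>', ['todos']);
cacheSet('/about', '<html>...</html>', []);
revalidateTag('todos');
console.log(cache.has('/todos')); // false -- will be re-rendered on demand
console.log(cache.has('/about')); // true  -- untouched
```

The real functions additionally trigger a re-render of the affected routes; the toy model only captures the invalidation step.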

EVERYTHING AROUND ANGULAR

Explore the iJS Angular Development Track

 

Server Actions in Next.js

 

Hardly any other Next feature has attracted as much recent attention as Server Actions. This is due to a presentation of Next Server Actions in which an SQL write statement was placed directly inside a button component. The concept of Server Actions allows write operations to be triggered from a component. There are three ways to trigger a Server Action:

 

  • In the _action_ attribute of a _form_ element: the Server Action is executed when the form is submitted.
  • Alternatively, you can use the _formAction_ attribute on buttons or input elements.
  • The third option is the _startTransition_ function: here, you’re independent of a form.

 

As the name suggests, Next executes Server Actions on the server side. The client sends a request to the server, and the server processes the message and responds accordingly. Listing 2 shows a simple example of how to mark a to-do as done in a to-do list with Server Actions.

 

:::div {.codelisting}
Listing 2: Writing operations with server actions
```javascript
import { revalidatePath } from 'next/cache';
import { getAllTodos } from './lib/todo.api';

export default async function List() {
  const todos = await getAllTodos();

  return (
    <div>
      {todos.map((todo) => (
        <form
          key={todo.id}
          action={async () => {
            'use server';
            await fetch(`http://localhost:3001/todos/${todo.id}`, {
              method: 'PUT',
              headers: {
                'Content-Type': 'application/json',
              },
              body: JSON.stringify({ ...todo, done: !todo.done }),
            });
            revalidatePath('/');
          }}
        >
          {todo.title}
          <button>{todo.done ? 'done' : 'todo'}</button>
        </form>
      ))}
    </div>
  );
}
```

:::

 

In the example, there is a list of to-dos. Each data record is enclosed in a form whose _action_ attribute contains a Server Action. In the simplest case, this is an asynchronous function starting with the 'use server' directive. This lets Next handle the form correctly.

 

In the example, the code displays the data record’s title and a button. When the button is clicked, the form is submitted and the Server Action activates. The browser sends a request to the Next backend, which executes the Server Action. In the server-side function, you can access the data record and, as in the example, send it to a REST API to persist the change. Then the _revalidatePath_ function is called to update the data. This causes Next to rebuild the statically generated content, and you see the updated data in the browser.

 

This Next feature closes the circle of read and write operations on the server. However, server components and Server Actions weren’t invented by the Next team; they’re actually React features. Next integrates these features into the framework so that both can be used without any extra effort.

 

Server Actions are a big topic in Next 14, but not the only one. Another innovation concerns the build process of the framework.

 

Turbopack in Next.js

 

Like many other frameworks, Next relied on Webpack as a build tool. For a long time, Webpack was the first choice for build tools in client-side JavaScript applications. But there’s been a recent surge of challengers such as Rollup or esbuild. 

 

Vercel, the company behind Next, is launching its own Webpack challenger: Turbopack. Turbopack follows the strategy of some modern JavaScript tools and is implemented not in JavaScript or TypeScript, but in Rust. This choice of programming language yields even better performance. According to Vercel, they saw significant improvements for vercel.com, a relatively extensive Next application:

 

  • Local server startup time is 53% faster.
  • Code updates with Fast Refresh are up to 95% faster.

 

Although Turbopack is currently still in beta, it can already be used in Next with the _--turbo_ option. The Turbopack team is actively working on passing all of Next’s automated tests and currently has a success rate of 90%. Once all tests pass, Turbopack will be considered stable. Unlike Webpack, which can be used with almost any library and framework, Turbopack development focuses on supporting Next. In the medium term, this tool will replace Webpack in Next and provide a better developer experience and faster builds.

 

Partial pre-rendering

 

Next supports both static pre-rendering of content and dynamic on-demand rendering. But there are gradations in between, where part of the displayed page is rendered statically and certain content is rendered on demand. These options could already be used, but in a less convenient way. With the new partial pre-rendering, Next creates an interface that fulfills this requirement without any extra adjustments in your application.

 

At the heart of this approach lies React’s Suspense component. Next can efficiently render the static outer frame around the Suspense boundary. For the Suspense component itself, the framework first renders the fallback content and later inserts the dynamic content in its place. This transition is facilitated by Next’s streaming feature, which delivers the server-generated content over the same HTTP connection, minimizing overhead.

 

What’s next for Next?

 

Next’s rapid development brought significant advancements, but it’s also been accompanied by occasional issues and instability. Despite these challenges, the Next team is working diligently to quickly address shortcomings. Moreover, the release notes for each iteration are remarkably detailed and informative.

 

Even minor releases contain a large number of bug fixes that improve the framework’s overall stability. Given Next’s pivotal role in advancing the React ecosystem and its position as a technology pioneer, it’s not surprising for occasional glitches to arise.

 

One of Next’s biggest advantages is the flexibility it gives developers to choose the right interface. While the new App Router offers enhanced features and capabilities, you don’t have to adopt it immediately. Developers can still rely on the tried-and-tested Pages Router, renowned for its stability. For example, even with the new App Router, you have the choice of using Server Actions. You can also send requests from the client to the server as usual instead.

 

The Next team has firmly established a consistent approach of releasing new features, gathering community feedback, and integrating the insights into updates. I highly recommend you take the chance to explore and incorporate these new features to stay ahead of the curve and harness Next’s full potential.

What’s New In Angular 17?

In early 2023, Sarah Drasner, Google’s Engineering Director and head of the Angular team, coined the term “Angular Renaissance” to describe the renewed focus on the framework for developing modern JavaScript applications over the last seven years.

The post What’s New In Angular 17? appeared first on International JavaScript Conference.

This renewal is incremental, backwards compatible, and takes current trends from front-end frameworks into account. Developer experience and performance are the primary goals of this renewal movement. Standalone components and signals are two well-known features that have already emerged as part of this effort.

Angular 17 adds to the Angular Renaissance in fall 2023 with a new syntax for control flow, deferred loading of page sections, and improved SSR support. In addition, the CLI now relies on esbuild, significantly speeding up builds.

In this article, I will discuss these new features using an example application (Figure 1). The source code used can be found under Example.

 

Figure 1: Example application

New syntax for control flow in templates

Angular has used structural directives such as *ngIf or *ngFor for the control flow since its inception. Because the control flow needed to be extensively revised for Angular 16’s signals anyway, the Angular team decided to give it a complete overhaul. The result is a new built-in control flow that stands out clearly from the rendered markup (Listing 1).

Listing 1

@for (product of products(); track product.id) {
    <div class="card">
        <h2 class="card-title">{{product.productName}}</h2>
        […]
    </div>
}
@empty {
    <p class="text-lg">No Products found!</p>
}

It’s worth noting the new @empty block, which Angular renders if the list to be iterated is empty.

Even if signals were a driver for this new syntax, they’re not a prerequisite for its use. The new control flow blocks can also be used with classic variables or with observables in conjunction with the async pipe.



The mandatory track expression allows Angular to identify individual elements that have been moved within the iterated collection. This drastically reduces the rendering effort and allows existing DOM nodes to be reused. When iterating collections of primitive types, e.g. number or string, track should be used with the pseudo variable $index according to the Angular team (Listing 2).

Listing 2

@for (group of groups(); track $index) {
    <a (click)="groupSelected(group)">{{group}}</a>
    @if (!$last) { 
        <span class="mr-5 ml-5">|</span> 
    }
}
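The benefit of the mandatory track expression can be illustrated with a small plain-JavaScript sketch of keyed reconciliation. This is a toy model, not Angular’s actual renderer: when an item’s tracking key is stable, a reordered item keeps its existing node instead of being recreated.

```javascript
// Toy keyed reconciliation -- illustrates the idea behind `track`,
// not Angular's renderer.
function reconcile(oldNodes, items, trackBy) {
  // Index the existing "DOM nodes" by their tracking key.
  const byKey = new Map(oldNodes.map((n) => [n.key, n]));
  return items.map((item) => {
    const key = trackBy(item);
    // Reuse the existing node if the key is known, otherwise create one.
    return byKey.get(key) ?? { key, el: `new node for ${key}` };
  });
}

const first = reconcile([], [{ id: 1 }, { id: 2 }], (p) => p.id);
// Same items in a different order: both nodes are reused, none recreated.
const second = reconcile(first, [{ id: 2 }, { id: 1 }], (p) => p.id);
console.log(second[0] === first[1]); // true
console.log(second[1] === first[0]); // true
```

With `track $index` on a reordered list, the keys would not follow the items, which is why the Angular team recommends it only for collections of primitives.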

In addition to $index, the other values known from *ngFor are also available via pseudo variables: $count, $first, $last, $even, $odd. If required, their values can be stored in template variables using expressions (Listing 3).

Listing 3

@for (group of groups(); track $index; let isLast = $last) {
    <a (click)="groupSelected(group)">{{group}}</a>
    @if (!isLast) { 
        <span class="mr-5 ml-5">|</span> 
    }
}

 

The new @if simplifies the formulation of else and else-if branches (Listing 4).

Listing 4

@if (product().discountedPrice && product().discountMinCount) {
    […]
}
@else if (product().discountedPrice && !product().discountMinCount) {
    […]
}
@else {
    […]
}

In addition, different cases can also be distinguished with a @switch (Listing 5).

Listing 5

@switch (mode) {
    @case ('full') {
      […]
    }
    @case ('small') {
      […]
    }
    @default {
      […]
    }
}

In contrast to ngSwitch and *ngSwitchCase, the new syntax is type-safe. In this example, the individual @case blocks must have string values, especially as the variable mode passed to @switch is also of type string.

The new control flow syntax reduces the need to use structural directives, which are powerful but sometimes unnecessarily complex. Nevertheless, the framework will continue to support structural directives. On the one hand, there are some valid use cases for this and, on the other hand, the framework must be backwards compatible despite the many exciting new features.

Box: Automatic migration to built-in control flow

If you want to migrate your program code automatically to the new control flow syntax, you can now find a schematic for this in the @angular/core package:

ng g @angular/core:control-flow

Delayed loading of side panels

Not all areas of a page are equally important. On a product detail page, for instance, product suggestions are secondary to the product itself. However, this changes as soon as the user scrolls the product suggestions into the visible area of the browser window, the viewport.

For performance-critical web applications like online stores, it’s advisable to delay loading less important page sections. This ensures that the most important elements are available more quickly. Previously, Angular developers had to implement this manually. Angular 17 drastically simplifies this task with the new @defer block (Listing 6).

Listing 6

@defer (on viewport) {
    <app-recommentations [productGroup]="product().productGroup"></app-recommentations>
}
@placeholder {
    <app-ghost-products></app-ghost-products>
}

The use of @defer delays the loading of the specified component (more precisely, of the specified page area) until a certain event occurs. Until then, it presents the placeholder specified under @placeholder. In the demo application used here, ghost elements for the product suggestions are initially presented this way (Figure 2).

 

Figure 2: Ghost Elements as placeholder

After loading, @defer swaps the ghost elements for the actual suggestions (Figure 3).

 

Figure 3: @defer exchanges the placeholder for the delayed loaded component

In this example, the on viewport event is used. It occurs as soon as the placeholder has been scrolled into the visible area of the browser window. Other supported events can be found in Table 1.

Trigger Description
on idle The browser reports that no critical tasks are currently pending (default).
on viewport The placeholder is loaded into the visible area of the page.
on interaction The user begins to interact with the placeholder.
on hover The mouse cursor is moved over the placeholder.
on immediate As soon as possible after loading the page.
on timer ( < duration >) After a certain time, e.g. on timer(5s) to trigger loading after 5 seconds.
when < condition > As soon as the specified condition is met, e.g. when (userName !== null)

Table 1: Trigger for @defer

The triggers on viewport, on interaction and on hover require the specification of a @placeholder block by default. Alternatively, they can refer to other parts of the page via a template variable:

<h1 #recommentations>Recommentations</h1> 
@defer (on viewport(recommentations)) { <app-recommentations […] />} 

In addition, @defer can be instructed to preload the bundle at an earlier time. As with the preloading of routes, this procedure ensures that the bundles are available as soon as they are needed:

@defer(on viewport; prefetch on immediate) { […] }

In addition to @placeholder, @defer also offers two other blocks: @loading and @error. Angular displays the former while it’s loading the bundle and the latter in the event of an error. To avoid flickering, @placeholder and @loading can be configured with a minimum display duration. The minimum property defines the desired value:

@defer ( […] ) { […] } 
@loading (after 150ms; minimum 150ms) { […] } 
@placeholder (minimum 150ms) { […] }

The after property also specifies that the loading indicator should only be displayed if loading takes longer than 150 ms.

Build performance with esbuild

Originally, the Angular CLI used webpack to build bundles. However, webpack is a bit outdated and is currently being challenged by newer tools that are easier to use and much faster. One of these tools is esbuild, which sees remarkable download numbers on npm.

The CLI team has been working on an esbuild integration for several releases. In Angular 16, this integration was already included in the developer preview stage. As of Angular 17, this implementation is stable and is used as standard for new Angular projects via the Application Builder described below.


For existing projects, it’s worth considering switching to esbuild. To do this, update the builder entry in angular.json:

"builder": "@angular-devkit/build-angular:browser-esbuild"

In other words, add -esbuild at the end. In most cases, ng serve and ng build should then behave as usual, but much faster. For acceleration, ng serve uses the Vite dev server, which builds npm packages only when required. The CLI team has also planned further performance optimizations.

The call of ng build can also be drastically accelerated using esbuild. A speedup factor of 2 to 4 is often quoted.

Easily enable SSR with the new Application Builder

Angular 17 has also drastically simplified support for server-side rendering (SSR). When generating a new project with ng new, a --ssr switch is now available. If this is not used, the CLI asks whether it should set up SSR (Figure 4).

 

Figure 4: ng new sets up SSR on request

To activate SSR later, simply add the @angular/ssr package:

ng add @angular/ssr

As the scope @angular makes clear, this package comes directly from the Angular team, serving as the successor to the Angular Universal community project. The CLI team has added a new builder that integrates SSR into ng build and ng serve. This application builder uses the above-mentioned esbuild integration to create bundles that can be used both in the browser and on the server side.

A call to ng serve starts a development server that renders on the server side and also delivers the bundles for operation in the browser. A call to ng build --ssr creates bundles for both the browser and the server, as well as a simple Node.js-based server whose source code is generated by the schematics mentioned above.

If you cannot or don’t want to run a Node.js server, you can use ng build --prerender to prerender the individual routes of the application during the build.

Further innovations

In addition to the innovations discussed so far, Angular 17 brings numerous other enhancements:

  • The router now supports the View Transitions API. This API offered by some browsers allows the animation of transitions using CSS animations, e.g. from one route to another. This optional feature must be activated when setting up the router using the withViewTransitions function. For demonstration purposes, the enclosed example uses CSS animations taken from View Transitions API.
  • Signals, which were introduced in version 16 as a developer preview, are now stable. One important change is that they are now designed to be used with immutable data structures by default. This makes it easier for Angular to track changes to data managed by signals. The set method, which assigns a new value, or the update method, which maps the existing value to a new one, can be used to update Signals. The mutate method has been removed, because it doesn’t match the semantics of immutables.
  • Now there’s a diagnostic that issues a warning if the getter is not called when reading signals in templates (e.g. {{ products }} instead of {{ products() }}).
  • Animations can now be loaded lazily.
  • The Angular CLI generates standalone components, directives, and pipes by default. By default, ng new also provides for the bootstrapping of a standalone component. This behavior can be deactivated with the --standalone false switch.
  • The ng g interceptor instruction generates functional interceptors.
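The set/update semantics of the now-stable signals mentioned above can be modeled in a few lines of plain JavaScript. This is a toy model for intuition only, not Angular’s implementation:

```javascript
// Minimal model of a writable signal's set/update semantics --
// an illustration only, not Angular's implementation.
function signal(initial) {
  let value = initial;
  const read = () => value; // reading requires calling the getter
  read.set = (v) => { value = v; };             // assign a new value
  read.update = (fn) => { value = fn(value); }; // derive from the old one
  return read;
}

const count = signal(0);
count.set(5);
count.update((v) => v + 1);
console.log(count()); // 6
```

The model also shows why the template diagnostic matters: `count` is a function, so `{{ count }}` would render the function itself rather than its value.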

Summary

Angular’s renaissance continues with version 17, which introduces several new features and improvements. One of the most notable changes is the new control flow, which simplifies the structure of templates. Thanks to deferred loading, less important page areas can be reloaded at a later point in time, speeding up the initial page load. Other features include the use of esbuild, which makes the ng build and ng serve commands run noticeably faster. In addition, the CLI now directly supports SSR and prerendering.

Custom Standalone APIs for Angular

Together with standalone components, the Angular team has introduced the so-called standalone APIs. They provide a simple solution for library setup and do not require Angular modules. Popular libraries that already implement this concept include the HttpClient, Router, and NgRx. These libraries are based on several patterns that we find beneficial in our own projects. They also provide our library users with familiar structures and behaviors. In this article, I show three such patterns that I derived from the libraries mentioned.

The post Custom Standalone APIs for Angular appeared first on International JavaScript Conference.

The source code and examples are available here.

Example

A simple logger library is used here to show the different patterns (Fig. 1). The LogFormatter formats the messages before the Logger publishes them. This is an abstract class that is used as a DI token. The consumers of the logger library can customize the formatting by providing their own implementation. Alternatively, they can settle for a default implementation provided by the library.

Fig. 1

Fig. 1: Structure of an exemplary Logger library

The LogAppender is another replaceable concept that takes care of attaching the message to a log. The default implementation just writes the message to the console.


While there can be only one LogFormatter, the library supports multiple LogAppenders. For example, the first LogAppender might write the message to the console, while the second also sends it to the server. To make this possible, each LogAppender is registered via a multiprovider. The injector returns all registered LogAppenders in the form of an array. Since an array cannot be used as a DI token, the example uses an InjectionToken instead:

export const LOG_APPENDERS =
  new InjectionToken<LogAppender[]>("LOG_APPENDERS");

An abstract LoggerConfig, which also acts as a DI token, defines the possible configuration options (Listing 1).

Listing 1

export abstract class LoggerConfig {
  abstract level: LogLevel;
  abstract formatter: Type<LogFormatter>;
  abstract appenders: Type<LogAppender>[];
}
 
export const defaultConfig: LoggerConfig = {
  level: LogLevel.DEBUG,
  formatter: DefaultLogFormatter,
  appenders: [DefaultLogAppender],
};

The default values for these configuration options are in the defaultConfig constant. The LogLevel in the configuration is a filter for log messages. For simplicity, it is an enum with only the values DEBUG, INFO, and ERROR:

export enum LogLevel {
  DEBUG = 0,
  INFO = 1,
  ERROR = 2,
}

The Logger only publishes messages that have the LogLevel specified here or a higher LogLevel. The LoggerService itself receives the LoggerConfig, the LogFormatter and an array with LogAppender via DI and uses them to log the received messages (Listing 2).

Listing 2

@Injectable()
export class LoggerService {
  private config = inject(LoggerConfig);
  private formatter = inject(LogFormatter);
  private appenders = inject(LOG_APPENDERS);
 
  log(level: LogLevel, category: string, msg: string): void {
    if (level < this.config.level) {
      return;
    }
    const formatted = this.formatter.format(level, category, msg);
    for (const a of this.appenders) {
      a.append(level, category, formatted);
    }
  }
 
  error(category: string, msg: string): void {
    this.log(LogLevel.ERROR, category, msg);
  }
 
  info(category: string, msg: string): void {
    this.log(LogLevel.INFO, category, msg);
  }
 
  debug(category: string, msg: string): void {
    this.log(LogLevel.DEBUG, category, msg);
  }
}

The golden rule

Before we take a look at the patterns, I want to mention my golden rule for registering services: use @Injectable({providedIn: 'root'}) whenever possible! Especially in applications, but also in numerous situations in libraries, this approach is perfectly sufficient. It is simple, tree-shakable, and even works with lazy loading. The latter aspect is less a merit of Angular than of the underlying bundler: everything that is used only in a lazy bundle is also placed there.


Pattern: provider factory

A provider factory is a function that returns all services for a reusable library. It can also register configuration objects as services or exchange service implementations.

The returned services are in a providers array that the factory wraps in the EnvironmentProviders type. This approach, designed by the Angular team, ensures that an application can register the providers only with so-called environment injectors. These are primarily the injector for the root scope and the injectors that Angular sets up via the routing configuration. The provider factory in Listing 3 illustrates this. It takes a LoggerConfig and sets up the individual services for the logger.

Listing 3

export function provideLogger(
  config: Partial<LoggerConfig>
): EnvironmentProviders {
  // using default values for missing properties
  const merged = { ...defaultConfig, ...config };
 
  return makeEnvironmentProviders([
    {
      provide: LoggerConfig,
      useValue: merged,
    },
    {
      provide: LogFormatter,
      useClass: merged.formatter,
    },
    merged.appenders.map((a) => ({
      provide: LOG_APPENDERS,
      useClass: a,
      multi: true,
    })),
  ]);
}

The factory takes missing configuration values from the default configuration. The makeEnvironmentProviders function provided by Angular wraps the provider array into an instance of EnvironmentProviders. This factory allows consumers to set up the logger similarly to how they set up the HttpClient or router (Listing 4).

Listing 4

bootstrapApplication(AppComponent, {
  providers: [
    provideHttpClient(),
    provideRouter(APP_ROUTES),
    [...]
    provideLogger(loggerConfig),
  ]
}
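The fallback to default values in provideLogger (Listing 3) relies on plain object-spread semantics: properties supplied by the consumer override the defaults, and everything else is kept. A standalone sketch (values are illustrative):

```javascript
// Later spreads win: consumer-supplied properties override the defaults.
const defaultConfig = { level: 0, formatter: 'DefaultLogFormatter' };
const config = { level: 2 }; // partial config passed by the consumer
const merged = { ...defaultConfig, ...config };
console.log(merged.level);     // 2 -- overridden by the consumer
console.log(merged.formatter); // 'DefaultLogFormatter' -- default kept
```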

Pattern: feature

The feature pattern allows optional functionality to be enabled and configured. If this functionality is not used, the build process removes it via tree shaking. An optional feature is represented by an object with a providers array. In addition, the object has a kind property that assigns the feature to a certain category. This categorization enables validation of the jointly configured features; for example, features can be mutually exclusive. An example of this can be found in the HttpClient: it prohibits the use of a feature for configuring XSRF handling if the consumer has simultaneously activated the feature for disabling it.
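The validation by kind can be sketched in plain JavaScript, decoupled from Angular. This mirrors the check that provideLogger performs for the color feature later in the article; the values are illustrative:

```javascript
// Sketch of validating features by kind (illustrative, framework-free).
const LoggerFeatureKind = { COLOR: 0, OTHER_FEATURE: 1 };

function validateFeatures(features) {
  // Count the features of the mutually exclusive category.
  const colorFeatures = features.filter(
    (f) => f.kind === LoggerFeatureKind.COLOR
  ).length;
  if (colorFeatures > 1) {
    throw new Error('Only one color feature allowed for logger!');
  }
}

validateFeatures([{ kind: LoggerFeatureKind.COLOR, providers: [] }]); // ok
let threw = false;
try {
  validateFeatures([
    { kind: LoggerFeatureKind.COLOR, providers: [] },
    { kind: LoggerFeatureKind.COLOR, providers: [] },
  ]);
} catch { threw = true; }
console.log(threw); // true -- duplicate color features rejected
```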

The logger library used here uses a ColorFeature that allows messages to be output in different colors depending on the LoggerLevel (Fig. 2).

Fig. 2

Fig. 2: Structure of the ColorFeature

An enum is used to categorize features:

export enum LoggerFeatureKind {
  COLOR,
  OTHER_FEATURE,
  ADDITIONAL_FEATURE
}

Another factory is used to provide the ColorFeature (Listing 5).

Listing 5

export function withColor(config?: Partial<ColorConfig>): LoggerFeature {
  const internal = { ...defaultColorConfig, ...config };
 
  return {
    kind: LoggerFeatureKind.COLOR,
    providers: [
      {
        provide: ColorConfig,
        useValue: internal,
      },
      {
        provide: ColorService,
        useClass: DefaultColorService,
      },
    ],
  };
}

The updated provider factory provideLogger accepts any number of features via an optional rest parameter (Listing 6).

Listing 6

export function provideLogger(
  config: Partial<LoggerConfig>,
  ...features: LoggerFeature[]
): EnvironmentProviders {
  const merged = { ...defaultConfig, ...config };
 
  // Inspecting the passed features
  const colorFeatures =
    features?.filter((f) => f.kind === LoggerFeatureKind.COLOR)?.length ?? 0;
 
  // Validating passed features
  if (colorFeatures > 1) {
    throw new Error("Only one color feature allowed for logger!");
  }
 
  return makeEnvironmentProviders([
    {
      provide: LoggerConfig,
      useValue: merged,
    },
    {
      provide: LogFormatter,
      useClass: merged.formatter,
    },
    merged.appenders.map((a) => ({
      provide: LOG_APPENDERS,
      useClass: a,
      multi: true,
    })),
 
    // Providing services for the features
    features?.map((f) => f.providers),
  ]);
}

The provider factory uses the kind property to examine and validate the passed features. If all is well, it includes the feature’s providers in the EnvironmentProviders object. The DefaultLogAppender fetches the ColorService provided by the ColorFeature via dependency injection (Listing 7).

Listing 7

export class DefaultLogAppender implements LogAppender {
  colorService = inject(ColorService, { optional: true });
 
  append(level: LogLevel, category: string, msg: string): void {
    if (this.colorService) {
      msg = this.colorService.apply(level, msg);
    }
    console.log(msg);
  }
}

Since features are optional, the DefaultLogAppender passes the { optional: true } option to inject. This prevents an exception in cases where the feature, and thus the ColorService, has not been provided. The DefaultLogAppender must also check for null values.

This pattern occurs in the router, e.g. to configure preloading or to enable tracing. The HttpClient uses it to provide interceptors, to configure JSONP and to configure/disable XSRF token handling.

Pattern: configuration factory

Configuration factories extend the behavior of existing services. They can provide additional configuration options as well as additional services. An extended version of our LoggerService serves as an illustration; it allows defining an additional LogAppender for each log category:

@Injectable()
export class LoggerService {
  readonly categories: Record<string, LogAppender> = {};
  […]
}

To configure a LogAppender for a category, we introduce a configuration factory named provideCategory (Listing 8).

Listing 8

export function provideCategory(
  category: string,
  appender: Type<LogAppender>
): EnvironmentProviders {
  // Internal/local token for registering the service
  // and retrieving the resolved service instance
  // immediately after.
  const appenderToken = new InjectionToken<LogAppender>("APPENDER_" + category);
 
  return makeEnvironmentProviders([
    {
      provide: appenderToken,
      useClass: appender,
    },
    {
      provide: ENVIRONMENT_INITIALIZER,
      multi: true,
      useValue: () => {
        const appender = inject(appenderToken);
        const logger = inject(LoggerService);
 
        logger.categories[category] = appender;
      },
    },
  ]);
}

This factory creates a provider for the LogAppender class. The call to inject gives us an instance of it and resolves its dependencies. The ENVIRONMENT_INITIALIZER token points to a function that Angular triggers when initializing the respective environment injector; this function registers the LogAppender with the LoggerService. Listing 9 shows how provideCategory is used within a routing configuration.

Listing 9

export const FLIGHT_BOOKING_ROUTES: Routes = [
 
  {
    path: '',
    component: FlightBookingComponent,
    providers: [
      // Setting up an NgRx feature slice
      provideState(bookingFeature),
      provideEffects([BookingEffects]),
 
      // Provide LogAppender for the logger category
      provideCategory('booking', DefaultLogAppender),
    ],
    children: [
      {
        path: 'flight-search',
        component: FlightSearchComponent,
      },
      [...]
    ],
  },
];

This pattern is found, for example, in NgRx to register feature slices. The feature withDebugTracing offered by the router also uses this pattern to subscribe to the observable events in the router service.

Conclusion

Standalone APIs allow you to set up libraries without Angular modules. Using them is straightforward: consumers just need to look for a provider factory named provideXYZ. Additional features can be enabled, if necessary, with functions that follow the withABC naming scheme.

However, the implementation of such APIs is not always trivial. This is exactly where the patterns presented here help. Since they are derived from libraries of the Angular and NgRx teams, they reflect first-hand experience and design decisions.

The post Custom Standalone APIs for Angular appeared first on International JavaScript Conference.

How to create your own Angular Schematics https://javascript-conference.com/blog/how-to-create-your-own-angular-schematics/ Wed, 29 Apr 2020 09:41:56 +0000 https://javascript-conference.com/?p=30461

Angular Schematics gives us a way to create custom actions, similar to those provided by the Angular CLI. Schematics are used by many Angular libraries to simplify their usage.

The post How to create your own Angular Schematics appeared first on International JavaScript Conference.

For example, NgRx provides schematics for creating stores and reducers, and Nrwl provides a complete workspace management solution, including the ability to create React apps!

We can also use schematics to customise existing CLI commands, so they better suit our own workflows.


 

For example, we have an app which is built as a collection of micro-frontends. Each workflow is built as its own app, and each app lives in its own repository. Every time we need to spin up a new app we need to:

  • create a new project using ng new
  • add .npmrc, .nvmrc and .prettierrc files, with our specific configurations
  • update karma.conf.js and tslint.json to match our common test and linting settings
  • install prettier and husky, for formatting and git hooks
  • add some additional NPM scripts, for our CI server
  • add git hooks via husky

As you can imagine, doing all this manually is fairly tedious and error-prone. To avoid that, we use a custom schematic which runs ng new for us to create the repository, then makes all the necessary changes.

We’re going to recreate this process by creating a schematic named new-project. The new-project schematic is going to do all the things listed above: create a new project, add additional files, make necessary changes to existing files, install dependencies, add scripts and add a git hook.


Creating a schematic

We’ll start by creating the new-project schematic. To do that, we’re going to use the Schematics CLI, which provides the schematics command. You’ll need to install it globally, just like the Angular CLI.

> npm install -g @angular-devkit/schematics-cli

Once we have this package installed, we can use it to create a new schematic, just like we use the Angular CLI to create a new app.

> schematics blank --name=new-project

The blank parameter creates a minimal schematic, with only the bare necessities in it. The output in the console tells us which files have been created.

CREATE new-project/README.md (639 bytes)
CREATE new-project/.gitignore (191 bytes)
CREATE new-project/.npmignore (64 bytes)
CREATE new-project/package.json (568 bytes)
CREATE new-project/tsconfig.json (656 bytes)
CREATE new-project/src/collection.json (234 bytes)
CREATE new-project/src/new-project/index.ts (319 bytes)
CREATE new-project/src/new-project/index_spec.ts (476 bytes)
✔ Packages installed successfully.

README.md, .gitignore, package.json, and tsconfig.json should be pretty self-explanatory. .npmignore tells NPM to ignore the TypeScript files when bundling up the package. This is necessary because of the way the TypeScript compiler is set up in the project. Rather than outputting all the transpiled files into a common dist/ or build/ directory, the transpiled files remain in the directory with the original TypeScript files. So building new-project/src/new-project/index.ts will result in the file new-project/src/new-project/index.js.

Unsurprisingly, the real interesting stuff is in the src directory.

  • collection.json is like an index of the schematics in this project. It links the name of each schematic with the code that runs it.
  • new-project/index.ts is the code run by our schematic.
  • new-project/index_spec.ts contains the tests. We’re not going to worry about writing tests today, but testing your schematics is definitely possible.

Great, we’ve created our first schematic! Let’s check that it works, using the age-old JavaScript debugging tool, console.log(). I’m going to add a log to src/new-project/index.ts, so it now looks like this:

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';
 
export function newRepo(_options: any): Rule {
 
 console.log('Hello from your new schematic!')
 
 return (tree: Tree, _context: SchematicContext) => {
   return tree;
 };
}

Because everything is in TypeScript, we need to build it before we run it. We can do that by running `npm run build`.

Once the build succeeds, we can run our schematic via `schematics .:new-project`. You should get output like:

Hello from your new schematic!
Nothing to be done.

`Nothing to be done` here indicates that we haven’t made any changes to the file system.

Yes, it works!


 

One small thing though – running the build script manually like this every time we make a change can get quite tedious. Instead, we can get the TypeScript compiler to watch for changes and build automatically, by adding a watch script to package.json, like so:

 "scripts": {
    "build": "tsc -p tsconfig.json",
    "watch": "tsc -p tsconfig.json --watch",
    "test": "npm run build && jasmine src/**/*_spec.js"
  },

Now, if you run npm run watch, the build will re-run automatically whenever a change is made.


[10:57:31 AM] File change detected. Starting incremental compilation…
[10:57:31 AM] Found 0 errors. Watching for file changes.

Time to make our schematic actually do something!

 

Handling User Input


 

This schematic is going to create a brand new project, using the name passed in by the user. To do that, we’re going to call the existing ng new schematic, passing in the name, and a bunch of default options.

First, we want to tell our schematic that we’re expecting a name to be passed in. We can do this by creating a schema.json file in our new-project directory. The schema.json file is optional, but it improves the user experience. For example, if the user misses a required option, the schema allows the schematic to ask the user for the value. Think of it like adding types in TypeScript.

We’re going to create new-project/schema.json with the following contents.

{
 "$schema": "http://json-schema.org/schema",
 "id": "NewRepoSchematic",
 "title": "ng new options schema",
 "type": "object",
 "description": "Initialise a new project",
 "properties": {
   "name": {
     "type": "string",
     "description": "The name of the project",
     "x-prompt": "Name:",
     "$default": {
       "$source": "argv",
       "index": 0
     }
   } 
 }
}

The x-prompt value is the prompt that the schematic uses to ask the user for a value if they don’t supply one. The $default object allows you to provide a default value; in this case, the default will be the first argument the user passes in on the command line.

So we’ll be able to use our new schematic in three ways.

    1. Just call the schematic, without passing anything in. The schematic will ask for a name.
      > schematics .:new-project                              ✔ 
      ? Name: fancy-project
    2. Call the schematic, passing in the name option
      > schematics .:new-project --name=fancy-project
    3. Call the schematic, passing the name as an argument
      > schematics .:new-project fancy-project

All three of these options will result in fancy-project being passed in as the value of the name option.

However, it won’t work just yet! First, we need to tell collection.json about our schema. We can do that by adding a schema property that points to our new schema.

{
 "$schema": "../node_modules/@angular-devkit/schematics/collection-schema.json",
 "extends": ["@schematics/angular"],
 "schematics": {
   "new-project": {
     "description": "A blank schematic.",
     "factory": "./new-project/index#newRepo",
     "schema": "./new-project/schema.json"
   }
 }
}

Now, if you build and run the schematic without passing in a name value, it should ask you for it, just like above.

If we have a look in new-project/index.ts now, we can get access to the name as a property of the _options object being passed into our newRepo function.

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';
 
export function newRepo(_options: any): Rule {
 
 const name = _options.name
 console.log('The name of the repo will be', name)
 
 return (tree: Tree, _context: SchematicContext) => {
   return tree;
 };
}
> schematics .:new-project                              ✔ 
? Name: fancy-app
The name of the repo will be fancy-app
Nothing to be done.

Obviously this is all pretty exciting, but our schematic still doesn’t actually do anything yet.


 

Call an External Schematic


 

The first thing we want to do is call the ng new schematic, passing in our name option, and a few other default options. To do that, we’re finally going to write some TypeScript!

Currently, our index.ts file looks something like this:

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';
 
export function newRepo(_options: any): Rule {
 
 const name = _options.name
 console.log('The name of the repo will be', name)
 
 return (tree: Tree, _context: SchematicContext) => {
   return tree;
 };
}

We have a function, newRepo, which is going to get called when the schematic runs. It’s going to get passed in _options, which we can use to get the name. It’s going to return a function, which takes a Tree and a SchematicContext, and returns a new Tree. A function like this is referred to as a Rule. A function which returns a Rule is a rule factory.

A Tree is a fundamental concept in Schematics. It refers to the schematic’s internal representation of the file system. For those familiar with React, it’s kind of like a virtual DOM, but for the file system. When we make changes, they’re applied to the Tree, rather than the actual files. Once we’ve finished with all our changes, the Tree is written to the file system (or not, if we’re in debug mode).

The SchematicContext object just contains some utility functions and metadata.

So, a schematic is a factory which returns a Rule. The Rule is applied to a Tree to produce a new Tree.
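The factory/Rule/Tree relationship can be modeled in a few lines of plain TypeScript. This is a toy model, not the real @angular-devkit/schematics API; it just mirrors the shape of the concepts:

```typescript
// Toy model: a Tree maps file paths to contents, and a Rule transforms
// one Tree into another. (Not the real schematics API.)
type Tree = Map<string, string>;
type Rule = (tree: Tree) => Tree;

// A rule factory: takes options, returns a Rule.
function createFile(path: string, content: string): Rule {
  return (tree) => {
    const next = new Map(tree); // stage the change on a new Tree
    next.set(path, content);
    return next;
  };
}

const rule = createFile('/README.md', '# hello');
const result = rule(new Map());
console.log(result.get('/README.md')); // # hello
```

Nothing touches the disk here; the changes exist only on the Tree, which is exactly how schematics can support a dry-run mode.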

Currently, our Rule is just passing back the same Tree that was passed in. Instead, we can return the result of calling another Rule factory. In our case, we want to call the externalSchematic Rule factory, which can be found in @angular-devkit/schematics. We need to pass externalSchematic the name of a collection, the name of a schematic in that collection, and an options object to call the schematic with.

import { Rule, SchematicContext, Tree, externalSchematic } from '@angular-devkit/schematics';
 
export function newRepo(_options: any): Rule {
 
 const name = _options.name
 
 return (_: Tree, _context: SchematicContext) => {
   return externalSchematic('@schematics/angular', 'ng-new', {
     name,
     version: '9.0.0',
     directory: name,
     routing: false,
     style: 'scss',
     inlineStyle: false,
     inlineTemplate: false
   });
 };
}

Notice that I had to change the variable tree to _ in the Rule definition. The default linter in a schematics project is super picky, and it won’t compile with unused variables. You can either adjust the linter settings, or just give it what it wants.

Now if we run our schematic, we get an output like this.

> schematics .:new-project fancy-app ✔
CREATE fancy-app/README.md (1025 bytes)
CREATE fancy-app/.editorconfig (274 bytes)
CREATE fancy-app/.gitignore (631 bytes)
CREATE fancy-app/angular.json (3678 bytes)
CREATE fancy-app/package.json (1285 bytes)
CREATE fancy-app/tsconfig.json (489 bytes)
CREATE fancy-app/tslint.json (3125 bytes)
CREATE fancy-app/browserslist (429 bytes)
CREATE fancy-app/karma.conf.js (1021 bytes)
CREATE fancy-app/tsconfig.app.json (210 bytes)
CREATE fancy-app/tsconfig.spec.json (270 bytes)
CREATE fancy-app/src/favicon.ico (948 bytes)
CREATE fancy-app/src/index.html (294 bytes)
CREATE fancy-app/src/main.ts (372 bytes)
CREATE fancy-app/src/polyfills.ts (2835 bytes)
CREATE fancy-app/src/styles.scss (80 bytes)
CREATE fancy-app/src/test.ts (753 bytes)
CREATE fancy-app/src/assets/.gitkeep (0 bytes)
CREATE fancy-app/src/environments/environment.prod.ts (51 bytes)
CREATE fancy-app/src/environments/environment.ts (662 bytes)
CREATE fancy-app/src/app/app.module.ts (314 bytes)
CREATE fancy-app/src/app/app.component.scss (0 bytes)
CREATE fancy-app/src/app/app.component.html (25725 bytes)
CREATE fancy-app/src/app/app.component.spec.ts (951 bytes)
CREATE fancy-app/src/app/app.component.ts (214 bytes)
CREATE fancy-app/e2e/protractor.conf.js (808 bytes)
CREATE fancy-app/e2e/tsconfig.json (214 bytes)
CREATE fancy-app/e2e/src/app.e2e-spec.ts (642 bytes)
CREATE fancy-app/e2e/src/app.po.ts (301 bytes)

However, if we look in our file system, we’ll see that no such files have been created! This is because, by default, schematics runs in debug mode, which is the same as using --dryRun in the CLI. The changes are applied to the Tree, but the Tree isn’t written to the disk.

 

Turning off Debug Mode

If you want to write the files for real, you need to use the --dryRun=false or --debug=false flag. If you do so, the files will be written, and npm install will be run, just like it would be when a user is using your schematic.


> schematics .:new-project fancy-app --debug=false ✔
CREATE fancy-app/README.md (1025 bytes)
CREATE fancy-app/.editorconfig (274 bytes)
CREATE fancy-app/.gitignore (631 bytes)
CREATE fancy-app/angular.json (3678 bytes)
CREATE fancy-app/package.json (1285 bytes)
CREATE fancy-app/tsconfig.json (489 bytes)
CREATE fancy-app/tslint.json (3125 bytes)
CREATE fancy-app/browserslist (429 bytes)
CREATE fancy-app/karma.conf.js (1021 bytes)
CREATE fancy-app/tsconfig.app.json (210 bytes)
CREATE fancy-app/tsconfig.spec.json (270 bytes)
CREATE fancy-app/src/favicon.ico (948 bytes)
CREATE fancy-app/src/index.html (294 bytes)
CREATE fancy-app/src/main.ts (372 bytes)
CREATE fancy-app/src/polyfills.ts (2835 bytes)
CREATE fancy-app/src/styles.scss (80 bytes)
CREATE fancy-app/src/test.ts (753 bytes)
CREATE fancy-app/src/assets/.gitkeep (0 bytes)
CREATE fancy-app/src/environments/environment.prod.ts (51 bytes)
CREATE fancy-app/src/environments/environment.ts (662 bytes)
CREATE fancy-app/src/app/app.module.ts (314 bytes)
CREATE fancy-app/src/app/app.component.scss (0 bytes)
CREATE fancy-app/src/app/app.component.html (25725 bytes)
CREATE fancy-app/src/app/app.component.spec.ts (951 bytes)
CREATE fancy-app/src/app/app.component.ts (214 bytes)
CREATE fancy-app/e2e/protractor.conf.js (808 bytes)
CREATE fancy-app/e2e/tsconfig.json (214 bytes)
CREATE fancy-app/e2e/src/app.e2e-spec.ts (642 bytes)
CREATE fancy-app/e2e/src/app.po.ts (301 bytes)
✔ Packages installed successfully.
Successfully initialized git.

If you’d rather not create a new project from inside your current project, you can also run your schematic from a different directory, but the syntax is slightly different:

> schematics ./path/to/collection.json:schematic-name

For example, if we wanted to run our schematic from its own parent directory, we could use:

> schematics ./new-project/src/collection.json:new-project fancy-app

If you do this, you’ll need to make sure that you have @schematics/angular installed globally, by running npm install -g @schematics/angular first.

Alright, so this is already useful. Now, whenever we want to create a new project, we can run our schematic, and we don’t have to remember which options to pass into ng new!

We can make it even more useful though. For a start, there are a few files that we have to add to every new repository that we create:

      • .npmrc: points to our internal NPM repository
      • .nvmrc: determines which version of Node we’re using
      • .prettierrc: code formatting settings

There are also a few files that get generated by the CLI, but that we need to change

      • browserslist & polyfills.ts: we need to support IE11
      • karma.conf.js: we need to set up some reporters
      • tslint.json: we use a slightly different set of linting rules

All of these files are exactly the same in each repository, so it would be handy if the schematic could just add them automatically. Well, it turns out it can!

Adding Files


 

The first thing we need to do is create a folder that contains all the files that we want to add, using the directory structure that they need to be added in. We can call this folder anything we want, but I’m going to go with ‘files’. So our folder structure needs to look something like this:


files
  [project-name]
    .npmrc
    .nvmrc
    .prettierrc
    browserslist
    polyfills.ts
    karma.conf.js
    tslint.json

Remember, when we run this schematic, the current directory is going to be the parent of the project directory, so we need to specify the name of the project directory in our directory structure. Otherwise the files would all be added to the parent directory, which would not be useful. Unfortunately, we don’t know the name of the project in advance, so we need some kind of placeholder. Happily, it turns out the schematics package provides us with a handy way to add placeholders which will be replaced by values we provide.

In a filename, we can use __optionName__. So, in our case, the [project-name] folder would be called __name__. This gives us the following directory structure:


files
  __name__
    .npmrc
    .nvmrc
    .prettierrc
    browserslist
    polyfills.ts
    karma.conf.js
    tslint.json
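The filename substitution described above can be sketched as a simple string replacement. The following is a simplified re-implementation just to show the idea; the real logic is part of the schematics template rule:

```typescript
// Simplified re-implementation of the __optionName__ filename substitution
// (illustrative only; the real version lives in the schematics package).
function applyPathTemplate(path: string, options: Record<string, string>): string {
  return path.replace(/__([a-zA-Z]+)__/g, (_match, key: string) => options[key] ?? '');
}

console.log(applyPathTemplate('files/__name__/.npmrc', { name: 'fancy-app' }));
// files/fancy-app/.npmrc
```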

You can also add placeholders inside files, using <%= optionName %>. Plus, the schematics package provides a bunch of handy functions to do simple transforms on the option values, like camelize (turns my-option to camel case: myOption) and classify (capitalises my-option like a class name: MyOption). Your IDE will absolutely hate you using these, and will give you a bunch of errors, but don’t worry about it.

{
 "rulesDirectory": ["codelyzer"],
 "extends": ["../../tslint.json"],
 "rules": {
   "directive-selector": [
     true,
     "attribute",
     "<%= prefix %>",
     "camelCase"
   ],
   "component-selector": [
     true,
     "element",
     "<%= prefix %>",
     "kebab-case"
   ]
 }
}
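The two helper transforms mentioned above behave roughly like this. These are illustrative re-implementations; the real versions ship in the strings object of @angular-devkit/core and handle more edge cases:

```typescript
// Illustrative re-implementations of camelize and classify
// (the real ones come from @angular-devkit/core's `strings`).
function camelize(value: string): string {
  return value.replace(/-([a-z])/g, (_match, char: string) => char.toUpperCase());
}

function classify(value: string): string {
  const camel = camelize(value);
  return camel.charAt(0).toUpperCase() + camel.slice(1);
}

console.log(camelize('my-option')); // myOption
console.log(classify('my-option')); // MyOption
```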

Ok, once we’ve got our files, we need a way to add them to the Tree. To do this, we can use another Rule factory, called mergeWith. To use mergeWith, we need to pass in a templateSource and a MergeStrategy.

We can create a templateSource as follows:

const templateSource = apply(url('./files'), [
  template({..._options, ...strings}),
]);

'./files' is the location of the files we want to include, _options is the options passed into our schematic (which will be used by the placeholders), and strings is a collection of utility functions (like camelize and classify) provided by schematics. You can import strings from @angular-devkit/core.

Once we’ve got our templateSource, we can pass it into the mergeWith factory along with a MergeStrategy.

const merged = mergeWith(templateSource, MergeStrategy.Overwrite)

MergeStrategy.Overwrite means that if the Tree contains a file that’s also in our templateSource, then use the one in our templateSource.
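The Overwrite behavior can be illustrated with a toy merge over plain maps. Again, this is a plain TypeScript model of the idea, not the real schematics API:

```typescript
// Toy illustration of the Overwrite strategy: on a path conflict,
// the template's version of the file wins. (Not the real API.)
type FileMap = Map<string, string>;

function mergeOverwrite(tree: FileMap, templates: FileMap): FileMap {
  const merged = new Map(tree);
  for (const [path, content] of templates) {
    merged.set(path, content); // template version replaces the existing file
  }
  return merged;
}

const generated = new Map([['/karma.conf.js', 'generated by the CLI']]);
const ourFiles = new Map([['/karma.conf.js', 'our custom config']]);
console.log(mergeOverwrite(generated, ourFiles).get('/karma.conf.js'));
// our custom config
```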

Ok, so now we have two rules – one returned by mergeWith, and one from externalSchematic. We need to apply both of them to our tree, but we can only return a single rule. So we need some way to combine the two.

 

Combining Rules


 

Enter chain. chain takes in a list of Rules, and returns a single Rule that applies them all in order.

const rule = chain([
  generateRepo(name),
  merged
]);
 
return rule(tree, _context) as Rule;

(I factored the code for calling externalSchematic out into a function to simplify this).
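Conceptually, chain is just function composition over Trees. Here is a toy version in plain TypeScript (deliberately named chainRules, since it is a model of the idea rather than the real schematics API):

```typescript
// Toy model of how chain composes Rules (not the real schematics API).
type Tree = Map<string, string>;
type Rule = (tree: Tree) => Tree;

function chainRules(rules: Rule[]): Rule {
  // Apply each Rule in order, feeding the output Tree into the next Rule.
  return (tree) => rules.reduce((current, rule) => rule(current), tree);
}

const addA: Rule = (t) => new Map(t).set('/a.txt', 'A');
const addB: Rule = (t) => new Map(t).set('/b.txt', 'B');

const combined = chainRules([addA, addB]);
const out = combined(new Map());
console.log(out.get('/a.txt'), out.get('/b.txt')); // A B
```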

So, all together, our schematic now looks like this:

import { Rule, SchematicContext, Tree, externalSchematic, apply, url, template, chain, mergeWith, MergeStrategy } from '@angular-devkit/schematics';
import { strings } from '@angular-devkit/core';
 
export function newRepo(_options: any): Rule {
  const name = _options.name;
 
  return (tree: Tree, _context: SchematicContext) => {
 
    const templateSource = apply(url('./files'), [
      template({..._options, ...strings}),
    ]);
    const merged = mergeWith(templateSource, MergeStrategy.Overwrite)
 
    const rule = chain([
      generateRepo(name),
      merged
    ]);
 
    return rule(tree, _context) as Rule;
  }
}
 
function generateRepo(name: string): Rule {
 return externalSchematic('@schematics/angular', 'ng-new', {
   name,
   version: '9.0.0',
   directory: name,
   routing: false,
   style: 'scss',
   inlineStyle: false,
   inlineTemplate: false
 });
}

And, if we run it, we get something like this:


> schematics .:new-project fancy-app ✔
CREATE fancy-app/README.md (1025 bytes)
CREATE fancy-app/.editorconfig (274 bytes)
CREATE fancy-app/.gitignore (631 bytes)
CREATE fancy-app/angular.json (3678 bytes)
CREATE fancy-app/package.json (1285 bytes)
CREATE fancy-app/tsconfig.json (489 bytes)
CREATE fancy-app/tslint.json (3125 bytes)
CREATE fancy-app/browserslist (429 bytes)
CREATE fancy-app/karma.conf.js (1021 bytes)
CREATE fancy-app/tsconfig.app.json (210 bytes)
CREATE fancy-app/tsconfig.spec.json (270 bytes)
CREATE fancy-app/src/favicon.ico (948 bytes)
CREATE fancy-app/src/index.html (294 bytes)
CREATE fancy-app/src/main.ts (372 bytes)
CREATE fancy-app/src/polyfills.ts (2835 bytes)
CREATE fancy-app/src/styles.scss (80 bytes)
CREATE fancy-app/src/test.ts (753 bytes)
CREATE fancy-app/src/assets/.gitkeep (0 bytes)
CREATE fancy-app/src/environments/environment.prod.ts (51 bytes)
CREATE fancy-app/src/environments/environment.ts (662 bytes)
CREATE fancy-app/src/app/app.module.ts (314 bytes)
CREATE fancy-app/src/app/app.component.scss (0 bytes)
CREATE fancy-app/src/app/app.component.html (25725 bytes)
CREATE fancy-app/src/app/app.component.spec.ts (951 bytes)
CREATE fancy-app/src/app/app.component.ts (214 bytes)
CREATE fancy-app/e2e/protractor.conf.js (808 bytes)
CREATE fancy-app/e2e/tsconfig.json (214 bytes)
CREATE fancy-app/e2e/src/app.e2e-spec.ts (642 bytes)
CREATE fancy-app/e2e/src/app.po.ts (301 bytes)
CREATE fancy-app/.npmrc (71 bytes)
CREATE fancy-app/.nvmrc (7 bytes)
CREATE fancy-app/.prettierrc (228 bytes)

You can see our additional files being added at the end. If you want to check that the existing files were overwritten by our versions, you’ll need to run the schematic with debug mode turned off.

 

Editing files


 

The last thing we want to do is make some changes to our package.json.

      • Add prettier and husky to the devDependencies
      • Add a commit hook using husky
      • Add some additional scripts for our CI server

In theory, we could just include a standard package.json file like we did with our other files. However, when we use the CLI to generate a project, it also generates the package.json, so we’d risk our package.json file getting out of sync with the generated one. Instead, we’re going to use the generated one, and alter it. You can use the same technique to alter your angular.json if you need to – for example if you want to add assets or styles.

To do this, we’re going to write our own Rule factory, called updatePackageJson. This factory will need to be passed in the name of the project, and will return a Rule.

function updatePackageJson(name: string): Rule {
 return (tree: Tree): Tree => {
  
 }
}

The next thing we need to do is read in the current package.json file. We can use tree.read(path) to fetch the contents of the file as a buffer, then, because we’re dealing with a JSON file, we can use JSON.parse() to parse it.

const path = `/${name}/package.json`;
const file = tree.read(path);
const json = JSON.parse(file!.toString());

Don’t forget to convert the buffer to a string, using the buffer’s toString() method. The ! tells TypeScript that we are certain file won’t be null, so it doesn’t need to worry about it.

Now that we’ve parsed our package.json file, we can manipulate it just like any other object. I’m going to extend the scripts property with some additional scripts, and add a husky property, and our devDependencies.

json.scripts = {
  ...json.scripts,
  'build:prod': 'ng build --prod',
  'test:ci': 'ng test --no-watch --code-coverage'
};
 
json.husky = {
  'hooks': {
    'pre-commit': 'pretty-quick --staged --pattern \"apps/**/**/*.{ts,scss,html}\"'
  }
};
 
json.devDependencies.prettier = '^2.0.0';
json.devDependencies.husky = '^4.2.0';

The one downside to this approach is that we need to know the version of prettier and husky to use. Or, we could look on this as an upside, as it means all our projects will be using the same version. If you just want to always install the latest version, you can use the value latest instead. If you want to install the latest version at the time the project is created (and then have everyone working on the project use that same version), you can fetch that information from the NPM API before writing the JSON file. That’s a little complicated for this post though!

The final thing we need to do is update the tree, and return it.

tree.overwrite(path, JSON.stringify(json, null, 2));
return tree;

Note that tree.overwrite() does what it says on the tin, and actually updates the current tree object, rather than returning a new, updated object.
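Pulling the pieces together, the read-modify-write cycle looks like this in isolation. A plain string stands in here for the buffer returned by tree.read, which we would first convert with toString():

```typescript
// The read-modify-write cycle in isolation. `fileContents` stands in for
// tree.read(path)!.toString().
const fileContents = '{"name":"fancy-app","scripts":{}}';

const json = JSON.parse(fileContents);
// Manipulate it like any other object:
json.scripts['build:prod'] = 'ng build --prod';

// Serialize with 2-space indentation, as passed to tree.overwrite above:
const updated = JSON.stringify(json, null, 2);
console.log(updated.includes('"build:prod"')); // true
```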

So now, we’ve got our completed schematic.

import { Rule, SchematicContext, Tree, externalSchematic, apply, url, template, chain, mergeWith, MergeStrategy } from '@angular-devkit/schematics';
import { strings } from '@angular-devkit/core';
export function newRepo(_options: any): Rule {
  const name = _options.name;
  return (tree: Tree, _context: SchematicContext) => {
 
   const templateSource = apply(url('./files'), [
     template({..._options, ...strings}),
   ]);
   const merged = mergeWith(templateSource, MergeStrategy.Overwrite)
 
   const rule = chain([
     generateRepo(name),
     merged,
     updatePackageJson(name)
   ]);
 
   return rule(tree, _context) as Rule;
 }
}
 
function generateRepo(name: string): Rule {
 return externalSchematic('@schematics/angular', 'ng-new', {
   name,
   version: '9.0.0',
   directory: name,
   routing: false,
   style: 'scss',
   inlineStyle: false,
   inlineTemplate: false
 });
}
 
function updatePackageJson(name: string): Rule {
 return (tree: Tree, _: SchematicContext): Tree => {
   const path = `/${name}/package.json`;
   const file = tree.read(path);
   const json = JSON.parse(file!.toString());
 
   json.scripts = {
     ...json.scripts,
     'build:prod': 'ng build --prod',
     test: 'ng test --code-coverage',
     lint: 'ng lint --fix',
   };
 
   json.husky = {
     'hooks': {
       'pre-commit': 'pretty-quick --staged --pattern \"apps/**/**/*.{ts,scss,html}\"',
     }
   };
 
   json.devDependencies.prettier = '^2.0.0';
   json.devDependencies.husky = '^4.2.0';
 
   tree.overwrite(path, JSON.stringify(json, null, 2));
   return tree;
 }
}

 

Publishing our Schematic

Now that our schematic is complete, we need to make it available for others to use. This works just like any other NPM package. At my workplace, we have a Jenkins job which runs npm publish whenever we push a version with a tag.

Once your schematic is published, anyone who wants to use it will need to install it globally.

> npm install -g new-project

It needs to be installed globally like this because we need to use it before we’ve created our project. If you were creating a schematic that, say, generated new components after the project had been created, you could just install it locally.

Finally, you can use the schematic via:

> ng new fancy-project --collection=new-project

It’s a little cumbersome, but you only have to do it once per project!

 

Conclusion

So that’s it. We’ve created a new schematic which:

      • calls ng new
      • adds some new files
      • changes some of the files provided by the CLI
      • adds some additional dependencies

We’ve looked at how to create the schematic, how to improve the user experience through the use of a schema, and how to publish and use our schematic. Hopefully you find some of this useful!

The post How to create your own Angular Schematics appeared first on International JavaScript Conference.

]]>
Real-Time in Angular: A journey into Websocket and RxJS https://javascript-conference.com/blog/real-time-in-angular-a-journey-into-websocket-and-rxjs/ Mon, 16 Mar 2020 14:53:28 +0000 https://javascript-conference.com/?p=30163 Real-time is an interesting topic to consider these days. The demand for real-time functionality to be implemented in modern web applications has grown tremendously. The sooner you have the data, the quicker you can react and make decisions. Thus, the chance for higher profits is huge. In this article we will discuss how to implement this real-time feature in your Angular application using WebSocket and RxJS.

The post Real-Time in Angular: A journey into Websocket and RxJS appeared first on International JavaScript Conference.

]]>
First, a bit of background

The WebSocket protocol arrived alongside HTML5. It is useful when you want low-latency, persistent, bidirectional communication between the client and the server, so you can send data both from and to the browser. Unlike HTTP, WebSocket is a stateful communication protocol that works over TCP. After the connection is established, the client and server exchange data in frames, each carrying a header of at least 2 bytes.

The technology has been around for a while, long enough to enjoy excellent support across all browsers. Having a two-way channel is attractive for use cases like games, messaging applications, and when you need near real-time updates in both directions.

Project Setup

I’ll be using Angular 8 for the client and Node.js for the server, which uses the ws library: a simple, fast, and thoroughly tested WebSocket client and server implementation for Node.js.
You can use pretty much any front-end or server framework.
This is an overview of a simple node server:

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8081 });

wss.on('connection', ws => {
  onConnection(ws);
  ws.on('message', message => {
    onMessage(message, ws);
  });
  ws.on('error', error => {
    OnError(error);
  });
  ws.on('close', () => {
    onClose();
  });
});

As it is an event driven protocol, you will have to provide actions when:

  • the connection is established (the onConnection method is called)
  • a message is received (the onMessage method is called)
  • an error occurs  (the OnError method is called)
  • the connection is closed  (the onClose method is called)

…and so on and so forth.

iJS Newsletter

Join the JavaScript community and keep up with the latest news!

How to handle real time updates in your Angular application

Many open source packages are available to handle live updates coming from a WebSocket server. Some implement the protocol and leave the rest to the developer. The others are built on top of the protocol with various additional features commonly required by real-time messaging applications.
However, I don’t really recommend adding a third-party dependency to your project every time you have a new feature to support. This will increase your bundle size and affect the performance of your application. You also have to consider factors such as compatibility, version management, reliability, active support, and maintainability.
So, we had better pick something from the Angular ecosystem. What you will be really glad to know is that RxJS ships with a special kind of subject, WebSocketSubject, which is a wrapper around the W3C WebSocket object available in the browser. It allows us to both send and receive data over the WS connection.
Sounds great!

How to put it in place

In order to use it, all you need to do is call the webSocket factory function, which produces this special type of subject and takes the endpoint of your ws server as a parameter. You can use wss for a secure WebSocket connection.

import { webSocket } from "rxjs/webSocket";
const subject = webSocket("ws://localhost:8081");

This way you have a ready to use subject that you should subscribe to in order to establish the connection with your endpoint and start receiving and sending some data.
As the WebSocketSubject is nothing but a regular RxJS subject, it can act as both an observable and an observer at the same time. Therefore, you can send data to the WebSocket using the next method and register callbacks to process the incoming messages.

Simple, right? Now let’s see the recommended architecture together.

What architecture to adopt

All the interactions with the WebSocketSubject should be isolated in a separate service as follows:

import { Injectable } from '@angular/core';
import { webSocket, WebSocketSubject } from 'rxjs/webSocket';
import { environment } from '../../environments/environment';
import { catchError, tap, switchAll } from 'rxjs/operators';
import { EMPTY, Subject } from 'rxjs';
export const WS_ENDPOINT = environment.wsEndpoint;
 
@Injectable({
  providedIn: 'root'
})
export class DataService {
  private socket$: WebSocketSubject<any>;
  private messagesSubject$ = new Subject();
  public messages$ = this.messagesSubject$.pipe(switchAll(), catchError(e => { throw e }));
 
  public connect(): void {
 
    if (!this.socket$ || this.socket$.closed) {
      this.socket$ = this.getNewWebSocket();
      const messages = this.socket$.pipe(
        tap({
          error: error => console.log(error),
        }), catchError(_ => EMPTY));
      this.messagesSubject$.next(messages);
    }
  }
 
  private getNewWebSocket() {
    return webSocket(WS_ENDPOINT);
  }
  sendMessage(msg: any) {
    this.socket$.next(msg);
  }
  close() {
    this.socket$.complete();
  }
}

Let’s break this down!

  • getNewWebSocket(): Returns a new WebSocketSubject for a given URL.
  • close(): Closes the connection by completing the subject.
  • connect(): Calls getNewWebSocket() and emits the messages coming from the server to a private subject, messagesSubject$.
  • sendMessage(): Sends a message to the socket, which forwards it to the server.
  • messages$: A public observable that every real-time component subscribes to. The switchAll operator flattens the observable-of-observables pushed through messagesSubject$ into a single stream of messages.

What remains is calling the connect method from your root component…

constructor(private service: DataService) {
this.service.connect();
}

…and subscribing to the messages observable in your Angular component to receive the most recent values.

  liveData$ = this.service.messages$.pipe(
    map(rows => rows.data),
    catchError(error => { throw error }),
    tap({
      error: error => console.log('[Live component] Error:', error),
      complete: () => console.log('[Live component] Connection Closed')
    }
    )
  );

As you can see, we don’t subscribe to the messages directly. We first process the incoming messages from the server, applying the transformation map(rows => rows.data) to the observable. The result is stored in the liveData$ observable.

Errors are handled using the RxJS catchError operator and the tap operator is used to log a message when an error occurs or when the connection closes.

We are one step away from the live component, so just subscribe to the liveData$ observable in the component’s template using the async pipe.

And now we are done!

EVERYTHING AROUND ANGULAR

Explore the iJS Angular Development Track

How to improve performance

Now, time for the icing on the cake. It is highly advisable to switch the change detection strategy to OnPush in order to gain performance.

  changeDetection: ChangeDetectionStrategy.OnPush

At this point you may be wondering how to handle reconnection. When we restart the server or the connection cuts out for whatever reason, does this Subject restore the lost connection for us?
Well, the answer is no. The reconnection is not supported by the WebSocketSubject or the WebSocket protocol. By design, WebSockets do not handle reconnection.

But don’t worry. You can implement this easily in your Application using RxJS as well.

How to restore a lost connection

When the connection is lost, the socket will be closed and the WebSocketSubject will no longer emit values. This is not the expected behaviour in the real-time world; reconnection capability is a must in most cases.
So, let’s say that after a disconnection, our application should attempt to reconnect every 2 seconds, for example. The trick in this case is intercepting the closure of the socket and retrying the connection.

How to intercept the closure of the connection?

This is possible thanks to the WebSocketSubjectConfig. The WebSocketSubjectConfig is responsible for customizing some behaviour in the socket lifecycle, namely the opening and the closure of the connection. Instead of calling the webSocket function with a single string argument (the URL of your endpoint), you can call it with a whole WebSocketSubjectConfig object.
The following code creates a WebSocket subject using the WebSocketSubjectConfig and simply intercepts the closure event to display a custom message, [DataService]: connection closed, in the browser’s console.

  private getNewWebSocket() {
    return webSocket({
      url: WS_ENDPOINT,
      closeObserver: {
        next: () => {
          console.log('[DataService]: connection closed');
        }
      },
    });
  }

Cool! But how do you retry the connection?

RxJS has a set of operators that come in handy in many situations. In our case, we can combine the retryWhen operator that will resubscribe to a subject conditionally after it completes, with the delayWhen operator to set the delay between one connection and another.

Let’s implement a function that will retry to connect to a given observable every configurable RECONNECT_INTERVAL . We will log every attempt of reconnection in the browser’s log. The function will look like the following:

  private reconnect(observable: Observable<any>): Observable<any> {
    return observable.pipe(
      retryWhen(errors => errors.pipe(
        tap(val => console.log('[Data Service] Try to reconnect', val)),
        delayWhen(_ => timer(RECONNECT_INTERVAL))
      ))
    );
  }
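The same retry idea can be sketched without RxJS, which makes the mechanics easier to see. This is a conceptual sketch only, not the service’s actual code; the maxRetries guard is an illustrative addition:

```javascript
// Retry a failing connect function, waiting a fixed interval between
// attempts, and give up after maxRetries failures.
function connectWithRetry(connect, interval, maxRetries) {
  return new Promise((resolve, reject) => {
    let attempt = 0;
    const tryOnce = () => {
      attempt++;
      connect().then(resolve).catch(err => {
        if (attempt > maxRetries) {
          reject(err);
        } else {
          console.log('[Data Service] Try to reconnect, attempt', attempt);
          setTimeout(tryOnce, interval);
        }
      });
    };
    tryOnce();
  });
}
```

The retryWhen/delayWhen combination above does exactly this resubscribe-after-delay dance, but as a reusable stream operator.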

This reconnect function will be used as a custom operator to handle the reconnection after the socket’s closure.

 public connect(cfg: { reconnect: boolean } = { reconnect: false }): void {
 
    if (!this.socket$ || this.socket$.closed) {
      this.socket$ = this.getNewWebSocket();
      const messages = this.socket$.pipe(cfg.reconnect ? this.reconnect : o => o,
        tap({
          error: error => console.log(error),
        }), catchError(_ => EMPTY))
      this.messagesSubject$.next(messages);
    }
  }

As you can tell, a new reconnect flag is added to the connect function so we can differentiate between reconnection mode and first-connection mode. This keeps the code compact and avoids adding an extra function.
Then, all you have to do is call the connect function with the flag reconnect: true when intercepting the connection closure as follows:

 private getNewWebSocket() {
    return webSocket({
      url: WS_ENDPOINT,
      closeObserver: {
        next: () => {
          console.log('[DataService]: connection closed');
          this.socket$ = undefined;
          this.connect({ reconnect: true });
        }
      },
    });
  }
How to send messages to the socket

In order to send messages to the server, all you have to do is call the sendMessage method detailed above, which in turn calls the next method.

    this.service.sendMessage('Hello');

Message serialization

The message will be serialized before it is sent to the server. By default, the JSON.stringify method is used. But if you want to customize the serialization, you can define your own function in the WebSocketSubjectConfig:

  private getNewWebSocket() {
    return webSocket({
      url: WS_ENDPOINT,
      serializer: msg => JSON.stringify({ roles: "admin,user", msg: { ...msg } })
    });
  }
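To see what such a serializer puts on the wire, here is the same function evaluated standalone, in plain JavaScript outside Angular:

```javascript
// The custom serializer wraps every outgoing message in an envelope;
// the roles value simply mirrors the example's hard-coded string.
const serializer = msg => JSON.stringify({ roles: "admin,user", msg: { ...msg } });

const wireFormat = serializer({ text: 'Hello' });
// wireFormat: '{"roles":"admin,user","msg":{"text":"Hello"}}'
```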

The same is true for deserialization. By default, JSON.parse is used, and you can define your own function in the WebSocketSubjectConfig as well:

  private getNewWebSocket() {
    return webSocket({
      url: WS_ENDPOINT,
      deserializer: ({ data }) => data
    });
  }

Error Handling

You can report a custom error by calling the error method on the subject. This error will close the connection but at least the server will know the cause.

    socket$.error({code: 5555, reason: 'wrong parameter'});

That’s good! But what about event typing?

If you have a closer look, you will notice that you can’t intercept a specific event using an event ID. WebSocket lacks this by design.

How you can get around event typing?

RxJS provides a cool multiplexing feature. It is useful when you want to listen only to specific events coming from the server. The connection to the socket server remains the same, the same stream stays open, and when a message comes in, the WebSocketSubject routes it to the adequate observer.
Here’s an example: the multiplex method produces an observable and accepts three parameters. The first two are functions returning subscription and unsubscription messages respectively; the third is a predicate that filters the incoming messages.
The subscription message will be sent on every subscription to the observable, and the unsubscription message on every unsubscription from it. This way, the server is notified and can use them to start or stop sending messages to the client.

    const eventX$ = this.socket$.multiplex(
      () => ({subscribe: 'eventX'}),
      () => ({unsubscribe: 'eventX'}),
      message => message.type === 'eventX');
 
    const subA = eventX$.subscribe(messageForAlerts => console.log(messageForAlerts));

In this example, the server will send specific messages when an eventX is fired.
This technique is useful also when you have separate services with different WebSocket endpoints, running on separate machines with only GUI combining them together. You can implement a single Gateway that communicates with the services and manipulates streams separately from your client using the multiplexing.
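The routing idea behind multiplex can be illustrated with a small plain-JavaScript sketch; this is conceptual only, not how WebSocketSubject is actually implemented:

```javascript
// One connection, several observers: incoming messages are routed
// to the matching handler based on their type field.
function createMultiplexer() {
  const handlers = new Map();
  return {
    // register an observer for a given event type
    on(type, handler) { handlers.set(type, handler); },
    // route an incoming message to the matching observer, if any
    dispatch(message) {
      const handler = handlers.get(message.type);
      if (handler) handler(message);
    }
  };
}
```

In the RxJS version, the third argument of multiplex plays the role of this type check.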

How to debug it

The browser’s console is a very good tool! Just go to the Network tab and filter the ws request.

The first request is made over HTTP and is known as the protocol upgrade (101 Switching Protocols): the client and the server agree to stop speaking HTTP and switch to the WebSocket protocol. From this point on, all communication is done over WS, no HTTP anymore. Just hit the Messages tab to see the incoming messages:

SmartWebSocketClient is a good plugin to test the WS server.

Let’s take things to the next level!

If your application is meant to scale, you should consider state management. NgRx is a cool state management library. With it, the call to the socket service is triggered from NgRx effects, while the live component only dispatches actions and selects state. By isolating this side effect from the components, you get purer components and can gain big performance improvements.

You can find two repositories on GitHub for a real project: Real Time Dashboard

Version without NgRx: https://github.com/lamisChebbi/ng-realtime-dashboard
Version with NgRx: https://github.com/lamisChebbi/ng-realtime-dashboard-ngrx

Summary

In this article, we used RxJS to implement a real-time Angular application. We explored the features that the WebSocketSubject provides to support connection control, multiplexing, and event typing. We also learned how to add a reconnection mechanism, how to process messages coming from the server, and how to send messages to the server. Finally, we went even further with state management using NgRx for scalable and more complex applications.

Thanks for reading. Peace out!

The post Real-Time in Angular: A journey into Websocket and RxJS appeared first on International JavaScript Conference.

]]>
Angular Code Smells https://javascript-conference.com/blog/angular-code-smells/ Mon, 10 Feb 2020 10:14:33 +0000 https://javascript-conference.com/?p=29718 Writing frontend applications is a complex process, it involves lots of difficult scenarios, a myriad of tools and, of course, browser support. But leaving that aside, keeping a high-quality codebase that is maintainable over the long term is just as complicated. In this article, we aim to give you a list of the most villainous coding smells you might run into when writing Angular apps, and the respective solution or alternative.

The post Angular Code Smells appeared first on International JavaScript Conference.

]]>
Code smells

A code smell in itself is not a mistake, but a symptom of an underlying issue in your code. Let’s take a look at one:
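The snippet in question is the classic deep-relative-import pattern, something along these lines (the paths are hypothetical):

```typescript
import { UserService } from '../../../../core/services/user.service';
import { formatDate } from '../../../shared/utils/format-date';
```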

Some developers might state that there’s absolutely nothing wrong with the code above, and I’d agree with them. But I’d also agree with those developers who think it can be improved considerably.

The bottom line is that your code will compile either way, because this snippet is not a coding error but a code smell; it gives us a clue about where the real problem lies: the TypeScript configuration.

It turns out that TypeScript provides a configuration file (tsconfig.json) where you can define absolute paths for imports:
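One possible configuration, assuming a conventional src/app layout (the alias names are examples):

```json
{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      "@core/*": ["app/core/*"],
      "@shared/*": ["app/shared/*"]
    }
  }
}
```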

By adding those entries in the paths property, you will be able to switch your import statements to:
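Assuming aliases like @core and @shared were defined, the imports shrink to:

```typescript
import { UserService } from '@core/services/user.service';
import { formatDate } from '@shared/utils/format-date';
```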

Now your codebase is easier to read and easier to refactor.

Identifying code smells and discerning the real issue behind them is not easy. We could have said that the real problem was the folder structure of the application, and while there’s some truth to that statement, it would have been unrealistic to think that import paths would have been simplified as much just by moving files and folders around. TypeScript offers an effective approach regardless of the folder structure.

Are you ready?

We will take a look at a few more code smells and we will try to understand the underlying problem, so that next time you stumble upon them, you know exactly how to deal with them!

Double equal vs triple equal

Our next code smell is strictly (pun intended) related to JavaScript. As of today, I’ve read countless tweets, articles, and opinions around the use of the double equal vs triple equal in JavaScript. We can safely say that the matter is settled, but what happens when you encounter a codebase like this:
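For illustration, a few comparisons where loose and strict equality disagree:

```javascript
// Loose equality coerces its operands before comparing,
// which makes results hard to predict at a glance.
const a = 0 == '';             // true  -- '' coerces to 0
const b = '1' == 1;            // true  -- '1' coerces to 1
const c = null == undefined;   // true  -- special-cased by the spec
const d = 0 === '';            // false -- strict: different types
```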

What’s the underlying problem revealed by this piece of code? Lack of JavaScript knowledge or lack of JavaScript linters?

In reality, your team will often have members whose expertise is not high enough to know the difference – let’s say juniors. Other times you will have members who know the difference but make typos because they’re coding really fast – let’s say seniors.

Both scenarios are plausible, these things happen, and it’s ok. What’s not okay is not having the machine fix it, because machines can, through linters.

Tools like Prettier or ESLint do a terrific job when it comes to formatting JavaScript, and there are numerous options for you to pick, customize, and apply. The end result should look like this:

Property Binding

Property Binding is one of the most used features of Angular, it is essential to display information in the DOM, but what happens when you find something like this:
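The snippet is a property binding of this shape (component and input names are hypothetical):

```html
<app-header [title]="'Welcome'"></app-header>
```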

You’ll immediately notice something strange, but what is it?
If you try to run this code, it will work perfectly, yet there’s an underlying problem with it. The developer who wrote this snippet (which happens to be me) might have overlooked how Change Detection works when using Property Binding.

It turns out that when you are passing a scalar value to a property, as opposed to a variable, you don’t need Change Detection running over and over to check if the property value has changed. Why? Because it’s not a variable, but a constant (a string, a number, or a boolean directly hardcoded in the HTML).

In such cases, you can get rid of the square brackets entirely:
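For a hardcoded string, the binding becomes a plain attribute (component and input names are hypothetical):

```html
<app-header title="Welcome"></app-header>
```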

While the performance improvement is insignificant, on larger codebases you might notice a positive difference.

Loading Speed

Sometimes you are working on pages that look like this:

As you can see, there are plenty of components. Therefore, every time the user opens the website it takes a while to load. Where is the code smell?

Reality is that websites often have lots of components and modules, and if it is a business requirement, there’s not much you can do about it, but you DO have control over how to serve those modules and components.

A popular approach is to serve resources on-demand, as opposed to eagerly loading all of them. In Angular, this strategy is called Lazy Loading, and it allows you to serve only the modules the user needs at a given time, reducing the latency of loading the website the first time.

You won’t believe how common it is for Angular developers to forget about implementing performance strategies. I’ve reviewed many codebases over the last 2 years, and Lazy Loading is one of the most frequently skipped features. This is a visual representation of what a performance strategy would look like:
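In code, the strategy boils down to lazy-loaded routes. A sketch using Angular 8’s dynamic-import syntax (module and path names are hypothetical):

```typescript
// Only HomeComponent ships in the initial bundle; the admin module
// is fetched the first time the user navigates to /admin.
const routes: Routes = [
  { path: '', component: HomeComponent },
  {
    path: 'admin',
    loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule)
  }
];
```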

Injecting services

Services are one of the core elements of Angular, they provide structure to your application and allow you to decouple logic, making your codebase more maintainable. They are easy to use thanks to Dependency Injection.

And while injecting services in a constructor seems like a harmless thing to do, sometimes you find more complex scenarios:
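For instance, a constructor along these lines (all service names are hypothetical):

```typescript
constructor(
  private auth: AuthService,
  private users: UserService,
  private orders: OrderService,
  private billing: BillingService,
  private notifications: NotificationService,
  private analytics: AnalyticsService,
  private logger: LoggerService,
  private dialogs: DialogService,
  private featureFlags: FeatureFlagService,
  private router: Router
) {}
```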

Some would say that the snippet above is completely normal, and again, I’d agree with them. What I am concerned about is not the amount of services injected, but what it means. I’m concerned about the code smell.

If a given component has 10 or more services injected, chances are that it is not a small component; it is likely a large, overly complex component that is doing too much.

Anyone could wrap a few services into a single service, like a facade. But it wouldn’t make the component simpler; it would at most remove a few lines of code, while the component’s responsibility would remain just as high. It is like an employee who takes part in every decision, never shares knowledge, and never delegates responsibilities: it becomes a liability.

In cases like this, you need to review your architecture and figure out why this component is doing too much. A redesign might help you reduce the number of services you need to inject:
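After splitting responsibilities, the component might end up with only the services it genuinely needs (names are hypothetical):

```typescript
constructor(
  private orders: OrderService,
  private notifications: NotificationService
) {}
```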

Something like this is less suspicious and chances are that the component follows the single responsibility principle.

That’s it for now! Later we will share more code smells in Angular applications and how to deal with them! Thanks for reading and stay tuned!

The post Angular Code Smells appeared first on International JavaScript Conference.

]]>
How to develop GraphQL frontends with Angular and React https://javascript-conference.com/blog/how-to-develop-graphql-frontends-with-angular-and-react/ Mon, 30 Sep 2019 16:35:51 +0000 https://javascript-conference.com/?p=28298 GraphQL, the web API query language developed by Facebook, has been gaining attention for several years now. But while a lot of articles on the subject examined the server-side in detail, the client itself got less attention. This article will focus on the usage of a GraphQL interface in the frontend, while also taking a closer look at both Angular and React.

The post How to develop GraphQL frontends with Angular and React appeared first on International JavaScript Conference.

]]>
GraphQL is a query language that allows data to be queried in a JSON-like form from a GraphQL interface. Unlike REST, GraphQL does not focus on individual resources with their respective URLs, but on a single GraphQL schema offered by the server. Clients send queries against this schema and receive the corresponding data from the server. The schema is statically typed: the schema’s developers specify types that describe what kind of data is offered and how the types relate to each other. Listing 1 shows an example of a request as it might occur in a blogging application with blog articles, authors, and comments. All articles are queried, but not all of their fields are included, only their id and title. The query also contains the authors and comments for each article. We are thereby able to query, in a single request, for exactly the data that is currently required. The response is a JSON structure that corresponds exactly to this query.

Listing 1

{
  articles {
    id
    title
    authors {
      name
    }
    comments {
      text
    }
  }
}

There are several options for actually using GraphQL in a client application. Theoretically, it would of course be possible to send the queries manually and to manually prepare the result data for display. However, there are also frameworks that support developers in using GraphQL. This is especially useful in times of declarative UI libraries such as React, because besides the UI, the data requirements can also be described declaratively. A component then no longer states imperatively how the data is fetched, but only which data is required.

The Apollo Client, which is available as open source software, is such a framework that supports this kind of development. Apollo enjoys a good reputation within the GraphQL community and is probably the most widely used client framework for GraphQL applications. But above all, it is also available for various UI frameworks, including React, Angular, Vue.js, native Android, and iOS applications. In this article, we want to focus on the integration and interaction with React, but we will also take a brief look at Angular at the end.

The already mentioned blogging platform will serve as the example context. The complete source code of this application can be found on GitHub, server included; the server acts as the application’s backend and provides a ready-made schema. We only need a current Node.js installation to start the React application; then we can use npx create-react-app <project-name> to create the app’s basic skeleton. The next step is to add the JavaScript packages for the Apollo framework: npm install --save apollo-client apollo-cache-inmemory apollo-link-http graphql graphql-tag react-apollo. We will take a closer look at what these packages do in a moment.

The basic idea behind Apollo is that there is a store in the frontend, which manages the application data. UI components describe which data they need via GraphQL and receive exactly this data from the Apollo store. The store takes care of loading the data from the server if required and manages it internally. A local cache is used for this purpose, in order to store the data. Therefore, it is transparent to a UI component whether its request is served from the cache or from the server.

Set-up

The first step is to set up this exact Apollo store and integrate it into the React application. The code can be seen in Listing 2. The existing index.js file will be adjusted accordingly.

The store is created with the class ApolloClient. For the configuration, the values link and cache must be filled. The link is used to tell Apollo where to find the server’s GraphQL schema. In the example, the URI is given as a relative path. Alternatively, complete URIs in the form of http://example.com/api/graphql are also possible.

The second parameter defines how the local caching shall take place. InMemoryCache is the usual variant, which stores data in memory, but only for the duration of the browser session. There are, however, alternative implementations which use, for example, the browser’s LocalStorage to keep the data permanently available. In this way, the cache can be refilled after a browser reload without any new network communication.

After that, the ApolloClient has to be made available for React. This is done via the component ApolloProvider. This wraps our actual app component and ensures that all underlying React components can use GraphQL.

Listing 2

import { ApolloClient } from "apollo-client"
import { HttpLink } from "apollo-link-http"
import { InMemoryCache } from "apollo-cache-inmemory"
import { ApolloProvider } from "react-apollo"

const apolloClient = new ApolloClient({
   link: new HttpLink({uri: "/api/graphql"}),
   cache: new InMemoryCache()
});

ReactDOM.render(
   <ApolloProvider client={apolloClient}>
       <App />
   </ApolloProvider>
, document.getElementById('root'));

Queries

This completes the basic setup and we can write a React component, which retrieves and displays data. Therefore, we want to build a component that provides an overview of the existing blog articles. The first step is to consider which data the component needs, and what the corresponding GraphQL query looks like.

It is quite common to put the query into the same file that contains the actual component. This achieves a high level of encapsulation, since a developer who uses the component does not need to know which data sources are addressed or how. It also means you only need to touch a single place when you change the query and the display.

The query is stored in a local variable and created with the function gql from the graphql-tag package, as shown in Listing 3. A relatively new JavaScript technique called tagged templates (tagged template strings) is used. It allows multi-line strings to be provided with template parameters, which can be processed directly by a function (in our case gql). It is important that the string is enclosed in backticks (grave accents) and not in normal single or double quotation marks.
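To illustrate the mechanism, here is a toy tag function that just reassembles the template string; gql itself does much more, of course:

```javascript
// A tag function receives the literal parts and the interpolated
// values separately, and may return anything it likes.
function tag(strings, ...values) {
  return strings.reduce(
    (out, part, i) => out + part + (i < values.length ? values[i] : ''),
    ''
  );
}

const who = 'GraphQL';
const greeting = tag`Hello ${who}!`;
// greeting: 'Hello GraphQL!'
```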

The gql tag function parses the query string and creates an abstract syntax tree (AST), which is then used to execute the query. This is done with the React component Query, which is imported from the react-apollo package. It accepts the previously defined query variable via its query prop and takes care of executing the query. In order to react to the result, a function is defined as a child element, which Apollo calls automatically whenever the query reaches a new state.

Listing 3

import gql from "graphql-tag"

const articleQuery = gql`
  query articlesQuery {
    articles {
      id
      title
      text
      authors {
        id
        name
      }
    }
  }
`

To do so, the function receives the arguments loading, error and data. loading is a Boolean flag that indicates whether the query is still in progress or has already completed. If an error occurred during loading, the error argument is filled. On success, the data argument contains the result data.

Within the function, you can react to this information and define what the display should look like. In the example in Listing 4, a corresponding text is rendered in both the loading and the error case.

In the success case, the result set is iterated and an entry is rendered for each blog article.

Listing 4

const ArticleOverview = () => (
  <Query query={articleQuery}>
    {({loading, error, data}) => {
      if (loading) {
        return <div><p>Loading...</p></div>
      }
 
      if (error) {
        return <div><p>Error: {error.message}</p></div>
      }
 
      return <div>
          {data.articles.map(article => (
            <div key={article.id}>
              <h2>{article.title}</h2>
 
              <div>
                {article.text}
              </div>
            </div>
          ))}
        </div>
    }}
  </Query>
)

Here, the declarative character of React also becomes apparent: there is no imperative event handling that selectively patches the component's DOM tree, as would be the case in a classic jQuery application. Instead, we write a description that maps the component's state to a matching DOM structure at any given point in time.

Mutations

Now we can display server data in the client via GraphQL. Most web applications, however, must also be able to change data and trigger actions on the server. In GraphQL, these two aspects are separated from each other. At the top level of the application's schema there are the types Query for all offered queries and Mutation for changes (there is also Subscription, which we will not cover here). In contrast to REST, GraphQL allows you to define change operations independently of the data model used for queries. Mutations in GraphQL resemble remote procedure calls (i.e. function calls) rather than the uniformly defined operations on resources that REST prescribes. Depending on the use case, the offered functions can be strongly business-oriented, but simple CRUD operations (Create, Read, Update, Delete) can of course also be defined. In the example, we assume that our server offers a mutation like the one in Listing 5, which allows adding a comment as a guest who is not logged in.

Listing 5

type Mutation {
  addCommentAsGuest(
    articleId: ID!
    authorName: String!
    text: String!
  ): Article
}

This mutation accepts three mandatory parameters (marked by exclamation marks): the ID of the article to which the comment belongs, the name of the author, and the comment text. In GraphQL, mutations always have a return type, in our case Article. Usually this should be the object changed by the mutation, so that a client can immediately display the changed data without having to issue another query.
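On the server, such a mutation maps to a resolver function. The following sketch is purely illustrative — the in-memory store and the function shape are assumptions, and a real server would wire the resolver into graphql-js or Apollo Server — but it shows the essential contract: mutate the targeted article and return it, so the client can update its cache without a follow-up query.

```javascript
// Hypothetical in-memory data store, for illustration only.
const articles = {
  "1": { id: "1", title: "GraphQL Basics", comments: [] },
};

let nextCommentId = 1;

// Resolver sketch for addCommentAsGuest: it changes the article
// and returns the changed object, matching the mutation's
// declared return type Article.
function addCommentAsGuest({ articleId, authorName, text }) {
  const article = articles[articleId];
  if (!article) throw new Error(`Unknown article: ${articleId}`);
  article.comments.push({
    id: String(nextCommentId++),
    text,
    guestAuthor: authorName,
  });
  return article;
}

const updated = addCommentAsGuest({
  articleId: "1",
  authorName: "Ada",
  text: "Great read!",
});
// updated now carries the freshly added comment
```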

For adding comments, we again create our own component, AddComment. This component also consists of two parts: the GraphQL query and the code for the display. The query can be seen in Listing 6. Once again, the query is stored in a variable via the gql tag function: the first line of the query creates a kind of local function definition with the type mutation. The name used here, as well as the identifiers for the parameters, can theoretically be chosen freely, but we follow the names given by the schema. In the second line, the actual mutation of the schema is called and the parameters of the outer function are passed on. What may look like unnecessary duplication at first often turns out to be useful, because it increases flexibility. The mutation call is followed by an ordinary GraphQL selection, which returns some of the changed data of the article. The IDs are especially important here, because Apollo uses them to synchronize its local cache with the new data from the server.

This local cache also allows every other place where components display the affected data to be updated automatically after the mutation has been executed – without requiring an additional request to the server.
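The behavior described here rests on cache normalization: each object is stored once, under a key derived from its type and ID, so a mutation result overwrites the same entries an earlier query wrote. The following is a deliberately simplified model of that idea — not Apollo's actual implementation:

```javascript
// Minimal sketch of a normalized cache: every object with an id is
// stored exactly once, keyed by "Type:id". Queries and mutation
// results write to the same entries.
const cache = new Map();

function writeObject(typename, obj) {
  const key = `${typename}:${obj.id}`;
  // Merge with any previously cached fields for this entity.
  cache.set(key, { ...(cache.get(key) || {}), ...obj });
  return key;
}

// A query writes the article...
writeObject("Article", { id: "42", title: "GraphQL", commentCount: 0 });
// ...and a later mutation result updates the very same entry.
writeObject("Article", { id: "42", commentCount: 1 });

const article = cache.get("Article:42");
// article.title is still "GraphQL"; article.commentCount is now 1
```

Because both writes land on the same entry, every component reading `Article:42` sees the updated data without another network round trip.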

Listing 6

const addCommentMutation = gql`
  mutation addComment($text: String!, $authorName: String!, $articleId: ID!) {
    addCommentAsGuest(text: $text, authorName: $authorName, articleId: $articleId) {
      id
      comments {
        id
        text
        guestAuthor
      }
    }
  }
`

In addition to the GraphQL mutation, we now also create the React component, which contains a form for entering the comment and actually executes the mutation after clicking a button. The source code can be seen in Listing 7.

At this point, it makes sense to separate the two aspects mentioned above, i.e. the display of a form and the execution of the mutation. For this purpose, we first define a React component AddCommentForm, which itself has no dependencies on GraphQL or Apollo, but only displays the form and manages its local state.

As is usual for React forms, the local state of the form fields is updated using the setState method. When the form is submitted, we first prevent the default behavior of HTML forms, namely sending a request and reloading the page. Instead, we want to react directly in JavaScript code. We assume that an addComment function was passed to the component from outside, to which we only have to pass the values necessary for executing the mutation. We take the text and the commentator's name from the local state of the form. The ID of the article being commented on is likewise expected to be passed in from outside.

We keep the AddCommentForm component as an implementation detail within our JavaScript file. In order to actually execute the mutation, we create another React component, AddComment, which we make visible to the outside via export default. Similar to the ArticleOverview component above, we use a Mutation component provided by the Apollo framework, which takes care of the actual work. We only have to pass our mutation query variable and define what the display should look like. For this purpose, a function is again defined as a child element, which receives the GraphQL mutation as a JavaScript function argument. The name we use here corresponds to the local function name in our mutation query. We pass this function directly to our AddCommentForm component. As with the ArticleOverview component, further function parameters would also be possible here: you could, for instance, react to the current loading state of the query or restart the mutation manually. In this simple example, however, we do without that.

Listing 7

class AddCommentForm extends React.Component {
  constructor() {
    super()
    this.state = {name: "", text: ""}
  }

  render() {
    return <div>
      <form onSubmit={e => {
        e.preventDefault()
        this.props.addComment({
          variables: {
            text: this.state.text,
            authorName: this.state.name,
            articleId: this.props.articleId
          }
        })
      }}>
        <div>
          <label>Author:</label>
          <input
            type="text"
            value={this.state.name}
            onChange={e => this.setState(
              {name: e.target.value}
            )}/>
        </div>

        <div>
          <textarea
            value={this.state.text}
            onChange={e => this.setState(
              {text: e.target.value}
            )}/>
        </div>

        <button type="submit">Add Comment</button>
      </form>
    </div>
  }
}

const AddComment = ({articleId}) => (
  <Mutation mutation={addCommentMutation}>
    {(addComment) =>
      <AddCommentForm
        articleId={articleId}
        addComment={addComment}/>
    }
  </Mutation>
)

export default AddComment

Now we have seen both of the Apollo Client framework's essential parts in action. The noteworthy aspect of the Apollo framework is that there is a UI-framework-agnostic part (which includes, for example, the store and the caching) on the one hand, and UI-framework-specific libraries (for using the general functionality comfortably in a familiar way) on the other. The variant shown here – in which UI components accept functions as child elements and call them for display depending on their state – corresponds to a pattern popular in the React community called render props. In a framework designed with an object-oriented mindset, such as Angular, such a functional variant would probably be rather unusual, or perhaps even technically impossible.
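Stripped of React specifics, the render-props idea boils down to "a component that owns some state and delegates rendering to a function it is given". The following is a purely illustrative sketch in plain JavaScript — `QueryLike` and its hard-coded state are made up, not Apollo's implementation:

```javascript
// Render props reduced to plain functions: the "component" owns the
// state and calls the render function it was given with that state.
function QueryLike(render) {
  // Pretend the query has already finished loading with this data:
  const state = {
    loading: false,
    error: null,
    data: { articles: [{ id: "1", title: "Hi" }] },
  };
  return render(state);
}

// The caller decides what each state looks like, exactly as the
// child function of <Query> does.
const output = QueryLike(({ loading, data }) =>
  loading ? "Loading..." : data.articles.map((a) => a.title).join(", ")
);
// output === "Hi"
```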

Angular

However, thanks to the UI-framework-specific integration libraries, the Apollo Client can also be used with Angular. In the last part of this article, we will therefore briefly build an Angular component that provides the same article overview, for comparison. We can reuse the query string from Listing 3 as it is. As usual in Angular, we create fields in the component class for the data to be displayed; in our case loading of type boolean, as well as articles as an array. In this simple example we do without explicit typing and therefore use any as the element type of the array. In real projects one would certainly create dedicated TypeScript types. To get access to the Apollo Client, we inject an instance of Apollo into the constructor. For this to work, Angular's module system has to be configured accordingly beforehand, of course.

To connect the component to the Apollo Client, we use Angular's OnInit lifecycle hook. In the corresponding ngOnInit method we call the watchQuery method of Apollo and pass the GraphQL query from Listing 3. As usual with Angular, RxJS streams are used here as well. Accordingly, the watchQuery method returns such a stream, to which we can subscribe in order to be notified when data for this query has changed. In the subscriber, we react to the new data and store it in the corresponding fields of our class. To release the subscription when Angular removes the component, we should also implement the OnDestroy lifecycle hook. To do this, we create an additional field in the class for the subscription itself. The setup of the GraphQL query is now complete; all that is missing is a template that displays the corresponding data. The whole component can be seen in Listing 8.
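The subscribe/unsubscribe contract used in ngOnInit and ngOnDestroy can be illustrated without RxJS. The toy stream below is not the RxJS API — it is a minimal sketch showing why a component must release its subscription when it is destroyed:

```javascript
// Toy push-based stream illustrating the subscribe/unsubscribe
// contract; real code would use RxJS Observables.
function createStream() {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn);
      // The subscription object lets the consumer detach later,
      // e.g. from an ngOnDestroy hook.
      return { unsubscribe: () => subscribers.delete(fn) };
    },
    next(value) {
      subscribers.forEach((fn) => fn(value));
    },
  };
}

const stream = createStream();
const seen = [];
const subscription = stream.subscribe((v) => seen.push(v));

stream.next("first");       // delivered while subscribed
subscription.unsubscribe(); // component destroyed
stream.next("second");      // no longer delivered
```

Without the unsubscribe call, the callback would keep firing (and keep the destroyed component alive) for as long as the stream exists — exactly the leak ngOnDestroy guards against.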

Listing 8

@Component({
  selector: 'app-articles-overview-page',
  template: `
    <div *ngIf="loading">
      <p>loading</p>
    </div>

    <div *ngIf="!loading">
      <div *ngFor="let article of articles">
        <h2>{{article.title}}</h2>
        <div>{{article.text}}</div>
      </div>
    </div>
  `,
})
export class ArticlesOverviewPageComponent implements OnInit, OnDestroy {

  loading: boolean;
  articles: Array<any>;
  private querySubscription;

  constructor(private apollo: Apollo) { }

  ngOnInit() {
    this.querySubscription = this.apollo.watchQuery<any>({
      query: articleQuery
    })
      .valueChanges
      .subscribe(({data, loading}) => {
        this.loading = loading;
        this.articles = data.articles;
      });
  }

  ngOnDestroy() {
    this.querySubscription.unsubscribe();
  }
}

Conclusion

We have seen that developing frontend applications with GraphQL and Apollo Client is not difficult. Especially interesting is how the UI components are abstracted away from the technical details of the loading process: the Apollo framework decouples them and optimizes requests via the local cache. At the same time, Apollo offers extensive options for fine-tuning the caching and loading mechanisms if required. The Apollo Client Developer Extension for the Chrome DevTools is also helpful here; it allows a deeper look into the state of the local cache in order to get to the bottom of possible problems. Another interesting aspect of Apollo Client is that it is available for different UI frameworks. This is especially useful in contexts in which different technologies are to be used together.

But the Apollo Client is not the only framework that allows working with GraphQL. The biggest competitor is probably Relay, developed by Facebook. As a framework, it is limited to React or React Native.

The post How to develop GraphQL frontends with Angular and React appeared first on International JavaScript Conference.

back to school—explore the program of iJS https://javascript-conference.com/blog/back-to-school-explore-the-program-of-ijs/ Mon, 02 Sep 2019 16:19:40 +0000 https://javascript-conference.com/?p=28133 September is here! It always reminds people of school time, seeing friends after months and of course studying. We want to take you back to those times and give you the feeling of being a student again! Welcome to the JS Academy and its extensive syllabus!

The post back to school—explore the program of iJS appeared first on International JavaScript Conference.


JavaScript is fast, dynamic and futuristic – just like our program at iJS! Our infographic takes you back to school and your student days by focusing on the highlights and learning objectives of iJS Munich's program and speakers, showing you the hottest topics and latest trends of the JavaScript ecosystem. Each of the learning objectives highlights a different track of iJS, preparing you to stay agile in the dynamic world of JS and to take your skills to the next level. Did you hear the bell? Let's get started with the first lesson!

 

 


Web Components & Micro Apps: Angular, React & Vue peacefully united? https://javascript-conference.com/blog/keynote-video-web-components-micro-apps-angular-react-vue-peacefully-united/ Tue, 20 Aug 2019 15:38:55 +0000 https://javascript-conference.com/?p=28036 Angular, React, Vue or some other framework: Which one are you going to use on your next project? The JavaScript ecosystem offers so many choices and all of them have their pros and cons for any given project, making it difficult to choose just one. But there is a solution to that: With micro apps and web components, you can use whatever works best for any single part of your project.

The post Web Components & Micro Apps: Angular, React & Vue peacefully united? appeared first on International JavaScript Conference.

It’s not either React or Vue.js anymore, but React for one part of your code, and Vue for another! In this keynote from iJS 2018 in Munich, Manfred Steyer explains how to unite all JavaScript frameworks peacefully.

Web development is exciting nowadays! We get new innovative technologies on a regular basis. While this is awesome it can also be overwhelming – especially when you have to maintain a product for the long term.

Web Components and Micro Apps provide a remedy for this dilemma. They allow for decomposing a big solution into small self-contained parts that can be maintained by different teams using the best technology for the requirements in question. Watch this keynote to find out how this idea helps with writing applications that can evolve over a decade and more.

 


Angular Reactive Forms: Building Custom Form Controls https://javascript-conference.com/blog/angular-reactive-forms-building-custom-form-controls-campaign/ Wed, 27 Jun 2018 14:18:01 +0000 https://javascript-conference.com/?p=25869 Angular’s reactive forms module enables us to build and manage complexed forms in our application using a simple but powerful model. You can build your own custom form controls that work seamlessly with the Reactive Forms API. The core idea behind form controls is the ability to access a control’s value. This is done with a set of directives that implement the ControlValueAccessor interface.

The post Angular Reactive Forms: Building Custom Form Controls appeared first on International JavaScript Conference.


The ControlValueAccessor Interface

ControlValueAccessor is an interface for communication between a FormControl and the native element. It abstracts the operations of writing a value and listening for changes in the DOM element representing an input control. The following snippet was taken from the Angular source code, along with the original comments:

The ControlValueAccessor interface.

interface ControlValueAccessor {
  /**
   * Write a new value to the element.
   */
  writeValue(obj: any): void;

  /**
   * Set the function to be called when the control receives a change event.
   */
  registerOnChange(fn: any): void;

  /**
   * Set the function to be called when the control receives a touch event.
   */
  registerOnTouched(fn: any): void;

  /**
   * This function is called when the control status changes to or from "DISABLED".
   * Depending on the value, it will enable or disable the appropriate DOM element.
   * @param isDisabled
   */
  setDisabledState?(isDisabled: boolean): void;
}
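The interplay between the forms API and an accessor can be simulated without Angular. In this plain-JavaScript sketch (purely illustrative — not Angular's actual wiring), a stand-in for the forms API registers its callbacks and pushes a value into a minimal accessor backed by a plain variable instead of a DOM element:

```javascript
// Minimal object implementing the four ControlValueAccessor methods,
// backed by a plain "model" instead of a DOM element.
function createAccessor() {
  const accessor = {
    value: null,
    disabled: false,
    onChange: () => {},
    onTouched: () => {},
    writeValue(obj) { accessor.value = obj; },
    registerOnChange(fn) { accessor.onChange = fn; },
    registerOnTouched(fn) { accessor.onTouched = fn; },
    setDisabledState(isDisabled) { accessor.disabled = isDisabled; },
  };
  return accessor;
}

// What the forms API does when a FormControl is bound:
const accessor = createAccessor();
const received = [];
accessor.registerOnChange((v) => received.push(v)); // view -> model channel
accessor.writeValue("initial");                     // model -> view write

// What the "DOM element" does when the user interacts with it:
accessor.onChange("typed by user");
```

The two directions are visible here: writeValue pushes model values into the view, while the registered onChange callback carries user input back to the FormControl.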


ControlValueAccessor Directives

Each time you use the formControl or formControlName directive on a native <input> element, one of the following directives is instantiated, depending on the type of the input:

  1. DefaultValueAccessor – Deals with all input types, excluding checkboxes, radio buttons, and select elements.
  2. CheckboxControlValueAccessor – Deals with checkbox input elements.
  3. RadioControlValueAccessor – Deals with radio button inputs.
  4. SelectControlValueAccessor – Deals with a single select element.
  5. SelectMultipleControlValueAccessor – Deals with multiple select elements.

 

Let’s peek under the hood of the CheckboxControlValueAccessor directive to see how it implements the ControlValueAccessor interface. The following snippet was taken from the Angular source code:

checkbox_value_accessor.ts.

import {Directive, ElementRef, Renderer, forwardRef} from '@angular/core';
import {ControlValueAccessor, NG_VALUE_ACCESSOR} from './control_value_accessor';

export const CHECKBOX_VALUE_ACCESSOR: any = {
  provide: NG_VALUE_ACCESSOR,
  useExisting: forwardRef(() => CheckboxControlValueAccessor),
  multi: true,
};

@Directive({
  selector: `input[type=checkbox][formControlName],
             input[type=checkbox][formControl],
             input[type=checkbox][ngModel]`,
  host: {
    '(change)': 'onChange($event.target.checked)',
    '(blur)': 'onTouched()'
  },
  providers: [CHECKBOX_VALUE_ACCESSOR]
})
export class CheckboxControlValueAccessor implements ControlValueAccessor {
  onChange = (_: any) => {};
  onTouched = () => {};

  constructor(private _renderer: Renderer, private _elementRef: ElementRef) {}

  writeValue(value: any): void {
    this._renderer.setElementProperty(this._elementRef.nativeElement, 'checked', value);
  }

  registerOnChange(fn: (_: any) => {}): void {
    this.onChange = fn;
  }

  registerOnTouched(fn: () => {}): void {
    this.onTouched = fn;
  }

  setDisabledState(isDisabled: boolean): void {
    this._renderer.setElementProperty(this._elementRef.nativeElement, 'disabled', isDisabled);
  }
}

 

Let’s explain what’s going on:

  1. This directive is instantiated when an input of type checkbox is declared with the formControl, formControlName, or ngModel directives.
  2. The directive listens to change and blur events in the host.
  3. This directive will change both the checked and disabled properties of the element, so the ElementRef and Renderer services are injected.
  4. The writeValue() implementation is straightforward: it sets the checked property of the native element. Similarly, setDisabledState() sets the disabled property.
  5. The function being passed to the registerOnChange() method is responsible for updating the outside world about changes to the value. It is called in response to a change event with the input value.
  6. The function being passed to the registerOnTouched() method is triggered by the blur event.
  7. Finally, the CheckboxControlValueAccessor directive is registered as a provider.

 


Sample Custom Form Control: Button Group

Let’s build a custom FormControl based on the Twitter Bootstrap button group component.
We will start with a simple component:

custom-control.component.ts.

import {Component} from "@angular/core";

@Component({
  selector: 'rf-custom-control',
  templateUrl: 'custom-control.component.html',
})
export class CustomControlComponent {

  private level: string;
  private disabled: boolean;

  constructor() {
    this.disabled = false;
  }

  public isActive(value: string): boolean {
    return value === this.level;
  }

  public setLevel(value: string): void {
    this.level = value;
  }
}

 

Here is the template:

custom-control.component.html.

<div class="btn-group btn-group-lg">

  <button type="button"
          class="btn btn-secondary"
          [class.active]="isActive('low')"
          [disabled]="disabled"
          (click)="setLevel('low')">low</button>

  <button type="button"
          class="btn btn-secondary"
          [class.active]="isActive('medium')"
          [disabled]="disabled"
          (click)="setLevel('medium')">medium</button>

  <button type="button"
          class="btn btn-secondary"
          [class.active]="isActive('high')"
          [disabled]="disabled"
          (click)="setLevel('high')">high</button>
</div>

 

Next, let’s implement the ControlValueAccessor interface:

custom-control.ts component class.

export class CustomControlComponent implements ControlValueAccessor {

  private level: string;
  private disabled: boolean;
  private onChange: Function;
  private onTouched: Function;

  constructor() {
    this.onChange = (_: any) => {};
    this.onTouched = () => {};
    this.disabled = false;
  }

  public isActive(value: string): boolean {
    return value === this.level;
  }

  public setLevel(value: string): void {
    this.level = value;
    this.onChange(this.level);
    this.onTouched();
  }

  writeValue(obj: any): void {
    this.level = obj;
  }

  registerOnChange(fn: any): void {
    this.onChange = fn;
  }

  registerOnTouched(fn: any): void {
    this.onTouched = fn;
  }

  setDisabledState(isDisabled: boolean): void {
    this.disabled = isDisabled;
  }
}

 

The last step is to register our custom control component under the NG_VALUE_ACCESSOR token. NG_VALUE_ACCESSOR is an injection token used to register multiple ControlValueAccessor providers. (If you are not familiar with injection tokens, the multi property, and the forwardRef() function, read the official dependency injection guide on the Angular website.)
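The effect of multi: true can be pictured as a token that maps to an array of providers rather than a single value. The following is a simplified model of that idea in plain JavaScript — not Angular's injector, just an illustration of why several value accessors can coexist under one token:

```javascript
// Simplified model of a multi provider: several registrations under
// one token accumulate into an array instead of overwriting each other.
const registry = new Map();

function provide(token, value, { multi = false } = {}) {
  if (multi) {
    const list = registry.get(token) || [];
    list.push(value);
    registry.set(token, list);
  } else {
    registry.set(token, value); // a normal provider would overwrite
  }
}

const NG_VALUE_ACCESSOR = "NG_VALUE_ACCESSOR";
provide(NG_VALUE_ACCESSOR, "DefaultValueAccessor", { multi: true });
provide(NG_VALUE_ACCESSOR, "CustomControlComponent", { multi: true });

const accessors = registry.get(NG_VALUE_ACCESSOR);
// accessors now holds both entries side by side
```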

Here’s how we register the CustomControlComponent as a provider:

Registering the control as a provider.

const CUSTOM_VALUE_ACCESSOR: any = {
  provide: NG_VALUE_ACCESSOR,
  useExisting: forwardRef(() => CustomControlComponent),
  multi: true,
};

@Component({
  selector: 'rf-custom-control',
  providers: [CUSTOM_VALUE_ACCESSOR],
  templateUrl: 'custom-control.component.html',
})

 

Our custom control is ready. Let’s try it out:

app.component.ts.

import {Component, OnInit} from "@angular/core";
import {FormControl} from "@angular/forms";

@Component({
  selector: 'rf-root',
  template: `
    <div class="container">
      <h1 class="h1">REACTIVE FORMS</h1>

      <rf-custom-control [formControl]="buttonGroup"></rf-custom-control>

      <pre>
        <code>
          Control dirty:   {{buttonGroup.dirty}}
          Control touched: {{buttonGroup.touched}}
        </code>
      </pre>
    </div>
  `,
})
export class AppComponent implements OnInit {

  public buttonGroup: FormControl;

  constructor() {
    this.buttonGroup = new FormControl('medium');
  }

  ngOnInit(): void {
    this.buttonGroup.valueChanges.subscribe(value => console.log(value));
  }
}

 

This tutorial is an excerpt from iJS speaker Nir Kaufman’s eBook “Angular Reactive Forms – A comprehensive guide for building forms with Angular”. The complete book can be purchased in the Leanpub store: https://leanpub.com/angular-forms

 

Interview with Nir about Angular Reactive Forms

Why is Angular a good choice for projects that require a high number of forms? The answer is Angular Reactive Forms, says Nir Kaufman in this interview from iJS 2018 in London. We asked him about forms in Angular in general and different approaches for different kinds of forms. Watch the video to find out the answer!

