What is the JAMstack? https://javascript-conference.com/blog/what-is-the-jamstack/ Tue, 01 Jun 2021

Especially for content-heavy sites and applications where scaling and security play a major role, the JAMstack offers an approach that can make development easier and cheaper.

The post What is the JAMstack? appeared first on International JavaScript Conference.

Just as in 2020, we can expect the JAMstack [1] topic to continue to gain momentum and attract interest from numerous companies. In the following, we’ll take a look at what the JAMstack is all about and how it makes our sites more performant and secure while improving the front-end development experience.

To do this, we will first look at what JAMstack actually means and where the differences lie compared to classic approaches. This is followed by a look at possible areas of application where the JAMstack can show its strengths.

For those who want to get hands-on, I’ll introduce a few frameworks that can be used to develop applications. And don’t worry: Fans of Angular, React, and Vue don’t have to learn a new framework right away.

What does JAM mean?

JAM is an acronym that stands for JavaScript, APIs, and Markup. The architecture involves delivering pre-rendered, static pages via a content delivery network (CDN). Applications are made dynamic via the integration of APIs and JavaScript. Technologies that can play to their strengths include common JavaScript frameworks, static site generators, and specialized third-party APIs (e.g., for authentication [2], payment service processing [3], or headless content management systems (CMS) [4]). This gives us good performance, higher security, easy and above all, cheap scaling with a good development experience.

Much of what the JAMstack comprises is not new and is already being used successfully in many projects. Rather, the JAMstack is a clear definition of a modern architecture for websites and apps. In the following, we will take a look at how the JAMstack stands out from classic approaches. Of course, the assessments made here do not apply equally to every application.

Server-side rendering

Let’s start with a classical approach: Our server receives a request and performs matching operations on the database. The result of the database operation is processed and then rendered appropriately for the client. The client then displays this page to the user in the browser. When it comes to performance, we initially depend heavily on the performance of the server and database queries in the background. This can be improved via caching. Since the generated page is static HTML, this part should be extremely performant for the user (Fig. 1).

Fig. 1: The server interacts with a database and renders static web pages according to the requests

 

In this case, scaling often means that several servers (and/or databases) are needed to scale, which of course involves increased costs. Especially if only individual subsystems are under load, this can become too expensive. On the backend, one could resort to a serverless or microservice architecture. On the frontend, these approaches do not always fit.

From a security perspective, the server and database have to be considered, but you can also fall back on one of the numerous standardized solutions. From a developer’s perspective, this approach can result in application logic and presentation being mixed. For sophisticated frontends, the development experience is often better when the frontend is clearly separated from the backend.

Single Page Applications

With Single Page Applications, we move more logic to the client. This presents the page and communicates with the server via REST APIs. The server processes the requests, performs operations on the database, and usually returns the result to the client via JSON or XML, which updates the page accordingly (Fig. 2).

Fig. 2: The frontend is a single page application and communicates with the server via REST

 

From a performance point of view, part of the responsibility moves from the server to the client. The server “only” has to return the determined result to the client in the form of JSON or XML. Caching is also an established means of improving performance here. However, this is at the expense of the client, which has to load and render the JavaScript frameworks and applications in use. Obviously, this can’t keep up with a static page and will degrade the user experience in comparison.

When it comes to scaling and security, we are moving in similar dimensions as with the previously described solution. By caching the REST APIs, there is probably a bit more leeway for most applications before additional costs are incurred. Of course, it should not be underestimated that part of the application logic must be implemented in the frontend and backend, which can be an additional source of security vulnerabilities.

Of course, the development experience is much better on the frontend. Although there are still projects in which the backend and frontend are developed together, the frontend is clearly (more) separated from the backend. At the same time, there is the business logic already mentioned above, which in this case must be maintained twice.

JAMstack

JAMstack websites and applications render all pages in advance as static HTML pages rather than for each request individually, as with server-side rendering. These pages are then delivered via a CDN. Where necessary, third-party APIs or custom APIs are connected (Fig. 3). Of course, we have two dimensions that we need to consider separately: Frontend and Backend.

Fig. 3: With JAMstack, the frontend is delivered by a CDN

 

On the frontend, this approach is highly performant: CDNs ensure that pages are delivered quickly (Box: “Advantages of CDNs”), and static pages, by their very nature, render quickly for the user. Compared to the classic approaches, the performance is outstanding. Scaling is also interesting: CDNs offer it practically out of the box, and it is much cheaper than classic hosting. If more resources are needed, no additional (complicated) server has to be set up; only static files have to be made available. From a security point of view, this is also an improvement: the delivery of static pages is easier to secure than the operation of an application server.

In the development experience, we have achieved separation from the backend. The application can be developed with the desired toolset. The backend is only connected via APIs. It is immediately noticeable that the JAMstack can score in two areas:

  • for pages that do not need a backend, where we can render the content in advance, and
  • for microservices and serverless applications.

With these solutions, the JAMstack is particularly attractive, especially when we consider performance and scaling. For classic applications, recommending a migration to microservices or serverless would probably be too big a step. Nevertheless, the separation of backend and frontend alone brings numerous advantages during development and simplifies the application. Using the JAMstack can be the first step towards a more modern architecture. Parts of the logic could be outsourced to the cloud (either as microservices or as serverless functions). Another option is to use the domain expertise of third-party providers to handle areas such as authentication or payments and call them directly from the frontend via the appropriate API.

Of course, there is no one architecture that covers all requirements. Let’s take a look below at where the JAMstack comes into its own.

Advantages of CDNs

Using CDNs to deliver a frontend became popular with the rise of the JAMstack. But what exactly is a CDN, and why does it give us better performance and higher security?

A CDN is a group of servers that are geographically distributed. The task of the CDN is to provide static content (such as HTML, CSS, JavaScript, images, …) quickly. CDNs enjoy great popularity in this regard and are responsible for a large part of the worldwide traffic.

For the user, the geographical separation means that pages load faster, because a page is not retrieved from an origin server, but from the geographically nearest server. In the event of a server failure due to hardware fault or if servers reach their limits due to an increase in requests, other servers from the network can step in.

The file size of static data can be optimized via minification and compression, which can also improve traffic (and thus costs) and speed. For companies, this reduces operating costs while improving the user experience.

Of course, you don’t have to operate a CDN yourself: Major cloud providers have solutions ready, while companies like Netlify and Vercel offer them.

 

Application scenarios

There are already numerous examples [5] of pages implemented with the JAMstack. The use for pages that primarily display static content is obvious, ranging from blogs to news sites that must be highly available. For such pages, serving static content naturally also leads to better search engine optimization.

However, the JAMstack is not only suitable for static pages. Dynamic content such as comments can be integrated using JavaScript and APIs. In the area of content management systems, there are numerous providers of headless CMSs, such as Contentful. Even for complex processes such as payments, there are solutions on the market and examples of how to implement them with the JAMstack. It is up to the team to decide how much dynamic content to include in a JAMstack page via JavaScript and the appropriate APIs. Your own business logic can initially remain on your own server, integrated as an API, or be migrated to the cloud so that it, too, can be scaled easily at any time.

If the use cases fit and the advantages are convincing, we should take a closer look at the topic of development. Below, I describe some best practices for the JAMstack.

JAMstack Best Practices

JAMstack pages do not need a classic server to run. Even though pages created with the appropriate frameworks could be deployed on a classic server, we would then no longer be talking about the JAMstack, since it is closely tied to the use of a CDN. As already described, the use of a CDN plays a significant role in benefiting from the JAMstack. From a development perspective, the benefits of a separate front-end application should be taken advantage of: There are numerous frameworks and build tools in the JavaScript ecosystem that simplify the creation of web pages and improve the development experience. Of course, back-end development also benefits, since the focus is purely on application logic and providing APIs.

Especially for large sites, the issue of atomic deployments should be considered. On sites where each build comprises several hundred pages, a deployment that goes live file by file can leave the site in an inconsistent state for a noticeable amount of time. Ideally, a deployment is atomic, and only changed files are uploaded rather than the complete site.

As promised in the introduction, we don’t have to relearn everything now to develop applications for the JAMstack. We can even fall back on frameworks known from Single Page Applications.

Frameworks for the JAMstack

Of course, it is tempting to simply deploy an SPA to a CDN, and this is probably a good first step for existing applications. For many applications, however, this is not even necessary: there are ways to keep using the SPA framework of your choice while still delivering static pages to the user.

Let’s first look at the solutions for largely static pages. Here, there are numerous generators on the market (e.g. Jekyll [6], Hugo [7] or Eleventy [8]) that generate static pages. This can be done via Markdown or using a headless CMS such as Contentful. If necessary, JavaScript can also be used to make the respective page more dynamic.

For applications that require SPA functionality, frameworks based on React, Angular, or Vue can also be used. In that case, frameworks such as Next.js (React) [9], Scully (Angular) [10], or Nuxt (Vue) [11] offer the possibility to generate static pages from the SPAs. When these pages are loaded in the browser, the associated JavaScript (e.g., the runtime of the framework) is reloaded and the page is subsequently extended with the functionality from the SPA (this is also referred to as hydration). This combines the advantages of a static page with the respective framework, providing an excellent user experience.
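The principle of pre-rendering can be illustrated without any framework: at build time, a render function turns content data into a finished HTML string, which is written out as a static file and later served from the CDN. The following is a deliberately simplified sketch (the function name and data are made up for illustration); real frameworks such as Next.js generate one such file per page and additionally ship the JavaScript bundle that hydrates the page in the browser.

```javascript
'use strict';

// Build-time pre-rendering in a nutshell: turn data into a finished
// HTML string that can be written out as a static file.
// (Simplified sketch – real frameworks also emit the JavaScript that
// later hydrates the page in the browser.)
function renderPostPage ({ title, body }) {
  return [
    '<!DOCTYPE html>',
    `<html><head><title>${title}</title></head>`,
    `<body><article><h1>${title}</h1><p>${body}</p></article></body></html>`
  ].join('\n');
}

// At build time, each content entry becomes one static page.
const html = renderPostPage({
  title: 'What is the JAMstack?',
  body: 'Pre-rendered pages are delivered via a CDN.'
});
```

The resulting string would simply be written to a file in the build output folder; no server-side rendering happens at request time.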

Creating an application with Next.js

If you want to create and deploy a JAMstack application now, you can turn to Next.js. Next.js is offered by Vercel [12], a provider that also offers CDNs. In this example, I show how to deploy the whole thing to Netlify [13].

The setup is similar to the approaches of SPAs: there is a suitable npm package to build the application. In this case, all that is needed in the terminal is:

npm init next-app

A dialog guides you through setup. After that, only the build command in package.json needs to be adjusted so that Next.js creates static pages with next export.

"scripts": {
  // ...
  "build": "next build && next export",
},

Now, when we build the application with npm run build, the out folder is where the static files are created that we will deploy later.

Deployment

While in server-side rendering, pages are generated at runtime, in a JAMstack page they are rendered in advance (Fig. 4). For this purpose, sources such as Markdown, CMS, or an RSS feed can be used during the build to generate the appropriate static pages. These pages are then published to the CDN.

Fig. 4: All pages are rendered in advance here

Very few companies will want to operate their own CDN, which is why using suitable providers is recommended. Companies that already rely on cloud providers can also use the JAMstack. With AWS [14], for example, an S3 bucket can be distributed via a CDN with the help of CloudFront.

With Netlify and Vercel, there are specialized providers that provide the required functionality. For example, pages can be deployed at Netlify and distributed via their CDN. Netlify Functions offers serverless functions to extend your own application.

Every major vendor now offers the necessary tooling to deploy your own application. We make sure that our Next.js application is available on GitHub (Netlify is not restricted to GitHub, of course) and link it to Netlify from within the terminal.

First, we install the Netlify CLI via npm install -g netlify-cli. With ntl login we log in to Netlify (we may have to register once in the browser window that opens). In the directory of our site, an ntl init is enough to link the site with Netlify. The dialog we are guided through links the GitHub repository with Netlify and ensures that the page is updated at Netlify after each push to the main branch. By the way, for private and smaller projects the free Netlify plan is perfectly sufficient.

That’s all it takes. Now, the page can be developed further. There are numerous online tutorials that show how to integrate static resources into a Next.js page. This article should only serve as a first introduction.

Conclusion

With the JAMstack, we have a modern architectural approach that not only speeds up development, but also provides developers with a modern environment. Users benefit from high-performance pages, and companies save on operating costs while having to worry less about security.

There are numerous examples of JAMstack usage. Even if the idea of generating all pages statically in advance seems unusual at first, there is still a lot possible here. Not only content-heavy pages can benefit from this. With modern frameworks around Next.js, Scully and Nuxt, the advantages of the SPA can be easily combined with those of the static page.

 

Links & Literature

[1] https://JAMstack.org

[2] https://auth0.com

[3] https://stripe.com

[4] https://www.contentful.com

[5] https://JAMstack.org/examples/

[6] https://jekyllrb.com

[7] https://gohugo.io

[8] https://www.11ty.dev

[9] https://nextjs.org

[10] https://scully.io

[11] https://nuxtjs.org

[12] https://vercel.com

[13] https://www.netlify.com

[14] https://aws.amazon.com

Developing Web APIs with Node – Intro to Node.js part 2 https://javascript-conference.com/blog/developing-web-apis-with-node-intro-to-node-js-part-2/ Mon, 17 May 2021

One of the most common uses of Node.js is the development of web APIs. Numerous modules from the community are available for this, covering a whole range of aspects, such as routing, validation, and CORS.

The post Developing Web APIs with Node – Intro to Node.js part 2 appeared first on International JavaScript Conference.

The first part of this series introduced Node.js as a server-side runtime environment for JavaScript and showed how to write a simple web server. In addition, the npm package manager was introduced, which allows us to easily install modules written by the community into our own application. So, we already know some of the basics, but the application developed so far still lacks meaningful functionality.

This will change in this part of the series: The application, which so far only launches a rudimentary web server, is supposed to provide an API that can be used to manage a task list. First, it is necessary to make some technical preliminary considerations, because we must define what exactly the application is supposed to do. For example, the following functions are possible:

 

  • It must be possible to write down a new task. In the simplest form, this task consists of only a title, which must not be empty.
  • It must also be possible to call up a list of all tasks that still need to be done, in order to see what still needs doing.
  • Last but not least, it must be possible to check off a completed task so that it is removed from the todo list.

 

These three functions are essential; without them, a task list cannot be used meaningfully. All other functions, such as renaming a task or undoing the check-off of a task, are optional. Of course, it would make sense to implement them in order to make the application as user-friendly and convenient as possible – but they are not strictly necessary. The three functions mentioned above represent the scope of a Minimum Viable Product (MVP), so to speak.

Another restriction should be specified right at the beginning: The task list shall deliberately not have user management in order to keep the example manageable. This means that there will be neither authentication nor authorization, and it will not be possible to manage multiple task lists for different people. This would be essential to use the application in production, but it is beyond the scope of this article and ultimately offers little learning for Node.js.

Current state

The current state of the application we wrote in the first part includes two code files: app.js, which starts the actual server, and lib/getApp.js, which contains the functionality to respond to requests from the outside. In the app.js file, we already used the npm module processenv [1] to be able to set the port to a value other than the default 3000 via an environment variable (Listing 1).

'use strict';
 
const getApp = require('./lib/getApp');
const http = require('http');
const { processenv } = require('processenv');
 
const port = processenv('PORT', 3000);
 
const server = http.createServer(getApp());
 
server.listen(port);

The good news is that at this point, nothing will change in this file. This is because there is already a separation of content in the app.js and getApp.js files: The first file takes care of the HTTP server itself, while the second contains the actual logic of the application. In this part of the article series, only the application logic will be adapted and extended, so the app.js file can remain as it is.

However, the situation is different in the getApp.js file, where we will leave no stone unturned. But, one thing at a time. First, the package.json file must be modified so that the name of the application is more meaningful. For example, instead of my-http-server, the application could be called tasklist:

{
  "name": "tasklist",
  "version": "0.0.1",
  "dependencies": {
    "processenv": "3.0.2"
  }
}

The file and directory structure of the application still looks the same as in the first part:

/
  lib/
    getApp.js
  node_modules/
  app.js
  package.json
  package-lock.json

REST? No thanks!

Now it’s a matter of incorporating routing. As usual with APIs, this is done via different paths in the URLs. In addition, you can fall back on the different HTTP verbs such as GET and POST to map different actions. A common pattern is the so-called REST approach, which specifies that so-called resources are defined via the URL and the HTTP verbs define the actions on these resources. The usual mapping according to REST is as follows:

 

  • POST creates a new resource, and corresponds to a Create.
  • GET retrieves a resource, and represents the classic Read.
  • PUT updates a resource, and corresponds to an Update.
  • DELETE finally deletes a resource, and corresponds to a Delete.

 

As you can see, these four HTTP verbs can be easily mapped to the four actions of the so-called CRUD pattern, which in turn corresponds to the common approach of accessing data in (relational) databases. This is one of the most important reasons for the success of REST: It is simple and builds on the already familiar logic of databases. Nevertheless, there are some reasons against transferring CRUD to the API level. The weightiest of these is that the verbs do not match the domain language: Users do not talk about creating or updating a task.

Instead, they think in terms of technical processes: They want to make a note of a task or check off a task as completed. This is where a business and a technical view collide. It is obvious that a mapping between these views must take place at some point – but the code of an application should tend to be structured in a domain-oriented rather than a technical way [2]. After all, the application is written to solve a domain-oriented problem, and technology is merely the means to an end. Seen in this light, CRUD is also an antipattern [3].

An alternative approach is provided by the CQRS pattern, which is based on commands and queries [4]. A command is an action that changes the state of the application and reflects a user’s intention. A command is usually in the imperative, since it is a request to the application to do something. In the context of the task list, there are two actions that change the state of the list, noting and checking off a task. If we formulate these actions in the imperative and translate them into English, we get phrases such as “Note a todo.”, “Tick off a todo.”

Analogously, you can formulate a query, i.e. a request that doesn’t change the state of the application but returns it. This is the difference between a command and a query: A command writes to the application, so to speak, while a query reads from it. The CQRS pattern states that every interaction with an application should be either a command or a query – but never both at the same time. In particular, this means that commands should not return the current state of the task list; a separate query is needed for that, for example: “Get pending todos.”
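The distinction between commands and queries can be sketched in a few lines of plain JavaScript. This is a hypothetical, simplified example (the actual task list class is developed step by step below):

```javascript
'use strict';

// Sketch of the command/query separation: commands change state and
// return nothing, queries return state and change nothing.
const state = { todos: [] };

// Command: "Note a todo." – writes to the application.
function noteTodo ({ title }) {
  state.todos.push({ title, done: false });
}

// Command: "Tick off a todo." – also writes to the application.
function tickOffTodo ({ title }) {
  const todo = state.todos.find(item => item.title === title);
  if (todo) {
    todo.done = true;
  }
}

// Query: "Get pending todos." – only reads from the application.
function getPendingTodos () {
  return state.todos.filter(todo => !todo.done);
}
```

Note that neither command returns anything to the caller; whoever wants to see the resulting state has to ask the query.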

If we abandon the idea that an API must always be structured according to REST and prefer the much simpler pattern of separating writing and reading, the question arises as to how the URLs should be structured and which HTTP verbs should be used. In fact, the answer is surprisingly simple: The URLs are formulated exactly like the phrases above, and as HTTP verbs, POST is used for commands and GET for queries – that’s it. This results in the following routes:

 

  • POST /note-todo
  • POST /tick-off-todo
  • GET /pending-todos

 

The beauty of this approach is that it is much more self-explanatory than REST. A POST /tick-off-todo is far more expressive than a PUT /todo: with the latter, it is clear that an update is executed, but the functional purpose of that update remains unclear. When there are different reasons for initiating a (technical) update, the semantically stronger approach gains a lot in comprehensibility and traceability.

Define routes

Now it is necessary to define the appropriate routes. However, this is not done with Node.js’s on-board tools. Instead, we can use the npm module Express [5]:

$ npm install express

The module can now be loaded and used within the getApp.js file. First, an express application has to be defined, for which only the express function has to be called. Then, the get and post functions can be used to define routes, specifying the desired path name and a callback – similar to the one used in the standard Node.js server (Listing 2).

'use strict';
 
const express = require('express');
 
const getApp = function () {
  const app = express();
 
  app.post('/note-todo', (req, res) => {
    // ...
  });
 
  app.post('/tick-off-todo', (req, res) => {
    // ...
  });
 
  app.get('/pending-todos', (req, res) => {
    // ...
  });
 
  return app;
};
 
module.exports = getApp;

With this, the basic framework for the routes is already built. The individual routes can, of course, also be swapped out into independent files, but for the time being, focus should be on implementing functionality. The next step is to implement a task list, which is initially designed as a pure in-memory solution. However, since it will be backed by a database in a future part of this series, it will be designed from the outset to be seamlessly extensible later. Essentially, this means that all functions to access the task list will be created asynchronously, since accesses to databases in Node.js are usually asynchronous. For the same reason, an asynchronous initialize function is also created, which may seem unnecessary at this stage, but will later be used to establish the database connection.

Defining the todo list

The easiest way to do this is to use a class called Todos, to which corresponding methods are attached. Again, these methods should be named functionally and not technically, i.e. their names should be based on the names of the routes of the API. The class is placed in a new file in the lib directory, resulting in lib/Todos.js as the file name. For each task that is noted, an ID should also be generated, and the time of creation should be noted. While accessing the current time is not a problem, generating an ID requires recourse to an external module such as uuid, which can also be installed via npm:

$ npm install uuid

Last but not least, it is advisable to get into the habit from the very beginning of providing every .js file with strict mode, a special JavaScript execution mode in which some dangerous language constructs are not allowed, for example, the use of global variables. To enable the mode, you insert the string 'use strict'; at the beginning of the file as a kind of statement. This makes the full contents of the lib/Todos.js file look like the one shown in Listing 3.

'use strict';
 
const { v4 } = require('uuid');
 
class Todos {
  constructor () {
    this.items = [];
  }
 
  async initialize () {
    // Intentionally left blank.
  }
 
  async noteTodo ({ title }) {
    const id = v4();
    const timestamp = Date.now();
 
    const todo = {
      id,
      timestamp,
      title
    };
 
    this.items.push(todo);
  }
 
  async tickOffTodo ({ id }) {
    const todoToTickOff = this.items.find(item => item.id === id);
 
    if (!todoToTickOff) {
      throw new Error('Todo not found.');
    }
 
    this.items = this.items.filter(item => item.id !== id);
  }
 
  async getPendingTodos () {
    return this.items;
  }
}
 
module.exports = Todos;

It is striking in the implementation that the functions representing a command actually contain no return, while the function representing a query consists of only a single return. The separation between writing and reading has become very clear.

Now the file getApp.js can be extended accordingly, so that an instance of the task list is created there and the routes are adapted in such a way that they call the appropriate functions. To prepare the code for later, the initialize function should be called now. However, since this is marked as async, the getApp function must call it with the await keyword, and therefore, must also be marked as asynchronous (Listing 4).

'use strict';
 
const express = require('express');
const Todos = require('./Todos');
 
const getApp = async function () {
  const todos = new Todos();
  await todos.initialize();
 
  const app = express();
 
  app.post('/note-todo', async (req, res) => {
    const title = // ...
 
    await todos.noteTodo({ title });
  });
 
  app.post('/tick-off-todo', async (req, res) => {
    const id = // ...
 
    await todos.tickOffTodo({ id });
  });
 
  app.get('/pending-todos', async (req, res) => {
    const pendingTodos = await todos.getPendingTodos();
 
    // ...
  });
 
  return app;
};
 
module.exports = getApp;

Before the application can be executed, three things have to be done:

  1. First, the title and id parameters must be determined from the request body.
  2. Second, the query route must return the read tasks to the client as a JSON array.
  3. Finally, the app.js file must be modified so that the getApp function is called asynchronously there.

Input and output with JSON

Fortunately, all three tasks are easy to accomplish. For the first task, it is first necessary to determine what a request from the client looks like, i.e. what form it takes. In practice, it has proven useful to send the payload as part of a JSON object in the request body. For the server, this means that it must read this object from the request body and parse it. A suitable module called body-parser [6] is available in the community for this purpose and can be easily installed using npm:

$ npm install body-parser

The module can then be loaded with require:

const bodyParser = require('body-parser');

Since the parser will be available for several routes, it is implemented as so-called middleware. In the context of Express, middleware is a type of plug-in that provides functionality for all routes and therefore only needs to be registered once instead of individually for each route. This is done in Express via the app.use function. Therefore, it is important to insert the following line directly after creating the Express application: app.use(bodyParser.json());
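What bodyParser.json() does can be sketched with Node’s built-in stream handling: collect the chunks of the request body and parse the result as JSON. The following is a simplified illustration only; the real middleware additionally checks the Content-Type header, enforces size limits, and converts parse errors into proper HTTP error responses.

```javascript
'use strict';

// Simplified illustration of body-parser's json() middleware: collect
// all chunks from the (request) stream and parse the result as JSON.
// (Sketch only – no Content-Type check, size limit, or error handling.)
async function parseJsonBody (stream) {
  const chunks = [];

  for await (const chunk of stream) {
    chunks.push(Buffer.from(chunk));
  }

  return JSON.parse(Buffer.concat(chunks).toString('utf8'));
}
```

An incoming request is itself a readable stream, which is why this pattern works directly on the req object.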

Now the property body of the req object can be accessed within the routes, which was not available before. Provided a valid JSON object was submitted, this property now contains that very object. This allows the two command routes to be extended, as shown in Listing 5.

app.post('/note-todo', async (req, res) => {
  const { title } = req.body;
 
  await todos.noteTodo({ title });
});
 
app.post('/tick-off-todo', async (req, res) => {
  const { id } = req.body;
 
  await todos.tickOffTodo({ id });
});

When implementing the tick-off-todo route, it is noticeable that error handling is still missing: if the task to be ticked off is not found, the tickOffTodo function of the Todos class raises an exception – but at the moment, it is not caught. So it is still necessary to wrap the corresponding call in a try/catch and to return an appropriate HTTP status code in case of an error. Here, the error code 404, which stands for an element that was not found, is a good choice (Listing 6).

app.post('/tick-off-todo', async (req, res) => {
  const { id } = req.body;
 
  try {
    await todos.tickOffTodo({ id });
  } catch {
    res.status(404).end();
  }
});

The query route for retrieving the pending tasks is simpler: it only needs to call the getPendingTodos function and send the result back as JSON. Express provides the json function on the res object for exactly this purpose:

app.get('/pending-todos', async (req, res) => {
  const pendingTodos = await todos.getPendingTodos();
 
  res.json(pendingTodos);
});

Now, if you start the server by entering node app.js and try to call some routes, you will notice that some of them work as desired – but others never finish. This is due to an effect that is very unusual at first: Node.js is inherently designed to stream data, so an HTTP connection is not automatically closed when a route has been processed. Instead, this has to be done explicitly, as in the case of the 404 error. The json function already does this natively, but the two command routes still have to close the connection explicitly in the success case. To indicate that the operation was successful, it is a good idea to send the HTTP status code 200. The getApp.js file now looks like Listing 7.

'use strict';
 
const bodyParser = require('body-parser');
const express = require('express');
const Todos = require('./Todos');
 
const getApp = async function () {
  const todos = new Todos();
  await todos.initialize();
 
  const app = express();
  app.use(bodyParser.json());
 
  app.post('/note-todo', async (req, res) => {
    const { title } = req.body;
 
    await todos.noteTodo({ title });
    res.status(200).end();
  });
 
  app.post('/tick-off-todo', async (req, res) => {
    const { id } = req.body;
 
    try {
      await todos.tickOffTodo({ id });
      res.status(200).end();
    } catch {
      res.status(404).end();
    }
  });
 
  app.get('/pending-todos', async (req, res) => {
    const pendingTodos = await todos.getPendingTodos();
 
    res.json(pendingTodos);
  });
 
  return app;
};
 
module.exports = getApp;

Validate the inputs

What is still missing is a validation of the inputs: At the moment, it is quite possible to call one of the command routes without passing the required parameters in the request body. In practice, it has proven useful to validate JSON objects by using a JSON schema. A JSON schema represents a description of the valid structure of a JSON object. In order to be able to use JSON schemas, a module is again required, for example, validate-value [7] which can be installed via npm:

$ npm install validate-value

Now the module can be loaded in the getApp.js file:

const { Value } = require('validate-value');

The next step is to create the two schemas. Since they never change, it is advisable to define them outside the routes rather than inside them, so that the code does not run again on every request only to produce the same result each time (Listing 8).

const noteTodoSchema = new Value({
  type: 'object',
  properties: {
    title: { type: 'string', minLength: 1 }
  },
  required: [ 'title' ],
  additionalProperties: false
});
 
const tickOffTodoSchema = new Value({
  type: 'object',
  properties: {
    id: { type: 'string', format: 'uuid' }
  },
  required: [ 'id' ],
  additionalProperties: false
});

Within the two command routes, the only thing left to do is to validate the received data using the respective schema, and in case of an error, return an appropriate HTTP status code, for example, a 400 error (Listing 9).

app.post('/note-todo', async (req, res) => {
  if (!noteTodoSchema.isValid(req.body)) {
    return res.status(400).end();
  }
 
  const { title } = req.body;
 
  await todos.noteTodo({ title });
  res.status(200).end();
});
 
app.post('/tick-off-todo', async (req, res) => {
  if (!tickOffTodoSchema.isValid(req.body)) {
    return res.status(400).end();
  }
 
  const { id } = req.body;
 
  try {
    await todos.tickOffTodo({ id });
    res.status(200).end();
  } catch {
    res.status(404).end();
  }
});

CORS and testing

With this, the API is almost finished; only a few small things are missing. For example, it would be handy to be able to configure CORS – that is, which clients may access the server. In practice, this topic is a bit more complex than described below, but for development purposes, it is often sufficient to allow access from everywhere. The best way to do this is to use the npm module cors [8], which must first be installed via npm:

$ npm install cors

It must then be loaded, which is again done in the getApp.js file:

const cors = require('cors');

Finally, it must be integrated into the express application in the same way as body-parser, because this module is also middleware. Whether this call is made before or after the body-parser does not really matter – but since access should be denied before the request body is processed, it makes sense to include cors as the first middleware:

// ...
const app = express();
app.use(cors());
app.use(bodyParser.json());
// ...

Now, in order to test the API, a client is still missing. Developing this right now would be too time-consuming, so you can fall back on a tool that is extremely practical for testing HTTP APIs and that is usually pre-installed on macOS and Linux, namely, curl. On Windows, it is also available, at least in the Windows Subsystem for Linux (WSL). First, you can try to retrieve the (initially empty) list of all tasks:

$ curl http://localhost:3000/pending-todos
[]

In the next step, you can now add a task. Make sure that you not only send the required data, but also set the Content-Type header to the correct value – otherwise the body-parser will not be active:

$ curl \
  -X POST \
  -H 'content-type:application/json' \
  -d '{"title":"Develop a Client"}' \
  http://localhost:3000/note-todo

If you retrieve the tasks again, you will get a list with one entry (in fact, the list would be output unformatted in a single line, but for the sake of better readability it is shown formatted in the following):

$ curl http://localhost:3000/pending-todos
[
  {
    "id": "dadd519b-71ec-4d18-8011-acf021e14365",
    "timestamp": 1601817586633,
    "title": "Develop a Client"
  }
]

If you try to check off a task that does not exist, you will notice that this has no effect on the list of all tasks. However, if you use the -i parameter of curl to also output the HTTP headers, you will see that you get the value 404 as the HTTP status code:

$ curl \
  -i \
  -X POST \
  -H 'content-type:application/json' \
  -d '{"id":"43445c25-c116-41ef-9075-7ef0783585cb"}' \
  http://localhost:3000/tick-off-todo

The same applies if you do not pass a UUID as a parameter (or specify an empty title in the previous example). However, in these cases, you get the HTTP status code 400. Last but not least, you can now try to actually check off the noted task by passing the correct ID:

$ curl \
  -X POST \
  -H 'content-type:application/json' \
  -d '{"id":"dadd519b-71ec-4d18-8011-acf021e14365"}' \
  http://localhost:3000/tick-off-todo

If you retrieve the list of all unfinished tasks again, you will get an empty list – as desired:

$ curl http://localhost:3000/pending-todos
[]

Outlook

This concludes the second part of this series on Node.js. Of course, there is much more to discover in the context of Node.js and Express for writing Web APIs. Another article could be dedicated to the topics of authentication and authorization alone. But now we have a foundation to build upon.

The biggest shortcoming of the application at the moment is that it is not possible to ensure code quality and the code has already become relatively confusing. There is a lack of structure, binding specifications regarding the code style, and automated tests. These topics will be dealt with in the third part of the series – before further functionality can be added.

The author’s company, the native web GmbH, offers a free video course on Node.js [9] with close to 30 hours of playtime. Episodes 4 and 5 of this video course deal with topics covered in this article, such as developing web APIs, using Express, and using middleware. Therefore, this course is recommended for anyone interested in more details.

 

Links & Literature

[1] https://www.npmjs.com/package/processenv

[2] https://www.youtube.com/watch?v=YmzVCSUZzj0

[3] https://www.youtube.com/watch?v=frUNFrP7C9w

[4] https://www.youtube.com/watch?v=k0f3eeiNwRA

[5] https://www.npmjs.com/package/express

[6] https://www.npmjs.com/package/body-parser

[7] https://www.npmjs.com/package/validate-value

[8] https://www.npmjs.com/package/cors

[9] https://www.thenativeweb.io/learning/techlounge-nodejs

The post Developing Web APIs with Node – Intro to Node.js part 2 appeared first on International JavaScript Conference.

]]>
Introduction to Node.js: First steps https://javascript-conference.com/blog/introduction-to-node-js-first-steps/ Tue, 11 May 2021 11:57:23 +0000 https://javascript-conference.com/?p=82634 If you develop modern web and cloud applications it’s just a matter of time until you encounter the JavaScript runtime environment Node.js. What is Node.js, how do you install and configure it, and how do you develop with it?

The post Introduction to Node.js: First steps appeared first on International JavaScript Conference.

]]>
The introduction of the JavaScript programming language in 1995 changed the web world. Until then, web pages were limited to the HTML and CSS technologies and were thus static. The only way to generate content dynamically was to use appropriate technologies on the server side. The use of JavaScript as a programming language that could run in the web browser changed that abruptly; it was the basis for what is now known as a “web application”: programs that were once reserved for the desktop, but can now be run in the web browser. But there was a flaw: JavaScript could only run in the web browser, not on the server. Therefore, a second technology was always needed to implement the server side. Over the years, various technologies had their heyday here. While PHP was initially the first choice, this changed increasingly with the advent of Java and .NET. Ruby and Python also played an increasingly important role in the development of web servers after the turn of the millennium. But no matter which language you used, you always had two languages: JavaScript in the client, another on the server. In the long run, this is impractical and error-prone, and it also makes development more difficult.

This is exactly what Node.js does away with. Node.js is a runtime environment for JavaScript that does not run in the web browser, but on the server. This makes it possible to use JavaScript for the development of the backend as well, so that the technological break that had always existed until then is no longer necessary. Conveniently, Node.js is based on the same compiler for JavaScript as the Chrome web browser, namely V8 – and thus offers excellent support for modern language features. Meanwhile, Node.js, which was first introduced to the public in 2009, is over 10 years old and is supported by all major web and cloud providers. Unlike Java and .NET, for example, Node.js is developed not by a company but by a community; however, this does not detract from its suitability for large and complex enterprise projects. On the contrary, the very fact that Node.js is under an open-source license has become an important factor for many companies when selecting a suitable base technology.

Installing Node.js

If you want to use Node.js, the first step is to install the runtime environment. In theory, you can compile Node.js yourself, but pre-compiled binary packages are also available for all common platforms. This means that Node.js can be used across platforms, including macOS, Linux, and Windows. Node.js can also be run without any problems on the Raspberry Pi and other ARM-based platforms. Since the binary packages are only a few MB in size, the basic installation is done very quickly.

There are several ways to install it. The most obvious is to use a suitable installer, which can be downloaded from the official website [1]. Although the installation is done with a few clicks, it is recommended to refrain from this for professional use. The reason is that the official installers do not allow side-by-side installation of different versions of Node.js. If you perform an update, the system-wide installed version of Node.js is replaced by a new version, which can lead to compatibility problems with already developed applications and modules. Therefore, it is better to rely on a tool like nvm [2], which allows side-by-side installation of different versions of Node.js and can manage them. However, nvm is only available for macOS and Linux. For Windows, there are ports or replicas, for example nvm-windows [3], whose functionality is similar.

In general, macOS and Linux are better off in the world of Node.js. Most tools and modules are primarily developed for these two platforms, and even though JavaScript code is theoretically not platform-dependent, there are always little things that fail or cause friction on Windows. Although the situation has improved considerably in recent years due to Microsoft’s commitment in this area, the roots of the community are still noticeable. To install nvm, a simple command on the command line is enough:

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

Afterwards, it is necessary to restart the command line, otherwise nvm cannot find some environment variables. Simply closing and reopening the terminal is sufficient for this. Then the desired version of Node.js can be installed. For example, to install version 14.9.0, the following command is enough:

$ nvm install 14.9.0

If necessary, the version number can also be shortened. For example, if you simply want to install the latest version from the 14.x series, you can omit the specific minor and release version:

$ nvm install 14

All installed versions can be displayed in a list by entering the command nvm ls. To select and activate one of the installed versions, the following command is used, where again an abbreviated version number may be specified:

$ nvm use 14.9.0

Often you want to set a specific version as the default, for example, to work with it in a newly opened command line. For this, nvm has the concept of a default alias, where default serves as an alias for a specific version. For example, to specify that you normally always want to work with the latest installed version from the 14.x series, define the default alias as follows:

$ nvm alias default 14

If you take a closer look at the Node.js website, you will notice that there are two versions available for download: a so-called LTS version and a current version. LTS stands for Long-Term Support, which means that this version of Node.js is provided with security updates and bug fixes for a particularly long time – although particularly long in this context means only 30 months. Support for the current version, on the other hand, expires after just 12 months. It is therefore advisable to always rely on the LTS versions for productive use and to update them once a year – according to the Node.js roadmap, a new LTS version is always released in October.

Hello world!

After the installation, we start Node.js. If you call it without any further parameters, it opens in an interactive mode where you can enter and evaluate JavaScript statements live. This is occasionally handy for quickly trying out a language construct but is hardly suitable for actual development. You can exit this mode by pressing CTRL + C twice. To develop applications, therefore, a different procedure is needed. First, you need any kind of editor or IDE, provided that the tool of choice can save plain text files with the .js extension. Node.js does not enforce that an application must have a specific name, though app.js has become common for the primary file. Occasionally, you may encounter other names, such as server.js or run.js, but app.js is used below. You can put any JavaScript code in such a file, for example a “hello world” program:

console.log('Hello world!');

To run this application, all you need to do is call Node.js and pass the filename as a parameter:

$ node app.js

Node.js translates the specified application into executable machine code using V8 and then starts execution. Since the program ends after outputting the string to the screen, Node.js also terminates execution, so you return to the command line. However, a pure console program is still not very impressive. It gets much more interesting when you use Node.js to develop your first small web server.

To do this, you need to make use of a module that is built into Node.js out of the box, namely the http module. Unlike .NET and Java, Node.js does not contain a class or function library with hundreds of thousands of classes and functions. Instead, Node.js limits itself to the absolute essentials. The philosophy behind it is that everything else can be provided via third-party modules from the community. This may seem unusual at first glance, but it keeps the core of Node.js incredibly lean and lightweight. The http module is one of the few modules built into Node.js out of the box. Others can be found in the documentation [4].

To load a module, you have to import it using the built-in require function. This behaves similarly to using in C# or import in Java, yet there is one serious difference: unlike the aforementioned statements, the require function returns a result, namely a reference to the module to be loaded. This reference must be stored in a variable, otherwise the module cannot be accessed. Therefore, the first line of the Node.js application is as follows:

const http = require('http');

Then you can use the createServer function of the http module to create a server. It is important to make sure that you pass it a function as a parameter that can react to incoming requests and send back a corresponding response. This function is thus called again for each incoming request and can generate an individual result in each case. In the simplest case it always returns the same text. The function res.write is used for this purpose. Afterwards it is necessary to close the connection. This is done with the function res.end. The call to createServer in turn also returns a reference, but this time to the created web server:

const server = http.createServer((req, res) => {
  res.write('Hello world!');
  res.end();
});

Next, the web server must be bound to a port so that it can be reached from outside. This is done using the listen function, which is passed the desired port as a parameter:

server.listen(3000);

Last but not least, it is advisable to get into the habit from the very beginning of providing every .js file with strict mode, a special JavaScript execution mode in which some dangerous language constructs are not allowed, for example, the use of global variables. To enable the mode, you need to insert the appropriate string at the beginning of a file as a kind of statement. This makes the full contents of the app.js file look like the one shown in Listing 1.

'use strict';
 
const http = require('http');
 
const server = http.createServer((req, res) => {
  res.write('Hello world!');
  res.end();
});
 
server.listen(3000);

If you now start this application again, you can access it from the web browser by calling the address http://localhost:3000. In fact, you can also append arbitrary paths to the URL: Since the program does not provide any special handling for paths, HTTP verbs, or anything else, it always responds in an identical way. If one is actually interested in the path or, say, the HTTP verb, one can access these values via the req parameter. The program shown in Listing 2 outputs both values, so it produces output à la GET /.

'use strict';
 
const http = require('http');
 
const server = http.createServer((req, res) => {
  res.write(`${req.method} ${req.url}`);
  res.end();
});
 
server.listen(3000);

In addition to the http module, there are several other built-in modules, for example for accessing the file system (fs), for handling paths (path) or for TCP (net). Node.js also offers support for HTTPS (https) and HTTP/2 (http2) out of the box. Nevertheless, for most tasks, you will have to rely on modules from the community.

Include modules from third parties

Modules developed by the community can be found in a central and publicly accessible registry on the internet, the so-called npm registry. npm is also the name of a command line tool that acts as a package manager for Node.js and is included in the installation scope of Node.js. This means that npm can basically be invoked in the same way as Node.js itself. A simple example of a module from the community is the processenv module [5], which provides access to environment variables. This is also possible with Node.js's built-in functionality, but then you always get the values of the environment variables as strings, even if the value is actually a number or a boolean, for example. The processenv module, on the other hand, converts the values appropriately so that you automatically get the desired type.
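
The difference is easy to see with plain Node.js: process.env always yields strings, which is exactly what processenv smooths over (the PORT value below is set only for the demonstration):

```javascript
'use strict';

// Environment variables are always strings in plain Node.js,
// even if the value looks like a number.
process.env.PORT = '3000';

console.log(typeof process.env.PORT); // → string

// processenv would hand back the parsed value instead; with
// built-in means, you have to convert manually:
const port = Number(process.env.PORT);
console.log(typeof port, port); // → number 3000
```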

Before you can install a third party module, you first have to extend your own application with the file package.json. This file contains metadata about the application. Only a name and a version number are mandatory, which is why the minimum content of this file has the following form:

{
  "name": "my-http-server",
  "version": "0.0.1"
}

It should be noted that the version number must always consist of three parts and follow the concept of semantic versioning [6]. In addition, however, dependencies can also be stored in this file, whereby required third-party modules are explicitly added. This makes it much easier to restore a certain state later or to get an overview of which third-party modules an application depends on. To install a module, call npm as follows:

$ npm install processenv

This extends the package.json file with the dependencies section, where the dependency is entered as follows:

{
  "name": "my-http-server",
  "version": "0.0.1",
  "dependencies": {
    "processenv": "^3.0.2"
  }
}

Also, npm downloads the module from the npm registry and copies it locally to a directory named node_modules. It is recommended that you exclude this directory from version control. If you delete it or retrieve the code for your application from the version control system, which does not include the directory, you can easily restore its contents:

$ npm install

The specification of the desired modules can now be omitted; after all, they can be found together with the version number in the package.json file. A conspicuous feature of this file is the caret (^) in front of the version number of processenv. It has the effect that npm install does not necessarily install exactly version 3.0.2, but possibly a newer version, if it is compatible. However, this mechanism can be dangerous, so it is advisable to consistently remove the caret from the package.json file. To avoid having to do this over and over again by hand, npm can alternatively be configured not to write the caret at all. To do this, create a file named .npmrc in the user’s home directory and store the following content there:

save-exact=true

And finally, in addition to the node_modules directory, npm has also created a file called package-lock.json. It is used to lock version numbers despite the caret being specified. However, it has its quirks, so if npm behaves strangely, it’s often a good idea to delete this file and the node_modules directory and run npm install again from scratch. Once a module has been installed via npm, it can be loaded in the same way as a module built into Node.js. In that case, Node.js recognizes that it is not a built-in module and loads the appropriate code from the node_modules directory:

const processenv = require('processenv');

Then you can use the module. In this example application, it would be conceivable to read the desired port from an environment variable. However, if this variable is not set, specifying a port as a fallback is still a good idea (Listing 3).

'use strict';
 
const http = require('http');
const processenv = require('processenv');
 
const port = processenv('PORT', 3000);
 
const server = http.createServer((req, res) => {
  res.write(`${req.method} ${req.url}`);
  res.end();
});
 
server.listen(port);

Structure the application

As applications grow larger, it is not advisable to put all the code in a single file. Instead, it is necessary to structure the application into files and directories. This is already possible even for this still very manageable program, because the actual application logic can be separated from the server. To illustrate this, an intermediate step is introduced first: the function that contains the application logic is extracted into its own function (Listing 4).

'use strict';
 
const http = require('http');
const processenv = require('processenv');
 
const port = processenv('PORT', 3000);
 
const app = function (req, res) {
  res.write(`${req.method} ${req.url}`);
  res.end();
};
 
const server = http.createServer(app);
 
server.listen(port);

In fact, it would also be conceivable to wrap this function in a function again in order to be able to configure it. Instead of the app function, you would then get a getApp function. The outer function can then be equipped with any parameters that the inner function can access. The signature of the inner function must not be changed, because it is predefined by Node.js through createServer:

const getApp = function () {
  const app = function (req, res) {
    res.write(`${req.method} ${req.url}`);
    res.end();
  };
 
  return app;
};
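
For example, the outer function could take a prefix for the output – a hypothetical extension that is not part of the article's application, but it shows how the inner function accesses the parameter via its closure:

```javascript
'use strict';

// Hypothetical variant: getApp accepts a parameter that the
// inner request handler reads from the enclosing scope.
const getApp = function ({ prefix }) {
  const app = function (req, res) {
    res.write(`${prefix} ${req.method} ${req.url}`);
    res.end();
  };

  return app;
};

// The handler signature (req, res) stays untouched, as required
// by http.createServer; only the outer function gained a parameter.
const app = getApp({ prefix: 'Incoming:' });
```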

However, this also means that you have to adjust the call to createServer accordingly:

const server = http.createServer(getApp());

Now the application is prepared to be split into different files. The getApp function is to be placed in its own file called getApp.js. Since the definition of the function is then missing in the app.js file, it must be loaded there, which – unsurprisingly – is again done using the require function. However, a relative or an absolute path must now be specified so that the require function can distinguish files to be reloaded from modules with the same name. The file extension .js can, but does not have to be specified (Listing 5).

'use strict';
 
const getApp = require('./getApp');
const http = require('http');
const processenv = require('processenv');
 
const port = processenv('PORT', 3000);
 
const server = http.createServer(getApp());
 
server.listen(port);

If you now try to start the application in the usual way, you get an error message. This is because Node.js considers everything defined inside a file as private – unless you explicitly export it. Therefore, it tries to import the content of the file getApp.js, but nothing is exported from there. The remedy is to assign the getApp function to the module.exports object (Listing 6).

'use strict';
 
const getApp = function () {
  const app = function (req, res) {
    res.write(`${req.method} ${req.url}`);
    res.end();
  };
 
  return app;
};
 
module.exports = getApp;

Whatever a file exports this way will be imported again by require: So if you export a function, you get a function afterwards; if you export an object, you get an object, and so on.

If you start the application again, it runs as before. The only unpleasant thing is the directory structure, since the main directory becomes increasingly full. It is obvious that with even more files, it quickly becomes confusing. The position of the files package.json and package-lock.json is predefined, as is the position of the node_modules directory, and the file app.js is also well placed at the top level. However, any further code placed here will be disruptive:

/
  node_modules/
  app.js
  getApp.js
  package.json
  package-lock.json

Therefore, many projects introduce a directory called lib, which does not contain the main executable file of the application, but any other code. Adapting the directory structure for this project results in the following structure:

/
  lib/
    getApp.js
  node_modules/
  app.js
  package.json
  package-lock.json

But now the import in the file app.js does not fit anymore, because the file getApp.js is still searched in the same directory as the file app.js. So it is necessary to adjust the parameter of require:

const getApp = require('./lib/getApp');

As you can see, this way it is quite easy to structure code in Node.js. Directories take over the role of namespaces. There is no further subdivision of this kind. The next step is to add more functionality to the application, which means writing more code and including more third-party modules from npm. One of the biggest changes when you start working with Node.js is the multitude of npm modules that you come into contact with over time, even on small projects. The idea behind this is that, in terms of complexity, it is more beneficial to maintain many small building blocks whose power comes from their flexible combinability than to use a few large chunks.

Outlook

This concludes the first part of this series on Node.js. Now that the basics are in place, the next part will look at writing web APIs. This will include topics like routing, processing JSON as input and output, validating data, streaming, and the like.

The author’s company, the native web GmbH, offers a free German video course on Node.js [7] with close to 30 hours of playtime. The first three episodes deal with the topics covered in this article, such as installing, getting started, and using npm and modules. Therefore, this course is recommended for anyone interested in more details.

Links & Literature

[1] https://nodejs.org

[2] https://github.com/nvm-sh/nvm

[3] https://github.com/coreybutler/nvm-windows

[4] https://nodejs.org/dist/latest-v14.x/docs/api/

[5] https://www.npmjs.com/package/processenv

[6] https://semver.org

[7] https://www.thenativeweb.io/learning/techlounge-nodejs

The post Introduction to Node.js: First steps appeared first on International JavaScript Conference.

]]>
Getting Started with Svelte https://javascript-conference.com/blog/getting-started-with-svelte/ Tue, 18 Feb 2020 11:16:46 +0000 https://javascript-conference.com/?p=29815 Are you curious to know what Svelte is? Do you know why it’s becoming a popular JavaScript compiler? In this article, we will tackle what Svelte is, who made it, why would you want to use it, its ecosystem, and current state.

The post Getting Started with Svelte appeared first on International JavaScript Conference.

Ten years ago, the traditional concept of web development was page-oriented. When you built a website or an application, you would think in terms of the pages – how many pages you need, how they should relate to each other, etc.

Every single page may have its own JS and CSS dependencies, or global dependencies, depending on features and functionality.

So the page may end up with something like the image above. Loading many separate scripts like this is hard to scale, because too many requests cause a network bottleneck. The alternative is to bundle all your project code into one big JavaScript file, but that leads to problems with scope, size, readability, and maintainability.


Image credit: Andrew Pishchulin

When you need to deploy the app, you have to copy all of the HTML, JS, and CSS files to a production environment, which might be dozens or even hundreds of files.

Lastly, to host all of your application files, you need a server, which manages the folder structure, handles routes, and so on.

Now, we are in a component-based web development era.


Image credit: derickbailey

As you can see in the image above, each component is a piece of UI. We can build these components in isolation and put them together to build complex UIs.

Some benefits are:

  • Allows for code re-use
  • Increases your ability to change the software to meet new requirements
  • Ensures UX consistency across a portfolio
  • State-driven

Then, package the application with the help of bundlers such as:

  • Webpack
  • Parcel
  • Fusebox
  • Rollup


Image credit: Andrew Pishchulin
Those bundlers are included out of the box in JavaScript frameworks like Angular, Ember.js, and Vue.js, and in React via the CRA (Create React App) bootstrapper. What’s common among these frameworks is that the bulk of the work happens in the browser.

A few years ago at a Brooklyn meetup, Jed Schmidt explained his vision of a next-generation UI framework. He said, “The apps we write are directed graphs, but the runtime we have is a tree, the DOM. So, writing an app should be just writing a graph, and the compiler would figure out where the dependencies are and rig up the DOM event code to make that relationship consistent across states.”

There was this smart guy, Rich Harris, listening to Jed. Suddenly, he had an epiphany: “Frameworks are not tools for organizing your code, they are tools for organizing your mind.” Rich Harris’s point is that frameworks do help you write better code, but their true value lies in the way they help you structure your thinking and express it. Well, I agree with that. So, what if a framework wasn’t a thing that runs in a browser at all? What if it was a compiler?

Now let’s get to the meat of the topic, but first, let’s answer the question: What is Svelte?

  • An alternative to web frameworks such as Vue, Angular, and React
  • A web application compiler, not a runtime library
  • Does not use a virtual DOM
  • Written by Rich Harris at The New York Times in late 2016

Svelte takes inspiration from reactive programming in the way it runs code. But first, let me step back and ask: what is reactive programming? You can read a definition on Wikipedia: https://en.wikipedia.org/wiki/Reactive_programming. To make it short, reactive programming has key similarities with the observer pattern commonly used in OOP.


Image credit: developers-club

I’m assuming you have encountered the pub/sub design pattern at some point. To differentiate the observer pattern from publish/subscribe: observers are aware of the subject, and the subject maintains a record of its observers, whereas in publish/subscribe, publishers and subscribers don’t need to know about each other.
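To make the distinction concrete, here is a minimal sketch in plain JavaScript (the `Subject` class and the topic-based `bus` are illustrative, not taken from any particular library):

```javascript
// Observer pattern: the subject keeps a list of its observers and
// notifies them directly, so the observers know the subject and the
// subject maintains a record of its observers.
class Subject {
  constructor() {
    this.observers = [];
  }
  subscribe(fn) {
    this.observers.push(fn);
  }
  notify(value) {
    this.observers.forEach(fn => fn(value));
  }
}

// Pub/sub: publishers and subscribers only know the broker (the
// event bus) and a topic name, never each other.
const bus = {
  handlers: {},
  subscribe(topic, fn) {
    (this.handlers[topic] = this.handlers[topic] || []).push(fn);
  },
  publish(topic, value) {
    (this.handlers[topic] || []).forEach(fn => fn(value));
  },
};

const subject = new Subject();
subject.subscribe(v => console.log(`observer saw ${v}`));
subject.notify(1); // prints "observer saw 1"

bus.subscribe("price", v => console.log(`subscriber saw ${v}`));
bus.publish("price", 2); // prints "subscriber saw 2"
```

In the first half, the subject holds a direct record of its observers; in the second, the bus decouples both sides via the topic name.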

Here’s a spreadsheet.

When you make a change in a cell, all the cells that depend on it change too. This is what makes a spreadsheet so intuitive. And this is what Svelte does by default; it brings reactivity to JavaScript itself.

Now let’s see how Svelte works.

You can write regular JavaScript code, but you need to follow a specific syntax.

When you run the Svelte compiler over your code, it compiles that code to optimized runtime instructions. You are probably asking right now, “Is that important?” Yes: pre-compilation can significantly lower a framework’s overhead.

Take a look at this. Svelte does away with this typical scenario, because there is a performance penalty with frameworks that don’t pre-compile: compilation/translation happens on the client side before any of the actual app’s code can run.


Image credit: Evan You

Let’s review how Angular, React, and Vue detect changes in their app state. Here’s Angular’s dirty checking.


Image credit: Evan You

Here’s React’s reconciliation.


Image credit: Evan You

And here’s Vue using both change detection principles of Angular and React.

Svelte handles change detection differently. Its component code is compiled into vanilla JavaScript with change detection already in place. This is almost exactly the update code that Svelte generates.

Unlike traditional UI frameworks, Svelte is a compiler that knows at build time how things could change in your app, rather than waiting to do the work at run time.

Pre-compilation and avoiding virtual DOM are what make Svelte performant when it comes to memory allocation.


http://krausest.github.io/js-framework-benchmark/current.html

Here’s a js-framework-benchmark screenshot I took last year. Svelte is the first green column, followed by Vue, React, and Angular. The table shows Svelte as the winner among the four frameworks, but take it with a grain of salt: benchmarks like this sometimes have implementation inaccuracies due to environment modes, missing framework-specific optimizations, or unintentional errors.

So, let’s go back to how Svelte works. Now, you only ship the code your app needs.

Let me show you how small Svelte can be in comparison with other frameworks.

Here’s the RealWorld (Conduit) example apps repo.

The smaller the file, the faster the download, and the less there is to parse.

For more info, you can go to this link: https://www.freecodecamp.org/news/a-realworld-comparison-of-front-end-frameworks-with-benchmarks-2019-update-4be0d3c78075/.

So yes. Svelte can produce extremely lightweight code.

But why should you care? You should care because:

  • It can be extremely fast
  • It uses very little memory
  • It can be used on embedded devices, which you’ll see in a bit

In Addy Osmani’s presentation, he talks about keeping your JavaScript bundles small, especially for mobile devices.

Why? Because small bundles improve download speeds, lower memory usage, and reduce CPU costs.

On mobile, you’ll want to ship much less, mainly because of network speeds, but also to keep memory usage low.

He also mentioned that JavaScript execution time is important for phones with slow CPUs. And due to differences in CPU, GPU, and thermal throttling, there are huge disparities between the performance of high-end and low-end phones.


This matters for the performance of JavaScript, as execution is CPU-bound.

Back to the flow: in the browser, the compiled code executes and renders the user interface.

Shifting the load to the compile step can also be seen in other frameworks nowadays. Good examples are Ionic’s Stencil and Solid.js by Ryan Carniato.

To get started, you need a Node.js runtime from nodejs.org and to run the following command.

You’ll see this if you go to your localhost:5000:

But there’s an easier way to try building Svelte components. You can go to Svelte’s official website and then to the Examples tab.

Here’s how you can easily write a component.

The component structure is similar to Vue’s HTML-based template syntax. There are three parts: the script, the markup, and the style. They can be arranged in any order you want.

Here’s how you would create another component to nest it and share it with another component:


Here’s how you write an if-else flow:

Here’s how you would write a for-each loop to render a list:

One-way or two-way data binding is just a few lines of code:
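Taken together, the snippets above might look like the following single component; a hedged sketch with illustrative names and values, following Svelte 3 syntax:

```svelte
<script>
  let name = '';
  let items = ['one', 'two', 'three'];
</script>

{#if name}
  <h1>Hello {name}!</h1>
{:else}
  <h1>Hello stranger!</h1>
{/if}

<!-- two-way binding: typing updates `name`, which re-renders the heading -->
<input bind:value={name} placeholder="Your name" />

<ul>
  {#each items as item}
    <li>{item}</li>
  {/each}
</ul>

<style>
  h1 {
    color: purple;
  }
</style>
```

The `{#if}`/`{:else}` block, the `{#each}` loop, and `bind:value` live directly in the markup, while the script and style sections stay scoped to the component.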

Since Svelte is a compiler, built-in animations, transitions, and easings do not add to the total file size of the app if they are unused. This is not the case with component-based runtime libraries.

A built-in store API for complex state management:
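The real API is imported from `svelte/store`; as a rough approximation of the idea behind it (not Svelte’s actual implementation), a `writable` store can be sketched in plain JavaScript:

```javascript
// A stripped-down approximation of Svelte's writable store:
// every subscriber is called immediately and on each update.
function writable(value) {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn);
      fn(value); // stores push the current value to new subscribers
      return () => subscribers.delete(fn); // unsubscribe function
    },
    set(next) {
      value = next;
      subscribers.forEach(fn => fn(value));
    },
    update(fn) {
      this.set(fn(value));
    },
  };
}

const count = writable(0);
const seen = [];
const unsubscribe = count.subscribe(v => seen.push(v));
count.set(1);
count.update(n => n + 1);
unsubscribe();
count.set(99); // no longer observed
console.log(seen); // prints [ 0, 1, 2 ]
```

Components subscribe to such a store and re-render whenever `set` or `update` pushes a new value.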

There’s a lot more, such as reactive statements, class directive, component composition, etc.
Now let’s go to the real world.

Svelte is used for POS systems in Brazil. Roughly 200,000 POS devices are currently deployed in Brazilian cities. These devices run a very outdated WebKit on extremely memory-constrained hardware. The team tried roughly a dozen different frameworks; none of them was usable, because when you pressed a button, the UI responded half a second later. Users would then press it twice and might end up overcharging the customer. But when they tried Svelte, it worked smoothly.

Mustlab is an IT company in Russia that uses Svelte to build applications for Smart TVs.

As you can see, POS machines and Smart TVs are low-powered devices. This is a trend that is only going to increase.


Image credit: Rich Harris

The new frontier is the embedded web, wearables, the Internet of Things, in-car entertainment systems, and smart home control panels. All of these things have displays, so they need UIs and a lot of the time, those interfaces will be built on the web.

What about SSR for SEO? Svelte has Sapper, similar to React’s Next.js and Vue’s Nuxt.js.

SSR can speed up the first render of your app and improve its SEO.

For blogging using headless CMS, Sanity has a Sapper template.

A headless CMS is a content management system that provides a way to author content, but instead of rendering web pages like a traditional WordPress, it provides the content as data over an API.

In mobile development, there’s Svelte-Native. It works on top of NativeScript.


This project is not an officially supported product of either the NativeScript or Svelte projects yet. So let’s see next year.

You can also use Svelte to build WebGL apps.

svelte-gl is still experimental, which means the API is sure to change in substantial ways.

Other things that are good to know:

How do you write tests in Svelte?

Svelte’s testing library for writing tests is available here: testing-library.com/docs/svelte-testing-library/intro.

You can also talk to Svelte developers from around the world through Svelte’s Discord channel: svelte.dev/chat

If you have questions about Rollup configuration, you can go to their Gitter channel: gitter.im/rollup/rollup.

Visit sveltejobs.dev to find Svelte jobs.

So, let’s weigh the bad parts against the selling points of Svelte.

Bad parts:

  • TypeScript is not supported ‘yet’
  • Small community
  • Few third-party libraries
  • Lack of DevTools
  • No big company backing
  • Very few jobs are available out there

But you know what? Angular, React, and Vue had a similar history before. It’s just a matter of time. You can start introducing Svelte to your boss or managers.

These are the points you need to enumerate when talking to them.

Selling points:

  • Easy to pick up
  • Better performance
  • Write less code
  • Extremely small bundles
  • Reactive out of the box
  • Declarative transitioning and animation
  • Best for low-powered devices

So that’s the 10,000-foot view of Svelte. Before I finish up this post, there’s a video on YouTube by Rich Harris titled “Rethinking Reactivity,” where he also explains why Svelte is so good.

Thank you for reading. Peace out!

References:

  • https://svelte.dev
  • https://sapper.svelte.dev
  • https://svelte-native.technology/tutorial

Angular <3 Bazel: A new star is born? https://javascript-conference.com/blog/angular-bazel-a-new-star-is-born/ Thu, 16 Jan 2020 13:19:53 +0000 https://javascript-conference.com/?p=29311 With Bazel, a new build tool has emerged in the already rich universe of developer tools: A rising star on the horizon that we have all been waiting for. Uh, have we?

The post Angular <3 Bazel: A new star is born? appeared first on International JavaScript Conference.

Bazel promises a new, better and faster way to build a project. It doesn’t necessarily have to be an Angular or TypeScript project, because Bazel acts independently of the programming language used and is even able to combine different languages in one build. However, these are promises that developers have quite often heard from other tools, so they might just cause a bit of skepticism. After all, which tool out there would not claim to be the best and easiest to use? This is probably the reason why many developers don’t really feel like looking into the topic, quite understandably. Especially in the field of JavaScript there are already lots and lots of tools and aids and development of new ones happens pretty rapidly. Taken together, that might cause a certain fatigue for developers. But with Bazel it’s a bit different and the tool is well worth a look. Let’s start with a short history:

Bazel has been available as open source software since March 2015 and has developed from Google’s internal tool Blaze. Blaze itself has been successfully used internally by Google for over 10 years. In the end, they didn’t want to withhold it from the community anymore and therefore published the essential parts of it. It follows that Bazel is already a very stable and mature tool.

How does Bazel help you?

Anyone who has ever tried to build or extend Angular source code and run some tests afterwards knows how tedious and lengthy this is. Bazel can solve this problem for you. For the Angular repository, it reduced test execution time from one hour to 15 minutes. How does Bazel do that? Having been developed by Google, it is naturally well-adapted to Google’s conditions, meaning it is fit to deal with huge code repositories where every application and its dependencies are built from actual source code. Bazel achieves the promised speed with the help of incremental builds, caching, and parallelization. It’s worth mentioning that these builds can also be executed remotely in a so-called build cloud, where the load is distributed across multiple build servers. The result can then be reused by all developers via caching mechanisms.

What’s so special about Bazel?

When getting started with Bazel, the learning curve can be quite steep at first. What even is Bazel? Is it something along the lines of webpack, or maybe a substitute for a Jenkins build server? Actually, it’s none of that: Bazel replaces neither webpack nor a build server. Bazel is what we call a build tool; opposite it are the dev tools, which include webpack. Figure 1, shown by Alex Eagle (“Mr. Bazel”) during several talks about Bazel, helps enormously in understanding this.


Bazel compared to other tools

For a couple of years, so-called dev tools have gained a lot of popularity. For example, attempts have been made to map the entire build in a webpack configuration, rendering tools like Gulp or Grunt less and less important. The intention of saving developers a few tools is noble and understandable, but unfortunately we end up generating more complexity than originally intended. Why is that? Figure 2 illustrates the problem.


The M×N plug-in matrix

Do you see what’s going on? Each combination of tool and library now requires its own plug-in, so we end up in a plug-in jungle, where each plug-in uses a different set of configurations that now need to be mastered. Do you also have the urge to escape from such a configuration hell? With Bazel, each plug-in only needs to be developed exactly once. Bazel supports this by providing a readable language for configuring our builds. Through this unified language, Bazel manages to create uniform configurations that operate independently of the dev tools and programming languages used behind them. The inputs of a configuration and the outputs of a build are thus clearly defined, making it easier for different plug-ins to work together (Fig. 3). This can drastically decrease the number of plug-ins.


Bazel‘s Plug-in-System

Starting a new project with Angular and Bazel

To create a new project with Angular and Bazel, Bazel must first be globally installed via npm. You can use the following command: npm i -g @angular/bazel. After that, use the Angular CLI command to create a new project with Bazel as the build tool: ng new --collection=@angular/bazel. Looking at the workspace of the created project, at first glance you can’t see any difference compared to a normal Angular CLI project (Fig. 4), just as intended. This way we can use familiar commands like ng serve, ng build, and ng test as we did before switching to Bazel.


Angular Builders API abstracts over Bazel’s usage

Angular Builders, introduced in Angular 8, form an abstraction layer and hide the actual build tool that is used. You need to go to the angular.json file to find a reference to the Bazel command. If we now use ng serve to build our project for the first time, it will take some extra time. But for all of the following builds you won’t notice any difference to the usual build times because of the incremental build.

Adding Bazel to existing projects

Bazel can be added easily to an existing Angular CLI project: ng add @angular/bazel. Use a git status command to take a closer look at all the changes made to the project in this case. However, the specific Bazel configuration files are still hidden from us in this mode and are only actually generated at runtime. But since we are most interested in these files when we want to get to know Bazel, we can extract them with the following command: ng build --leaveBazelFilesOnDisk. After that we should find files named WORKSPACE and BUILD.bazel in the root folder, as well as src/BUILD.bazel.

However, at the moment we can only successfully build an untouched Angular CLI project. As soon as we add a new dependency to the project, even if it is only an import of the FormsModule, we already have to adjust the Bazel configuration manually. Of course, the Angular CLI will do this at some point in the future, but it isn’t ready to take over these tasks yet. Nevertheless, it does not hurt to learn the basics of Bazel, and the manual changes are not too difficult anyway.

Bazel’s language

In order to be able to really “go bazeling”, we must first clarify a few terms. A Bazel workspace is identified by the WORKSPACE file and located in the main directory of the project, where the files to be built are located and Bazel stores its build results. This workspace can now be further divided by packages. Each directory containing a BUILD file represents a separate package. It makes sense to define several packages, because Bazel only rebuilds the packages where changes have actually been made. The BUILD file itself contains a description of how the package should be built. In Bazel’s language, these are called rules. The rules are first loaded into the BUILD files and then executed. A rule now describes how the desired output files are created from the defined input files. The goal of Bazel is that the BUILD files completely describe the inputs and outputs of the build process, which allows a more detailed analysis of the build artifacts and their changes.

Let us now take a closer look at an example rule from the generated BUILD file in the src folder of our project, which was created by switching to Bazel (Listing 1).

Listing 1: Bazel Rule

# 1. Rules must be loaded first
load("@npm_angular_bazel//:index.bzl", "ng_module")

# 2. Execution of the ng_module rule
ng_module(
  name = "src",
  srcs = glob(
    include = ["**/*.ts"],  # input files
    exclude = [
      "**/*.spec.ts",
      "main.ts",
      "test.ts",
      "initialize_testbed.ts",
    ],
  ),
  assets = glob([
    "**/*.css",
    "**/*.html",
  ]) + ([":styles"] if len(glob(["**/*.scss"])) else []),
  deps = [
    "@npm//@angular/core",
    "@npm//@angular/platform-browser",
    "@npm//@angular/router",
    "@npm//@types",
    "@npm//rxjs",
    # additional dependencies on libraries can be added here,
    # e.g.: "@npm//@angular/forms",
  ],
)

What exactly does the ng_module rule from our example do? It is provided by Angular and is responsible for calling the Angular AOT template compiler. It extends the ts_library rule, which in turn wraps the TypeScript compiler. When it is invoked, the TypeScript source code is compiled to JavaScript and all Angular compilation steps are executed as well. The glob() function is a helper that is used wherever lists of file names are expected; it allows the use of so-called wildcard patterns, such as **/*.css. Also noteworthy is the line ([":styles"] if len(glob(["**/*.scss"])) else []), which refers to the rule named styles created in the same file, but only if there are SCSS files that need to be compiled to CSS. This is exactly what the styles rule does.

Summary

Even though Bazel is designed for large monorepos like the ones used by Google, it can make sense to switch to Bazel even for smaller projects in order to reduce build and development times. However, this should only be considered once Bazel is fully integrated into the Angular CLI, thus minimizing the manual configuration effort. For those who want to delve even deeper into the subject, I recommend reviewing the Angular Bazel Example.

Node.js is Dead – Long live Deno! https://javascript-conference.com/blog/node-js-is-dead-long-live-deno/ Fri, 20 Dec 2019 11:28:22 +0000 https://javascript-conference.com/?p=28862 Deno is a new runtime for JavaScript and TypeScript, created by Ryan Dahl - the original creator of Node.js. The project is intended to fix design problems in Node.js described in Dahl's famous talk "10 Things I Regret About Node.js". We talked to Krzysztof Piechowicz (adesso AG) about the differences between Node.js and Deno. In the iJS video, Piechowicz goes into the topic in more detail and shows what is possible with Deno.

The post Node.js is Dead – Long live Deno! appeared first on International JavaScript Conference.

Deno versus Node.js

iJS editorial team: Hello Krzysztof! You are an expert in Deno – a new JavaScript and TypeScript runtime created by the Node inventor Ryan Dahl. Can you briefly explain what Deno is exactly?

Deno aims to fix Node.js design mistakes and offers a new modern development environment.

Krzysztof Piechowicz: Deno is a new platform for writing applications using JavaScript and TypeScript. Both platforms share the same philosophy – event-driven architecture and asynchronous non-blocking tools to build web servers and services. The author of Deno is Ryan Dahl, original creator of Node.js. In 2018, he gave the famous talk “10 Things I Regret About Node.js“ and announced his new project – Deno. Deno aims to fix Node.js design mistakes and offers a new modern development environment.

iJS editorial team: How does Deno differ from Node.js?

Krzysztof Piechowicz: Both platforms serve the same purpose, but use different mechanisms. Deno uses ES Modules as the default module system, whereas Node.js uses CommonJS. External dependencies are loaded via URLs, similar to browsers. There is also no package manager and no centralized registry; modules can be hosted anywhere on the internet. In contrast to Node.js, Deno executes code in a sandbox, which means the runtime has no access to the network, the file system, or the environment. Access needs to be granted explicitly, which means better security. Deno supports TypeScript out of the box, so we don’t need to manually install and configure tools to write TypeScript code. Another difference is that Deno provides a set of built-in tools, like a test runner, a code formatter, and a bundler.

Deno – an example

iJS editorial team: Can you pick out a difference and demonstrate it with an example?

Krzysztof Piechowicz: In my opinion, the most important difference is how modules are imported. As I mentioned, Deno doesn’t use the CommonJS format and doesn’t provide a package manager like npm. All modules are loaded directly in code using a URL.

Here is a Node.js example:


And here is a Deno example:


At first glance, the Node imports look simpler, but there are a few advantages to the Deno style. Because code is imported via URL, modules can be hosted anywhere on the internet, and Deno packages can be distributed without a centralized registry. There is also no need for a package.json file and a dependency list, because all modules are downloaded, compiled, and cached when the application runs.

iJS editorial team: What is the current status of Deno? Can it already be used in production?

Krzysztof Piechowicz: Deno is still under heavy development and isn’t production-ready yet. There is also no official date for the release of the 1.0 version.

The future of Deno

iJS editorial team: What’s the next step with Deno? Is it actively being developed? By whom, in which direction?

The goal of Deno is not to replace Node.js, but to offer an alternative.

Krzysztof Piechowicz: Deno is an open-source project and is being developed very actively. The project was started in 2018 by Ryan Dahl. Currently, the project has over 150 contributors. Besides the release of the 1.0 version, there is a plan to provide a command-line debugger and a built-in code linter to improve developer experience. Deno should also serve HTTP more efficiently.

iJS editorial team: What is the core message of your session at iJS?

Krzysztof Piechowicz: The goal of Deno is not to replace Node.js, but to offer an alternative. Some of the differences are quite controversial, and it’s hard to predict whether they will turn out to be the right decisions. I recommend that all Node.js programmers keep an eye on this project. I’m not sure if it will be a success, but it’s a great opportunity to observe how Node.js could have been implemented differently.

iJS editorial team: Thank you very much!

Deno – a better Node.js?

Watch Krzysztof Piechowicz’s session from iJS 2019: Deno – a better Node.js?

Speed, Speed, Speed: JavaScript vs C++ vs WebAssembly [ KEYNOTE ] https://javascript-conference.com/blog/speed-speed-speed-javascript-vs-c-vs-webassembly-keynote/ Mon, 04 Nov 2019 16:18:17 +0000 https://javascript-conference.com/?p=28504 In Node.js, we can use WebAssembly modules and native C++ addons. If your app has performance critical parts, should you stay in JavaScript? Or write a native C++ addon? Or use WebAssembly? Let's have a look at how these options compare performance wise and which one is best for different workloads. So the next time you need to optimize for speed, you know your options.

The post Speed, Speed, Speed: JavaScript vs C++ vs WebAssembly [ KEYNOTE ] appeared first on International JavaScript Conference.

What’s happening under the hood at the compiler level in JavaScript? In this keynote session, we’ll talk about JavaScript compilers specifically, see how modern JS performance compares to C++ performance, and then see where WebAssembly fits into this performance story. The concepts Franziska Hinkelmann will be showing you are fundamental JS concepts, and they apply no matter what framework you are using, so it doesn’t matter whether you use Angular, Node.js, or anything else.

Moreover, as we go through this journey, questions like how dynamically typed JavaScript can be so fast, and when it became faster than before, will be answered as well.

Only make performance improvements if you actually have a problem. If nobody complains that your app is too slow, and you are not losing revenue, don’t start making it faster.

React Hooks: React More Functional Than Ever https://javascript-conference.com/blog/react-hooks-react-more-functional-than-ever/ Wed, 11 Sep 2019 12:24:08 +0000 https://javascript-conference.com/?p=28180 For a long time, it has been possible to create React components on the basis of classes or functions. With the newest version, functional approaches are clearly gaining more traction.

The post React Hooks: React More Functional Than Ever appeared first on International JavaScript Conference.


As you may know, I really like functional programming techniques. Beyond all logical and technical considerations, I favor this style of programming since it is a great experience to build new things from simple functions in a reliable, testable and consistent way.

The continuing evolution of mainstream programming methodologies is not news anymore; we have all been watching it happen for years. Individual developments can still be worth mentioning, and I was positively surprised a while ago to read about new features and plans for React. With the latest developments, React is clearly moving in the direction of functional ideas, much more clearly than in the past.

As a first example, I should mention the function memo, which became available in React 16.6. The name itself is reminiscent of the functional world: “memoization” is a technique often used in functional programming languages to save return values for later calls. Imagine you have this simple function:

const calc = x => {
  return ...; // some complicated calculation!
};

A frequently quoted advantage of functional programming is the fact that functions are stateless, “pure” being the technical term for this. They are independent of influences “from the outside”, they avoid so-called side effects. The simple function calc depends on its input parameter x alone – every time the same value is passed for x, the function renders the same result. A “pure” function like this can be memoized, for instance with this small helper function:

const memo = f => {
  const results = {};

  // For demonstration purposes, f is limited to one parameter!
  return x => {
    if (!(x in results)) {
      results[x] = f(x);
    }
    return results[x];
  };
};

const memoizedCalc = memo(calc);

// With this first call, calc performs its calculation
console.log(memoizedCalc(10));

// For further calls with the same parameter value, the
// stored return value is delivered directly; the calculation
// is not executed again.
console.log(memoizedCalc(10));

Of course the sample implementation is just an illustration. In reality you don’t need to write such helpers yourself, since existing libraries like Lodash or Ramda already include much better implementations that also deal with additional parameters and other complex cases.
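As a hedged sketch of what handling additional parameters can mean, here is one common approach, deriving the cache key from all arguments (the helper name `memoN` is made up for illustration, and JSON keys only work for simple argument values):

```javascript
// Memoizer for functions of several arguments: the cache key is
// built by serializing the whole argument list.
const memoN = f => {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, f(...args));
    }
    return cache.get(key);
  };
};

let calls = 0;
const add = memoN((a, b) => {
  calls += 1;
  return a + b;
});

console.log(add(2, 3)); // prints 5
console.log(add(2, 3)); // prints 5 again, served from the cache
console.log(calls);     // prints 1: the function body ran only once
```

Lodash’s memoize, for example, additionally accepts an optional resolver to control how that key is built.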

React components include a function that is called to render, i.e. to display, the visual element represented by a component. When you implement a component as a class, you code this logic in the method render. Even in the context of classes there are some expectations defined for this method (as described in the documentation here). For instance, you should not change component state from this method. This recalls the functional idea: the method render depends on state and props of the component, but it shouldn’t trigger any side effects. If you adhere to these guidelines correctly, you can now utilize React mechanisms to ensure that render won’t be called more often than necessary. Traditionally you implement the method shouldComponentUpdate for this purpose, with any logic to decide whether rendering is required or not. In many cases you can also use PureComponent as a base class, which renders visual updates only if state or props have changed.

Before version 16.6, React didn’t have any similar built-in functionality to influence rendering decisions in functional components. If you have followed my examples, you will understand the purpose of the function memo now: it memoizes React components.

const memoizedComponent = React.memo(props => <div>...</div>);

This is no black magic, but it immediately impresses with its brevity and conciseness! The shortest equivalent class-based component would likely derive from PureComponent:

class CleverComponent extends PureComponent {
  render() {
    return <div>...</div>;
  }
}

This code is not just longer and more verbose than the functional variation, it is also somewhat harder to understand. Perhaps it was the author’s intention to achieve the same efficient rendering the functional sample shows – but a reader of the code would only understand this if they know the class PureComponent well and remember how it differs from the base class Component. In comparison, the intention is immediately obvious when a function called memo is used, or at the very least a quick Google search will find an explanation easily.

Shiny and New: Hooks

Since version 16.8, React includes Hooks. This functionality puts functional components on the same level as class-based ones, making the more concise functional syntax a seriously appealing option.

Before this point, the concept of higher order components was used to “extend” a React component using functional approaches. The term is derived from the higher order function, and a higher order component is nothing more than a function which encapsulates an existing component. To illustrate, using a line from the React docs:

const EnhancedComponent = higherOrderComponent(WrappedComponent);
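Since a higher order component is just a higher order function specialized to components, the underlying pattern can be shown with plain functions. The names in this sketch (withDefault, shout) are hypothetical and only serve to illustrate the wrapping idea:

```javascript
// A plain higher order function: it takes a function and returns a new,
// wrapped one. A higher order component follows exactly this pattern,
// except that the wrapped value is a component.
const withDefault = (fallback, f) => x => (x == null ? fallback : f(x));

const shout = s => s.toUpperCase() + '!';
const safeShout = withDefault('(nothing)', shout);

console.log(safeShout('hello')); // "HELLO!"
console.log(safeShout(null));    // "(nothing)"
```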

This approach was widely used to interface React components with additional systems or frameworks. The function connect from the Redux library is a good example: to make a Redux store available for an independent component, you call connect as a higher order component:

const reduxConnectedComponent = connect(
  mapStateToProps,
  mapDispatchToProps
)(MyComponent);

The library recompose exists – though it is now deprecated – as a collection of flexible higher order components to support everything class-based React components can do. This includes working with state, reacting to props changes and lots more. I built complex components with help from recompose, but a complete sample would be too large to show here. The general structure of such a component might look a bit like this:

const Debounce = compose(
  onlyUpdateForPropTypes,
  setPropTypes({
    // ...
  }),
  defaultProps({
    // ...
  }),
  withState('viewValue', 'setViewValue', ({ value }) => value),
  withPropsOnChange(
    ['debounceWait', 'onChange'],
    ({ debounceWait, onChange }) => ({
      onChangeFunc: _.debounce(debounceWait)(onChange)
    })
  ),
  withPropsOnChange(['extract'], ({ extract }) => ({
    extractFunc: // ...
  })),
  withHandlers({
    childChange: props => e => {
      // ...
    }
  }),
  lifecycle({
    componentWillReceiveProps(np) {
      // ...
    }
  })
)(({ children, valueField, changeEvent, viewValue, childChange }) => {
  // render here
});

This is certainly a rather complex example and the syntax may not look intuitive to you. It is possible to get used to this, and with some experience I find the syntax quite readable. The outer function compose indicates a common functional approach where multiple functions are “chained” to create a new function. All the functions passed to compose work together, in sequence, to wrap the original rendering logic: onlyUpdateForPropTypes, setPropTypes, defaultProps, withState and so on, eight functions altogether.
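As an aside, compose itself is a small higher order function. A minimal sketch (not the recompose or Redux implementation, but the same right-to-left chaining idea) might look like this:

```javascript
// Minimal compose: the rightmost function is applied first, its result
// is passed to the next one, and so on. Libraries like Ramda, Redux and
// recompose ship more complete implementations.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

// Hypothetical example functions:
const increment = n => n + 1;
const double = n => n * 2;

const incThenDouble = compose(double, increment);
console.log(incThenDouble(3)); // double(increment(3)) = 8
```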

Each of these functions is a higher order component, and that’s exactly the problem with this approach. An onion-style wrapping structure is generated, like a deeply nested function call:

onlyUpdateForPropTypes()(
  setPropTypes(...)(
    defaultProps(...)(
      withState(...)(
        withPropsOnChange(...)(
          withHandlers(...)(
            lifecycle(...)(
              // render function here
            )))))))

The resulting component is equally deeply nested, since it is encapsulated multiple times, eight times in the example! The concern is obvious: won’t this approach impact performance? In any case, the debugging experience is not exactly improved in this scenario. React apps always work with impressively deep hierarchies due to their component structure, but if many individual components add loads of nesting layers simply because they use higher order components like in the example, the overall application structure gains great complexity.

The Hooks concept offers a new and different approach to enable a full component feature set for functional components without generating similar nesting depth. At the same time, the syntax in code is also simplified. Here is a short example from the docs that uses state in a functional component:

const Example = () => {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
};

The function useState is the Hook. It generates state for the component and returns two values: the state itself and a function to modify it. The term Hook is meant to hint at the fact that the mechanism hooks into React. The inner workings of all hooks are not exactly the same, but it is easy to imagine how these functions could participate in lifecycle management mechanisms supplied by React and thereby influence the behavior of the resulting component.
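To build intuition for what “hooking in” could mean, here is a deliberately simplified toy model (emphatically not React’s actual implementation). A tiny runtime stores state in a slot list and tracks the current slot with a cursor that is reset before each render, which also illustrates why Hooks must always be called in the same order:

```javascript
// Toy hook runtime (NOT how React really works internally).
const slots = [];
let cursor = 0;

const useState = initial => {
  const i = cursor++;                        // each call claims the next slot
  if (slots[i] === undefined) slots[i] = initial;
  const setState = value => { slots[i] = value; };
  return [slots[i], setState];
};

const render = component => {
  cursor = 0;                                // reset the cursor before each render
  return component();
};

// A hypothetical "component" using the toy hook:
const Counter = () => {
  const [count, setCount] = useState(0);
  return {
    view: `clicked ${count} times`,
    increment: () => setCount(count + 1)
  };
};

let ui = render(Counter);
console.log(ui.view); // "clicked 0 times"
ui.increment();
ui = render(Counter);
console.log(ui.view); // "clicked 1 times"
```

Because state is matched to slots purely by call order, calling a hook conditionally would shift every following slot – which is the intuition behind the rules discussed below.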

Another example is the Hook useEffect. As explained above, render is not meant to trigger side effects. In class components you use various React lifecycle methods instead to trigger side effects outside the render context. For instance, the methods componentDidMount and componentDidUpdate are often used to load data. This was previously impossible with simple functional components, but it can be achieved quite easily with useEffect:

const Example = () => {
  const [count, setCount] = useState(0);

  useEffect(() => {
    // This code triggers a side effect asynchronously, but
    // it has access to the render context through its closure.
  });

  return (...);
}

The following listing shows a more complex example of a context provider implementation I wrote. You can see two additional Hooks in action here, useRef and useCallback. Many of the Hooks called in this code also use dependency lists. For instance, the memoized callback updateTokenInfo is automatically regenerated when any of the three values serviceBaseUrl, username or password change. This is a powerful feature which makes components implemented on the basis of Hooks highly performant.

const ReportServerProvider = ({
  username,
  password,
  serviceBaseUrl,
  children
}) => {
  const [tokenInfo, setTokenInfo] = useState(null);
  const [updateTokenTimeout, setUpdateTokenTimeout] = useState(null);
  const refUpdateTokenTimeout = useRef(updateTokenTimeout);

  const updateTokenInfo = useCallback(() => {
    getToken(serviceBaseUrl, username, password).then(token => {
      setTokenInfo({ token, authHeaders: getAuthHeaders(token) });
      setUpdateTokenTimeout(
        setTimeout(updateTokenInfo, (token.expires_in - 60) * 1000)
      );
    });
  }, [serviceBaseUrl, username, password]);

  useEffect(() => {
    refUpdateTokenTimeout.current = updateTokenTimeout;
  }, [updateTokenTimeout]);

  useEffect(() => {
    updateTokenInfo();
    return () => clearTimeout(refUpdateTokenTimeout.current);
  }, [serviceBaseUrl, username, password, updateTokenInfo]);

  return (
    <ReportServerContext.Provider value={{ tokenInfo, serviceBaseUrl }}>
      {children}
    </ReportServerContext.Provider>
  );
};

At the time of writing, React supports ten different Hooks. The author of the aforementioned library, recompose, was actively involved in the development of this new concept, and he’s convinced that Hooks cover all use cases of recompose by now. Structural issues of higher order component nesting are much improved by this approach!

To make the simple syntax of Hooks possible, there are some rules you should observe. Current eslint-plugin-react-hooks versions are fully up to date and will warn you if you are using a Hook incorrectly. As a simple example, Hooks may only be used within functional React components, and only on the top level of the component function. However, the eslint rules are very clever and can even detect when dependency lists should be extended to include additional items. I definitely recommend taking advantage of eslint when you start working with Hooks.
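For reference, these checks typically come from the eslint-plugin-react-hooks package, and a minimal configuration enabling them might look like this (a sketch – your actual eslint setup and file format may differ):

```json
{
  "plugins": ["react-hooks"],
  "rules": {
    "react-hooks/rules-of-hooks": "error",
    "react-hooks/exhaustive-deps": "warn"
  }
}
```

The first rule enforces the call-order and top-level restrictions mentioned above; the second one flags incomplete dependency lists for Hooks like useEffect and useCallback.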

Good documentation for Hooks is available online and many third-party projects now offer their functionality in the form of additional Hooks instead of higher order components. Facebook points out that Hooks live side-by-side with class-based components, so there is no pressing need to re-implement anything. However, they also mention that Facebook’s own apps based on React will favor functional approaches in the future – an impressive step, and as a fan of functional programming I’m happy to hear that! I recommend you spend some time with functional React and Hooks and get to know this new structure in detail.

The post React Hooks: React More Functional Than Ever appeared first on International JavaScript Conference.

back to school—explore the program of iJS https://javascript-conference.com/blog/back-to-school-explore-the-program-of-ijs/ Mon, 02 Sep 2019 16:19:40 +0000 https://javascript-conference.com/?p=28133 September is here! It always reminds people of school time, seeing friends after months and of course studying. We want to take you back to those times and give you the feeling of being a student again! Welcome to the JS Academy and its extensive syllabus!

The post back to school—explore the program of iJS appeared first on International JavaScript Conference.


JavaScript is fast, dynamic and futuristic—just like our program at iJS! Our infographic takes you back to school and to your student days, focusing on the highlights of iJS Munich’s program and speakers while showing you the hottest topics and latest trends of the JavaScript ecosystem. Each of the learning objectives highlights a different track of iJS, preparing you to stay agile in the dynamic world of JS and to take your skills to the next level. Did you hear the bell? Let’s get started with the first lesson!

Web Components & Micro Apps: Angular, React & Vue peacefully united? https://javascript-conference.com/blog/keynote-video-web-components-micro-apps-angular-react-vue-peacefully-united/ Tue, 20 Aug 2019 15:38:55 +0000 https://javascript-conference.com/?p=28036 Angular, React, Vue or some other framework: Which one are you going to use on your next project? The JavaScript ecosystem offers so many choices and all of them have their pros and cons for any given project, making it difficult to choose just one. But there is a solution to that: With micro apps and web components, you can use whatever works best for any single part of your project.

The post Web Components & Micro Apps: Angular, React & Vue peacefully united? appeared first on International JavaScript Conference.

It’s not either React or Vue.js anymore, but React for one part of your code, and Vue for another! In this keynote from iJS 2018 in Munich, Manfred Steyer explains how to unite all JavaScript frameworks peacefully.

Web development is exciting nowadays! We get new innovative technologies on a regular basis. While this is awesome, it can also be overwhelming – especially when you have to maintain a product for the long term.

Web Components and Micro Apps provide a remedy for this dilemma. They allow for decomposing a big solution into small self-contained parts that can be maintained by different teams using the best technology for the requirements in question. Watch this keynote to find out how this idea helps with writing applications that can evolve over a decade and more.
