
Off to the console! (Intro to Node.js – part 5)

Oct 12, 2021

A backend written in Node.js and run with Docker is of little use on its own if there is no client to access it. However, that client does not have to be a graphical user interface. A command line tool will do - and is practical in many cases. How does it work?

In the past four parts of this series, we created an application written in Node.js that provides an API for managing a simple task list. The API deliberately does not use REST, but merely separates writing from reading: write operations are based on POST, read operations on GET, following the CQRS design pattern [1].


The actual semantics were moved into the URL path, so that the intent of the domain is preserved and the API is easier to understand than if it relied solely on the four technical verbs provided by REST. The current version of the application contains three routes: two for noting and ticking off tasks, and one for listing all unfinished tasks:

  • POST /note-todo
  • POST /tick-off-todo
  • GET /pending-todos

Furthermore, the application can be packaged into a Docker image by entering the command $ docker build -t thenativeweb/tasklist ., and the resulting image can then be run using the command from Listing 1. By default, the application uses an in-memory database, but optionally, a connection to MongoDB can be established. This requires passing the environment variables STORE_TYPE and STORE_OPTIONS, where MongoDb is the store type and something like {"url": "mongodb://localhost:27017/", "databaseName": "test"} is the options object. In Docker, this is done via the -e parameter, similar to enabling production mode.

$ docker run \
  -d \
  -p 3000:4000 \
  -e PORT=4000 \
  -e NODE_ENV=production \
  --init \
  thenativeweb/tasklist
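If the task list is to be persisted in MongoDB instead, the two environment variables described above are passed in the same way. The following call is only a sketch based on the values mentioned in the text; the URL and the database name are placeholders that have to be adapted to your own setup:

$ docker run \
  -d \
  -p 3000:4000 \
  -e PORT=4000 \
  -e NODE_ENV=production \
  -e STORE_TYPE=MongoDb \
  -e STORE_OPTIONS='{"url":"mongodb://localhost:27017/","databaseName":"test"}' \
  --init \
  thenativeweb/tasklist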

The application is accessed via an HTTP interface, but a suitable client is still missing. In the past parts, curl was used as a workaround. It serves its purpose, but it is not very convenient. For example, the list of uncompleted tasks can be retrieved via $ curl http://localhost:3000/pending-todos, but creating a new task already requires the command from Listing 2.

$ curl \
  -X POST \
  -H 'content-type:application/json' \
  -d '{"title":"Client develop"}' \
  http://localhost:3000/note-todo

Most applications running on the web or in the cloud come with a graphical interface for convenient access. But there is an interesting alternative: you can also provide a dedicated command line tool, a so-called CLI (Command Line Interface). The advantage of a CLI is that it is much easier and faster to develop than a sophisticated graphical interface. In addition, access to the application through a CLI can be scripted, which makes it easy to integrate into automated processes.


Writing CLIs in Node.js

It’s relatively easy to write CLIs in Node.js. After all, a CLI is just an application like any other; the difference lies in how it is meant to be invoked. For a server application, it is quite common to have to change to the appropriate directory and call node app.js there. A CLI, on the other hand, should be callable globally, ideally without always having to specify node as the runtime environment.

This can be achieved in a few steps. First, a project is created like any other. The initial difference is that the file containing the entry point is marked as executable. This is done on the console with the command $ chmod a+x app.js, which allows the operating system to call the file like a standalone command: $ ./app.js.

The problem with this, however, is that the shell assumes it is dealing with a shell script. It simply does not know that the program is meant to be executed with Node.js. To change this, the so-called shebang must be placed in the first line of the app.js file: #!/usr/bin/env node.
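Put together, a minimal executable app.js could look like the following sketch; the log output is merely a placeholder for the actual CLI logic that is developed over the course of this article:

#!/usr/bin/env node

// Placeholder logic; the actual CLI code is added in the following sections.
console.log('Hello from the todo CLI!');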

The shebang tells the shell to look for the runtime environment named node (which must be installed locally) and then run the app.js file with it. Furthermore, it is recommended to register the app.js file as a CLI in the package.json file. This is done using the bin key, through which even multiple CLIs can theoretically be registered. The entry assigns a callable application name, for example todo, to the app.js file (Listing 3).

{
  "name": "todo-cli",
  "version": "1.0.0",
  ...
  "bin": {
    "todo": "app.js"
  },
  ...
}

The effect: if the module todo-cli is installed globally with npm (by calling npm install -g todo-cli), the todo command is registered in the system and can then be called directly from anywhere, without having to know the exact installation path. The file extension .js no longer has to be specified either.

Parameter parsing and co.

In theory, this covers almost everything. The only difficulty now is recognizing what is to be executed. For this purpose, CLIs commonly offer subcommands and parameters. For example, the call to add a new task might look like this: $ todo add --title "Develop client".

The problem here is that the subcommands and parameters must be parsed and evaluated. Error cases must be taken into account, and short forms should also be supported. It would also be desirable for a CLI to output help, for example for a given subcommand: $ todo add --help. Or even globally, across all subcommands: $ todo --help.

This parsing is time-consuming and error-prone, which is why I recommend using a dedicated module for this purpose. Such a module is available in the form of command-line-interface [2], which must be installed first:

$ npm install command-line-interface

Writing the first command

The first thing to do is to add a reference to the module to the application. This is done as usual in Node.js with the require function:

const { runCli } = require('command-line-interface');

Before you can run the CLI, you must create at least one command. Such a command represents a certain part of the logic, for example adding a new task or ticking off an existing one. The simplest case is the so-called root command, which is executed when the CLI is called without a subcommand. To implement a command, an object must be defined that describes the command (Listing 4).

const rootCommand = {
  name: 'todo',
  description: 'Manage todos.',
 
  optionDefinitions: [],
 
  handle ({ options, getUsage, ancestors }) {
    console.log(getUsage(
      { commandPath: [ ...ancestors, 'todo' ] }
    ));
  }
};

Afterwards, the runCli function must be called, and the previously defined root command must be passed to it as a parameter. Additionally, the command line arguments of the process must be passed to the function (note that, since app.js uses require and is therefore a CommonJS module, this await call must in turn be placed inside an async function, as top-level await is only available in ES modules):

await runCli({ rootCommand, argv: process.argv });

If you call the application without parameters, it outputs the help. Optionally, the --help parameter can be passed, which leads to the same behavior. Although this makes the root command seem a bit pointless, it ensures that a call without a subcommand still results in something useful for the user.


As you can see, a command is basically nothing more than an object consisting primarily of some metadata and a handle function. This function can also be asynchronous if needed. If it throws an exception, the execution of the CLI is aborted, the program terminates with exit code 1, and the error message including its stack trace is printed to the console in a formatted way.
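To illustrate this, the following is a minimal sketch of a command whose asynchronous handle function fails (the command name fail and the error message are made up for illustration); running the CLI with it prints the error including its stack trace and terminates with exit code 1:

const failingCommand = {
  name: 'fail',
  description: 'Always fails.',

  optionDefinitions: [],

  async handle () {
    // A thrown error (or a rejected promise) aborts the CLI with exit code 1.
    throw new Error('Not implemented yet.');
  }
};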

Implementing further commands

To implement a subcommand, basically the same procedure is followed. A corresponding object must also be created for the add command (Listing 5), but this time it contains a parameter (namely --title).

const addCommand = {
  name: 'add',
  description: 'Adds a new todo.',
 
  optionDefinitions: [
    {
      name: 'title',
      description: 'The title of the todo.',
      type: 'string',
      alias: 't',
      isRequired: true
    }
  ],
 
  handle ({ options }) {
    // ...
  }
};

As the example shows, parameters are defined by filling in the optionDefinitions section. A parameter has at least a name and a type; the supported types are string, number, and boolean.


If you want to be able to specify a parameter multiple times (which would not make sense in the case of --title), you have to set the multiple option of the respective parameter to true. If you do not mark a parameter as mandatory with isRequired, it is considered optional. If necessary, a default value can be assigned to it via the defaultValue property.
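As a sketch of such an optional parameter (the name due is invented for illustration and is not part of the todo API from this series), a definition with a default value could look like this:

{
  name: 'due',
  description: 'The due date of the todo.',
  type: 'string',
  alias: 'd',
  // Not marked with isRequired, so the parameter is optional; if it is
  // omitted, the defaultValue is used instead.
  defaultValue: 'today'
}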

What is still missing is the registration of the add command. Since it is a subcommand, it is not passed directly to the runCli call, but is subordinated to the root command. For this purpose, a new property named subcommands must be added there, in which the subcommand is registered under its name (Listing 6).

const rootCommand = {
  name: 'todo',
  description: 'Manage todos.',
 
  optionDefinitions: [],
 
  handle ({ options, getUsage, ancestors }) {
    // ...
  },
 
  subcommands: {
    add: addCommand
  }
};


Optimizing error handling

Last but not least, it may be desirable to improve the error handling. By default, various error cases are caught, but the output is very sober and technical. To customize it, separate handlers can be registered for the individual error cases; for this purpose, a handlers block must be added to the runCli call (Listing 7).

await runCli({
  rootCommand,
  argv: process.argv,
  handlers: {
    commandFailed ({ ex }) {
      // ...
    },
 
    commandUnknown ({ unknownCommandName, recommendedCommandName, ancestors }) {
      // ...
    },
 
    optionInvalid ({ optionDefinition, reason }) {
      // ...
    },
 
    optionMissing ({ optionDefinition }) {
      // ...
    },
 
    optionUnknown ({ optionName }) {
      // ...
    }
  }
});

It isn’t mandatory to specify all handlers. Any handlers that are omitted keep the module’s default behavior.
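For example, a friendlier message for a missing parameter could be produced as follows; this is only a sketch and assumes that the optionDefinition passed to the handler has the same shape as the definitions from Listing 5:

await runCli({
  rootCommand,
  argv: process.argv,
  handlers: {
    optionMissing ({ optionDefinition }) {
      // Replace the technical default output with a plain hint.
      console.error(`Please provide the --${optionDefinition.name} option.`);
    }
  }
});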

What’s still missing is the actual implementation of the add command, i.e. the HTTP communication with the backend, the handling of the different status codes, and the implementation of the remaining commands. However, all of this is independent of how a command line tool is written and would have to be implemented in much the same way in a graphical interface running in the web browser.
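To give an impression of that final step, here is a sketch of what the HTTP call inside the add command's handle function might look like. It uses Node.js' built-in http module and assumes that the backend from the previous parts is running on http://localhost:3000 and answers POST /note-todo with status code 200:

const http = require('http');

const addCommand = {
  name: 'add',
  description: 'Adds a new todo.',

  optionDefinitions: [
    {
      name: 'title',
      description: 'The title of the todo.',
      type: 'string',
      alias: 't',
      isRequired: true
    }
  ],

  async handle ({ options }) {
    const body = JSON.stringify({ title: options.title });

    await new Promise((resolve, reject) => {
      const request = http.request(
        'http://localhost:3000/note-todo',
        {
          method: 'POST',
          headers: {
            'content-type': 'application/json',
            'content-length': Buffer.byteLength(body)
          }
        },
        response => {
          // Treat anything other than 200 as a failure (an assumption, since
          // the exact status codes depend on the backend).
          if (response.statusCode !== 200) {
            reject(new Error(`Unexpected status code ${response.statusCode}.`));
            return;
          }

          // Drain the response and resolve once it has ended.
          response.resume();
          response.once('end', resolve);
        }
      );

      request.once('error', reject);
      request.end(body);
    });

    console.log(`Noted todo '${options.title}'.`);
  }
};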


Outlook

This concludes the fifth and final part of this series on Node.js. The application is fully developed and ready to run, is verified by code analysis and tests, supports various databases for storing tasks via a plug-in system, can be packaged in a Docker image and run as a Docker container, and can be accessed and controlled remotely via a CLI.

This covers all the important aspects of developing a Node.js-based application. The next step is to add more features to the application, make it more convenient, and, if necessary, develop a graphical user interface.

The author’s company, the native web GmbH, offers a free video course on Node.js [3] with close to 30 hours of playtime. Episode 22 of this video course deals with the topic covered in this article, namely writing CLIs. This course is recommended to anyone interested in more details.
