This will change in this part of the series: The application, which so far only launches a rudimentary web server, is supposed to provide an API that can be used to manage a task list. First, it is necessary to make some technical preliminary considerations, because we must define what exactly the application is supposed to do. For example, the following functions are possible:
- Noting a new task
- Ticking off a task as completed
- Displaying the list of pending tasks
These three functions are essential; without them, a task list cannot be used meaningfully. All other functions, such as renaming a task or undoing the check-off of a task, are optional. Of course, it would make sense to implement them in order to make the application as user-friendly and convenient as possible – but they are not strictly necessary. The three functions mentioned above represent the scope of a Minimum Viable Product (MVP), so to speak.
Another restriction should be specified right at the beginning: The task list shall deliberately not have user management in order to keep the example manageable. This means that there will be neither authentication nor authorization, and it will not be possible to manage multiple task lists for different people. This would be essential to use the application in production, but it is beyond the scope of this article and ultimately offers little learning for Node.js.
Current state
The current state of the application we wrote in the first part includes two code files: app.js, which starts the actual server, and lib/getApp.js, which contains the functionality to respond to requests from the outside. In the app.js file, we already used the npm module processenv [1] to be able to set the port to a value other than the default 3000 via an environment variable (Listing 1).
'use strict';
const getApp = require('./lib/getApp');
const http = require('http');
const { processenv } = require('processenv');
const port = processenv('PORT', 3000);
const server = http.createServer(getApp());
server.listen(port);
The good news is that at this point, nothing will change in this file. This is because there is already a separation of content in the app.js and getApp.js files: The first file takes care of the HTTP server itself, while the second contains the actual logic of the application. In this part of the article series, only the application logic will be adapted and extended, so the app.js file can remain as it is.
However, the situation is different in the getApp.js file, where we will leave no stone unturned. But, one thing at a time. First, the package.json file must be modified so that the name of the application is more meaningful. For example, instead of my-http-server, the application could be called tasklist:
{
"name": "tasklist",
"version": "0.0.1",
"dependencies": {
"processenv": "3.0.2"
}
}
The file and directory structure of the application still looks the same as in the first part:
/
lib/
getApp.js
node_modules/
app.js
package.json
package-lock.json
REST? No thanks!
Now it’s a matter of incorporating routing. As usual with APIs, this is done via different paths in the URLs. In addition, you can fall back on the different HTTP verbs such as GET and POST to map different actions. A common pattern is the so-called REST approach, which specifies that so-called resources are defined via the URL and the HTTP verbs define the actions on these resources. The usual mapping according to REST is as follows:
POST – Create
GET – Read
PUT – Update
DELETE – Delete
As you can see, these four HTTP verbs can easily be mapped to the four actions of the so-called CRUD pattern, which in turn corresponds to the common approach of accessing data in (relational) databases. This is one of the most important reasons for the success of REST: it is simple and builds on the already familiar logic of databases. Nevertheless, there are some reasons against transferring CRUD to the API level in this way. The weightiest of these is that the verbs do not match the domain language: users do not talk about creating or updating a task.
Instead, they think in terms of domain processes: they want to make a note of a task or check off a task as completed. This is where the domain view and the technical view collide. It is obvious that a mapping between these views must take place at some point – but the code of an application should tend to be structured along domain lines rather than technical ones [2]. After all, the application is written to solve a domain problem, and technology is merely a means to an end. Seen in this light, CRUD is also an antipattern [3].
An alternative approach is provided by the CQRS pattern, which is based on commands and queries [4]. A command is an action that changes the state of the application and reflects a user’s intention. A command is usually phrased in the imperative, since it is a request to the application to do something. In the context of the task list, there are two actions that change the state of the list: noting a task and checking one off. Formulated in the imperative, this yields phrases such as “Note a todo.” and “Tick off a todo.”
Analogously, you can formulate a query, i.e. an interaction that does not change the state of the application but returns it. This is the difference between a command and a query: a command writes to the application, so to speak, while a query reads from it. The CQRS pattern states that every interaction with an application should be either a command or a query – but never both at the same time. In particular, this means that commands should not return the current state of the task list; a separate query is needed for that, for example: “Get pending todos.”
If we abandon the idea that an API must always be structured according to REST and prefer the much simpler pattern of separating writing and reading, the question arises as to how the URLs should be structured and which HTTP verbs should be used. In fact, the answer is surprisingly simple: the URLs are formulated exactly as mentioned above, and as HTTP verbs, POST is used for commands and GET for queries – that’s it. This results in the following routes:
POST /note-todo
POST /tick-off-todo
GET /pending-todos
The beauty of this approach is that it is much more self-explanatory than REST. A POST /tick-off-todo says far more than a PUT /todo: with the latter, it is clear that an update is executed, but what domain purpose this update serves remains unclear. When there are different reasons for initiating a (technical) update, the semantically stronger approach gains a lot in comprehensibility and traceability.
Define routes
Now it is necessary to define the appropriate routes. However, this is not done with Node.js’s on-board tools. Instead, we can use the npm module Express [5]:
$ npm install express
The module can now be loaded and used within the getApp.js file. First, an express application has to be defined, for which only the express function has to be called. Then, the get and post functions can be used to define routes, specifying the desired path name and a callback – similar to the one used in the standard Node.js server (Listing 2).
'use strict';
const express = require('express');
const getApp = function () {
const app = express();
app.post('/note-todo', (req, res) => {
// ...
});
app.post('/tick-off-todo', (req, res) => {
// ...
});
app.get('/pending-todos', (req, res) => {
// ...
});
return app;
};
module.exports = getApp;
With this, the basic framework for the routes is already built. The individual routes can, of course, also be swapped out into independent files, but for the time being, focus should be on implementing functionality. The next step is to implement a task list, which is initially designed as a pure in-memory solution. However, since it will be backed by a database in a future part of this series, it will be designed from the outset to be seamlessly extensible later. Essentially, this means that all functions to access the task list will be created asynchronously, since accesses to databases in Node.js are usually asynchronous. For the same reason, an asynchronous initialize function is also created, which may seem unnecessary at this stage, but will later be used to establish the database connection.
Defining the todo list
The easiest way to do this is to use a class called Todos, to which corresponding methods are attached. Again, these methods should be named in domain terms rather than technical ones, i.e. their names should be based on the names of the routes of the API. The class is placed in a new file in the lib directory, resulting in lib/Todos.js as the file name. For each task that is noted, an ID should also be generated, and the time of creation should be noted. While accessing the current time is not a problem, generating an ID requires recourse to an external module such as uuid, which can also be installed via npm:
$ npm install uuid
As in the first part, the file starts with strict mode. The complete implementation of the Todos class is shown in Listing 3.
'use strict';
const { v4 } = require('uuid');
class Todos {
constructor () {
this.items = [];
}
async initialize () {
// Intentionally left blank.
}
async noteTodo ({ title }) {
const id = v4();
const timestamp = Date.now();
const todo = {
id,
timestamp,
title
};
this.items.push(todo);
}
async tickOffTodo ({ id }) {
const todoToTickOff = this.items.find(item => item.id === id);
if (!todoToTickOff) {
throw new Error('Todo not found.');
}
this.items = this.items.filter(item => item.id !== id);
}
async getPendingTodos () {
return this.items;
}
}
module.exports = Todos;
It is striking in the implementation that the functions representing a command actually contain no return, while the function representing a query consists of only a single return. The separation between writing and reading has become very clear.
Now the file getApp.js can be extended accordingly, so that an instance of the task list is created there and the routes are adapted in such a way that they call the appropriate functions. To prepare the code for later, the initialize function should be called now. However, since this is marked as async, the getApp function must call it with the await keyword, and therefore, must also be marked as asynchronous (Listing 4).
'use strict';
const express = require('express');
const Todos = require('./Todos');
const getApp = async function () {
const todos = new Todos();
await todos.initialize();
const app = express();
app.post('/note-todo', async (req, res) => {
const title = // ...
await todos.noteTodo({ title });
});
app.post('/tick-off-todo', async (req, res) => {
const id = // ...
await todos.tickOffTodo({ id });
});
app.get('/pending-todos', async (req, res) => {
const pendingTodos = await todos.getPendingTodos();
// ...
});
return app;
};
module.exports = getApp;
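Since getApp is now asynchronous, it returns a promise rather than the Express application itself. The call in app.js therefore has to resolve this promise before the result is handed over to http.createServer. A minimal sketch of an adjusted app.js, assuming a simple async wrapper is acceptable here:
'use strict';
const getApp = require('./lib/getApp');
const http = require('http');
const { processenv } = require('processenv');
const port = processenv('PORT', 3000);
(async () => {
  // Wait for the application (including its initialize call) to be ready.
  const app = await getApp();
  const server = http.createServer(app);
  server.listen(port);
})();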
Before the application can be executed, three things have to be done:
Input and output with JSON
Fortunately, all three tasks are easy to accomplish. For the first task, it is first necessary to determine what a request from the client looks like, i.e. what form it takes. In practice, it has proven useful to send the payload as part of a JSON object in the request body. For the server, this means that it must read this object from the request body and parse it. A suitable module called body-parser [6] is available in the community for this purpose and can be easily installed using npm:
$ npm install body-parser
It can then be loaded with require:
const bodyParser = require('body-parser');
Since the parser will be available for several routes, it is implemented as so-called middleware. In the context of Express, middleware is a type of plug-in that provides functionality for all routes and therefore only needs to be registered once instead of individually for each route. This is done in Express via the app.use function. Therefore, it is important to insert the following line directly after creating the Express application:
app.use(bodyParser.json());
Now the property body of the req object can be accessed within the routes, which was not available before. Provided a valid JSON object was submitted, this property now contains that very object. This allows the two command routes to be extended, as shown in Listing 5.
app.post('/note-todo', async (req, res) => {
const { title } = req.body;
await todos.noteTodo({ title });
});
app.post('/tick-off-todo', async (req, res) => {
const { id } = req.body;
await todos.tickOffTodo({ id });
});
When implementing the tick-off-todo route, it is noticeable that error handling is still missing: If the task to be ticked off is not found, the tickOffTodo function of the Todos class raises an exception – but this is not caught at the moment. So it is still necessary to provide the corresponding call with a try/catch and to return a corresponding HTTP status code in case of an error. In this case, the error code 404, which stands for an element not found (Listing 6), is a good choice.
app.post('/tick-off-todo', async (req, res) => {
const { id } = req.body;
try {
await todos.tickOffTodo({ id });
} catch {
res.status(404).end();
}
});
The query route, in turn, is supposed to return the list of pending tasks to the client. Express provides the json function on the response object for this purpose: it serializes the given value as JSON and sends it back:
app.get('/pending-todos', async (req, res) => {
const pendingTodos = await todos.getPendingTodos();
res.json(pendingTodos);
});
Now, if you start the server by entering node app.js and try to call some routes, you will notice that some of the routes work as desired – but others never finish. This is where an effect comes into play that is very unusual at first: Node.js is inherently designed to stream data, so an HTTP connection is not automatically closed when a route has been processed. Instead, this has to be done explicitly, as in the case of the 404 error. The json function already does this natively, but the two command routes still need to close the connection explicitly once they have completed successfully. To indicate that the operation was successful, it is a good idea to send the HTTP status code 200. The getApp.js file now looks like Listing 7.
'use strict';
const bodyParser = require('body-parser');
const express = require('express');
const Todos = require('./Todos');
const getApp = async function () {
const todos = new Todos();
await todos.initialize();
const app = express();
app.use(bodyParser.json());
app.post('/note-todo', async (req, res) => {
const { title } = req.body;
await todos.noteTodo({ title });
res.status(200).end();
});
app.post('/tick-off-todo', async (req, res) => {
const { id } = req.body;
try {
await todos.tickOffTodo({ id });
res.status(200).end();
} catch {
res.status(404).end();
}
});
app.get('/pending-todos', async (req, res) => {
const pendingTodos = await todos.getPendingTodos();
res.json(pendingTodos);
});
return app;
};
module.exports = getApp;
Validate the inputs
What is still missing is a validation of the inputs: At the moment, it is quite possible to call one of the command routes without passing the required parameters in the request body. In practice, it has proven useful to validate JSON objects by using a JSON schema. A JSON schema represents a description of the valid structure of a JSON object. In order to be able to use JSON schemas, a module is again required, for example, validate-value [7] which can be installed via npm:
$ npm install validate-value
Now the module can be loaded in the getApp.js file:
const { Value } = require('validate-value');
The next step is to create two schemas. Since these are always the same, it is advisable not to do this inside the routes, but outside them, so that the code does not have to be executed over and over again, ultimately ending up with the same result each time (Listing 8).
const noteTodoSchema = new Value({
type: 'object',
properties: {
title: { type: 'string', minLength: 1 }
},
required: [ 'title' ],
additionalProperties: false
});
const tickOffTodoSchema = new Value({
type: 'object',
properties: {
id: { type: 'string', format: 'uuid' }
},
required: [ 'id' ],
additionalProperties: false
});
Within the two command routes, the only thing left to do is to validate the received data using the respective schema, and in case of an error, return an appropriate HTTP status code, for example, a 400 error (Listing 9).
app.post('/note-todo', async (req, res) => {
if (!noteTodoSchema.isValid(req.body)) {
return res.status(400).end();
}
const { title } = req.body;
await todos.noteTodo({ title });
res.status(200).end();
});
app.post('/tick-off-todo', async (req, res) => {
if (!tickOffTodoSchema.isValid(req.body)) {
return res.status(400).end();
}
const { id } = req.body;
try {
await todos.tickOffTodo({ id });
res.status(200).end();
} catch {
res.status(404).end();
}
});
CORS and testing
With this, the API is almost finished; only a few small things are missing. For example, it would be handy to be able to configure CORS – that is, from which clients the server may be accessed. In practice, this topic is a bit more complex than described below, but for development purposes, it is often sufficient to allow access from everywhere. The best way to do this is to use the npm module cors [8], which must first be installed via npm:
$ npm install cors
It must then be loaded, which is again done in the getApp.js file:
const cors = require('cors');
Finally, it must be integrated into the express application in the same way as body-parser, because this module is also middleware. Whether this call is made before or after the body-parser does not really matter – but since access should be denied before the request body is processed, it makes sense to include cors as the first middleware:
// ...
const app = express();
app.use(cors());
app.use(bodyParser.json());
// ...
Now, in order to test the API, a client is still missing. Developing this right now would be too time-consuming, so you can fall back on a tool that is extremely practical for testing HTTP APIs and that is usually pre-installed on macOS and Linux, namely, curl. On Windows, it is also available, at least in the Windows Subsystem for Linux (WSL). First, you can try to retrieve the (initially empty) list of all tasks:
$ curl http://localhost:3000/pending-todos
[]
In the next step, you can now add a task. Make sure that you not only send the required data, but also set the Content-Type header to the correct value – otherwise the body-parser will not be active:
$ curl \
-X POST \
-H 'content-type:application/json' \
-d '{"title":"Develop a Client"}' \
http://localhost:3000/note-todo
If you retrieve the tasks again, you will get a list with one entry (in fact, the list would be output unformatted in a single line, but for the sake of better readability it is shown formatted in the following):
$ curl http://localhost:3000/pending-todos
[
{
"id": "dadd519b-71ec-4d18-8011-acf021e14365",
"timestamp": 1601817586633,
"title": "Develop a Client"
}
]
If you try to check off a task that does not exist, you will notice that this has no effect on the list of all tasks. However, if you use the -i parameter of curl to also output the HTTP headers, you will see that you get the value 404 as the HTTP status code:
$ curl \
-i \
-X POST \
-H 'content-type:application/json' \
-d '{"id":"43445c25-c116-41ef-9075-7ef0783585cb"}' \
http://localhost:3000/tick-off-todo
The same applies if you do not pass a UUID as a parameter (or specify an empty title in the previous example). However, in these cases, you get the HTTP status code 400. Last but not least, you can now try to actually check off the noted task by passing the correct ID:
$ curl \
-X POST \
-H 'content-type:application/json' \
-d '{"id":"dadd519b-71ec-4d18-8011-acf021e14365"}' \
http://localhost:3000/tick-off-todo
If you retrieve the list of all unfinished tasks again, you will get an empty list – as desired:
$ curl http://localhost:3000/pending-todos
[]
Outlook
This concludes the second part of this series on Node.js. Of course, there is much more to discover in the context of Node.js and Express for writing Web APIs. Another article could be dedicated to the topics of authentication and authorization alone. But now we have a foundation to build upon.
The biggest shortcoming of the application at the moment is that it is not possible to ensure code quality and the code has already become relatively confusing. There is a lack of structure, binding specifications regarding the code style, and automated tests. These topics will be dealt with in the third part of the series – before further functionality can be added.
The author’s company, the native web GmbH, offers a free video course on Node.js [9] with close to 30 hours of playtime. Episodes 4 and 5 of this video course deal with topics covered in this article, such as developing web APIs, using Express, and using middleware. Therefore, this course is recommended for anyone interested in more details.
[1] https://www.npmjs.com/package/processenv
[2] https://www.youtube.com/watch?v=YmzVCSUZzj0
[3] https://www.youtube.com/watch?v=frUNFrP7C9w
[4] https://www.youtube.com/watch?v=k0f3eeiNwRA
[5] https://www.npmjs.com/package/express
[6] https://www.npmjs.com/package/body-parser
[7] https://www.npmjs.com/package/validate-value
[8] https://www.npmjs.com/package/cors
[9] https://www.thenativeweb.io/learning/techlounge-nodejs
This is exactly what Node.js does away with. Node.js is a runtime environment for JavaScript that does not run in the web browser, but on the server. This makes it possible to use JavaScript for the development of the backend as well, so that the technological break that always existed until then is no longer necessary. Conveniently, Node.js is based on the same compiler for JavaScript as the Chrome web browser, namely V8 – and thus offers excellent support for modern language features. Meanwhile, Node.js, which was first introduced to the public in 2009, is over 10 years old and is supported by all major web and cloud providers. Unlike Java and .NET, for example, Node.js is not developed by a single company but by a community; however, this does not detract from its suitability for large and complex enterprise projects. On the contrary, the very fact that Node.js is under an open source license has now become an important factor for many companies when selecting a suitable base technology.
Installing Node.js
If you want to use Node.js, the first step is to install the runtime environment. In theory, you can compile Node.js yourself, but corresponding pre-compiled binary packages are also available for all common platforms. This means that Node.js can be used across platforms, including macOS, Linux, and Windows. However, Node.js can also be run on Raspberry Pi and other ARM-based platforms without any problems. Since the binary packages are only a few MB in size, the basic installation is done very quickly. There are several ways to install it. The most obvious is to use a suitable installer, which can be downloaded from the official website [1]. Although the installation is done with a few clicks, it is recommended to refrain from this for professional use. The reason is that the official installers do not allow side-by-side installation of different versions of Node.js. If one performs an update, the system-wide installed version of Node.js is replaced by a new version, which can lead to compatibility problems with already developed applications and modules. Therefore, it is better to rely on a tool like nvm [2], which allows side-by-side installation of different versions of Node.js and can manage them. However, nvm is only available for macOS and Linux. For Windows, there are ports or replicas, for example, nvm-windows [3], whose functionality is similar. In general, however, macOS and Linux are better off in the world of Node.js. Most tools and modules are primarily developed for these two platforms, and even though JavaScript code is theoretically not platform dependent, there are always little things that you fail at or struggle with on Windows. Although the situation has improved considerably in recent years due to Microsoft’s commitment in this area, the roots of the community are still noticeable. To install nvm, a simple command on the command line is enough:
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
Afterwards it is necessary to restart the command line, otherwise nvm cannot find some environment variables. Simply closing and reopening the terminal is sufficient for this. Then the desired version of Node.js can be installed. For example, to install version 14.9.0, the following command is sufficient:
$ nvm install 14.9.0
If necessary, the version number can also be shortened. For example, if you simply want to install the latest version from the 14.x series, you can omit the specific minor and release version:
$ nvm install 14
All installed versions can be displayed in a list by entering the following command:
$ nvm ls
To select and activate one of the installed versions, the following command is used, where again an abbreviated version number may be specified:
$ nvm use 14.9.0
Often you want to set a specific version as default, for example, to work within a newly opened command line. For this nvm knows the default concept, where default serves as an alias for a specific version. For example, to specify that you normally always want to work with the latest installed version from the 14.x series, you must define the default alias as follows:
$ nvm alias default 14
If you take a closer look at the Node.js website, you will notice that there are two versions available for download: a so-called LTS version and a current version. LTS stands for Long-Term Support, which means that this version of Node.js is provided with security updates and bug fixes for a particularly long time – although particularly long in this context means only 30 months. Support for the current version, on the other hand, expires after just 12 months. It is therefore advisable to always rely on the LTS versions for productive use and to update them once a year – a new LTS version is always released in October according to the Node.js roadmap.
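With nvm, for example, the current LTS release can be installed directly – one possible invocation, assuming a reasonably recent nvm version:
$ nvm install --lts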
Hello world!
After the installation, we start Node.js. If you call it without any further parameters, it opens in an interactive mode where you can enter and evaluate JavaScript statements live. This is occasionally handy for quickly trying out a language construct but is hardly suitable for actual development. You can exit this mode by pressing CTRL + C twice. To develop applications, therefore, a different procedure is needed. First, you need any kind of editor or IDE, provided that the tool of choice can save plain text files with the .js extension. Node.js does not enforce that an application must have a specific name, though app.js has become common for the primary file. Occasionally, you may encounter other names, such as server.js or run.js, but app.js is used below. You can put any JavaScript code in such a file, for example a “hello world” program:
console.log('Hello world!');
To run this application, all you need to do is call Node.js and pass the filename as a parameter:
$ node app.js
Node.js translates the specified application into executable machine code using V8 and then starts execution. Since the program ends after outputting the string to the screen, Node.js also terminates execution, so you return to the command line. However, a pure console program is still not very impressive. It gets much more interesting when you use Node.js to develop your first small web server. To do this, you need to make use of a module that is built into Node.js out of the box, namely the http module. Unlike .NET and Java, Node.js does not contain a class or function library with hundreds of thousands of classes and functions. Instead, Node.js limits itself to the absolute essentials. The philosophy behind it is that everything else can be provided via third-party modules from the community. This may seem unusual at first glance, but it keeps the core of Node.js incredibly lean and lightweight. The http module is one of the few modules built into Node.js out of the box. Others can be found in the documentation [4]. To load a module, you have to import it using the built-in require function. This behaves similarly to using in C# or import in Java, yet there is one serious difference: unlike the aforementioned statements, the require function returns a result, namely a reference to the module to be loaded. This reference must be stored in a variable, otherwise the module cannot be accessed. Therefore, the first line of the Node.js application is as follows:
const http = require('http');
Then you can use the createServer function of the http module to create a server. It is important to make sure that you pass it a function as a parameter that can react to incoming requests and send back a corresponding response. This function is thus called again for each incoming request and can generate an individual result in each case. In the simplest case it always returns the same text. The function res.write is used for this purpose. Afterwards it is necessary to close the connection. This is done with the function res.end. The call to createServer in turn also returns a reference, but this time to the created web server:
const server = http.createServer((req, res) => {
res.write('Hello world!');
res.end();
});
Next, the web server must be bound to a port so that it can be reached from outside. This is done using the listen function, which is passed the desired port as a parameter:
server.listen(3000);
Last but not least, it is advisable to get into the habit from the very beginning of providing every .js file with strict mode, a special JavaScript execution mode in which some dangerous language constructs are not allowed, for example, the use of global variables. To enable the mode, you need to insert the appropriate string at the beginning of a file as a kind of statement. This makes the full contents of the app.js file look like the one shown in Listing 1.
'use strict';
const http = require('http');
const server = http.createServer((req, res) => {
res.write('Hello world!');
res.end();
});
server.listen(3000);
If you now start this application again, you can access it from the web browser by calling the address http://localhost:3000. In fact, you can also append arbitrary paths to the URL: Since the program does not provide any special handling for paths, HTTP verbs, or anything else, it always responds in an identical way. If one is actually interested in the path or, say, the HTTP verb, one can access these values via the req parameter. The program shown in Listing 2 outputs both values, so it produces output à la GET /.
'use strict';
const http = require('http');
const server = http.createServer((req, res) => {
res.write(`${req.method} ${req.url}`);
res.end();
});
server.listen(3000);
In addition to the http module, there are several other built-in modules, for example for accessing the file system (fs), for handling paths (path) or for TCP (net). Node.js also offers support for HTTPS (https) and HTTP/2 (http2) out of the box. Nevertheless, for most tasks, you will have to rely on modules from the community.
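As a small illustration – the file name is arbitrary and this snippet is not part of the example application – the fs and path modules can be combined to read and parse a JSON file asynchronously:
'use strict';
const fs = require('fs').promises;
const path = require('path');
const readPackageJson = async function () {
  // Build an absolute path relative to the current file and read it as UTF-8.
  const filename = path.join(__dirname, 'package.json');
  const content = await fs.readFile(filename, 'utf8');
  return JSON.parse(content);
};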
Include modules from third parties
Modules developed by the community can be found in a central and publicly accessible registry on the internet, the so-called npm registry. npm is also the name of a command line tool that acts as a package manager for Node.js and is included in the installation scope of Node.js. This means that npm can basically be invoked in the same way as Node.js itself. A simple example of a module from the community is the processenv module [5], which provides access to environment variables. This is also possible using Node.js’ on-board means, but then you always get the values of the environment variables as strings, even if the value is a number or a logical value, for example. The processenv module, on the other hand, converts the values appropriately so that you automatically get the desired value.
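To illustrate the difference with a hypothetical value (using the processenv function once it has been imported as shown further below): if the environment variable PORT is set to 3000, the built-in process.env returns the string '3000', while processenv returns the number 3000 and additionally supports a fallback value:
const rawPort = process.env.PORT;       // '3000' – environment variables are always strings
const port = processenv('PORT', 3000);  // 3000 – parsed as a number, with 3000 as the fallback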
Before you can install a third party module, you first have to extend your own application with the file package.json. This file contains metadata about the application. Only a name and a version number are mandatory, which is why the minimum content of this file has the following form:
{
"name": "my-http-server",
"version": "0.0.1"
}
It should be noted that the version number must always consist of three parts and follow the concept of semantic versioning [6]. In addition, however, dependencies can also be stored in this file, whereby required third-party modules are explicitly added. This makes it much easier to restore a certain state later or to get an overview of which third-party modules an application depends on. To install a module, call npm as follows:
$ npm install processenv
This extends the package.json file with the dependencies section, where the dependency is entered as follows:
{
"name": "my-http-server",
"version": "0.0.1",
"dependencies": {
"processenv": "^3.0.2"
}
}
Also, npm downloads the module from the npm registry and copies it locally to a directory named node_modules. It is recommended that you exclude this directory from version control. If you delete it or retrieve the code for your application from the version control system, which does not include the directory, you can easily restore its contents:
$ npm install
The specification of the desired modules can now be omitted; after all, they can be found together with the version number in the package.json file. A conspicuous feature of this file is the caret (^) in front of the version number of processenv. It has the effect that npm install does not necessarily install exactly version 3.0.2, but possibly also a newer version, if it is compatible. However, this mechanism is dangerous, so it is advisable to consistently remove the caret from the package.json file. To avoid having to do this over and over again by hand, npm can alternatively be configured to not write the caret at all. To do this, create a file named .npmrc in the user’s home directory and store the following content there:
save-exact=true
And finally, in addition to the node_modules directory, npm has also created a file called package-lock.json. It is actually used to lock version numbers despite the caret being specified. However, it has its quirks, so if npm behaves strangely, it’s often a good idea to delete this file and the node_modules directory and run npm install again from scratch. Once a module has been installed via npm, it can be loaded in the same way as a module built into Node.js. In that case, Node.js recognizes that it is not a built-in module and loads the appropriate code from the node_modules directory:
const processenv = require('processenv');
Then you can use the module. In this example application, it would be conceivable to read the desired port from an environment variable. However, if this variable is not set, specifying a port as a fallback is still a good idea (Listing 3).
'use strict';
const http = require('http');
const processenv = require('processenv');
const port = processenv('PORT', 3000);
const server = http.createServer((req, res) => {
res.write(`${req.method} ${req.url}`);
res.end();
});
server.listen(port);
Structure the application
As applications grow larger, it is not advisable to put all the code in a single file. Instead, it is necessary to structure the application into files and directories. This is already possible even in the case of the program, which is still very manageable, because you could separate the actual application logic from the server. In order to illustrate this, however, an intermediate step is introduced first: The function that contains the application logic is swapped out into its own function (Listing 4).
'use strict';
const http = require('http');
const processenv = require('processenv');
const port = processenv('PORT', 3000);
const app = function (req, res) {
res.write(`${req.method} ${req.url}`);
res.end();
};
const server = http.createServer(app);
server.listen(port);
In fact, it would also be conceivable to wrap this function in a function again in order to be able to configure it. Instead of the app function, you would then get a getApp function. The outer function can then be equipped with any parameters that the inner function can access. The signature of the inner function must not be changed, because it is predefined by Node.js through createServer:
const getApp = function () {
const app = function (req, res) {
res.write(`${req.method} ${req.url}`);
res.end();
};
return app;
};
However, this also means that you have to adjust the call to createServer accordingly:
const server = http.createServer(getApp());
Now the application is prepared to be split into different files. The getApp function is to be placed in its own file called getApp.js. Since the definition of the function is then missing in the app.js file, it must be loaded there, which – unsurprisingly – is again done using the require function. However, a relative or an absolute path must now be specified so that the require function can distinguish files to be reloaded from modules with the same name. The file extension .js can, but does not have to be specified (Listing 5).
'use strict';
const getApp = require('./getApp');
const http = require('http');
const processenv = require('processenv');
const port = processenv('PORT', 3000);
const server = http.createServer(getApp());
server.listen(port);
If you now try to start the application in the usual way, you get an error message. This is because Node.js considers everything defined inside a file as private – unless you explicitly export it. Therefore, it tries to import the content of the file getApp.js, but nothing is exported from there. The remedy is to assign the getApp function to the module.exports object (Listing 6).
'use strict';
const getApp = function () {
const app = function (req, res) {
res.write(`${req.method} ${req.url}`);
res.end();
};
return app;
};
module.exports = getApp;
Whatever a file exports this way will be imported again by require: So if you export a function, you get a function afterwards; if you export an object, you get an object, and so on.
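For illustration (a hypothetical file that is not part of the example application), exporting an object works in exactly the same way:
// lib/config.js
'use strict';
module.exports = { port: 3000 };

// in some other file
const config = require('./lib/config');
console.log(config.port); // 3000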
If you start the application again, it runs as before. The only unpleasant thing is the directory structure since the main directory becomes increasingly full. It is obvious that with even more files it quickly becomes confusing. The position of the files package.json and package-lock.json is predefined, as well as the position of the node_modules directory, and also the file app.js is well placed on the top level. However, any further code placed here will be disruptive:
/
node_modules/
app.js
getApp.js
package.json
package-lock.json
Therefore, many projects introduce a directory called lib, which does not contain the main executable file of the application, but any other code. Adapting the directory structure for this project results in the following structure:
/
lib/
getApp.js
node_modules/
app.js
package.json
package-lock.json
But now the import in the file app.js does not fit anymore, because the file getApp.js is still searched in the same directory as the file app.js. So it is necessary to adjust the parameter of require:
const getApp = require('./lib/getApp');
As you can see, this way it is quite easy to structure code in Node.js. Directories take over the role of namespaces. There is no further subdivision of this kind. The next step is to add more functionality to the application, which means writing more code and including more third-party modules from npm. One of the biggest changes when you start working with Node.js is the multitude of npm modules that you come into contact with over time, even on small projects. The idea behind this is that, in terms of complexity, it is more beneficial to maintain many small building blocks whose power comes from their flexible combinability than to use a few large chunks.
Outlook
This concludes the first part of this series on Node.js. Now that the basics are in place, the next part will look at writing web APIs. This will include topics like routing, processing JSON as input and output, validating data, streaming, and the like.
The author’s company, the native web GmbH, offers a free German video course on Node.js [7] with close to 30 hours of playtime. The first three episodes deal with the topics covered in this article, such as installing, getting started, and using npm and modules. Therefore, this course is recommended for anyone interested in more details.
[2] https://github.com/nvm-sh/nvm
[3] https://github.com/coreybutler/nvm-windows
[4] https://nodejs.org/dist/latest-v14.x/docs/api/
[5] https://www.npmjs.com/package/processenv
[7] https://www.thenativeweb.io/learning/techlounge-nodejs
It’s time to make it all better! At the conference, Dahl presented his latest development. Similar to his first Node.js presentation it is a very young prototype: Deno – a secure runtime environment for JavaScript and TypeScript [1]. Like Node.js, Deno is based on V8 and, unlike Node.js, uses Rust instead of C++ for the development of Deno itself. Since Ryan is a big TypeScript lover, it is natural that Deno supports TypeScript. Node.js itself can be used with TypeScript, but you have to implement the compiling and debugging support yourself. With Deno, everything comes out of the box. About one year later, in May 2020, Deno was officially released in version 1.0. This was made possible not only by Ryan himself but also by the 328 contributors.
It is important to know that Deno has made a conscious decision to do a little spring cleaning and not be compatible with Node.js. This decision enables Deno to cut off outdated practices and work with the most modern development methods and concepts possible. Deno sees itself as a web browser for executing command-line scripts, so many browser APIs should work as usual in Deno, e.g. fetch.
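A quick illustration of this (hypothetical and not part of the article’s example): fetch is available globally in Deno, just as in the browser, and can be combined with top-level await – the script merely has to be started with the --allow-net permission:
const response = await fetch("https://deno.land/");
console.log(response.status);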
It is time to take a closer look at Deno and see if it could be a Node.js killer.
In this article, we will explore Deno using a small example. For this purpose, we implement a small HTTP API with access to a SQLite 3 database to manage a simple list of computer games. The finished example can be found on GitHub [2].
Deno must be installed first. On the installation page [3] there are different possibilities for all operating systems. A variant that allows multiple versions is recommended, for example with the runtime manager asdf [4], since Deno currently plans on releasing a new version every two weeks.
Both popular IDEs, VS Code and JetBrains’ WebStorm, support development with Deno. For VS Code, the official plugin [5] needs to be installed. For WebStorm, the Deno support can be enabled in the preferences of the IDE. However, both offer very early support for Deno, so expect that some things may not work as expected [6]; the overall support is still very rudimentary. Once everything is installed, create a project folder windows-developer-deno and load it into our IDE. Next, create a folder src and a file index.ts. In the index.ts we first write a log output:
console.log("Hello Deno!");
With the help of a command line we execute the file:
deno run index.ts
As a log output, we first receive the message that Deno is compiling our file. Even though Deno supports TypeScript as a first-class citizen, it still has to compile TypeScript to JavaScript, since the underlying V8 can only interpret JavaScript. After compilation, we should see the text “Hello Deno!”. The first step with Deno is made!
The core of our HTTP API is of course an HTTP server. For this purpose, we create the file src/http-server.ts. Listing 1 shows the contents of the file.
Listing 1: Content of the file src/http-server.ts
import { Application, Router } from "https://deno.land/x/oak@v4.0.0/mod.ts";
export class HttpServer {
private readonly app = new Application();
constructor(private readonly port: number) {}
async listen(): Promise<void> {
const router = new Router();
this.app.use(router.routes());
this.app.use(router.allowedMethods());
console.log(`Running HTTP Server on port ${this.port}`);
await this.app.listen({ port: this.port });
}
}
In the first line of Listing 1 we see a direct difference to Node.js: We import application and router from one URL. As mentioned in the beginning, the package.json from Node.js was one thing Ryan Dahl regrets. Therefore this concept no longer exists in Deno. Instead, all modules are loaded via an absolute or relative URL.
Deno currently offers two standard repositories for modules. https://deno.land/x/ serves third-party modules. Unlike npm, the modules are not uploaded to https://deno.land/x/. Instead, the service acts as a pure URL rewrite service and forwards the request to the target, such as GitHub. Instead of https://deno.land/x/, you could also import directly from GitHub.
The second repository is https://deno.land/std/, a collection of standard modules that work without additional dependencies and have been reviewed by the Deno core team. This guarantees that these modules are of high quality and that version X of them is always compatible with Deno in version X. Even though, as mentioned at the beginning, Deno is deliberately not compatible with Node.js, a first approach to making CommonJS modules from Node.js compatible with Deno already exists at https://deno.land/std/node. At the moment, however, this approach is still a proof of concept.
At runtime, before Deno compiles our TypeScript, all dependencies are downloaded and stored in a central cache (instead of in a node_modules folder in each project). This central cache is never deleted by Deno. If you want to download modules again, you have to specify the --reload flag at startup. Typically, the versioning of modules is done directly in the URL during import. See our example in Listing 1, where we import the module Oak in version 4.0.0, which corresponds to a branch or tag in the corresponding GitHub repository. If you were to omit the version, you would get the current master branch of the repository as a module. For the update, the flag --reload is then mandatory.
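Refreshing the cached dependencies could then look like this (a sketch; the permission flags shown later can be added as needed):
deno run --reload index.ts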
Oak is one of the first third-party modules to develop a middleware-based HTTP API. Oak’s API is inspired by the Node.js module Koa. If you prefer a web framework instead of a server module, take a look at the module Alosaur [7]. Similar to .NET Core, it brings features like dependency injection, decorators, controllers, view rendering, and SPA framework integration. In our article example, we use the simpler Oak, because we want to get to know Deno first and not a specific framework.
Further in Listing 1 we define the class HttpServer and create an instance of Application in a private field. Application is a class of the module Oak and offers us an HTTP server. Via the constructor, we get the port with which our server should be started later.
The listen method is used to finally start our HTTP server on the specified port. For this purpose we create an instance of the Oak-Router, which later allows us to map URLs to functions. We then use these routes and the corresponding HTTP methods as middleware in our application. Last but not least, we start the HTTP server with this.app.listen. Here we also see another difference. With Node.js all servers were usually equipped with a callback. Good news: Callbacks in Deno are history. Instead, Deno uses only promises for all asynchronous operations, meaning that we can develop our code using the async/await pattern.
To start our HTTP server, we first switch back to the file index.ts and replace the content with the one in Listing 2.
Listing 2: Content of src/index.ts
import { HttpServer } from "./http-server.ts";
const port = Deno.env.get("PORT") || 8080;
const httpServer = new HttpServer(+port);
await httpServer.listen();
In Listing 2 we first import our HttpServer via a relative URL. Then we read out the environment variable PORT via Deno.env and use 8080 as our default if the variable is not set. In general, all APIs that do not conform to the web standard are available under the global object Deno, for example: reading files, opening TCP sockets, and calling other processes.
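As a small illustration (the file name is hypothetical and not part of the article’s example), reading a file via the global Deno object could look like this – note that it requires starting Deno with --allow-read:
const content = await Deno.readTextFile("./notes.txt");
console.log(content.length);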
After reading the port we create an instance of the HttpServer and call the method listen. It is exciting that we can already use await at the top level of the file, without wrappers or similar.
If we start our program via deno run index.ts, first the dependencies are downloaded, our TypeScript is compiled, and then we are welcomed with an error message:
error: Uncaught PermissionDenied: access to environment variables, run again with the --allow-env flag
As mentioned at the beginning, Deno sees itself more as a web browser. We are accustomed to the browser providing some security when a website wants to access certain APIs, such as the camera or microphone. This concept was transferred to Deno. Any code runs in a sandbox, without any further rights. Any access to resources such as operating system, environment variables, network or file system must be explicitly allowed when the application is started.
So for using environment variables we need the flag --allow-env, and for the HTTP server we need --allow-net. Therefore we have to start our application with this command:
deno run --allow-env --allow-net index.ts
This command can also be stored in a shell script. After the execution, our application starts. In order for it to really be able to respond to HTTP requests, we need to add controllers and the ability to access a SQLite 3 database.
In order to access a SQLite 3 database, we will use the module denodb [8]. denodb is also one of several modules that already allow access to databases. We use SQLite 3 here to avoid the installation of a database engine. denodb itself can address MySQL or PostgreSQL databases in addition to SQLite 3.
First, we create a file src/database/game.entity.ts. The content of the file can be found in Listing 3.
Listing 3: Content of the file src/database/game.entity.ts
import { DATA_TYPES, Database, Model } from "https://deno.land/x/denodb/mod.ts";
export class GameEntity extends Model {
static table = "games";
static timestamp = true;
static fields = {
id: {
primaryKey: true,
autoIncrement: true,
},
name: DATA_TYPES.STRING,
type: DATA_TYPES.STRING,
publisher: DATA_TYPES.STRING,
developer: DATA_TYPES.STRING,
};
}
The denodb API expects one class derived from Model per database entity – here, our GameEntity. The model is described using static fields. The field table indicates the table in which the entity is stored. The field timestamp specifies whether, in addition to our fields defined in fields, the columns created_at and updated_at are also created and updated when the data is manipulated. In the fields section, there is an object with the fields of the model. Each key is mapped to a column. For simple columns, the data type can be defined using DATA_TYPES. If a column (like id) needs further meta information, this is done via another object. It should be noted that denodb is currently not yet able to express relations. This feature is still being implemented.
The next step is to create the file src/database/index.ts. We find the content in Listing 4.
Listing 4: Content of the file src/database/index.ts
import { Database } from "https://deno.land/x/denodb/mod.ts";
import { GameEntity } from "./game.entity.ts";
export class DatabaseProvider {
private connection?: Database;
async connect(filepath: string): Promise<void> {
console.log("Connecting to DB", filepath);
this.connection = new Database("sqlite3", { filepath });
this.connection.link([GameEntity]);
await this.connection.sync({ drop: true });
}
async save(): Promise<void> {
await this.connection?.close();
}
}
export const databaseProvider = new DatabaseProvider();
In Listing 4 we see the implementation of the DatabaseProvider class. It provides two methods. The connect method connects to our SQLite 3 via the specified file path filepath. For this purpose, a new instance of the Database class is created. Via the method link, we communicate all models we want to link to the database. The method sync ensures that all tables and columns are created. With the drop setting we specify that all model tables in the database will be deleted and created again. This is very handy at development time, however, for production this feature should be switched off.
The second method saves our changes in the database. Here you can still find a small design problem (bug) from denodb. There is no Commit for SQLite 3 yet, and writing to the database does not take place until the connection is closed. Therefore, when we save, we simply close the connection to the database and denodb will re-establish it if necessary. Finally, we create an instance of the DatabaseProvider and export it as the variable databaseProvider.
To ensure that the connection is established once when the application is started, we add the code from Listing 5 to our src/index.ts file, immediately before the httpServer.listen location.
Listing 5: Addition in the file src/index.ts
const filepath = Deno.env.get("DB_FILEPATH") || "./windows-developer.sqlite";
await databaseProvider.connect(filepath);
// httpServer.listen ...
In Listing 5 we load the path to the database via the environment variable DB_FILEPATH. If it is not set, we use windows-developer.sqlite as the default value. We then use the connect method to establish a connection to the database.
One link between the database and the outside world is still missing: an HTTP controller that accepts HTTP requests and communicates with the database. Normally you would add an additional service layer so that the controller communicates with a service and the service communicates with the database entities. For the demo in this article, we do without this additional indirection and let the controller talk directly to the database.
Start by creating the file src/controllers/game.controller.ts. We find the content in Listing 6.
Listing 6: Content of the file src/controllers/game.controller.ts
import { Router, RouterContext } from "https://deno.land/x/oak@v4.0.0/mod.ts";
import { GameEntity } from "../database/game.entity.ts";
import { databaseProvider } from "../database/index.ts";
export class GameController {
  constructor(router: Router) {
    // Register the CRUD routes for our GameEntity on Oak's router.
    router.get("/games", (context) => this.list(context));
    router.post("/games", (context) => this.create(context));
    router.put("/games", (context) => this.update(context));
    router.delete("/games/:id", (context) => this.delete(context));
  }

  async list(context: RouterContext): Promise<void> {
    // Return all entities stored in the database.
    context.response.body = await GameEntity.all();
  }

  async update(context: RouterContext): Promise<void> {
    const { value } = await context.request.body();
    const { id } = value;
    const entity = await GameEntity.find(id);
    if (!entity.length) {
      // No response is set, so Oak answers with HTTP 404.
      return;
    }
    try {
      await GameEntity.where("id", id).update(value);
      await databaseProvider.save();
      context.response.status = 200;
    } catch (error) {
      console.error(error);
      context.throw(500);
    }
  }

  // Methods create and delete: please see the final GitHub example
}
The GameController implements a CRUD interface for our GameEntity. For demonstration purposes, only the methods list and update are completely displayed in this article. The create and delete methods are identical in structure and can be viewed in the final example on GitHub [2].
In the constructor, we use Oak’s router and implement four HTTP routes to list all our entities, create, delete, and update an entity.
Let’s take a closer look at the two methods list and update. When using the Oak router, a handler receives a parameter context of the type RouterContext. This object carries information about the request and lets us specify the response.
In the list method, the response to the HTTP request is quite simple: we set the context.response.body property on the context object to all entities available in the database. For a production scenario, you would implement paging at this point.
In the update method, we first read the body of the HTTP request. The result is an object with a value property that contains the JSON object the client sends to the API to change a record. We then check whether a GameEntity with the given id exists in the database. If not, we return from the method early; since we do not set a response, Oak automatically answers with an HTTP error 404. If we do find the entity, we update it with all values that were transmitted. Again, please note that you would not implement it like this in a production scenario, because you first have to validate all values that were transferred to you. After the update, we call our save method so that the change is also persisted. Finally, we set the HTTP status to 200 to tell the client that the update was successful. If something goes wrong, our try-catch block takes over, logs the error to the console, and ends the request with an HTTP error 500.
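A minimal sketch of such a check, assuming the GameEntity fields used elsewhere in this article (name, type, developer, publisher), could copy only known properties into the update instead of passing value through unchanged:
// Hypothetical whitelist check (not part of the article's code)
const allowedFields = ["name", "type", "developer", "publisher"];
const sanitized: Record<string, unknown> = {};
for (const field of allowedFields) {
  if (field in value) {
    sanitized[field] = value[field];
  }
}
await GameEntity.where("id", id).update(sanitized);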
In order to use our controller, we extend the file src/http-server.ts. The additions are shown in Listing 7.
Listing 7: Additions to the file src/http-server.ts
// add to import
import { GameController } from "./controllers/game.controller.ts";
// const router = new Router();
new GameController(router);
// this.app.use ...
This completes the development of our HTTP API. If we try to start the application now, we are greeted with an error message: the SQLite 3 database is read from and written to disk, and we have to give Deno explicit permission for this by specifying the flags --allow-read and --allow-write at startup. Our start command looks like this:
deno run --allow-net --allow-env --allow-read --allow-write index.ts
Now a GameEntity can be created with an HTTP request, for example via Postman [9] (Listing 8).
Listing 8: HTTP POST request to create a GameEntity
POST /games HTTP/1.1
Host: localhost:8080
Content-Type: application/json
{
"name": "Idle Ambulance",
"type": "Idle Game",
"developer": "Boundfox Studios",
"publisher": "Boundfox Studios"
}
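Alternatively, the same request can be sent with a few lines of Deno code, since Deno ships the browser fetch API; the URL and payload below simply mirror Listing 8, and running the script requires --allow-net:
const response = await fetch("http://localhost:8080/games", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    name: "Idle Ambulance",
    type: "Idle Game",
    developer: "Boundfox Studios",
    publisher: "Boundfox Studios",
  }),
});
console.log(response.status);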
To make sure that the route /games is actually registered, we can write a small unit test. Deno comes with a test framework and a test runner. We create the file src/controllers/game.controller.test.ts with the contents of Listing 9.
Listing 9: Content of the file src/controllers/game.controller.test.ts
import { stub } from "https://raw.githubusercontent.com/udibo/mock/v0.3.0/stub.ts";
import { GameController } from "./game.controller.ts";
import { Router } from "https://deno.land/x/[email protected]/mod.ts";
import { assertEquals } from "https://deno.land/std/testing/asserts.ts";
Deno.test("create post route", () => {
const router = new Router();
const postStub = stub(router, "post");
const sut = new GameController(router);
assertEquals(postStub.calls.length, 1);
assertEquals(postStub.calls[0].args[0], "/games");
});
In Listing 9, we see a small unit test together with the mock module for stubbing objects. Via Deno.test we create a unit test whose callback is executed by the test runner. The callback can also return a promise, so the async/await pattern can be used. In the callback itself, we create an instance of Router and stub its post method. Unfortunately, mock cannot yet create real mocks as you may know them from ts-mockito, so we make do with a stub for now. Then we create our GameController and pass the router in. Remember that when you create a GameController, the routes are registered. Via our stub postStub we can check whether the method was called and whether its first argument corresponds to the route /games.
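As a trivial sketch of that async variant (not part of the article’s test file), a test callback can simply be declared async and use await:
Deno.test("async example", async () => {
  const value = await Promise.resolve(42);
  assertEquals(value, 42);
});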
On the command line, we can call deno test to automatically run all tests in the project. Deno only picks up files whose names end in .test or _test (with a supported extension such as .ts or .js).
Let’s take a look at a few more features that Deno brings to the table. For this purpose, we return to the module system, which may take some getting used to with its URL imports, especially since the version is also part of the URL. This means that when we update a library, we actually have to change many files, whereas in Node.js we only changed the version in the package.json.
In Deno we can achieve something similar. To do this, create a file deps.ts in which you load and export all external dependencies, as shown in Listing 10. This has another advantage: we can have Deno create a lock file. It stores a hash for every file, so we can make sure that a module has not changed when it is downloaded again. The lock file, like the package-lock.json, is also checked into the source control system. The lock file is created as follows:
deno cache --lock lock.json --lock-write src/deps.ts
The --lock flag specifies the freely selectable name of the lock file. With --lock-write we tell Deno that we want to write the lock file. To check whether everything is correct, or after the project has been checked out again, we can use the following command:
deno cache -r --lock lock.json src/deps.ts
Here we again specify the file name of our lock file. The -r flag tells Deno to re-download and cache all modules. The lock file is then used to check whether the copies on your computer are exactly the same as when the lock file was created.
Listing 10: Example content of a deps.ts file
export { Router, RouterContext, Application } from "https://deno.land/x/[email protected]/mod.ts";
export { Database, DATA_TYPES, Model } from "https://deno.land/x/denodb/mod.ts";
export { assertEquals } from "https://deno.land/std/testing/asserts.ts";
export { stub } from "https://raw.githubusercontent.com/udibo/mock/v0.3.0/stub.ts";
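With such a deps.ts in place, the rest of the application imports from it instead of repeating the URLs. Purely as an illustration, the controller’s import could then look like this:
// in src/controllers/game.controller.ts
import { Router, RouterContext } from "../deps.ts";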
Deno supports the V8 inspector protocol for debugging [10]. This protocol is implemented by WebStorm, VS Code, and by Chrome itself, so all of these programs can be used for debugging. To do this, Deno must be started with the --inspect or --inspect-brk flag. In Chrome, the page chrome://inspect can then be called. There you will find a remote target for debugging the Deno application with the Chrome DevTools. Please note that debugging support is currently still very shaky: it is possible that the connection to the application will be terminated or that the application itself will simply end. In future Deno versions, debugging should become much more stable.
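For example, the API from this article could be started for debugging like this (with the same permission flags as before; --inspect-brk pauses execution until a debugger attaches):
deno run --inspect-brk --allow-net --allow-env --allow-read --allow-write index.ts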
Deno comes with a bundler that accepts an entry point to the application and combines the complete application, including all dependencies, into a single JavaScript file as an ES module [11]. This bundle can then be started via deno run, consumed from other files, or loaded in the browser. Our application from this article can be bundled with the following command:
deno bundle index.ts app.bundle.js
The Code Formatter is another practical tool that ships with Deno and formats our TypeScript files [12]. Calling deno fmt on the command line formats the entire project. If you only want to format a single file or directory, you can specify it as an argument instead. In its current state, Deno’s Code Formatter cannot be configured with custom code styles yet. There are already issues and pull requests to make the options of the Prettier formatter available in the future; again, it is only a matter of time before this is implemented.
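For example, to format only the controllers directory from this article, the call could look like this:
deno fmt src/controllers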
The Dependency Inspector rounds off the tools around Deno in this article [13]. The Dependency Inspector recursively lists all dependencies of a file, so that you can see at any time which module is loaded and needed by which code. With the following command line command the Inspector can be used for local files as well as for URLs:
deno info index.ts
Even though Deno is still in its infancy at version 1.0, many concepts are clearly recognizable, including some where Node.js was far too careless. It will be exciting to see how Deno develops further and, above all, how it is accepted in the real world. In this article we have only scratched the surface of Deno: we implemented a first small API including unit tests and got to know the most important tools. There is still much to explore, e.g. the built-in documentation generator, the compiler API, the script installer [14], the use of web workers [15], and the execution of WebAssembly binaries directly in Deno [16].
In the future, will Deno replace Node.js? Right now, this question remains open. As the Deno ecosystem matures, projects face the decision of whether to use Node.js or Deno. Should Node.js no longer offer any advantages over Deno, the choice will probably be in favor of Deno. Bye-bye Node.js, hello Deno!
I myself will implement new small projects with Deno rather than Node.js: I have been using TypeScript with Node.js for a long time, and Deno simply gives me a better workflow for that. As soon as the integration into IDEs improves, nothing will stand in the way of Deno. I am curious to see where the journey will take us. Have fun trying out Deno!
[1] https://deno.land
[2] https://github.com/thinktecture-labs/windows-developer-deno
[3] https://github.com/denoland/deno_install
[4] https://asdf-vm.com
[5] https://marketplace.visualstudio.com/items?itemName=justjavac.vscode-deno
[6] https://youtrack.jetbrains.com/issue/WEB-41607#focus=streamItem-27-4146419.0-0
[7] https://github.com/alosaur/alosaur
[8] https://deno.land/x/denodb
[9] https://www.postman.com
[10] https://deno.land/manual/tools/debugger
[11] https://deno.land/manual/tools/bundler
[12] https://deno.land/manual/tools/formatter
[13] https://deno.land/manual/tools/dependency_inspector
[14] https://deno.land/manual/tools
[15] https://deno.land/manual/runtime/workers
[16] https://deno.land/manual/getting_started/webassembly
The post Deno – the end of Node.js? appeared first on International JavaScript Conference.
The post Node.js is Dead – Long live Deno! appeared first on International JavaScript Conference.
iJS editorial team: Hello Krzysztof! You are an expert in Deno – a new JavaScript Framework created by the Node inventor Ryan Dahl. Can you briefly explain what Deno is exactly?
Krzysztof Piechowicz: Deno is a new platform for writing applications using JavaScript and TypeScript. Both platforms share the same philosophy – event-driven architecture and asynchronous non-blocking tools to build web servers and services. The author of Deno is Ryan Dahl, the original creator of Node.js. In 2018, he gave the famous talk “10 Things I Regret About Node.js” and announced his new project – Deno. Deno aims to fix Node.js design mistakes and offers a new modern development environment.
iJS editorial team: How does Deno differ from Node.js?
Krzysztof Piechowicz: Both platforms serve the same purpose, but use different mechanisms. Deno uses ES modules as the default module system, whereas Node.js uses CommonJS. External dependencies are loaded using URLs, similar to browsers. There is also no package manager and no centralized registry; modules can be hosted anywhere on the internet. Unlike Node.js, Deno executes code in a sandbox, which means the runtime has no access to the network, the file system or the environment. Access needs to be granted explicitly, which means better security. Deno supports TypeScript out of the box, so we don’t need to manually install and configure tools to write TypeScript code. Another difference is that Deno provides a set of built-in tools, like a test runner, a code formatter and a bundler.
iJS editorial team: Can you pick out a difference and demonstrate it with an example?
Krzysztof Piechowicz: In my opinion, the most important difference is how modules are imported. As I mentioned, Deno doesn’t use the CommonJS format and doesn’t provide a package manager like npm. All modules are loaded directly in code using a URL.
Here is a Node.js example:
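The article illustrates this with an image; since it is not reproduced here, the following two snippets (module names chosen purely for illustration) sketch the difference between the two styles:
// Node.js (CommonJS): dependency installed via npm and resolved from node_modules
const express = require("express");

// Deno (ES modules): dependency loaded directly from a URL and cached locally
import { Application } from "https://deno.land/x/oak/mod.ts";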
At first glance, the Node imports look simpler, but there are a few advantages to the Deno style. By importing code via URL, it’s possible to host modules anywhere on the internet, so Deno packages can be distributed without a centralized registry. There is also no need for a package.json file with a dependency list, because all modules are downloaded, compiled and cached when the application runs.
iJS editorial team: What is the current status of Deno? Can it already be used in production?
Krzysztof Piechowicz: Deno is still under heavy development and isn’t production-ready yet. There is also no official date for the release of the 1.0 version.
iJS editorial team: What’s the next step with Deno? Is it actively being developed? By whom, in which direction?
Krzysztof Piechowicz: Deno is an open-source project and is being developed very actively. The project was started in 2018 by Ryan Dahl and currently has over 150 contributors. Besides the release of the 1.0 version, there is a plan to provide a command-line debugger and a built-in code linter to improve the developer experience. Deno should also serve HTTP more efficiently.
iJS editorial team: What is the core message of your session at iJS?
Krzysztof Piechowicz: The goal of Deno is not to replace Node.js, but to offer an alternative. Some of the differences are quite controversial, and it’s hard to predict whether they will prove to be the right decisions. I recommend that all Node.js programmers keep an eye on this project. I’m not sure if this project will be a success, but it’s a great opportunity to observe how Node.js could have been implemented differently.
iJS editorial team: Thank you very much!
Watch Krzysztof Piechowicz’s session from iJS 2019: Deno – a better Node.js?
The post Node.js is Dead – Long live Deno! appeared first on International JavaScript Conference.
The post back to school—explore the program of iJS appeared first on International JavaScript Conference.
JavaScript is fast, dynamic and futuristic—just like our program at iJS! Our infographic takes you back to school and to your student days, focusing on the learning objectives of iJS Munich’s program and speakers while showing you the hottest topics and latest trends of the JavaScript ecosystem. Each learning objective highlights a different track of iJS and prepares you to stay agile in the dynamic world of JS and take your skills to the next level. Did you hear the bell? Let’s get started with the first lesson!
→ Progressive Web Apps without frameworks #nomigrations #webstandards #noslides
→ CI/CD for Angular with Docker
→ State Management in Angular: from Facades to NgRx and back
→ Progressive React apps
→ Real-world advanced Redux patterns
→ JAMstack FTW – Static Site Generation with Vue
The post back to school—explore the program of iJS appeared first on International JavaScript Conference.
The post 5 simple Rules to implement Microservices Architecture appeared first on International JavaScript Conference.
Microservices is one example. It is an amazing opportunity to reshape how we build server software, but since it implies huge changes and players want to position themselves in the market as “solution providers” who adopt microservices first, they just rebrand their existing solutions without much change. In doing so, they miss the fundamental principles of microservices.
Microservices is about making software approachable — it’s about enabling. So as an industry, we should be doing our best to make microservices accessible to everyone. Microservices doesn’t require a huge infrastructure investment. It doesn’t require you to maintain several technologies just to run your app.
As long as you follow 5 simple rules, you will benefit from microservices regardless of the technologies you are using.
If your architecture demonstrates these capabilities and if you are breaking down the fulfillment of most of your API requests into several independent services, then, yes, you are doing microservices.
Join me on my talk Zero-Configuration Microservices with Node.js and Docker at International JavaScript Conference to learn more about the true properties of microservices and how you can realize such a system with only Node.js and a handy library called cote.
The microservices revolution is upon us. Let’s kick it off and be a part of the change.
The post 5 simple Rules to implement Microservices Architecture appeared first on International JavaScript Conference.