Thursday, August 18, 2016

Real-Time Chat With Modulus and Node.js_part1

In this tutorial, I will show you how to implement a real-time chat application with Node.js, Socket.IO and MongoDB, and then we will deploy this application to Modulus together.

First of all, let me show you the final look of the application that we will have at the end of the article.


Node.js will be the nucleus of the application, with Express as the MVC framework, MongoDB for the database, and Socket.IO for real-time communication. When we've finished, we will deploy the application to Modulus. The MongoDB database will actually live inside Modulus as well.

1. Scenario
  1. John wants to use our application, and opens it in the browser.
  2. On the first page, he selects a nickname to use during chat, and logs in.
  3. In the text area he writes something and presses Enter.
  4. The text is sent to a RESTful service (Express) and written to MongoDB.
  5. After being saved to MongoDB, the text is broadcast to the users currently logged in to the chat app.
As you can see, this is a very simple app, but it covers almost everything for a web application. There is no channel system in this application, but you can fork the source code and implement the channel module for practice.

2. Project Design From Scratch

I will try to explain the small pieces of the project first and combine them at the end. I will start from the back end to the front end. So, let's start with the domain objects (MongoDB models).

2.1. Model

For database abstraction, we will use Mongoose. In this project, we have only one model, called Message. This message model contains only text, createDate, and author. There is no model for the author like User, because we will not fully implement a user registration/login system. There will be a simple nickname-providing page, and this nickname will be saved in a cookie. It will be stored as plain text in the author field of the Message model. You can see an example JSON model below:
    {
        text: "Hi, is there any Full Stack Developer here?",
        author: "john_the_full_stack",
        createDate: "2015.05.15"
    }
In order to create documents like this, you can implement a model by using the Mongoose functions below:
    var mongoose = require('mongoose');

    // Define the schema: author, message text, and an auto-set creation date
    var Message = new mongoose.Schema({
        author: String,
        message: String,
        createDate: {
            type: Date,
            default: Date.now
        }
    });

    // Register the schema under the name 'Message'
    mongoose.model('Message', Message);
Simply import the Mongoose module, define your model with its fields and field attributes in JSON format, and create a model with the name Message. This model can then be required in any file where you want to use it.

Maybe you have a question about why we are storing the message in the database, when we already broadcast this message to the user in the same channel. It's true that you do not have to store chat messages, but I just wanted to explain the database integration layer. Anyway, we will use this model in our project inside the controllers. Controllers?

2.2. Controller

As I said earlier, we will use Express for the MVC part, and the C here stands for Controller. For our project, there will be only two endpoints for messaging: one for loading recent chat messages, and one for handling sent chat messages, storing them in the database and then broadcasting them to the channel.
    .....
    app.get('/chat', function(req, res) {
        res.sendFile(__dirname + '/index.html');
    });

    app.get('/login', function(req, res) {
        res.sendFile(__dirname + '/login.html');
    });

    app.post('/messages', function(req, res, next) {
        var message = req.body.message;
        var author = req.body.author;
        var messageModel = new Message();
        messageModel.author = author;
        messageModel.message = message;
        messageModel.save(function(err, result) {
            if (!err) {
                // Fetch the five most recent messages and broadcast them
                Message.find({}).sort('-createDate').limit(5).exec(function(err, messages) {
                    io.emit("message", messages);
                });
                res.send("Message Sent!");
            } else {
                res.send("Technical error occurred!");
            }
        });
    });

    app.get('/messages', function(req, res, next) {
        Message.find({}).sort('-createDate').limit(5).exec(function(err, messages) {
            res.json(messages);
        });
    });
    .....
The first and second controllers just serve static HTML files for the chat and login pages. The third one handles POST requests to the /messages endpoint for creating new messages. In this controller, the request body is first mapped onto a Message model instance, and then this model is saved to the database using Mongoose's save function.

I will not dive into Mongoose very much; you can have a look at the documentation for further details. You can provide a callback to the save function to check whether there was a problem or not. If the save succeeds, we fetch the last five records sorted in descending order by createDate and broadcast them to the clients in the channel.

OK, we have finished the M and C parts. Let's switch to the View part.

2.3. View

In general, a template engine like Jade, EJS, or Handlebars can be used with Express. However, we have only two simple pages, the chat page and the login page, so I will serve them as static HTML. You can see below how a static HTML page is served.
    app.get('/chat', function(req, res) {
        res.sendFile(__dirname + '/index.html');
    });

    app.get('/login', function(req, res) {
        res.sendFile(__dirname + '/login.html');
    });
These endpoints simply serve index.html and login.html using res.sendFile. Both index.html and login.html are in the same folder as server.js, which is why we prefix the file names with __dirname.

2.4. Front End

On the front-end page, I have used Bootstrap, and there is no need to explain how I managed that. Simply put, I have bound a function to a text box: whenever you press the Enter key or the Send button, the message is sent to the back-end service.

This page also requires the Socket.IO client script to listen to the channel called message. The Socket.IO module is already imported on the back end, and when you use it on the server side, it automatically adds an endpoint for serving the Socket.IO client script; here, however, we use the one served from a CDN: <script src="//cdn.socket.io/socket.io-1.3.5.js"></script>. Whenever a new message arrives on this channel, it is automatically detected and the message list is refreshed with the last five messages.
    <script>
        var socket = io();
        socket.on("message", function (messages) {
            refreshMessages(messages);
        });

        function refreshMessages(messages) {
            $(".media-list").html("");
            $.each(messages.reverse(), function(i, message) {
                $(".media-list").append('<li class="media"><div class="media-body"><div class="media"><div class="media-body">'
                + message.message + '<br/><small class="text-muted">' + message.author + ' | ' + message.createDate + '</small><hr/></div></div></div></li>');
            });
        }

        $(function(){

            if (typeof $.cookie("realtime-chat-nickname") === 'undefined') {
                window.location = "/login";
            } else {
                $.get("/messages", function (messages) {
                    refreshMessages(messages);
                });

                $("#sendMessage").on("click", function() {
                    sendMessage();
                });

                $('#messageText').keyup(function(e){
                    if (e.keyCode == 13) {
                        sendMessage();
                    }
                });
            }

            function sendMessage() {
                var $container = $('.media-list');
                $container[0].scrollTop = $container[0].scrollHeight;
                var message = $("#messageText").val();
                var author = $.cookie("realtime-chat-nickname");
                $.post("/messages", {message: message, author: author}, function(data) {
                    $("#messageText").val("");
                });
                $container.animate({ scrollTop: $container[0].scrollHeight }, "slow");
            }
        });
    </script>
There is one more check in the above code: the cookie part. If you have not chosen a nickname for the chat, the cookie for the nickname is not set, and you will automatically be redirected to the login page.

If it is set, the last five messages are fetched with a simple Ajax call to the /messages endpoint. In the same way, whenever you click the Send button or press the Enter key, the text message is read from the text box, the nickname is read from the cookie, and those values are sent to the server with a POST request. There is no strict check on the nickname here, because I wanted to focus on the real-time part, not on user authentication.
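To make the cookie check concrete, here is a minimal sketch of the lookup the page performs (the tutorial uses the jQuery cookie plugin for this; getNickname is a hypothetical helper written against the "realtime-chat-nickname" cookie name used above):

```javascript
// Parse a raw cookie string and extract the chat nickname, if any.
function getNickname(cookieString) {
  var pairs = cookieString ? cookieString.split('; ') : [];
  for (var i = 0; i < pairs.length; i++) {
    if (pairs[i].indexOf('realtime-chat-nickname=') === 0) {
      return decodeURIComponent(pairs[i].split('=')[1]);
    }
  }
  return undefined; // no nickname set -> the client redirects to /login
}

console.log(getNickname('realtime-chat-nickname=john_the_full_stack')); // "john_the_full_stack"
console.log(getNickname('other=1')); // undefined
```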

As you can see, the overall structure of the project is very simple. Let's move on to the deployment part. As I said earlier, we will use Modulus, one of the best PaaS options for deploying, scaling and monitoring your application in the language of your choice.

If you found this post interesting, follow and support us.
Suggest for you:

Complete Node JS Developer Course Building 5 Real World Apps

Node.js Tutorials: The Web Developer Bootcamp

Learn How To Deploy Node.Js App on Google Compute Engine

Learn and Understand NodeJS

Learn Nodejs by Building 12 Projects

Wednesday, August 17, 2016

Top 10 Mistakes Node.js Developers Make_part 2 (end)

3 Executing a callback multiple times

How many times have you saved a file and reloaded your Node web app only to see it crash really fast? The most likely scenario is that you executed the callback twice, meaning you forgot to return after the first time.
Let's create an example to replicate this situation. We will create a simple proxy server with some basic validation. To try it, install the request dependency, run the example, and open (for instance) http://localhost:1337/?url=http://www.google.com/. The source code for our example is the following:
    var request = require('request');
    var http = require('http');
    var url = require('url');
    var PORT = process.env.PORT || 1337;

    var expression = /[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi;
    var isUrl = new RegExp(expression);

    var respond = function(err, params) {
      var res = params.res;
      var body = params.body;
      var proxyUrl = params.proxyUrl;

      res.setHeader('Content-type', 'text/html; charset=utf-8');

      if (err) {
        console.error(err);
        res.end('An error occurred. Please make sure the domain exists.');
      } else {
        res.end(body);
      }
    };

    http.createServer(function(req, res) {
      var queryParams = url.parse(req.url, true).query;
      var proxyUrl = queryParams.url;

      if (!proxyUrl || (!isUrl.test(proxyUrl))) {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.write("Please provide a correct URL param. For ex: ");
        res.end("<a href='http://localhost:1337/?url=http://www.google.com/'>http://localhost:1337/?url=http://www.google.com/</a>");
      } else {
        // ------------------------
        // Proxying happens here
        // TO BE CONTINUED
        // ------------------------
      }
    }).listen(PORT);
The source code above contains almost everything except the proxying itself, because I want you to take a closer look at it:
    request(proxyUrl, function(err, r, body) {
      if (err) {
        respond(err, {
          res: res,
          proxyUrl: proxyUrl
        });
        // note: execution continues past this point!
      }

      respond(null, {
        res: res,
        body: body,
        proxyUrl: proxyUrl
      });
    });
In the callback we have handled the error condition, but forgot to stop the execution flow after calling the respond function. That means that if we enter a domain that doesn't host a website, the respond function will be called twice and we will get the following message in the terminal:
    Error: Can't set headers after they are sent.
        at ServerResponse.OutgoingMessage.setHeader (http.js:691:11)
        at respond (/Users/alexandruvladutu/www/airpair-2/3-multi-callback/proxy-server.js:18:7)

This can be avoided either by using the `return` statement or by wrapping the 'success' callback in the `else` statement:
    request(.., function(..params) {
      if (err) {
        return respond(err, ..);
      }

      respond(..);
    });

    // OR:

    request(.., function(..params) {
      if (err) {
        respond(err, ..);
      } else {
        respond(..);
      }
    });
4 The Christmas tree of callbacks (Callback Hell)

Every time somebody wants to bash Node they come up with the 'callback hell' argument. Some of them see callback nesting as unavoidable, but that is simply untrue. There are a number of solutions out there to keep your code nice and tidy, such as:
  • Using control flow modules (such as async);
  • Promises; and
  • Generators.
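As a quick illustration of the promise option, nested callbacks can be flattened into a chain with a single error handler. This is only a sketch with simulated async work; fetchPage, extractResources and sumSizes are hypothetical helpers, not part of the example that follows:

```javascript
// Hypothetical helpers simulating async work; no network calls involved.
function fetchPage(url) {
  return Promise.resolve('<html><script src="a.js"></script></html>');
}

function extractResources(html) {
  return ['a.js', 'b.css']; // pretend we parsed the HTML
}

function sumSizes(resources) {
  // pretend each resource is 10 bytes
  return Promise.all(resources.map(function () { return 10; }))
    .then(function (sizes) {
      return sizes.reduce(function (a, b) { return a + b; }, 0);
    });
}

// A flat chain instead of a Christmas tree, with one catch for all errors
fetchPage('http://example.com')
  .then(extractResources)
  .then(sumSizes)
  .then(function (total) {
    console.log('total size:', total); // total size: 20
  })
  .catch(console.error);
```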
We are going to create a sample application and then refactor it to use the async module. The app will act as a naive frontend resource analyzer which does the following:
  • Checks how many scripts / stylesheets / images are in the HTML code;
  • Outputs their total number to the terminal;
  • Checks the content-length of each resource; then
  • Puts the total length of the resources to the terminal.
Besides the async module, we will be using the following npm modules:
  • request for getting the page data (body, headers, etc).
  • cheerio as a server-side jQuery (DOM element selector).
  • once to make sure our callback is executed once.
    var URL = process.env.URL;
    var assert = require('assert');
    var url = require('url');
    var request = require('request');
    var cheerio = require('cheerio');
    var once = require('once');
    var isUrl = new RegExp(/[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi);

    assert(isUrl.test(URL), 'must provide a correct URL env variable');

    request({ url: URL, gzip: true }, function(err, res, body) {
      if (err) { throw err; }

      if (res.statusCode !== 200) {
        return console.error('Bad server response', res.statusCode);
      }

      var $ = cheerio.load(body);
      var resources = [];

      $('script').each(function(index, el) {
        var src = $(this).attr('src');
        if (src) { resources.push(src); }
      });

      // .....
      // similar code for stylesheets and images
      // check out the github repo for the full version

      var counter = resources.length;
      var next = once(function(err, result) {
        if (err) { throw err; }

        var size = (result.size / 1024 / 1024).toFixed(2);

        console.log('There are ~ %s resources with a size of %s Mb.', result.length, size);
      });

      var totalSize = 0;

      resources.forEach(function(relative) {
        var resourceUrl = url.resolve(URL, relative);

        request({ url: resourceUrl, gzip: true }, function(err, res, body) {
          if (err) { return next(err); }

          if (res.statusCode !== 200) {
            return next(new Error(resourceUrl + ' responded with a bad code ' + res.statusCode));
          }

          if (res.headers['content-length']) {
            totalSize += parseInt(res.headers['content-length'], 10);
          } else {
            totalSize += Buffer.byteLength(body, 'utf8');
          }

          if (!--counter) {
            next(null, {
              length: resources.length,
              size: totalSize
            });
          }
        });
      });
    });
This doesn't look that horrible, but you can go even deeper with nested callbacks. From our previous example you can recognize the Christmas tree at the bottom, where you see indentation like this:
            if (!--counter) {
              next(null, {
                length: resources.length,
                size: totalSize
              });
            }
          });
        });
      });
To run the app type the following into the command line:
    $ URL=https://bbc.co.uk/ node before.js
    # Sample output:
    # There are ~ 24 resources with a size of 0.09 Mb.
After a bit of refactoring using async our code might look like the following:
    var async = require('async');

    var rootHtml = '';
    var resources = [];
    var totalSize = 0;

    var handleBadResponse = function(err, url, statusCode, cb) {
      if (!err && (statusCode !== 200)) {
        err = new Error(url + ' responded with a bad code ' + statusCode);
      }

      if (err) {
        cb(err);
        return true;
      }

      return false;
    };

    async.series([
      function getRootHtml(cb) {
        request({ url: URL, gzip: true }, function(err, res, body) {
          if (handleBadResponse(err, URL, res && res.statusCode, cb)) { return; }

          rootHtml = body;

          cb();
        });
      },
      function aggregateResources(cb) {
        var $ = cheerio.load(rootHtml);

        $('script').each(function(index, el) {
          var src = $(this).attr('src');
          if (src) { resources.push(src); }
        });

        // similar code for stylesheets && images; check the full source for more

        setImmediate(cb);
      },
      function calculateSize(cb) {
        async.each(resources, function(relativeUrl, next) {
          var resourceUrl = url.resolve(URL, relativeUrl);

          request({ url: resourceUrl, gzip: true }, function(err, res, body) {
            if (handleBadResponse(err, resourceUrl, res && res.statusCode, next)) { return; }

            if (res.headers['content-length']) {
              totalSize += parseInt(res.headers['content-length'], 10);
            } else {
              totalSize += Buffer.byteLength(body, 'utf8');
            }

            next();
          });
        }, cb);
      }
    ], function(err) {
      if (err) { throw err; }

      var size = (totalSize / 1024 / 1024).toFixed(2);
      console.log('There are ~ %s resources with a size of %s Mb.', resources.length, size);
    });
5 Creating big monolithic applications

Developers new to Node bring mindsets from other languages, and they tend to do things differently: putting everything into a single file, not breaking code into modules of its own, not publishing to NPM, and so on.

Take our previous example for instance. We have pushed everything into a single file, making it hard to test and read the code. But no worries, with a bit of refactoring we can make it much nicer and more modular. This will also help with 'callback hell' in case you were wondering.

If we extract the URL validator, the response handler, the request functionality and the resource processor into their own files our main one will look like so:
    // ...
    var handleBadResponse = require('./lib/bad-response-handler');
    var isValidUrl = require('./lib/url-validator');
    var extractResources = require('./lib/resource-extractor');
    var request = require('./lib/requester');

    // ...
    async.series([
      function getRootHtml(cb) {
        request(URL, function(err, data) {
          if (err) { return cb(err); }

          rootHtml = data.body;

          cb(null, 123);
        });
      },
      function aggregateResources(cb) {
        resources = extractResources(rootHtml);

        setImmediate(cb);
      },
      function calculateSize(cb) {
        async.each(resources, function(relativeUrl, next) {
          var resourceUrl = url.resolve(URL, relativeUrl);

          request(resourceUrl, function(err, data) {
            if (err) { return next(err); }

            if (data.res.headers['content-length']) {
              totalSize += parseInt(data.res.headers['content-length'], 10);
            } else {
              totalSize += Buffer.byteLength(data.body, 'utf8');
            }

            next();
          });
        }, cb);
      }
    ], function(err) {
      if (err) { throw err; }

      var size = (totalSize / 1024 / 1024).toFixed(2);
      console.log('\nThere are ~ %s resources with a size of %s Mb.', resources.length, size);
    });
The request functionality might look like this:
    var handleBadResponse = require('./bad-response-handler');
    var request = require('request');

    module.exports = function getSiteData(url, callback) {
      request({
        url: url,
        gzip: true,
        // lying a bit
        headers: {
          'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36'
        }
      }, function(err, res, body) {
        if (handleBadResponse(err, url, res && res.statusCode, callback)) { return; }

        callback(null, {
          body: body,
          res: res
        });
      });
    };
Note: you can check the full example in the github repo.

Now things are simpler, way easier to read and we can start writing tests for our app. We can go on with the refactoring and extract the response length functionality into its own module as well.

The good thing about Node is that it encourages you to write tiny modules and publish them to NPM. You will find modules for all kinds of things, such as generating a random number within an interval. You should strive for modularity in your Node applications and keep things as simple as possible.

An interesting article on how to write modules is the one from substack.

6 Poor logging

Many Node tutorials show you a small example that contains console.log here and there, so some developers are left with the impression that that's how they should implement logging in their application.
You should use something better than console.log when coding Node apps, and here's why:
  • No need to use util.inspect for large, complex objects;
  • Built-in serializers for things like errors, request and response objects;
  • Support for multiple streams to control where the logs go;
  • Automatic inclusion of hostname, process id and application name;
  • Support for multiple levels of logging (debug, info, error, fatal, etc.);
  • Advanced functionality such as log file rotation.
You can get all of those for free when using a production-ready logging module such as bunyan. On top of that you also get a handy CLI tool for development if you install the module globally.

Let's take a look at one of their examples on how to use it:
    var http = require('http');
    var bunyan = require('bunyan');

    var log = bunyan.createLogger({
      name: 'myserver',
      serializers: {
        req: bunyan.stdSerializers.req,
        res: bunyan.stdSerializers.res
      }
    });

    var server = http.createServer(function (req, res) {
      log.info({ req: req }, 'start request');  // <-- this is the guy we're testing
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello World\n');
      log.info({ res: res }, 'done response');  // <-- this is the guy we're testing
    });

    server.listen(1337, '127.0.0.1', function() {
      log.info('server listening');

      var options = {
        port: 1337,
        hostname: '127.0.0.1',
        path: '/path?q=1#anchor',
        headers: {
          'X-Hi': 'Mom'
        }
      };

      var req = http.request(options, function(res) {
        res.resume();
        res.on('end', function() {
          process.exit();
        });
      });

      req.write('hi from the client');
      req.end();
    });
If you run the example in the terminal you will see something like the following:
    $ node server.js
    {"name":"myserver","hostname":"MBP.local","pid":14304,"level":30,"msg":"server listening","time":"2014-11-16T11:30:13.263Z","v":0}
    {"name":"myserver","hostname":"MBP.local","pid":14304,"level":30,"req":{"method":"GET","url":"/path?q=1#anchor","headers":{"x-hi":"Mom","host":"127.0.0.1:1337","connection":"keep-alive"},"remoteAddress":"127.0.0.1","remotePort":61580},"msg":"start request","time":"2014-11-16T11:30:13.271Z","v":0}
    {"name":"myserver","hostname":"MBP.local","pid":14304,"level":30,"res":{"statusCode":200,"header":"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nDate: Sun, 16 Nov 2014 11:30:13 GMT\r\nConnection: keep-alive\r\nTransfer-Encoding: chunked\r\n\r\n"},"msg":"done response","time":"2014-11-16T11:30:13.273Z","v":0}
But in development it's better to use the CLI tool like in the screenshot:


As you can see, bunyan gives you a lot of useful information about the current process, which is vital in production. Another handy feature is that you can pipe the logs into a stream (or multiple streams).

7 No tests

We should never consider an application 'done' if we haven't written any tests for it. There's really no excuse, considering how many tools exist for the job:
  • Testing frameworks: mocha, jasmine, tape and many others
  • Assertion modules: chai, should.js
  • Modules for mocks, spies, stubs or fake timers, such as sinon
  • Code coverage tools: istanbul, blanket
The convention for NPM modules is that you specify a test command in your package.json, for example:
    {
      "name": "express",
      ...
      "scripts": {
        "test": "mocha --require test/support/env --reporter spec --bail --check-leaks test/ test/acceptance/",
        ...
      }
    }
Then the tests can be run with npm test, regardless of the testing framework used.

Another thing you should consider for your projects is to enforce having all your tests pass before committing. Fortunately it is as simple as doing npm i pre-commit --save-dev.

You can also decide to enforce a certain code coverage level and deny commits that don't adhere to that level. The pre-commit module simply runs npm test automatically for you as a pre-commit hook.
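To make this concrete, the pre-commit module reads the scripts to run from a pre-commit key in package.json. A minimal sketch (the mocha test command is just an example, not a requirement of the module):

```json
{
  "scripts": {
    "test": "mocha test/"
  },
  "pre-commit": ["test"]
}
```

With this in place, every git commit runs npm test first and the commit is aborted if the tests fail.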

In case you are not sure how to get started with writing tests you can either find tutorials online or browse popular Node projects on Github, such as the following:
  • express
  • loopback
  • ghost
  • hapi
  • haraka
8 Not using static analysis tools

Instead of spotting problems in production it's better to catch them right away in development by using static analysis tools.
Tools such as ESLint help solve a huge array of problems, such as:
  • Possible errors, for example: disallow assignment in conditional expressions, disallow the use of debugger.
  • Enforcing best practices, for example: disallow declaring the same variable more than once, disallow use of arguments.callee.
  • Finding potential security issues, such as the use of eval() or unsafe regular expressions.
  • Detecting possible performance problems.
  • Enforcing a consistent style guide.
For a more complete set of rules, check out the ESLint rules documentation page. You should also read the configuration documents if you want to set up ESLint for your project.
In case you were wondering where you can find a sample configuration file for ESLint, the Esprima project has one.
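As a starting point, a minimal .eslintrc covering some of the checks mentioned above might look like the following; the rule names come from ESLint's documented rule set, but the exact selection here is just an example:

```json
{
  "env": {
    "node": true
  },
  "rules": {
    "no-cond-assign": 2,
    "no-debugger": 2,
    "no-redeclare": 2,
    "no-caller": 2,
    "no-eval": 2
  }
}
```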

There are other similar linting tools out there such as JSLint or JSHint.

In case you want to parse the AST (abstract syntax tree) and create a static analysis tool yourself, consider Esprima or Acorn.

9 Zero monitoring or profiling

Not monitoring or profiling a Node application leaves you in the dark. You are not aware of vital things such as event loop delay, CPU load, system load or memory usage.

There are proprietary services that take care of these things for you, such as the ones from New Relic, StrongLoop, Concurix or AppDynamics.

You can also achieve that by yourself with open source modules such as look or by gluing different NPM modules. Whatever you choose make sure you are always aware of the status of your application at all times, unless you want to receive weird phone calls at night.

10 Debugging with console.log

When something goes bad it's easy to just insert console.log in some places and debug. After you figure out the problem you remove the console.log debugging leftovers and go on.

The problem is that the next developer (or even you) might come along and repeat the process. That's why modules like debug exist. Instead of inserting and deleting console.log, you can replace it with the debug function and just leave it there.

When the next person tries to figure out the problem, they just start the application using the DEBUG environment variable.

This tiny module has its benefits:
  • Unless you start the app using the DEBUG environment variable nothing is displayed to the console.
  • You can selectively debug portions of your code (even with wildcards).
  • The output is beautifully colored into your terminal.
Let's take a look at their official example:
    // app.js
    var debug = require('debug')('http')
      , http = require('http')
      , name = 'My App';

    // fake app

    debug('booting %s', name);

    http.createServer(function(req, res){
      debug(req.method + ' ' + req.url);
      res.end('hello\n');
    }).listen(3000, function(){
      debug('listening');
    });

    // fake worker of some kind

    require('./worker');

    // worker.js
    var debug = require('debug')('worker');

    setInterval(function(){
      debug('doing some work');
    }, 1000);
If we run the example with node app.js nothing happens, but if we include the DEBUG environment variable (for example DEBUG=http,worker node app.js), voila:


Besides your applications, you can also use it for tiny modules published to NPM. Unlike a more complex logger it only does the debugging job and it does it well.
Written by Alexandru Vladutu


Saturday, August 13, 2016

Top 10 Mistakes Node.js Developers Make_part 1


Introduction

Node.js has seen impressive growth in the past years, with big companies such as Walmart or PayPal adopting it. More and more people are picking up Node and publishing modules to NPM at a pace that exceeds that of other languages. However, the Node philosophy can take a bit of getting used to, especially if you have switched from another language.

In this article we will talk about the most common mistakes Node developers make and how to avoid them. You can find the source code for the examples on github.

1 Not using development tools
  • nodemon or supervisor for automatic restart
  • In-browser live reload (reload after static files and/or views change)
Unlike other languages such as PHP or Ruby, Node requires a restart when you make changes to the source code. Another thing that can slow you down while creating web applications is refreshing the browser when the static code changes. While you can do these things manually, there are better solutions out there.

1.1 Automating restarts

Most of us are probably used to saving a file in the editor, hitting [CTRL+C] to stop the application, and then restarting it by pressing the [UP] arrow and [Enter]. However, you can automate this repetitive task and make your development process easier by using existing tools such as:
  • nodemon
  • node-supervisor
  • forever
What these modules do is watch for file changes and restart the server for you. Let's take nodemon, for example. First you install it globally:
  npm i nodemon -g
Then you should simply swap the node command for the nodemon command:
  # node server.js
  $ nodemon server.js
  14 Nov 21:23:23 - [nodemon] v1.2.1
  14 Nov 21:23:23 - [nodemon] to restart at any time, enter `rs`
  14 Nov 21:23:23 - [nodemon] watching: *.*
  14 Nov 21:23:23 - [nodemon] starting `node server.js`
  14 Nov 21:24:14 - [nodemon] restarting due to changes...
  14 Nov 21:24:14 - [nodemon] starting `node server.js`
Among the existing options for nodemon or node-supervisor, probably the most useful is the ability to ignore specific files or folders.

1.2 Automatic browser refresh

Besides reloading the Node application when the source code changes, you can also speed up development for web applications. Instead of manually triggering the page refresh in the browser, we can automate this as well using tools such as livereload.

They work similarly to the ones presented before, because they watch for file changes in certain folders and trigger a browser refresh in this case (instead of a server restart). The refresh is done either by a script injected in the page or by a browser plugin.

Instead of showing you how to use livereload, this time we will create a similar tool ourselves with Node. It will do the following:
  • Watch for file changes in a folder;
  • Send a message to all connected clients using server-sent events; and
  • Trigger the page reload.
First we should install the NPM dependencies needed for the project:
  • express - for creating the sample web application
  • watch - to watch for file changes
  • sendevent - server-sent events, SSE (an alternative would have been websockets)
  • uglify-js - for minifying the client-side JavaScript files
  • ejs - view templates
Next we will create a simple Express server that renders a home view on the front page:
  var express = require('express');
  var app = express();
  var ejs = require('ejs');
  var path = require('path');

  var PORT = process.env.PORT || 1337;

  // view engine setup
  app.engine('html', ejs.renderFile);
  app.set('views', path.join(__dirname, 'views'));
  app.set('view engine', 'html');

  // serve an empty page that just loads the browserify bundle
  app.get('/', function(req, res) {
    res.render('home');
  });

  app.listen(PORT);
  console.log('server started on port %s', PORT);
Since we are using Express, we will also create the browser-refresh tool as an Express middleware. The middleware will attach the SSE endpoint and create a view helper for the client script. The arguments for the middleware function will be the Express app and the folder to be monitored. Knowing that, we can already add the following lines before the view setup (inside server.js):
  var reloadify = require('./lib/reloadify');
  reloadify(app, __dirname + '/views');
We are watching the /views folder for changes. And now for the middleware:
  var sendevent = require('sendevent');
  var watch = require('watch');
  var uglify = require('uglify-js');
  var fs = require('fs');
  var ENV = process.env.NODE_ENV || 'development';

  // create && minify static JS code to be included in the page
  var polyfill = fs.readFileSync(__dirname + '/assets/eventsource-polyfill.js', 'utf8');
  var clientScript = fs.readFileSync(__dirname + '/assets/client-script.js', 'utf8');
  var script = uglify.minify(polyfill + clientScript, { fromString: true }).code;

  function reloadify(app, dir) {
    if (ENV !== 'development') {
      app.locals.watchScript = '';
      return;
    }

    // create a middleware that handles requests to `/eventstream`
    var events = sendevent('/eventstream');

    app.use(events);

    watch.watchTree(dir, function (f, curr, prev) {
      events.broadcast({ msg: 'reload' });
    });

    // assign the script to a local var so it's accessible in the view
    app.locals.watchScript = '<script>' + script + '</script>';
  }

  module.exports = reloadify;
As you might have noticed, if the environment isn't set to 'development', the middleware won't do anything. This means we won't have to remove it for production.
The frontend JS file is pretty simple: it just listens for SSE messages and reloads the page when needed:
  (function() {

    function subscribe(url, callback) {
      var source = new window.EventSource(url);

      source.onmessage = function(e) {
        callback(e.data);
      };

      source.onerror = function(e) {
        if (source.readyState == window.EventSource.CLOSED) return;

        console.log('sse error', e);
      };

      return source.close.bind(source);
    }

    subscribe('/eventstream', function(data) {
      if (data && /reload/.test(data)) {
        window.location.reload();
      }
    });

  }());
The eventsource-polyfill.js is Remy Sharp's polyfill for SSE. Last but not least, the only thing left to do is to include the generated frontend script in the page (/views/home.html) using the view helper:
  ...
  <%- watchScript %>
  ...
Now, every time you make a change to the home.html page, the browser will automatically reload the home page of the server for you (http://localhost:1337/).

2 Blocking the event loop

Since Node.js runs on a single thread, anything that blocks the event loop blocks everything. That means that if you have a web server with a thousand connected clients and you happen to block the event loop, every client will just... wait.

Here are some examples of how you might do that (maybe unknowingly):
  • Parsing a big JSON payload with the JSON.parse function;
  • Trying to do syntax highlighting on a big file on the backend (with something like Ace or highlight.js); or
  • Parsing a big output in one go (such as the output of a git log command from a child process).
The thing is that you may do these things unknowingly, because parsing a 15 MB output doesn't come up that often, right? But it's enough for an attacker to catch you off guard, and your entire server will be DoS-ed.
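One way to avoid blocking when heavy in-process work is unavoidable is to split it into chunks and yield back to the event loop between them with setImmediate. A minimal sketch of that idea follows; the chunk size and the summing task are illustrative, not from the article:

```javascript
// Sketch: break a long-running loop into chunks with setImmediate
// so the event loop can service other callbacks in between.
function sumRange(n, done) {
  var sum = 0;
  var i = 0;
  var CHUNK = 100000; // illustrative chunk size

  function next() {
    var end = Math.min(i + CHUNK, n);
    for (; i < end; i++) {
      sum += i;
    }
    if (i < n) {
      setImmediate(next); // yield, then continue with the rest of the work
    } else {
      done(sum);
    }
  }

  next();
}

sumRange(1000000, function (total) {
  console.log('sum of 0..999999 is %d', total);
});
```

Other connected clients get served between chunks instead of waiting for the whole loop to finish.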

Luckily, you can monitor the event loop delay to detect anomalies. This can be achieved either via proprietary solutions such as StrongOps or by using open-source modules such as blocked.

The idea behind these tools is to repeatedly measure how long a timer set for a known interval actually takes to fire: record the time before scheduling the timer, and when it fires, subtract both that start time and the intended interval from the current time. Whatever remains is the event-loop delay.

Below there's an example of how to achieve that. It does the following:
  • Retrieves the high-resolution time difference between the current time and a time passed as a param;
  • Determines the delay of the event loop at regular intervals; and
  • Displays the delay in green, or in red if it exceeds the threshold.
To see it in action, the delay is sampled every 300 milliseconds while a heavy computation runs every 2 seconds.
The source code for the example is the following:
  var chalk = require('chalk'); // npm install chalk

  var getHrDiffTime = function(time) {
    // ts = [seconds, nanoseconds]
    var ts = process.hrtime(time);
    // convert seconds and nanoseconds to milliseconds
    return (ts[0] * 1000) + (ts[1] / 1000000);
  };

  var outputDelay = function(interval, maxDelay) {
    maxDelay = maxDelay || 100;

    var before = process.hrtime();

    setTimeout(function() {
      var delay = getHrDiffTime(before) - interval;

      if (delay < maxDelay) {
        console.log('delay is %s', chalk.green(delay));
      } else {
        console.log('delay is %s', chalk.red(delay));
      }

      outputDelay(interval, maxDelay);
    }, interval);
  };

  outputDelay(300);

  // heavy stuff happening every 2 seconds here
  setInterval(function compute() {
    var sum = 0;

    for (var i = 0; i <= 999999999; i++) {
      sum += i * 2 - (i + 1);
    }
  }, 2000);
You must install the chalk module (npm i chalk) before running it. After running the example, you should see the following output in the terminal:

As said before, the existing open-source modules do something similar under the hood, so use them with confidence. If you couple this technique with profiling, you can determine exactly which part of your code caused the delay.
Written by Alexandru Vladutu


Friday, August 12, 2016

Building a REST API With AWS SimpleDB and Node.js_part 2 (end)

Building the Server

To get started, let's install Express, a Node.js HTTP server framework:
  npm install express --save
Express manages most of the minutiae of setting up a server, but it doesn't include any facility for handling the HTTP request body, so we'll need to install another module, body-parser, to enable us to read the request body.
  npm install body-parser --save
Body-parser has a few different options for parsing the body of the HTTP request. We'll use the json() method for readability; switching to another format is just a matter of swapping out the method on the bodyParser object. We only need body parsing on the create and update operations, so we can include it in just those routes.

Create
Since each SimpleDB itemName needs to be unique, we can auto-generate a new itemName for each newly created item. We’re going to use the cuid module, which is a lightweight way to generate unique identifiers.
  1. npm install cuid --save
SimpleDB expects attributes to be in the attribute name/value pair format:
  [
      { "Name" : "attribute1", "Value" : "value1" },
      { "Name" : "attribute1", "Value" : "value2" },
      { "Name" : "attribute2", "Value" : "value3" },
      { "Name" : "attribute3", "Value" : "value4" }
  ]
Your server could certainly just accept and pass values in this format directly to SimpleDB, but it is counter-intuitive to how data is often structured, and it's a difficult format to work with. We'll use a more intuitive structure: an object mapping each attribute to an array of values:
  {
      "attribute1"    : ["value1","value2"],
      "attribute2"    : ["value3","value4"]
  }
Here is a basic Express-based server with the create operation:
  var
    aws         = require('aws-sdk'),
    bodyParser  = require('body-parser'),
    cuid        = require('cuid'),
    express     = require('express'),
    sdbDomain   = 'sdb-rest-tut',
    app         = express(),
    simpledb;

  aws.config.loadFromPath(process.env['HOME'] + '/aws.credentials.json');

  simpledb = new aws.SimpleDB({
    region    : 'US-East',
    endpoint  : 'https://sdb.amazonaws.com'
  });

  //create
  app.post(
    '/inventory',
    bodyParser.json(),
    function(req,res,next) {
      var
        sdbAttributes   = [],
        newItemName     = cuid();
      //start with:
      /*
        { attributeN : ['value1','value2',..'valueN'] }
      */
      Object.keys(req.body).forEach(function(anAttributeName) {
        req.body[anAttributeName].forEach(function(aValue) {
          sdbAttributes.push({
            Name  : anAttributeName,
            Value : aValue
          });
        });
      });
      //end up with:
      /*
        [
          { Name : 'attributeN', Value : 'value1' },
          { Name : 'attributeN', Value : 'value2' },
          ...
          { Name : 'attributeN', Value : 'valueN' }
        ]
      */
      simpledb.putAttributes({
        DomainName    : sdbDomain,
        ItemName      : newItemName,
        Attributes    : sdbAttributes
      }, function(err,awsResp) {
        if (err) {
          next(err);  //server error to user
        } else {
          res.send({
            itemName  : newItemName
          });
        }
      });
    }
  );

  app.listen(3000, function () {
    console.log('SimpleDB-powered REST server started.');
  });
Let's start up the server and try it out. A great way to interact with a REST server is the cURL tool, which allows you to make an HTTP request with any verb right from the command line. To try out creating an item with our REST server, we'll need to use a few extra options:

  curl -H "Content-Type: application/json" -X POST -d '{"pets" : ["dog","cat"], "cars" : ["saab"]}' http://localhost:3000/inventory


After running the command, you'll see a JSON response with your newly created itemName or ID. If you switch over to SdbNavigator, you should see the new data when you query all the items.

Read
Now let’s build a basic function to read an item from SimpleDB. For this, we don’t need to perform a query since we’ll be getting the itemName or ID from the path of the request. We can perform a getAttributes request with that itemName or ID.

If we stopped here, we would have a functional but not very friendly form of our data. Let’s transform the Name/Value array into the same form we’re using to accept data (attribute : array of values). To accomplish this, we will need to go through each name/value pair and add it to a new array for each unique name. 

Finally, let’s add the itemName and return the results. 
  //Read
  app.get('/inventory/:itemID', function(req,res,next) {
    simpledb.getAttributes({
      DomainName    : sdbDomain,
      ItemName      : req.params.itemID   //this gets the value from :itemID in the path
    }, function(err,awsResp) {
      var
        attributes = {};
      if (err) {
        next(err);  //server error to user
      } else {
        awsResp.Attributes.forEach(function(aPair) {
          //if this is the first time we're seeing aPair.Name, add it to the attributes object as an array
          if (!attributes[aPair.Name]) {
            attributes[aPair.Name] = [];
          }
          //push the value into the correct array
          attributes[aPair.Name].push(aPair.Value);
        });
        res.send({
          itemName    : req.params.itemID,
          inventory   : attributes
        });
      }
    });
  });
To test this, we need to use curl again. Try replacing [cuid] with the itemName or ID returned from our example of creating an item earlier in this tutorial.
  curl -D- http://localhost:3000/inventory/[cuid]
Notice that we're using the -D- option. This dumps the HTTP headers so we can see the response code.

Another aspect of REST is using your response codes meaningfully. In the current example, if you supply a non-existent ID to curl, the above server will crash because it tries to forEach a non-existent array. We need to account for this and return a meaningful HTTP response code indicating that the item was not found.

To prevent the error, we should test for the existence of the variable awsResp.Attributes. If it doesn’t exist, let’s set the status code to 404 and end the http request. If it exists, then we can serve the response with attributes. 
  app.get('/inventory/:itemID', function(req,res,next) {
    simpledb.getAttributes({
      DomainName    : sdbDomain,
      ItemName      : req.params.itemID
    }, function(err,awsResp) {
      var
        attributes = {};

      if (err) {
        next(err);
      } else {
        if (!awsResp.Attributes) {
          //set the response status to 404 because we didn't find any attributes, then end it
          res.status(404).end();
        } else {
          awsResp.Attributes.forEach(function(aPair) {
            if (!attributes[aPair.Name]) {
              attributes[aPair.Name] = [];
            }

            attributes[aPair.Name].push(aPair.Value);
          });
          res.send({
            itemName    : req.params.itemID,
            inventory   : attributes
          });
        }
      }
    });
  });
Try it out with the new code and a non-existent ID and you'll see that the server returns a 404. 

Now that we know how to use status to set the response code, we should also update how we respond to a POST/create. While the 200 response is technically correct, as it means 'OK', a more insightful response code is 201, which indicates 'Created'. To make this change, we'll add the status method before sending.
  res
    .status(201)
    .send({
      itemName  : newItemName
    });
Update
Update is usually the most difficult operation for any system, and this REST server is no exception. 

The nature of SimpleDB makes this operation a little more challenging as well. In the case of a REST server, an update is where you replace the entire piece of stored data; SimpleDB, on the other hand, represents individual attribute/value pairs under an itemName.

To allow for an update to represent a single piece of data rather than a collection of name/value pairs, we need to define a schema for the purposes of our code (even though SimpleDB doesn’t need one). Don’t worry if this is unclear right now—keep reading and I’ll illustrate the requirement.

Compared to many other database systems, our schema will be very simple: just a defined array of attributes. For our example, we have four fields we are concerned with: pets, cars, furniture, and phones:
  schema      = ['pets','cars','furniture','phones'],
With SimpleDB you can't store an empty attribute/value pair, nor does SimpleDB have any concept of null values, so we'll assume that if SimpleDB doesn't return a value, it doesn't exist. Similarly, if we try to update a SimpleDB item with an empty attribute/value pair, it will ignore that data. Take, for example, this data:
  {
    "itemName": "cil89uvnm00011ma2fykmy79c",
    "inventory": {
      "cars": [],
      "pets": [
        "cat",
        "dog"
      ]
    }
  }
Logically, we know that cars, being an empty array, should have no values, and pets should have two values, but what about phones and furniture? What do you do with those? Here is how we translate this update request to work with SimpleDB:
  • Put an attribute pets with the value cat.
  • Put an attribute pets with the value dog.
  • Delete all attributes for cars.
  • Delete all attributes for phones.
  • Delete all attributes for furniture.
Without some form of schema that at least defines the attributes, we wouldn’t know that phones and furniture needed to be deleted. Luckily, we can consolidate this update operation into two SimpleDB requests instead of five: one to put the attributes, and one to delete the attributes. This is a good time to pull out the code from the post/create function that transforms the attribute/array of values object into the attribute/value pair array.
  function attributeObjectToAttributeValuePairs(attrObj, replace) {
    var
      sdbAttributes = [];
    Object.keys(attrObj).forEach(function(anAttributeName) {
      attrObj[anAttributeName].forEach(function(aValue) {
        sdbAttributes.push({
          Name    : anAttributeName,
          Value   : aValue,
          Replace : replace  //if true, SimpleDB will overwrite rather than append more values to an attribute
        });
      });
    });
    return sdbAttributes;
  }
We're going to make an important alteration to the create function as well: we'll add a new attribute/value pair to every item. This attribute will not be added to the schema and is effectively read-only.

We will add an attribute called created and set its value to 1. With SimpleDB, there is limited ability to check whether an item exists prior to adding attributes and values. On every putAttributes request you can check for the value and existence of a single attribute; in our case, we'll use created and check for a value of 1. While this may seem like a strange workaround, it provides a very important safeguard: it prevents the update operation from creating new items with an arbitrary ID.
  newAttributes.push({
    Name    : 'created',
    Value   : '1'
  });
Since we’ll be doing a couple of asynchronous HTTP requests, let’s install the async module to ease the handling of those callbacks.
  npm install async --save
Remember, since SimpleDB is distributed, there is no reason to sequentially put our attributes and then delete; we'll use async.parallel to run the two operations and get a callback when both have completed. The responses from AWS for putAttributes and deleteAttributes do not provide important information, so we will just send an empty response with status code 200 if there are no errors.
  app.put(
    '/inventory/:itemID',
    bodyParser.json(),
    function(req,res,next) {
      var
        updateValues  = {},
        deleteValues  = [];

      schema.forEach(function(anAttribute) {
        if ((!req.body[anAttribute]) || (req.body[anAttribute].length === 0)) {
          deleteValues.push({ Name : anAttribute });
        } else {
          updateValues[anAttribute] = req.body[anAttribute];
        }
      });

      async.parallel([
          function(cb) {
            //update anything that is present
            simpledb.putAttributes({
                DomainName    : sdbDomain,
                ItemName      : req.params.itemID,
                Attributes    : attributeObjectToAttributeValuePairs(updateValues,true),
                Expected      : {
                  Name          : 'created',
                  Value         : '1',
                  Exists        : true
                }
              },
              cb
            );
          },
          function(cb) {
            //delete any attributes that are not present
            simpledb.deleteAttributes({
                DomainName    : sdbDomain,
                ItemName      : req.params.itemID,
                Attributes    : deleteValues
              },
              cb
            );
          }
        ],
        function(err) {
          if (err) {
            next(err);
          } else {
            res.status(200).end();
          }
        }
      );
    }
  );
To take this for a spin, let's update a previously created entry. This time, we will make the inventory only include a "dog", removing all other items. Again, with cURL, run the command, substituting [cuid] with one of your item IDs.
  curl -H "Content-Type: application/json" -X PUT -d '{"pets" : ["dog"] }' http://localhost:3000/inventory/[cuid]
Delete
SimpleDB has no concept of item deletion, but it can delete attributes, as mentioned above. To delete an item, we'll need to delete all of its attributes, and the 'item' will cease to be.

Since we've defined a list of attributes in our schema, we'll use the deleteAttributes call to remove all of those attributes as well as the created attribute. As per our plan, this operation will live at the same path as update, but use the DELETE verb.
  app.delete(
    '/inventory/:itemID',
    function(req,res,next) {
      var
        attributesToDelete;

      attributesToDelete = schema.map(function(anAttribute) {
        return { Name : anAttribute };
      });

      attributesToDelete.push({ Name : 'created' });

      simpledb.deleteAttributes({
          DomainName    : sdbDomain,
          ItemName      : req.params.itemID,
          Attributes    : attributesToDelete
        },
        function(err) {
          if (err) {
            next(err);
          } else {
            res.status(200).end();
          }
        }
      );
    }
  );
List
Rounding out our REST verbs is list. To achieve the list operation, we’re going to use the select command and the SQL-like query language. Our list function will be barebones, but will serve as a good basis for more complex retrieval later on. We’re going to make a very simple query:
  select * from `sdb-rest-tut` limit 100
As with the get/read operation, the response from SimpleDB isn't very useful as-is, since it is focused on the attribute/value pairs. To avoid repeating ourselves, we'll refactor that part of the get/read operation into a separate function and use it here. While we are at it, we'll also filter out the created attribute (which would otherwise show up in the responses).
  function attributeValuePairsToAttributeObject(pairs) {
    var
      attributes = {};
    pairs
      .filter(function(aPair) {
        return aPair.Name !== 'created';
      })
      .forEach(function(aPair) {
        if (!attributes[aPair.Name]) {
          attributes[aPair.Name] = [];
        }
        attributes[aPair.Name].push(aPair.Value);
      });
    return attributes;
  }
With a select operation, SimpleDB returns the values in the Items array. Each item is represented by an object that contains the itemName (as simply Name) and the attribute/value pairs. 

To simplify this response, let's return everything in a single object. First, we'll convert the attribute/value pairs into an attribute/value-array object as we did in the read/get operation, and then we can add the itemName as the property id.
  app.get(
    '/inventory',
    function(req,res,next) {
      simpledb.select({
        SelectExpression  : 'select * from `sdb-rest-tut` limit 100'
      },
      function(err,awsResp) {
        var
          items = [];
        if (err) {
          next(err);
        } else {
          items = awsResp.Items.map(function(anAwsItem) {
            var
              anItem;

            anItem = attributeValuePairsToAttributeObject(anAwsItem.Attributes);

            anItem.id = anAwsItem.Name;

            return anItem;
          });
          res.send(items);
        }
      });
    }
  );
To see our results, we can use curl:
  curl -D- -X GET http://localhost:3000/inventory
Validation
Validation is a whole subject of its own, but the code we've already written gives us the start of a simple validation system.

For now, all we want to ensure is that a user can't submit anything that isn't in the schema. Looking back at the code written for update/put, forEach-ing over the schema already prevents any unauthorized attributes from being added, so we just need to apply something similar to our create/post operation. In this case, we will filter the attribute/value pairs, eliminating any non-schema attributes.
  newAttributes = newAttributes.filter(function(anAttribute) {
    return schema.indexOf(anAttribute.Name) !== -1;
  });
In your production code, you will likely want a more robust validation system. I would suggest integrating a JSON schema validator like ajv and building a middleware that sits between bodyParser and your route function on create and update operations.
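As a stopgap before wiring in a full JSON schema validator, such a middleware could be built on the same schema array we already have. This is a sketch; the validateAgainstSchema name and the 400 response shape are choices of this example, not from the article:

```javascript
// Sketch: Express-style middleware that rejects any attribute not in the schema.
// It would sit between bodyParser and the route handler on create/update routes.
var schema = ['pets', 'cars', 'furniture', 'phones']; // from earlier in the article

function validateAgainstSchema(req, res, next) {
  // collect any attribute names the schema doesn't know about
  var unknown = Object.keys(req.body || {}).filter(function (anAttribute) {
    return schema.indexOf(anAttribute) === -1;
  });

  if (unknown.length > 0) {
    // reject the request instead of silently dropping the attributes
    res.status(400).send({ error: 'unknown attributes: ' + unknown.join(', ') });
  } else {
    next();
  }
}

// usage: app.post('/inventory', bodyParser.json(), validateAgainstSchema, createHandler);
```

A real validator such as ajv would also let you check value types and array shapes, but the wiring into the route chain stays the same.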

Next Steps

With the code outlined in this article, you have all the operations needed to store, read and modify data, but this is only the start of your journey. In most cases, you’ll need to start thinking about the following topics:
  • Authentication
  • Pagination
  • Complex list/query operations
  • Additional output formats (xml, csv, etc.)
This basis for a REST server powered by SimpleDB allows you to add middleware and additional logic to build a backbone for your application.
Written by Kyle Davis
