Thursday, August 11, 2016

Building a REST API With AWS SimpleDB and Node.js_part1

SimpleDB is a remote database offered by Amazon Web Services (AWS). The world of data stores is usually divided into SQL and NoSQL, based on the use (or non-use) of the SQL language. NoSQL data stores are usually based on a simpler key/value setup. SimpleDB straddles this line: it is a key/value store, but it can also use a variant of SQL for retrieval. Most SQL databases require a schema that lays out the rows and columns of the data, but SimpleDB is a schema-less database, making for a very flexible data store.

In the SimpleDB database model, you have items, attributes, and values. Each row in the database is an item and is identified by a unique, assignable item name. Each item can have up to 256 attribute/value pairs. An unexpected aspect of SimpleDB is that the same attribute can appear in more than one pair on a single item. I think the best way to picture SimpleDB is as a spreadsheet, but instead of each column/row intersection holding a single value, it holds an array of values.


This chart represents two items stored in a SimpleDB domain. The term domain is analogous to a “table” in other databases.

The first column is the item name. This is the only column limited to a single value, and you can think of it as a unique index column.

The other four columns (pets, cars, furniture, and phones) represent attributes that are currently in this domain—you aren’t limited to this, so every item can have an entirely unique set of attributes. In this data, the attribute pets on the item personInventory1 has three pairs; expressed in JSON, it’ll look something like this:
  { "Name" : "pets", "Value" : "dog" },
  { "Name" : "pets", "Value" : "cat" },
  { "Name" : "pets", "Value" : "fish" }
On the other hand, the item personInventory2 has only one pair:
  { "Name" : "pets", "Value" : "cat" }
While you don’t have to supply the same attributes for each item, you do need to supply at least one pair, which means you cannot have an ‘empty’ item. Each value can be up to 1 KB in size, so with the 256-pair limit, each item is functionally capped at 256 KB.

SimpleDB is distributed, which has some distinct traits that you need to understand and keep in mind as you design your app. Being distributed means that a whole group of machines will respond to your requests and your data will be replicated throughout those servers. This distribution is completely transparent to your program, but it introduces the possibility of consistency issues: your data isn’t guaranteed to be present on all servers immediately after a write.

Don’t panic: it’s not as bad as it sounds, for a few reasons. With SimpleDB, consistency isn’t promised, but in my experience writes usually propagate to all nodes quickly. Designing around it also isn’t hard: normally you avoid immediately reading back a record you just wrote. Finally, SimpleDB has the option to perform consistent reads, but they are slower and may consume more resources. If your app requires strongly consistent reads on every request, you might want to reconsider using SimpleDB as your data store, but for many applications this can be designed around or safely ignored.
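For reference, the select call we’ll set up later in this article accepts a ConsistentRead flag; here is a minimal sketch (the domain name is a placeholder):
  // ConsistentRead asks SimpleDB for a result that reflects every write
  // acknowledged before the read began; slower, but consistent.
  simpledb.select({
    SelectExpression : 'select * from `mydomain` limit 10',
    ConsistentRead   : true
  }, function(err, resp) {
    // handle err / resp as usual
  });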

On the upside, the distributed nature also affords SimpleDB a few advantages that mesh nicely with the Node.js environment. Since you don’t have a single server responding to your requests, you don’t need to worry about saturating the service, and you can achieve good performance by making many parallel requests to SimpleDB. Parallel and asynchronous requests are something that Node.js can handle easily.
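As a sketch of how that can look with the SDK calls we’ll set up below (the query expressions here are placeholders), you can fire several requests at once and act when they have all returned:
  var
    expressions = [
      'select * from `domainA` limit 100',  // placeholder queries
      'select * from `domainB` limit 100'
    ],
    pending = expressions.length;

  expressions.forEach(function(expression) {
    simpledb.select({ SelectExpression : expression }, function(err, resp) {
      // each callback arrives independently, so count them down
      pending -= 1;
      if (pending === 0) {
        console.log('all responses are in');
      }
    });
  });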

Unlike many AWS services, there isn’t an Amazon-delivered console for management of SimpleDB. Luckily, there is a nice in-browser management console in the form of a Google Chrome plugin, SdbNavigator. In SdbNavigator you can add or delete domains; insert, update, and delete items; modify attributes; and perform queries.

AWS SDK

Now that we’ve gotten to know the SimpleDB service, let’s start writing our REST server. First, we’ll need to install the AWS SDK. This SDK handles not just SimpleDB but all the AWS services, so you may already be including it in your package.json file. To install the SDK, run the following from the command line:
  npm install aws-sdk --save
To use SimpleDB, you’ll also need to get your AWS credentials, which include an Access Key and a Secret Key. SimpleDB is a pay-as-you-go service, but AWS currently includes a generous free allowance for SimpleDB.

Word of warning: As with any pay-as-you-go service, be aware that it’s possible to write code that can rack up big bills, so you’re going to want to keep an eye on your usage and keep your credentials private and safe. 

Once you get the AWS SDK installed and have acquired your credentials, you’ll need to set up SimpleDB in your code. In this example, we'll use AWS credentials stored in a JSON file in your home directory. First, you’ll need to include the SDK module, create an AWS object, and finally set up your SimpleDB interface.
  var
    aws      = require('aws-sdk'),
    simpledb;

  aws.config.loadFromPath(process.env['HOME'] + '/aws.credentials.json');

  // We'll use the Northern Virginia datacenter ('us-east-1'); change the region/endpoint
  // for other datacenters: http://docs.aws.amazon.com/general/latest/gr/rande.html#sdb_region
  simpledb = new aws.SimpleDB({
    region   : 'us-east-1',
    endpoint : 'https://sdb.amazonaws.com'
  });
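The aws.credentials.json file that loadFromPath reads is a small JSON document holding your keys; it looks something like this (substitute your own keys):
  {
    "accessKeyId"     : "YOUR_ACCESS_KEY",
    "secretAccessKey" : "YOUR_SECRET_KEY"
  }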
Notice that we are using a specific endpoint and region. Each datacenter is entirely independent, so if you create a Domain named “mysuperawesomedata” in Northern Virginia, it will not be replicated to nor present in the Sao Paulo datacenter, for example.

The SimpleDB object that you’ve created with new aws.SimpleDB is where all your methods for interacting with SimpleDB live. The AWS SDK for SimpleDB has only a few methods:

Batch Operations
  • batchDeleteAttributes
  • batchPutAttributes
Domain Management & Information
  • createDomain
  • deleteDomain
  • domainMetadata
  • listDomains
Item/Attribute Manipulation
  • deleteAttributes
  • getAttributes
  • putAttributes
Querying
  • select
In this tutorial, we will only be dealing with Item/Attribute Manipulation and Querying; while the other categories are useful, many applications will not have any use for them.
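To get a feel for the item/attribute methods before we build the server, here is a quick sketch of putAttributes (the domain and item names are placeholders; we’ll create a real domain in the next section). Leaving Replace set to false appends a value to an attribute instead of overwriting it, which is how multi-valued attributes are built up:
  simpledb.putAttributes({
    DomainName : 'sdb-rest-tut',
    ItemName   : 'personInventory1',
    Attributes : [
      { Name : 'pets', Value : 'dog', Replace : false },
      { Name : 'pets', Value : 'cat', Replace : false }
    ]
  }, function(err, resp) {
    if (err) { console.error(err); }
  });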

Test Data

Using SdbNavigator, enter your access key and secret key into the tool, select ‘US-East’, and click Connect.

Once you’ve successfully connected, let’s create a domain for testing. Click Add domain.


Then enter the domain name ‘sdb-rest-tut’ and click OK.


Now that you’ve created a domain, let’s enter some test data. Click Add property and add a property named “colors”. As a convention, I usually name properties in plural form to reflect the multi-value nature of SimpleDB.

Finally, we’ll click Add record to create our first SimpleDB item. In the ItemName() column, enter your unique item name. A quirk of SdbNavigator is that, by default, it accepts only a single value for each property, which obscures the fact that a property can contain multiple values. To enter multiple values, click the S along the right edge of the property column.


In the new box, select Array to enter multiple values. In the Value column, enter “red”, and then click Add value and enter “blue”.


Finally, click Update to save the changes to this row.


Now that we’ve entered some test data, let’s make our first SimpleDB request from Node. We’ll just get everything in the Domain, which, at this point, will be just a single row.
  var
    aws      = require('aws-sdk'),
    simpledb;

  aws.config.loadFromPath(process.env['HOME'] + '/aws.credentials.json');

  simpledb = new aws.SimpleDB({
    region   : 'us-east-1',
    endpoint : 'https://sdb.amazonaws.com'
  });

  simpledb.select({
    SelectExpression : 'select * from `sdb-rest-tut` limit 100'
  }, function(err, resp) {
    if (err) {
      console.error(err);
    } else {
      console.log(JSON.stringify(resp, null, ' '));
    }
  });
The response will be logged to the console. Here is the response, annotated for explanation:
  {
    "ResponseMetadata": {
      "RequestId": "...",         // Every request made to SimpleDB has a request ID
      "BoxUsage": "0.0000228616"  // This is how your account is charged; at the time of writing,
                                  // US-East is 14 US cents per box-usage hour, so this request costs
                                  // 0.00032 cents plus transfer costs (if you are outside your free tier)
    },
    "Items": [                    // For a Select, your response will be in the "Items" property
      {
        "Name": "myfirstitem",    // This is the itemName()
        "Attributes": [           // These are the attribute pairs
          {
            "Name": "colors",     // Attribute name
            "Value": "red"        // Value; note that every Value is a string, regardless of the contents
          },
          {
            "Name": "colors",     // The attribute name is repeated, so `colors` has more than one value
            "Value": "blue"
          }
        ]
      }
    ]
  }
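Since every attribute comes back as a flat list of Name/Value pairs, a small helper that groups them back into arrays (mirroring the spreadsheet-of-arrays model from earlier) can be handy. This sketch, with a hypothetical attributesToObject helper, could run inside the select callback above:
  // Collapse repeated attribute pairs into { name : [values...] };
  // the item above becomes { colors: ['red', 'blue'] }
  function attributesToObject(attributes) {
    var out = {};
    attributes.forEach(function(pair) {
      if (!out[pair.Name]) {
        out[pair.Name] = [];
      }
      out[pair.Name].push(pair.Value);
    });
    return out;
  }

  (resp.Items || []).forEach(function(item) {
    console.log(item.Name, attributesToObject(item.Attributes));
  });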
A REST Server

Since we’ll be building a REST server that stores data in SimpleDB, it’s important to understand what a REST server does. REST stands for REpresentational State Transfer. A REST server is really just a server that uses standard HTTP mechanisms as an interface to your data. Often, REST is used for server-to-server communication, but you can also call a REST server from the client through JavaScript libraries such as jQuery or Angular. Generally, however, an end user won’t interact directly with a REST server.

Interestingly, the AWS SDK itself uses a REST-style HTTP API to interact with SimpleDB, so it may seem odd to put a REST server in front of another REST server. You wouldn’t want to use the SimpleDB REST API directly from clients, though, because every request must be authenticated, which would put your AWS credentials at risk. Also, by writing your own server, you can add a layer of abstraction and validation to your data storage that will make the rest of your application much easier to deal with.

In this tutorial we will be building the basic CRUD+L functions: Create, Read, Update, Delete, and List. If you think about it, you can break down most applications into CRUD+L. With REST, you use a limited number of paths and several HTTP methods, or verbs, to create an intuitive API. Most developers are familiar with a few of the HTTP verbs, namely GET and POST, as they are used most often in web applications, but there are several others:


  Create : POST
  Read   : GET
  Update : PUT
  Delete : DELETE
  List   : GET
Notice that Read and List both use the same verb; we will use slightly different paths to differentiate between the two. We’re using POST for Create because creating is not considered idempotent. Idempotent means that multiple identical calls will have the same result, both to the user and in your data, so an update (aka PUT) is considered idempotent.

As our example, we’ll build a personal inventory server—a database to save whatever you own. Here is how the paths will look:
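For illustration, assuming the resource lives under an /inventory path (the exact naming is up to you), the routes would be laid out like this:
  Create : POST   /inventory
  Read   : GET    /inventory/1234
  Update : PUT    /inventory/1234
  Delete : DELETE /inventory/1234
  List   : GET    /inventory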


1234 is a placeholder for the person identifier (ID). Note that Create and List do not have an ID: in the case of Create, the ID will be generated, and with List, we’ll be getting all the names, so we don’t need a specific ID.
Written by Kyle Davis (to be continued)


Monday, August 8, 2016

Getting Started with Docker for the Node.js Developer_part2 (end)

Docker Run: Running our Ubuntu image and accessing the container
We've got our Ubuntu image (our blueprint ☺). Now let's start a new container based on our image and pass a command to it:
  $ docker run ubuntu /bin/echo 'Hello world'
That should output the message "Hello world" in your terminal. Well, it's pretty neat that we just started a container running a completely isolated instance of Ubuntu and executed a command, but that's not really useful.

So now, let's run a new container with Ubuntu and connect to it:
  $ docker run -i -t ubuntu
Note: The run command is huge (check $ docker help run); we'll go more in-depth in the next blog post.
The -t flag assigns a pseudo-TTY, or terminal, inside our new container, and the -i flag lets us make an interactive connection by grabbing the container's standard input (STDIN). If it worked correctly, you should be connected to a terminal inside the container showing something like this:
  root@c9989236296d:/#
Run ls -ls and see that you're running commands in the root of an Ubuntu system. ☺


I think it's nice to stop for a minute and think about what we just did. This is just one of the awesome parts of containers. We just downloaded and started a container running Ubuntu. That happened (depending on your internet connection) in about 5 minutes. Compare that to downloading a VM Ubuntu image and spinning up a new VM, which would probably take around 15–30 minutes. And then creating new VMs, stopping, rebooting: how long would that take? When you add all of that up, the time you can save using containers is enormous!

Docker Commit: Installing node, npm, express and committing the changes

Okay, now that we are inside a running Ubuntu container, let's install the tools we need to run a node application (remember that you only need to execute the part after $ root: ):
  $ root: apt-get update
  $ root: apt-get install nodejs
  $ root: apt-get install nodejs-legacy
  $ root: apt-get install npm
Note: We need to install nodejs-legacy to run the express-generator module.
Running node -v should give you an output:
  $ root: node -v
  v0.10.25
With node installed, we can go ahead and install the express generator module from npm:
  $ root: npm install -g express-generator
Now we have our container with everything we're gonna need installed in it. Let's go ahead and exit from our container:
  $ root: exit
When we exit our container, Docker will stop running it. We can use the $ docker ps command to list containers, so let's do:
  $ docker ps -a

The $ docker ps command by default only displays running containers, so we pass the -a flag so we can see our Ubuntu container we just exited.
Now we can use that container to create a new image that other people can use. We do that by using the commit command:
  $ docker commit -a "Your Name <youremail@email.com>" -m "node and express" CONTAINER_ID node-express:0.1
Note: Change the contents of the -a flag, and replace CONTAINER_ID with the ID of your container shown in the $ docker ps -a output. You can use just the first 3–4 characters of the ID. ☺
The commit command takes a few parameters: the -a flag sets the author, the -m flag sets a message, and finally we reference our container ID and the name of the image we're creating, in this case node-express. We also set a tag for our image by adding :0.1 after the image name. If we run:
  $ docker images
We should see:

Awesome, you just created your first Docker image!
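As an aside, you could get roughly the same image declaratively with a Dockerfile (we cover Dockerfiles in part 1 of this series) instead of committing a hand-modified container; here's a rough, untested sketch:
  FROM ubuntu
  # The same packages we just installed by hand inside the container
  RUN apt-get update && \
      apt-get install -y nodejs nodejs-legacy npm && \
      npm install -g express-generator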

Now let's add another tag to our newly created image. Run:
  $ docker tag node-express:0.1 node-express:latest
It's good practice to tag images with a specific version so people know exactly which image they're running. Adding the latest tag helps so that other people can refer to your image simply by its name (node-express in our case) when downloading it, and Docker will automatically download the version tagged latest. If you run $ docker images again, you can see that there are two rows for our image, but they both have the same ID, which means they're not occupying any extra space on our hard drive. ☺

Now we can start as many containers as we want ready to go with our image! Let's remove our old container:
  $ docker ps -a
  $ docker rm YOUR_CONTAINER_ID
Note: Remember that you can just use the first 3–4 characters of the ID.
And let's run a container based on our new image, connect to it using the -i -t flags, and map port 8080 on the host (the Boot2Docker VM) to port 3000 in the container:
  $ docker run -i -t -p 8080:3000 node-express
Let's use the express-generator we installed to create a new Node.js app:
  $ root: express mynodeapp
Following the instructions in the terminal, move into the app folder, install the dependencies, and start the application:
  $ root: cd mynodeapp
  $ root: npm install
  $ root: npm start
Now we have a Node.js application running inside a container, and exposing port 3000. To see our application we need to find the Boot2Docker VM IP, so open another terminal and run:
  $ boot2docker ip
  192.168.59.103
And remember that we mapped port 8080 on the host to port 3000 in the container. So go to your browser and open:
  192.168.59.103:8080
Ta-ra!



Now, you might start wondering: this is a lot of work just to have a running application! I already have my development environment; I could have done all of that in 30 seconds! Well, that's true, but in this tutorial we're running a super simple application that doesn't have many dependencies. A real project may require a development environment with many more pieces: Python, Redis, MongoDB, Postgres, Node.js or io.js, and so on. There are so many things involved that can make an application that runs on your computer fail on another machine (or in QA/Test/Production); that is the main reason why Docker is so popular. Going back to the tutorial introduction: by providing a fundamental unit (our container/lego brick) that can be executed independent of hardware, and easily run, moved, and shared, Docker absolutely changes the way we can develop, test, and share applications.

Docker Push: Pushing our container image so other people can use it

Okay, now let's share our "great" Ubuntu image with node, npm, and express-generator installed so other people can also use it. Exit our running Node application and the container:
  # Ctrl+C to stop our node app
  $ root: exit
Head over to Docker Hub and create a free account: http://hub.docker.com
After that, go back to your terminal and run:
  $ docker login
Now that we're logged in on the CLI, we can push our image to the Docker Hub. First, let's rename the image to include our username, which works just like adding a tag:
  $ docker tag node-express your_docker_hub_username/node-express
  $ docker rmi node-express
  $ docker push your_docker_hub_username/node-express
Done! Now anyone with Docker can execute:
  $ docker pull your_docker_hub_username/node-express
And have the exact same environment with Ubuntu, Node.js, npm and the express-generator package as the one we previously created.
Written by Heitor Tashiro Sergent


Saturday, August 6, 2016

Getting Started with Docker for the Node.js Developer_part1


Difficulty level: Beginner

Requirements: Mac OS X (This tutorial assumes you're using a Mac, but you can find installation instructions for Windows or Ubuntu and skip ahead to the Setup section)

Docker has just celebrated its 2nd birthday, but it's still a "new" powerful piece of technology. A lot of developer friends I talk to have either heard or read about it but haven't actually used it. It lets you do really cool things, like quickly test your app in development with the exact same environment as in QA/Test/Production, or share that app with other developers for a quick and painless onboarding. A commonly used analogy for Docker is to compare it to real-life shipping containers or lego bricks: it provides a fundamental unit, and with it a way for an application to be portable and moveable, regardless of hardware.
In this tutorial, I'll give a quick overview of what Docker is and why you might want to use it, and how to install it; then we'll work on setting up a Node container and creating an express starter app inside it. This is a long tutorial! The official Docker getting-started guide gets you up and running quicker; what I aim to do here is explain what's happening at each step along the way.
What we’ll cover:
  • Introduction (What's Docker and why use it)
  • Installation
  • Docker Hub and Dockerfiles
  • Docker Pull: Pulling an Ubuntu image
  • Docker Run: Running our Ubuntu image and accessing the container
  • Docker Commit: Installing node, npm, express and committing the changes
  • Docker Push: Pushing our container back so other people can use it
Notes:

I'll be referring to commands executed in your own terminal with:
  $ command
And commands inside a container with:
  $ root: command
Introduction

You've probably heard of Docker by now. Every day there's some front-page HackerNews mention of it, or you see people on Twitter/IRC talking about it. Its popularity has grown enormously in the past couple years, and most cloud providers already support it. If you are curious about it, but still haven't tried it out, this tutorial is for you. ☺
Okay, so what is Docker? Well, Docker can be a reference to a few things:
  • Docker client: this is what's running in our machine. It's the docker binary that we'll be interfacing with whenever we open a terminal and type $ docker pull or $ docker run. It connects to the docker daemon which does all the heavy-lifting, either in the same host (in the case of Linux) or remotely (in our case, interacting with our VirtualBox VM).
  • Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers.
  • Docker Images: docker images are the blueprints for our applications. Keeping with the container/lego brick analogy, they're our blueprints for actually building a real instance of them. An image can be an OS like Ubuntu, but it can also be an Ubuntu with your web application and all its necessary packages installed.
  • Docker Container: containers are created from docker images, and they are the real instances of our containers/lego bricks. They can be started, run, stopped, deleted, and moved.
  • Docker Hub (Registry): a Docker Registry is a hosted registry server that can hold Docker Images. Docker (the company) offers a public Docker Registry called the Docker Hub which we'll use in this tutorial, but they offer the whole system open-source for people to run on their own servers and store images privately.
Now that we've cleared up the different parts of Docker, here are a few reasons why you might want to use it:
  • Simplifying configuration of a development environment
  • Quickly testing your app in an environment similar to QA/Test/Production (less overhead compared to VMs)
  • Sharing your app+environment with other developers, which allows for fast/reliable onboarding.
  • Ability to diff containers (this can be immensely useful in debugging)
Installation

Running a container, and therefore Docker, requires a Linux machine. Since we're using a Mac, that means we'll need a VM. To make the installation process easier, we can use Boot2Docker which installs the Boot2Docker management tool, VirtualBox, and sets up a VM inside it with Docker installed.
Head over to this link to download the latest release of Boot2Docker, and install it (Boot2Docker-1.5.0.pkg at the time this was written):

https://github.com/boot2docker/osx-installer/releases/latest

After the installation is done, go to your Applications folder and open Boot2Docker. That's going to open a new terminal and run a few commands which basically start a VM that already has Docker installed, inside VirtualBox, and then sets a few environment variables so we can access the VM from our terminal. If you don't want to always open Boot2Docker to interact with Docker, just run the following commands:
  # Creates a new VM if you don't have one
  $ boot2docker init

  # Starts the VM
  $ boot2docker start

  # Sets the required environment variables
  $ $(boot2docker shellinit)
Now type in:
  $ docker run hello-world
That's gonna make Docker download the hello-world image from Docker Hub and start a container based on it. Your terminal should give you an output that says:
  Hello from Docker.
  This message shows that your installation appears to be working correctly.
Awesome! Docker is installed. ☺
(If you have any problems, feel free to ping me or you can find Docker's official installation instructions here)

Dockerfiles and Docker Hub

Before we move forward, I think it's important to understand what happened when we executed $ docker run hello-world, so you're not just copy-pasting the next instructions. docker run is the basic command we use to start a container based on an image while passing commands to it. In this case, we said: "Docker, start a container based on the image hello-world, no extra commands". Docker then downloaded the image from Docker Hub and started a container inside the VirtualBox VM based on that image. But where does the hello-world image come from? That's where Docker Hub comes in. The Docker Hub, as we mentioned in the introduction, is the public registry containing container images to be used with Docker, created by Docker, other companies, and individuals. Here you can find the image for hello-world we just executed:

Docker Hub Hello-World Image

Every image is built using a Dockerfile. In the description for the hello-world image, you can find a link to its Dockerfile which only has 3 lines:
  FROM scratch
  COPY hello /
  CMD ["/hello"]
Dockerfiles are just text files containing instructions for Docker on how to build a container image. You can think of an image as a snapshot of a machine, and a container as being the actual running instance of the machine. Dockerfiles will always have the format:
  INSTRUCTION arguments
So in our hello-world example, we can take a look at the root of the GitHub repo, which contains the Dockerfile. The image is built from another image called "scratch" (all Dockerfiles start with the FROM instruction); then the hello file is copied to the root of the system; and finally hello is run. You can also find the contents of the hello file here, which produce the output we just saw in our terminal.

Docker Pull: Downloading an Ubuntu image

Now that we know our Docker installation is correctly setup, let's start playing with it! Our next step is getting an Ubuntu image. To find an image we can either go to the Docker Hub website or just run in the terminal:
  $ docker search ubuntu
This is going to give a list of all the images with Ubuntu in their names. This is what's shown in my terminal:


The output is sorted by number of stars in each image repository. You can see that there's an Official and Automated column there.
  • Official images are images maintained by the docker-library project and accepted by the Docker team. That means they adhere to a few guidelines, found here, such as keeping the image's source in a Git repository that is at least publicly readable so users can inspect its contents. You can count on these images to work correctly with Docker. Also, contrary to other images, which you must reference as USERNAME/IMAGE_NAME when pulling, official images can be referred to in commands simply by IMAGE_NAME (such as ubuntu). All of their Dockerfiles can be found in this organization.
  • The automated column refers to Automated Build images. It simply means that the image is being built from a Dockerfile inside a GitHub or BitBucket repository, and it's automatically updated when changes are made to it.
Let's download the official Ubuntu image:
  $ docker pull ubuntu
The $ docker pull IMAGE_NAME command is the way to explicitly download an image, but the same thing happens if you use $ docker run IMAGE_NAME and Docker can't find the image locally.
Written by Heitor Tashiro Sergent
