Getting Started with Docker: Simplifying DevOps

If you like whales, or are simply interested in quick and painless continuous delivery of your software to production, then I invite you to read this introductory Docker tutorial. Everything seems to indicate that software containers are the future of IT, so let’s go for a quick dip with the container whales Moby Dock and Molly.

Docker, represented by a logo with a friendly looking whale, is an open source project that facilitates deployment of applications inside of software containers. Its basic functionality is enabled by resource isolation features of the Linux kernel, on top of which it provides a user-friendly API. The first version was released in 2013, and it has since become extremely popular and is widely used by many big players such as eBay, Spotify, Baidu, and more. In its latest funding round, Docker landed a huge $95 million.

Transporting Goods Analogy

The philosophy behind Docker can be illustrated with the following simple analogy. In the international transportation industry, goods have to be transported by different means like forklifts, trucks, trains, cranes, and ships. These goods come in different shapes and sizes and have different storage requirements: sacks of sugar, milk cans, plants, etc. Historically, it was a painful process depending on manual intervention at every transit point for loading and unloading.

It has all changed with the uptake of intermodal containers. As they come in standard sizes and are manufactured with transportation in mind, all the relevant machinery can be designed to handle them with minimal human intervention. The additional benefit of sealed containers is that they can preserve the internal environment, like temperature and humidity, for sensitive goods. As a result, the transportation industry can stop worrying about the goods themselves and focus on getting them from A to B.

And here is where Docker comes in and brings similar benefits to the software industry.

How Is It Different from Virtual Machines?

At a quick glance, virtual machines and Docker containers may seem alike. However, their main differences will become apparent when you take a look at the following diagram:

Applications running in virtual machines, apart from the hypervisor, require a full instance of the operating system and any supporting libraries. Containers, on the other hand, share the operating system with the host. The hypervisor is comparable to the container engine (represented as Docker in the image) in the sense that it manages the lifecycle of the containers. The important difference is that the processes running inside the containers are just like native processes on the host, and do not introduce the overhead associated with hypervisor execution. Additionally, applications can reuse the libraries and share data between containers.

As both technologies have different strengths, it is common to find systems combining virtual machines and containers. A perfect example is a tool named Boot2Docker described in the Docker installation section.

Docker Architecture


At the top of the architecture diagram there are registries. By default, the main registry is the Docker Hub which hosts public and official images. Organizations can also host their private registries if they desire.

On the right-hand side we have images and containers. Images can be downloaded from registries explicitly (docker pull imageName) or implicitly when starting a container. Once the image is downloaded it is cached locally.

Containers are instances of images – they are the live, running thing. Multiple containers can run based on the same image.

At the centre, there is the Docker daemon, responsible for creating, running, and monitoring containers. It also takes care of building and storing images. Finally, on the left-hand side there is the Docker client, which talks to the daemon: over a Unix socket when on the same machine, or over an HTTP-based API for remote management.
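For example, assuming a daemon on a hypothetical host named remote-host has been configured to listen on the conventional unencrypted TCP port 2375, the same client could manage it remotely:

docker -H tcp://remote-host:2375 ps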

Installing Docker

For the latest instructions you should always refer to the official documentation.

Docker runs natively on Linux, so depending on the target distribution it could be as easy as sudo apt-get install docker.io. Refer to the documentation for details. Normally on Linux, you prefix Docker commands with sudo, but we will skip it in this article for clarity.

As the Docker daemon uses Linux-specific kernel features, it isn’t possible to run Docker natively on Mac OS or Windows. Instead, you should install an application called Boot2Docker. The application consists of a VirtualBox virtual machine, Docker itself, and the Boot2Docker management utilities. You can follow the official installation instructions for Mac OS and Windows to install Docker on these platforms.

Using Docker

Let us begin this section with a quick example:

docker run phusion/baseimage echo "Hello Moby Dock. Hello Molly."

We should see this output:

Hello Moby Dock. Hello Molly.

However, a lot more has happened behind the scenes than you may think:

  • The image ‘phusion/baseimage’ was downloaded from Docker Hub (if it wasn’t already in the local cache)
  • A container based on this image was started
  • The command echo was executed within the container
  • The container was stopped when the command exited

On first run, you may notice a delay before the text is printed on screen. If the image had been cached locally, everything would have taken a fraction of a second. Details about the last container can be retrieved by running docker ps -l:

CONTAINER ID		IMAGE					COMMAND				CREATED			STATUS				PORTS	NAMES
af14bec37930		phusion/baseimage:latest		"echo 'Hello Moby Do		2 minutes ago		Exited (0) 3 seconds ago		stoic_bardeen 

Taking the Next Dive

As you can tell, running a simple command within Docker is as easy as running it directly on a standard terminal. To illustrate a more practical use case, throughout the remainder of this article, we will see how we can utilize Docker to deploy a simple web server application. To keep things simple, we will write a Java program that handles HTTP GET requests to ‘/ping’ and responds with the string ‘pong\n’.

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class PingPong {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ping", new MyHandler());
        server.setExecutor(null);
        server.start();
    }

    static class MyHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange t) throws IOException {
            String response = "pong\n";
            t.sendResponseHeaders(200, response.length());
            OutputStream os = t.getResponseBody();
            os.write(response.getBytes());
            os.close();
        }
    }
}

Dockerfile

Before jumping in and building your own Docker image, it’s a good practice to first check if there is an existing one in the Docker Hub or any private registries you have access to. For example, instead of installing Java ourselves, we will use an official image: java:8.

To build an image, first we need to decide on the base image we are going to use. It is denoted by the FROM instruction; here, it is the official image for Java 8 from the Docker Hub. Next, we copy our Java file into the image with the COPY instruction and compile it with RUN. The EXPOSE instruction declares that the image will provide a service on a particular port. ENTRYPOINT specifies the command we want to execute when a container based on this image is started, and CMD indicates the default parameters we are going to pass to it.

FROM java:8
COPY PingPong.java /
RUN javac PingPong.java
EXPOSE 8080
ENTRYPOINT ["java"]
CMD ["PingPong"]

After saving these instructions in a file called “Dockerfile”, we can build the corresponding Docker image by executing:

docker build -t toptal/pingpong .

The official documentation for Docker has a section dedicated to best practices for writing Dockerfiles.

Running Containers

When the image has been built, we can bring it to life as a container. There are several ways we could run containers, but let’s start with a simple one:

docker run -d -p 8080:8080 toptal/pingpong

where -p [port-on-the-host]:[port-in-the-container] denotes the port mapping between the host and the container. Furthermore, we are telling Docker to run the container as a daemon process in the background by specifying -d. You can test whether the web server application is running by attempting to access ‘http://localhost:8080/ping’. Note that on platforms where Boot2Docker is being used, you will need to replace ‘localhost’ with the IP address of the virtual machine where Docker is running.

On Linux:

curl http://localhost:8080/ping

On platforms requiring Boot2Docker:

curl $(boot2docker ip):8080/ping

If all goes well, you should see the response:

pong

Hurray, our first custom Docker container is alive and swimming! We could also start the container in interactive mode with -i -t. In our case, we will override the entrypoint so we are presented with a bash terminal. Now we can execute whatever commands we want, but exiting the container will stop it:

docker run -i -t --entrypoint="bash" toptal/pingpong

There are many more options available for starting up containers. Let us cover a few more. For example, if we want to persist data outside of the container, we could share the host filesystem with the container by using -v. By default, the access mode is read-write, but it can be changed to read-only by appending :ro to the intra-container volume path. Volumes are particularly important when we need to use security information like credentials and private keys inside containers, which shouldn’t be stored in the image. Additionally, volumes can prevent the duplication of data, for example by mapping your local Maven repository into the container to save you from downloading the Internet twice.
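For instance, a hypothetical configuration directory on the host could be mounted read-only inside our container like this (both paths are purely illustrative):

docker run -v /srv/app/config:/etc/app/config:ro toptal/pingpong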

Docker also has the capability of linking containers together. Linked containers can talk to each other even if none of the ports are exposed. This is achieved with --link other-container-name. Below is an example combining the parameters mentioned above:

docker run -p 9999:8080 \
    --link otherContainerA --link otherContainerB \
    -v /Users/$USER/.m2/repository:/home/user/.m2/repository \
    toptal/pingpong

Other Container and Image Operations

Unsurprisingly, the list of operations that one could apply to the containers and images is rather long. For brevity, let us look at just a few of them:

  • stop – Stops a running container.
  • start – Starts a stopped container.
  • commit – Creates a new image from a container’s changes.
  • rm – Removes one or more containers.
  • rmi – Removes one or more images.
  • ps – Lists containers.
  • images – Lists images.
  • exec – Runs a command in a running container.

The last command can be particularly useful for debugging purposes, as it lets you connect to a terminal of a running container:

docker exec -i -t <container-id> bash

Docker Compose for the Microservice World

If you have more than just a couple of interconnected containers, it makes sense to use a tool like docker-compose. In a configuration file, you describe how to start the containers and how they should be linked with each other. Irrespective of the number of containers involved and their dependencies, you could have all of them up and running with one command: docker-compose up.
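As an illustration, here is a minimal sketch of such a configuration file (docker-compose.yml) in the classic format; the redis companion service and its name are purely illustrative:

# docker-compose.yml (sketch)
web:
  image: toptal/pingpong
  ports:
    - "8080:8080"
  links:
    - redis
redis:
  image: redis

With this file in the current directory, docker-compose up starts both containers and links them together.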

Docker in the Wild

Let’s look at three stages of project lifecycle and see how our friendly whale could be of help.

Development

Docker helps you keep your local development environment clean. Instead of having multiple versions of different services installed, such as Java, Kafka, Spark, Cassandra, etc., you can just start and stop the required containers when necessary. You can take things a step further and run multiple software stacks side by side, avoiding the mix-up of dependency versions.

With Docker, you can save time, effort, and money. If your project is very complex to set up, “dockerise” it. Go through the pain of creating a Docker image once, and from this point everyone can just start a container in a snap.

You can also have an “integration environment” running locally (or on CI) and replace stubs with real services running in Docker containers.

Testing / Continuous Integration

With Dockerfile, it is easy to achieve reproducible builds. Jenkins or other CI solutions can be configured to create a Docker image for every build. You could store some or all images in a private Docker registry for future reference.

With Docker, you only test what needs to be tested and take the environment out of the equation. Performing tests on a running container can help keep things much more predictable.

Another interesting benefit of software containers is that it is easy to spin up slave machines with an identical development setup. This can be particularly useful for load testing of clustered deployments.

Production

Docker can be a common interface between developers and operations personnel, eliminating a source of friction. It also encourages the same image/binaries to be used at every step of the pipeline. Moreover, being able to deploy a fully tested container without environment differences helps ensure that no errors are introduced in the build process.

You can seamlessly migrate applications into production. Something that was once a tedious and flaky process can now be as simple as:

docker stop container-id; docker run new-image

And if something goes wrong when deploying a new version, you can always quickly roll back or switch to another container:

docker stop container-id; docker start other-container-id

… guaranteed not to leave any mess behind or leave things in an inconsistent state.

Summary

A good summary of what Docker does is included in its very own motto: Build, Ship, Run.

  • Build – Docker allows you to compose your application from microservices, without worrying about inconsistencies between development and production environments, and without locking into any platform or language.
  • Ship – Docker lets you design the entire cycle of application development, testing, and distribution, and manage it with a consistent user interface.
  • Run – Docker offers you the ability to deploy scalable services securely and reliably on a wide variety of platforms.

Have fun swimming with the whales!

Part of this work is inspired by an excellent book Using Docker by Adrian Mouat.

This article was written by RADEK OSTROWSKI, a Toptal Java developer.

Developing for the Cloud in the Cloud: BigData Development with Docker in AWS

Why might you need it?

I am a developer, and I work daily in Integrated Development Environments (IDEs), such as IntelliJ IDEA or Eclipse. These IDEs are desktop applications. Since the advent of Google Docs, I have seen more and more people moving their work from desktop versions of Word or Excel to the cloud, using an online equivalent of a word processor or a spreadsheet application.

There are obvious reasons for keeping your work in the cloud. Today, many web applications no longer have a significant functional disadvantage compared to traditional desktop business applications. The content is available wherever there is a web browser, and these days, that’s almost everywhere. Collaboration and sharing are easier, and losing files is less likely.

Unfortunately, these cloud advantages are not as common in the world of software development as they are for business applications. There are some attempts to provide an online IDE, but they are nowhere close to traditional IDEs.

That is a paradox; while we are still bound to our desktops for daily coding, the software itself now spans multiple servers. Developers need to work with resources they can no longer keep on their own computers. Indeed, laptops are no longer increasing in processing power: having more than 16GB of RAM on a laptop is rare and expensive, and newer devices, tablets for example, have even less.

However, even if it is not yet possible to replace classic desktop applications for software development, it is possible to move your entire development desktop to the cloud. The day I realized it was no longer necessary to have all my software on my laptop, and noticed the availability of web versions of terminals and VNC, I moved everything to the cloud. Eventually, I developed a build kit for creating that environment in an automated way.

Developer in the cloud

What is the cloud about for a developer? Developing in it, of course!

In this article I present a set of scripts to build a cloud-based development environment for Scala and big data applications, running with Docker in Amazon AWS, and comprising a web-accessible desktop with the IntelliJ IDE, Spark, Hadoop, and Zeppelin as services, as well as command line tools like a web-based SSH client, SBT, and Ammonite. The kit is freely available on GitHub, and I describe here the procedure for using it to build your instance. You can build your environment and customize it to your particular needs. It should not take you more than 10 minutes to have it up and running.

What is in the “BigDataDevKit”?

My primary goal in developing the kit was that my development environment should be something I can simply fire up, with all the services and servers I work with, and then destroy when they are no longer needed. This is especially important when you work on different projects, some of them involving a large number of servers and services, as is often the case with big data projects.

My ideal cloud-based environment should:

  • Include all the usual development tools, most importantly a graphical IDE.
  • Have the servers and services I need at my fingertips.
  • Be easy and fast to create from scratch, and expandable to add more services.
  • Be entirely accessible using only a web browser.
  • Optionally, allow access with specialized clients (VNC client and SSH client).

Leveraging modern cloud infrastructure and software, the power of modern browsers, the widespread availability of broadband, and the invaluable Docker, I created a development environment for Scala and big data development that has, for the better, replaced my development laptop.

Currently, I can work at any time, either from a MacBook Pro, a Surface Tablet, or even an iPad (with a keyboard), although admittedly the last option is not ideal. All these devices are merely clients; the desktop and all the servers are in the cloud.

Docker and Amazon AWS!

My current environment is built using the following online services:

  • Amazon Web Services for the servers.
  • GitHub for storing the code.
  • Dropbox to save files.

I also use a couple of free services, like DuckDNS for dynamic DNS and Let’s Encrypt for a free SSL certificate.

In this environment, I currently have:

  • A graphical desktop with IntelliJ IDEA, accessible via a web browser.
  • Web accessible command line tools like SBT and Ammonite.
  • Hadoop for storing files and running MapReduce jobs.
  • Spark Job Server for scheduled jobs.
  • Zeppelin for a web-based notebook.

Most importantly, the web access is fully encrypted with HTTPS, for both web-based VNC and SSH, and there are multiple safeguards to avoid losing data, a concern that is, of course, important when you do not “own” the content on your physical hard disk. Note that getting a copy of all your work on your computer is automatic and very fast. If you lose everything because someone stole your password, you have a copy on your computer anyway, as long as you configured everything correctly.

Using a Web Based Development Environment with AWS and Docker

Now, let’s start describing how the environment works. When I start work in the morning, the first thing I do is log into the Amazon Web Services console, where I see all my instances. Usually, I have many development instances configured for different projects, and I keep the unused ones turned off to save on billing. After all, I can only work on one project at a time. (Well, sometimes I work on two.)

Screen 1

So, I select the instance I want, start it, and wait a little or go grab a cup of coffee. It’s not so different from turning on your computer. It usually takes a handful of seconds for the instance to be up and running. Once I see the green icon, I open a browser and go to a well-known URL: https://msciab.duckdns.org/vnc.html. Note, this is my URL; when you create a kit, you will create your own unique URL.

Since AWS assigns a new IP to each machine when you start it, I configured a dynamic DNS service, so you can always use the same URL to access your server, even if you stop and restart it. You can even bookmark it in your browser. Furthermore, I use HTTPS with valid keys to fully protect my work from sniffers, in case I need to manage passwords and other sensitive data.

Screen 2

Once loaded, the system will welcome you with a web-based VNC client, noVNC. Simply log in and a desktop appears. I use a minimal desktop, intentionally: just a menu with applications, and my only luxury is a virtual desktop (since I open a lot of windows when I develop). For mail, I still rely on other applications, nowadays mostly other browser tabs.

In the virtual machine, I have what I need to develop big data applications. First and foremost, there is an IDE. In the build, I use the IntelliJ IDEA Community Edition. Also, there is the SBT build tool and a Scala REPL, Ammonite.

Screen 3

The key features of this environment, however, are services deployed as containers in the same virtual machine. In particular, I have:

  • Zeppelin, the web notebook for using Scala code on the fly and doing data analysis (http://zeppelin:8080)
  • The Spark Job Server, to execute and deploy spark jobs with a Rest interface (http://sparkjobserver:8080).
  • An instance of Hadoop for storing and retrieving data from the HDFS (http://hadoop:50070).

Note that these URLs are fixed, but they are only accessible from within the virtual environment. You can see their web interfaces in the following screenshot.

Screen 4

Each service runs in a separate Docker container. Without becoming too technical, you can think of this as three separate servers inside your virtual machine. The beauty of using Docker is you can add services, and even add two or three virtual machines. Using Amazon containers, you can scale your environment easily.

Last, but not least, you have a web terminal available. Simply access your URL with HTTPS and you will be welcomed with a terminal in a web page.

Screen 5

In the screenshot above, you can see that I list the containers, which are the three servers plus the desktop. This command line shell gives you access to the virtual machine holding the containers, allowing you to manage them. It’s as if your servers are “in the Matrix” (virtualized within containers), but this shell gives you an escape outside the “Matrix” to manage the servers and the desktop. From here, you can restart the containers, access their filesystems, and perform other manipulations allowed by Docker. I will not discuss Docker in detail here, but there is a vast amount of documentation on the Docker website.

How to set up your instance

Do you like this so far and want your own instance? It is easy and cheap. You can get it for merely the cost of the virtual machine on Amazon Web Services, plus the storage. The kit in the current configuration requires 4GB of RAM to get all the services running. If you are careful to use the virtual machine only when you need it, and you work, say, 160 hours a month, a virtual machine at current rates will cost 160 x $0.052, or around $8 per month. You have to add the cost of storage. I use around 30GB, but altogether everything can be kept under $10.

However, this does not include the cost of an optional Dropbox Pro account, should you want to back up more than 2GB of code. This costs another $15 per month, but it provides important safety for your data. Also, you will need a private repository, either a paid GitHub plan or another service, such as Bitbucket, which offers free private repositories.

I want to stress that if you use it only when you need it, it is cheaper than a dedicated server. Yes, everything mentioned here can be set up on a physical server, but since I work with big data I need a lot of other AWS services, so I think it is logical to have everything in the same place.

Let’s see how to do the whole setup.

Prerequisites

Before starting to build the virtual machine, you need to register with the following four services:

  • Amazon Web Services
  • DuckDNS
  • Dropbox
  • Let’s Encrypt

The only one you need your credit card for is Amazon Web Services. DuckDNS is entirely free, while Dropbox gives you 2GB of free storage, which can be enough for many tasks. Let’s Encrypt is also free, and it is used internally when you build the image to sign your certificate. Besides these, I recommend a repository hosting service too, like GitHub or Bitbucket, if you want to store your code; however, it is not required for the setup.

To start, navigate to the GitHub BigDataDevKit repository.

Screen 6

Scroll down the page and copy the script shown in the image into your text editor of choice:

Screen 7

This script is needed to bootstrap the image. You have to change it and provide values for the parameters. Carefully change the text within the quotes. Note that you cannot use characters like the quote itself, the backslash, or the dollar sign in the password unless you escape them. This problem is relevant only for the password. If you want to play it safe, avoid quotes, dollar signs, and backslashes altogether.

The PASSWORD parameter is the password you choose to access the virtual machine via the web interface. The EMAIL parameter is your email address, which will be used when registering an SSL certificate. You will be required to provide your email, and it is the only requirement for getting a free SSL certificate from Let’s Encrypt.

To get the values for TOKEN and HOST, go to the DuckDNS site and log in. You will need to choose an unused hostname.

Screen 8

Look at the image to see where you have to copy the token and where you have to add your hostname. You must click on the “add domain” button to reserve the hostname.

Configuring your instance

Assuming you have all the parameters and have edited the script, you are ready to launch your instance. Log in to the Amazon Web Services management interface, go to the EC2 Instances panel and click on “Launch Instance”.

Screen 9

On the first screen, you will choose an image. The script is built around Amazon Linux, and there are no other options available. Select Amazon Linux, the first option in the QuickStart list.

Screen 10

On the second screen, choose the instance type. Given that multiple services will be running, you need at least 4GB of memory, so I recommend you select the t2.medium instance. You could trim it down to the t2.small if you shut down some services, or even the micro if you only want the desktop.

Screen 11

On the third screen, click “Advanced Details” and paste the script you configured in the previous step. I also recommend you enable protection against termination, so that an accidental termination won’t cause you to lose all your work.

Screen 12

The next step is to configure the storage. The default for an instance is 8GB, which is not enough to contain all the images we will build. I recommend increasing it to 20GB. Also, while it is not strictly needed, I suggest adding another block device of at least 10GB. The script will mount the second block device as a data folder. You can make a snapshot of its contents, terminate the instance, then recreate it using the snapshot, recovering all the work. Furthermore, a custom block device is not lost when you terminate the instance, so you have double protection against accidental loss of your data. To increase your safety even further, you can back up your data automatically with Dropbox.

Screen 13

The fifth step is naming the instance. Pick your own. The sixth step offers a way to configure the firewall. By default, only SSH is open, but we also need HTTPS, so do not forget to add a rule opening HTTPS as well. You could open HTTPS to the world, but it’s better to open it only to your IP, to prevent others from accessing your desktop and shell, even though they are still protected with a password.

Once done with this last configuration, you can launch the instance. You will notice that the initialization can take quite a few minutes the first time, since the initialization script performs some lengthy tasks, such as generating an HTTPS certificate with Let’s Encrypt.

Screen 14

When the management console eventually shows the instance as “running”, and it is no longer “initializing”, you are ready to go.

Assuming all the parameters are correct, you can navigate to https://YOURHOST.duckdns.org.

Replace YOURHOST with the hostname you chose, but do not forget that it is an HTTPS site, not HTTP, so you must write https:// in the URL; your connection to the server is encrypted. The site will also present a valid certificate from Let’s Encrypt. If there are problems getting the certificate, the initialization script will generate a self-signed certificate instead. You will still be able to connect over an encrypted connection, but the browser will warn you that it is an unknown site and the connection may be insecure. It should not happen, but you never know.

Screen 15

Assuming everything is working, you can then access the web terminal, Butterfly. You can log in using the user app and the password you put in the setup script.

Once logged in, you have a bootstrapped virtual machine, which also includes Docker and other goodies, such as an Nginx frontend, Git, and the Butterfly web terminal. Now, you can complete the setup by building the Docker images for your development environment.

Next, type the following commands:

git clone https://github.com/sciabarra/BigDataDevKit
cd BigDataDevKit
sh build.sh

The last command will also ask you to type a password for the desktop access. Once done, it will start to build the images. Note that the build will take about 10 minutes, but you can see what is happening because everything is shown on the screen.

Once the build is complete, you can also install Dropbox with the following command:

/app/.dropbox-dist/dropboxd

The system will show a link you must click to enable Dropbox. You need to log into Dropbox and then you are done. Whatever you put in the Dropbox folder is automatically synced between all your Dropbox instances.

Once done, you can restart the virtual machine and access your environment at the https://YOURHOST.duckdns.org/vnc.html URL.

You can stop your machine and restart it when you resume work. The access URL stays the same. This way, you will pay only for the time you are using it, plus a monthly extra for the storage used.

Preserving your data

The following discussion requires some knowledge of how Docker and Amazon work. If you do not want to understand the details, just keep in mind the following simple rule: in the virtual machine there is an /app/Dropbox folder available; whatever you place in /app/Dropbox is preserved, and everything else is disposable and can go away. To improve safety further, also store your precious code in a version control system.

Now, if you do want to understand this, read on. If you followed my directions during virtual machine creation, the virtual machine is protected from termination, so you cannot destroy it accidentally. If you expressly decide to terminate it, the primary volume will be destroyed. All the Docker images will be lost, including all the changes you made.

However, since the folder /app/Dropbox is mounted as a Docker volume for the containers, it is not part of the Docker images. In the virtual machine, the folder /app is mounted on the Amazon volume you created, which is not destroyed even when you expressly terminate the virtual machine. To remove that volume, you have to remove it expressly.

Do not confuse Docker volumes, which are a Docker logical entity, with Amazon volumes, which are a somewhat physical entity. What happens is that the /app/Dropbox Docker volume is placed inside the /app Amazon volume.

The Amazon Volume is not automatically destroyed when you terminate the virtual machine, so whatever is placed in it will be preserved, until you also expressly destroy the volume. Furthermore, whatever you put in the Docker volume is stored outside of the container, so it is not destroyed when the container is destroyed. If you enabled Dropbox, as recommended, all your content is copied to the Dropbox servers, and to your hard disk if you sync Dropbox with your computer(s). Also, it is recommended that the source code be stored in a version control system.

So, if you keep your work in a version control system under the Dropbox folder, all of the following must happen before you lose your data:

  • You expressly terminate your virtual machine.
  • You expressly remove the data volume from the virtual machine.
  • You expressly remove the data from Dropbox, including the history.
  • You expressly remove the data from the version control system.

I hope your data is safe enough.

I keep a virtual machine for each project, and when I finish, I keep the unused virtual machines turned off. Of course, I have all my code on GitHub and backed up in Dropbox. Furthermore, when I stop working on a project, I take a snapshot of the Amazon Web Services block device before removing the virtual machine entirely. This way, whenever a project resumes, for example for maintenance, all I need to do is start a new virtual machine using the snapshot. All my data goes back in place, and I can resume working.

Optimizing access

First, if you have direct internet access, not mediated by a proxy, you can use native SSH and VNC clients. Direct SSH access is important if you need to copy files in and out of the virtual machine. However, for file sharing, you should consider Dropbox as a simpler alternative.

The VNC web access is invaluable, but sometimes, it can be slower than a native client. You have access to the VNC server on the virtual machine using port 5900. You must expressly open it because it is closed by default. I recommend that you only open it to your IP address, because the internet is full of “robots” that scan the internet looking for services to hook into, and VNC is a frequent target of those robots.

Conclusion

This article explains how you can leverage modern cloud technology to implement an effective development environment. While a machine in the cloud cannot be a complete replacement for your working computer or a laptop, it is good enough for doing development work when it is important to have access to the IDE. In my experience, with current internet connections, it is fast enough to work with.

With servers in the cloud, access and manipulation are faster than with local servers. You can quickly increase (or decrease) memory, fire up another environment, create an image, and so on. You have a datacenter at your fingertips, and when you work on big data projects, well, you need robust services and lots of space. That is what the cloud provides.

The original article was written by MICHELE SCIABARRA – FREELANCE SOFTWARE ENGINEER @ TOPTAL and can be read here.

If you’d like to learn more about Toptal designers or hire one, check this out.

Home Smart Home: Domesticating the Internet of Things

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

The smart home technology boom is upon us. Despite lucrative projections for the market and ever-increasing numbers of connected devices, we have yet to witness much social impact from consumer adoption in the home. As a potential tipping point looms, there are several debates surrounding privacy, integration, and other technical issues. Yet, there seems to be less speculation regarding why consumers still haven’t bought into the hype, or how domestic life has improved. Considering how personal the home is, should it be concerning that those advertising these products talk about quality of life less than they talk about data, energy, and ‘security’? Is the adoption of the Internet of Things into our homes inevitable, or is it already here?

Smart Home Interface

Somewhere in the Near Future

The smart person returns to their certified ‘Internet of Things’ smart home after a long day at work. The smart security system senses that the smart person is alone and initiates the ‘Friday Night In’ sequence. Inside, an intercom with a standardized motherly voice suggests that the smart person might want to order in tonight. The smart person unloads their things in the kitchen, where the smart stove displays a selection of takeout rather than its default recipe guide. Following the arrival of the food, the smart person retreats to the living room to wind down and watch some TV in their underwear. The smart TV prepares a selection of Netflix marathons categorized by mood. The smart person chooses: ‘Looking to be cheered up? Comedy Playlist’. Before starting the show, the smart person reviews a set of graphs that display the data from activity and diet throughout the day. A list of tips for smart living is generated at the bottom, one of which reads that, based on the number of consecutive nights that the smart person has spent alone, they might consider investigating a selection of popular dating sites instead of watching TV tonight. At the slip of a thumb the smart person OKs the request, and instantly a set of profiles is displayed, each chosen from a generated list of the smart person’s tracked preferences. Suddenly, a flurry of pings and messages from other stay-at-home hopefuls fills the screen. The smart home intercom repeats aloud ‘You’ve got mail!’. The smart person fumbles for the remote and – oops! – the TV snaps a selfie in response to the flood of pings. Their image, sitting in their underwear eating noodles, appears briefly on the screen before being whisked off into the ether. The flood of messages doubles, only to freeze the system, causing the smart home to reboot. The house goes dark. The now blank screen of the smart TV reflects the image of the smart person again, finally alone.

Home Smart Home

With all the debate and headlines regarding the Internet of Things, the number of connected devices, and market valuations – is there anyone left to ask what will happen to the home once Smart Homes take over? The keeping of a home is one of the oldest, if not the oldest, traditions we have as humans. Does the Smart Home mean the end of the home as we know it?

The home is the original place where we build our identity and mark our place in the world – the original profile. Each generation has formed its radical dwellings as their respective marks on the world. We can now look back into those past homes as windows into the past lives of those generations, their values and ambitions. What do our Smart Homes tell us about ourselves? Or perhaps instead, what are they telling everyone else?

Smart Home History

A Brief History of the Automated Home

The process towards the automated home began almost two centuries ago. When we first plugged our homes in, the light bulb gave us the night. No longer was man confined to the limits of the sun; the technology offered liberation from the natural hours of the day. Later, appliances replaced tools, and everything that moved, or could move, became battery powered. The first generation of the automated home advertised better performance in exchange for more leisure: more time for the family, or allowing the once confined housewife to pursue her career as well. The automated home liberated us from the need to maintain it.

Now, the technological trend continues to carry us through the next generation into a new domesticity. Although there will always be laments for what has passed, perhaps change isn’t so bad. If there is a new liberation perhaps it is the freedom to stay home. The freedom to sit and allow the world to visit us inside. Freedom from the outdoors. Freedom from each other. Yet, this time, is the freedom coming at a cost?

The home is the original security device – the original firewall. But now, as we allow the entire world to float through our walls and into our homes, have we deflated the entire meaning of our home that has stood for millennia? We speak of security and privacy now in the context of technical systems and hardware. But have we forgotten the origin of what privacy meant? In the spaces where we were once the most intimate, by inviting the world in we are becoming the most exposed. To adopt the Smart Home, must we forfeit the home?

Given the worldwide acceptance that privacy is dead, these thoughts may seem obsolete to the developers and web designers reading them. The point, though, is to fundamentally question what the Smart Home is offering us in exchange for what we must give it. Effectively, is the trade worth it? Will domestic life improve as it did during the first generations of automation? Or, how do we ensure, especially as the community that may be taking part in that change, that some amount of domesticity is salvaged?

Home Alone With the Internet of Things

The State of the Internet of Things

The Internet of Things is one of those monster hot-topic terms: when we hear it, we know it is significant, and yet we may know much less about its tangible effects. We may hear that there are estimates of 200 billion devices being connected in the near future, or that the market’s value is projected at 80 billion dollars. These numbers have about the same significance as knowing that the earth is roughly 93 million miles away from the sun. They’re very important numbers, yet most of us don’t understand the specifics, or deal with the numbers daily. All that most people want to know, in most cases, is how this thing is going to change, or improve, our lives.

What is keeping smart home technology from being adopted with the same ubiquity as smartphones today? Consider the early phases of smartphone technology. What caused smartphones to make the leap from a niche device to being fastened to the hip of nearly every person and their grandparent?

Smartphones existed for about a decade before the market saw a significant boom. The release of the iPhone in 2007 generated a major shift, as Apple was the first to design and market the device for everyday use. The elements that the iPhone introduced or improved seem to be subtle interface adjustments, and yet they were able to catalyze a major shift. So, what might be the critical tipping point, or product, for smart homes?

The Issue With Niche Products

Every day there seem to be more smart home products to outfit your home with. Yet with each new application there is a new device, with a new remote that might connect with another new app on your smartphone. No matter how potentially helpful any of these devices might be, they appear novel and excessive without being associated with a greater purpose.

Recently, Yves Behar released the designs for a connected garden tracking system called Edyn. The system contains two products, one to monitor the soil, and another to respond to the tracked data and irrigate the garden as necessary. With the data, the app can recommend which plants might be the easiest to grow, and what the produce might need in order to flourish.

Seems pretty handy – but is it necessary enough to become widely adopted, or to really change the way we maintain our yards? Especially considering that, for many, gardening is a pleasurable activity. People often garden as a meditative tool. In fact, a recent study suggested that the happier someone is, the more they garden, and the more someone gardens, the happier they are. So, what exactly is Edyn suggesting it can improve in a process that gardening did not already offer? If gardening becomes easier, what else is to be gained in terms of time in the day? Perhaps now the smart gardener can fit in another Netflix episode, rather than do the watering themselves. Hopefully one day they won’t even have to lift a green thumb at all! Imagine all the Netflix that could be watched.

Now, I don’t mean to pick on Edyn too much, but my point is that this tool is representative of most smart home technology. A lot of it would be nice to have I suppose, but it doesn’t quite seem worth the cost and the trouble.

So What’s Next

The issue with the clunky interaction between multiple apps is now clear to the market, as there is a definite push towards who can develop the ‘hub’ for all smart home devices. Some of the key players so far are SmartThings and Wink; Apple is supposedly generating momentum through Apple TV, and Amazon has the Echo device.

However, so far there is no real front-runner, nor have many homes really begun to adopt the technology. What seems to be lacking, as was the case with the iPhone, is the proper interface that can relieve smart home technology of all the headaches that get in the way of convenience.

Most of the processes that these technologies are looking to improve are not exactly major burdens – turning off lights, playing music, etc. Thus, if the experience is hardly more enjoyable, the new technology won’t be adopted.

One product, Josh.ai, has recognized this need for cohesion between devices, and focuses its manifesto on interfacing. Josh.ai anticipates the need for programs to develop and build over time along with the user. Eventually, certain commands become memorized, such that Josh.ai will know that every morning it should slowly raise the lights, turn on the morning news, and prepare a cup of coffee. The interface takes any work out of choosing between apps, and instead responds to voice commands to manage all connected devices. Josh.ai is advertised as your home’s best friend. You even talk to Josh.ai as if it’s your best friend.

Josh.ai bases its product around the use of voice rather than any other interface. This may be a critical move for smart home technology. Josh.ai’s platform essentially asks how can the user program their home with as little work as possible? The hope is that even complex processes like morning routines might be programmed by voice, then memorized for future instances. “Hey Josh, when I get home can you turn up the lights, set the oven to 300, and maybe put on something like Frank Sinatra? I’m bringing a friend home!”

If Josh.ai is one of the more optimistic potential hubs to move the Smart Home trend into the next stage, what does its manifesto tell us about the ambition of what we can expect from the technology in near future?

The phrase ‘make your life easier and more productive’ occurs very often in these manifestos. But if most of what these hubs offer is the autonomous control of small things like lights and music, how much easier will our lives get? Can this at all compare to the transition from brooms to vacuums?

The three essential points of Josh.ai are: thoughtless energy saving, continuous awareness of devices, and a more networked system. How do these new values compare to past technologies that significantly liberated the average person? In the manifestos of most of these hub devices, there is hardly any language that appears human at all.

Although the benefits of some of these products seem underwhelming, the issue is that even the smallest thing poses a significant risk. Take Nest, for example: what could be more harmless than a thermostat? Yet already, in its relatively short life, Nest has been caught selling information about home fire history to insurance companies.

Even if the Smart Home Hub were your ‘best friend’, do you want your best friend to know everything about your preferences? How much do we even allow our actual best friends to know about ourselves? Do we tell them that we’re lonely? Whose profiles we look at online? Even if our best friend were really good at keeping secrets, would we tell them? About that one time?

Anytime someone cries out against the inevitable, they come across as a cranky soapboxer. But this is an odd transition, where there seems to be little gained by most of these products, and yet there is such significant risk in what might be lost if the technologies are adopted. Doesn’t it feel strange that in the last century our parents and grandparents marched for privacy, and here we are eagerly handing it back?

 

Persisting Data Across Page Reloads: Cookies, IndexedDB and Everything In-Between

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Suppose I’m visiting a web site. I right-click on one of the navigation links and select to open the link in a new window. What should happen? If I’m like most users, I expect the new page to have the same content as if I had clicked the link directly. The only difference should be that the page appears in a new window. But if your web site is a single-page application (SPA), you may see weird results unless you’ve carefully planned for this case.

Recall that in an SPA, a typical navigation link is often a fragment identifier, starting with a hash mark (#). Clicking the link directly does not reload the page, so all the data stored in JavaScript variables are retained. But if I open the link in a new tab or window, the browser does reload the page, reinitializing all the JavaScript variables. So any HTML elements bound to those variables will display differently, unless you’ve taken steps to preserve that data somehow.


There’s a similar issue if I explicitly reload the page, such as by hitting F5. You may think I shouldn’t ever need to hit F5, because you’ve set up a mechanism to push changes from the server automatically. But if I’m a typical user, you can bet I’m still going to reload the page. Maybe my browser seems to have repainted the screen incorrectly, or I just want to be certain I have the very latest stock quotes.

APIs May Be Stateless, Human Interaction Is Not

Unlike an internal request via a RESTful API, a human user’s interaction with a web site is not stateless. As a web user, I think of my visit to your site as a session, almost like a phone call. I expect the browser to remember data about my session, in the same way that when I call your sales or support line, I expect the representative to remember what was said earlier in the call.

An obvious example of session data is whether I’m logged in, and if so, as which user. Once I go through a login screen, I should be able to navigate freely through the user-specific pages of the site. If I open a link in a new tab or window and I’m presented with another login screen, that’s not very user friendly.

Another example is the contents of the shopping cart in an e-commerce site. If hitting F5 empties the shopping cart, users are likely to get upset.

In a traditional multi-page application written in PHP, session data would be stored in the $_SESSION superglobal array. But in an SPA, it needs to be somewhere on the client side. There are four main options for storing session data in an SPA:

  • Cookies
  • Fragment identifier
  • Web storage
  • IndexedDB

Four Kilobytes of Cookies

Cookies are an older mechanism for storing data in the browser. They were originally intended to store data received from the server in one request and send it back to the server in subsequent requests. But from JavaScript, you can use cookies to store just about any kind of data, up to a size limit of 4 KB per cookie. AngularJS offers the ngCookies module for managing cookies. There is also a js-cookie package that provides similar functionality in any framework.
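As a quick illustration, here is a minimal sketch assuming the js-cookie library is loaded and exposes its usual Cookies global (the key name is arbitrary):

// Store a value for 14 days, read it back, then remove it.
Cookies.set('theme', 'dark', { expires: 14 });
var theme = Cookies.get('theme');   // "dark"
Cookies.remove('theme');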

Keep in mind that any cookie you create will be sent to the server on every request, whether it’s a page reload or an Ajax request. But if the main session data you need to store is the access token for the logged-in user, you want this sent to the server on every request anyway. It’s natural to try to use this automatic cookie transmission as the standard means of specifying the access token for Ajax requests.

You may argue that using cookies in this manner is incompatible with RESTful architecture. But in this case it is just fine, as each request via the API is still stateless, having some inputs and some outputs. It’s just that one of the inputs is being sent in a funny way, via a cookie. If you can arrange for the login API request to send the access token back in a cookie as well, then your client side code hardly needs to deal with cookies at all. Again, it’s just another output from the request being returned in an unusual way.

Cookies offer one advantage over web storage. You can provide a “keep me logged in” checkbox on the login form. The expected semantics are: if I leave it unchecked, I will remain logged in if I reload the page or open a link in a new tab or window, but I’m guaranteed to be logged out once I close the browser. This is an important safety feature if I’m using a shared computer. As we’ll see later, web storage does not support this behavior.

So how might this approach work in practice? Suppose you’re using LoopBack on the server side. You’ve defined a Person model, extending the built-in User model, adding the properties you want to maintain for each user. You’ve configured the Person model to be exposed over REST. Now you need to tweak server/server.js to achieve the desired cookie behavior. Below is server/server.js, starting from what was generated by slc loopback, with the marked changes:

var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

app.start = function() {
  // start the web server
  return app.listen(function() {
    app.emit('started');
    var baseUrl = app.get('url').replace(/\/$/, '');
    console.log('Web server listening at: %s', baseUrl);
    if (app.get('loopback-component-explorer')) {
      var explorerPath = app.get('loopback-component-explorer').mountPath;
      console.log('Browse your REST API at %s%s', baseUrl, explorerPath);
    }
  });
};

// start of first change
app.use(loopback.cookieParser('secret'));
// end of first change

// Bootstrap the application, configure models, datasources and middleware.
// Sub-apps like REST API are mounted via boot scripts.
boot(app, __dirname, function(err) {
  if (err) throw err;

  // start of second change
  app.remotes().after('Person.login', function (ctx, next) {
    if (ctx.result.id) {
      var opts = {signed: true};
      if (ctx.req.body.rememberme !== false) {
        opts.maxAge = 1209600000;
      }
      ctx.res.cookie('authorization', ctx.result.id, opts);
    }
    next();
  });
  app.remotes().after('Person.logout', function (ctx, next) {
    ctx.res.cookie('authorization', '');
    next();
  });
  // end of second change

  // start the server if `$ node server.js`
  if (require.main === module)
    app.start();
});

The first change configures the cookie parser to use ‘secret’ as the cookie signing secret, thereby enabling signed cookies. You need to do this because although LoopBack looks for an access token in either of the cookies ‘authorization’ or ‘access_token’, it requires that such a cookie be signed. Actually, this requirement is pointless. Signing a cookie is intended to ensure that the cookie hasn’t been modified. But there’s no danger of you modifying the access token. After all, you could have sent the access token in unsigned form, as an ordinary parameter. Thus, you don’t need to worry about the cookie signing secret being hard to guess, unless you’re using signed cookies for something else.

The second change sets up some postprocessing for the Person.login and Person.logout methods. For Person.login, you want to take the resulting access token and send it to the client as the signed cookie ‘authorization’ also. The client may add one more property to the credentials parameter, rememberme, indicating whether to make the cookie persistent for 2 weeks. The default is true. The login method itself will ignore this property, but the postprocessor will check it.

For Person.logout, you want to clear out this cookie.

You can see the results of these changes right away in the StrongLoop API Explorer. Normally after a Person.login request, you would have to copy the access token, paste it into the form at the top right, and click Set Access Token. But with these changes, you don’t have to do any of that. The access token is automatically saved as the cookie ‘authorization’, and sent back on each subsequent request. When the Explorer is displaying the response headers from Person.login, it omits the cookie, because JavaScript is never allowed to see Set-Cookie headers. But rest assured, the cookie is there.

On the client side, on a page reload you would check whether the cookie ‘authorization’ exists. If it does, you need to update your record of the current userId. Probably the easiest way to do this is to store the userId in a separate cookie on successful login, so you can retrieve it on a page reload.
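
For illustration, here is a minimal client-side sketch of that approach. It assumes the server also sets a plain, unsigned cookie named ‘userId’ on successful login; that cookie name and the readCookie helper are illustrative, not part of LoopBack.

// Hypothetical helper: read a cookie value by name (null if absent).
function readCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : null;
}

// On page reload, restore the client-side record of who is logged in.
// The signed 'authorization' cookie is sent automatically with each API request,
// so only the userId needs to be recovered here.
var currentUserId = readCookie('userId');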

The Fragment Identifier

As I’m visiting a web site that has been implemented as an SPA, the URL in my browser’s address bar might look something like “https://example.com/#/my-photos/37”. The fragment identifier portion of this, “#/my-photos/37”, is already a collection of state information that could be viewed as session data. In this case, I’m probably viewing one of my photos, the one whose ID is 37.

You may decide to embed other session data within the fragment identifier. Recall that in the previous section, with the access token stored in the cookie ‘authorization’, you still needed to keep track of the userId somehow. One option is to store it in a separate cookie. But another approach is to embed it in the fragment identifier. You could decide that while I’m logged in, all the pages I visit will have a fragment identifier beginning with “#/u/XXX”, where XXX is the userId. So in the previous example, the fragment identifier might be “#/u/59/my-photos/37” if my userId is 59.
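
A small sketch of extracting the userId from a fragment identifier that follows the “#/u/XXX” convention described above (the convention itself is just one possible choice):

// Parse a fragment identifier of the form '#/u/59/my-photos/37'.
// Returns the userId as a string, or null if the user is not logged in.
function userIdFromFragment() {
    var hash = window.location.hash;          // e.g. '#/u/59/my-photos/37'
    var match = hash.match(/^#\/u\/([^\/]+)/);
    return match ? match[1] : null;
}

var currentUserId = userIdFromFragment();     // '59' in the example above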

Theoretically, you could embed the access token itself in the fragment identifier, avoiding any need for cookies or web storage. But that would be a bad idea. My access token would then be visible in the address bar. Anyone looking over my shoulder with a camera could take a snapshot of the screen, thereby gaining access to my account.

One final note: it is possible to set up an SPA so that it doesn’t use fragment identifiers at all. Instead it uses ordinary URLs like “http://example.com/app/dashboard” and “http://example.com/app/my-photos/37”, with the server configured to return the top level HTML for your SPA in response to a request for any of these URLs. Your SPA then does its routing based on the path (e.g. “/app/dashboard” or “/app/my-photos/37”) instead of the fragment identifier. It intercepts clicks on navigation links, and uses History.pushState() to push the new URL, then proceeds with routing as usual. It also listens for popstate events to detect the user clicking the back button, and again proceeds with routing on the restored URL. The full details of how to implement this are beyond the scope of this article. But if you use this technique, then obviously you can store session data in the path instead of the fragment identifier.
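
To give a rough idea of that technique, here is a minimal sketch. The route() function and the data-internal attribute are placeholders for whatever routing mechanism your SPA already uses.

// Hypothetical router entry point: renders the view for a given path.
function route(path) {
    console.log('Routing to ' + path);        // e.g. '/app/my-photos/37'
}

// Intercept clicks on internal navigation links.
document.addEventListener('click', function(event) {
    var link = event.target.closest ? event.target.closest('a[data-internal]') : null;
    if (link) {
        event.preventDefault();
        var path = link.getAttribute('href');
        history.pushState(null, '', path);    // update the address bar without reloading
        route(path);
    }
});

// Handle the back/forward buttons.
window.addEventListener('popstate', function() {
    route(window.location.pathname);
});

// Route the initial page load.
route(window.location.pathname);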

Web Storage

Web storage is a mechanism for JavaScript to store data within the browser. Like cookies, web storage is separate for each origin. Each stored item has a name and a value, both of which are strings. But web storage is completely invisible to the server, and it offers much greater storage capacity than cookies. There are two types of web storage: local storage and session storage.

An item of local storage is visible across all tabs of all windows, and persists even after the browser is closed. In this respect, it behaves somewhat like a cookie with an expiration date very far in the future. Thus, it is suitable for storing an access token in the case where the user has checked “keep me logged in” on the login form.

An item of session storage is only visible within the tab where it was created, and it disappears when that tab is closed. This makes its lifetime very different from that of any cookie. Recall that a session cookie is still visible across all tabs of all windows.
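
As a rough sketch of the general pattern (the names here are illustrative), an access token could be stored in local or session storage depending on a “keep me logged in” flag:

// Choose the storage area based on the 'keep me logged in' checkbox.
function saveAccessToken(token, rememberMe) {
    var storage = rememberMe ? window.localStorage : window.sessionStorage;
    storage.setItem('accessToken', token);
}

// On page load, look in both storage areas for a saved token.
function loadAccessToken() {
    return window.localStorage.getItem('accessToken') ||
           window.sessionStorage.getItem('accessToken');
}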

If you use the AngularJS SDK for LoopBack, the client side will automatically use web storage to save both the access token and the userId. This happens in the LoopBackAuth service in js/services/lb-services.js. It will use local storage, unless the rememberMe parameter is false (normally meaning the “keep me logged in” checkbox was unchecked), in which case it will use session storage.

The result is that if I log in with “keep me logged in” unchecked, and I then open a link in a new tab or window, I won’t be logged in there. Most likely I’ll see the login screen. You can decide for yourself whether this is acceptable behavior. Some might consider it a nice feature, where you can have several tabs, each logged in as a different user. Or you might decide that hardly anyone uses shared computers any more, so you can just omit the “keep me logged in” checkbox altogether.

So how would the session data handling look if you decide to go with the AngularJS SDK for LoopBack? Suppose you have the same situation as before on the server side: you’ve defined a Person model, extending the User model, and you’ve exposed the Person model over REST. You won’t be using cookies, so you won’t need any of the changes described earlier.

On the client side, somewhere in your outermost controller, you probably have a variable like $scope.currentUserId which holds the userId of the currently logged in user, or null if the user is not logged in. Then to handle page reloads properly, you just include this statement in the constructor function for that controller:

$scope.currentUserId = Person.getCurrentId();

It’s that easy. Add ‘Person’ as a dependency of your controller, if it isn’t already.

IndexedDB

IndexedDB is a newer facility for storing large amounts of data in the browser. You can use it to store data of any JavaScript type, such as an object or array, without having to serialize it. All requests against the database are asynchronous, so you get a callback when the request is completed.

You might use IndexedDB to store structured data that’s unrelated to any data on the server. An example might be a calendar, a to-do list, or saved games that are played locally. In this case, the application is really a local one, and your web site is just the vehicle for delivering it.

At present, Internet Explorer and Safari only have partial support for IndexedDB. Other major browsers support it fully. One serious limitation at the moment, though, is that Firefox disables IndexedDB entirely in private browsing mode.

As a concrete example of using IndexedDB, let’s take the sliding puzzle application by Pavol Daniš, and tweak it to save the state of the first puzzle, the Basic 3×3 sliding puzzle based on the AngularJS logo, after each move. Reloading the page will then restore the state of this first puzzle.

I’ve set up a fork of the repository with these changes, all of which are in app/js/puzzle/slidingPuzzle.js. As you can see, even a rudimentary usage of IndexedDB is quite involved. I’ll just show the highlights below. First, the function restore gets called during page load, to open the IndexedDB database:

/*
 * Tries to restore game
 */
this.restore = function(scope, storekey) {
    this.storekey = storekey;
    if (this.db) {
        this.restore2(scope);
    }
    else if (!window.indexedDB) {
        console.log('SlidingPuzzle: browser does not support indexedDB');
        this.shuffle();
    }
    else {
        var self = this;
        var request = window.indexedDB.open('SlidingPuzzleDatabase');
        request.onerror = function(event) {
            console.log('SlidingPuzzle: error opening database, ' + request.error.name);
            scope.$apply(function() { self.shuffle(); });
        };
        request.onupgradeneeded = function(event) {
            event.target.result.createObjectStore('SlidingPuzzleStore');
        };
        request.onsuccess = function(event) {
            self.db = event.target.result;
            self.restore2(scope);
        };
    }
};

The request.onupgradeneeded event handles the case where the database doesn’t exist yet. In this case, we create the object store.

Once the database is open, the function restore2 is called, which looks for a record with a given key (which will actually be the constant ‘Basic’ in this case):

/*
 * Tries to restore game, once database has been opened
 */
this.restore2 = function(scope) {
    var transaction = this.db.transaction('SlidingPuzzleStore');
    var objectStore = transaction.objectStore('SlidingPuzzleStore');
    var self = this;
    var request = objectStore.get(this.storekey);
    request.onerror = function(event) {
        console.log('SlidingPuzzle: error reading from database, ' + request.error.name);
        scope.$apply(function() { self.shuffle(); });
    };
    request.onsuccess = function(event) {
        if (!request.result) {
            console.log('SlidingPuzzle: no saved game for ' + self.storekey);
            scope.$apply(function() { self.shuffle(); });
        }
        else {
            scope.$apply(function() { self.grid = request.result; });
        }
    };
}

If such a record exists, its value replaces the grid array of the puzzle. If there is any error in restoring the game, we just shuffle the tiles as before. Note that grid is a 3×3 array of tile objects, each of which is fairly complex. The great advantage of IndexedDB is that you can store and retrieve such values without having to serialize them.

We use $apply to inform AngularJS that the model has been changed, so the view will be updated appropriately. This is because the update is happening inside a DOM event handler, so AngularJS wouldn’t otherwise be able to detect the change. Any AngularJS application using IndexedDB will probably need to use $apply for this reason.

After any action that would change the grid array, such as a move by the user, the function save is called which adds or updates the record with the appropriate key, based on the updated grid value:

/*
 * Tries to save game
 */
this.save = function() {
    if (!this.db) {
        return;
    }
    var transaction = this.db.transaction('SlidingPuzzleStore', 'readwrite');
    var objectStore = transaction.objectStore('SlidingPuzzleStore');
    var request = objectStore.put(this.grid, this.storekey);
    request.onerror = function(event) {
        console.log('SlidingPuzzle: error writing to database, ' + request.error.name);
    };
    request.onsuccess = function(event) {
        // successful, no further action needed
    };
}

The remaining changes are to call the above functions at appropriate times. You can review the commit showing all of the changes. Note that we are calling restore only for the basic puzzle, not for the three advanced puzzles. We exploit the fact that the three advanced puzzles have an api attribute, so for those we just do the normal shuffling.

What if we wanted to save and restore the advanced puzzles also? That would require some restructuring. In each of the advanced puzzles, the user can adjust the image source file and the puzzle dimensions. So we’d have to enhance the value stored in IndexedDB to include this information. More importantly, we’d need a way to update them from a restore. That’s a bit much for this already lengthy example.

Conclusion

In most cases, web storage is your best bet for storing session data. It’s fully supported by all major browsers, and it offers much greater storage capacity than cookies.

You would use cookies if your server is already set up to use them, or if you need the data to be accessible across all tabs of all windows, but you also want to ensure it will be deleted when the browser is closed.

You already use the fragment identifier to store session data that’s specific to that page, such as the ID of the photo the user is looking at. While you could embed other session data in the fragment identifier, this doesn’t really offer any advantage over web storage or cookies.

Using IndexedDB is likely to require a lot more coding than any of the other techniques. But if the values you’re storing are complex JavaScript objects that would be difficult to serialize, or if you need a transactional model, then it may be worthwhile. Source: Toptal.

The Art of War Applied To Software Development

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

If you work in the software industry, it’s likely that you have heard about the divide and conquer design paradigm, which basically consists of recursively splitting a problem into two or more sub-problems (divide), until these become simple enough to be solved directly (conquer).

What you might not know is that this paradigm originates from an old political strategy (the name is derived from the Latin saying divide et impera) that suggests it is possible to maintain control over one’s subordinates or subjects by encouraging dissent between them.

This strategy has been used by countless politicians and military leaders throughout history, such as Julius Caesar (who used it during the Gallic Wars to defeat the militarily strong Gauls) and Napoleon (the French artillery expert would divide the enemy troops so no portion was stronger than his own troops, and then disrupt their communications, impeding enemy efforts to coordinate and execute attacks).

The Art Of War: Ancient Principles Applied To Development

However, the divide and conquer rule is not the only political strategy that can be applied to software development. Although politics and warfare have little to do with software development, just like politicians and generals, developers must lead subordinates, coordinate efforts between teams, find the best strategies to resolve problems, and administer resources.

Sun Tzu’s principles and teachings have practical applications in politics, business, sports, and software development.

The Art of War is an ancient military treatise written in the fifth century B.C. and attributed to Sun Tzu, an ancient Chinese military strategist, whose theories had a profound influence on both Eastern and Western philosophy.

Despite its age, the text is still included in the syllabus at many military schools in East Asia and it’s listed as a recommended reading in some military academies in the West. The text is divided into 13 chapters, each one devoted to a different aspect of warfare.

However, in addition to warfare, Sun Tzu’s principles and teachings have practical applications in politics, business, sports, and, believe it or not, software development. In fact, you might just be applying some of these principles in your daily routine, without even knowing their origins.

Detailed below, you will find a brief list of basic tactics and tips explained in the Art of War. They can probably be applied to your job in the software industry, or to any of a number of other industries.

Time Is Crucial In Any Campaign

Chapter II, paragraph 2

“When you engage in actual fighting, if victory is long in coming, then men’s weapons will grow dull and their ardor will be damped.”

This principle can be applied to software development, as a rule describing the relationship between the length of development cycles and the developer’s morale.

If a group of developers work on the same projects for months, with no clear goals or end in sight, they may become frustrated and their productivity may decline.

Divide your development roadmap into easily achievable goals and milestones. It’s good for morale.

Software development is an intellectual endeavour, so motivation is the main fuel for productivity. Working every day without perceiving that your work is generating real results can be very demotivating.

As indicated in some agile methodologies, the development roadmap should be divided into several goals and milestones that the team can achieve in short timeframes, giving them a sense of progress and achievement.

Chapter II, paragraph 18

“In war, then, let your great object be victory, not lengthy campaigns.”

This phrase can be interpreted in two ways:

First, it can be seen as a precursor of the UNIX philosophy: Write programs that do one thing and do it well. When developing software, you must always keep in mind the main objective of the program, the key feature that it provides, or the biggest problem that it solves, and ensure proper implementation.

Sometimes you might get inspired and think of a really cool feature to add, but do not forget that applications with a lot of infrequently used features have a disparaging name: bloatware.

Second, the statement may also be considered as a precursor for one of the lean software development principles: Deliver as fast as possible.

The sooner you deliver software without major defects, the sooner you will get feedback from the client, and you will be able to incorporate the changes into the next iteration.

If, on the other hand, you deliver non-working software, you will miss out on valuable feedback, because clients won’t get a chance to test it properly. This will make the next stage of development more difficult, or impossible in situations where your next iteration depends on customer feedback.

No Leadership, No Results

Chapter III, paragraph 11

“Now the general is the bulwark of the State; if the bulwark is complete at all points, the State will be strong; if the bulwark is defective, the State will be weak.”

This quote describes the importance of the role of the manager in a development team: the success of a project depends on the strength of all people involved, and the manager is the bulwark of the project. Responsibility starts at the top.

Responsibility starts at the top. If your team lead is bad, no amount of talented engineers will help.

Even though developers frequently work alone (each sitting behind a computer, with limited communication with coworkers), that does not mean that they don’t need good leadership. Project managers are in charge of keeping the team on track, ensuring effective communication and dispute resolution, and leaders, obviously, define the priorities of the project (among other tasks), so their role should not be underestimated. Neither should their responsibility if something goes wrong. Imagine what would happen to a military leader whose unit failed to perform its duty on the field of battle.

A team can produce great software even if it has a few bad apples in development positions, but that’s unlikely to happen if the project manager is the bad apple, no matter how many rockstar developers the team has.

Chapter VI, paragraph 28

“Do not repeat the tactics which have gained you one victory, but let your methods be regulated by the infinite variety of circumstances.”

Sometimes, when starting a project, it’s tempting to use the same set of technologies we used in previously successful projects (the same programming language, the same libraries, the same server, etc). However, unless the requirements of the new projects are exactly the same as previous ones, this might be the wrong approach.

In programming, as in most domains, the panacea (a supposed remedy capable of curing all diseases) does not exist. There is no single combination of technologies that you can use for solving all problems; each technology has its upsides and downsides.

Of course, learning a new programming language or using an unknown API might initially be expensive but in the long term, the quality of the software will be superior and you will become a better developer.

Chapter XIII, paragraph 27

“Hence it is only the enlightened ruler and the wise general who will use the highest intelligence of the army for purposes of spying, and thereby they achieve great results. Spies are a most important element in war, because on them depends an army’s ability to move.”

This phrase may be interpreted as the importance of using monitoring tools and logging libraries during the maintenance phase.

Although sometimes clients might not think so, development does not end when you get a stable and fully tested release. Software is always evolving, either by fixing bugs, adding new features or improving efficiency.

And there is no better source of information for knowing what changes to make than having spies monitoring the software in production environments, checking which features are used the most, the most common errors and the lengthiest operations.

Error reports, logging entries and usage data are fundamental for detecting bugs, identifying bottlenecks and other issues since it is not always possible to reproduce the same conditions in controlled testing environments.

Teamwork And Motivation

Chapter X, paragraph 24

“He who advances without seeking fame, Who retreats without escaping blame, He whose one aim is to protect his people and serve his lord, The man is a jewel of the Realm.”

Basically, this is the ancient Chinese version of “there’s no I in team”. It is more important to work together with others than to pursue personal gain.

Software development is a complex activity that requires developers to work effectively as a team. A good developer is not the one who fixes the most bugs, implements the most features or finishes assignments ahead of schedule; a good developer is the one who helps the team reach its goals.

Teamwork wins battles. Remember, the best developers are inspiring individuals who help other team members reach their objectives.

Claiming credit for everything you’ve done, not recognizing your errors or blaming others for them, or calling yourself a code ninja might fool some inexperienced managers and might even get you a raise, but you will become a counterproductive member of your team.

Chapter VII, paragraph 21

“Ponder and deliberate before you make a move.”

This phrase indicates the importance of team development meetings, such as those proposed by agile methodologies.

When working on a team, it is important to discuss any major changes before implementing them. It doesn’t matter if you are the team leader or the person with the most experience on the subject; you should always talk with, or at least inform, the rest of the team.

Remember that other developers can give you insight into parts of the software you are less familiar with. Discussing changes in advance also means everyone is aware of their effects, so the team can implement them faster than expected.

Chapter X, paragraph 25

“Regard your soldiers as your children, and they will follow you into the deepest valleys; look upon them as your own beloved sons, and they will stand by you even unto death.”

This quote indicates the importance of motivation, a principle of management that is sometimes forgotten by managers and team leaders. Motivated developers will write better code, work faster, commit less errors and be more willing to put in extra hours.

Motivation must be generated by managers, by taking genuine interest in their subordinates, listening to them, caring about their work-life balance, building positive work environments and caring about their career paths.

Also, you should not mistake motivation for remuneration. Recent studies show that money does not motivate most workers; it is mostly good at attracting and retaining employees, not at making them happy about their jobs. So raises and promotions should not be seen as motivational tools.

Thinking Outside The Box

Chapter V, paragraph 7, 8 and 9

“There are not more than five musical notes, yet the combinations of these five give rise to more melodies than can ever be heard.”

“There are not more than five primary colors, yet in combination they produce more hues than can ever be seen.”

“There are not more than five cardinal tastes, yet combinations of them yield more flavors than can ever be tasted.”

One of the good things about programming is that the possibilities are endless; you can build basically whatever you want (at least, as long as it is not an NP-complete problem).

Mobile apps, websites, games, desktop applications… if you know programming, all of them are within your reach.

If you’re a talented developer, you need to think outside the box. The box is there to prevent incompetent people from wrecking stuff. It’s not for you.

Chapter III, paragraph 1

“In the practical art of war, the best thing of all is to take the enemy’s country whole and intact; to shatter and destroy it is not so good. So, too, it is better to capture an army entire than to destroy it, to capture a regiment, a detachment or a company entire than to destroy them.”

When working on a project with a large code base, it is common to find modules or sections of code that have been implemented with bad practices or by using deprecated libraries. Although it might be tempting to erase (or destroy) this code, it might not be the best idea for several reasons:

  • Legacy code is not necessarily bad, sometimes it is good code that was written when other methodologies and technologies were considered the way to go. However, just because it is old doesn’t mean that it is not working.
  • You might lose time fixing code that still works instead of focusing on fixing other, more critical parts of the code.
  • Unless you are really sure of what you are doing, replacing a section of code that works means you are risking introducing new errors or bugs.

This does not mean that the phrase “If it ain’t broke, don’t fix it” is a good strategy, but that every project has priorities, goals and time constraints. So, if you find code that could be improved, discuss it with the rest of the team or with the project manager in order to figure out when to optimize it.

Chapter VIII, paragraph 3

“There are roads which must not be followed, armies which must not be attacked, towns which must not be besieged, positions which must not be contested, commands of the sovereign which must not be obeyed.”

Even though it does not say so directly, we could interpret this principle as a warning to avoid anti-patterns.

Although using an anti-pattern may resolve a short-term issue, you should remember that in the long-term it is going to be counter-productive. So, no matter how much time you save, how many bugs you fix or how convenient it is for you, avoid them.

Still, there are times you may be tempted to use an anti-pattern to resolve an urgent task, promising yourself you will implement a proper fix when you have more time, but remember one of Murphy’s laws: All quick fixes become permanent changes.

Conclusion

Although developing software is different from commanding soldiers in war or leading a country, all of these endeavours involve solving problems that require teamwork, good leadership, efficiency and long-term solutions.

However, the Art of War is not the only book containing principles that may be applied to software development; Niccolò Machiavelli’s The Prince is another example.

In fact, here is a list of quotes from Machiavelli that are still relevant. Try guessing which are the corresponding principles in the world of software development.

  1. The lion cannot protect himself from traps, and the fox cannot defend himself from wolves. One must therefore be a fox to recognize traps, and a lion to frighten wolves.
  2. Never attempt to win by force what can be won by deception.
  3. Never was anything great achieved without danger.
  4. Whosoever desires constant success must change his conduct with the times.
  5. Men in general judge more from appearances than from reality. All men have eyes, but few have the gift of penetration.
  6. He who wishes to be obeyed must know how to command.
  7. Wisdom consists of knowing how to distinguish the nature of trouble, and in choosing the lesser evil.
  8. There is no avoiding war; it can only be postponed to the advantage of your enemy.
  9. Nature creates few men brave; industry and training makes many.

Fixing the “Heartbleed” OpenSSL Bug: A Tutorial for Sys Admins

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

So what exactly is the bug anyway?

Here’s a very quick rundown:

A potentially critical problem has surfaced in the widely used OpenSSL cryptographic library. It is nicknamed “Heartbleed” because the vulnerability exists in the “heartbeat” extension (RFC 6520) to the Transport Layer Security (TLS) protocol, and because it is a memory leak (“bleed”) issue. User passwords and other important data may have been compromised on any site affected by the vulnerability.

The vulnerability is particularly dangerous for two reasons:

  1. Potentially critical data is leaked.
  2. The attack leaves no trace.

The affected OpenSSL versions are 1.0.1 through 1.0.1f, 1.0.2-beta, and 1.0.2-beta1.

Who is affected by the problem?

Short answer:  Anyone and everyone who uses these versions of OpenSSL.

And that’s a LOT of companies and a LOT of people.

Before we get into our Heartbleed tutorial, here’s just a brief sampling of major companies and websites that are known to have been affected and that needed to patch their sites: Gmail, Yahoo Mail, Intuit TurboTax, USAA, Dropbox, Flickr, Instagram, Pinterest, SoundCloud, Tumblr, GitHub, GoDaddy, Boingo Wireless, and many more.

If you’re wondering how to protect against OpenSSL Heartbleed, start by using the Heartbleed test.

Many, many corporate websites, of companies of all sizes, have been (or still need to be!) patched to fix the Heartbleed vulnerability.

The vulnerability has existed since December 31, 2011, with OpenSSL being used by about 66% of Internet hosts.

As a user, chances are that sites you frequent regularly are affected and that your data may have been compromised. As a developer or sys admin, sites or servers you’re responsible for are likely to have been affected as well.

So what do I need to do to protect myself if I use any of the affected sites?

The main thing you should do immediately is to change your passwords for any of the affected sites for which you have a login account.

And what do I need to do to fix and protect against Heartbleed if I’m the sys admin for a site that uses OpenSSL?

If you’re using OpenSSL 1.0.1, do one of the following immediately:

  • Upgrade to OpenSSL 1.0.1g, or
  • Recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

If you’re using OpenSSL 1.0.2, the vulnerability will be fixed in 1.0.2-beta2 but you can’t wait for that.  In the interim, do one of the following immediately:

  • Revert to OpenSSL 1.0.1g, or
  • Recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

Most distributions (e.g., Ubuntu, Fedora, Debian, Arch Linux) have upgraded their packages already.  In cases like Gentoo, you can upgrade to a patched ebuild.

Once you’ve upgraded (or recompiled) and have established a secure version on your server:

  • Be sure to restart all potentially affected processes.  Major daemons affected by the bug include Apache, Nginx, OpenVPN, and sshd; basically anything and everything linked against libssl. (Note that a restart of these daemons should be sufficient.  There should be no need to rebuild these binaries since they are dynamically linked with the openssl libraries.)
  • Verify that you are no longer vulnerable using tools like this online test or this tool on GitHub or this tool on Pastebin.

If your infrastructure was vulnerable, there are further mitigation steps that you can and should take. A useful list of such mitigations is available here.

More gory Heartbleed details, for those who are interested…

As explained in the GitHub commit for the fix, a missing bounds check in the handling of the TLS heartbeat extension could be exploited to reveal up to 64k of memory to a connected client or server.

While the exposed memory could potentially just be garbage, it could just as easily turn out to be extremely valuable to a malicious attacker.

Here’s how the Heartbleed vulnerability works: An attacker provides a heartbeat payload along with a payload length field. However, no validation is done to confirm that the claimed length matches the payload that was actually provided. If the claimed length is larger than the real payload, an out-of-bounds read occurs, which in turn leaks process memory from the heap.

Leaking previous request headers can be a very serious security problem. Specifically, a prior user’s login post data might still be available with their username, password, and cookies, all of which can then be exposed and exploited. Moreover, although private key leakage through Heartbleed was initially deemed to be unlikely, it has been verified that private SSL keys can be stolen by exploiting this vulnerability.

Fixing Heartbleed is critical as it has been confirmed that private SSL keys can be stolen this way.

The vulnerability is also made possible due to OpenSSL’s silly use of a malloc() cache.  By wrapping away libc functions and not actually freeing memory, the exploitation countermeasures in libc are never given the chance to kick in and render the bug useless.

Additional details on these ways to fix Heartbleed are available here and here.

And, for what it’s worth, here’s a more amusing perspective.

Kudos to the discoverer, Neel Mehta of Google Security, as well as Adam Langley and Bodo Moeller who promptly provided the patch and helped sys admins determine how to fix Heartbleed. I also encourage you to educate yourself on some of the other common web security vulnerabilities to avoid issues in the future.

Avoid the 10 Most Common Mistakes Web Developers Make: A Tutorial for Developers

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Since the term the World Wide Web was coined back in 1990, web application development has evolved from serving static HTML pages to completely dynamic, complex business applications.

Today we have thousands of digital and printed resources that provide step-by-step instructions about developing all kinds of different web applications. Development environments are “smart” enough to catch and fix many mistakes that early developers battled with regularly. There are even many different development platforms that easily turn simple static HTML pages into highly interactive applications.

All of these development patterns, practices, and platforms share common ground, and they are all prone to similar web development issues caused by the very nature of web applications.

The purpose of these web development tips is to shed light on some of the common mistakes made in different stages of the web development process and to help you become a better developer. I have touched on a few general topics that are common to virtually all web developers such as validation, security, scalability, and SEO. You should of course not be bound by the specific examples I’ve described in this guide, as they are listed only to give you an idea of the potential problems you might encounter.

Think of all the web development mistakes that could be avoided at this campout!

Common mistake #1: Incomplete input validation

Validating user input on client and server side is simply a must do! We are all aware of the sage advice “do not trust user input” but, nevertheless, mistakes stemming from validation happen all too often.

One of the most common consequences of this mistake is SQL Injection, which appears in the OWASP Top 10 year after year.

Remember that most front-end development frameworks provide out-of-the-box validation rules that are incredibly simple to use. Additionally, most major back-end development platforms use simple annotations to ensure that submitted data adheres to expected rules. Implementing validation might be time consuming, but it should be part of your standard coding practice and never set aside.
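
As a quick illustration, here is a hedged sketch of server-side validation in a Node/Express-style handler; the db object, the users table, and the email rule are all illustrative assumptions:

app.post('/users', function(req, res) {
    var email = req.body.email;

    // 1. Validate the input before touching the database.
    if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+$/.test(email)) {
        return res.status(400).json({ error: 'A valid email address is required' });
    }

    // 2. Use a parameterized query instead of string concatenation,
    //    so the input can never be interpreted as SQL.
    db.query('INSERT INTO users (email) VALUES (?)', [email], function(err) {
        if (err) return res.status(500).json({ error: 'Database error' });
        res.status(201).json({ email: email });
    });
});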

Common mistake #2: Authentication without proper Authorization

Before we proceed, let’s make sure we are aligned on these two terms. As stated in the 10 Most Common Web Security Vulnerabilities:

Authentication: Verifying that a person is (or at least appears to be) a specific user, since he/she has correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).

Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.

Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do.

Let me demonstrate this issue with an example:

Consider that your browser holds the currently logged-in user’s information in an object similar to the following:

{
    username:'elvis',
    role:'singer',
    token:'123456789'
}

When doing a password change, your application makes the POST:

POST /changepassword/:username/:newpassword

In your /changepassword method, you verify that the user is logged in and the token has not expired. Then you find the user profile based on the :username parameter, and you change your user’s password.

So, you validated that your user is properly logged in, and then you executed his request, thus changing his password. The process seems OK, right? Unfortunately, the answer is NO!

At this point it is important to verify that the user executing the action and the user whose password is changed are the same. Any information stored on the browser can be tampered with, and any advanced user could easily update username:'elvis' to username:'Administrator' without using anything else but built-in browser tools.

So in this case, we just took care of Authentication, making sure that the user provided security credentials. We can even add validation that the /changepassword method can only be executed by authenticated users. However, this is still not enough to protect your users from malicious attempts.

You need to verify the actual requestor and the content of the request within your /changepassword method, and implement proper Authorization of the request, making sure that the user can change only her own data.
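
A minimal sketch of that check in an Express-style handler might look like the following; requireAuthentication and updatePassword are hypothetical helpers, and the new password is sent in the request body rather than the URL:

// Authentication (requireAuthentication) establishes who the caller is and sets req.user.
// Authorization is the extra check that the caller may only change his or her own data.
app.post('/changepassword/:username', requireAuthentication, function(req, res) {
    if (req.user.username !== req.params.username) {
        return res.status(403).json({ error: 'Forbidden' });
    }
    updatePassword(req.params.username, req.body.newpassword, function(err) {
        if (err) return res.status(500).json({ error: 'Could not change password' });
        res.json({ ok: true });
    });
});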

Authentication and Authorization are two sides of the same coin. Never treat them separately.

Common mistake #3: Not ready to scale

In today’s world of high speed development, startup accelerators, and instant global reach of great ideas, having your MVP (Minimum Viable Product) out in the market as soon as possible is a common goal for many companies.

However, this constant time pressure is causing even good web development teams to often overlook certain issues. Scaling is often one of those things teams take for granted. The MVP concept is great, but push it too far, and you’ll have serious problems. Unfortunately, selecting a scalable database and web server and separating all application layers on independent scalable servers is not enough. There are many details you need to think about if you wish to avoid rewriting significant parts of your application later – which becomes a major web development problem.

For example, say that you choose to store uploaded profile pictures of your users directly on a web server. This is a perfectly valid solution–files are quickly accessible to the application, file handling methods are available in every development platform, and you can even serve these images as static content, which means minimum load on your application.

But what happens when your application grows, and you need to use two or more web servers behind a load balancer? Even though you nicely scaled your database storage, session state servers, and web servers, your application scalability fails because of a simple thing like profile images. Thus, you need to implement some kind of file synchronization service (that will have a delay and will cause temporary 404 errors) or another workaround to assure that files are spread across your web servers.

This bloated blowfish is full of web development tips and tutorials to avoid common mistakes.

What you needed to do to avoid the problem in the first place was just use a shared file storage location, database, or any other remote storage solution. It would have probably cost a few extra hours of work to have it all implemented, but it would have been worth the trouble.

Common mistake #4: Wrong or missing SEO

The root cause of incorrect or missing SEO best practices on web sites is misinformed “SEO specialists”. Many web developers believe that they know enough about SEO and that it is not especially complex, but that’s just not true. SEO mastery requires significant time spent researching best practices and the ever-changing rules about how Google, Bing, and Yahoo index the web. Unless you constantly experiment and have accurate tracking and analysis, you are not an SEO specialist, and you should not claim to be one.

Furthermore, SEO is too often postponed as some activity that is done at the end. This comes at a high price of web development issues. SEO is not just related to setting good content, tags, keywords, meta-data, image alt tags, site map, etc. It also includes eliminating duplicate content, having crawlable site architecture, efficient load times, intelligent back linking, etc.

Like with scalability, you should think about SEO from the moment you start building your web application, or you might find that completing your SEO implementation project means rewriting your whole system.

Common mistake #5: Time or processor consuming actions in request handlers

One of the best examples of this mistake is sending email based on a user action. Too often developers think that making an SMTP call and sending a message directly from the user request handler is the solution.

Let’s say you created an online book store, and you expect to start with a few hundred orders daily. As part of your order intake process, you send confirmation emails each time a user posts an order. This will work without problem at first, but what happens when you scale your system and suddenly get thousands of requests sending confirmation emails? You either get SMTP connection timeouts or exceeded quotas, or your application response time degrades significantly as it is now handling emails instead of users.

Any time or processor consuming action should be handled by an external process while you release your HTTP requests as soon as possible. In this case, you should have an external mailing service that is picking up orders and sending notifications.
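
A sketch of that idea in a Node/Express-style handler; createOrder and emailQueue are hypothetical stand-ins for your order logic and whatever queue or message broker you use:

app.post('/orders', function(req, res) {
    createOrder(req.body, function(err, order) {
        if (err) return res.status(500).json({ error: 'Could not create order' });

        // Enqueue the confirmation email; a separate worker process sends it later.
        emailQueue.enqueue({ type: 'order-confirmation', orderId: order.id });

        // Respond immediately, without waiting for any SMTP call.
        res.status(201).json(order);
    });
});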

Common mistake #6: Not optimizing bandwidth usage

Most development and testing takes place in a local network environment. So when you are downloading 5 background images, each 3MB or more, you might not identify an issue with a 1Gbit connection speed in your development environment. But when your users start loading a 15MB home page over 3G connections on their smartphones, you should prepare yourself for a list of complaints and problems.

Optimizing your bandwidth usage could give you a great performance boost, and to gain this boost you probably only need a couple of tricks. There are a few things that many good web developers do by default (a short compression sketch follows the list), including:

  1. Minification of all JavaScript
  2. Minification of all CSS
  3. Server side HTTP compression
  4. Optimization of image size and resolution
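
For instance, server-side HTTP compression in an Express-based application is often a single middleware call. This is only a sketch, assuming the compression package is installed; minification itself belongs in your build step:

var express = require('express');
var compression = require('compression');

var app = express();

// Gzip-compress responses before they go over the wire.
app.use(compression());

// Serve the already minified JavaScript and CSS as static content.
app.use(express.static('public'));

app.listen(3000);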

Common mistake #7: Not developing for different screen sizes

Responsive design has been a big topic in the past few years. Expansion of smartphones with different screen resolutions has brought many new ways of accessing online content, which also comes with a host of web development issues. The number of website visits that come from smartphones and tablets grows every day, and this trend is accelerating.

In order to ensure seamless navigation and access to website content, you must enable users to access it from all types of devices.

There are numerous patterns and practices for building responsive web applications. Each development platform has its own tips and tricks, but there are some frameworks that are platform independent. The most popular is probably Twitter Bootstrap. It is an open-source and free HTML, CSS, and JavaScript framework that has been adopted by every major development platform. Just adhere to Bootstrap patterns and practices when building your application, and you will get responsive web application with no trouble at all.

Common mistake #8: Cross browser incompatibility

The development process is, in most cases, under heavy time pressure. Every application needs to be released as soon as possible, and even good web developers are often focused on delivering functionality over design. Regardless of the fact that most developers have Chrome, Firefox, and IE installed, they use only one of these 90% of the time. It is common practice to use one browser during development and to start testing in other browsers only as the application nears completion. This is perfectly reasonable, assuming you have a lot of time to test and fix the issues that show up at this stage.

However, there are some web development tips that can save you significant time when your application reaches the cross-browser testing phase:

  1. You don’t need to test in all browsers during development; it is time consuming and ineffective. However, that does not mean that you cannot switch browsers frequently. Use a different browser every couple of days, and you will at least recognize major problems early in development phase.
  2. Be careful of using statistics to justify not supporting a browser. There are many organizations that are slow in adopting new software or upgrading. Thousands of users working there might still need access to your application, and they cannot install the latest free browser due to internal security and business policies.
  3. Avoid browser specific code. In most cases there is an elegant solution that is cross-browser compatible.

Common mistake #9: Not planning for portability

Assumption is the mother of all problems! When it comes to portability, this saying is more true than ever. How many times have you seen issues in web development like hard coded file paths, database connection strings, or assumptions that a certain library will be available on the server? Assuming that the production environment will match your local development computer is simply wrong.

Ideal application setup should be maintenance-free:

  1. Make sure that your application can scale and run on a load-balanced multiple server environment.
  2. Allow simple and clear configuration–possibly in a single configuration file.
  3. Handle exceptions when web server configuration is not as expected.

Common mistake #10: RESTful anti patterns

RESTful APIs have taken their place in web development and are here to stay. Almost every web application has implemented some kind of REST service, whether for internal use or for integrating with an external system. But we still see broken RESTful patterns and services that do not adhere to expected practices.

Two of the most common mistakes made when writing a RESTful API are:

  1. Using wrong HTTP verbs. For example using GET for writing data. HTTP GET has been designed to be idempotent and safe, meaning that no matter how many times you call GET on the same resource, the response should always be the same and no change in application state should occur.
  2. Not sending correct HTTP status codes. The best example of this mistake is sending error messages with response code 200.
     HTTP 200 OK
     {
         message:'there was an error'
     }
    

You should only send HTTP 200 OK when the request has not generated an error. In the case of an error, you should send 400, 401, 500 or any other status code that is appropriate for the error that has occurred.
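
A brief Express-style sketch of returning status codes that match the outcome; findBook is a hypothetical data-access helper:

app.get('/books/:id', function(req, res) {
    findBook(req.params.id, function(err, book) {
        if (err) {
            return res.status(500).json({ error: 'Internal server error' });
        }
        if (!book) {
            return res.status(404).json({ error: 'Book not found' });
        }
        res.status(200).json(book);   // 200 only when the request actually succeeded
    });
});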

A detailed overview of standard HTTP status codes can be found here.

Wrap up

Web development is an extremely broad term that can legitimately encompass development of a website, web service, or complex web application.

The main takeaway of this web development guide is the reminder that you should always be careful about authentication and authorization, plan for scalability, and never hastily assume anything – or be ready to deal with a long list of web development problems!

 

REST Security with JWT using Java and Spring Security

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Security

Security is the enemy of convenience, and vice versa. This statement is true for any system, virtual or real, from the physical house entrance to web banking platforms. Engineers are constantly trying to find the right balance for the given use case, leaning to one side or the other. Usually, when a new threat appears, we move towards security and away from convenience. Then, we see if we can recover some lost convenience without reducing the security too much. Moreover, this vicious circle goes on forever.

Security vs. convenience

Security is the enemy of convenience, and vice versa.

Let’s try to see where REST services currently stand regarding security and convenience. REST (which stands for Representational State Transfer) services started off as an extremely simplified approach to Web Services that had huge specifications and cumbersome formats, such as WSDL for describing the service, or SOAP for specifying the message format. In REST, we have none of those. We can describe the REST service in a plain text file and use any message format we want, such as JSON, XML or even plain text again. The simplified approach was applied to the security of REST services as well; no defined standard imposes a particular way to authenticate users.

Although REST services do not specify much, one important constraint is statelessness. It means the server does not keep any client state, with sessions as a good example. Thus, the server replies to each request as if it were the first the client has made. However, even now, many implementations still use cookie-based authentication, which is inherited from standard website architectural design. The stateless approach of REST makes session cookies inappropriate from the security standpoint, but nevertheless, they are still widely used. Besides ignoring the required statelessness, the simplified approach came with an expected security trade-off. Compared to the WS-Security standard used for Web Services, it is much easier to create and consume REST services, hence convenience went through the roof. The trade-off is pretty slim security; session hijacking and cross-site request forgery (XSRF) are the most common security issues.

In trying to get rid of client sessions from the server, some other methods have been used occasionally, such as Basic or Digest HTTP authentication. Both use an Authorization header to transmit user credentials, with some encoding (HTTP Basic) or encryption (HTTP Digest) added. Of course, they carried the same flaws found in websites: HTTP Basic had to be used over HTTPS since username and password are sent in easily reversible base64 encoding, and HTTP Digest forced the use of obsolete MD5 hashing that is proven to be insecure.

Finally, some implementations used arbitrary tokens to authenticate clients. This option seems to be the best we have, for now. If implemented properly, it fixes all the security problems of HTTP Basic, HTTP Digest or session cookies, it is simple to use, and it follows the stateless pattern.

However, with such arbitrary tokens, there’s little standard involved. Every service provider had his or her idea of what to put in the token, and how to encode or encrypt it. Consuming services from different providers required additional setup time, just to adapt to the specific token format used. The other methods, on the other hand (session cookie, HTTP Basic and HTTP Digest) are well known to developers, and almost all browsers on all devices work with them out of the box. Frameworks and languages are ready for these methods, having built-in functions to deal with each seamlessly.

JWT

JWT (shortened from JSON Web Token) is the missing standardization for using tokens to authenticate on the web in general, not only for REST services. Currently, it is in draft status as RFC 7519. It is robust and can carry a lot of information, yet it is still simple to use and its size remains relatively small. Like any other token, JWT can be used to pass the identity of authenticated users between an identity provider and a service provider (which are not necessarily the same systems). It can also carry all the user’s claims, such as authorization data, so the service provider does not need to go into the database or external systems to verify user roles and permissions for each request; that data is extracted from the token.

Here is how JWT is designed to work:

JWT flow

  • The client logs in by sending its credentials to the identity provider.
  • The identity provider verifies the credentials; if all is OK, it retrieves the user data, generates a JWT containing user details and permissions that will be used to access the services, and it also sets the expiration on the JWT (which might be unlimited).
  • The identity provider signs, and if needed, encrypts the JWT and sends it to the client as a response to the initial request with credentials.
  • The client stores the JWT for a limited or unlimited amount of time, depending on the expiration set by the identity provider.
  • The client sends the stored JWT in an Authorization header for every request to the service provider.
  • For each request, the service provider takes the JWT from the Authorization header and decrypts it, if needed, validates the signature, and if everything is OK, extracts the user data and permissions. Based on this data solely, and again without looking up further details in the database or contacting the identity provider, it can accept or deny the client request. The only requirement is that the identity and service providers have an agreement on encryption so that the service can verify the signature or, if the JWT was encrypted, decrypt it.

This flow allows for great flexibility while still keeping things secure and easy to develop. By using this approach, it is easy to add new server nodes to the service provider cluster, initializing them with only the ability to verify the signature and decrypt the tokens by providing them a shared secret key. No session replication, database synchronization or inter-node communication is required. REST in its full glory.

The main difference between JWT and other arbitrary tokens is the standardization of the token’s content. Another recommended approach is to send the JWT token in the Authorization header using the Bearer scheme. The content of the header should look like this:

Authorization: Bearer <token>
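
To make the token’s structure concrete: a JWT consists of three base64url-encoded parts separated by dots (header.payload.signature). Here is a small browser-side sketch of inspecting the claims of an unencrypted token; the claim names shown are purely illustrative:

// Decode the payload (middle part) of a JWT to inspect its claims.
function decodeJwtPayload(token) {
    var b64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
    while (b64.length % 4) { b64 += '='; }    // restore base64 padding
    return JSON.parse(atob(b64));
}

// Example shape of the decoded claims (illustrative values):
// { "sub": "john.doe", "roles": ["USER"], "exp": 1464967125 }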

Implementation

For REST services to work as expected, we need a slightly different authorization approach compared to classic, multi-page websites.

Instead of triggering the authentication process by redirecting to a login page when a client requests a secured resource, the REST server authenticates all requests using the data available in the request itself, the JWT token in this case. If such an authentication fails, redirection makes no sense. The REST API simply sends an HTTP code 401 (Unauthorized) response and clients should know what to do; for example, a browser will show a dynamic div to allow the user to supply the username and password.

On the other hand, after a successful authentication in classic, multi-page websites, the user is redirected by using HTTP code 301 (Moved permanently), usually to a home page or, even better, to the page the user initially requested that triggered the authentication process. With REST, again this makes no sense. Instead we would simply continue with the execution of the request as if the resource was not secured at all, return HTTP code 200 (OK) and expected response body.
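
On the client side this keeps things simple as well. Below is a rough sketch of a fetch-based API call that sends the JWT and reacts to a 401 without any redirect; showLoginDialog and the storage key are illustrative assumptions:

function apiGet(url) {
    return fetch(url, {
        headers: { 'Authorization': 'Bearer ' + window.localStorage.getItem('accessToken') }
    }).then(function(response) {
        if (response.status === 401) {
            showLoginDialog();                 // ask for credentials in place, no redirect
            throw new Error('Not authenticated');
        }
        return response.json();                // 200: just use the response body
    });
}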

Spring Security

REST Security with JWT, Spring Security and Java

Now, let’s see how we can implement a JWT token based REST API using Java and Spring, while trying to reuse Spring Security’s default behavior where we can. As expected, the Spring Security framework comes with many ready-to-plug-in classes that deal with “old” authentication mechanisms: session cookies, HTTP Basic, and HTTP Digest. However, it lacks native support for JWT, and we need to get our hands dirty to make it work.

First, we start with the usual Spring Security filter definition in web.xml:

<filter>
	<filter-name>springSecurityFilterChain</filter-name>
	<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
	<filter-name>springSecurityFilterChain</filter-name>
	<url-pattern>/*</url-pattern>
</filter-mapping>

Note that the name of the filter must be exactly springSecurityFilterChain for the rest of the Spring config to work out of the box.

Next comes the XML declaration of the Spring beans related to security. In order to simplify the XML, we will set the default namespace to security by adding xmlns="http://www.springframework.org/schema/security" to the root XML element. The rest of the XML looks like this:

    <global-method-security pre-post-annotations="enabled" />  (1)
    
    <http pattern="/api/login" security="none"/>   (2)
    <http pattern="/api/signup" security="none"/>

    <http pattern="/api/**" entry-point-ref="restAuthenticationEntryPoint" create-session="stateless"> (3)
        <csrf disabled="true"/>  (4)
        <custom-filter before="FORM_LOGIN_FILTER" ref="jwtAuthenticationFilter"/>  (5)
    </http>
    
    <beans:bean id="jwtAuthenticationFilter" class="com.toptal.travelplanner.security.JwtAuthenticationFilter">  (6)
        <beans:property name="authenticationManager" ref="authenticationManager" />
        <beans:property name="authenticationSuccessHandler" ref="jwtAuthenticationSuccessHandler" />  (7)
    </beans:bean>

    <authentication-manager alias="authenticationManager">
        <authentication-provider ref="jwtAuthenticationProvider" />  (8)
    </authentication-manager>
  • (1) In this line, we activate the @PreFilter, @PreAuthorize, @PostFilter and @PostAuthorize annotations on any Spring beans in the context.
  • (2) We define the login and signup endpoints to skip security; even “anonymous” should be able to do these two operations.
  • (3) Next, we define the filter chain applied to all requests while adding two important configs: Entry point reference and setting the session creation to stateless (we do not want the session created for security purposes as we are using tokens for each request).
  • (4) We do not need CSRF protection because our tokens are immune to it.
  • (5) Next, we plug in our special authentication filter within the Spring’s predefined filter chain, just before the form login filter.
  • (6) This bean is the declaration of our authentication filter; since it extends Spring’s AbstractAuthenticationProcessingFilter, we need to declare it in XML to wire its properties (autowiring does not work here). We will explain later what the filter does.
  • (7) The default success handler of AbstractAuthenticationProcessingFilter is not good enough for REST purposes because it redirects the user to a success page; that is why we set our own here.
  • (8) The declaration of the provider used by the authenticationManager, which our filter relies on to authenticate users.

Now let’s see how we implement the specific classes declared in the XML above. Note that Spring will wire them for us. We start with the simplest ones.

RestAuthenticationEntryPoint.java

public class RestAuthenticationEntryPoint implements AuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException {
        // This is invoked when user tries to access a secured REST resource without supplying any credentials
        // We should just send a 401 Unauthorized response because there is no 'login page' to redirect to
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");
    }
}

As explained above, this class just returns HTTP code 401 (Unauthorized) when authentication fails, overriding Spring’s default behavior of redirecting to a login page.

JwtAuthenticationSuccessHandler.java

public class JwtAuthenticationSuccessHandler implements AuthenticationSuccessHandler {

    @Override
    public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) {
        // We do not need to do anything extra on REST authentication success, because there is no page to redirect to
    }

}

This simple override removes the default behavior on successful authentication (redirecting to the home page or any other page the user requested). If you are wondering why we do not need to override the AuthenticationFailureHandler as well, it is because the default implementation will not redirect anywhere if its redirect URL is not set, so we simply avoid setting the URL, which is good enough.

JwtAuthenticationFilter.java

public class JwtAuthenticationFilter extends AbstractAuthenticationProcessingFilter {

    public JwtAuthenticationFilter() {
        super("/**");
    }

    @Override
    protected boolean requiresAuthentication(HttpServletRequest request, HttpServletResponse response) {
        return true;
    }

    @Override
    public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) throws AuthenticationException {

        String header = request.getHeader("Authorization");

        if (header == null || !header.startsWith("Bearer ")) {
            throw new JwtTokenMissingException("No JWT token found in request headers");
        }

        String authToken = header.substring(7);

        JwtAuthenticationToken authRequest = new JwtAuthenticationToken(authToken);

        return getAuthenticationManager().authenticate(authRequest);
    }

    @Override
    protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authResult)
            throws IOException, ServletException {
        super.successfulAuthentication(request, response, chain, authResult);

        // As this authentication is in HTTP header, after success we need to continue the request normally
        // and return the response as if the resource was not secured at all
        chain.doFilter(request, response);
    }
}

This class is the entry point of our JWT authentication process; the filter extracts the JWT token from the request headers and delegates authentication to the injected AuthenticationManager. If the token is not found, an exception is thrown, which stops the request from being processed. We also need an override for successful authentication because the default Spring flow would stop the filter chain and proceed with a redirect. Keep in mind that we need the chain to execute fully, including generating the response, as explained above.
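
The JwtTokenMissingException thrown above (and the JwtTokenMalformedException used later in the provider) is not listed in the article. A minimal sketch, assuming both live alongside the filter in the same security package, could simply extend Spring Security’s AuthenticationException so they are handled by the standard authentication failure flow:

// JwtTokenMissingException.java (illustrative sketch; not part of the original listing)
import org.springframework.security.core.AuthenticationException;

public class JwtTokenMissingException extends AuthenticationException {
    public JwtTokenMissingException(String msg) {
        super(msg);
    }
}

// JwtTokenMalformedException.java (illustrative sketch; not part of the original listing)
import org.springframework.security.core.AuthenticationException;

public class JwtTokenMalformedException extends AuthenticationException {
    public JwtTokenMalformedException(String msg) {
        super(msg);
    }
}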

JwtAuthenticationProvider.java

public class JwtAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider {

    @Autowired
    private JwtUtil jwtUtil;

    @Override
    public boolean supports(Class<?> authentication) {
        return (JwtAuthenticationToken.class.isAssignableFrom(authentication));
    }

    @Override
    protected void additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
    }

    @Override
    protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
        JwtAuthenticationToken jwtAuthenticationToken = (JwtAuthenticationToken) authentication;
        String token = jwtAuthenticationToken.getToken();

        User parsedUser = jwtUtil.parseToken(token);

        if (parsedUser == null) {
            throw new JwtTokenMalformedException("JWT token is not valid");
        }

        List<GrantedAuthority> authorityList = AuthorityUtils.commaSeparatedStringToAuthorityList(parsedUser.getRole());

        return new AuthenticatedUser(parsedUser.getId(), parsedUser.getUsername(), token, authorityList);
    }

}

In this class, we are using Spring’s default AuthenticationManager, but we inject it with our own AuthenticationProvider that does the actual authentication process. To implement this, we extend the AbstractUserDetailsAuthenticationProvider, which only requires us to return UserDetails based on the authentication request, in our case, the JWT token wrapped in the JwtAuthenticationToken class. If the token is not valid, we throw an exception. However, if it is valid and JwtUtil successfully verifies and parses it, we extract the user details (we will see exactly how in the JwtUtil class), without accessing the database at all. All the information about the user, including his or her roles, is contained in the token itself.
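
The JwtAuthenticationToken wrapper referenced above is not listed in the article either. Because retrieveUser() receives a UsernamePasswordAuthenticationToken and casts it, a minimal sketch could extend that class and simply carry the raw token string:

// JwtAuthenticationToken.java (illustrative sketch; not part of the original listing)
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;

public class JwtAuthenticationToken extends UsernamePasswordAuthenticationToken {

    private final String token;

    public JwtAuthenticationToken(String token) {
        // No principal or credentials are known yet; the provider fills in the details after parsing the token
        super(null, null);
        this.token = token;
    }

    public String getToken() {
        return token;
    }
}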

JwtUtil.java

public class JwtUtil {

    @Value("${jwt.secret}")
    private String secret;

    /**
     * Tries to parse specified String as a JWT token. If successful, returns User object with username, id and role prefilled (extracted from token).
     * If unsuccessful (token is invalid or not containing all required user properties), simply returns null.
     * 
     * @param token the JWT token to parse
     * @return the User object extracted from specified token or null if a token is invalid.
     */
    public User parseToken(String token) {
        try {
            Claims body = Jwts.parser()
                    .setSigningKey(secret)
                    .parseClaimsJws(token)
                    .getBody();

            User u = new User();
            u.setUsername(body.getSubject());
            u.setId(Long.parseLong((String) body.get("userId")));
            u.setRole((String) body.get("role"));

            return u;

        } catch (JwtException | ClassCastException e) {
            return null;
        }
    }

    /**
     * Generates a JWT token containing username as subject, and userId and role as additional claims. These properties are taken from the specified
     * User object. The token's validity is infinite.
     * 
     * @param u the user for which the token will be generated
     * @return the JWT token
     */
    public String generateToken(User u) {
        Claims claims = Jwts.claims().setSubject(u.getUsername());
        claims.put("userId", u.getId() + "");
        claims.put("role", u.getRole());

        return Jwts.builder()
                .setClaims(claims)
                .signWith(SignatureAlgorithm.HS512, secret)
                .compact();
    }
}

Finally, the JwtUtil class is in charge of parsing the token into a User object and generating the token from a User object. It is straightforward since it uses the jjwt library to do all the JWT work. In our example, we simply store the username, user ID and user roles in the token. We could also store more arbitrary data and add more security features, such as the token’s expiration (see the sketch below). Parsing of the token is used in the AuthenticationProvider as shown above. The generateToken() method is called from the login and signup REST services, which are unsecured and will not trigger any security checks or require a token to be present in the request. In the end, it generates the token that will be returned to the clients, based on the user.
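
For example, a minimal sketch of adding an expiration to generateToken() with the same jjwt API might look like this. The one-hour lifetime and the java.util.Date import are illustrative assumptions; an expired token will then cause parseClaimsJws() to throw an ExpiredJwtException, which our existing catch block already maps to null.

    public String generateToken(User u) {
        Claims claims = Jwts.claims().setSubject(u.getUsername());
        claims.put("userId", u.getId() + "");
        claims.put("role", u.getRole());

        return Jwts.builder()
                .setClaims(claims)
                // Illustrative one-hour lifetime; requires java.util.Date
                .setExpiration(new Date(System.currentTimeMillis() + 60 * 60 * 1000))
                .signWith(SignatureAlgorithm.HS512, secret)
                .compact();
    }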

Conclusion

Although the old, standardized security approaches (session cookie, HTTP Basic, and HTTP Digest) will work with REST services as well, they all have problems that it would be nice to avoid by using a better standard. JWT arrives just in time to save the day, and most importantly, it is very close to becoming an IETF standard.

JWT’s main strength is handling user authentication in a stateless, and therefore scalable, way, while keeping everything secure with up-to-date cryptography standards. Storing claims (user roles and permissions) in the token itself creates huge benefits in distributed system architectures where the server that handles the request has no access to the authentication data source.

5 Golden Rules for Great Web API Design

http://www.toptal.com/#explore-competent-developers-today

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Ever found yourself wondering “what were they thinking?” when integrating a web service via its API? If not, you’ve been far luckier than I have.

Any software developer knows how easy it is to let a project devolve into spaghetti code, and web APIs are no less prone to resulting in a tangled web. But it doesn’t need to be that way. In truth, it’s possible to design great web APIs that people will actually enjoy using, and that you’ll enjoy creating as well. But how? The answer to that question is what this post is all about.

Perspective

Most of the time when you’re building solutions, you’re designing for end users who are not programmers, or who are generally not technically sophisticated. You’re giving them a graphical interface and, if you’ve been doing your job right, you’ve gleaned a pretty good idea from them of what they need the interface to do.

But API development is different. You’re designing an interface for programmers, probably without even knowing who they are. And whoever they are, they will have the technical sophistication (or at least will think they have the technical sophistication) to point out every little flaw in your software. Your users are likely to be as critical of your API as you would be of theirs, and will thoroughly enjoy critiquing it.

And therein lies part of the irony, by the way. If anyone should understand how to make a web API that’s easy-to-use, it’s you. After all, you’re a software engineer just like the users of your API, so you share their perspective. Don’t you?

Well, while you certainly understand their perspective, you don’t necessarily share their perspective. When you’re developing or enhancing your API, you have the perspective of an API designer whereas they have the perspective of an API user.

API designers typically focus on questions like “What does this service need to do?” or “What does this service need to provide?”, while API users are focused on “How can I use this API to do what I need?”, or more accurately, “How can I spend the bare minimum of effort to get what I need out of this API?”.

These different questions lead to two vastly different perspectives. As a result, the necessary prerequisite to designing a great API is to shift your perspective from that of the API designer to that of the API user. In other words, continually ask yourself the questions you would naturally ask if you were your own user. Rather than thinking about what your API can do, think about the different ways it may need or want to be used and then focus on making those tasks as easy as possible for your API’s users.

While this may sound easy and obvious, it’s astounding how infrequently APIs appear to be designed this way. Think about the APIs you’ve encountered in your career. How frequently do they appear to have been designed with this perspective in mind? Web API design can be challenging.

So with that said, let’s proceed and talk about the 5 Golden Rules for Designing a Great Web API, namely:

  1. Documentation
  2. Stability and Consistency
  3. Flexibility
  4. Security
  5. Ease of Adoption

A diagram of users accessing a well-designed web API

Rule 1: Documentation

Documentation. Yes, I’m starting here.

Do you hate documentation? Well, I can empathize, but put on your “user perspective” hat and I’ll bet that the one thing you hate more than having to write documentation is having to try to use an undocumented API. I rest my case.

The bottom line is that, if you want anyone to use your API, documentation is essential. You’ve simply got to get this right. It’s the first thing users will see, so in some ways it’s like the gift wrap. Present well, and people are more likely to use your API and put up with any idiosyncrasies.

So how do we write good documentation?

The relatively easy part is documenting the API methods themselves; i.e., example requests and responses, along with descriptions of each of the elements in both. Fortunately, there are an increasing number of software tools that facilitate and simplify the task of generating documentation. Or you can write something yourself that introspects your API, endpoints, and functions, and generates the corresponding documentation for you.

But what separates great documentation from adequate documentation is the inclusion of usage examples and, ideally, tutorials. This is what helps the user understand your API and where to start. It orients them and helps them load your API into their brain.

For example, if the developers of Twilio were to list out every class, every method, and every possible response to their API, but didn’t bother to mention that you can send an SMS, track a call, or buy a phone number through their API, it would take a really long time for the API user to find that information and understand it cohesively. Can you imagine sorting through a giant tree of classes and methods without any insight into what they were used for, other than their name? Sounds terrible right? But that’s exactly what so many API providers do, thereby leaving their APIs opaque to anybody but themselves. The Rackspace CloudFiles developer and API guide is one such example; it’s difficult to get your bearings unless you already understand what they’re doing and what they’re providing.

So write concise tutorials that help get the developer up and running quickly, with at least a skeleton of what they’re trying to do, and then point them in the direction of the more detailed, fully-documented list of functionality so they can expand on what they have.

Once you’re done with your documentation, be sure to validate that it makes sense to people other than yourself. Send it out to other developers in your network, give them no instruction other than pointing them to the documentation, and ask them to follow a tutorial or build something really basic in about 15 minutes. If they can’t have a basic integration with your API in 15 minutes, you have more work to do.

For some noteworthy examples of excellent and detailed documentation, check out Twilio, Django, and MailChimp. None of these products are necessarily the best in their markets (although they are all good products), yet they do distinguish themselves by providing some of the best documentation within their markets, which has certainly facilitated their wide acceptance and market share.

Rule 2: Stability and Consistency

If you’ve ever used Facebook’s API, you know how often they deprecate and completely rewrite their APIs. No matter how much you respect their hacker culture, or their product, theirs is not a developer-friendly perspective. The reason they are still successful is that they have a billion users, not that their API is great.

But you probably don’t have the luxury of such a mammoth user base and market share, so you’re going to need to have a much less volatile API, keeping old versions running and supported for quite a long period of time. Maybe even years. So toward that end, here are some tips and tricks.

Let’s say, for example, that your API is accessible via the URL http://myapisite.com/api/widgets and provides its response in JSON format. While this may seem fine at first blush, what happens when you need to modify the format of the JSON response? Everyone that’s already integrated with you is going to break. Oops.

So do some planning ahead, and version your API from the outset, explicitly incorporating a version number into the URL (e.g., http://myapisite.com/api/widgets?version=1 or http://myapisite.com/api/widgets/v1) so that people can rely on version 1 working and can upgrade to any subsequent version when they’re ready to do so. If you need to phase out a prior version at some point, go ahead, but give plenty of notice and offer some sort of transition plan.

A good URL scheme will include major versions in the URL. Any change to the output format or supported data types should result in bumping up to a new major version. Generally, it’s acceptable to keep the same version if all you are doing is adding keys or nodes to your output, but to be on the safe side, any time the output changes, bump a version.
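
As a rough illustration only (the article itself is framework-agnostic), a hypothetical Spring MVC controller could pin the major version in its request mapping so that a breaking change ships as a separate /api/v2 controller. The WidgetV1Controller name and its contents are made up for this sketch:

import java.util.Arrays;
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical versioned endpoint; version 1 clients keep working even after /api/v2/widgets is introduced
@RestController
@RequestMapping("/api/v1/widgets")
public class WidgetV1Controller {

    @GetMapping
    public List<String> listWidgets() {
        // Version 1 response format; any change to this format should go into a new major version
        return Arrays.asList("widget-1", "widget-2");
    }
}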

In addition to being stable over time, APIs need to be internally consistent. I’ve seen many APIs that change parameter names or methods of POSTing data, depending on the endpoint that is being used. Instead, you should handle common parameters globally within your API and use inheritance or a shared architecture to reuse the same naming conventions and data handling consistently throughout your API.

Finally, you need to record and publish a changelog to show differences between versions of your API so that users know exactly how to upgrade.

Rule 3: Flexibility

Garbage in, garbage out (GIGO) is a well-known mantra to most programmers. As applied to web API design, this guiding principle tends to dictate a fairly rigid approach to request validation. Sounds great, right? No mess, no problem.

Yet as with everything, there needs to be some balance. As it is not possible to anticipate every way that users will want to employ your service, and since not every client platform is consistent (i.e., not every platform has very good JSON support, a decent OAuth library, etc.), it’s good to have at least some degree of flexibility or tolerance with regard to your input and output constraints.

For example, many APIs will support a variety of output formats, like JSON, YAML, XML, et al., but will only support specifying the format in the URL itself. In the spirit of remaining flexible, you could keep the URL-based option (e.g., /api/v1/widgets.json), but also read and recognize an Accept: application/json HTTP header, or support a querystring variable such as ?format=JSON, and so on.

And while we’re at it, why not allow for the format specified to be case-insensitive, so the user could specify ?format=json as well? That’s a classic example of a way to alleviate unnecessary frustration for the user of your API.
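
To make this concrete, here is a minimal, purely illustrative sketch of tolerant format negotiation in a Java servlet environment. The FormatResolver class name and the JSON/XML defaults are assumptions made for this sketch, not part of the original article.

import javax.servlet.http.HttpServletRequest;

// Illustrative helper: accept the output format from either a query string parameter or the Accept header
public class FormatResolver {

    public String resolveFormat(HttpServletRequest request) {
        // A ?format=JSON or ?format=json query string parameter wins if present (compared case-insensitively)
        String queryFormat = request.getParameter("format");
        if (queryFormat != null) {
            return queryFormat.toLowerCase();
        }

        // Otherwise fall back to the Accept header
        String accept = request.getHeader("Accept");
        if (accept != null && accept.toLowerCase().contains("application/xml")) {
            return "xml";
        }

        // Default to JSON when nothing is specified
        return "json";
    }
}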

Another example is allowing for different ways of inputting variables. So, just like you have a variety of output formats, allow for a variety of input formats as well (e.g., plain POST variables, JSON, XML, etc.). You should at least be supporting standard POST variables, and many modern applications support JSON as well, so those two are a good place to start.

The point here is that you shouldn’t assume that everyone shares your technical preferences. With a little research into how other APIs work, and through dialog with other developers, you can glean other valuable alternatives that are useful and include them in your API.

Rule 4: Security

Security is obviously one of the most important things to build into your web service, but so many developers make it ridiculously hard to use. As the API provider, you should be offering usable examples of how to authenticate and authorize when accessing your API. This should not be a difficult issue that an end user spends hours working on. Make it your goal that they either don’t have to write any code, or it takes them less than 5 minutes to write it.

For most APIs, I prefer a simple token-based authentication, where the token is a random hash assigned to the user and they can reset it at any point if it has been stolen. Allow the token to be passed in through POST or an HTTP header. For example, the user could (and should) send an SHA-1 token as a POST variable, or as a header in a format such as “Authorization: da39a3ee5e6b4b0d3255bfef95601890afd80709”.

Also, choose a secure token, not a short numeric identifier. Something irreversible is best. For example, it’s relatively simple to just generate an SHA token during user creation and store it in the database. Then, you can simply query your database for any user matching that token. You could also generate the token from a unique identifier and a salt value, something like SHA(User.ID + "abcd123"), and then query for any user that matches; e.g., where TokenFromPost = SHA(User.ID + "abcd123").
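
As a small, self-contained illustration of the salted approach above (the ApiTokenUtil class, the salt value, and the hex encoding are all assumptions made for this sketch):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative helper for generating an irreversible, salted API token at user-creation time
public class ApiTokenUtil {

    private static final String SALT = "abcd123";

    public static String generateToken(long userId) throws Exception {
        // SHA(User.ID + "abcd123"), hex-encoded so it can travel in a POST variable or an HTTP header
        MessageDigest sha = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha.digest((userId + SALT).getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Store this value alongside the user; later, look up the user whose stored token matches the one the client sent
        System.out.println(generateToken(42L));
    }
}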

Another very good option is OAuth 2 + SSL. You should be using SSL anyway, but OAuth 2 is reasonably simple to implement on the server side, and libraries are available for many common programming languages.

If the API you have made is supposed to be accessible on a public website via JavaScript, you need to also make sure you validate a list of URLs per-account for the token. That way, nobody can go inspect the calls to your API, steal the token from your user, and go use it for themselves.

Here are some other important things to keep in mind:

  • Whitelisting Functionality. APIs generally allow you to do basic create, read, update, and delete operations on data. But you don’t want to allow these operations for every entity, so make sure each has a whitelist of allowable actions. Make sure, for example, that only authorized users can run commands like /user/delete/<id>. Similarly, all useful headers that are sent in the user’s request need to be validated against a whitelist as well. If you are allowing Content-type headers, verify that whatever the user sends in actually matches a whitelist of supported content types. If it doesn’t, then send back an error message such as a 406 Not Acceptable response. Whitelisting is important as a lot of APIs are automatically generated, or use a blacklist instead, which means you have to be explicit about what you don’t want. However, the golden rule of security is to start with absolutely nothing, and only explicitly allow what you do want.
  • Protect yourself against Cross-Site Request Forgery (CSRF). If you are allowing session or cookie authentication, you need to make sure that you’re protecting yourself from CSRF attacks. The Open Web Application Security Project (OWASP) provides useful guidance on ways to preclude these vulnerabilities.
  • Validate access to resources. In every request, you need to verify that a user is in fact allowed access to the specific item they are referencing. So, if you have an endpoint to view a user’s credit card details (e.g., /account/card/view/152423), be sure that the ID “152423” is referencing a resource that the user really is authorized to access.
  • Validate all input. All input from a user needs to be securely parsed, preferably using a well-known library if you are using complicated input like XML or JSON. Don’t build your own parser, or you’re in for a world of hurt.

Rule 5: Ease Of Adoption

This is really the most important rule in the bunch, and builds on all the others. As I mentioned during the documentation rule, try this out with people that are new to your API. Make sure that they can get up and running with at least a basic implementation of your API, even if it’s just following a tutorial, within a few minutes. I think 15 minutes is a good goal.

Here are some specific recommendations to ease and facilitate adoption of your API:

  • Make sure people can actually use your API and that it works the first time, every time. Have new people try to implement your API occasionally to verify that it’s not confusing in some way that you’ve become immune to.
  • Keep it simple. Don’t do any fancy authentication. Don’t do some crazy custom URL scheme. Don’t reinvent SOAP, or JSON, or REST, or anything. Use all the tools you can that have already been implemented and are widely accepted, so that developers only have to learn your API, not your API + 10 obscure new technologies.
  • Provide language-specific libraries to interface with your service. There are some nice tools to automatically generate a library for you, such as Alpaca or Apache Thrift. Currently Alpaca supports Node, PHP, Python, and Ruby. Thrift supports C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and more.
  • Simplify any necessary signup. If you are not developing an open source API, or if there is a signup process of any sort, make sure that upon signup, a user is very quickly directed to a tutorial. And make the signup process completely automated without any need for human interaction on your part.
  • Provide excellent support. A big barrier to adoption is lack of support. How will you handle and respond to a bug report? What about unclear documentation? An unsophisticated user? Forums, bug trackers, and email support are fantastic starts, but do make sure that when someone posts a bug, you really address it. Nobody wants to see a ghost town forum or a giant list of bugs that haven’t been addressed.

Web API Wrap-up

Web services and their APIs abound. Unfortunately, the vast majority are difficult to use. Reasons range from poor design, to lack of documentation, to volatility, to unresolved bugs, or, in some cases, all of the above.

Following the guidance in this post will help ensure that your web API is clean, well-documented, and easy-to-use. Such APIs are truly rare and are therefore that much more likely to be widely adopted and used.

System Security Expert Job Description Template

The following hiring guide is from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

System Security professionals are experts on preventing and mitigating security breaches that may arise from vulnerabilities in computer systems. This field covers a range of areas of expertise, including information security, network security, and mobile security. System Security professionals must “think like a hacker” in order to anticipate the main vectors of likely attack by malicious outsiders. They should have a keen eye for the most minute details, plus a well-rounded and in-depth knowledge of computer systems and the tools of the trade.

There is a shortage of formal training available to those who wish to go into the security field. While a formal education in computer systems is desirable, many of the best security experts are self-taught hobbyists when it comes to their familiarity with “hacking” and the techniques and vulnerabilities exploited by attackers. Since your security expert will be intimately aware of all of your system’s weakest points, it is critical that you can trust him or her.

This article provides a sample System Security Expert job description that will help you write the perfect job ad, and ensure that you find and hire the person that matches your specific criteria.

System Security Expert – Job Description and Ad Template

Company Introduction

{{Write a short and catchy paragraph about your company. Provide information about the company’s culture, perks, and benefits. Mention office hours, remote working possibilities, and anything else that makes your company interesting.}}

Job Description

We are looking for a Systems Security Expert with emphasis on {{WLAN/networks/web applications/penetration testing}}. You must be passionate about security and staying ahead of all possible threats to the system. You will be working with our development team to ensure that our network and technology stack is watertight. You will become thoroughly familiar with our architecture, helping to direct its development. You will provide regular reports of potential vulnerabilities and risk profiles. Good communication skills are critical to efficiently coordinate with our team and build a secure end product.

Responsibilities

  • Actively look for security vulnerabilities in our application and network, reporting issues and describing possible solutions.
  • Design and maintain our security infrastructure.
  • Stay up to date with security news, keeping an eye out for the latest vulnerabilities and remedies emerging in the field.
  • Actively liaise with the development team to ensure a secure architecture.
  • Actively liaise with the development team to ensure thorough automated testing of all source code (e.g., via Test-Driven Development).
  • Provide regular reports auditing our current services and latest changes, as well as our internal practices.
  • Monitor our server traffic, ticketing and reporting unusual packets.
  • {{Add other responsibilities here that are relevant.}}

Skills and Qualifications

  • {{Relevant education in Computer Science or Computer Engineering.}}
  • {{Relevant certification, such as CCNA or OWSE.}}
  • Thorough understanding of potential attack vectors such as XSS, injection, hijacking, social engineering, and so on.
  • Thorough understanding of cryptographic primitives and their underlying principles.
  • Sound command of {{your application’s environment}} and the command line.
  • Thorough understanding of networking protocols, such as TCP/IP.
  • Thorough understanding of HTTP and HTTPS, as well as their underlying implementations.
  • Thorough understanding of Content Delivery Networks and their integration into applications.
  • Basic programming proficiency, sufficient to write and execute scripts from the command line.
  • Active in the security community. Regularly attends meetups or conferences.
  • Contributes to open source projects, or creates purpose built tools for security work.
  • Working understanding of Agile Development processes.
  • Excellent communication skills. Ability to translate complex ideas into simple solutions on paper.
  • {{Include any framework, library, or any other technology related to your development stack.}}