
The Art of War Applied To Software Development

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

If you work in the software industry, it’s likely that you have heard about the divide and conquer design paradigm, which basically consists of recursively splitting a problem into two or more sub-problems (divide), until these become simple enough to be solved directly (conquer).
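
To make the paradigm concrete, here is a minimal sketch in Java using merge sort, the textbook example of divide and conquer (the class and method names are just illustrative):

import java.util.Arrays;

public class MergeSort {

    // Divide: split the problem until the sub-problems are trivially small,
    // then conquer: merge the sorted halves back together.
    public static int[] sort(int[] values) {
        if (values.length <= 1) {
            return values; // simple enough to be solved directly
        }
        int middle = values.length / 2;
        int[] left = sort(Arrays.copyOfRange(values, 0, middle));
        int[] right = sort(Arrays.copyOfRange(values, middle, values.length));
        return merge(left, right);
    }

    private static int[] merge(int[] left, int[] right) {
        int[] result = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) {
            result[k++] = left[i++];
        }
        while (j < right.length) {
            result[k++] = right[j++];
        }
        return result;
    }
}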

What you might not know is that this paradigm originates from an old political strategy (the name is derived from the Latin saying divide et impera) that suggests it is possible to maintain control over one’s subordinates or subjects by encouraging dissent between them.

This strategy has been used by countless politicians and military leaders throughout history, such as Julius Caesar (who used it during the Gallic Wars to defeat the militarily strong Gauls) and Napoleon (the French artillery expert would divide the enemy troops so no portion was stronger than his own troops, and then disrupt their communications, impeding enemy efforts to coordinate and execute attacks).

The Art Of War: Ancient Principles Applied To Development

However, the divide and conquer rule is not the only political strategy that can be applied to software development. Although politics and warfare have little to do with software development, just like politicians and generals, developers must lead subordinates, coordinate efforts between teams, find the best strategies to resolve problems, and administer resources.

Sun Tzu’s principles and teachings have practical applications in politics, business, sports, and software development.

The Art of War is an ancient military treatise written in the fifth century B.C. and attributed to Sun Tzu, an ancient Chinese military strategist, whose theories had a profound influence on both Eastern and Western philosophy.

Despite its age, the text is still included in the syllabus at many military schools in East Asia and it’s listed as a recommended reading in some military academies in the West. The text is divided into 13 chapters, each one devoted to a different aspect of warfare.

However, in addition to warfare, Sun Tzu’s principles and teachings have practical applications in politics, business, sports, and, believe it or not, software development. In fact, you might just be applying some of these principles in your daily routine, without even knowing their origins.

Detailed below, you will find a brief list of basic tactics and tips explained in The Art of War. They can probably be applied to your job in the software industry, or in any number of other industries.

Time Is Crucial In Any Campaign

Chapter II, paragraph 2

“When you engage in actual fighting, if victory is long in coming, then men’s weapons will grow dull and their ardor will be damped.”

This principle can be applied to software development, as a rule describing the relationship between the length of development cycles and the developer’s morale.

If a group of developers work on the same projects for months, with no clear goals or end in sight, they may become frustrated and their productivity may decline.

Divide your development roadmap into easily achievable goals and milestones. It’s good for morale.

Software development is an intellectual endeavour, so motivation is the main fuel for productivity. Working every day without perceiving that your work is generating real results can be very demotivating.

As agile methodologies suggest, the development roadmap should be divided into several goals and milestones that the team can achieve in short timeframes, giving them a sense of progress and achievement.

Chapter II, paragraph 18

“In war, then, let your great object be victory, not lengthy campaigns.”

This phrase can be interpreted in two ways:

First, it can be seen as a precursor of the UNIX philosophy: Write programs that do one thing and do it well. When developing software, you must always keep in mind the main objective of the program, the key feature that it provides, or the biggest problem that it solves, and ensure proper implementation.

Sometimes you might get inspired and think of a really cool feature to add, but do not forget that applications that have a lot of infrequently used features have a disparaging name: bloatware.

Second, the statement may also be considered as a precursor for one of the lean software development principles: Deliver as fast as possible.

The sooner you deliver software without major defects, the sooner you will get feedback from the client, and you will be able to incorporate the changes into the next iteration.

If, on the other hand, you deliver non-working software, you will miss out on valuable feedback, because clients won’t get a chance to test it properly. This will make the next stage of development more difficult, or impossible in situations where your next iteration depends on customer feedback.

No Leadership, No Results

Chapter III, paragraph 11

“Now the general is the bulwark of the State; if the bulwark is complete at all points, the State will be strong; if the bulwark is defective, the State will be weak.”

This quote describes the importance of the role of the manager in a development team: the success of a project depends on the strength of all people involved, and the manager is the bulwark of the project. Responsibility starts at the top.

Responsibility starts at the top. If your team lead is bad, no amount of talented engineers will help.

Even though developers frequently work alone (each sitting behind a computer, with limited communication with coworkers), that does not mean that they don’t need good leadership. Project managers are in charge of keeping the team on track, ensuring effective communication and dispute resolution, and defining the priorities of the project (among other tasks), so their role should not be underestimated. Neither should their responsibility if something goes wrong. Imagine what would happen to a military leader whose unit failed to perform its duty on the field of battle.

A team can produce great software even if it has a few bad apples in development positions, but that’s unlikely to happen if the project manager is the bad apple, no matter how many rockstar developers the team has.

Chapter VI, paragraph 28

“Do not repeat the tactics which have gained you one victory, but let your methods be regulated by the infinite variety of circumstances.”

Sometimes, when starting a project, it’s tempting to use the same set of technologies we used in previously successful projects (the same programming language, the same libraries, the same server, etc.). However, unless the requirements of the new project are exactly the same as those of the previous ones, this might be the wrong approach.

In programming, as in most domains, the panacea (a supposed remedy capable of curing all diseases) does not exist. There is no single combination of technologies that you can use for solving all problems; each technology has its upsides and downsides.

Of course, learning a new programming language or using an unfamiliar API might initially be expensive, but in the long term the quality of the software will be superior and you will become a better developer.

Chapter XIII, paragraph 27

“Hence it is only the enlightened ruler and the wise general who will use the highest intelligence of the army for purposes of spying, and thereby they achieve great results. Spies are a most important element in war, because on them depends an army’s ability to move.”

This phrase can be interpreted as highlighting the importance of using monitoring tools and logging libraries during the maintenance phase.

Although sometimes clients might not think so, development does not end when you get a stable and fully tested release. Software is always evolving, either by fixing bugs, adding new features or improving efficiency.

And there is no better source of information for deciding what changes to make than having spies monitoring the software in production environments, checking which features are used the most, which errors are most common, and which operations take the longest.

Error reports, logging entries, and usage data are fundamental for detecting bugs, identifying bottlenecks, and spotting other issues, since it is not always possible to reproduce the same conditions in controlled testing environments.

Teamwork And Motivation

Chapter X, paragraph 24

“He who advances without seeking fame, Who retreats without escaping blame, He whose one aim is to protect his people and serve his lord, The man is a jewel of the Realm.”

Basically, this is the ancient Chinese version of “there’s no I in team”. It is more important to work together with others than to pursue personal gain.

Software development is a complex activity that requires developers to work effectively as a team. A good developer is not the one who fixes the most bugs, implements the most features or finishes assignments ahead of schedule; a good developer is the one who helps the team reach its goals.

Teamwork wins battles. Remember, the best developers are inspiring individuals who help other team members reach their objectives.

Claiming credit for everything you’ve done, not recognizing your errors or blaming others for them, or calling yourself a code ninja might fool some inexperienced managers and might even get you a raise, but you will become a counterproductive member of your team.

Chapter VII, paragraph 21

“Ponder and deliberate before you make a move.”

This phrase indicates the importance of team development meetings, such as those proposed by agile methodologies.

When working on a team, it is important to discuss any major changes before implementing them. It doesn’t matter if you are the team leader, or the person with the most experience on the subject; you should always talk with, or at least inform, the rest of the team.

Remember that other developers could give you insights into unfamiliar parts of the software. And because they are fully aware of the planned changes and their effects, they can start implementing them faster than expected.

Chapter X, paragraph 25

“Regard your soldiers as your children, and they will follow you into the deepest valleys; look upon them as your own beloved sons, and they will stand by you even unto death.”

This quote indicates the importance of motivation, a principle of management that is sometimes forgotten by managers and team leaders. Motivated developers will write better code, work faster, make fewer errors, and be more willing to put in extra hours.

Managers must generate motivation by taking a genuine interest in their subordinates, listening to them, caring about their work-life balance, building positive work environments, and caring about their career paths.

Also, you should not mistake motivation for remuneration. Recent studies demonstrate that money does not motivate most workers; it is mostly good at attracting and retaining employees, but not at making them happy about their jobs. So raises and promotions should not be seen as motivational tools.

Thinking Outside The Box

Chapter V, paragraph 7, 8 and 9

“There are not more than five musical notes, yet the combinations of these five give rise to more melodies than can ever be heard.”

“There are not more than five primary colors, yet in combination they produce more hues than can ever be seen.”

“There are not more than five cardinal tastes, yet combinations of them yield more flavors than can ever be tasted.”

One of the good things about programming is that the possibilities are endless; you can develop basically whatever you want (at least, as long as it is not an NP-complete problem).

Mobile apps, websites, games, desktop applications… if you know programming, all of them are within your reach.

If you’re a talented developer, you need to think outside the box. The box is there to prevent incompetent people from wrecking stuff. It’s not for you.

Chapter III, paragraph 1

“In the practical art of war, the best thing of all is to take the enemy’s country whole and intact; to shatter and destroy it is not so good. So, too, it is better to capture an army entire than to destroy it, to capture a regiment, a detachment or a company entire than to destroy them.”

When working on a project with a large code base, it is common to find modules or sections of code that have been implemented with bad practices or by using deprecated libraries. Although it might be tempting to erase (or destroy) this code, it might not be the best idea for several reasons:

  • Legacy code is not necessarily bad; sometimes it is good code that was written when other methodologies and technologies were considered the way to go. Just because it is old doesn’t mean that it is not working.
  • You might lose time fixing code that still works instead of focusing on fixing other, more critical parts of the code.
  • Unless you are really sure of what you are doing, replacing a section of code that works means you are risking introducing new errors or bugs.

This does not mean that the phrase “If it ain’t broke, don’t fix it” is a good strategy, but that every project has priorities, goals and time constraints. So, if you find code that could be improved, discuss it with the rest of the team or with the project manager in order to figure out when to optimize it.

Chapter VIII, paragraph 3

“There are roads which must not be followed, armies which must not be attacked, towns which must not be besieged, positions which must not be contested, commands of the sovereign which must not be obeyed.”

Even though it does not say so directly, we could interpret this principle as a warning to avoid anti-patterns.

Although using an anti-pattern may resolve a short-term issue, you should remember that in the long-term it is going to be counter-productive. So, no matter how much time you save, how many bugs you fix or how convenient it is for you, avoid them.

Still, there are times you may be tempted to use an anti-pattern to resolve an urgent task, promising yourself you will implement a proper fix when you have more time, but remember one of Murphy’s laws: All quick fixes become permanent changes.

Conclusion

Although developing software is different from commanding soldiers in war or leading a country, all of them involve solving problems that require teamwork, good leadership, efficiency, and long-term solutions.

However, The Art of War is not the only book that contains principles which may be applied to software development. Niccolò Machiavelli’s The Prince is another example.

In fact, here is a list of quotes from Machiavelli that are still relevant. Try to guess the corresponding principles in the world of software development.

  1. The lion cannot protect himself from traps, and the fox cannot defend himself from wolves. One must therefore be a fox to recognize traps, and a lion to frighten wolves.
  2. Never attempt to win by force what can be won by deception.
  3. Never was anything great achieved without danger.
  4. Whosoever desires constant success must change his conduct with the times.
  5. Men in general judge more from appearances than from reality. All men have eyes, but few have the gift of penetration.
  6. He who wishes to be obeyed must know how to command.
  7. Wisdom consists of knowing how to distinguish the nature of trouble, and in choosing the lesser evil.
  8. There is no avoiding war; it can only be postponed to the advantage of your enemy.
  9. Nature creates few men brave; industry and training makes many.

Fixing the “Heartbleed” OpenSSL Bug: A Tutorial for Sys Admins

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

So what exactly is the bug anyway?

Here’s a very quick rundown:

A potentially critical problem has surfaced in the widely used OpenSSL cryptographic library. It is nicknamed “Heartbleed” because the vulnerability exists in the “heartbeat extension” (RFC 6520) to the Transport Layer Security (TLS) protocol, and because it leaks (“bleeds”) the contents of memory. User passwords and other important data may have been compromised on any site affected by the vulnerability.

The vulnerability is particularly dangerous for two reasons:

  1. Potentially critical data is leaked.
  2. The attack leaves no trace.

The affected OpenSSL versions are 1.0.1 through 1.0.1f, 1.0.2-beta, and 1.0.2-beta1.

Who is affected by the problem?

Short answer: Anyone and everyone who uses these versions of OpenSSL.

And that’s a LOT of companies and a LOT of people.

Before we get into our Heartbleed tutorial, here’s just a brief sampling of major companies and websites that are known to have been affected and that needed to patch their sites: Gmail, Yahoo Mail, Intuit TurboTax, USAA, Dropbox, Flickr, Instagram, Pinterest, SoundCloud, Tumblr, GitHub, GoDaddy, Boingo Wireless, and many more.

If you're wondering how to protect against the OpenSSL Heartbleed bug, start by using the Heartbleed test.

Many, many corporate websites, of companies of all sizes, have been (or still need to be!) patched to fix the Heartbleed vulnerability.

The vulnerability has existed since December 31, 2011, and OpenSSL is used by about 66% of Internet hosts.

As a user, chances are that sites you frequent regularly are affected and that your data may have been compromised. As a developer or sys admin, sites or servers you’re responsible for are likely to have been affected as well.

So what do I need to do to protect myself if I use any of the affected sites?

The main thing you should do immediately is to change your passwords for any of the affected sites for which you have a login account.

And what do I need to do to fix and protect against Heartbleed if I’m the sys admin for a site that uses OpenSSL?

If you’re using OpenSSL 1.0.1, do one of the following immediately:

  • Upgrade to OpenSSL 1.0.1g, or
  • Recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

If you’re using OpenSSL 1.0.2, the vulnerability will be fixed in 1.0.2-beta2, but you shouldn’t wait for that. In the interim, do one of the following immediately:

  • Revert to OpenSSL 1.0.1g, or
  • Recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

Most distributions (e.g., Ubuntu, Fedora, Debian, Arch Linux) have upgraded their packages already.  In cases like Gentoo, you can upgrade to a patched ebuild.

Once you’ve upgraded (or recompiled) and have established a secure version on your server:

  • Be sure to restart all potentially affected processes. Major daemons affected by the bug include Apache, Nginx, OpenVPN, and sshd; basically anything and everything linked against libssl. (Note that a restart of these daemons should be sufficient. There should be no need to rebuild these binaries since they are dynamically linked with the OpenSSL libraries.)
  • Verify that you are no longer vulnerable using tools like this online test or this tool on GitHub or this tool on Pastebin.

If your infrastructure was vulnerable, there are further mitigation steps that you can and should take. A useful list of such mitigations is available here.

More gory Heartbleed details, for those who are interested…

As explained in the GitHub commit for the fix, a missing bounds check in the handling of the TLS heartbeat extension could be exploited to reveal up to 64k of memory to a connected client or server.

While the exposed memory could potentially just be garbage, it could just as easily turn out to be extremely valuable to a malicious attacker.

Here’s how the Heartbleed vulnerability works: An attacker provides a payload along with a declared payload length. However, no validation is done to confirm that the declared length matches the size of the payload the attacker actually sent. If the declared length is larger than the real payload, an out-of-bounds read occurs, which in turn leaks process memory from the heap.

Leaking previous request headers can be a very serious security problem. Specifically, a prior user’s login post data might still be available with their username, password, and cookies, all of which can then be exposed and exploited. Moreover, although private key leakage through Heartbleed was initially deemed to be unlikely, it has been verified that private SSL keys can be stolen by exploiting this vulnerability.

Fixing Heartbleed is critical as it has been confirmed that private SSL keys can be stolen this way.

The vulnerability is also made possible by OpenSSL’s silly use of a malloc() cache. By wrapping the libc allocation functions and not actually freeing memory, OpenSSL never gives the exploitation countermeasures in libc a chance to kick in and render the bug useless.

Additional details on these ways to fix Heartbleed are available here and here.

And, for what it’s worth, here’s a more amusing perspective.

Kudos to the discoverer, Neel Mehta of Google Security, as well as Adam Langley and Bodo Moeller who promptly provided the patch and helped sys admins determine how to fix Heartbleed. I also encourage you to educate yourself on some of the other common web security vulnerabilities to avoid issues in the future.

Avoid the 10 Most Common Mistakes Web Developers Make: A Tutorial for Developers

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Since the term the World Wide Web was coined back in 1990, web application development has evolved from serving static HTML pages to completely dynamic, complex business applications.

Today we have thousands of digital and printed resources that provide step-by-step instructions about developing all kinds of different web applications. Development environments are “smart” enough to catch and fix many mistakes that early developers battled with regularly. There are even many different development platforms that easily turn simple static HTML pages into highly interactive applications.

All of these development patterns, practices, and platforms share common ground, and they are all prone to similar web development issues caused by the very nature of web applications.

The purpose of these web development tips is to shed light on some of the common mistakes made in different stages of the web development process and to help you become a better developer. I have touched on a few general topics that are common to virtually all web developers such as validation, security, scalability, and SEO. You should of course not be bound by the specific examples I’ve described in this guide, as they are listed only to give you an idea of the potential problems you might encounter.

Think of all the web development mistakes that could be avoided at this campout!

Common mistake #1: Incomplete input validation

Validating user input on client and server side is simply a must do! We are all aware of the sage advice “do not trust user input” but, nevertheless, mistakes stemming from validation happen all too often.

One of the most common consequences of this mistake is SQL injection, which appears in the OWASP Top 10 year after year.

Remember that most front-end development frameworks provide out-of-the-box validation rules that are incredibly simple to use. Additionally, most major back-end development platforms use simple annotations to assure that submitted data adhere to the expected rules. Implementing validation might be time-consuming, but it should be part of your standard coding practice and never set aside.
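
For instance, on a Java back end that supports the Bean Validation API (JSR-303), many server-side rules can be declared with annotations; the class, fields, and constraints below are purely illustrative, not a complete validation strategy:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

public class SignupForm {

    @NotNull
    @Size(min = 3, max = 30)          // reject usernames that are too short or too long
    private String username;

    @NotNull
    @Pattern(regexp = "^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$") // basic e-mail shape check
    private String email;

    @NotNull
    @Size(min = 8)                    // enforce a minimum password length
    private String password;

    // getters and setters omitted for brevity
}

The same rules should still be re-checked on the client side for usability, but only the server-side check can be trusted.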

Common mistake #2: Authentication without proper Authorization

Before we proceed, let’s make sure we are aligned on these two terms. As stated in the 10 Most Common Web Security Vulnerabilities:

Authentication: Verifying that a person is (or at least appears to be) a specific user, since he/she has correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).

Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.

Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do.

Let me demonstrate this issue with an example:

Consider that your browser holds the currently logged-in user’s information in an object similar to the following:

{
    username:'elvis',
    role:'singer',
    token:'123456789'
}

When doing a password change, your application makes the POST:

POST /changepassword/:username/:newpassword

In your /changepassword method, you verify that the user is logged in and the token has not expired. Then you find the user profile based on the :username parameter, and you change your user’s password.

So, you validated that your user is properly logged in, and then you executed his request, thus changing his password. The process seems OK, right? Unfortunately, the answer is NO!

At this point it is important to verify that the user executing the action and the user whose password is changed are the same. Any information stored on the browser can be tampered with, and any advanced user could easily update username:'elvis' to username:'Administrator' without using anything else but built-in browser tools.

So in this case, we just took care of Authentication, making sure that the user provided security credentials. We can even add validation that the /changepassword method can only be executed by Authenticated users. However, this is still not enough to protect your users from malicious attempts.

You need to make sure that you verify the actual requestor and the content of the request within your /changepassword method, and implement proper Authorization of the request, making sure that the user can change only her own data.
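
As a minimal sketch of that missing authorization step (assuming a Spring MVC controller and a hypothetical UserService; in a real API the new password would travel in the request body, never in the URL), the handler should compare the authenticated principal with the account being modified instead of trusting the :username parameter:

import java.security.Principal;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class PasswordController {

    @Autowired
    private UserService userService; // hypothetical service that actually updates the password

    @RequestMapping(value = "/changepassword/{username}", method = RequestMethod.POST)
    public ResponseEntity<Void> changePassword(@PathVariable String username,
                                               @RequestBody String newPassword,
                                               Principal principal) {
        // Authorization check: the authenticated user may only change his or her own password
        if (principal == null || !principal.getName().equals(username)) {
            return new ResponseEntity<>(HttpStatus.FORBIDDEN);
        }
        userService.changePassword(username, newPassword);
        return new ResponseEntity<>(HttpStatus.OK);
    }
}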

Authentication and Authorization are two sides of the same coin. Never treat them separately.

Common mistake #3: Not ready to scale

In today’s world of high speed development, startup accelerators, and instant global reach of great ideas, having your MVP (Minimum Viable Product) out in the market as soon as possible is a common goal for many companies.

However, this constant time pressure is causing even good web development teams to often overlook certain issues. Scaling is often one of those things teams take for granted. The MVP concept is great, but push it too far, and you’ll have serious problems. Unfortunately, selecting a scalable database and web server and separating all application layers on independent scalable servers is not enough. There are many details you need to think about if you wish to avoid rewriting significant parts of your application later – which becomes a major web development problem.

For example, say that you choose to store uploaded profile pictures of your users directly on a web server. This is a perfectly valid solution–files are quickly accessible to the application, file handling methods are available in every development platform, and you can even serve these images as static content, which means minimum load on your application.

But what happens when your application grows, and you need to use two or more web servers behind a load balancer? Even though you nicely scaled your database storage, session state servers, and web servers, your application scalability fails because of a simple thing like profile images. Thus, you need to implement some kind of file synchronization service (that will have a delay and will cause temporary 404 errors) or another workaround to assure that files are spread across your web servers.

This bloated blowfish is full of web development tips and tutorials to avoid common mistakes.

What you needed to do to avoid the problem in the first place was just use a shared file storage location, a database, or any other remote storage solution. It would have probably cost a few extra hours of work to have it all implemented, but it would have been worth the trouble.
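
For illustration only, here is a sketch of the idea: write uploads to a storage root that is injected from configuration (a mounted network share or a path backed by a remote storage service) instead of a directory on the local web server. The class and property names are assumptions.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ProfileImageStore {

    // Injected from configuration rather than hard-coded, so every node
    // behind the load balancer reads and writes the same location.
    private final Path sharedStorageRoot;

    public ProfileImageStore(String sharedStoragePath) {
        this.sharedStorageRoot = Paths.get(sharedStoragePath);
    }

    public Path saveProfileImage(String userId, InputStream imageData) throws IOException {
        Path target = sharedStorageRoot.resolve(userId + ".jpg");
        Files.copy(imageData, target, StandardCopyOption.REPLACE_EXISTING);
        return target;
    }
}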

Common mistake #4: Wrong or missing SEO

The root cause of incorrect or missing SEO best practices on web sites is misinformed “SEO specialists”. Many web developers believe that they know enough about SEO and that it is not especially complex, but that’s just not true. SEO mastery requires significant time spent researching best practices and the ever-changing rules about how Google, Bing, and Yahoo index the web. Unless you constantly experiment and have accurate tracking and analysis, you are not an SEO specialist, and you should not claim to be one.

Furthermore, SEO is too often postponed as some activity to be done at the end, and this comes at a high price in web development issues. SEO is not just about setting good content, tags, keywords, meta-data, image alt tags, and a site map. It also includes eliminating duplicate content, having a crawlable site architecture, efficient load times, intelligent back linking, etc.

Like with scalability, you should think about SEO from the moment you start building your web application, or you might find that completing your SEO implementation project means rewriting your whole system.

Common mistake #5: Time or processor consuming actions in request handlers

One of the best examples of this mistake is sending email based on a user action. Too often developers think that making an SMTP call and sending a message directly from the user request handler is the solution.

Let’s say you created an online book store, and you expect to start with a few hundred orders daily. As part of your order intake process, you send confirmation emails each time a user posts an order. This will work without problems at first, but what happens when you scale your system and suddenly get thousands of requests sending confirmation emails? You either get SMTP connection timeouts or exceeded quotas, or your application’s response time degrades significantly as it is now handling emails instead of users.

Any time- or processor-consuming action should be handled by an external process while you release your HTTP requests as soon as possible. In this case, you should have an external mailing service that picks up orders and sends notifications.
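
A minimal sketch of that approach using Spring's @Async support (which assumes @EnableAsync is configured elsewhere; the class and method names are illustrative): the request handler only queues the work, and the slow SMTP call runs on a background thread.

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class OrderNotificationService {

    // Runs on a background thread pool, so the HTTP request
    // that accepted the order can return immediately.
    @Async
    public void sendOrderConfirmation(String customerEmail, long orderId) {
        // ... build the message and make the SMTP call here ...
    }
}

A message queue consumed by a separate worker process is an equally valid, and usually more scalable, way to achieve the same decoupling.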

Common mistake #6: Not optimizing bandwidth usage

Most development and testing takes place in a local network environment. So when you are downloading 5 background images, each being 3MB or more, you might not identify an issue with a 1Gbit connection speed in your development environment. But when your users start loading a 15MB home page over 3G connections on their smartphones, you should prepare yourself for a list of complaints and problems.

Optimizing your bandwidth usage could give you a great performance boost, and to gain this boost you probably only need a couple of tricks. There are a few things that many good web developers do by default, including:

  1. Minification of all JavaScript
  2. Minification of all CSS
  3. Server side HTTP compression
  4. Optimization of image size and resolution

Common mistake #7: Not developing for different screen sizes

Responsive design has been a big topic in the past few years. Expansion of smartphones with different screen resolutions has brought many new ways of accessing online content, which also comes with a host of web development issues. The number of website visits that come from smartphones and tablets grows every day, and this trend is accelerating.

In order to ensure seamless navigation and access to website content, you must enable users to access it from all types of devices.

There are numerous patterns and practices for building responsive web applications. Each development platform has its own tips and tricks, but there are some frameworks that are platform independent. The most popular is probably Twitter Bootstrap. It is an open-source and free HTML, CSS, and JavaScript framework that has been adopted by every major development platform. Just adhere to Bootstrap patterns and practices when building your application, and you will get responsive web application with no trouble at all.

Common mistake #8: Cross browser incompatibility

The development process is, in most cases, under heavy time pressure. Every application needs to be released as soon as possible, and even good web developers are often focused on delivering functionality over design. Even though most developers have Chrome, Firefox, and IE installed, they use only one of these 90% of the time. It is common practice to use one browser during development and to start testing in other browsers only as the application nears completion. This is perfectly reasonable–assuming you have a lot of time to test and fix issues that show up at this stage.

However, there are some web development tips that can save you significant time when your application reaches the cross-browser testing phase:

  1. You don’t need to test in all browsers during development; it is time-consuming and ineffective. However, that does not mean that you cannot switch browsers frequently. Use a different browser every couple of days, and you will at least recognize major problems early in the development phase.
  2. Be careful of using statistics to justify not supporting a browser. There are many organizations that are slow in adopting new software or upgrading. Thousands of users working there might still need access to your application, and they cannot install the latest free browser due to internal security and business policies.
  3. Avoid browser specific code. In most cases there is an elegant solution that is cross-browser compatible.

Common mistake #9: Not planning for portability

Assumption is the mother of all problems! When it comes to portability, this saying is more true than ever. How many times have you seen issues in web development like hard coded file paths, database connection strings, or assumptions that a certain library will be available on the server? Assuming that the production environment will match your local development computer is simply wrong.

Ideal application setup should be maintenance-free:

  1. Make sure that your application can scale and run on a load-balanced multiple server environment.
  2. Allow simple and clear configuration–possibly in a single configuration file (see the sketch after this list).
  3. Handle exceptions when web server configuration is not as expected.
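
For example, a hedged sketch of item 2 above (the file name and keys are assumptions): read environment-specific values from a single properties file on the classpath instead of hard-coding paths and connection strings.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {

    private final Properties props = new Properties();

    public AppConfig() throws IOException {
        // One file per environment, loaded from the classpath instead of a hard-coded path
        try (InputStream in = getClass().getResourceAsStream("/application.properties")) {
            if (in == null) {
                // Fail loudly when the environment is not set up as expected (item 3)
                throw new IOException("application.properties not found on the classpath");
            }
            props.load(in);
        }
    }

    public String databaseUrl() {
        return props.getProperty("db.url");
    }

    public String uploadDirectory() {
        return props.getProperty("upload.dir");
    }
}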

Common mistake #10: RESTful anti-patterns

RESTful APIs have taken their place in web development and are here to stay. Almost every web application has implemented some kind of REST service, whether for internal use or for integrating with an external system. But we still see broken RESTful patterns and services that do not adhere to expected practices.

Two of the most common mistakes made when writing a RESTful API are:

  1. Using wrong HTTP verbs. For example, using GET for writing data. HTTP GET has been designed to be idempotent and safe, meaning that no matter how many times you call GET on the same resource, the response should always be the same and no change in application state should occur.
  2. Not sending correct HTTP status codes. The best example of this mistake is sending error messages with response code 200.
     HTTP 200 OK
     {
         message:'there was an error'
     }
    

You should only send HTTP 200 OK when the request has not generated an error. In the case of an error, you should send 400, 401, 500 or any other status code that is appropriate for the error that has occurred.
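
For example, in a Spring MVC handler (a sketch with hypothetical Book and BookRepository types), the status code should reflect the actual outcome rather than always being 200:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class BookController {

    @Autowired
    private BookRepository bookRepository; // hypothetical repository

    @RequestMapping(value = "/books/{id}", method = RequestMethod.GET)
    public ResponseEntity<Book> getBook(@PathVariable long id) {
        Book book = bookRepository.findById(id); // assumed lookup method
        if (book == null) {
            // The resource does not exist: return 404, not 200 with an error message in the body
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        }
        return new ResponseEntity<>(book, HttpStatus.OK);
    }
}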

A detailed overview of standard HTTP status codes can be found here.

Wrap up

Web development is an extremely broad term that can legitimately encompass development of a website, web service, or complex web application.

The main takeaway of this web development guide is the reminder that you should always be careful about authentication and authorization, plan for scalability, and never hastily assume anything – or be ready to deal with a long list of web development problems!

 

REST Security with JWT using Java and Spring Security

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Security

Security is the enemy of convenience, and vice versa. This statement is true for any system, virtual or real, from the physical house entrance to web banking platforms. Engineers are constantly trying to find the right balance for the given use case, leaning to one side or the other. Usually, when a new threat appears, we move towards security and away from convenience. Then, we see if we can recover some lost convenience without reducing the security too much. And this vicious circle goes on forever.

Security vs. convenience

Security is the enemy of convenience, and vice versa.

Let’s try to see where REST services currently stand regarding security and convenience. REST (which stands for Representational State Transfer) services started off as an extremely simplified approach to Web Services that had huge specifications and cumbersome formats, such as WSDL for describing the service, or SOAP for specifying the message format. In REST, we have none of those. We can describe the REST service in a plain text file and use any message format we want, such as JSON, XML or even plain text again. The simplified approach was applied to the security of REST services as well; no defined standard imposes a particular way to authenticate users.

Although REST services do not have much specified, an important requirement is statelessness, meaning the server does not keep any client state, with sessions as a good example. Thus, the server replies to each request as if it were the first the client has made. However, even now, many implementations still use cookie-based authentication, which is inherited from standard website architectural design. The stateless approach of REST makes session cookies inappropriate from the security standpoint, but nevertheless, they are still widely used. Besides ignoring the required statelessness, the simplified approach came with an expected security trade-off. Compared to the WS-Security standard used for Web Services, it is much easier to create and consume REST services, hence convenience went through the roof. The trade-off is pretty slim security; session hijacking and cross-site request forgery (XSRF) are the most common security issues.

In trying to get rid of client sessions from the server, some other methods have been used occasionally, such as Basic or Digest HTTP authentication. Both use an Authorization header to transmit user credentials, with some encoding (HTTP Basic) or encryption (HTTP Digest) added. Of course, they carried the same flaws found in websites: HTTP Basic had to be used over HTTPS since username and password are sent in easily reversible base64 encoding, and HTTP Digest forced the use of obsolete MD5 hashing that is proven to be insecure.

Finally, some implementations used arbitrary tokens to authenticate clients. This option seems to be the best we have, for now. If implemented properly, it fixes all the security problems of HTTP Basic, HTTP Digest or session cookies, it is simple to use, and it follows the stateless pattern.

However, with such arbitrary tokens, there’s little standardization involved. Every service provider had his or her own idea of what to put in the token, and how to encode or encrypt it. Consuming services from different providers required additional setup time, just to adapt to the specific token format used. The other methods (session cookies, HTTP Basic, and HTTP Digest), on the other hand, are well known to developers, and almost all browsers on all devices work with them out of the box. Frameworks and languages are ready for these methods, having built-in functions to deal with each seamlessly.

JWT

JWT (shortened from JSON Web Token) is the missing standardization for using tokens to authenticate on the web in general, not only for REST services. Currently, it is in draft status as RFC 7519. It is robust and can carry a lot of information, but is still simple to use even though its size is relatively small. Like any other token, JWT can be used to pass the identity of authenticated users between an identity provider and a service provider (which are not necessarily the same systems). It can also carry all the user’s claims, such as authorization data, so the service provider does not need to go into the database or external systems to verify user roles and permissions for each request; that data is extracted from the token.

Here is how JWT is designed to work:

JWT flow

  • Clients log in by sending their credentials to the identity provider.
  • The identity provider verifies the credentials; if all is OK, it retrieves the user data, generates a JWT containing user details and permissions that will be used to access the services, and it also sets the expiration on the JWT (which might be unlimited).
  • The identity provider signs, and if needed encrypts, the JWT and sends it to the client as a response to the initial request with credentials.
  • The client stores the JWT for a limited or unlimited amount of time, depending on the expiration set by the identity provider.
  • The client sends the stored JWT in an Authorization header for every request to the service provider.
  • For each request, the service provider takes the JWT from the Authorization header and decrypts it, if needed, validates the signature, and if everything is OK, extracts the user data and permissions. Based on this data alone, and again without looking up further details in the database or contacting the identity provider, it can accept or deny the client request. The only requirement is that the identity and service providers have an agreement on encryption so that the service can verify the signature or, if the token is encrypted, decrypt it.

This flow allows for great flexibility while still keeping things secure and easy to develop. By using this approach, it is easy to add new server nodes to the service provider cluster, initializing them with only the ability to verify the signature and decrypt the tokens by providing them a shared secret key. No session replication, database synchronization or inter-node communication is required. REST in its full glory.

The main difference between JWT and other arbitrary tokens is the standardization of the token’s content. The recommended practice is to send the JWT in the Authorization header using the Bearer scheme. The content of the header should look like this:

Authorization: Bearer <token>
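
For example, a plain Java client could attach the header like this (a sketch using java.net.HttpURLConnection; the URL is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class ApiClient {

    public int callSecuredResource(String jwt) throws Exception {
        URL url = new URL("https://api.example.com/api/orders"); // placeholder endpoint
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        // The JWT obtained at login is sent with every request using the Bearer scheme
        connection.setRequestProperty("Authorization", "Bearer " + jwt);
        return connection.getResponseCode();
    }
}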

Implementation

For REST services to work as expected, we need a slightly different authorization approach compared to classic, multi-page websites.

Instead of triggering the authentication process by redirecting to a login page when a client requests a secured resource, the REST server authenticates all requests using the data available in the request itself, the JWT token in this case. If such an authentication fails, redirection makes no sense. The REST API simply sends an HTTP code 401 (Unauthorized) response and clients should know what to do; for example, a browser will show a dynamic div to allow the user to supply the username and password.

On the other hand, after a successful authentication in classic, multi-page websites, the user is redirected by using HTTP code 301 (Moved permanently), usually to a home page or, even better, to the page the user initially requested that triggered the authentication process. With REST, again this makes no sense. Instead we would simply continue with the execution of the request as if the resource was not secured at all, return HTTP code 200 (OK) and expected response body.

Spring Security

REST Security with JWT, Spring Security and Java

Now, let’s see how we can implement the JWT token-based REST API using Java and Spring, while trying to reuse Spring Security’s default behavior where we can. As expected, the Spring Security framework comes with many ready-to-plug-in classes that deal with “old” authorization mechanisms: session cookies, HTTP Basic, and HTTP Digest. However, it lacks native support for JWT, and we need to get our hands dirty to make it work.

First, we start with the usual Spring Security filter definition in web.xml:

<filter>
	<filter-name>springSecurityFilterChain</filter-name>
	<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
	<filter-name>springSecurityFilterChain</filter-name>
	<url-pattern>/*</url-pattern>
</filter-mapping>

Note that the name of the filter must be exactly springSecurityFilterChain for the rest of the Spring config to work out of the box.

Next comes the XML declaration of the Spring beans related to security. In order to simplify the XML, we will set the default namespace to security by adding xmlns="http://www.springframework.org/schema/security" to the root XML element. The rest of the XML looks like this:

    <global-method-security pre-post-annotations="enabled" />  (1)
    
    <http pattern="/api/login" security="none"/>   (2)
    <http pattern="/api/signup" security="none"/>

    <http pattern="/api/**" entry-point-ref="restAuthenticationEntryPoint" create-session="stateless"> (3)
        <csrf disabled="true"/>  (4)
        <custom-filter before="FORM_LOGIN_FILTER" ref="jwtAuthenticationFilter"/>  (5)
    </http>
    
    <beans:bean id="jwtAuthenticationFilter" class="com.toptal.travelplanner.security.JwtAuthenticationFilter">  (6)
        <beans:property name="authenticationManager" ref="authenticationManager" />
        <beans:property name="authenticationSuccessHandler" ref="jwtAuthenticationSuccessHandler" />  (7)
    </beans:bean>

    <authentication-manager alias="authenticationManager">
        <authentication-provider ref="jwtAuthenticationProvider" />  (8)
    </authentication-manager>
  • (1) In this line, we activate the @PreFilter, @PreAuthorize, @PostFilter, and @PostAuthorize annotations on any Spring beans in the context.
  • (2) We define the login and signup endpoints to skip security; even “anonymous” should be able to do these two operations.
  • (3) Next, we define the filter chain applied to all requests while adding two important configs: Entry point reference and setting the session creation to stateless (we do not want the session created for security purposes as we are using tokens for each request).
  • (4) We do not need csrf protection because our tokens are immune to it.
  • (5) Next, we plug in our special authentication filter within the Spring’s predefined filter chain, just before the form login filter.
  • (6) This bean is the declaration of our authentication filter; since it is extending Spring’s AbstractAuthenticationProcessingFilter, we need to declare it in XML to wire its properties (auto wire does not work here). We will explain later what the filter does.
  • (7) The default success handler of AbstractAuthenticationProcessingFilter is not good enough for REST purposes because it redirects the user to a success page; that is why we set our own here.
  • (8) This is the declaration of the authentication provider used by the authenticationManager, which our filter in turn uses to authenticate users.

Now let’s see how we implement the specific classes declared in the XML above. Note that Spring will wire them for us. We start with the simplest ones.

RestAuthenticationEntryPoint.java

public class RestAuthenticationEntryPoint implements AuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException {
        // This is invoked when user tries to access a secured REST resource without supplying any credentials
        // We should just send a 401 Unauthorized response because there is no 'login page' to redirect to
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");
    }
}

As explained above, this class just returns HTTP code 401 (Unauthorized) when authentication fails, overriding Spring’s default behavior of redirecting to a login page.

JwtAuthenticationSuccessHandler.java

public class JwtAuthenticationSuccessHandler implements AuthenticationSuccessHandler {

    @Override
    public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) {
        // We do not need to do anything extra on REST authentication success, because there is no page to redirect to
    }

}

This simple override removes the default behavior of a successful authentication (redirecting to the home page or any other page the user requested). If you are wondering why we do not need to override the AuthenticationFailureHandler, it is because the default implementation will not redirect anywhere if its redirect URL is not set, so we just avoid setting the URL, which is good enough.

JwtAuthenticationFilter.java

public class JwtAuthenticationFilter extends AbstractAuthenticationProcessingFilter {

    public JwtAuthenticationFilter() {
        super("/**");
    }

    @Override
    protected boolean requiresAuthentication(HttpServletRequest request, HttpServletResponse response) {
        return true;
    }

    @Override
    public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) throws AuthenticationException {

        String header = request.getHeader("Authorization");

        if (header == null || !header.startsWith("Bearer ")) {
            throw new JwtTokenMissingException("No JWT token found in request headers");
        }

        String authToken = header.substring(7);

        JwtAuthenticationToken authRequest = new JwtAuthenticationToken(authToken);

        return getAuthenticationManager().authenticate(authRequest);
    }

    @Override
    protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authResult)
            throws IOException, ServletException {
        super.successfulAuthentication(request, response, chain, authResult);

        // As this authentication is in HTTP header, after success we need to continue the request normally
        // and return the response as if the resource was not secured at all
        chain.doFilter(request, response);
    }
}

This class is the entry point of our JWT authentication process; the filter extracts the JWT token from the request headers and delegates authentication to the injected AuthenticationManager. If the token is not found, an exception is thrown that stops the request from processing. We also need an override for successful authentication because the default Spring flow would stop the filter chain and proceed with a redirect. Keep in mind we need the chain to execute fully, including generating the response, as explained above.

JwtAuthenticationProvider.java

public class JwtAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider {

    @Autowired
    private JwtUtil jwtUtil;

    @Override
    public boolean supports(Class<?> authentication) {
        return (JwtAuthenticationToken.class.isAssignableFrom(authentication));
    }

    @Override
    protected void additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
    }

    @Override
    protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
        JwtAuthenticationToken jwtAuthenticationToken = (JwtAuthenticationToken) authentication;
        String token = jwtAuthenticationToken.getToken();

        User parsedUser = jwtUtil.parseToken(token);

        if (parsedUser == null) {
            throw new JwtTokenMalformedException("JWT token is not valid");
        }

        List<GrantedAuthority> authorityList = AuthorityUtils.commaSeparatedStringToAuthorityList(parsedUser.getRole());

        return new AuthenticatedUser(parsedUser.getId(), parsedUser.getUsername(), token, authorityList);
    }

}

In this class, we are using Spring’s default AuthenticationManager, but we inject it with our own AuthenticationProvider that does the actual authentication process. To implement this, we extend the AbstractUserDetailsAuthenticationProvider, which requires us only to return UserDetails based on the authentication request, in our case, the JWT token wrapped in the JwtAuthenticationToken class. If the token is not valid, we throw an exception. However, if it is valid and decryption by JwtUtil is successful, we extract the user details (we will see exactly how in the JwtUtil class), without accessing the database at all. All the information about the user, including his or her roles, is contained in the token itself.

JwtUtil.java

public class JwtUtil {

    @Value("${jwt.secret}")
    private String secret;

    /**
     * Tries to parse specified String as a JWT token. If successful, returns User object with username, id and role prefilled (extracted from token).
     * If unsuccessful (token is invalid or not containing all required user properties), simply returns null.
     * 
     * @param token the JWT token to parse
     * @return the User object extracted from specified token or null if a token is invalid.
     */
    public User parseToken(String token) {
        try {
            Claims body = Jwts.parser()
                    .setSigningKey(secret)
                    .parseClaimsJws(token)
                    .getBody();

            User u = new User();
            u.setUsername(body.getSubject());
            u.setId(Long.parseLong((String) body.get("userId")));
            u.setRole((String) body.get("role"));

            return u;

        } catch (JwtException | ClassCastException e) {
            return null;
        }
    }

    /**
     * Generates a JWT token containing username as subject, and userId and role as additional claims. These properties are taken from the specified
     * User object. Tokens validity is infinite.
     * 
     * @param u the user for which the token will be generated
     * @return the JWT token
     */
    public String generateToken(User u) {
        Claims claims = Jwts.claims().setSubject(u.getUsername());
        claims.put("userId", u.getId() + "");
        claims.put("role", u.getRole());

        return Jwts.builder()
                .setClaims(claims)
                .signWith(SignatureAlgorithm.HS512, secret)
                .compact();
    }
}

Finally, the JwtUtil class is in charge of parsing the token into a User object and generating a token from a User object. It is straightforward since it uses the jjwt library to do all the JWT work. In our example, we simply store the username, user ID, and user roles in the token. We could also store more arbitrary data and add more security features, such as the token’s expiration. Parsing of the token is used in the AuthenticationProvider as shown above. The generateToken() method is called from the login and signup REST services, which are unsecured and will not trigger any security checks or require a token to be present in the request. In the end, it generates the token that will be returned to the clients, based on the user.
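
To show where generateToken() fits, here is a hedged sketch of what such an unsecured login endpoint might look like; the LoginController, UserService, and LoginRequest types are assumptions for illustration and are not part of the article's code:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class LoginController {

    @Autowired
    private UserService userService; // hypothetical service that checks credentials

    @Autowired
    private JwtUtil jwtUtil;

    @RequestMapping(value = "/api/login", method = RequestMethod.POST)
    public ResponseEntity<String> login(@RequestBody LoginRequest loginRequest) {
        User user = userService.findByUsernameAndPassword(
                loginRequest.getUsername(), loginRequest.getPassword());
        if (user == null) {
            return new ResponseEntity<>(HttpStatus.UNAUTHORIZED);
        }
        // The generated JWT is returned to the client, which will send it back
        // in the Authorization header on subsequent requests
        return new ResponseEntity<>(jwtUtil.generateToken(user), HttpStatus.OK);
    }
}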

Conclusion

Although the old, standardized security approaches (session cookie, HTTP Basic, and HTTP Digest) will work with REST services as well, they all have problems that would be nice to avoid by using a better standard. JWT arrives just in time to save the day, and most importantly it is very close to becoming an IETF standard.

JWT’s main strength is handling user authentication in a stateless, and therefore scalable, way, while keeping everything secure with up-to-date cryptography standards. Storing claims (user roles and permissions) in the token itself creates huge benefits in distributed system architectures where the server that issues the request has no access to the authentication data source.

5 Golden Rules for Great Web API Design

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Ever found yourself wondering “what were they thinking?” when integrating a web service via its API? If not, you’ve been far luckier than I have.

Any software developer knows how easy it is to let a project devolve into spaghetti code, and web APIs are no less prone to resulting in a tangled web. But it doesn’t need to be that way. In truth, it’s possible to design great web APIs that people will actually enjoy using, and that you’ll enjoy creating as well. But how? The answer to that question is what this post is all about.

Perspective

Most of the time when you’re building solutions, you’re designing for end users who are not programmers, or who are generally not technically sophisticated. You’re giving them a graphical interface and, if you’ve been doing your job right, you’ve gleaned a pretty good idea from them of what they need the interface to do.

But API development is different. You’re designing an interface for programmers, probably without even knowing who they are. And whoever they are, they will have the technical sophistication (or at least will think they have the technical sophistication) to point out every little flaw in your software. Your users are likely to be as critical of your API as you would be of theirs, and will thoroughly enjoy critiquing it.

And therein lies part of the irony, by the way. If anyone should understand how to make a web API that’s easy-to-use, it’s you. After all, you’re a software engineer just like the users of your API, so you share their perspective. Don’t you?

Well, while you certainly understand their perspective, you don’t necessarily share their perspective. When you’re developing or enhancing your API, you have the perspective of an API designer whereas they have the perspective of an API user.

API designers typically focus on questions like “What does this service need to do?” or “What does this service need to provide?”, while API users are focused on “How can I use this API to do what I need?”, or more accurately, “How can I spend the bare minimum of effort to get what I need out of this API?”.

These different questions lead to two vastly different perspectives. As a result, the necessary prerequisite to designing a great API is to shift your perspective from that of the API designer to that of the API user. In other words, continually ask yourself the questions you would naturally ask if you were your own user. Rather than thinking about what your API can do, think about the different ways it may need or want to be used and then focus on making those tasks as easy as possible for your API’s users.

While this may sound easy and obvious, it’s astounding how infrequently APIs appear to be designed this way. Think about the APIs you’ve encountered in your career. How frequently do they appear to have been designed with this perspective in mind? Web API design can be challenging.

So with that said, let’s proceed and talk about the 5 Golden Rules for Designing a Great Web API, namely:

  1. Documentation
  2. Stability and Consistency
  3. Flexibility
  4. Security
  5. Ease of Adoption

A diagram of users accessing a well-designed web API

Rule 1: Documentation

Documentation. Yes, I’m starting here.

Do you hate documentation? Well, I can empathize, but put on your “user perspective” hat and I’ll bet that the one thing you hate more than having to write documentation is having to try to use an undocumented API. I rest my case.

The bottom line is that, if you want anyone to use your API, documentation is essential. You’ve simply got to get this right. It’s the first thing users will see, so in some ways it’s like the gift wrap. Present it well, and people are more likely to use your API and put up with any idiosyncrasies.

So how do we write good documentation?

The relatively easy part is documenting the API methods themselves; i.e., example requests and responses, along with descriptions of each of the elements in both. Fortunately, there are an increasing number of software tools that facilitate and simplify the task of generating documentation. Or you can write something yourself that introspects your API, endpoints, and functions, and generates the corresponding documentation for you.

But what separates great documentation from adequate documentation is the inclusion of usage examples and, ideally, tutorials. This is what helps the user understand your API and where to start. It orients them and helps them load your API into their brain.

For example, if the developers of Twilio were to list out every class, every method, and every possible response to their API, but didn’t bother to mention that you can send an SMS, track a call, or buy a phone number through their API, it would take a really long time for the API user to find that information and understand it cohesively. Can you imagine sorting through a giant tree of classes and methods without any insight into what they were used for, other than their name? Sounds terrible right? But that’s exactly what so many API providers do, thereby leaving their APIs opaque to anybody but themselves. The Rackspace CloudFiles developer and API guide is one such example; it’s difficult to get your bearings unless you already understand what they’re doing and what they’re providing.

So write concise tutorials that help get the developer up and running quickly, with at least a skeleton of what they’re trying to do, and then point them in the direction of the more detailed, fully-documented list of functionality so they can expand on what they have.

Once you’re done with your documentation, be sure to validate that it makes sense to people other than yourself. Send it out to other developers in your network, give them no instruction other than pointing them to the documentation, and ask them to follow a tutorial or build something really basic in about 15 minutes. If they can’t have a basic integration with your API in 15 minutes, you have more work to do.

For some noteworthy examples of excellent and detailed documentation, check out Twilio, Django, and MailChimp. None of these products are necessarily the best in their markets (although they are all good products), yet they do distinguish themselves by providing some of the best documentation within their markets, which has certainly facilitated their wide acceptance and market share.

Rule 2: Stability and Consistency

If you’ve ever used Facebook’s API, you know how often they deprecate and completely rewrite their APIs. No matter how much you respect their hacker culture, or their product, theirs is not a developer-friendly perspective. The reason they are still successful is that they have a billion users, not that their API is great.

But you probably don’t have the luxury of such a mammoth user base and market share, so you’re going to need a much less volatile API, keeping old versions running and supported for quite a long period of time – maybe even years. So toward that end, here are some tips and tricks.

Let’s say, for example, that your API is accessible via the URL http://myapisite.com/api/widgets and provides its response in JSON format. While this may seem fine at first blush, what happens when you need to modify the format of the JSON response? Everyone that’s already integrated with you is going to break. Oops.

So do some planning ahead, and version your API from the outset, explicitly incorporating a version number into the URL (e.g., http://myapisite.com/api/widgets?version=1 or http://myapisite.com/api/widgets/v1) so that people can rely on version 1 working and can upgrade to any subsequent version when they’re ready to do so. If you need to phase out a prior version at some point, go ahead, but give plenty of notice and offer some sort of transition plan.

A good URL scheme will include major versions in the URL. Any change to the output format or supported data types should result in bumping up to a new major version. Generally, it’s acceptable to keep the same version if all you are doing is adding keys or nodes to your output, but to be on the safe side, any time the output changes, bump a version.
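
As a rough sketch of this idea (using Spring MVC mappings purely for illustration; the class names, paths, and response bodies are made up), the major version can be pinned in the route so that breaking changes live under a new prefix:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Version 1 keeps serving the original response shape...
@RestController
@RequestMapping("/api/v1/widgets")
class WidgetsV1Controller {

    @GetMapping
    public String list() {
        return "[{\"id\": 1, \"name\": \"sprocket\"}]";
    }
}

// ...while renamed keys or new data types live under a new major version.
@RestController
@RequestMapping("/api/v2/widgets")
class WidgetsV2Controller {

    @GetMapping
    public String list() {
        return "{\"widgets\": [{\"widgetId\": 1, \"displayName\": \"sprocket\"}]}";
    }
}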

In addition to being stable over time, APIs need to be internally consistent. I’ve seen many APIs that change parameter names or methods of POSTing data, depending on the endpoint that is being used. Instead, you should handle common parameters globally within your API and use inheritance or a shared architecture to reuse the same naming conventions and data handling consistently throughout your API.

Finally, you need to record and publish a changelog to show differences between versions of your API so that users know exactly how to upgrade.

Rule 3: Flexibility

Garbage in, garbage out (GIGO) is a well-known mantra to most programmers. As applied to web API design, this guiding principle tends to dictate a fairly rigid approach to request validation. Sounds great, right? No mess, no problem.

Yet as with everything, there needs to be some balance. As it is not possible to anticipate every way that users will want to employ your service, and since not every client platform is consistent (i.e., not every platform has very good JSON support, a decent OAuth library, etc.), it’s good to have at least some degree of flexibility or tolerance with regard to your input and output constraints.

For example, many APIs will support a variety of output formats, like JSON, YAML, XML, et al., but will only let the format be specified in a single, fixed way. In the spirit of remaining flexible, you could allow the format to be specified in the URL itself (e.g., /api/v1/widgets.json), and you might also read and recognize an Accept: application/json HTTP header, or support a querystring variable such as ?format=JSON, and so on.

And while we’re at it, why not allow for the format specified to be case-insensitive, so the user could specify ?format=json as well? That’s a classic example of a way to alleviate unnecessary frustration for the user of your API.
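
Here is one way such tolerant format resolution might look, as a plain-Java sketch (the class and enum names are invented for this example): an explicit ?format parameter wins, checked case-insensitively, with the Accept header as a fallback and JSON as the default.

public final class FormatResolver {

    public enum Format { JSON, XML, YAML }

    // Resolves the response format from a ?format query parameter (case-insensitive),
    // falling back to the Accept header, and defaulting to JSON.
    public static Format resolve(String formatParam, String acceptHeader) {
        if (formatParam != null) {
            for (Format f : Format.values()) {
                if (f.name().equalsIgnoreCase(formatParam.trim())) {
                    return f;
                }
            }
        }
        if (acceptHeader != null) {
            String accept = acceptHeader.toLowerCase();
            if (accept.contains("application/xml")) return Format.XML;
            if (accept.contains("text/yaml") || accept.contains("application/x-yaml")) return Format.YAML;
        }
        return Format.JSON;
    }
}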

Another example is allowing for different ways of inputting variables. So, just like you have a variety of output formats, allow for a variety of input formats as well (e.g., plain POST variables, JSON, XML, etc.). You should at least be supporting standard POST variables, and many modern applications support JSON as well, so those two are a good place to start.

The point here is that you shouldn’t assume that everyone shares your technical preferences. With a little research into how other APIs work, and through dialog with other developers, you can glean other valuable alternatives that are useful and include them in your API.

Rule 4: Security

Security is obviously one of the most important things to build into your web service, but so many developers make it ridiculously hard to use. As the API provider, you should be offering usable examples of how to authenticate and authorize when accessing your API. This should not be a difficult issue that an end user spends hours working on. Make it your goal that they either don’t have to write any code, or it takes them less than 5 minutes to write it.

For most APIs, I prefer a simple token-based authentication, where the token is a random hash assigned to the user and they can reset it at any point if it has been stolen. Allow the token to be passed in through POST or an HTTP header. For example, the user could (and should) send an SHA-1 token as a POST variable, or as a header in a format such as “Authorization: da39a3ee5e6b4b0d3255bfef95601890afd80709”.

Also, choose a secure token, not a short numeric identifier. Something irreversible is best. For example, it’s relatively simple to generate an SHA token during user creation and store it in the database. Then, you can simply query your database for any users matching that token. You could also use a token generated from a unique identifier and a salt value, something like SHA(User.ID + "abcd123"), and then query for any user that matches; e.g., where TokenFromPost = SHA(User.ID + "abcd123").
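
As a concrete illustration of the salted-hash variant mentioned above, here is a small sketch in plain Java; the salt literal and class name are placeholders, and a real deployment would use a long random secret.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class ApiTokens {

    private static final String SALT = "abcd123"; // placeholder; use a long random secret in practice

    // Derives an API token from a user ID, i.e., SHA(User.ID + "abcd123").
    public static String tokenFor(long userId) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha.digest((userId + SALT).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }
}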

Another very good option is OAuth 2 + SSL. You should be using SSL anyway, but OAuth 2 is reasonably simple to implement on the server side, and libraries are available for many common programming languages.

If the API you have made is supposed to be accessible on a public website via JavaScript, you also need to validate a per-account whitelist of URLs for the token. That way, nobody can inspect the calls to your API, steal the token from your user, and go use it for themselves.

Here are some other important things to keep in mind:

  • Whitelisting Functionality. APIs generally allow you to do basic create, read, update, and delete operations on data. But you don’t want to allow these operations for every entity, so make sure each has a whitelist of allowable actions. Make sure, for example, that only authorized users can run commands like /user/delete/<id>. Similarly, all useful headers that are sent in the user’s request need to be validated against a whitelist as well. If you are allowing Content-type headers, verify that whatever the user sends in actually matches a whitelist of supported content types. If it doesn’t, then send back an error message such as a 406 Not Acceptable response (a quick sketch of this kind of check follows this list). Whitelisting is important, as a lot of APIs are automatically generated, or use a blacklist instead, which means you have to be explicit about what you don’t want. However, the golden rule of security is to start with absolutely nothing, and only explicitly allow what you do want.
  • Protect yourself against Cross-Site Request Forgery (CSRF). If you are allowing session or cookie authentication, you need to make sure that you’re protecting yourself from CSRF attacks. The Open Web Application Security Project (OWASP) provides useful guidance on ways to preclude these vulnerabilities.
  • Validate access to resources. In every request, you need to verify that a user is in fact allowed access to the specific item they are referencing. So, if you have an endpoint to view a user’s credit card details (e.g., /account/card/view/152423), be sure that the ID “152423” is referencing a resource that the user really is authorized to access.
  • Validate all input. All input from a user needs to be securely parsed, preferably using a well-known library if you are using complicated input like XML or JSON. Don’t build your own parser, or you’re in for a world of hurt.
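
As promised in the whitelisting item above, here is a quick sketch of validating a Content-type header against a whitelist (the class name and the set of supported types are just examples):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public final class ContentTypeWhitelist {

    private static final Set<String> SUPPORTED =
            new HashSet<>(Arrays.asList("application/json", "application/xml"));

    // Returns true if the declared content type is allowed; otherwise the caller
    // can send back an error such as a 406 Not Acceptable response, as described above.
    public static boolean isAllowed(String contentTypeHeader) {
        if (contentTypeHeader == null) {
            return false;
        }
        // Strip any parameters such as "; charset=UTF-8" before comparing.
        String baseType = contentTypeHeader.split(";")[0].trim().toLowerCase();
        return SUPPORTED.contains(baseType);
    }
}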

Rule 5: Ease Of Adoption

This is really the most important rule in the bunch, and builds on all the others. As I mentioned during the documentation rule, try this out with people that are new to your API. Make sure that they can get up and running with at least a basic implementation of your API, even if it’s just following a tutorial, within a few minutes. I think 15 minutes is a good goal.

Here are some specific recommendations to ease and facilitate adoption of your API:

  • Make sure people can actually use your API and that it works the first time, every time. Have new people try to implement your API occasionally to verify that it’s not confusing in some way that you’ve become immune to.
  • Keep it simple. Don’t do any fancy authentication. Don’t do some crazy custom URL scheme. Don’t reinvent SOAP, or JSON, or REST, or anything. Use all the tools you can that have already been implemented and are widely accepted, so that developers only have to learn your API, not your API + 10 obscure new technologies.
  • Provide language-specific libraries to interface with your service. There are some nice tools to automatically generate a library for you, such as Alpaca or Apache Thrift. Currently Alpaca supports Node, PHP, Python, and Ruby. Thrift supports C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and more.
  • Simplify any necessary signup. If you are not developing an open source API, or if there is a signup process of any sort, make sure that upon signup, a user is very quickly directed to a tutorial. And make the signup process completely automated without any need for human interaction on your part.
  • Provide excellent support. A big barrier to adoption is lack of support. How will you handle and respond to a bug report? What about unclear documentation? An unsophisticated user? Forums, bug trackers, and email support are fantastic starts, but do make sure that when someone posts a bug, you really address it. Nobody wants to see a ghost town forum or a giant list of bugs that haven’t been addressed.

Web API Wrap-up

Web services and their APIs abound. Unfortunately, the vast majority are difficult to use. Reasons range from poor design, to lack of documentation, to volatility, to unresolved bugs, or, in some cases, all of the above.

Following the guidance in this post will help ensure that your web API is clean, well-documented, and easy-to-use. Such APIs are truly rare and are therefore that much more likely to be widely adopted and used.

System Security Expert Job Description Template

The following hiring guide is from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

System Security professionals are experts on preventing and mitigating security breaches that may arise from vulnerabilities in computer systems. This field covers a range of areas of expertise, including information security, network security, and mobile security. System Security professionals must “think like a hacker” in order to anticipate the main vectors of likely attack by malicious outsiders. They should have a keen eye for the most minute details, plus a well-rounded and in-depth knowledge of computer systems and the tools of the trade.

There is a shortage of formal training available to those who wish to go into the security field. While a formal education in computer systems is desirable, many of the best security experts are self-taught hobbyists when it comes to their familiarity with “hacking” and the techniques and vulnerabilities exploited by attackers. Since your security expert will be intimately aware of all of your system’s weakest points, it is critical that you can trust him or her.

This article provides a sample System Security Expert job description that will help you write the perfect job ad, and ensure that you find and hire the person that matches your specific criteria.

System Security Expert – Job Description and Ad Template

Company Introduction

{{Write a short and catchy paragraph about your company. Provide information about the company’s culture, perks, and benefits. Mention office hours, remote working possibilities, and anything else that makes your company interesting.}}

Job Description

We are looking for a Systems Security Expert with emphasis on {{WLAN/networks/web applications/penetration testing}}. You must be passionate about security and staying ahead of all possible threats to the system. You will be working with our development team to ensure that our network and technology stack is watertight. You will become thoroughly familiar with our architecture, helping to direct its development. You will provide regular reports of potential vulnerabilities and risk profiles. Good communication skills are critical to efficiently coordinate with our team and build a secure end product.

Responsibilities

  • Actively look for security vulnerabilities in our application and network, reporting issues and describing possible solutions.
  • Design and maintain our security infrastructure.
  • Stay up to date with security news, keeping an eye out for the latest vulnerabilities and remedies emerging in the field.
  • Actively liaise with the development team to ensure a secure architecture.
  • Actively liaise with the development team to ensure thorough automated testing of all source code (e.g., via Test-Driven Development).
  • Provide regular reports auditing our current services and latest changes, as well as our internal practices.
  • Monitor our server traffic, ticketing and reporting unusual packets.
  • {{Add other responsibilities here that are relevant.}}

Skills and Qualifications

  • {{Relevant education in Computer Science or Computer Engineering.}}
  • {{Relevant certification, such as CCNA or OWSE.}}
  • Thorough understanding of potential attack vectors such as XSS, injection, hijacking, social engineering, and so on.
  • Thorough understanding of cryptographic primitives and their underlying principles.
  • Sound command of {{your application’s environment}} and the command line.
  • Thorough understanding of networking protocols, such as TCP/IP.
  • Thorough understanding of HTTP and HTTPS, as well as their underlying implementations.
  • Thorough understanding of Content Delivery Networks and their integration into applications.
  • Basic programming proficiency, sufficient to write and execute scripts from the command line.
  • Active in the security community. Regularly attends meetups or conferences.
  • Contributes to open source projects, or creates purpose built tools for security work.
  • Working understanding of Agile Development processes.
  • Excellent communication skills. Ability to translate complex ideas into simple solutions on paper.
  • {{Include any framework, library, or any other technology related to your development stack.}}

Separation Anxiety: A Tutorial for Isolating Your System with Linux Namespaces

The following article is a guest post by Mahmud Ridwan, Technical Editor at Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

With the advent of tools like Docker, Linux Containers, and others, it has become super easy to isolate Linux processes into their own little system environments. This makes it possible to run a whole range of applications on a single real Linux machine and ensure no two of them can interfere with each other, without having to resort to using virtual machines. These tools have been a huge boon to PaaS providers. But what exactly happens under the hood?

These tools rely on a number of features and components of the Linux kernel. Some of these features were introduced fairly recently, while others still require you to patch the kernel itself. But one of the key components, Linux namespaces, has been a feature of Linux since version 2.6.24 was released in 2008.

Anyone familiar with chroot already has a basic idea of what Linux namespaces can do and how to use them in general. Just as chroot allows processes to see any arbitrary directory as the root of the system (independent of the rest of the processes), Linux namespaces allow other aspects of the operating system to be independently modified as well. This includes the process tree, networking interfaces, mount points, inter-process communication resources, and more.

Why Use Namespaces for Process Isolation?

On a single-user computer, a single system environment may be fine. But on a server, where you want to run multiple services, it is essential to security and stability that the services are as isolated from each other as possible. Imagine a server running multiple services, one of which gets compromised by an intruder. In such a case, the intruder may be able to exploit that service and work his way to the other services, and may even be able to compromise the entire server. Namespace isolation can provide a secure environment to eliminate this risk.

For example, using namespacing, it is possible to safely execute arbitrary or unknown programs on your server. Recently, there has been a growing number of programming contest and “hackathon” platforms, such as HackerRank, TopCoder, Codeforces, and many more. A lot of them utilize automated pipelines to run and validate programs that are submitted by the contestants. It is often impossible to know in advance the true nature of contestants’ programs, and some may even contain malicious elements. By running these programs namespaced in complete isolation from the rest of the system, the software can be tested and validated without putting the rest of the machine at risk. Similarly, online continuous integration services, such as Drone.io, automatically fetch your code repository and execute the test scripts on their own servers. Again, namespace isolation is what makes it possible to provide these services safely.

Namespacing tools like Docker also allow better control over processes’ use of system resources, making such tools extremely popular with PaaS providers. Services like Heroku and Google App Engine use such tools to isolate and run multiple web server applications on the same real hardware. These tools allow them to run each application (which may have been deployed by any of a number of different users) without worrying about one of them using too many system resources, or interfering and/or conflicting with other deployed services on the same machine. With such process isolation, it is even possible to have entirely different stacks of dependency software (and versions) for each isolated environment!

If you’ve used tools like Docker, you already know that these tools are capable of isolating processes in small “containers”. Running processes in Docker containers is like running them in virtual machines, only these containers are significantly lighter than virtual machines. A virtual machine typically emulates a hardware layer on top of your operating system, and then runs another operating system on top of that. This allows you to run processes inside a virtual machine, in complete isolation from your real operating system. But virtual machines are heavy! Docker containers, on the other hand, use some key features of your real operating system, including namespaces, and ensure a similar level of isolation, but without emulating the hardware and running yet another operating system on the same machine. This makes them very lightweight.

Process Namespace

Historically, the Linux kernel has maintained a single process tree. The tree contains a reference to every process currently running in a parent-child hierarchy. A process, given it has sufficient privileges and satisfies certain conditions, can inspect another process by attaching a tracer to it or may even be able to kill it.

With the introduction of Linux namespaces, it became possible to have multiple “nested” process trees. Each process tree can have an entirely isolated set of processes. This can ensure that processes belonging to one process tree cannot inspect or kill – in fact cannot even know of the existence of – processes in other sibling or parent process trees.

Every time a computer with Linux boots up, it starts with just one process, with process identifier (PID) 1. This process is the root of the process tree, and it initiates the rest of the system by performing the appropriate maintenance work and starting the correct daemons/services. All the other processes start below this process in the tree. The PID namespace allows one to spin off a new tree, with its own PID 1 process. The process that does this remains in the parent namespace, in the original tree, but makes the child the root of its own process tree.

With PID namespace isolation, processes in the child namespace have no way of knowing of the parent process’s existence. However, processes in the parent namespace have a complete view of processes in the child namespace, as if they were any other process in the parent namespace.

This namespace tutorial outlines the separation of various process trees using namespace systems in Linux.

It is possible to create a nested set of child namespaces: one process starts a child process in a new PID namespace, and that child process spawns yet another process in a new PID namespace, and so on.

With the introduction of PID namespaces, a single process can now have multiple PIDs associated with it, one for each namespace it falls under. In the Linux source code, we can see that a struct named pid, which used to keep track of just a single PID, now tracks multiple PIDs through the use of a struct named upid:

struct upid {
  int nr;                     // the PID value
  struct pid_namespace *ns;   // namespace where this PID is relevant
  // ...
};

struct pid {
  // ...
  int level;                  // number of upids
  struct upid numbers[0];     // array of upids
};

To create a new PID namespace, one must call the clone() system call with a special flag CLONE_NEWPID. (C provides a wrapper to expose this system call, and so do many other popular languages.) Whereas the other namespaces discussed below can also be created using the unshare() system call, a PID namespace can only be created at the time a new process is spawned using clone(). Once clone() is called with this flag, the new process immediately starts in a new PID namespace, under a new process tree. This can be demonstrated with a simple C program:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1048576];

static int child_fn(void *arg) {
  printf("PID: %ld\n", (long)getpid());
  return 0;
}

int main() {
  pid_t child_pid = clone(child_fn, child_stack+1048576, CLONE_NEWPID | SIGCHLD, NULL);
  printf("clone() = %ld\n", (long)child_pid);

  waitpid(child_pid, NULL, 0);
  return 0;
}

Compile and run this program with root privileges and you will notice an output that resembles this:

clone() = 5304
PID: 1

The PID, as printed from within the child_fn, will be 1.

Even though this namespace tutorial code above is not much longer than “Hello, world” in some languages, a lot has happened behind the scenes. The clone() function, as you would expect, has created a new process by cloning the current one and started execution at the beginning of the child_fn() function. However, while doing so, it detached the new process from the original process tree and created a separate process tree for the new process.

Try replacing the child_fn() function with the following, to print the parent PID from the isolated process’s perspective:

static int child_fn(void *arg) {
  printf("Parent PID: %ld\n", (long)getppid());
  return 0;
}

Running the program this time yields the following output:

clone() = 11449
Parent PID: 0

Notice how the parent PID from the isolated process’s perspective is 0, indicating no parent. Try running the same program again, but this time, remove the CLONE_NEWPID flag from within the clone() function call:

pid_t child_pid = clone(child_fn, child_stack+1048576, SIGCHLD, NULL);

This time, you will notice that the parent PID is no longer 0:

clone() = 11561
Parent PID: 11560

However, this is just the first step in our tutorial. These processes still have unrestricted access to other common or shared resources. For example, the networking interface: if the child process created above were to listen on port 80, it would prevent every other process on the system from being able to listen on it.

Linux Network Namespace

This is where a network namespace becomes useful. A network namespace allows each of these processes to see an entirely different set of networking interfaces. Even the loopback interface is different for each network namespace.

Isolating a process into its own network namespace involves adding another flag, CLONE_NEWNET, to the clone() function call:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>


static char child_stack[1048576];

static int child_fn(void *arg) {
  printf("New `net` Namespace:\n");
  system("ip link");
  printf("\n\n");
  return 0;
}

int main() {
  printf("Original `net` Namespace:\n");
  system("ip link");
  printf("\n\n");

  pid_t child_pid = clone(child_fn, child_stack+1048576, CLONE_NEWPID | CLONE_NEWNET | SIGCHLD, NULL);

  waitpid(child_pid, NULL, 0);
  return 0;
}

Output:

Original `net` Namespace:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:24:8c:a1:ac:e7 brd ff:ff:ff:ff:ff:ff


New `net` Namespace:
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

What’s going on here? The physical ethernet device enp4s0 belongs to the global network namespace, as indicated by the “ip” tool run from this namespace. However, the physical interface is not available in the new network namespace. Moreover, the loopback device is active in the original network namespace, but is “down” in the child network namespace.

In order to provide a usable network interface in the child namespace, it is necessary to set up additional “virtual” network interfaces which span multiple namespaces. Once that is done, it is then possible to create Ethernet bridges, and even route packets between the namespaces. Finally, to make the whole thing work, a “routing process” must be running in the global network namespace to receive traffic from the physical interface, and route it through the appropriate virtual interfaces to the correct child network namespaces. Maybe you can see why tools like Docker, which do all this heavy lifting for you, are so popular!

A routing process in the global network namespace routes traffic to multiple child network namespaces.

To do this by hand, you can create a pair of virtual Ethernet connections between a parent and a child namespace by running a single command from the parent namespace:

ip link add name veth0 type veth peer name veth1 netns <pid>

Here, <pid> should be replaced by the process ID of the process in the child namespace as observed by the parent. Running this command establishes a pipe-like connection between these two namespaces. The parent namespace retains the veth0 device, and passes the veth1 device to the child namespace. Anything that enters one of the ends comes out through the other end, just as you would expect from a real Ethernet connection between two real nodes. Accordingly, both sides of this virtual Ethernet connection must be assigned IP addresses.

Mount Namespace

Linux also maintains a data structure for all the mountpoints of the system. It includes information like what disk partitions are mounted, where they are mounted, whether they are readonly, et cetera. With Linux namespaces, one can have this data structure cloned, so that processes under different namespaces can change the mountpoints without affecting each other.

Creating a separate mount namespace has an effect similar to doing a chroot(). chroot() is good, but it does not provide complete isolation, and its effects are restricted to the root mountpoint only. Creating a separate mount namespace allows each of these isolated processes to have a completely different view of the entire system’s mountpoint structure from the original one. This allows you to have a different root for each isolated process, as well as other mountpoints that are specific to those processes. Used with care per this tutorial, you can avoid exposing any information about the underlying system.

The clone() flag required to achieve this is CLONE_NEWNS:

clone(child_fn, child_stack+1048576, CLONE_NEWPID | CLONE_NEWNET | CLONE_NEWNS | SIGCHLD, NULL)

Initially, the child process sees the exact same mountpoints as its parent process would. However, being under a new mount namespace, the child process can mount or unmount whatever endpoints it wants to, and the change will affect neither its parent’s namespace, nor any other mount namespace in the entire system. For example, if the parent process has a particular disk partition mounted at root, the isolated process will see the exact same disk partition mounted at the root in the beginning. But the benefit of isolating the mount namespace is apparent when the isolated process tries to change the root partition to something else, as the change will only affect the isolated mount namespace.

Interestingly, this actually makes it a bad idea to spawn the target child process directly with the CLONE_NEWNS flag. A better approach is to start a special “init” process with the CLONE_NEWNS flag, have that “init” process change the “/”, “/proc”, “/dev” or other mountpoints as desired, and then start the target process. This is discussed in a little more detail near the end of this namespace tutorial.

Other Namespaces

There are other namespaces that these processes can be isolated into, namely user, IPC, and UTS. The user namespace allows a process to have root privileges within the namespace, without giving it that access to processes outside of the namespace. Isolating a process by the IPC namespace gives it its own interprocess communication resources, for example, System V IPC objects and POSIX message queues. The UTS namespace isolates two specific identifiers of the system: nodename and domainname.

A quick example of how the UTS namespace is isolated is shown below:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>


static char child_stack[1048576];

static void print_nodename() {
  struct utsname utsname;
  uname(&utsname);
  printf("%s\n", utsname.nodename);
}

static int child_fn(void *arg) {
  printf("New UTS namespace nodename: ");
  print_nodename();

  printf("Changing nodename inside new UTS namespace\n");
  sethostname("GLaDOS", 6);

  printf("New UTS namespace nodename: ");
  print_nodename();
  return 0;
}

int main() {
  printf("Original UTS namespace nodename: ");
  print_nodename();

  pid_t child_pid = clone(child_fn, child_stack+1048576, CLONE_NEWUTS | SIGCHLD, NULL);

  sleep(1);

  printf("Original UTS namespace nodename: ");
  print_nodename();

  waitpid(child_pid, NULL, 0);

  return 0;
}

This program yields the following output:

Original UTS namespace nodename: XT
New UTS namespace nodename: XT
Changing nodename inside new UTS namespace
New UTS namespace nodename: GLaDOS
Original UTS namespace nodename: XT

Here, child_fn() prints the nodename, changes it to something else, and prints it again. Naturally, the change happens only inside the new UTS namespace.

Cross-Namespace Communication

Often it is necessary to establish some sort of communication between the parent and the child namespace. This might be for doing configuration work within an isolated environment, or it can simply be to retain the ability to peek into the condition of that environment from outside. One way of doing that is to keep an SSH daemon running within that environment. You can have a separate SSH daemon inside each network namespace. However, having multiple SSH daemons running uses a lot of valuable resources like memory. This is where having a special “init” process proves to be a good idea again.

The “init” process can establish a communication channel between the parent namespace and the child namespace. This channel can be based on UNIX sockets or can even use TCP. To create a UNIX socket that spans two different mount namespaces, you need to first create the child process, then create the UNIX socket, and then isolate the child into a separate mount namespace. But how can we create the process first, and isolate it later? Linux provides unshare(). This special system call allows a process to isolate itself from the original namespace, instead of having the parent isolate the child in the first place. For example, the following code has the exact same effect as the code previously mentioned in the network namespace section:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>


static char child_stack[1048576];

static int child_fn(void *arg) {
  // calling unshare() from inside the init process lets you create a new namespace after a new process has been spawned
  unshare(CLONE_NEWNET);

  printf("New `net` Namespace:\n");
  system("ip link");
  printf("\n\n");
  return 0;
}

int main() {
  printf("Original `net` Namespace:\n");
  system("ip link");
  printf("\n\n");

  pid_t child_pid = clone(child_fn, child_stack+1048576, CLONE_NEWPID | SIGCHLD, NULL);

  waitpid(child_pid, NULL, 0);
  return 0;
}

And since the “init” process is something you have devised, you can make it do all the necessary work first, and then isolate itself from the rest of the system before executing the target child.

Conclusion

This tutorial is just an overview of how to use namespaces in Linux. It should give you a basic idea of how a Linux developer might start to implement system isolation, an integral part of the architecture of tools like Docker or Linux Containers. In most cases, it would be best to simply use one of these existing tools, which are already well-known and tested. But in some cases, it might make sense to have your very own, customized process isolation mechanism, and in that case, this namespace tutorial will help you out tremendously.

There is a lot more going on under the hood than I’ve covered in this article, and there are more ways you might want to limit your target processes for added safety and isolation. But, hopefully, this can serve as a useful starting point for someone who is interested in knowing more about how namespace isolation with Linux really works.

Biometric Security – The Key To Passwordless Authentication Or A Fad?

The following article is a guest post by Nermin Hajdarbegovic, Technical Editor at Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Passwordless authentication has been the Holy Grail of security for years, but progress has been painfully slow. This does not mean that huge strides have not been made, but unfortunately, most of these developments have been relegated to research labs or professional niches. Until a few years ago, the technology to implement passwordless authentication on a grand scale simply wasn’t available.

biometrics and biometric security

However, the industry juggernaut is slowly but surely changing this. There are a few technical, legal and even ethical considerations to take into account, but be that as it may, biometric security and passwordless authentication are here to stay.

Biometrics are already changing the game, and they will continue to do so.

Why Go Passwordless In The First Place?

Since this is an engineering blog, I don’t feel the need to explain to a group of security-minded developers the upsides of fast logins. We need not look at the problem from a consumer perspective – all of us are compelled to use a myriad of online services and an ever increasing number of devices. This won’t change anytime soon, and if anything, the number of services and devices we will have to log into will keep increasing.

Of course, there are plenty of ways passwords are being dispensed with, including biometric authentication. From a user perspective, the use of Google, Microsoft and Facebook accounts to log into third-party services works, since the user can avoid password bloat and not have to create accounts for every service and device.

OAuth and OpenID have been used for years to consolidate digital identities, and the standards are employed by some of the biggest names in the tech industry.

Technically, this is not really a passwordless approach, but the average user might not see the distinction.

The pros and cons of using this approach are:

Pros:

  • Convenience
  • Easy to implement
  • Good security
  • Brand name peace of mind

Cons:

  • Dependency on a centralised service
  • All eggs in one basket – by compromising one account, an attacker can gain access to others
  • Potential security vulnerabilities, beyond your control, can be used against you
  • People may be reluctant to use such services due to privacy concerns

Much of this is true of alternative solutions, although it does not apply to security certificates which are usually relegated to business users rather than consumers. The pros outweigh the cons, hence we can already log into countless third-party services using our existing accounts.

How Can Biometrics and Biometric Security Help?

Using biometric authentication systems addresses many issues; there’s no reliance on centralised services, privacy is not a concern, and the user experience is not compromised – provided it’s done right. So, let’s take a look at the pros and cons.

Pros:

  • Fingerprint scanning is quick, cheap and relatively secure
  • Voice recognition is easy to use and difficult to manipulate
  • Iris scans are very secure and potentially more convenient than fingerprint scanning
  • Electrocardiogram technology offers “always on” authentication
  • All biometric security methods address privacy concerns while offering good security

Cons:

  • Biometrics are not suitable for all applications
  • Cost of deploying biometric security is often prohibitive
  • Support is limited to certain platforms and unavailable on most
  • Some technologies are still immature
  • Biometrics are not a silver bullet – security can still be compromised

Biometrics are not a new concept, or a new technology. Biometric security has been used in many industries for decades, and it’s been a staple of Hollywood script writers even longer. I am sure many readers had a chance to play around with facial recognition and fingerprint scanners on their notebooks years ago – I know I did, and I also know I was not impressed; most of these early solutions were cheap gimmicks.

However, we’ve come a long way since then. More processing power is available, along with vastly superior imaging sensors, and everything is backed by increasingly sophisticated software. This is why some of these technologies are making a comeback, which they’re doing with a vengeance.

Industry Gives Thumbs Up To Fingerprint Scanners

Apple’s Touch ID is probably the most recognisable fingerprint authentication solution on the market, but it’s by no means the only one. Apple opened Touch ID to third-party developers in iOS 8 and proceeded to integrate the technology in new iPhones and iPads, as well as its Apple Pay service.

This is why iOS has a clear lead over Android and other platforms; every new iPhone and iPad will ship with Touch ID until Cupertino comes up with something better.

fingerprint biometric security

This does not mean that Android should be written off because an increasing number of Android phones are shipping with fingerprint scanners. The first biometric authentication devices featured small scanners that required the user to swipe their finger over the scanner, but touch-scan units, similar to Apple’s, are becoming increasingly common. It is important to note that this feature is not reserved for expensive, flagship products – even some $200 phones marketed by Chinese vendors feature such scanners.

However, there is still a consideration; Google has not integrated a fingerprint scanner on any of its Nexus devices, although it is rumoured that it originally intended to include it on the Nexus 6 smartphone. In fact, Android Open Source Project (AOSP) provided evidence that fingerprint support was removed from the device. This is not good news for Android developers, as Google usually showcases new technology on Nexus devices and follows up with documentation and APIs, as was the case with NFC support on the Nexus S, or the barometer sensor on the Galaxy Nexus.

Still, this did not prevent vendors from using their own code, with a few types of scanners. But, this is bad news for developers whose hands are tied, since there is no standard that would eliminate fragmentation and ensure interoperability. Samsung tried to overcome the problem by allowing developers to play around with its Pass API, but this is still not an ideal solution. Motorola tried to do the same four years ago with its old Atrix devices.

A number of hardware manufacturers and developers also released SDKs enabling developers to integrate support for various fingerprint scanners, but the lack of a standardised environment that would reduce or eliminate fragmentation is still a big issue.

It may take a while before we see fingerprint scanners on most phones, but a lot of progress is being made. We went from no scanners on flagship phones to relatively reliable scanners on $200 phones in the space of a couple of years.

fingerprint scanners

But, how useful are they? Are they just gimmicks like first-generation fingerprint scanners on old notebooks?

The technology works, there is no doubt about that, but for the time being applications are limited. Software development has to catch up with hardware; we need more services that can use such solutions, and we need more APIs, standards, and guidelines from industry leaders (namely, Google). At this point, fingerprint scanners on many Android devices are gimmicks, nothing more.

Overall, fingerprint scanners are convenient, but they’re not an ideal solution. While every fingerprint is unique, there are still some security concerns. Many scanners can be tricked, although it is getting increasingly difficult to pull this off with a simple image. There are alternatives though, including 3D printing, and some morbid ways of doing this, as one security expert pointed out a couple of years ago.

Needless to say, you can’t use fingerprint readers with gloves, an injured thumb, or in other extreme situations. But, these are relatively minor drawbacks.

Microsoft Wants To Look You In The Eye

So, let’s sum up. Android and iOS can already use fingerprint scanners for biometric security, and they are currently underutilised. But what about desktop environments? We can unlock our phones and authenticate payments using biometrics, but we still work on desktops, so how about making them truly passwordless?

Microsoft recently announced Windows Hello and in case you missed it, check out the official Windows blog for a comprehensive overview of this initiative.

This is how Microsoft explains its vision for Windows Hello:

Instead of using a shared or shareable secret like a password, Windows 10 helps to securely authenticate to applications, websites and networks on your behalf—without sending up a password. Thus, there is no shared password stored on their servers for a hacker to potentially compromise.

Windows 10 will ask you to verify that you have possession of your device before it authenticates on your behalf, with a PIN or Windows Hello on devices with biometric sensors. Once authenticated with ‘Passport,’ you will be able to instantly access a growing set of websites and services across a range of industries – favorite commerce sites, email and social networking services, financial institutions, business networks and more.

Windows Hello is a biometric authentication system that will enable users to instantly access their Windows 10 devices, using fingerprint scanning, iris scanning or facial recognition. Microsoft says “plenty” of new Windows 10 devices will support Windows Hello, but, personally, I find one technique particularly interesting.

Iris scanning is one of the methods supported by Microsoft and it has a few benefits over the alternatives. It should be more reliable, and potentially more convenient, than fingerprint scanning. In case you were wondering, this won’t be handled by our webcams or phone cameras – Microsoft wants to use “a combination of special hardware and software” to make sure the system can’t be beaten.

iris scanning and biometrics

The iris scanner will rely on infrared technology (potentially, near-infrared). This means it will be able to operate in all lighting conditions and see your iris through glasses, even tinted glasses. Hardware designers won’t have to set aside a lot of room on a device to integrate the scanner; it could be integrated right next to the selfie cam on our mobiles, or as an addition to the standalone web cameras used on many office machines today. This means it could be easily retrofitted to existing desktop PCs.

Aside from infrared scanners, Microsoft will also use more traditional biometric security measures such as facial recognition, relying on Intel RealSense camera technology. This should help make Windows Hello more prolific, especially as users upgrade to new notebooks and hybrids based on Intel platforms.

On the mobile front, an iris scan offers several advantages over fingerprint authentication; it can work with gloves, iris injuries are a lot less common than thumb injuries, and it should be much more difficult to beat a consumer grade iris scanner than a fingerprint scanner.

There is another angle to Microsoft’s approach – the software giant won’t store users’ biometric data. The biometric signature will be secured locally on devices and shared with no one but the user. The signature will only be used to unlock the device and Passport, so it won’t be used to authenticate users over the network.

The jury is still out on Microsoft’s biometrics plans and we will have to wait for Windows 10 to see it in action.

What About Always-On Authentication?

While all these technologies might do a good job at replacing traditional passwords, there are emerging concepts that could give engineers more freedom. What if we could dispense with the process entirely, no passwords, no fingerprint scans – nothing?

“Always-on authentication” is the next frontier, and a number of ways of getting there have already been proposed. However, an important distinction needs to be made. Always-on authentication usually refers to machine-to-machine authentication, such as a system of “always-on” SSL authentication, SSH connections, NFC credentials and various networking technologies. These are usually developed to monitor and authenticate financial transactions, thus reducing the risk of online fraud.

There are relatively few solutions for always-on user authentication. One such example is Bionym’s Nymi wristband. It is a wearable device that looks a lot like your average fitness tracker, but it’s more clever than that.

always on authentication

Nymi scans the user’s unique electrocardiogram (ECG). This means that you only need to have the device on your wrist to provide always-on authentication. As long as your heart keeps beating, you’ll be logged in.

If you’re thinking of trying the same trick on the Apple Watch or Android Wear watches, hold your horses, we’re not there yet. The Nymi doesn’t merely track the user’s heart rate like a smartwatch, it actually analyses the shape of the user’s ECG wave, which takes a more sensitive sensor. Smartwatches sound like the ideal hardware platform for this application and, sooner or later, they will be able to do the same thing.

Imagine unlocking your phone, car, office and computer simply by being there and having a pulse. Logging into any account seamlessly, then paying for lunch, doing some shopping on the way home and maybe withdrawing cash from an ATM, all without having to juggle your groceries and credit cards. We’re not there yet, but we are slowly getting there.

What Does All This Mean For Software Developers And Users?

For the time being, software developers can use off-the-shelf middleware and tokenization to deploy passwordless solutions. One such example is Passwordless, a token-based, open-source framework for Node.js and Express. In case you are interested in how it’s deployed, Mozilla has a comprehensive blog post that explains it.

It will take a while, but biometric building blocks are slowly falling into place. The current crop of passwordless technologies will be augmented, and eventually replaced by biometric authentication.

Many biometric security skeptics, including many of my colleagues, don’t believe this will happen anytime soon, but I am an incorrigible optimist; I think passwordless security will be standard by the end of the decade, and this is why: if we merely observe one particular field, be it software or hardware, we will find countless problems with biometrics, many of which I’ve already outlined. However, if we take a few steps back and look at the big picture, at new industry trends, the increasing emphasis on personal and corporate security, highly publicised security breaches and privacy concerns, we are bound to see things from a different perspective.

Even so, the elephant in the room isn’t privacy or B2B security, it’s mobile payments.

The volume of mobile transactions in the US is expected to more than double this year to $10bn. By 2018, Bloomberg expects the volume to reach $110bn. On a per-capita basis, the average American consumer will make about $30 in mobile transactions this year, but by 2018 the number will go up to roughly $330 for every man, woman and child. Growing from $30 to $330 over three years implies an annual growth rate of a little over 2x; assuming the same compound annual growth rate in 2019 and 2020, we could be looking at four digits per capita by 2021.

With that sort of money in play, what do you think?

Top 10 Most Common Mistakes That Java Developers Make: A Java Beginner’s Tutorial

The following article is a guest post by Mikhail Selivanov, Freelance Java Developer at Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Java is a programming language that was initially developed for interactive television, but over time it has spread to almost everywhere software can be used. Designed around object-oriented programming, with garbage collection and an architecture-neutral virtual machine, and shedding the complexities of languages such as C and C++, Java created a new way of programming. Moreover, it has a gentle learning curve and appears to successfully adhere to its own motto, “Write once, run anywhere”, which is almost always true; but Java problems are still present. I’ll be addressing ten Java problems that I think are the most common mistakes. In case you missed it, you can check out my post on The 5 Most Common Mistakes HTML5 Developers Make: A Beginner’s Guide here.


Common Mistake #1: Neglecting Existing Libraries

It’s definitely a mistake for Java developers to ignore the innumerable libraries written in Java. Before reinventing the wheel, try to search for available libraries; many of them have been polished over years of existence and are free to use. These could be logging libraries, like logback and Log4j, or network-related libraries, like Netty or Akka. Some libraries, such as Joda-Time, have become a de facto standard.

The following is a personal experience from one of my previous projects. The part of the code responsible for HTML escaping was written from scratch. It worked well for years, but eventually it encountered a user input that caused it to spin into an infinite loop. The user, finding the service unresponsive, tried again with the same input. Eventually, all the CPUs on the server allocated for this application were occupied by this infinite loop. If the author of this naive HTML escape tool had decided to use one of the well-known libraries available for HTML escaping, such as HtmlEscapers from Google Guava, this probably wouldn’t have happened. At the very least, for most popular libraries with a community behind them, the error would likely have been found and fixed much earlier by that community.
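
For reference, with Guava on the classpath the escaping becomes a one-liner:

import com.google.common.html.HtmlEscapers;

String userInput = "<script>alert('pwned')</script>";
// Escapes &, <, >, " and ' without any hand-rolled parsing that could
// loop forever or break on unusual input.
String safe = HtmlEscapers.htmlEscaper().escape(userInput);
System.out.println(safe);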

Common Mistake #2: Missing the ‘break’ Keyword in a Switch-Case Block

These Java issues can be very embarrassing, and sometimes remain undiscovered until run in production. Fall-through behavior in switch statements is often useful; however, missing a “break” keyword when such behavior is not desired can lead to disastrous results. If you forget to put a “break” in “case 0” in the code example below, the program will write “Zero” followed by “One”, since control flow will fall through the “switch” statement until it reaches a “break”. For example:

public static void switchCasePrimer() {
    int caseIndex = 0;
    switch (caseIndex) {
        case 0:
            System.out.println("Zero");
        case 1:
            System.out.println("One");
            break;
        case 2:
            System.out.println("Two");
            break;
        default:
            System.out.println("Default");
    }
}

In most cases, the cleaner solution would be to use polymorphism and move code with specific behaviors into separate classes. Java mistakes such as this one can be detected using static code analyzers, e.g. FindBugs and PMD.
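
As a rough sketch of that polymorphic alternative (the class names here are invented for illustration), each case becomes its own implementation, so a forgotten break simply cannot happen:

// Each former "case" gets its own class; the behavior is selected by the
// object's type rather than by fall-through-prone control flow.
interface CaseAction {
    void run();
}

class ZeroAction implements CaseAction {
    public void run() { System.out.println("Zero"); }
}

class OneAction implements CaseAction {
    public void run() { System.out.println("One"); }
}

// Caller side: pick the implementation once and invoke it.
CaseAction action = new ZeroAction();
action.run(); // prints "Zero" only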

Common Mistake #3: Forgetting to Free Resources

Every time a program opens a file or a network connection, it is important for Java beginners to remember to free the resource once they are done using it. Similar caution should be taken if any exception were to be thrown during operations on such resources. One could argue that FileInputStream has a finalizer that invokes the close() method on a garbage collection event; however, since we can’t be sure when a garbage collection cycle will start, the input stream can consume computer resources for an indefinite period of time. In fact, there is a really useful and neat statement introduced in Java 7 particularly for this case, called try-with-resources:

private static void printFileJava7() throws IOException {
    try(FileInputStream input = new FileInputStream("file.txt")) {
        int data = input.read();
        while(data != -1){
            System.out.print((char) data);
            data = input.read();
        }
    }
}

This statement can be used with any object that implements the AutoCloseable interface. It ensures that each resource is closed by the end of the statement.
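
The statement also handles several resources at once, closing them in reverse order of creation; a small sketch with made-up file names:

private static void copyFirstByte() throws IOException {
    // Both streams are closed automatically, even if read() or write() throws.
    try (FileInputStream input = new FileInputStream("in.txt");
         FileOutputStream output = new FileOutputStream("out.txt")) {
        int data = input.read();
        if (data != -1) {
            output.write(data);
        }
    }
}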

Common Mistake #4: Memory Leaks

Java uses automatic memory management, and while it’s a relief to forget about allocating and freeing memory manually, it doesn’t mean that a beginning Java developer should not be aware of how memory is used in the application. Problems with memory allocation are still possible. As long as a program keeps references to objects that are not needed anymore, those objects will not be freed; in a way, we can still call this a memory leak. Memory leaks in Java can happen in various ways, but the most common reason is everlasting object references, because the garbage collector can’t remove objects from the heap while there are still references to them. One can create such a reference by defining a class with a static field containing some collection of objects, and forgetting to set that static field to null after the collection is no longer needed. Static fields are considered GC roots and are never collected.
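
A tiny sketch of that static-field variant might look like this (the class and method names are invented for illustration):

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // A GC root: everything referenced from here stays reachable for the
    // lifetime of the class and is never garbage collected.
    private static List<byte[]> cache = new ArrayList<>();

    public static void remember(byte[] data) {
        // The field is never cleared or set to null, so entries pile up.
        cache.add(data);
    }
}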

Another potential source of such memory leaks is a group of objects referencing each other. Circular references by themselves don’t defeat the JVM’s tracing garbage collector, which reclaims any group of objects that is unreachable from the GC roots, but the entire group stays alive for as long as even one of its members remains reachable. Another issue is leaks in non-heap memory when JNI is used.

The primitive leak example could look like the following:

final ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
final Deque<BigDecimal> numbers = new LinkedBlockingDeque<>();
final BigDecimal divisor = new BigDecimal(51);

scheduledExecutorService.scheduleAtFixedRate(() -> {
    BigDecimal number = numbers.peekLast();
    if (number != null && number.remainder(divisor).byteValue() == 0) {
        System.out.println("Number: " + number);
        System.out.println("Deque size: " + numbers.size());
    }
}, 10, 10, TimeUnit.MILLISECONDS);

scheduledExecutorService.scheduleAtFixedRate(() -> {
    numbers.add(new BigDecimal(System.currentTimeMillis()));
}, 10, 10, TimeUnit.MILLISECONDS);

try {
    scheduledExecutorService.awaitTermination(1, TimeUnit.DAYS);
} catch (InterruptedException e) {
    e.printStackTrace();
}

This example creates two scheduled tasks. The first task takes the last number from a deque called “numbers” and, if the number is divisible by 51, prints the number and the deque size. The second task puts numbers into the deque. Both tasks are scheduled at a fixed rate and run every 10 ms. If the code is executed, you’ll see that the size of the deque keeps increasing, and eventually the deque will be filled with objects consuming all available heap memory. To prevent this while preserving the semantics of the program, we can use a different method for taking numbers from the deque: “pollLast”. Unlike “peekLast”, “pollLast” returns the element and removes it from the deque, while “peekLast” only returns the last element.
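
The fix would then be a one-line change in the first task:

// pollLast() returns and removes the element, so the deque no longer
// grows without bound between scheduled runs.
BigDecimal number = numbers.pollLast();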

To learn more about memory leaks in Java, please refer to our article that demystified this problem.

Common Mistake #5: Excessive Garbage Allocation


Excessive garbage allocation may happen when the program creates a lot of short-lived objects. The garbage collector then works continuously, removing unneeded objects from memory, which hurts the application’s performance. One simple example:

String oneMillionHello = "";
for (int i = 0; i < 1000000; i++) {
    oneMillionHello = oneMillionHello + "Hello!";
}
System.out.println(oneMillionHello.substring(0, 6));

In Java, strings are immutable. So, on each iteration a new string is created. To address this we should use a mutable StringBuilder:

StringBuilder oneMillionHelloSB = new StringBuilder();
for (int i = 0; i < 1000000; i++) {
    oneMillionHelloSB.append("Hello!");
}
System.out.println(oneMillionHelloSB.toString().substring(0, 6));

While the first version takes quite a bit of time to execute, the version that uses StringBuilder produces a result in significantly less time.

Common Mistake #6: Using Null References without Need

Avoiding excessive use of null is a good practice. For example, it’s preferable to return empty arrays or collections from methods instead of nulls, since it can help prevent NullPointerException.

Consider the following method that traverses a collection obtained from another method, as shown below:

List<String> accountIds = person.getAccountIds();
for (String accountId : accountIds) {
    processAccount(accountId);
}

If getAccountIds() returns null when a person has no accounts, a NullPointerException will be raised, and a null check will be needed to fix it. If the method returns an empty list instead, NullPointerException is no longer a problem. Moreover, the code is cleaner since we don’t need to null-check the variable accountIds.
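
On the producing side, returning an empty collection costs almost nothing; a sketch of such a getAccountIds() (an accountIds field is assumed here) could be:

public List<String> getAccountIds() {
    if (accountIds == null) {
        // A shared, immutable empty list; callers can iterate it safely
        // without any null check.
        return Collections.emptyList();
    }
    return accountIds;
}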

To deal with other cases in which one wants to avoid nulls, different strategies may be used. One of them is to use the Optional type, which can either be an empty object or a wrapper around some value:

Optional<String> optionalString = Optional.ofNullable(nullableString);
if(optionalString.isPresent()) {
    System.out.println(optionalString.get());
}

In fact, Java 8 provides a more concise solution:

Optional<String> optionalString = Optional.ofNullable(nullableString);
optionalString.ifPresent(System.out::println);

The Optional type has been part of Java since version 8, but it has long been well known in the world of functional programming. Prior to that, it was available in Google Guava for earlier versions of Java.

Common Mistake #7: Ignoring Exceptions

It is often tempting to leave exceptions unhandled. However, the best practice for beginner and experienced Java developers alike is to handle them. Exceptions are thrown on purpose, so in most cases we need to address the issues causing them. Do not overlook these events. If necessary, you can rethrow the exception, show an error dialog to the user, or add a message to the log. At the very least, explain why the exception has been left unhandled, so other developers know the reason.

selfie = person.shootASelfie();
try {
    selfie.show();
} catch (NullPointerException e) {
    // Maybe, invisible man. Who cares, anyway?
}

A clearer way of highlighting an exception’s insignificance is to encode this message into the exception’s variable name, like this:

try { selfie.delete(); } catch (NullPointerException unimportant) {  }
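
When the exception does matter, a minimal way of handling it is to log it and rethrow it as an unchecked exception, so the failure stays visible; a sketch using java.util.logging (the logger name is arbitrary):

private static final Logger LOGGER = Logger.getLogger("selfies");

try {
    selfie.show();
} catch (NullPointerException e) {
    // Record the failure and propagate it instead of silently swallowing it.
    LOGGER.log(Level.WARNING, "Could not show selfie", e);
    throw new IllegalStateException("Selfie is missing", e);
}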

Common Mistake #8: Concurrent Modification Exception

This exception occurs when a collection is modified while iterating over it using methods other than those provided by the iterator object. For example, we have a list of hats and we want to remove all those that have ear flaps:

List<IHat> hats = new ArrayList<>();
hats.add(new Ushanka()); // that one has ear flaps
hats.add(new Fedora());
hats.add(new Sombrero());
for (IHat hat : hats) {
    if (hat.hasEarFlaps()) {
        hats.remove(hat);
    }
}


If we run this code, “ConcurrentModificationException” will be raised, since the code modifies the collection while iterating over it. The same exception may occur if one of multiple threads working with the same list tries to modify the collection while the others iterate over it. Concurrent modification of collections in multiple threads is a common requirement, but it should be handled with the usual tools from the concurrent programming toolbox, such as synchronization locks and special collections adapted for concurrent modification. There are subtle differences between how this Java issue can be resolved in single-threaded and multithreaded cases. Below is a brief discussion of some ways it can be handled in a single-threaded scenario:

Collect objects and remove them in another loop

Collecting hats with ear flaps in a list to remove them later from within another loop is an obvious solution, but requires an additional collection for storing the hats to be removed:

List<IHat> hatsToRemove = new LinkedList<>();
for (IHat hat : hats) {
    if (hat.hasEarFlaps()) {
        hatsToRemove.add(hat);
    }
}
for (IHat hat : hatsToRemove) {
    hats.remove(hat);
}

Use Iterator.remove method

This approach is more concise, and it doesn’t need an additional collection to be created:

Iterator<IHat> hatIterator = hats.iterator();
while (hatIterator.hasNext()) {
    IHat hat = hatIterator.next();
    if (hat.hasEarFlaps()) {
        hatIterator.remove();
    }
}

Use ListIterator’s methods

Using the list iterator is appropriate when the modified collection implements the List interface. Iterators that implement the ListIterator interface support not only removal, but also add and set operations. ListIterator extends the Iterator interface, so the example looks almost the same as the one using Iterator’s remove method. The only difference is the type of the hat iterator and the way we obtain it, with the “listIterator()” method. The snippet below shows how to replace each hat that has ear flaps with a sombrero, using the “ListIterator.remove” and “ListIterator.add” methods:

IHat sombrero = new Sombrero();
ListIterator<IHat> hatIterator = hats.listIterator();
while (hatIterator.hasNext()) {
    IHat hat = hatIterator.next();
    if (hat.hasEarFlaps()) {
        hatIterator.remove();
        hatIterator.add(sombrero);
    }
}

With ListIterator, the remove and add method calls can be replaced with a single call to set:

IHat sombrero = new Sombrero();
ListIterator<IHat> hatIterator = hats.listIterator();
while (hatIterator.hasNext()) {
    IHat hat = hatIterator.next();
    if (hat.hasEarFlaps()) {
        hatIterator.set(sombrero); // set instead of remove and add
    }
}

Use stream methods introduced in Java 8

With Java 8, programmers have the ability to transform a collection into a stream and filter that stream according to some criteria. Here is an example of how the stream API can help us filter hats and avoid “ConcurrentModificationException”.

hats = hats.stream().filter((hat -> !hat.hasEarFlaps()))
        .collect(Collectors.toCollection(ArrayList::new));

The “Collectors.toCollection” method will create a new ArrayList with the filtered hats. This can be a problem if the filtering condition is satisfied by a large number of items, resulting in a large ArrayList; thus, it should be used with care.

Use List.removeIf method presented in Java 8

Another solution available in Java 8, and clearly the most concise, is the use of the “removeIf” method:

hats.removeIf(IHat::hasEarFlaps);

That’s it. Under the hood, it uses “Iterator.remove” to accomplish the behavior.

Use specialized collections

If at the very beginning we had decided to use “CopyOnWriteArrayList” instead of “ArrayList”, there would have been no problem at all, since “CopyOnWriteArrayList” provides modification methods (such as set, add, and remove) that don’t change the backing array of the collection, but rather create a new modified version of it. This allows iteration over the original version of the collection and modification of it at the same time, without the risk of a “ConcurrentModificationException”. The drawback of that collection is obvious: a new copy of the collection is generated with each modification.

There are other collections tuned for different cases, e.g. “CopyOnWriteArraySet” and “ConcurrentHashMap”.
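
Revisiting the hat example with CopyOnWriteArrayList (from java.util.concurrent), the original removal loop works as is, because iteration runs over a snapshot of the backing array:

List<IHat> hats = new CopyOnWriteArrayList<>();
hats.add(new Ushanka());
hats.add(new Fedora());
hats.add(new Sombrero());

// The for-each loop iterates a snapshot, so removing elements through the
// list itself no longer throws ConcurrentModificationException.
for (IHat hat : hats) {
    if (hat.hasEarFlaps()) {
        hats.remove(hat);
    }
}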

Another possible mistake with concurrent collection modifications is to create a stream from a collection, and during the stream iteration, modify the backing collection. The general rule for streams is to avoid modification of the underlying collection during stream querying. The following example will show an incorrect way of handling a stream:

List<IHat> filteredHats = hats.stream().peek(hat -> {
    if (hat.hasEarFlaps()) {
        hats.remove(hat);
    }
}).collect(Collectors.toCollection(ArrayList::new));

The peek method performs the provided action on each element as it passes through the stream. Here, the action attempts to remove elements from the underlying list, which is erroneous. To avoid this, use one of the methods described above.

Common Mistake #9: Breaking Contracts

Sometimes, code that is provided by the standard library or by a third-party vendor relies on rules that must be obeyed in order to make things work. For example, it could be the hashCode and equals contract which, when followed, guarantees correct behavior for the collections in the Java collection framework and for other classes that use the hashCode and equals methods. Disobeying contracts isn’t the kind of error that always leads to exceptions or breaks compilation; it’s trickier, because sometimes it changes application behavior without any sign of danger. Erroneous code could slip into a production release and cause a whole bunch of undesired effects: bad UI behavior, wrong data reports, poor application performance, data loss, and more. Fortunately, these disastrous bugs don’t happen very often.

I already mentioned the hashCode and equals contract. It is used in collections that rely on hashing and comparing objects, like HashMap and HashSet. Simply put, the contract contains two rules:

  • If two objects are equal, then their hash codes should be equal.
  • If two objects have the same hash code, then they may or may not be equal.

Breaking the contract’s first rule leads to problems when attempting to retrieve objects from hash-based collections such as HashMap and HashSet. The second rule signifies that objects with the same hash code aren’t necessarily equal. Let us examine the effects of breaking the first rule:

public static class Boat {
    private String name;

    Boat(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Boat boat = (Boat) o;

        return !(name != null ? !name.equals(boat.name) : boat.name != null);
    }

    @Override
    public int hashCode() {
        return (int) (Math.random() * 5000);
    }
}

As you can see, the class Boat has overridden the equals and hashCode methods. However, it has broken the contract, because hashCode returns random values for the same object every time it’s called. The following code will most likely not find a boat named “Enterprise” in the HashSet, despite the fact that we added that kind of boat earlier:

public static void main(String[] args) {
    Set<Boat> boats = new HashSet<>();
    boats.add(new Boat("Enterprise"));

    System.out.printf("We have a boat named 'Enterprise' : %b\n", boats.contains(new Boat("Enterprise")));
}
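
Restoring the contract is as simple as deriving the hash code from the same field that equals compares:

@Override
public int hashCode() {
    // Equal boats (same name) now produce equal hash codes, so
    // HashSet.contains() can locate them in the expected bucket.
    return name != null ? name.hashCode() : 0;
}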

Another example of a contract involves the finalize method. Here is a quote from the official Java documentation describing its function:

“The general contract of finalize is that it is invoked if and when the Java™ virtual machine has determined that there is no longer any means by which this object can be accessed by any thread (that has not yet died), except as a result of an action taken by the finalization of some other object or class which is ready to be finalized. The finalize method may take any action, including making this object available again to other threads; the usual purpose of finalize, however, is to perform cleanup actions before the object is irrevocably discarded. For example, the finalize method for an object that represents an input/output connection might perform explicit I/O transactions to break the connection before the object is permanently discarded.”

One could decide to use the finalize method for freeing resources like file handles, but that would be a bad idea: there are no timing guarantees on when finalize will be invoked, since it runs during garbage collection, and the timing of garbage collection is unpredictable.

Common Mistake #10: Using Raw Type Instead of a Parameterized One

Raw types, according to the Java specification, are types that are either not parameterized, or non-static members of a class R that are not inherited from the superclass or superinterface of R. There were no alternatives to raw types until generic types were introduced in Java; the language has supported generic programming since version 1.5, and generics were undoubtedly a significant improvement. However, for backward-compatibility reasons, a pitfall was left in place that can potentially break the type system. Let’s look at the following example:

List listOfNumbers = new ArrayList();
listOfNumbers.add(10);
listOfNumbers.add("Twenty");
listOfNumbers.forEach(n -> System.out.println((int) n * 2));

Here we have a list of numbers defined as a raw ArrayList. Since its element type isn’t specified with a type parameter, we can add any object to it. But in the last line we cast each element to int, double it, and print the doubled number to standard output. This code will compile without errors, but once run it will raise a runtime exception because we attempted to cast a string to an integer. Obviously, the type system is unable to help us write safe code if we hide the necessary information from it. To fix the problem we need to specify the type of objects we’re going to store in the collection:

List<Integer> listOfNumbers = new ArrayList<>();

listOfNumbers.add(10);
listOfNumbers.add("Twenty");

listOfNumbers.forEach(n -> System.out.println((int) n * 2));

The only difference from the original is the line defining the collection:

List<Integer> listOfNumbers = new ArrayList<>();

The fixed code wouldn’t compile, because we are trying to add a string to a collection that is expected to store integers only. The compiler will show an error, pointing at the line where we try to add the string “Twenty” to the list. It’s always a good idea to parameterize generic types. That way, the compiler is able to make all possible type checks, and the chances of runtime exceptions caused by type-system inconsistencies are minimized.

Conclusion

Java as a platform simplifies many things in software development, relying both on a sophisticated JVM and on the language itself. However, its features, like automatic memory management and decent OOP tooling, don’t eliminate all the problems and issues a regular Java developer faces. As always, knowledge, practice and Java tutorials like this are the best means of avoiding and addressing application errors, so know your libraries, read the Java and JVM documentation, and write programs. Don’t forget about static code analyzers either, as they can point out actual bugs and highlight potential ones.

The 5 Most Common Mistakes HTML5 Developers Make: A Beginner’s Guide

The following article is a guest post by Demir Selmanovic, the Lead Technical Editor at Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

It’s been over 20 years since Tim Berners-Lee and Robert Cailliau specified HTML, which became the standard markup language used to build the Internet. Ever since then, the HTML development community has begged for improvements to this language, but this cry was mostly heard by web browser developers who tried to ease the HTML issues of their colleagues. Unfortunately, this led to even more problems causing many cross-browser compatibility issues and duplication of development work. Over these 20 years, HTML was upgraded 4 times, while most of the browsers got double-digit numbers of major updates plus numerous small patches.

HTML5 was supposed to finally solve our problems and become the one standard to rule them all (browsers). It was probably one of the most anticipated technologies since the creation of the World Wide Web. Has that happened? Did we finally get one markup language that is fully cross-browser compatible, works on all desktop and mobile platforms, and gives us all of the features we were asking for? I don’t know! Just a few days ago (Sept. 16th, 2014) we received one more call for review from the W3C, so the HTML5 specification is still incomplete.

Despite HTML5 issues and mistakes, is it the one language ring to rule them all?

Hopefully, when the specification is one day finalized, browsers will not already have significant obsolete code, and they will easily and properly implement great features like Web Workers, multiple synchronized audio and video elements, and other HTML5 components that we’ve needed for a long time.

Give hasty developers an incomplete spec, and you’ll have a recipe for disaster.

We do, however, have thousands of companies that have based their businesses on HTML5 and are doing great. There are also many great HTML5-based applications and games used by millions of people, so success is obviously possible, and HTML5 is, and will continue to be, used regardless of the status of its specification.

However, the recipe I mentioned can easily blow up in our faces, and thus I’ve highlighted some of the most basic HTML5 mistakes that can be avoided. Most of the mistakes listed below are a consequence of incomplete or missing implementations of certain HTML5 elements in different browsers, and we should hope that in the near future they will become obsolete. Until that happens, I suggest reading the list and keeping it in mind when building your next HTML5 application, whether you’re an HTML5 beginner or an experienced vet.

Common mistake #1: Trusting local storage

Let them eat cake! This approach has been a burden on developers for ages. Due to a reasonably sensible fear of security breaches and a desire to protect users’ computers, in the “dark ages” when the Internet was feared by many, web applications were allowed to leave only unreasonably small amounts of data on computers. True, there were things like the user data that the “great browser masters from Microsoft®” gave us, or Local Shared Objects in Flash, but this was far from perfect.

It is therefore reasonable that one of the first basic HTML5 features adopted by developers was Web Storage. However, be aware that Web Storage is not secure. It is better than using cookies because it is not transmitted over the wire, but it is not encrypted. You should definitely never store security tokens there. Your security policy should never rely on data stored in Web Storage, as a malicious user can easily modify her localStorage and sessionStorage values at any time.

To get more insight on Web Storage and how to use it, I suggest reading this post.

Common mistake #2: Expecting compatibility among browsers

HTML5 is much more than a simple markup language. It has matured enough to combine behavior with layout, and you should consider HTML5 as extended HTML with advanced JavaScript on top. If you look at all the trouble we had before just to make a static combination of HTML+CSS look identical in all browsers, it is fair to assume that more complexity will simply mean more effort spent assuring cross-browser compatibility.

HTML5 is interpreted far from identically by different browsers, just as was the case with JavaScript. All the major players in the browser wars lent a hand in the HTML5 spec, so it’s fair to assume that deviations between browsers should shrink over time. But even now, some browsers have decided to ignore certain HTML5 elements entirely, making it very difficult to establish a baseline, common set of features. If we add more Internet-connected devices and platforms to the equation, the situation gets even more complicated, causing further problems with HTML5.

For example, Web Animations is a great feature that is supported only by Chrome and Opera, while the Web Notifications feature, which allows alerting the user to an occurrence outside the context of a web page, such as the delivery of an email, is completely ignored by Internet Explorer.

To learn more about HTML5 features and support by different browsers check out the HTML5 guide at www.caniuse.com.

So the fact remains that even though HTML5 is new and well specified, we should expect a great deal of cross-browser compatibility issues and plan for them in advance. There are just too many gaps for browsers to fill in, and it is fair to expect that they won’t be able to smooth over all of the differences between platforms.

Common mistake #3: Assuming high performance

Regardless of the fact that HTML5 is still evolving, it is a very powerful technology with many advantages over the alternative platforms used before it existed. But with great power comes great responsibility, especially for HTML5 beginners. HTML5 has been adopted by all major browsers on desktop and mobile platforms. With this in mind, many development teams pick HTML5 as their preferred platform, hoping that their applications will run equally well in every browser. However, assuming solid performance on both desktop and mobile platforms just because the HTML5 specification says so is not sensible. Lots of companies (ahem, Facebook, ahem!) placed their bets on HTML5 for their mobile platform and suffered when HTML5 didn’t work out as they had planned.

Again, however, there are some great companies that rely heavily on HTML5. Just look at the numerous online game development studios doing amazing things while pushing HTML5 and browsers to their limits. Obviously, as long as you are aware of the performance issues and work around them, you can be in a great position to make amazing applications.

Common mistake #4: Limited accessibility

The web has become a very important part of our lives. Making applications accessible to people who rely on assistive technology is an important topic that is often set aside in software development. HTML5 tries to overcome this by implementing some advanced accessibility features. More than a few developers have accepted this as sufficient and haven’t really spent any time implementing additional accessibility options in their applications. Unfortunately, at this stage HTML5 has issues that prevent it from making your applications available to everyone, and you should expect to invest additional time if you want to reach a broader range of users.

You can check this place for more information about accessibility in HTML5 and the current state of the most common accessibility features.

Common mistake #5: Not using HTML5 enhancements

HTML5 has extended the standard HTML/XHTML set of tags significantly. In addition to new tags, we got quite a few new rules and behaviors. Too many developers picked up just a few enhancements and have skipped some of the very cool new features of HTML5.

One of the coolest things in HTML5 is client-side validation. This feature was probably one of the earliest elements of HTML5 that web browsers picked up. But, unfortunately, you can find more than a few developers who add the novalidate attribute to their forms by default. The reasons for doing this are reasonable, and I’m quite sure we will have a debate about this one. Due to backward compatibility, many applications implemented custom JavaScript validators, and having out-of-the-box validation done by browsers on top of that is inconvenient. However, it is not too difficult to ensure that the two validation methods don’t collide, and standardizing user validation will provide a common experience while helping to resolve the accessibility issues I mentioned earlier.

Another great feature is related to the way user input is handled in HTML5. Before HTML5 came, we had to keep all of our form fields contained inside the <form></form> tag. New form attributes make it perfectly valid to do something like this:

<form action="demo_form.asp" id="form1">
  First name: <input type="text" name="fname"><br>
  <input type="submit" value="Submit">
</form>

Last name: <input type="text" name="lname" form="form1">

Even if lname is not inside the form, it will be posted together with fname.

For more information about new form attributes and enhancements, you can check the Mozilla Developer Network.

Wrap up

I understand that web developers are collateral damage in the browser wars, as many of the above mistakes are a direct consequence of problematic HTML5 implementations in different browsers. However, it is still crucial that we avoid common HTML5 issues and spend some time understanding the new features of HTML5. Once we have them all under control, our applications will be able to use great new enhancements and take the web to the next level.

Web developers are collateral damage in the browser wars.