
Archive for the ‘Application security’ Category

Securing the software supply chain in the cloud

Software supply chain attacks are among the most common threats in today's cloud-native development, and they pose a high risk to any organization.

They revolve around consuming software packages, source code, or even APIs from a third-party or untrusted source.

The last thing we wish to do is to block developers from building new applications, but we need to understand the threats to the software supply chain.

What are the common threats?

Several common threats can arise from a software supply chain attack. Most of these attacks begin either with the download of an untrusted piece of code, which leads to malware infection, or with pulling data from an external API, which inserts unverified data into a backend system.

Steps to mitigate the risk of supply chain attacks

The modern development lifecycle is based on CI/CD (Continuous Integration / Continuous Deployment or Delivery), which allows us to embed security gates at various stages of the pipeline, as explained below.

Source Code

  • Scan for software vulnerabilities (in binaries and open-source libraries) before storing components/code/libraries inside VM or container images in an image repository.

Example of services:

  • Amazon Inspector – Vulnerability scanner for Amazon EC2, container images (inside Amazon ECR), and Lambda functions
  • Microsoft Defender for Containers – Vulnerability scanner for containers
  • Google Container Analysis – Vulnerability scanner for containers
  • Scan the code stored in your repositories to make sure it does not contain sensitive data (such as secrets, API keys, or credentials)

Example of tools:

  • Snyk – Scan for open-source, code, container, and Infrastructure-as-Code vulnerabilities
  • Trivy – Scan for open-source, code, container, and Infrastructure-as-Code vulnerabilities
  • Checkov – Scan for open-source and Infrastructure-as-Code vulnerabilities
  • KICS – Scan for Infrastructure-as-Code vulnerabilities
  • Terrascan – Scan for Infrastructure-as-Code vulnerabilities
  • Kubescape – Scan for Kubernetes vulnerabilities
  • Scan your binaries to verify their trustworthiness – especially important when you import binaries from an external source (see the checksum sketch below).
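
As a minimal illustration of that last point, the sketch below (Python) verifies the SHA-256 checksum of a downloaded binary against the value published by the vendor before the pipeline is allowed to use it. The file path and expected digest are hypothetical placeholders.

import hashlib
import sys

# Hypothetical values: in practice the expected digest comes from the vendor's
# signed release notes or checksum file, not from the same download location.
EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
BINARY_PATH = "downloads/tool-1.2.3-linux-amd64"

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large binaries do not have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(BINARY_PATH)
if actual != EXPECTED_SHA256:
    print("Checksum mismatch for %s: %s" % (BINARY_PATH, actual))
    sys.exit(1)  # fail the pipeline step instead of using the untrusted binary
print("Checksum verified")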


Repositories

  • Create a private repository for storing source code, VM images, or container images
  • Enforce authentication and authorization to control who can access and make changes to the repository
  • Sign all source code/images stored in the repository
  • Audit access to the repositories


Authentication & Authorization

  • Configure the authentication and authorization process (who has write permissions to the repository), and enforce the use of MFA.


Handling data from external APIs

There are many cases where we rely on data from external third parties, exposed using APIs.

Since we cannot verify the trustworthiness of external data, we must follow these guidelines:

  • Never rely on unauthenticated APIs – always make sure connectivity to the external API requires proper authentication (such as certificates or rotated API keys)
  • Always make sure the remote API enforces a proper authorization mechanism – if the remote API allows admin or even write access to anyone on the Internet, the data it provides can no longer be considered trusted
  • Always make sure data is encrypted in transit – this preserves data confidentiality and provides a higher degree of trust in the remote endpoint
  • Always perform input validation and proper escaping before storing data from an external source in any backend database (see the sketch below)
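
A minimal sketch of that last guideline: validate the untrusted record first, then insert it with a parameterized query so the external data is never concatenated into the SQL statement. The field names and table are hypothetical, and SQLite is used only to keep the example self-contained.

import re
import sqlite3

def validate_record(record):
    """Validate and normalize an untrusted record before it touches the database."""
    email = str(record.get("email", ""))
    name = str(record.get("name", ""))
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise ValueError("invalid email from external API")
    if not (1 <= len(name) <= 100):
        raise ValueError("invalid name length from external API")
    return {"email": email, "name": name}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE partners (name TEXT, email TEXT)")

untrusted = {"name": "Example Corp", "email": "contact@example.com"}  # pretend this came from the external API
clean = validate_record(untrusted)

# Parameterized query: the driver escapes the values, so API data is never
# concatenated into the SQL statement.
conn.execute("INSERT INTO partners (name, email) VALUES (?, ?)", (clean["name"], clean["email"]))
conn.commit()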


Summary

In this post, we reviewed threats resulting from software supply chain vulnerabilities, along with various tools and services that can help us secure the modern development process for cloud-native applications.

It is possible to mitigate the risks coming from the software supply chain, whether it is code that we develop in-house or code/binaries/libraries that we import from a third-party source, but we must always follow the concept of “Trust but verify”.


Why not just have DevOps without the Sec?

If you don’t include security testing, risk assessments, and compliance evaluations in the software delivery or release pipeline, you’re putting your organization at risk. The consequences go beyond a failed release or a delayed feature: you’ll be introducing vulnerabilities into production, bypassing compliance requirements, and failing audits. All of these instances of security negligence can carry penalties and fines.

Security in DevOps is part of the natural evolution of DevOps

DevOps is not a technology but a cultural shift that organizations need to make. If we break down DevOps, it comes down to developers and IT operations. What is in the name? Looking at the history, it all started with developers, as in agile, breaking work down into smaller components. Then they moved on to the next constraint: “How can we deploy faster?” So we got really good at infrastructure as code and at deploying not just to our private cloud but to public clouds as well. The third constraint was testing, so we started to introduce automated testing into the release process, which evolved into continuous testing as we shifted testing earlier in the release process.

Security is a constraint if you think otherwise

As I said, it is a natural evolution, and now we have arrived at the fourth constraint: security. This constraint, however, does not sit well with existing DevOps practices of continuous integration and delivery. Fixing that might take renaming DevOps to DevSecOps. DevOps teams are fine with the rename, because security has always been the last step in the release process and security teams really don’t step in until the code is ready to move into production; in DevOps terms, delaying a release by weeks, let alone months, is blasphemy. Introducing security into DevOps is not as simple as introducing testing was. The notion that “you move security ‘left’ and things will go right” will not work outright. It requires an organization-level change in mindset to get security working with developers, and it requires us to reevaluate and come up with better technology to introduce security into our existing DevOps pipelines.

Security is boring but ‘right’

Security people have traditionally been tool operators. Some might do scripting, but there is a wide disconnect from the software engineering group. Developers tend to criticize security people; for them, it is easier to break something than it is to build it. As companies gain velocity and every company becomes a software company, they are all building things faster, and security comes last in the pipeline because it doesn’t inherently provide any business value other than risk reduction. Security was left behind not just because DevOps was moving delivery pipelines too fast for security to catch up, but also because security teams took DevOps as a passing trend in line with agile.

Eventually, it comes down to business value

Ultimately, security became a part of DevOps once organizations recognized that DevOps was a legitimate movement and that security provides business value. We finally realized that conveying and communicating risk reduction must happen at every part of the pipeline, whatever your software development cycle looks like. From waterfall to continuous integration, whatever you have in place, security should be part of it, whether or not it appears in the name or in how we talk about it; the name itself is something of a gimmick. Culture comes from the work itself, so simply sliding “Sec” into DevOps is a start, but there is a lot more behind it.

Modern DevSecOps schemes are about the right approach

The fundamentals of modern DevSecOps rely on both process and automation. By process I mean closing the big gap that exists today between security teams and development teams in a DevOps workflow. To be frank, developers don’t really like fixing security issues. Security teams find the issues and raise alerts, but when it comes to communicating and remediating them, they need the cooperation of the development teams. A DevSecOps team that promotes collaboration understands how developers think and work, and automates the whole process. That’s the key to a successful relationship between the security teams and the development teams.

If you look at the number of people on those teams, you will find hundreds or thousands of developers, probably a dozen or so DevOps people, and a couple of security people. In a nutshell, there is no workaround for introducing security into DevOps: you must automate and have the right tools in place to communicate and close the loop on resolving application security issues.

There are two approaches to introducing security into DevOps. One is to put developers into security teams, so that the security teams teach themselves how developers work and how developers want security issues communicated to them and resolved.

The other approach is to put security analysts into dev teams to help developers improve the way they think about security and the way they develop their applications. Whichever approach you take, it resolves the gaps that hold back automation and communication by reducing friction between the dev and security teams.

DevSecOps is, after all, a mindset that closely follows your culture

The purpose and intent of the word “DevSecOps” is a mindset: an agile team is responsible for all aspects of the product, from design and development to operations and security. This achieves speed and scale without sacrificing the safety of the code. Traditionally, a system is designed and implemented, and only before release are defects identified by security staff. With agile practices, it is important to inject security and operational concerns as early as possible in the development cycle.

“DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.”

“DevOps is also characterized by operations staff making use of many of the same techniques as developers for their systems work.”

Integrate security aspects in a DevOps process

A diagram of a common DevOps lifecycle:

The DevOps world is meant to provide a complementary solution both for rapid development (such as Agile) and for cloud environments, where IT personnel become an integral part of the development process. In the DevOps world, managing a large number of development environments manually is practically infeasible, monitoring mixed environments becomes complex, and deploying a large number of different builds becomes extremely fast and sensitive to changes.

The idea behind any DevOps solution is to deploy an entire CI/CD process, which means supporting constant change and immediate deployment of builds/versions.
For the security department, this kind of process looks at first like a nightmare: dozens of builds, partial tests, no human control over any change, and so on.

For this reason, it is crucial for the security department to embrace the DevOps attitude, which means embedding security in every part of the development lifecycle, software deployment, and environment change.

It is important to understand that there are no fixed stages as there were in the waterfall development lifecycle, and most of the stages run in parallel. In the CI/CD world everything changes quickly and components can be part of different stages, so it is important to align processes, methods, and tools across all development and DevOps teams.

In order to better understand how to embed security into the DevOps lifecycle, we need to review the different stages of the development lifecycle:

Planning phase

This stage in the development process is about gathering business requirements.

At this stage, it is important to embed the following aspects:

  • Gather information security requirements (such as authentication, authorization, auditing, encryption, etc.)
  • Conduct threat modeling in order to detect possible code weaknesses
  • Run training / awareness programs for developers and DevOps personnel about secure coding

 

Creation / Code writing phase

This stage in the development process is about the code writing itself.

At this stage, it is important to embed the following aspects:

  • Connect the development environments (IDEs) to static code analysis products
  • Have the solution architecture reviewed by a security expert, or by a security champion on their behalf
  • Review open-source components embedded inside the code

 

Verification / Testing phase

This stage in the development process is about testing, conducted mostly by QA personnel.

At this stage, it is important to embed the following aspects:

  • Run SAST (static application security testing) tools on the code itself (pre-compile stage)
  • Run DAST (dynamic application security testing) tools against the running application (post-compile stage)
  • Run IAST (interactive application security testing) tools against the application itself
  • Run SCA (software composition analysis) tools in order to detect known vulnerabilities in open-source or 3rd-party components (see the simplified sketch after this list)
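
To make the SCA idea concrete, here is a deliberately simplified sketch in Python: it flags pinned dependencies whose versions appear in an advisory list. The advisory entries below are invented purely for illustration; real SCA tools pull from curated vulnerability databases and resolve transitive dependencies as well.

# Toy software composition analysis: compare pinned dependencies against an
# advisory list. The advisory data below is invented purely for illustration.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.0"): "EXAMPLE-2020-0001",
    ("otherlib", "2.3.1"): "EXAMPLE-2021-0042",
}

def parse_requirements(text):
    """Yield (package, version) pairs from 'package==version' lines."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            package, version = line.split("==", 1)
            yield package.lower(), version

requirements = """\
examplelib==1.0.0
requests==2.7.0
"""

findings = [
    (pkg, ver, KNOWN_VULNERABLE[(pkg, ver)])
    for pkg, ver in parse_requirements(requirements)
    if (pkg, ver) in KNOWN_VULNERABLE
]

for pkg, ver, advisory in findings:
    print("%s==%s is affected by %s" % (pkg, ver, advisory))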

 

Software packaging and pre-production phase

This stage in the development process is about packaging the developed code before the deployment/distribution phase.

At this stage, it is important to embed the following aspects:

  • Run IAST (interactive application security testing) tools against the application itself
  • Run fuzzing tools in order to detect buffer overflow vulnerabilities – this can be done automatically as part of the build environment by embedding security tests for functional / negative testing
  • Perform code signing so that future changes (such as malware injection) can be detected – a minimal signing sketch follows this list
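
A minimal sketch of the signing idea, using Ed25519 keys from the Python cryptography package. Key handling is simplified here; in practice the private key would live in an HSM or a managed signing service, and the artifact would be the real build output rather than an in-memory byte string.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Packaging stage: sign the built artifact.
private_key = Ed25519PrivateKey.generate()  # simplified; normally loaded from an HSM/KMS
public_key = private_key.public_key()

artifact = b"contents of the packaged build artifact"  # placeholder for the real package bytes
signature = private_key.sign(artifact)

# Release stage: verify before deployment; any later modification of the artifact
# (for example, injected malware) invalidates the signature.
try:
    public_key.verify(signature, artifact)
    print("Signature valid: artifact unchanged since packaging")
except InvalidSignature:
    print("Signature check failed: artifact was modified")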

 

Software packaging release phase

This stage is between the packaging and deployment stages.

At this stage, it is important to embed the following aspects:

  • Compare the code signature with the original signature from the software packaging stage
  • Conduct integrity checks on the software package
  • Deploy the software package to a development environment and conduct automated or stress tests
  • Deploy the software package using a blue/green methodology for software quality and further security quality tests

 

Software deployment phase

At this stage, the software package (such as mobile application code, docker container, etc.) is moving to the deployment stage.

At this stage, it is important to embed the following aspects:

  • Review permissions on the destination folder (in the case of code deployment to web servers)
  • Review permissions for the Docker registry
  • Review permissions for other services in a cloud environment (such as storage, database, application, etc.) and fine-tune the service role used for running the code

 

Configure / Operate / Tune phase

At this stage, the application is in production and undergoes modifications (according to business requirements) and on-going maintenance.

At this stage, it is important to embed the following aspects:

  • Run patch management and configuration management processes using tools such as Chef, Ansible, etc.
  • Run scanning processes to detect vulnerabilities using vulnerability assessment tools
  • Delete and re-deploy vulnerable environments with up-to-date ones (if possible)

 

On-going monitoring phase

At this stage, constant application monitoring is being conducted by the infrastructure or monitoring teams.

At this stage, it is important to embed the following aspects:

  • Run RASP (runtime application self-protection) tools
  • Implement defense at the application layer using WAF (Web application firewall) products
  • Implement products for defending the application from Botnet attacks
  • Implement products for defending the application from DoS / DDoS attacks
  • Conduct penetration testing
  • Implement monitoring solution using automated rules such as automated recovery of sensitive changes (tools such as GuardRails)

 

Security recommendations for development based on a CI/CD / DevOps process

  • It is highly recommended to perform on-going training for the development and DevOps teams on security aspects and secure development
  • It is highly recommended to nominate a security champion among the development and DevOps teams, in order to conduct threat modeling at early stages of the development lifecycle and to embed security aspects as soon as possible in the development lifecycle
  • Use automated tools for deploying environments in a simple and standard form.
    Tools such as Puppet require root privileges for the folders they have access to. To lower the risk, it is recommended to enable folder access auditing.
  • Avoid hard-coding passwords and access keys inside scripts and code.
  • It is highly recommended to store credentials (SSH keys, privileged credentials, API keys, etc.) in a vault (solutions such as HashiCorp Vault or CyberArk).
  • It is highly recommended to limit privileged access based on role (role-based access control), following the principle of least privilege.
  • It is recommended to perform network separation between the production environment and Dev/Test environments.
  • Restrict developer teams’ access to production environments, and allow only the DevOps team to access them.
  • Enable auditing and access control for all development environments and identify anomalous access attempts (such as a developer attempting to access a production environment)
  • Make sure sensitive data (such as customer data, credentials, etc.) doesn’t pass in clear text in transit. If there is a business requirement to pass sensitive data in transit, make sure the data is sent over encrypted protocols (such as SSH v2 or TLS 1.2), using strong cipher suites (see the sketch after this list).
  • It is recommended to follow OWASP recommendations (such as the OWASP Top 10, OWASP ASVS, etc.)
  • When using containers, it is recommended to use well-known and signed repositories.
  • When using containers, it is recommended not to blindly trust open-source libraries inside the containers, and to scan for vulnerable versions (including dependencies) during the build creation process.
  • When using containers, it is recommended to perform hardening using guidelines such as the CIS Docker Benchmark or the CIS Kubernetes Benchmark.
  • It is recommended to deploy automated tools for on-going tasks, from build deployments, to code review for detecting vulnerabilities in the code and in open-source components, to patch management processes embedded inside the development and build process.
  • It is recommended to scan for security weaknesses, using vulnerability management tools, during the entire system lifetime.
  • It is recommended to deploy configuration management tools, in order to detect and automatically remediate configuration drift from the original configuration.
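
As a small sketch of two of the recommendations above (no hard-coded credentials, and TLS 1.2 or higher with certificate verification), here is a Python example using a hypothetical endpoint and environment variable name:

import os
import ssl
import urllib.request

# Hypothetical endpoint and variable name, used only for illustration.
API_URL = "https://api.example.com/v1/status"

# Read the credential from the environment (populated by the CI/CD platform or a
# vault agent) instead of hard-coding it in the script.
api_key = os.environ["EXAMPLE_API_KEY"]

# Build a TLS context that verifies the server certificate and refuses anything
# older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

request = urllib.request.Request(API_URL, headers={"Authorization": "Bearer " + api_key})
with urllib.request.urlopen(request, context=context) as response:
    print(response.status)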

 


This article was written by Eyal Estrin, cloud security architect and Vitaly Unic, application security architect.

JSON Web Token Tutorial: An Example in Laravel and AngularJS

What To Look Out For In Software Development NDAs

The demand for technical talent, and the ease with which information can be shared, has increased entrepreneurs’ reliance on business relationships with outsiders. It has never been easier for an entrepreneur to find, meet, communicate and eventually enter into some sort of business relationship with an individual or company that is otherwise not associated with the business.


Moreover, sky-high valuations and fairytale overnight success stories have fueled the notion that even a basic idea can be worth millions, if not billions, in a relatively short time. In light of these factors, you might presume that Non-Disclosure Agreements (NDAs) have been widely accepted in the tech world as a means to protect sensitive and potentially valuable information from theft and abuse. Not so fast.

Before jumping into the debate, though, it helps to have a quick understanding of what an NDA is, what one looks like and, eventually, what to look out for if you’re asked to sign one as a freelance software developer.

What Is A Non-Disclosure Agreement?

An NDA is exactly what its name implies — a legal agreement between two or more parties that (i) defines certain confidential information that will be disclosed and (ii) imposes a legal obligation on the receiving party to keep that information confidential. NDAs are most commonly used when a business relationship between two companies or individuals requires the sharing of confidential information.

For example:

Company A, a local retailer, has hired ABC IT Co. to build an online inventory and order management system. To build the system, Company A must provide ABC IT Co. with a list of Company A’s suppliers and certain pricing information. Before disclosing its supplier list and pricing information, Company A asks ABC IT Co. to sign an NDA forbidding ABC IT Co. from disclosing or using Company A’s confidential information.

If a party to an NDA breaches the agreement, by disclosing or using confidential information for example, the other party to the NDA may sue the breaching party for monetary damages (compensation for lost profits or business), injunctive relief (a court order requiring the breaching party to refrain from taking some action) or specific performance (a court order requiring that the breaching party take some specified action).

So What Do Software Development NDAs Look Like?

NDAs are negotiated legal agreements that can be as simple or as complex as the parties desire. An NDA can be a one page fill-in-the-blank form or a lengthy document drafted from scratch to reflect the unique circumstances of the parties’ relationship, the different negotiating leverage of each party, and the nature of the information that will be disclosed.


Although there is no such thing as a one-size-fits-all software NDA, for purposes of this overview, and to understand generally how NDAs work, it’s important to appreciate the three “main-event” provisions that are common to all NDAs.

(a) The Definition of Confidential Information:

The definition of “Confidential Information” will set forth the type of disclosed information that is subject to the limitations on use and disclosure and, importantly, the type of disclosed information that is not subject to such limitations.

(b) The Term of the Recipient’s Obligations

The term of an NDA sets forth the time limit on the parties’ obligations. The term of an NDA may be measured in days, weeks, months or years depending on the circumstances of the relationship and the nature of the disclosed information.

(c) The Limitation on Use and Disclosure:

This provision will describe what a recipient party may do and what a recipient party may not do with disclosed information that falls within the definition of Confidential Information. This provision will almost certainly forbid disclosure of Confidential Information, but may also limit the use of Confidential Information and, in some cases, require that the recipient take certain affirmative steps to protect the confidentiality of Confidential Information.

The NDA Debate: Should You Ask For An NDA? Should You Sign One?

Although NDAs have been around for as long as there has been information worth protecting, the high-tech startup boom has thrust their use into the limelight and sparked a debate as to their value. As an industry that is highly dependent on data and constantly evolving technology, one would think that the high-tech startup world would embrace the use of software development NDAs. To understand why that isn’t the case, and to better gauge whether you should ask for an NDA or sign one presented to you, consider the following:

NDAs Are Often Unilateral

NDAs are unilateral when the business relationship requires that only one party disclose confidential information (rather than a mutual exchange of information by each party).

A startup seeks to hire an engineer to build its mobile app and has asked the engineer to sign an NDA. The startup will disclose information to the engineer, but the relationship does not require the engineer to provide confidential information to the startup. The NDA will be unilateral and will impose legal obligations, and potential liability, on the engineer only.

Because only one party is exchanging confidential information, only one party (the recipient party) has a legal obligation to comply with and, as such, only the recipient party is subject to potential liability. What an entrepreneur might view as a means by which to protect an idea, an NDA recipient might view as a one-sided contract.

Entrepreneurs Often Overstate the Need for an NDA

There are surely circumstances where NDAs make sense. Customer lists, pricing information, proprietary formulas and algorithms might have intrinsic value that is best protected by an NDA. Many argue, however, that some entrepreneurs are NDA trigger happy and think that every idea is worthy of legal protection. Ideas though, it is argued, are rarely new and, moreover, often have no value without execution.

NDAs should be asked for only when there is something worth protecting, and many argue that an idea alone does not warrant asking for an NDA. Finally, those most often asked to sign software NDAs – investors and engineers – rarely have any interest in stealing an idea when doing so would likely ruin any professional goodwill and reputation they’ve earned in their respective professional communities.

NDAs Indicate Mistrust

In a perfect world, business would be business and would never be personal. In reality, though, business is often about perception. What might be “just a contract” to an entrepreneur asking for a software development NDA, may be perceived as an indication of mistrust and a questioning of personal integrity by the person being asked to sign one. Because NDAs are most often requested at the outset of a business relationship, signaling mistrust and calling into question one’s professional integrity may start the relationship off on the wrong foot, even if that wasn’t the intention. Perception is powerful.

This issue is less of a concern for business relationships where both parties will be disclosing confidential information and, thus, an NDA will be bilateral and both parties subject to legal obligation. Outside of strategic joint ventures, partnerships, mergers and similar arrangements, however, bilateral exchanges are rare and unilateral NDAs are much more common.

NDAs Can Limit An Information Recipient’s Ability To Earn A Living

As discussed earlier, an NDA defines a set of information that is to be considered “Confidential Information” and then specifies what a recipient may and may not do with that information during the term of the NDA. Whether an NDA is three pages or three-hundred pages, no contract can predict and plan for every possible circumstance and this limitation often works against the recipient of disclosed information.

What if, after signing an NDA, an engineer is asked to build a similar product or to execute a similar but technically different idea? Is using similar code on a different application a violation of the NDA’s non-use provision? What if the engineer learned new skills during the engagement? Can the engineer use those skills for another client? Can the engineer list the client on his or her resume?

There is a real concern that signing even one NDA, whether as an engineer, an investor or otherwise, can drastically shrink one’s pool of potential business. At worst, signing an NDA might foreclose a person’s ability to work on even slightly related projects. At best, signing an NDA complicates future business development efforts as every new opportunity requires a time consuming analysis of conflicts and liability under each and every NDA that the person may be subject to.

Enforcement Isn’t Cheap

The whole point of entering into an NDA is to have some legal remedy if the recipient party discloses confidential information in violation of the agreement. An NDA gives a disclosing party a basis to file a lawsuit seeking money damages and/or a court order against the breaching party. What many NDA proponents don’t fully appreciate, however, is the cost of enforcement.

Filing a lawsuit can be extremely costly and time consuming. A lawsuit for breach of contract will very likely require hiring a lawyer to gather evidence, assess possible legal claims, file the initial complaint and supporting documents, depose the allegedly breaching party and any witnesses and related parties, and argue the case before a judge. Lawsuits can take years, and lawyers typically charge by the hour. Before asking for an NDA, one should assess whether the information to be protected is more valuable than the potential cost of enforcement.

Though the above factors have contributed to a move away from NDAs in the startup world, NDAs are not without their value. Whether you should ask for an NDA before disclosing information, or agree to sign one if you’re on the receiving end of the equation, depends on the particular circumstances of the intended business relationship and each party’s motivation to enter into the relationship. The more valuable the relationship is to a party, the less leverage that party has to negotiate for or against the use of an NDA. The less valuable the relationship is to a party, the more leverage that party has to get its way or walk away. This push and pull is at the heart of all negotiations: the party with the better “Best Alternative to a Negotiated Agreement” (BATNA) has the upper hand.

If You Must Have An NDA…

So what if you’re an engineer and the opportunity to work on a particular project outweighs the risk of signing a software development NDA? What if you’re a startup and the intrinsic value of your information justifies the need for an NDA, despite the difficulty of finding an engineer that will sign one? If you have to sign an NDA, or if you must ask for one, what are some of the things to look out for and consider?


As is always the case, I strongly suggest seeking the guidance of a competent and licensed attorney. Contracts can get complex quickly, and legal rights and obligations shouldn’t be left to “winging it.” As you’re finding an attorney, though, you can start by reviewing some of the NDA’s main operative provisions. The following are a few preliminary things you might consider when presented with or requesting an NDA:

1. Definition of Confidential Information

Recall that this provision defines the type of disclosed information that is subject to the confidentiality obligations of the NDA and, as such, it should reflect the nature of the business relationship and that of the information to be disclosed.

If you’re a disclosing party, you’ll likely ask for a broad definition of Confidential Information to cover everything that might be disclosed to the receiving party during the course of the relationship. If you’re a receiving party, however, you might resist this request and seek instead to narrow the definition to include only specifically designated information such as, for example, written information that is marked “Confidential.” Regardless of where the negotiations come out, the parties should think carefully about striking the right balance between a definition of Confidential Information that is too broad (and thus extremely restrictive to the recipient party), on the one hand, and, on the other hand, too narrow (thus minimizing the protective effect to the disclosing party).

Though it’s important to determine the information that is to be held in confidence, it is equally important to “carve-out” certain information that is not subject to the confidentiality provisions. Common examples of such carve-outs include information that is or becomes publicly available and information that is lawfully known before entry into the business relationship.

2. Term of Confidentiality

The term of an NDA should reflect the nature of the parties’ business relationship and the nature of the information to be disclosed. If the relationship is limited to a one-year engagement, it might not make sense for the term of the NDA to extend too far after termination of the relationship. Similarly, certain types of information become less valuable or sensitive over time. Financial statements, for example, may be particularly valuable at and immediately after the time they are prepared, but probably don’t accurately reflect a company’s financial health months or years after their preparation. If information is of a type that decreases in value or sensitivity over time, a long term is likely not necessary.

3. Disclosure to Representatives

As discussed throughout this article, NDAs are typically signed by a single disclosing party and a single recipient party. The problem, though, is that a recipient party may not always work alone and, rather, may from time to time need to disclose information protected by an NDA to such recipient party’s colleagues, employees or representatives in order to carry out the terms of the business relationship.

David Developer has signed an NDA with BigCo to create a mobile app for BigCo.

During the project, David needs to enlist the help of his colleague, Peter Programmer, to write some code in a language that David is less familiar with. Peter has not signed an NDA. Can David disclose information to Peter so that Peter can assist with the project?

Rather than go through the hassle of signing a new NDA for each new person to whom information needs to be disclosed during the course of a project, or trying to predict ahead of time every person to whom information may need to be disclosed, the parties to an NDA may include a representatives provision addressing permitted disclosures to certain defined persons.

The representatives provision is straightforward from a drafting perspective and is simply a definition of “Representatives” that specifies the persons or classes of persons to whom confidential information may be disclosed. A recipient party will likely want the definition to be broad and inclusive of any person with whom the recipient party may collaborate. The disclosing party, of course, will likely want to keep the definition of Representatives as narrow as possible to permit the project to move forward, on the one hand, while maintaining the protections of the software development NDA, on the other. Finally, the disclosing party will very likely wish to include a clause providing that, prior to any disclosure of confidential information to a Representative, the recipient party inform such Representative of the confidential nature of the information and of the terms of the NDA. A representatives clause may look something like the following:

During the Term of this Agreement, the Recipient Party will not disclose the Confidential Information to any person other than the Representatives, provided that, prior to any such disclosure to a Representative, the Recipient Party informs such Representative of the confidential nature of the information and the terms of this Agreement. “Representatives” shall include the employees, independent contractors, partners, agents and other third parties that are or may be engaged by the Recipient Party for purposes of the Project.

4. Non-Disclosure v. Non-Use

This is a big one. As mentioned earlier, NDAs will almost always include a prohibition on disclosure of Confidential Information. Some software NDAs, however, will also prohibit or limit use of Confidential Information. For example:

The Recipient Party agrees that, during the Term of this Agreement, the Recipient Party will not (i) disclose the Confidential Information to any person other than its Representatives and (ii) will not use the Confidential Information for any purpose other than for those purposes directly related to the Project.

Depending on the term of the NDA and the type of information disclosed, restriction on use may not be an issue. If the term is particularly long, however, or the definition of Confidential Information particularly broad, the “use prohibition” may be extraordinarily restrictive on the recipient party. For example, consider the following definition of Confidential Information:

“Confidential Information” includes (i) all information furnished by the Disclosing Party to the Recipient Party, whether furnished before or after the date of this Agreement, whether oral or written, and regardless of the manner in which it was furnished, and (ii) all analyses, compilations, forecasts, studies, interpretations, documents, code and similar work product prepared by the Recipient Party or its Representatives in connection with the Project.

What this means is that, for as long as the NDA is in effect, the Recipient Party cannot disclose or use any information that the Disclosing Party made available to the Recipient Party or any information prepared in connection with the particular Project. Without any carve-outs or qualifications, these clauses could be incredibly limiting.

An engineer signs an NDA which includes the two provisions set out above. During the course of the Project, the engineer learns a new way of putting together common strings of code. The new method could be considered work product that was prepared in connection with the Project and, as such, the engineer may be prohibited from using the method in future projects during the term of the NDA.

Before hearing Startup A’s pitch, an investor signs an NDA which includes provisions similar to those set out above. During the pitch, Startup A reveals its most recent financial statements and its strategy for growth. The investor does not invest. A few months later, the investor is approached by a similar startup, Startup B, and asked to attend a pitch. The investor may be precluded from investing in Startup B as doing so might involve use of information learned during Startup A’s pitch, even if only remembered by the investor.

The above examples are admittedly extreme, but are used to stress the point that the combination of a broad definition of Confidential Information, an unnecessarily long term, and restrictions on use can be paralyzing. Additionally, these are by no means the only red-flags that can sneak into an NDA and what might be a red-flag for one NDA may be perfectly tolerable for a different business relationship.


So What Do I Do…Specifically?

Though you might now have a better understanding of what an NDA is, what a software development NDA might look like, and why many in the tech world are reluctant to sign them, you might still be wondering what, specifically, you should do when on the receiving end of an NDA. There is no substitute for the advice of a competent attorney, but, with an understanding of the concepts discussed in this article, you can approach the first read of an NDA armed with some knowledge as to what is most important to watch for:

  • Is this a bilateral or unilateral NDA? Will both parties be disclosing information? If so, are the parties subject to identical limitations and requirements?
  • How broad, or narrow, is the definition of Confidential Information?
  • How long are the obligations in effect? Does the term of the NDA match the nature of the business relationship and the information to be disclosed?
  • Am I only prohibited from disclosing the Confidential Information, or disclosing and using the Confidential Information?
  • Am I permitted to disclose the information to my employees and colleagues who may assist with the project?
  • Is this relationship valuable enough to assume a legal obligation that can be enforced in a court?

Finally, the above considerations, and this write-up generally, are not solely for the benefit of those who may be asked to sign an NDA. Certainly, a recipient party should consider very carefully an NDA’s provisions before signing, but a party considering asking for an NDA, too, would be wise to consider these factors.

NDAs, like most contracts, have the most value, and are therefore most likely to be signed, when both parties are comfortable with the balance of risks managed by the NDA and the benefit to be realized by the underlying contractual relationship. By considering the perspective of the recipient party, a party asking for an NDA may be better able to tailor the scope of an NDA to match the business relationship and present to the recipient party a fair and balanced agreement.

Though the information in this write-up should give you a good starting point, there is a lot to consider when asking for or presented with an NDA. A competent attorney can work with both parties to draft an NDA that is protective to the disclosing party, without being overly restrictive to the recipient party, and help move the parties towards a mutually beneficial business relationship.

If you want to learn more about legal issues faced by startups and developers, I suggest you check out Startup Law Hacks as well.

Disclaimer: the contents of this article were written and are made available solely as general information and for educational purposes and not to provide specific legal advice of any kind or to establish an attorney-client relationship. This article should not be used as a substitute for competent legal advice from an attorney licensed in your jurisdiction. This article has been written by Bret Stancil in his individual capacity and the views and opinions expressed herein are his own.

Source: Toptal

Integrating Facebook Authentication in AngularJS App with Satellizer

With the arrival of feature-rich front-end frameworks such as AngularJS, more and more logic is being implemented on the front-end, such as data manipulation/validation, authentication, and more. Satellizer, an easy-to-use token-based authentication module for AngularJS, simplifies the process of implementing authentication in AngularJS. The library comes with built-in support for Google, Facebook, LinkedIn, Twitter, Instagram, GitHub, Bitbucket, Yahoo, Twitch, and Microsoft (Windows Live) accounts.

Integrating Facebook Login in AngularJS App with Satellizer

In this article, we will build a very simple webapp, similar to the one here, which allows you to log in and see the current user’s information.

Authentication vs Authorization

These are 2 scary words that you often encounter once your app starts integrating a user system. According to Wikipedia:

Authentication is the act of confirming the truth of an attribute of a single piece of data (a datum) claimed true by an entity.

Authorization is the function of specifying access rights to resources related to information security and computer security in general and to access control in particular.

In layman’s terms, let’s take the example of a blog website with some people working on it. The bloggers write articles and the manager validates the content. Each person can authenticate (log in) to the system, but their rights (authorization) are different, so the blogger cannot validate content whereas the manager can.

Why Satellizer

You can create your own authentication system in AngularJS by following some tutorials, such as this very detailed one: JSON Web Token Tutorial: An Example in Laravel and AngularJS. I suggest reading that article, as it explains JWT (JSON Web Token) very well and shows a simple way to implement authentication in AngularJS using local storage and HTTP interceptors directly.

So why Satellizer? The principal reason is that it supports a handful of social network logins such as Facebook, Twitter, etc. Nowadays, especially for websites used on mobile, typing username and password is quite cumbersome and users expect to be able to use your website with little hindrance by using social logins. As integrating the SDK of each social network and following their documentations is quite repetitive, it would be nice to support these social logins with minimal effort.

Moreover, Satellizer is an active project on GitHub. Active is key here, as these SDKs change quite frequently and you don’t want to have to re-read their documentation every now and then (anyone working with the Facebook SDK knows how annoying that is).

AngularJS App with Facebook Login

This is where things start to become interesting.

We will build a web app that has regular login/register (i.e. using username, password) mechanism and supports social logins as well. This webapp is very simple as it has only 3 pages:

  • Home page: anyone can see
  • Login page: to enter username/password
  • Secret page: that only logged in users can see

For backend, we will use Python and Flask. Python and the framework Flask are quite expressive so I hope porting the code to other languages/frameworks will not be very hard. We will, of course, use AngularJS for front-end. And for the social logins, we will integrate with Facebook only as it is the most popular social network at this time.

Let’s start!

Step #1: Bootstrap Project

Here is how we will structure our code:

- app.py
- static/
	- index.html
	- main.js
	- bower.json
	- partials/
		- login.tpl.html
		- home.tpl.html
		- secret.tpl.html

All the back-end code is in app.py. The front-end code is placed in the static/ folder. By default, Flask automatically serves the contents of the static/ folder. All the partial views are in static/partials/ and handled by the ui.router module.

To start coding the back-end, we’ll need Python 2.7.* and install the required libraries using pip. You can of course use virtualenv to isolate a Python environment. Below is the list of required Python modules to put in requirements.txt:

Flask==0.10.1
PyJWT==1.4.0
Flask-SQLAlchemy==1.0
requests==2.7.0

To install all these dependencies:

pip install -r requirements.txt

In app.py we have some initial code to bootstrap Flask (import statements are omitted for brevity):

app = Flask(__name__)

@app.route('/')
def index():
    return flask.redirect('/static/index.html')

if __name__ == '__main__':
    app.run(debug=True)

Next we init bower and install AngularJS and ui.router:

bower init # here you will need to answer some questions. when in doubt, just hit enter :)
bower install angular angular-ui-router --save # install and save these dependencies into bower.json

Once these libraries are installed, we need to include AngularJS and ui-router in index.html and create routings for 3 pages: home, login, and secret.

<body ng-app="DemoApp">

<a ui-sref="home">Home</a>
<a ui-sref="login">Login</a>
<a ui-sref="secret">Secret</a>
<div ui-view></div>

<script src="bower_components/angular/angular.min.js"></script>
<script src="bower_components/angular-ui-router/release/angular-ui-router.min.js"></script>
<script src="main.js"></script>
</body>

Below is the code that we need in main.js to configure routing:

var app = angular.module('DemoApp', ['ui.router']);

app.config(function ($stateProvider, $urlRouterProvider) {
  $stateProvider
    .state('home', {
      url: '/home',
      templateUrl: 'partials/home.tpl.html'
    })
    .state('secret', {
      url: '/secret',
      templateUrl: 'partials/secret.tpl.html',
    })
    .state('login', {
      url: '/login',
      templateUrl: 'partials/login.tpl.html'
    });
  $urlRouterProvider.otherwise('/home');

});

At this point, if you run the server with python app.py, you should see this basic interface at http://localhost:5000

The links Home, Login, and Secret should work at this point and show the content of the corresponding templates.

Congratulations, you just finished setting up the skeleton! If you encounter any error, please check out the code on GitHub.

Step #2: Login and Register

At the end of this step, you’ll have a webapp that you can register/login using email and password.

The first step is to configure the backend. We need a User model and a way to generate a JWT token for a given user. The User model shown below is really simplified and does not perform even basic checks, such as whether the email field contains “@” or whether the password field contains at least 6 characters.

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(100), nullable=False)
    password = db.Column(db.String(100))

    def token(self):
        payload = {
            'sub': self.id,
            'iat': datetime.utcnow(),
            'exp': datetime.utcnow() + timedelta(days=14)
        }
        token = jwt.encode(payload, app.config['TOKEN_SECRET'])
        return token.decode('unicode_escape')

We use the jwt module in Python to generate the payload part of the JWT. The iat and exp fields correspond to the timestamps at which the token is created and at which it expires. In this code, the token expires in 2 weeks.
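
As a quick sanity check during development, you can decode a freshly generated token with the same module and secret to inspect those fields (assuming user is a User instance created as above):

import jwt

# Decode a token produced by User.token() to look at its claims.
payload = jwt.decode(user.token(), app.config['TOKEN_SECRET'])
print(payload['sub'], payload['iat'], payload['exp'])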

After the User model is created, we can add the “login” and “register” endpoints. The code for both is quite similar, so here I will just show the “register” part. Please note that, by default, Satellizer will call the endpoints /auth/login and /auth/signup for “login” and “register” respectively.

@app.route('/auth/signup', methods=['POST'])
def signup():
    data = request.json

    email = data["email"]
    password = data["password"]

    user = User(email=email, password=password)
    db.session.add(user)
    db.session.commit()

    return jsonify(token=user.token())

Let’s check the endpoint using curl first:

curl localhost:5000/auth/signup -H "Content-Type: application/json" -X POST -d '{"email":"[email protected]","password":"xyz"}'

The result should look like this:

{
  "token": "very long string…."
}
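
For completeness, here is a minimal sketch of what the corresponding /auth/login endpoint could look like (the plain-text password comparison and minimal error handling mirror the simplified User model above):

@app.route('/auth/login', methods=['POST'])
def login():
    data = request.json

    # Look the user up by email and compare the (unhashed, for brevity) password.
    user = User.query.filter_by(email=data["email"]).first()
    if user is None or user.password != data["password"]:
        return jsonify(error="Wrong email or password"), 401

    return jsonify(token=user.token())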

Now that the back-end part is ready, let’s attack the front-end! First, we need to install satellizer and add it as a dependency in main.js:

bower install satellizer --save

Add satellizer as dependency:

var app = angular.module('DemoApp', ['ui.router', 'satellizer']);

Login and signup with Satellizer are actually quite simple in comparison to all the setup so far:

$scope.signUp = function () {
    $auth
      .signup({email: $scope.email, password: $scope.password})
      .then(function (response) {
        // set the token received from server
        $auth.setToken(response);
        // go to secret page
        $state.go('secret');
      })
      .catch(function (response) {
        console.log("error response", response);
      })
  };

If you have any difficulty setting up the code, you can take a look at the code on GitHub.

Step #3: But Secret View Is Not Really Secret, Because Anyone Can See It!

Yes, that is correct! Until now, anyone can go to secret page without logging in.

It’s time to add an interceptor in AngularJS to make sure that if someone goes to the secret page without being logged in, they will be redirected to the login page.

First, we should add a flag requiredLogin to distinguish the secret page from the other ones.

    .state('secret', {
      url: '/secret',
      templateUrl: 'partials/secret.tpl.html',
      controller: 'SecretCtrl',
      data: {requiredLogin: true}
    })

The “data” part will be used in the $stateChangeStart event which is fired each time the routing changes:

app.run(function ($rootScope, $state, $auth) {
  $rootScope.$on('$stateChangeStart',
    function (event, toState) {
      var requiredLogin = false;
      // check if this state need login
      if (toState.data && toState.data.requiredLogin)
        requiredLogin = true;
      
      // if yes and if this user is not logged in, redirect him to login page
      if (requiredLogin && !$auth.isAuthenticated()) {
        event.preventDefault();
        $state.go('login');
      }
    });
});

Now, the user cannot go directly to the secret page without logging in. Hooray!

As usual, the code of this step can be found here.

Step #4: It’s Time to Get Something Really Secret!

At this moment, there’s nothing really secret in the secret page. Let’s put something personal there.

This step starts by creating an endpoint in the back-end which is only accessible to an authenticated user, i.e., a request carrying a valid token. The endpoint /user below returns the user_id and email of the user corresponding to the token.

@app.route('/user')
def user_info():
    # the token is put in the Authorization header
    if not request.headers.get('Authorization'):
        return jsonify(error='Authorization header missing'), 401
    
    # this header looks like this: “Authorization: Bearer {token}”
    token = request.headers.get('Authorization').split()[1]
    try:
        payload = jwt.decode(token, app.config['TOKEN_SECRET'])
    except DecodeError:
        return jsonify(error='Invalid token'), 401
    except ExpiredSignature:
        return jsonify(error='Expired token'), 401
    else:
        user_id = payload['sub']
        user = User.query.filter_by(id=user_id).first()
        if user is None:
            return jsonify(error='Should not happen ...'), 500
        return jsonify(id=user.id, email=user.email), 200
    return jsonify(error="never reach here..."), 500

Again, we make use of the module jwt to decode the JWT token included in the ‘Authorization’ header and to handle the case when the token is expired or not valid.

Let’s test this endpoint using curl. First, we need to get a valid token:

curl localhost:5000/auth/signup -H "Content-Type: application/json" -X POST -d '{"email":"[email protected]","password":"xyz"}'

Then with this token:

curl localhost:5000/user -H "Authorization: Bearer {put the token here}"

Which gives this result:

{
  "email": "[email protected]",
  "id": 1
}

Now we need to call this endpoint from the Secret controller. This is quite simple, as we just need to call the endpoint using the regular $http module. The token is automatically inserted into the header by Satellizer, so we don’t need to bother with all the details of saving the token and then putting it in the right header.

  getUserInfo();

  function getUserInfo() {
    $http.get('/user')
      .then(function (response) {
        $scope.user = response.data;
      })
      .catch(function (response) {
        console.log("getUserInfo error", response);
      })
  }

Finally, we have something truly personal in the secret page!

The code of this step is on GitHub.

Step #5: Facebook Login with Satellizer

A nice thing about Satellizer, as mentioned at the beginning, is it makes integrating social login a lot easier. At the end of this step, users can login using their Facebook account!

The first thing to do is to create an application on the Facebook developers page in order to have an application_id and a secret code. Please follow developers.facebook.com/docs/apps/register to create a Facebook developer account if you don’t have one already, and create a website app. After that, you will have the application ID and application secret.

Once the user chooses to connect with Facebook, Satellizer will send an authorization code to the endpoint /auth/facebook. With this authorization code, the back-end can retrieve an access token from the Facebook /oauth endpoint, which allows calls to the Facebook Graph API to get user information such as location, user_friends, email, etc.

We also need to keep track of whether a user account is created with Facebook or through regular signup. To do so, we add facebook_id to our User model.

facebook_id = db.Column(db.String(100)) 

The Facebook secret is configured via the environment variable FACEBOOK_SECRET, which we add to app.config.

app.config['FACEBOOK_SECRET'] = os.environ.get('FACEBOOK_SECRET')

So to launch the app.py, you should set this env variable:

FACEBOOK_SECRET={your secret} python app.py

Here is the method which handles Facebook logins. By default Satellizer will call the endpoint /auth/facebook.

@app.route('/auth/facebook', methods=['POST'])
def auth_facebook():
    access_token_url = 'https://graph.facebook.com/v2.3/oauth/access_token'
    graph_api_url = 'https://graph.facebook.com/v2.5/me?fields=id,email'

    params = {
        'client_id': request.json['clientId'],
        'redirect_uri': request.json['redirectUri'],
        'client_secret': app.config['FACEBOOK_SECRET'],
        'code': request.json['code']
    }

    # Exchange authorization code for access token.
    r = requests.get(access_token_url, params=params)
    # use json.loads instead of urlparse.parse_qsl
    access_token = json.loads(r.text)

    # Step 2. Retrieve information about the current user.
    r = requests.get(graph_api_url, params=access_token)
    profile = json.loads(r.text)

    # Step 3. Create a new account or return an existing one.
    user = User.query.filter_by(facebook_id=profile['id']).first()
    if user:
        return jsonify(token=user.token())

    u = User(facebook_id=profile['id'], email=profile['email'])
    db.session.add(u)
    db.session.commit()
    return jsonify(token=u.token())

To send a request to the Facebook server, we use the handy requests module. Now the difficult part on the back-end is done. On the front-end, adding Facebook login is quite simple. First, we need to tell Satellizer our Facebook application ID by adding this code to the app.config function:

$authProvider.facebook({
    clientId: {your facebook app id},
    // by default, the redirect URI is http://localhost:5000
    redirectUri: 'http://localhost:5000/static/index.html'
  });

To login using Facebook, we can just call:

$auth.authenticate('facebook')
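
For instance, a minimal controller sketch might wire this call to a button click. This is only an illustration: the loginWithFacebook name, the $location redirect, and the error handling are assumptions, not part of the original code.

// Hypothetical login handler in an AngularJS controller (names are illustrative)
$scope.loginWithFacebook = function () {
  $auth.authenticate('facebook')
    .then(function () {
      // Satellizer has stored the token; move on to the secret page
      $location.path('/secret');
    })
    .catch(function (response) {
      console.log('Facebook login error', response);
    });
};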

As usual, you can check the code on GitHub.

At this point, the web app is complete in terms of functionality. The user can log in or register using a regular email and password, or by using Facebook. Once logged in, the user can see his secret page.

Make a Pretty Interface

The interface is not very pretty at this point, so let's add a little bit of Bootstrap for the layout and the Angular toaster module to display error messages nicely, such as when login fails.

The code for this beautifying part can be found here.

Conclusion

This article shows a step-by-step integration of Satellizer in a (simple) AngularJS web app. With Satellizer, we can easily add other social logins such as Twitter, LinkedIn, and more. The code on the front-end stays much the same as in this article; however, the back-end varies, as the social networks' SDKs have different endpoints with different protocols. You can take a look at https://github.com/sahat/satellizer/blob/master/examples/server/python/app.py, which contains examples for Facebook, GitHub, Google, LinkedIn, Twitter, and Bitbucket. When in doubt, take a look at the documentation at https://github.com/sahat/satellizer.

This article was written by Son Nguyen Kim, a Toptal freelance developer.

9 Essential System Security Interview Questions

  1. What is a pentest?

“Pentest” is short for “penetration test”, and involves having a trusted security expert attack a system for the purpose of discovering, and repairing, security vulnerabilities before malicious attackers can exploit them. This is a critical procedure for securing a system, as the alternative method for discovering vulnerabilities is to wait for unknown agents to exploit them. By this time it is, of course, too late to do anything about them.

In order to keep a system secure, it is advisable to conduct a pentest on a regular basis, especially when new technology is added to the stack, or vulnerabilities are exposed in your current stack.

 

2. What is social engineering?

“Social engineering” refers to the use of humans as an attack vector to compromise a system. It involves fooling or otherwise manipulating human personnel into revealing information or performing actions on the attacker’s behalf. Social engineering is known to be a very effective attack strategy, since even the strongest security system can be compromised by a single poor decision. In some cases, highly secure systems that cannot be penetrated by computer or cryptographic means, can be compromised by simply calling a member of the target organization on the phone and impersonating a colleague or IT professional.

Common social engineering techniques include phishing, clickjacking, and baiting, although several other tricks are at an attacker’s disposal. Baiting with foreign USB drives was famously used to introduce the Stuxnet worm into Iran’s uranium enrichment facilities, damaging the nation’s ability to produce nuclear material.

For more information, a good read is Christopher Hadnagy’s book Social Engineering: The Art of Human Hacking.

3. You find PHP queries overtly in the URL, such as /index.php?page=userID. What would you then be looking to test?

This is an ideal situation for injection and querying. If we know that the server is using an SQL database with a PHP controller, it becomes quite easy. We would be looking to test how the server reacts to multiple different types of requests, and what it throws back, looking for anomalies and errors.

One example could be code injection. If the server is not using authentication and evaluating each user, one could simply try /index.php?arg=1;system('id') and see if the host returns unintended data.

4. You find yourself in an airport in the depths of a foreign superpower. You're out of mobile broadband and don't trust the Wi-Fi. What do you do? Further, what are the potential threats from open Wi-Fi networks?

Ideally you want all of your data to pass through an encrypted connection. This would usually entail tunneling via SSH into whatever outside service you need, over a virtual private network (VPN). Otherwise, you're vulnerable to all manner of attacks, from man-in-the-middle to captive portal exploitation, and so on.

5. What does it mean for a machine to have an “air gap”? Why are air gapped machines important?

An air gapped machine is simply one that cannot connect to any outside agents, from the highest level (the internet) down to the lowest (an intranet or even Bluetooth).

Air gapped machines are isolated from other computers, and are important for storing sensitive data or carrying out critical tasks that should be immune from outside interference. For example, a nuclear power plant should be operated from computers that are behind a full air gap. For the most part, real world air gapped computers are usually connected to some form of intranet in order to make data transfer and process execution easier. However, every connection increases the risk that outside actors will be able to penetrate the system.

 

6. You’re tasked with setting up an email encryption system for certain employees of a company. What’s the first thing you should be doing to set them up? How would you distribute the keys?

The first task is to do a full clean and make sure that the employees’ machines aren’t compromised in any way. This would usually involve something along the lines of a selective backup. One would take only the very necessary files from one computer and copy them to a clean replica of the new host. We give the replica an internet connection and watch for any suspicious outgoing or incoming activity. Then one would perform a full secure erase on the employee’s original machine, to delete everything right down to the last data tick, before finally restoring the backed up files.

The keys should then be given out by transferring them over wire through a machine or device with no other connections, importing any necessary .p7s email certificate files into a trusted email client, then securely deleting any trace of the certificate on the originating computer.

The first step, cleaning the computers, may seem long and laborious. Theoretically, if you are 100% certain that the machine is in no way affected by any malicious scripts, then of course there is no need for such a process. However in most cases, you’ll never know this for sure, and if any machine has been backdoored in any kind of way, this will usually mean that setting up secure email will be done in vain.

7. You manage to capture email packets from a sender that are encrypted through Pretty Good Privacy (PGP). What are the most viable options to circumvent this?

First, one should be considering whether to even attempt circumventing the encryption directly. Decryption is nearly impossible here unless you already happen to have the private key. Without this, your computer will be spending multiple lifetimes trying to decrypt a 2048-bit key. It’s likely far easier to simply compromise an end node (i.e. the sender or receiver). This could involve phishing, exploiting the sending host to try and uncover the private key, or compromising the receiver to be able to view the emails as plain text.

8. What makes a script fully undetectable (FUD) to antivirus software? How would you go about writing a FUD script? 

A script is FUD to an antivirus when it can infect a target machine and operate without being noticed on that machine by that AV. This usually entails a script that is simple, small, and precise.

To know how to write a FUD script, one must understand what the targeted antivirus is actually looking for. If the script contains events such as Hook_Keyboard(), File_Delete(), or File_Copy(), it's very likely it will be picked up by antivirus scanners, so these events are not used. Further, FUD scripts will often mask function names with common names used in the industry, rather than naming them things like fToPwn1337(). A talented attacker might even break up his or her files into smaller chunks, and then hex edit each individual file, thereby making it even more unlikely to be detected.

As antivirus software becomes more and more sophisticated, attackers become more sophisticated in response. Antivirus software such as McAfee is much harder to fool now than it was 10 years ago. However, there are talented hackers everywhere who are more than capable of writing fully undetectable scripts, and who will continue to do so. Virus protection is very much a cat and mouse game.

9. What is a “Man-in-the-Middle” attack?

A man-in-the-middle attack is one in which the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other. One example is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker, who even has the ability to modify the content of each message. Often abbreviated to MITM, MitM, or MITMA, and sometimes referred to as a session hijacking attack, it has a strong chance of success if the attacker can impersonate each party to the satisfaction of the other. MITM attacks pose a serious threat to online security because they give the attacker the ability to capture and manipulate sensitive information in real-time while posing as a trusted party during transactions, conversations, and the transfer of data. This is straightforward in many circumstances; for example, an attacker within reception range of an unencrypted WiFi access point can insert himself as a man-in-the-middle.

This article is from Toptal.

Persisting Data Across Page Reloads: Cookies, IndexedDB and Everything In-Between

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Suppose I’m visiting a web site. I right-click on one of the navigation links and select to open the link in a new window. What should happen? If I’m like most users, I expect the new page to have the same content as if I had clicked the link directly. The only difference should be that the page appears in a new window. But if your web site is a single-page application (SPA), you may see weird results unless you’ve carefully planned for this case.

Recall that in an SPA, a typical navigation link is often a fragment identifier, starting with a hash mark (#). Clicking the link directly does not reload the page, so all the data stored in JavaScript variables are retained. But if I open the link in a new tab or window, the browser does reload the page, reinitializing all the JavaScript variables. So any HTML elements bound to those variables will display differently, unless you’ve taken steps to preserve that data somehow.

There’s a similar issue if I explicitly reload the page, such as by hitting F5. You may think I shouldn’t ever need to hit F5, because you’ve set up a mechanism to push changes from the server automatically. But if I’m a typical user, you can bet I’m still going to reload the page. Maybe my browser seems to have repainted the screen incorrectly, or I just want to be certain I have the very latest stock quotes.

APIs May Be Stateless, Human Interaction Is Not

Unlike an internal request via a RESTful API, a human user’s interaction with a web site is not stateless. As a web user, I think of my visit to your site as a session, almost like a phone call. I expect the browser to remember data about my session, in the same way that when I call your sales or support line, I expect the representative to remember what was said earlier in the call.

An obvious example of session data is whether I’m logged in, and if so, as which user. Once I go through a login screen, I should be able to navigate freely through the user-specific pages of the site. If I open a link in a new tab or window and I’m presented with another login screen, that’s not very user friendly.

Another example is the contents of the shopping cart in an e-commerce site. If hitting F5 empties the shopping cart, users are likely to get upset.

In a traditional multi-page application written in PHP, session data would be stored in the $_SESSION superglobal array. But in an SPA, it needs to be somewhere on the client side. There are four main options for storing session data in an SPA:

  • Cookies
  • Fragment identifier
  • Web storage
  • IndexedDB

Four Kilobytes of Cookies

Cookies are an older mechanism for storing data in the browser. They were originally intended to store data received from the server in one request and send it back to the server in subsequent requests. But from JavaScript, you can use cookies to store just about any kind of data, up to a size limit of 4 KB per cookie. AngularJS offers the ngCookies module for managing cookies. There is also a js-cookie package that provides similar functionality in any framework.

Keep in mind that any cookie you create will be sent to the server on every request, whether it’s a page reload or an Ajax request. But if the main session data you need to store is the access token for the logged-in user, you want this sent to the server on every request anyway. It’s natural to try to use this automatic cookie transmission as the standard means of specifying the access token for Ajax requests.

You may argue that using cookies in this manner is incompatible with RESTful architecture. But in this case it is just fine as each request via the API is still stateless, having some inputs and some outputs. It’s just that one of the inputs is being sent in a funny way, via a cookie. If you can arrange for the login API request to send the access token back in a cookie also, then your client side code hardly needs to deal with cookies at all. Again, it’s just another output from the request being returned in an unusual way.

Cookies offer one advantage over web storage. You can provide a "keep me logged in" checkbox on the login form. With these semantics, I expect that if I leave it unchecked, I will remain logged in if I reload the page or open a link in a new tab or window, but I'm guaranteed to be logged out once I close the browser. This is an important safety feature if I'm using a shared computer. As we'll see later, web storage does not support this behavior.
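
As a rough sketch of those semantics using the js-cookie API mentioned above (the cookie name and the saveToken/clearToken helpers are examples, not part of any framework):

// Minimal sketch, assuming the js-cookie library is loaded as Cookies
function saveToken(token, rememberMe) {
  if (rememberMe) {
    // persistent cookie: survives closing the browser, here for 14 days
    Cookies.set('access_token', token, { expires: 14 });
  } else {
    // session cookie: no expiration date, deleted when the browser is closed
    Cookies.set('access_token', token);
  }
}

function clearToken() {
  Cookies.remove('access_token');
}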

So how might this approach work in practice? Suppose you’re using LoopBack on the server side. You’ve defined a Person model, extending the built-in User model, adding the properties you want to maintain for each user. You’ve configured the Person model to be exposed over REST. Now you need to tweak server/server.js to achieve the desired cookie behavior. Below is server/server.js, starting from what was generated by slc loopback, with the marked changes:

var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

app.start = function() {
  // start the web server
  return app.listen(function() {
    app.emit('started');
    var baseUrl = app.get('url').replace(/\/$/, '');
    console.log('Web server listening at: %s', baseUrl);
    if (app.get('loopback-component-explorer')) {
      var explorerPath = app.get('loopback-component-explorer').mountPath;
      console.log('Browse your REST API at %s%s', baseUrl, explorerPath);
    }
  });
};

// start of first change
app.use(loopback.cookieParser('secret'));
// end of first change

// Bootstrap the application, configure models, datasources and middleware.
// Sub-apps like REST API are mounted via boot scripts.
boot(app, __dirname, function(err) {
  if (err) throw err;

  // start of second change
  app.remotes().after('Person.login', function (ctx, next) {
    if (ctx.result.id) {
      var opts = {signed: true};
      if (ctx.req.body.rememberme !== false) {
        opts.maxAge = 1209600000;
      }
      ctx.res.cookie('authorization', ctx.result.id, opts);
    }
    next();
  });
  app.remotes().after('Person.logout', function (ctx, next) {
    ctx.res.cookie('authorization', '');
    next();
  });
  // end of second change

  // start the server if `$ node server.js`
  if (require.main === module)
    app.start();
});

The first change configures the cookie parser to use ‘secret’ as the cookie signing secret, thereby enabling signed cookies. You need to do this because although LoopBack looks for an access token in either of the cookies ‘authorization’ or ‘access_token’, it requires that such a cookie be signed. Actually, this requirement is pointless. Signing a cookie is intended to ensure that the cookie hasn’t been modified. But there’s no danger of you modifying the access token. After all, you could have sent the access token in unsigned form, as an ordinary parameter. Thus, you don’t need to worry about the cookie signing secret being hard to guess, unless you’re using signed cookies for something else.

The second change sets up some postprocessing for the Person.login and Person.logout methods. For Person.login, you want to take the resulting access token and send it to the client as the signed cookie ‘authorization’ also. The client may add one more property to the credentials parameter, rememberme, indicating whether to make the cookie persistent for 2 weeks. The default is true. The login method itself will ignore this property, but the postprocessor will check it.

For Person.logout, you want to clear out this cookie.

You can see the results of these changes right away in the StrongLoop API Explorer. Normally after a Person.login request, you would have to copy the access token, paste it into the form at the top right, and click Set Access Token. But with these changes, you don’t have to do any of that. The access token is automatically saved as the cookie ‘authorization’, and sent back on each subsequent request. When the Explorer is displaying the response headers from Person.login, it omits the cookie, because JavaScript is never allowed to see Set-Cookie headers. But rest assured, the cookie is there.

On the client side, on a page reload you would check whether the cookie 'authorization' exists. If so, you need to update your record of the current userId. Probably the easiest way to do this is to store the userId in a separate cookie on successful login, so you can retrieve it on a page reload.
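
A minimal sketch of that reload check, again assuming the js-cookie library and an illustrative 'userId' cookie set at login:

// Sketch only: restore the current user ID after a page reload
var userId = Cookies.get('userId');            // undefined if the cookie does not exist
$scope.currentUserId = userId ? userId : null; // null means the user is not logged in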

The Fragment Identifier

As I’m visiting a web site that has been implemented as an SPA, the URL in my browser’s address bar might look something like “https://example.com/#/my-photos/37”. The fragment identifier portion of this, “#/my-photos/37”, is already a collection of state information that could be viewed as session data. In this case, I’m probably viewing one of my photos, the one whose ID is 37.

You may decide to embed other session data within the fragment identifier. Recall that in the previous section, with the access token stored in the cookie ‘authorization’, you still needed to keep track of the userId somehow. One option is to store it in a separate cookie. But another approach is to embed it in the fragment identifier. You could decide that while I’m logged in, all the pages I visit will have a fragment identifier beginning with “#/u/XXX”, where XXX is the userId. So in the previous example, the fragment identifier might be “#/u/59/my-photos/37” if my userId is 59.

Theoretically, you could embed the access token itself in the fragment identifier, avoiding any need for cookies or web storage. But that would be a bad idea. My access token would then be visible in the address bar. Anyone looking over my shoulder with a camera could take a snapshot of the screen, thereby gaining access to my account.

One final note: it is possible to set up an SPA so that it doesn’t use fragment identifiers at all. Instead it uses ordinary URLs like “http://example.com/app/dashboard” and “http://example.com/app/my-photos/37”, with the server configured to return the top level HTML for your SPA in response to a request for any of these URLs. Your SPA then does its routing based on the path (e.g. “/app/dashboard” or “/app/my-photos/37”) instead of the fragment identifier. It intercepts clicks on navigation links, and uses History.pushState() to push the new URL, then proceeds with routing as usual. It also listens for popstate events to detect the user clicking the back button, and again proceeds with routing on the restored URL. The full details of how to implement this are beyond the scope of this article. But if you use this technique, then obviously you can store session data in the path instead of the fragment identifier.
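
The skeleton of such path-based routing might look something like the sketch below; handleRoute is an assumed application function that renders the right view, and the data-spa-link attribute is just an illustrative way to mark internal links.

// Sketch of path-based SPA routing with the History API
function navigateTo(path) {
  history.pushState(null, '', path);   // update the address bar without reloading the page
  handleRoute(path);                   // assumed application routing function
}

document.addEventListener('click', function (event) {
  var link = event.target.closest ? event.target.closest('a[data-spa-link]') : null;
  if (!link) return;                   // only intercept links we marked ourselves
  event.preventDefault();
  navigateTo(link.getAttribute('href'));
});

window.addEventListener('popstate', function () {
  // back/forward button: route again based on the restored URL
  handleRoute(window.location.pathname);
});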

Web Storage

Web storage is a mechanism for JavaScript to store data within the browser. Like cookies, web storage is separate for each origin. Each stored item has a name and a value, both of which are strings. But web storage is completely invisible to the server, and it offers much greater storage capacity than cookies. There are two types of web storage: local storage and session storage.

An item of local storage is visible across all tabs of all windows, and persists even after the browser is closed. In this respect, it behaves somewhat like a cookie with an expiration date very far in the future. Thus, it is suitable for storing an access token in the case where the user has checked “keep me logged in” on the login form.

An item of session storage is only visible within the tab where it was created, and it disappears when that tab is closed. This makes its lifetime very different from that of any cookie. Recall that a session cookie is still visible across all tabs of all windows.
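
To make the difference concrete, here is a small sketch of the "keep me logged in" decision using the two storage areas (the key names are just examples):

// Sketch: choose a storage area based on the "keep me logged in" checkbox
function storeSession(token, userId, rememberMe) {
  var storage = rememberMe ? localStorage : sessionStorage;
  storage.setItem('accessToken', token);      // survives page reloads; localStorage also survives closing the browser
  storage.setItem('currentUserId', userId);
}

function loadSession() {
  // check both areas, since we don't know which one was used at login
  return {
    token: localStorage.getItem('accessToken') || sessionStorage.getItem('accessToken'),
    userId: localStorage.getItem('currentUserId') || sessionStorage.getItem('currentUserId')
  };
}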

If you use the AngularJS SDK for LoopBack, the client side will automatically use web storage to save both the access token and the userId. This happens in the LoopBackAuth service in js/services/lb-services.js. It will use local storage, unless the rememberMe parameter is false (normally meaning the “keep me logged in” checkbox was unchecked), in which case it will use session storage.

The result is that if I log in with “keep me logged in” unchecked, and I then open a link in a new tab or window, I won’t be logged in there. Most likely I’ll see the login screen. You can decide for yourself whether this is acceptable behavior. Some might consider it a nice feature, where you can have several tabs, each logged in as a different user. Or you might decide that hardly anyone uses shared computers any more, so you can just omit the “keep me logged in” checkbox altogether.

So how would the session data handling look if you decide to go with the AngularJS SDK for LoopBack? Suppose you have the same situation as before on the server side: you’ve defined a Person model, extending the User model, and you’ve exposed the Person model over REST. You won’t be using cookies, so you won’t need any of the changes described earlier.

On the client side, somewhere in your outermost controller, you probably have a variable like $scope.currentUserId which holds the userId of the currently logged in user, or null if the user is not logged in. Then to handle page reloads properly, you just include this statement in the constructor function for that controller:

$scope.currentUserId = Person.getCurrentId();

It’s that easy. Add ‘Person’ as a dependency of your controller, if it isn’t already.

IndexedDB

IndexedDB is a newer facility for storing large amounts of data in the browser. You can use it to store data of any JavaScript type, such as an object or array, without having to serialize it. All requests against the database are asynchronous, so you get a callback when the request is completed.

You might use IndexedDB to store structured data that’s unrelated to any data on the server. An example might be a calendar, a to-do list, or saved games that are played locally. In this case, the application is really a local one, and your web site is just the vehicle for delivering it.

At present, Internet Explorer and Safari only have partial support for IndexedDB. Other major browsers support it fully. One serious limitation at the moment, though, is that Firefox disables IndexedDB entirely in private browsing mode.

As a concrete example of using IndexedDB, let’s take the sliding puzzle application by Pavol Daniš, and tweak it to save the state of the first puzzle, the Basic 3×3 sliding puzzle based on the AngularJS logo, after each move. Reloading the page will then restore the state of this first puzzle.

I’ve set up a fork of the repository with these changes, all of which are in app/js/puzzle/slidingPuzzle.js. As you can see, even a rudimentary usage of IndexedDB is quite involved. I’ll just show the highlights below. First, the function restore gets called during page load, to open the IndexedDB database:

/*
 * Tries to restore game
 */
this.restore = function(scope, storekey) {
    this.storekey = storekey;
    if (this.db) {
        this.restore2(scope);
    }
    else if (!window.indexedDB) {
        console.log('SlidingPuzzle: browser does not support indexedDB');
        this.shuffle();
    }
    else {
        var self = this;
        var request = window.indexedDB.open('SlidingPuzzleDatabase');
        request.onerror = function(event) {
            console.log('SlidingPuzzle: error opening database, ' + request.error.name);
            scope.$apply(function() { self.shuffle(); });
        };
        request.onupgradeneeded = function(event) {
            event.target.result.createObjectStore('SlidingPuzzleStore');
        };
        request.onsuccess = function(event) {
            self.db = event.target.result;
            self.restore2(scope);
        };
    }
};

The request.onupgradeneeded event handles the case where the database doesn’t exist yet. In this case, we create the object store.

Once the database is open, the function restore2 is called, which looks for a record with a given key (which will actually be the constant ‘Basic’ in this case):

/*
 * Tries to restore game, once database has been opened
 */
this.restore2 = function(scope) {
    var transaction = this.db.transaction('SlidingPuzzleStore');
    var objectStore = transaction.objectStore('SlidingPuzzleStore');
    var self = this;
    var request = objectStore.get(this.storekey);
    request.onerror = function(event) {
        console.log('SlidingPuzzle: error reading from database, ' + request.error.name);
        scope.$apply(function() { self.shuffle(); });
    };
    request.onsuccess = function(event) {
        if (!request.result) {
            console.log('SlidingPuzzle: no saved game for ' + self.storekey);
            scope.$apply(function() { self.shuffle(); });
        }
        else {
            scope.$apply(function() { self.grid = request.result; });
        }
    };
}

If such a record exists, its value replaces the grid array of the puzzle. If there is any error in restoring the game, we just shuffle the tiles as before. Note that grid is a 3×3 array of tile objects, each of which is fairly complex. The great advantage of IndexedDB is that you can store and retrieve such values without having to serialize them.

We use $apply to inform AngularJS that the model has been changed, so the view will be updated appropriately. This is because the update is happening inside a DOM event handler, so AngularJS wouldn’t otherwise be able to detect the change. Any AngularJS application using IndexedDB will probably need to use $apply for this reason.

After any action that would change the grid array, such as a move by the user, the function save is called which adds or updates the record with the appropriate key, based on the updated grid value:

/*
 * Tries to save game
 */
this.save = function() {
    if (!this.db) {
        return;
    }
    var transaction = this.db.transaction('SlidingPuzzleStore', 'readwrite');
    var objectStore = transaction.objectStore('SlidingPuzzleStore');
    var request = objectStore.put(this.grid, this.storekey);
    request.onerror = function(event) {
        console.log('SlidingPuzzle: error writing to database, ' + request.error.name);
    };
    request.onsuccess = function(event) {
        // successful, no further action needed
    };
}

The remaining changes are to call the above functions at appropriate times. You can review the commit showing all of the changes. Note that we are calling restore only for the basic puzzle, not for the three advanced puzzles. We exploit the fact that the three advanced puzzles have an api attribute, so for those we just do the normal shuffling.

What if we wanted to save and restore the advanced puzzles also? That would require some restructuring. In each of the advanced puzzles, the user can adjust the image source file and the puzzle dimensions. So we’d have to enhance the value stored in IndexedDB to include this information. More importantly, we’d need a way to update them from a restore. That’s a bit much for this already lengthy example.

Conclusion

In most cases, web storage is your best bet for storing session data. It’s fully supported by all major browsers, and it offers much greater storage capacity than cookies.

You would use cookies if your server is already set up to use them, or if you need the data to be accessible across all tabs of all windows, but you also want to ensure it will be deleted when the browser is closed.

You already use the fragment identifier to store session data that’s specific to that page, such as the ID of the photo the user is looking at. While you could embed other session data in the fragment identifier, this doesn’t really offer any advantage over web storage or cookies.

Using IndexedDB is likely to require a lot more coding than any of the other techniques. But if the values you’re storing are complex JavaScript objects that would be difficult to serialize, or if you need a transactional model, then it may be worthwhile. Source: Toptal.

The Art of War Applied To Software Development

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

If you work in the software industry, it’s likely that you have heard about the divide and conquer design paradigm, which basically consists of recursively splitting a problem into two or more sub-problems (divide), until these become simple enough to be solved directly (conquer).

What you might not know is that this paradigm originates from an old political strategy (the name is derived from the Latin saying divide et impera) that suggests it is possible to maintain control over one’s subordinates or subjects by encouraging dissent between them.

This strategy has been used by countless politicians and military leaders throughout history, such as Julius Caesar (who used it during the Gallic Wars to defeat the militarily strong Gauls) and Napoleon (the French artillery expert would divide the enemy troops so no portion was stronger than his own troops, and then disrupt their communications, impeding enemy efforts to coordinate and execute attacks).

The Art Of War: Ancient Principles Applied To Development

However, the divide and conquer rule is not the only political strategy that can be applied to software development. Although politics and warfare have little to do with software development, just like politicians and generals, developers must lead subordinates, coordinate efforts between teams, find the best strategies to resolve problems, and administer resources.

Sun Tzu’s principles and teachings have practical applications in politics, business, sports, and software development.

The Art of War is an ancient military treatise written in the fifth century B.C. and attributed to Sun Tzu, an ancient Chinese military strategist, whose theories had a profound influence on both Eastern and Western philosophy.

Despite its age, the text is still included in the syllabus at many military schools in East Asia and it’s listed as a recommended reading in some military academies in the West. The text is divided into 13 chapters, each one devoted to a different aspect of warfare.

However, in addition to warfare, Sun Tzu’s principles and teachings have practical applications in politics, business, sports, and, believe it or not, software development. In fact, you might just be applying some of these principles in your daily routine, without even knowing their origins.

Detailed below, you will find a brief list of basic tactics and tips explained in the Art of War. They can probably be applied to your job in the software industry, or any of a number of other industries.

Time Is Crucial In Any Campaign

Chapter II, paragraph 2

“When you engage in actual fighting, if victory is long in coming, then men’s weapons will grow dull and their ardor will be damped.”

This principle can be applied to software development, as a rule describing the relationship between the length of development cycles and the developer’s morale.

If a group of developers work on the same projects for months, with no clear goals or end in sight, they may become frustrated and their productivity may decline.

Divide your development roadmap into easily achievable goals and milestones. It’s good for morale.

Software development is an intellectual endeavour, so motivation is the main fuel for productivity. Working every day without perceiving that your work is generating real results can be very demotivating.

As indicated in some agile methodologies, the development roadmap should be divided into several goals and milestones that the team can achieve in short timeframes, giving them a sense of progress and achievement.

Chapter II, paragraph 18

“In war, then, let your great object be victory, not lengthy campaigns.”

This phrase can be interpreted in two ways:

First, it can be seen as a precursor of the UNIX philosophy: Write programs that do one thing and do it well. When developing software, you must always keep in mind the main objective of the program, the key feature that it provides, or the biggest problem that it solves, and ensure proper implementation.

Sometimes you might get inspired and think of a really cool feature to add, but do not forget that applications that have a lot of infrequently used features have a disparaging name: bloatware.

Second, the statement may also be considered as a precursor for one of the lean software development principles: Deliver as fast as possible.

The sooner you deliver software without major defects, the sooner you will get feedback from the client, and you will be able to incorporate the changes into the next iteration.

If, on the other hand, you deliver non-working software, you will miss out on valuable feedback, because clients won't get a chance to test it properly. This will make the next stage of development more difficult, or impossible in situations where your next iteration depends on customer feedback.

No Leadership, No Results

Chapter III, paragraph 11

“Now the general is the bulwark of the State; if the bulwark is complete at all points, the State will be strong; if the bulwark is defective, the State will be weak.”

This quote describes the importance of the role of the manager in a development team: the success of a project depends on the strength of all people involved, and the manager is the bulwark of the project. Responsibility starts at the top.

Responsibility starts at the top. If your team lead is bad, no amount of talented engineers will help.

Even though developers frequently work alone (each sitting behind a computer, with limited communication with coworkers), that does not mean that they don't need good leadership. Project managers are in charge of keeping the team on track, ensuring effective communication and dispute resolution, and leaders, obviously, define the priorities of the project (among other tasks), so their role should not be underestimated. Neither should their responsibility if something goes wrong. Imagine what would happen to a military leader whose unit failed to perform its duty on the field of battle.

A team can produce great software even if it has a few bad apples in development positions, but that’s unlikely to happen if the project manager is the bad apple, no matter how many rockstar developers the team has.

Chapter VI, paragraph 28

“Do not repeat the tactics which have gained you one victory, but let your methods be regulated by the infinite variety of circumstances.”

Sometimes, when starting a project, it's tempting to use the same set of technologies we used in previously successful projects (the same programming language, the same libraries, the same server, etc.). However, unless the requirements of the new project are exactly the same as those of previous ones, this might be the wrong approach.

In programming, as in most domains, the panacea (a supposed remedy capable of curing all diseases) does not exist. There is no single combination of technologies that you can use for solving all problems; each technology has its upsides and downsides.

Of course, learning a new programming language or using an unknown API might initially be expensive, but in the long term the quality of the software will be superior and you will become a better developer.

Chapter XIII, paragraph 27

“Hence it is only the enlightened ruler and the wise general who will use the highest intelligence of the army for purposes of spying, and thereby they achieve great results. Spies are a most important element in war, because on them depends an army’s ability to move.”

This phrase may be interpreted as pointing to the importance of using monitoring tools and logging libraries during the maintenance phase.

Although sometimes clients might not think so, development does not end when you get a stable and fully tested release. Software is always evolving, either by fixing bugs, adding new features or improving efficiency.

And there is no better source of information for knowing what changes to make than having spies monitoring the software in production environments, checking which features are used the most, the most common errors and the lengthiest operations.

Error reports, logging entries and usage data are fundamental for detecting bugs, identifying bottlenecks and other issues since it is not always possible to reproduce the same conditions in controlled testing environments.

Teamwork And Motivation

Chapter X, paragraph 24

“He who advances without seeking fame, Who retreats without escaping blame, He whose one aim is to protect his people and serve his lord, The man is a jewel of the Realm.”

Basically, this is the ancient Chinese version of “there’s no I in team”. It is more important to work together with others rather than to pursue personal gain.

Software development is a complex activity that requires developers to work effectively as a team. A good developer is not the one who fixes the most bugs, implements the most features or finishes assignments ahead of schedule; a good developer is the one who helps the team reach its goals.

Teamwork wins battles. Remember, the best developers are inspiring individuals who help other team members reach their objectives.

Claiming credit for everything you’ve done, not recognizing your errors or blaming others for them, or calling yourself a code ninja might fool some inexperienced managers and might even get you a raise, but you will become a counterproductive member of your team.

Chapter VII, paragraph 21

“Ponder and deliberate before you make a move.”

This phrase indicates the importance of team development meetings, such as those proposed by agile methodologies.

When working on a team, it is important to discuss any major changes before implementing them. It doesn’t matter if you are the team leader, or if you are the person with the most experience of the subject, you should always talk with, or at least inform, the rest of the team.

Remember that other developers could give you insights into unfamiliar parts of the software. It also means the team can start implementing the changes faster than expected, because everyone will be fully aware of the effects of those changes.

Chapter X, paragraph 25

“Regard your soldiers as your children, and they will follow you into the deepest valleys; look upon them as your own beloved sons, and they will stand by you even unto death.”

This quote indicates the importance of motivation, a principle of management that is sometimes forgotten by managers and team leaders. Motivated developers will write better code, work faster, commit fewer errors and be more willing to put in extra hours.

Motivation must be generated by managers, by taking genuine interest in their subordinates, listening to them, caring about their work-life balance, building positive work environments and caring about their career paths.

Also, you should not mistake motivation for remuneration. Recent studies demonstrate that money does not motivate most workers; money is mostly good at attracting and retaining employees, but not at making them happy in their jobs. So raises and promotions should not be seen as motivational tools.

Thinking Outside The Box

Chapter V, paragraph 7, 8 and 9

“There are not more than five musical notes, yet the combinations of these five give rise to more melodies than can ever be heard.”

“There are not more than five primary colors, yet in combination they produce more hues than can ever be seen.”

“There are not more than five cardinal tastes, yet combinations of them yield more flavors than can ever be tasted.”

One of the good things about programming is that the possibilities are endless; you can develop basically whatever you want (at least, as long as it is not an NP-complete problem).

Mobile apps, websites, games, desktop applications… if you know programming, all of them are within your reach.

If you’re a talented developer, you need to think outside the box. The box is there to prevent incompetent people from wrecking stuff. It’s not for you.

Chapter III, paragraph 1

“In the practical art of war, the best thing of all is to take the enemy’s country whole and intact; to shatter and destroy it is not so good. So, too, it is better to capture an army entire than to destroy it, to capture a regiment, a detachment or a company entire than to destroy them.”

When working on a project with a large code base, it is common to find modules or sections of code that have been implemented with bad practices or by using deprecated libraries. Although it might be tempting to erase (or destroy) this code, it might not be the best idea for several reasons:

  • Legacy code is not necessarily bad, sometimes it is good code that was written when other methodologies and technologies were considered the way to go. However, just because it is old doesn’t mean that it is not working.
  • You might lose time fixing code that still works instead of focusing on fixing other, more critical parts of the code.
  • Unless you are really sure of what you are doing, replacing a section of code that works means you are risking introducing new errors or bugs.

This does not mean that the phrase “If it ain’t broke, don’t fix it” is a good strategy, but that every project has priorities, goals and time constraints. So, if you find code that could be improved, discuss it with the rest of the team or with the project manager in order to figure out when to optimize it.

Chapter VIII, paragraph 3

“There are roads which must not be followed, armies which must not be attacked, towns which must not be besieged, positions which must not be contested, commands of the sovereign which must not be obeyed.”

Even though it does not say so directly, we could interpret this principle as a warning to avoid anti-patterns.

Although using an anti-pattern may resolve a short-term issue, you should remember that in the long-term it is going to be counter-productive. So, no matter how much time you save, how many bugs you fix or how convenient it is for you, avoid them.

Still, there are times you may be tempted to use an anti-pattern to resolve an urgent task, promising yourself you will implement a proper fix when you have more time, but remember one of Murphy’s laws: All quick fixes become permanent changes.

Conclusion

Although developing software is different from commanding soldiers in war or leading a country, all of them involve solving problems that require teamwork, good leadership, efficiency and long-term solutions.

However, the Art of War is not the only book which contains principles that may be applied to software development. Niccolò Machiavelli's The Prince is another example.

In fact, here is a list of quotes from Machiavelli that are still relevant. Try guessing which are the corresponding principles in the world of software development.

  1. The lion cannot protect himself from traps, and the fox cannot defend himself from wolves. One must therefore be a fox to recognize traps, and a lion to frighten wolves.
  2. Never attempt to win by force what can be won by deception.
  3. Never was anything great achieved without danger.
  4. Whosoever desires constant success must change his conduct with the times.
  5. Men in general judge more from appearances than from reality. All men have eyes, but few have the gift of penetration.
  6. He who wishes to be obeyed must know how to command.
  7. Wisdom consists of knowing how to distinguish the nature of trouble, and in choosing the lesser evil.
  8. There is no avoiding war; it can only be postponed to the advantage of your enemy.
  9. Nature creates few men brave; industry and training makes many.

Avoid the 10 Most Common Mistakes Web Developers Make: A Tutorial for Developers

The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.

Since the term the World Wide Web was coined back in 1990, web application development has evolved from serving static HTML pages to completely dynamic, complex business applications.

Today we have thousands of digital and printed resources that provide step-by-step instructions about developing all kinds of different web applications. Development environments are “smart” enough to catch and fix many mistakes that early developers battled with regularly. There are even many different development platforms that easily turn simple static HTML pages into highly interactive applications.

All of these development patterns, practices, and platforms share common ground, and they are all prone to similar web development issues caused by the very nature of web applications.

The purpose of these web development tips is to shed light on some of the common mistakes made in different stages of the web development process and to help you become a better developer. I have touched on a few general topics that are common to virtually all web developers such as validation, security, scalability, and SEO. You should of course not be bound by the specific examples I’ve described in this guide, as they are listed only to give you an idea of the potential problems you might encounter.

Common mistake #1: Incomplete input validation

Validating user input on client and server side is simply a must do! We are all aware of the sage advice “do not trust user input” but, nevertheless, mistakes stemming from validation happen all too often.

One of the most common consequences of this mistake is SQL injection, which appears in the OWASP Top 10 year after year.

Remember that most front-end development frameworks provide out-of-the-box validation rules that are incredibly simple to use. Additionally, most major back-end development platforms use simple annotations to assure that submitted data are adhering to expected rules. Implementing validation might be time consuming, but it should be part of your standard coding practice and never set aside.
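
As one hedged illustration of server-side validation in a Node.js handler (the validator package and the database driver's placeholder syntax are assumptions here, not requirements of any particular framework):

// Sketch: validate input, then query with placeholders instead of string concatenation
var validator = require('validator');   // assumed third-party validation library

function findUserByEmail(db, email, callback) {
  if (!validator.isEmail(email)) {
    return callback(new Error('Invalid email address'));
  }
  // the driver escapes the value, so user input never becomes part of the SQL text
  db.query('SELECT id, email FROM users WHERE email = ?', [email], callback);
}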

Common mistake #2: Authentication without proper Authorization

Before we proceed, let’s make sure we are aligned on these two terms. As stated in the 10 Most Common Web Security Vulnerabilities:

Authentication: Verifying that a person is (or at least appears to be) a specific user, since he/she has correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).

Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.

Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do.

Let me demonstrate this issue with an example:

Consider that your browser holds the currently logged-in user's information in an object similar to the following:

{
    username:'elvis',
    role:'singer',
    token:'123456789'
}

When doing a password change, your application makes the POST:

POST /changepassword/:username/:newpassword

In your /changepassword method, you verify that the user is logged in and the token has not expired. Then you find the user profile based on the :username parameter, and you change your user's password.

So, you validated that your user is properly logged in, and then you executed his request, thus changing his password. The process seems OK, right? Unfortunately, the answer is NO!

At this point it is important to verify that the user executing the action and the user whose password is changed are the same. Any information stored on the browser can be tampered with, and any advanced user could easily update username:'elvis' to username:'Administrator' without using anything else but built-in browser tools.

So in this case, we just took care of Authentication making sure that the user provided security credentials. We can even add validation that /changepassword method can only be executed by Authenticated users. However, this is still not enough to protect your users from malicious attempts.

You need to make sure that you verify the actual requestor and the content of the request within your /changepassword method, and implement proper Authorization of the request, making sure that the user can change only her own data.
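
A minimal server-side sketch of that check might look like the following Express-style handler; the requireAuth middleware and the User.setPassword helper are assumptions for illustration only (and, in practice, the new password should travel in the request body rather than the URL):

// Sketch: authorization check inside the password-change handler
app.post('/changepassword/:username/:newpassword', requireAuth, function (req, res) {
  // requireAuth (assumed) has already verified the token and populated req.user
  if (req.user.username !== req.params.username) {
    // authenticated, but not authorized to change someone else's password
    return res.status(403).json({ error: 'Forbidden' });
  }
  User.setPassword(req.params.username, req.params.newpassword, function (err) {
    if (err) return res.status(500).json({ error: 'Could not change password' });
    res.json({ ok: true });
  });
});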

Authentication and Authorization are two sides of the same coin. Never treat them separately.

Common mistake #3: Not ready to scale

In today’s world of high speed development, startup accelerators, and instant global reach of great ideas, having your MVP (Minimum Viable Product) out in the market as soon as possible is a common goal for many companies.

However, this constant time pressure is causing even good web development teams to often overlook certain issues. Scaling is often one of those things teams take for granted. The MVP concept is great, but push it too far, and you’ll have serious problems. Unfortunately, selecting a scalable database and web server and separating all application layers on independent scalable servers is not enough. There are many details you need to think about if you wish to avoid rewriting significant parts of your application later – which becomes a major web development problem.

For example, say that you choose to store uploaded profile pictures of your users directly on a web server. This is a perfectly valid solution–files are quickly accessible to the application, file handling methods are available in every development platform, and you can even serve these images as static content, which means minimum load on your application.

But what happens when your application grows, and you need to use two or more web servers behind a load balancer? Even though you nicely scaled your database storage, session state servers, and web servers, your application scalability fails because of a simple thing like profile images. Thus, you need to implement some kind of file synchronization service (that will have a delay and will cause temporary 404 errors) or another workaround to assure that files are spread across your web servers.

What you needed to do to avoid the problem in the first place was just use a shared file storage location, database, or any other remote storage solution. It would have probably cost a few extra hours of work to have it all implemented, but it would have been worth the trouble.
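
As a rough sketch of the shared-storage approach, assuming the AWS SDK for JavaScript and an existing S3 bucket (any shared or remote storage service would do just as well):

// Sketch: store uploaded profile pictures in shared remote storage instead of the local disk
var AWS = require('aws-sdk');            // assumed dependency
var s3 = new AWS.S3();

function saveProfilePicture(userId, fileBuffer, callback) {
  s3.upload({
    Bucket: 'my-app-profile-pictures',   // example bucket name
    Key: 'profiles/' + userId + '.jpg',
    Body: fileBuffer
  }, callback);                          // every web server behind the load balancer now sees the same file
}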

Common mistake #4: Wrong or missing SEO

The root cause of incorrect or missing SEO best practices on web sites is misinformed “SEO specialists”. Many web developers believe that they know enough about SEO and that it is not especially complex, but that’s just not true. SEO mastery requires significant time spent researching best practices and the ever-changing rules about how Google, Bing, and Yahoo index the web. Unless you constantly experiment and have accurate tracking + analysis, you are not a SEO specialist, and you should not claim to be one.

Furthermore, SEO is too often postponed as some activity that is done at the end. This comes at a high price of web development issues. SEO is not just related to setting good content, tags, keywords, meta-data, image alt tags, site map, etc. It also includes eliminating duplicate content, having crawlable site architecture, efficient load times, intelligent back linking, etc.

Like with scalability, you should think about SEO from the moment you start building your web application, or you might find that completing your SEO implementation project means rewriting your whole system.

Common mistake #5: Time or processor consuming actions in request handlers

One of the best examples of this mistake is sending email based on a user action. Too often developers think that making an SMTP call and sending a message directly from the user request handler is the solution.

Let’s say you created an online book store, and you expect to start with a few hundred orders daily. As part of your order intake process, you send confirmation emails each time a user posts an order. This will work without problem at first, but what happens when you scale your system, and you suddenly get thousands of requests sending confirmation emails? You either get SMTP connection timeouts, quota exceeded, or your application response time degrades significantly as it is now handling emails instead of users.

Any time or processor consuming action should be handled by an external process while you release your HTTP requests as soon as possible. In this case, you should have an external mailing service that is picking up orders and sending notifications.
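
A hedged sketch of that idea is shown below; the createOrder function and the emailQueue client are hypothetical stand-ins for your application code and whatever job queue, message broker, or polled database table you use.

// Sketch: enqueue the email instead of sending it inside the request handler
app.post('/orders', function (req, res) {
  createOrder(req.body, function (err, order) {   // createOrder is assumed application code
    if (err) return res.status(500).json({ error: 'Could not create order' });

    // hand the slow work to an external process and return immediately
    emailQueue.enqueue({                           // emailQueue is a hypothetical queue client
      to: order.customerEmail,
      template: 'order-confirmation',
      orderId: order.id
    });

    res.status(201).json(order);
  });
});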

Common mistake #6: Not optimizing bandwidth usage

Most development and testing takes place in a local network environment. So when you are downloading five background images of 3 MB or more each, you might not notice a problem on the 1 Gbit connection in your development environment. But when your users start loading a 15 MB home page over 3G connections on their smartphones, you should prepare yourself for a list of complaints and problems.

Optimizing your bandwidth usage could give you a great performance boost, and to gain this boost you probably only need a couple of tricks. There are a few things that many good web developers do by default, including the following (a server-side sketch follows the list):

  1. Minification of all JavaScript
  2. Minification of all CSS
  3. Server side HTTP compression
  4. Optimization of image size and resolution
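
On the server side, point 3, plus long-lived caching of your already-minified assets, can be sketched with Express and the compression middleware (the directory and cache lifetime are just examples):

    import express from "express";
    import compression from "compression";

    const app = express();
    app.use(compression());                               // gzip/deflate responses (server-side HTTP compression)
    app.use(express.static("public", { maxAge: "7d" }));  // let browsers cache minified JS/CSS and optimized images
    app.listen(3000);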

Common mistake #7: Not developing for different screen sizes

Responsive design has been a big topic in the past few years. The proliferation of smartphones with different screen resolutions has brought many new ways of accessing online content, which also comes with a host of web development issues. The number of website visits coming from smartphones and tablets grows every day, and this trend is accelerating.

In order to ensure seamless navigation and access to website content, you must enable users to access it from all types of devices.

There are numerous patterns and practices for building responsive web applications. Each development platform has its own tips and tricks, but there are some frameworks that are platform independent. The most popular is probably Twitter Bootstrap, a free and open-source HTML, CSS, and JavaScript framework that has been adopted by every major development platform. Just adhere to Bootstrap patterns and practices when building your application, and you will get a responsive web application with no trouble at all.

Common mistake #8: Cross browser incompatibility

The development process is, in most cases, under heavy time pressure. Every application needs to be released as soon as possible, and even good web developers are often focused on delivering functionality over design. Even though most developers have Chrome, Firefox, and IE installed, they use only one of them 90% of the time. It is common practice to use one browser during development and to start testing in the other browsers only as the application nears completion. This is perfectly reasonable–assuming you have a lot of time to test and fix the issues that show up at that stage.

However, there are some web development tips that can save you significant time when your application reaches the cross-browser testing phase:

  1. You don’t need to test in all browsers during development; it is time-consuming and ineffective. However, that does not mean you cannot switch browsers frequently. Use a different browser every couple of days, and you will at least recognize major problems early in the development phase.
  2. Be careful of using statistics to justify not supporting a browser. There are many organizations that are slow in adopting new software or upgrading. Thousands of users working there might still need access to your application, and they cannot install the latest free browser due to internal security and business policies.
  3. Avoid browser-specific code. In most cases there is an elegant solution that is cross-browser compatible (see the feature-detection sketch after this list).
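
Here is a rough TypeScript illustration of point 3, preferring feature detection over browser sniffing (the fallback helper is hypothetical):

    // Fragile: branching on the user agent string ties the code to specific browsers.
    const looksLikeIE = /Trident|MSIE/.test(navigator.userAgent);

    // Better: check whether the feature itself exists, whatever the browser is.
    declare function legacyCopy(text: string): void;       // hypothetical fallback (e.g., hidden textarea)

    async function copyText(text: string): Promise<void> {
      if (navigator.clipboard && navigator.clipboard.writeText) {
        await navigator.clipboard.writeText(text);         // standard, cross-browser API
      } else {
        legacyCopy(text);
      }
    }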

Common mistake #9: Not planning for portability

Assumption is the mother of all problems! When it comes to portability, this saying is more true than ever. How many times have you seen web development issues like hard-coded file paths, database connection strings, or assumptions that a certain library will be available on the server? Assuming that the production environment will match your local development computer is simply wrong.

An ideal application setup should be maintenance-free:

  1. Make sure that your application can scale and run on a load-balanced multiple server environment.
  2. Allow simple and clear configuration–possibly in a single configuration file (see the sketch after this list).
  3. Handle exceptions when web server configuration is not as expected.
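
A minimal sketch of point 2, assuming configuration is read from environment variables in a single TypeScript module (the variable names and defaults are only examples):

    // config.ts - the one place where environment-specific values live.
    export const config = {
      port: Number(process.env.PORT ?? 3000),
      databaseUrl: process.env.DATABASE_URL ?? "postgres://localhost:5432/app",
      uploadDir: process.env.UPLOAD_DIR ?? "/var/data/uploads",   // no hard-coded local paths
    };

    // Elsewhere in the application:
    // import { config } from "./config";
    // const pool = new Pool({ connectionString: config.databaseUrl });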

Common mistake #10: RESTful anti patterns

RESTful APIs have taken their place in web development and are here to stay. Almost every web application implements some kind of REST service, whether for internal use or for integrating with an external system. But we still see broken RESTful patterns and services that do not adhere to expected practices.

Two of the most common mistakes made when writing a RESTful API are:

  1. Using the wrong HTTP verbs. For example, using GET for writing data. HTTP GET is designed to be safe and idempotent, meaning that calling it should not change application state, and that repeating the same GET request should have the same effect as making it once.
  2. Not sending correct HTTP status codes. The best example of this mistake is sending error messages with response code 200.
     HTTP/1.1 200 OK

     {
         "message": "there was an error"
     }

You should only send HTTP 200 OK when the request has not generated an error. In the case of an error, you should send 400, 401, 500, or whichever other status code is appropriate for the error that has occurred, as in the sketch below.
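
A minimal Express sketch of that rule, with a placeholder service call, might look like this:

    import express from "express";

    declare function createOrder(data: unknown): Promise<{ id: string }>;  // hypothetical service call

    const app = express();
    app.use(express.json());

    app.post("/orders", async (req, res) => {
      if (!req.body?.bookId) {
        return res.status(400).json({ message: "bookId is required" });    // client error
      }
      try {
        const order = await createOrder(req.body);
        return res.status(201).json(order);                                // resource created
      } catch {
        return res.status(500).json({ message: "internal server error" }); // server error
      }
    });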

A detailed overview of standard HTTP status codes can be found in the HTTP specification.

Wrap up

Web development is an extremely broad term that can legitimately encompass development of a website, web service, or complex web application.

The main takeaway of this web development guide is the reminder that you should always be careful about authentication and authorization, plan for scalability, and never hastily assume anything – or be ready to deal with a long list of web development problems!