
Best Practices for Deploying New Environments in the Cloud for the First Time

When organizations take their first steps with public cloud services, they tend to focus on a single, immediate target.

My recommendation – think scale!

Plan a couple of steps ahead instead of looking at a single server that serves just a few customers. Think about a large environment comprised of hundreds or thousands of servers, serving 10,000 customers concurrently.

Planning will allow you to manage the environment (infrastructure, information security and budget) when you do reach a scale of thousands of concurrent customers. The more we plan the deployment of new environments in advance, according to their business purposes and the resources required for each environment, the easier it will be to scale up while maintaining a high level of security, budget control, change management and more.

In this three-part blog series, we will review some of the most important topics that will help avoid mistakes while building new cloud environments for the first time.

Resource allocation planning

The first step in resource allocation planning is to decide how to divide resources: based on the organizational structure (sales, HR, infrastructure, etc.) or based on environments (production, Dev, testing, etc.)

In order to avoid mixing resources (or access rights) between the various environments, the best practice is to separate the environments as follows:

  • Shared resources account (security products, auditing, billing management, etc.)
  • Development environment account (consider creating a separate account for test environment purposes)
  • Production environment account

Separating different accounts or environments can be done using:

Tagging resources

Even when deploying a single server inside a network environment (AWS VPC, Azure Resource Group, GCP VPC), it is important to tag resources. Tagging makes it possible to identify which resources belong to which projects, departments or environments, for billing purposes.

Common tagging examples:

  • Project
  • Department
  • Environment (Prod, Dev, Test)

Beyond tagging, it is recommended to add a description to resources that support this kind of metadata, in order to locate resources by their intended use.
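
For example, tagging an existing server with the AWS CLI might look like the sketch below (the instance ID and tag values are hypothetical placeholders):

    # Tag a server so it can be traced to a project, department and environment
    aws ec2 create-tags \
      --resources i-0123456789abcdef0 \
      --tags Key=Project,Value=Billing Key=Department,Value=Finance Key=Environment,Value=Prod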

Authentication, Authorization and Password Policy

In order to ease the management of accounts in the cloud (and, in the future, of multiple accounts for the various environments), the best practice is to follow the rules below:

  • Central authentication – If the organization isn't using Active Directory for central account management and access rights, the alternative is to use managed services such as AWS IAM, Google Cloud IAM, Azure AD, Oracle Cloud IAM, etc.

If a managed IAM service is chosen, it is critical to set a password policy that matches the organization's password policy (minimum password length, password complexity, password history, etc.) – see the sketch after this list.

  • If a central directory service is used by the organization, it is recommended to connect and sync the managed IAM service in the cloud with the organizational directory service on premise (federated authentication).
  • It is crucial to protect privileged accounts in the cloud environment (such as the AWS Root Account, Azure Global Admin, Azure Subscription Owner, GCP Project Owner, Oracle Cloud Service Administrator, etc.) by limiting their use to the minimum required, enforcing complex passwords, rotating passwords every few months, and enabling multi-factor authentication and auditing on privileged accounts.
  • Access to resources should be defined according to the least privilege principle.
  • Access to resources should be set to groups instead of specific users.
  • Access to resources should be based on roles in AWS, Azure, GCP, Oracle Cloud, etc.
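
As a minimal sketch of these rules using the AWS CLI (the group name, user name and policy ARN are hypothetical examples; align the password policy values with your organization's own policy):

    # Enforce an account-wide password policy
    aws iam update-account-password-policy \
      --minimum-password-length 14 \
      --require-uppercase-characters --require-lowercase-characters \
      --require-numbers --require-symbols \
      --password-reuse-prevention 24

    # Grant access to a group instead of to specific users
    aws iam create-group --group-name Developers
    aws iam attach-group-policy --group-name Developers \
      --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
    aws iam add-user-to-group --group-name Developers --user-name alice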

Audit Trail

It is important to enable auditing in all cloud environments, in order to gain insight into who accessed which resources and which actions were performed in the cloud environment. This serves both security and change management purposes.

Common managed audit trail services:

  • AWS CloudTrail – It is recommended to enable auditing in all regions and forward the audit logs to a central S3 bucket in a central AWS account (which will be accessible only to a limited number of user accounts) – see the example after this list.
  • Azure Monitor – When working with Azure, it is recommended to enable Azure Monitor in the first phase, in order to audit all access to resources and actions performed inside the subscription. Later, as the environment expands, consider services such as Azure Security Center and Azure Sentinel for auditing purposes.
  • Google Cloud Logging – It is recommended to enable auditing on all GCP projects and forward the audit logs to a central GCP project (which will be accessible only to a limited number of user accounts).
  • Oracle Cloud Infrastructure Audit service – It is recommended to enable auditing on all compartments and forward the audit logs to the Root compartment (which will be accessible only to a limited number of user accounts).
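
For example, enabling a multi-region CloudTrail trail that forwards audit logs to a central S3 bucket might look like this (a sketch; the trail and bucket names are hypothetical, and the bucket must already exist with a bucket policy that allows CloudTrail to write to it):

    aws cloudtrail create-trail \
      --name org-audit-trail \
      --s3-bucket-name central-audit-logs \
      --is-multi-region-trail
    aws cloudtrail start-logging --name org-audit-trail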

Budget Control

It is crucial to set a budget and budget alerts for any cloud account in the early stages of working with the cloud environment. This is important in order to avoid scenarios in which high resource consumption happens due to human error, such as purchasing or consuming expensive resources, or Denial of Wallet scenarios, where external attackers breach an organization's cloud account and deploy servers for Bitcoin mining.

Common examples of budget control management for various cloud providers:

  • AWS Consolidated Billing – Configure a central payer account for all AWS accounts in the organization, in order to consolidate billing data (which will be accessible only to a limited number of user accounts).
  • GCP Cloud Billing Account – Central repository for storing all billing data from all GCP projects.
  • Azure Cost Management – An interface for configuring budgets and budget alerts for all Azure subscriptions in the organization. It is possible to consolidate multiple Azure subscriptions into Management Groups in order to centrally control budgets for all subscriptions.
  • Budget on Oracle Cloud Infrastructure – An interface for configuring budget and budget alerts for all compartments.
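
As an illustration, creating a monthly cost budget with the AWS CLI might look like the sketch below (the account ID, budget name and amount are hypothetical; alert recipients can be added with the --notifications-with-subscribers parameter):

    aws budgets create-budget \
      --account-id 111122223333 \
      --budget '{"BudgetName":"monthly-dev-budget","BudgetLimit":{"Amount":"1000","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}'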

Secure access to cloud environments

In order to avoid direct inbound access from the Internet to resources in cloud environments (virtual servers, databases, storage, etc.), it is highly recommended to deploy a bastion host, which will be the only machine accessible from the Internet (SSH or RDP traffic) and will allow access to and management of the resources inside the cloud environment.

Common guidelines for deploying Bastion Host:
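
For example, inbound management traffic to the bastion host should be limited to known source addresses only. A minimal sketch with the AWS CLI (the security group ID and source address are hypothetical):

    # Allow SSH to the bastion host only from a single known office address
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0abc12345def67890 \
      --protocol tcp --port 22 \
      --cidr 203.0.113.10/32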

As the usage of cloud environments expands, we can consider deploying a VPN tunnel from the corporate network (site-to-site VPN) or allowing client VPN access from the Internet to the cloud environment (such as AWS Client VPN endpoint, Azure Point-to-Site VPN, Oracle Cloud SSL VPN).

Managing compute resources (Virtual Machines and Containers)

When choosing to deploy virtual machines in a cloud environment, it is highly recommended to follow these guidelines:

  • Choose an existing image from a pre-defined list in the cloud providers’ marketplace (operating system flavor, operating system build, and sometimes an image that includes additional software inside the base image).
  • Configure the image according to organizational or application demands.
  • Update all software versions inside the image.
  • Store an up-to-date version of the image ("Golden Image") inside the central image repository in the cloud environment for reuse (see the example after this list).
  • In case the information inside the virtual machines is critical, consider using managed backup services (such as AWS Backup or Azure Backup).
  • When deploying Windows servers, it is crucial to set complex passwords for the local Administrator’s account, and when possible, join the Windows machine to the corporate domain.
  • When deploying Linux servers, it is crucial to use SSH Key authentication and store the private key(s) in a secure location.
  • Whenever possible, encrypt data at rest for all block volumes (the server’s hard drives / volumes).
  • It is highly recommended to connect the servers to a managed vulnerability assessment service, in order to detect software vulnerabilities (services such as Amazon Inspector or Azure Security Center).
  • It is highly recommended to connect the servers to a managed patch management service in order to ease the work of patch management (services such as AWS Systems Manager Patch Manager, Azure Automation Update Management or Google OS Patch Management).
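
As an illustration of the "Golden Image" and encryption-at-rest guidelines, a sketch using the AWS CLI (the instance ID is hypothetical; note that create-image reboots the instance by default):

    # Turn on encryption by default for new EBS volumes in the current region
    aws ec2 enable-ebs-encryption-by-default

    # Capture a hardened, fully patched server as a reusable Golden Image (AMI)
    aws ec2 create-image \
      --instance-id i-0123456789abcdef0 \
      --name "golden-image-$(date +%Y%m%d)" \
      --description "Hardened base image"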

When choosing to deploy containers in the cloud environment, it is highly recommended to follow these guidelines:

Storing sensitive information

It is highly recommended to avoid storing sensitive information, such as credentials, encryption keys, secrets, API keys, etc., in clear text inside virtual machines, containers, text files or on the local desktop.

Sensitive information should be stored inside managed vault services, such as AWS Secrets Manager, Azure Key Vault, Google Cloud Secret Manager or HashiCorp Vault.
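
For example, storing and retrieving a secret with AWS Secrets Manager might look like the sketch below (the secret name and value are hypothetical placeholders):

    aws secretsmanager create-secret \
      --name prod/app/db-password \
      --secret-string 'REPLACE_WITH_REAL_SECRET'
    aws secretsmanager get-secret-value \
      --secret-id prod/app/db-password \
      --query SecretString --output text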

Object Storage

When using Object Storage, it is recommended to follow these guidelines:

  • Avoid allowing public access to services such as Amazon S3, Azure Blob Storage, Google Cloud Storage, Oracle Cloud Object Storage, etc. (see the example after this list).
  • Enable access auditing on Object Storage and store the access logs in a central account in the cloud environment (which will be accessible only to a limited number of user accounts).
  • It is highly recommended to encrypt all data at rest inside Object Storage, and when there is a business or regulatory requirement, to encrypt the data using customer-managed keys.
  • It is highly recommended to enforce HTTPS/TLS for access to object storage (users, computers and applications).
  • Avoid creating object storage bucket names that contain sensitive information, since bucket names are globally unique and published in DNS servers worldwide.
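
As an illustration, blocking public access and enabling default encryption at rest on an S3 bucket might look like this (a sketch; the bucket name is hypothetical):

    aws s3api put-public-access-block \
      --bucket example-central-logs \
      --public-access-block-configuration \
      BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

    aws s3api put-bucket-encryption \
      --bucket example-central-logs \
      --server-side-encryption-configuration \
      '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'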

Networking

Advanced use of cloud environments

  • Use managed services whenever possible – this allows consumption of services rather than maintaining servers, operating systems, updates/patches, backup and availability, assuming a managed service in cluster or replica mode is chosen.

  • Use Infrastructure as Code (IaC) in order to ease environment deployments, lower human errors and standardize deployments across multiple environments (Prod, Dev, Test).

Common Infrastructure as Code alternatives include AWS CloudFormation, Azure Resource Manager (ARM) templates, Google Cloud Deployment Manager and HashiCorp Terraform.
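
As a minimal illustration using AWS CloudFormation (the template and stack names are hypothetical), create a file called template.yaml with the following content:

    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      DevBucket:
        Type: AWS::S3::Bucket
        Properties:
          Tags:
            - Key: Environment
              Value: Dev

Then deploy the same template once per environment, which is what gives IaC its standardization benefit:

    aws cloudformation deploy --template-file template.yaml --stack-name dev-env
    aws cloudformation deploy --template-file template.yaml --stack-name test-env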

Summary

To sum up:

Plan. Know what you need. Think scale.

If you use the best practices outlined here, taking off to the cloud for the first time will be an easier, safer and smoother ride than you might expect.

Additional references

Top Six Cloud Myths Debunked

Organizations considering migrating to cloud services encounter many myths along the way. In most cases, the myths are based on the FUD factor (Fear, Uncertainty and Doubt).

In this post, I will review some of the common myths about clouds and see if they stand up to reality.

Myth #1: The cloud is less secure than on premise, or the cloud is more secure than on premise

The truth is that they are both right and both wrong. The real answer lies somewhere in the middle.

Managed services (SaaS) such as SAP SuccessFactors, Oracle CRM, Office 365, SalesForce, etc., simply cannot be compared one-to-one to similar services provided on premise – the delivery models are fundamentally different.

On the one hand, the customer shifts the burden of security and maintenance to the cloud provider, and on the other, mature cloud providers (such as those mentioned above), invest huge amounts of money (when compared to most organizations) in information security, penetration testing, audit trails and constant training to their support and information security teams.

The comparison of IaaS solutions and on premise models is different. According to the shared responsibility model, customers get access from the operating system and above, and they are solely responsible for everything that happens inside the operating system. This includes backups, hardening, updates, authorization, access control and defense against attacks.

The IaaS model is similar to traditional virtualization in the local data center, but top IaaS providers offer access to various managed services that ease the burden of server maintenance (from managed databases, through backup services, to patch management, vulnerability management, etc.). Bottom line – it is possible to reach the "sweet spot" where using the cloud makes an organization more secure than staying on premise, as long as we are familiar with the cloud provider's services and capabilities, and as long as we keep learning how to make the most of these services on an ongoing basis.

Myth #2: The cloud is more expensive than on premise or the cloud is less expensive than on premise

Again, the truth can be found somewhere in the middle.

In order to make an accurate comparison of on premise and cloud solutions, we need to take into consideration several factors. These include the cost of hardware, several years of maintenance, licenses, backup and archive storage, system availability and most important – manpower costs for system maintenance, including training of IT, development and security teams.

When comparing managed services, such as managed databases vs. manual maintenance of on premise databases, the calculation looks something like this: the cloud saves the costs of maintenance, hardening, patching/updating, and even backups, if they are part of the managed service. That translates into significant savings on maintenance costs compared to on premise environments, and allows organizations to consume services without the burden of maintaining the systems.

When comparing IaaS environments, the picture changes. The cost of cloud servers in a pay-as-you-go model is, in most cases, higher than comparable on premise models (comparing the same amount of vCPUs and memory). In order to cut costs in the IaaS model, we need to understand whether we are dealing with a high-performance workload with a changing run time, or with servers operating 24×7 for long periods of time. If we are dealing with the long term, it is highly recommended to purchase reserved instances for 1 or 3 years in advance.

Another alternative for saving server costs in an IaaS model is to choose the Spot model and save up to 90% of the price, assuming the service itself is not fault-sensitive and can be recovered automatically – batch processing, image processing, etc. The best alternative for saving server costs requires re-architecting our systems (as much as possible): migrating to systems built on a micro-services architecture, or using Serverless services, cutting resource and monthly costs to the minimum required.

Myth #3: The cloud is complex or Cloud migration is always complex

Migrating existing services from on premise to managed services in a SaaS model varies from one cloud provider to another, which makes it hard to generalize.

Many SaaS vendors publish guidelines and tools to assist organizations with the migration process. Some examples are SalesForce, Oracle CRM, SAP, Office 365, Google G Suite, etc.

When migrating to PaaS services, there are many guidelines and tools to assist organizations with the migration process. Some examples include AWS Database Migration Service, Azure Database Migration Service, Google BigQuery Data Transfer Service, etc.

Migrating systems to IaaS model requires training IT personnel on how cloud providers implement infrastructure services, such as VM deployment, setting network access rules, connecting to storage, settings permissions, etc.

Organizations who train their IT, networking and information security teams on working with IaaS and PaaS environments will be able to make the migration process easier. There are many low cost online courses to assist closing the required knowledge gap.

If you want to migrate really easily ("fast and dirty"), you can always choose to migrate systems using the "lift & shift" method, at least during the first phase, although it is not considered a cost-effective solution – sometimes similar hardware in on premise environments is cheaper than similar hardware in an IaaS environment. Still, this method gives the organization a working migrated environment, and later on it can adjust the resources the system requires, change the system architecture, replace servers with managed services, etc. Bottom line – it all begins with organizations willing to adapt to working in cloud environments and, of course, with management support for the cloud migration.

Myth #4: Multi-Cloud will prevent Vendor Lock-In

When organizations take their first steps toward working with public cloud solutions, it makes sense to choose a single IaaS provider, in order to allow the organization to train employees, plan a cloud migration strategy, and begin the actual cloud migration phase and the deployment of the new environments.

The fear of vendor lock-in, or of the cloud provider going bankrupt, is not unreasonable. However, the most practical complementary control is to choose one of the hyper-scale cloud providers, which mitigates the risk of the cloud provider going bankrupt.

Theoretically, selecting multiple IaaS providers might allow migration between providers, but in reality, moving to multi-cloud environments creates many challenges. These include the requirement to enforce central authentication, requirements to understand how each cloud provider implements services differently (such as storage, network, compute, etc.), understanding how to deploy new environments over multiple cloud providers’ infrastructure, understanding how to enforce logging/auditing and how to centrally handle incident response processes over multiple providers, etc.

If you want to mitigate the risk of vendor lock-in and keep the ability to move environments between cloud providers, you need to plan the infrastructure architecture ahead of time – from the very beginning – and base the architecture on containers and Kubernetes. As long as services are wrapped in containers, you will be able to deploy and run them on multiple cloud providers, though you must still take into consideration the integration with each cloud provider's ecosystem, such as storage, monitoring, message queuing services, etc. Bottom line – deploying production environments over multiple cloud providers requires a deep understanding of each cloud ecosystem. Instead of resolving the vendor lock-in risk, it can create high overhead for the organization, which may not be justified relative to that risk. Moving to container-based architectures might, however, ease the organization's ability to work with multiple cloud providers.

Myth #5: Auditing cloud environments is hard

Correct. But only partially.

Cloud migration requires customers to understand that they may not be able to conduct on premise audits of the cloud providers' data centers, as we used to do with hosting providers in the past. On the other hand, mature cloud vendors provide us with complementary audit controls, such as independent audit reports and certifications (e.g. SOC 2, ISO 27001) and built-in audit trail services.

Bottom line – It is possible, and highly recommended, to constantly audit cloud environments. Choosing one of the mature cloud providers will provide various complementary controls in order to assure that cloud environments are secure and comply with standards and regulations.

Myth #6: Migration to the cloud will cut manpower and cause employee job security issues

This is perhaps one of the most common myths – but it is inaccurate.

It is true that IT and information security teams will need to undergo training to work with various services and adapt existing knowledge from the on premise environments to cloud environments. But here lies the great potential.

If in the past we used to have deep knowledge in a specific field, such as operating systems, networking, storage, databases, information security, etc., today organizations migrating to the cloud are looking for employees with multidisciplinary knowledge.

The migration from on premise models to the cloud will require organizations to streamline. Although migration to SaaS or managed services requires less IT personnel, the migration to IaaS/PaaS environments requires a shift in the mindset of IT teams: they will need to adapt existing knowledge from manual work, like server maintenance, database administration and environment deployments, to automation – writing code (though not at the level of professional developers), deploying environments with Infrastructure as Code, etc. This ability to adapt will be in high demand: organizations will seek professional IT personnel, and existing IT teams who adapt to the changing world will become an even more valuable asset to their organizations.

Smart use of cloud services

Many organizations are talking about, and beginning to embrace, system migration to the cloud as part of a global IT strategy to use public clouds.
The migration from on premise to the public cloud allows organizations to choose from a variety of alternatives, each with its own benefits, knowledge requirements and pricing model (as compared to the on premise licensing model).
In this article, I shall review the different methods, with their pros and cons, and try to explain which method suits which scenario.

Lift & Shift Migration
In this method, the customer chooses an existing system from the on premise environment, reviews the resources required to run it in a cloud environment (number of CPUs, amount of memory and required disk space) and the operating system version (assuming the cloud provider has an image for this operating system), checks with the software vendor that the system may run in a cloud environment (licensing issues), and deploys all software components on a virtual server in the cloud environment (or migrates the entire system, assuming the cloud provider offers a managed service for system migration).

This is the most common method and the simplest one (compared to the other alternatives) for migrating to the cloud, and most cloud vendors (Infrastructure as a Service) support it. We should take into consideration that, cost-wise, this method is considered expensive (in terms of cost and resource usage) compared to purchasing physical hardware for 3-5 years in advance in the on premise environment.
The most common ways to lower server costs are:
• Resizing the server (number of CPUs, amount of memory) to match actual usage (see the sketch after this list)
• Purchasing reserved instances for 1 or 3 years in advance
• Using Spot instances for servers that do not require 24×7 availability, or for applications that can survive temporary downtime without harming the entire service
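
For example, right-sizing an over-provisioned server with the AWS CLI might look like this (a sketch; the instance ID and target instance type are hypothetical, and the instance must be stopped before its type can be changed):

    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
    aws ec2 modify-instance-attribute \
      --instance-id i-0123456789abcdef0 \
      --instance-type '{"Value": "t3.small"}'
    aws ec2 start-instances --instance-ids i-0123456789abcdef0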

Moving to Micro-Services and Containers
In this method, the organization migrates from a monolithic application (a system where all components rely on each other and must be deployed together) to micro-services based development, where each component runs independently (usually inside its own container) and can be replaced, upgraded and scaled as needed, independently of the rest of the system's components.
It is possible to run containers on virtual servers (where management, updates and scale are the customer's responsibility) or as part of a managed service (such as a managed Kubernetes cluster service).
This method requires the development teams to know how to package their applications inside containers (a minimal sketch follows below), take care of scaling and of monitoring container activity (and the communications between containers), and take care of security topics (such as who can access a container, or whether the communication between containers is encrypted, etc.)
This method is suitable for organizations who wish to change their current application architecture or begin developing new applications. Modern applications are being developed today as containers, which allows the customer to migrate between on premise environments and the public cloud, and with proper adjustments, between different cloud providers (once we solve topics such as connectivity to the current cloud vendor's services, such as message queuing, storage, logging, etc.)
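
For example, packaging an application inside a container and running it locally might look like this (a sketch; the image name, tag and exposed port are hypothetical, and a Dockerfile is assumed to exist in the current directory):

    # Build an image from the Dockerfile in the current directory
    docker build -t myapp:1.0 .
    # Run the container in the background and expose the application port
    docker run -d --name myapp -p 8080:8080 myapp:1.0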

Moving to Serverless / Function as a Service
In this method, the customer isn’t in charge of operating system maintenance, system availability or scale. Due to the fact that the entire infrastructure is been managed by the cloud vendor, the vendor takes care of scale, as required by the application needs.
This method is suitable for event based services, with short running time (few seconds to few minutes). As part of moving to modern applications, there are many scenarios for choosing specific development language, uploading the code to a managed cloud environment (Serverless), selecting the desired compute power (amount of memory, which effects the number of CPU) and creating triggers for running the function.
It is possible to embed Serverless capabilities, as part of modern micro-services architecture.
The pricing model for this method is based on the amount of time the function was running and the amount of memory used for running the function.
Common use cases for Serverless – image processing, data analysis from IoT devices, etc.
This method is not suitable for every application (due to short running intervals), and also not every development language is currently been supported by every cloud vendor.
For example:
• AWS Lambda currently supports the following languages natively: Java, Go, PowerShell, Node.JS, C#, Python, Ruby
• Azure Functions currently supports the following languages natively: Java, JavaScript, C#, PowerShell, Python, TypeScript
• Google Cloud Functions currently supports the following languages natively: Python, Go, Node.JS
• Oracle Functions currently supports the following languages natively: Java, Python, Node.JS, Go, Ruby
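
As an illustration, deploying a small Python function to AWS Lambda with the CLI might look like the sketch below (the function name, handler, IAM role and source file are hypothetical; note that the memory size selected also determines the allocated CPU):

    # Package the function code and create the Lambda function
    zip function.zip lambda_function.py
    aws lambda create-function \
      --function-name image-resize \
      --runtime python3.8 \
      --handler lambda_function.handler \
      --zip-file fileb://function.zip \
      --role arn:aws:iam::111122223333:role/lambda-exec-role \
      --memory-size 256 --timeout 30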

Migration to managed services (SaaS / PaaS)
In this method, the organization chooses an existing SaaS (such as Messaging, CRM, ERP, etc.) or existing PaaS (such as Database, Storage, etc.)
This method suits many scenarios in which the organization would like to consume an existing service without the need to maintain the infrastructure (operating system, storage, backup, security aspects, etc.). After choosing an existing service, the organization begins migrating data to the managed service, configures proper access rights, sometimes configures VPN connectivity between the on premise and the cloud environment, configures backup (where the service supports this capability) and begins consuming the service.
The pricing model varies between cloud vendors (sometimes it is based on monthly pricing, and sometimes on consumed compute power or consumed storage space).
Mature, transparent cloud vendors reveal accurate monthly billing information.
Mature cloud vendors also know how to handle privacy, law and regulation aspects (such as GDPR compliance and other privacy regulations) using data processing agreements.

Summary
In this article, I have reviewed the various methods of using cloud services wisely. As we can see, not every method suits every scenario or every organization, but there is no doubt that the future is heading toward cloud and managed services.
My recommendation for organizations – focus on what brings your organization business value (such as banking, commerce, retail, education, etc.), train your IT and development teams on the coming technological changes, and shift the responsibility for maintaining your infrastructure to vendors who specialize in it.

To Tokenize Or Not Tokenize – The Truth About Scope And Compliance Risk

The hard-to-face reality is that billions of personal records are exposed each year. A commonly used, yet incomplete, solution is tokenization. Tokenizing sensitive data does not eliminate the need to achieve and certify PCI DSS compliance.

In order to completely descope from PCI, a business can partner with a data custodian (VGS) that handles 100% of data capture and vaulting – removing any compliance risk and completely avoiding data leaks.

Massive data leaks, at this point, are becoming a frequent occurrence – with headlines regularly popping up highlighting cybersecurity disasters that have impacted millions of consumers.

Earlier this summer, news of a cybersecurity disaster rattled North American consumers. The highly-publicized Capital One data breach of 2019 led to the sensitive data exposure of 100 million Americans and 6 million Canadians – including hundreds of thousands of social security numbers and bank account numbers.

Similarly, in July, we learned about the whopping $700 million settlement as a result of the Equifax data breach. Now, years later, the 147 million customers impacted by that disaster all get a piece of that pie.

It only seems like a matter of time until the next multi-million-dollar data breach settlement will be announced, and another consumer data-handling organization will have their feet publicly held to the fire.

From the improper configuration of web applications to the massive security risk involved in cloud storage generally, companies have wisely been seeking alternatives to storing their own sensitive user data and opening themselves up to data breach risk.

The rise of tokenization

In order to reduce data leakage risk as much as possible, many tech organizations have leveraged a method called tokenization. It’s a way to limit storing plain text sensitive data within a company’s own systems by using “tokens” to replace private user information like medical data or credit card numbers.

Unlike encryption, where a mathematical equation can “solve” the data replacement and reveal the original sensitive information, tokenization is not reversible. With no mathematical relationship to the original data point, tokenization is widely considered to be a safe way of transmitting and storing critical information.

However, tokenizing sensitive data does not eliminate the need to achieve and certify PCI DSS compliance – although it can reduce the number of system components to which PCI DSS compliance would apply.

With tokenization, sensitive data is mostly hidden. But, there are two points where tokenized data still remains within the scope of PCI DSS compliance: the data vault and the original point of capture.

Even when organizations store and use tokens instead of raw sensitive data, the original data remains within PCI scope, because the data vault and the point of capture are still within the cardholder data environment. In that environment, the business is still responsible for any leaked data.

But what if businesses could offload this data risk fully, and enjoy the benefits of tokenization while keeping all the original data completely off their own systems?

Descoping entirely

In order to completely descope from PCI DSS compliance, a business can partner with a data custodian that handles 100% of data capture and vaulting – removing any compliance risk and completely avoiding data leaks.

VGS is an innovative data custodian that takes on their clients’ compliance risk, securely storing all sensitive data and removing businesses from PCI scope in full.

By leveraging synthetic data ‘aliases,’ which are generated in real-time to protect all sensitive customer information, VGS empowers businesses to use customer data just as they did before – but never actually touch the sensitive data themselves.

Startups that work with VGS remove their compliance risk and avoid the potential risk associated with data breaches because there’s nothing to steal in the first place.

By partnering with a data custodian, organizations can completely descope their systems and win over new customers with the peace of mind that their sensitive personal data is in safe hands.

When businesses implement VGS solutions to handle their sensitive data, they instantly inherit VGS’s best-in-class security posture, which enables them to fast-track their certifications like PCI, SOC2 and others.

With data security as one less thing to worry about, organizations can focus their time and resources on growing their core businesses.
This article was originally published in Very Good Security.

Threat Modeling for Data Protection

When evaluating the security of an application and its data model, ask the following questions:

  • What is the sensitivity of the data?
  • What are the regulatory, compliance, or privacy requirements for the data?
  • What is the attack vector that a data owner is hoping to mitigate?
  • What is the overall security posture of the environment, is it a hostile environment or a relatively trusted one?

When threat modeling, consider the following common scenarios:

Data at rest (“DAR”)

Data at rest in information technology means inactive data that is stored physically in any digital form (e.g. databases/data warehouses, spreadsheets, archives, tapes, off-site backups, mobile devices, etc.).

  • Transparent Data Encryption (often abbreviated to TDE) is a technology employed by Microsoft SQL Server, IBM DB2 and Oracle to encrypt the "table-space" files in a database. TDE offers encryption at the file level. It solves the problem of protecting data at rest by encrypting databases both on the hard drive and on backup media. It does not protect data in motion (DIM) nor data in use (DIU).
  • Mount-point encryption: another form of TDE, available for database systems that do not natively support table-space encryption. Several vendors offer mount-point encryption for Linux/Unix/Microsoft Windows file system mount-points. When a vendor does not support TDE, this type of encryption effectively encrypts the database table-space and stores the encryption keys separately from the file system. So, if the physical or logical storage medium is detached from the compute resource, the database table-space remains encrypted.

Data in Motion (“DIM”)

Data in motion considers the security of data that is being copied from one medium to another. Data in motion typically refers to data being transmitted over a network transport. Web applications represent common data in motion scenarios.

  • Transport Layer Security (TLS or SSL): commonly used to encrypt internet protocol based network transports. TLS works by encrypting the layer 7 "application layer" packets of a given network stream using symmetric encryption (see the example after this list).
  • Secure Shell/Secure File Transport (SSH, SCP, SFTP): SSH is a protocol used to securely login and access remote computers. SFTP runs over the SSH protocol (leveraging SSH security and authentication functionality) but is used for secure transfer of files. The SSH protocol utilizes public key cryptography to authenticate access to remote systems.
  • Virtual Private Networks (VPNs): a virtual private network extends a private network across a public network, and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network.
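
For example, the TLS version and certificate a server negotiates can be inspected with the OpenSSL command-line tool (replace example.com with the host under test):

    openssl s_client -connect example.com:443 -tls1_2 < /dev/null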

Data in Use (“DIU”)

Data in use happens whenever a computer application reads data from a storage medium into volatile memory.

  • Full memory encryption: encryption to prevent data visibility in the event of theft, loss, or unauthorized access. Encryption is commonly used to protect Data in Motion and Data at Rest, and is increasingly recognized as an optimal method for protecting Data in Use. There have been multiple approaches to encrypting data in use within memory. Microsoft's Xbox has a capability to provide memory encryption, and the company PrivateCore has a commercial software product, vCage, that provides attestation along with full memory encryption for x86 servers.
  • RAM enclaves: enable an enclave of protected data to be secured with encryption in RAM. Enclave data is encrypted while in RAM but available as clear text inside the CPU and CPU cache, when written to disk, when traversing networks, etc. Intel introduced the concept of "enclaves" as part of its Software Guard Extensions in technical papers published in 2013.
  • 2013 papers from the Workshop on Hardware and Architectural Support for Security and Privacy:
      • Innovative Instructions and Software Model for Isolated Execution
      • Innovative Technology for CPU Based Attestation and Sealing

Where do traditional data protection techniques fall short?

TDE: Database and mount-point encryption both fall short of fully protecting data across the data's entire lifecycle. For instance, TDE was designed to defend against theft of physical or virtual storage media only. An authorized system administrator, or an unauthorized user or process, can gain access to sensitive data either by running a legitimate query or by scraping RAM. TDE does not provide granular access control to data at rest once the data has been mounted.

TLS/SCP/SFTP/VPN, etc.: Transport layer encryption also falls short of protecting data across the entire data lifecycle. For example, TLS does not protect data at rest or in use. Quite often TLS is only enabled on Internet-facing application load balancers, and TLS calls to web applications are plaintext on the datacenter or cloud side of the load balancer.

DIU memory encryption: Full memory encryption also falls short of protecting data across the entire data lifecycle. DIU techniques are cutting edge and not generally available; commodity compute architectures have only begun to support memory encryption. With DIU memory encryption, data is only encrypted while in memory – it is plaintext while in the CPU and cache, when written to disk, and when traversing network transports.

Complementary or Alternative Approach: Tokenization

We need an alternative approach that addresses all the exposure gaps 100% of the time. In information security, we want a defense in depth strategy: layers of controls, so that if a single layer fails or is compromised, another layer can compensate for the failure.

Tokenization and format preserving encryption (FPE) are unique in that they protect sensitive data throughout the data lifecycle and across a data-flow. Tokenization and FPE are portable and remain in force across mixed technology stacks, and they do not share the same exposures as traditional data protection techniques.

How does this work? Fields of sensitive data are cryptographically transformed at the system of origin, that is, during intake. A cryptographic transform is applied to each sensitive field, producing a non-sensitive token representation of the original data.

Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no extrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system.

Format preserving encryption takes this a step further and allows the data element to maintain its original format and data type. For instance, a 16-digit credit card number can be protected and the result is another 16-digit value. The value here is to reduce the overall impact of code changes to applications and databases while reducing the time to market of implementing end to end data protection.
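
As a toy sketch of the token/vault relationship only (not a production implementation; the vault path is hypothetical, and in practice the mapping lives inside a hardened tokenization system):

    PAN='4111111111111111'       # example test card number
    TOKEN=$(uuidgen)             # random token with no mathematical link to the PAN
    printf '%s %s\n' "$TOKEN" "$PAN" >> /secure/vault/token-map.txt
    echo "Applications store and transmit only: $TOKEN"
    # Only a trusted detokenization service may resolve the token back:
    grep "^$TOKEN " /secure/vault/token-map.txt | cut -d' ' -f2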

In Closing

Use of tokenization or format preserving encryption to replace live data in systems results in minimized exposure of sensitive data to those applications, stores, people and processes. Replacing sensitive data results in reduced risk of compromise or accidental exposure and unauthorized access to sensitive data.

Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Moreover, in several cases removal of sensitive data from an organization's applications, databases and business processes will result in reduced compliance and audit scope, resulting in significantly less complex and shorter audits.

This article was originally published in Very Good Security.

WordPress 5.x Hardening Guide for CentOS 7.6

This document explains the process of installing, configuring and hardening a WordPress 5.x server, based on a CentOS 7.6 default installation (Linux Firewall and SELinux enabled by default), including OpenSSL compiled from source files for TLS v1.2 support, Apache, MariaDB and PHP 7.3.

    Pre-Requirements
    • Linux server installed with CentOS 7.6 (64bit)
    • policycoreutils-python-* package installed
    • setools-libs-* package installed
    • libcgroup-* package installed
    • audit-libs-python-* package installed
    • libsemanage-python-* package installed
    • gcc* package installed
    • gcc-c++* package installed
    • autoconf* package installed
    • automake* package installed
    • libtool* package installed
    • perl-core package installed
    • zlib-devel package installed
    • expat-devel package installed
    • yum-utils package installed
    OpenSSL upgrade phase
  1. Login using privileged account
  2. Run the commands below to download the latest build of OpenSSL:
    cd /usr/local/src
    wget https://www.openssl.org/source/openssl-1.1.1.tar.gz
    tar -xvzf openssl-1.1.1.tar.gz
  3. Run the commands below to compile the latest build of OpenSSL:
    cd openssl-1.1.1
    ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl shared zlib
    make
    make test
    make install
  4. Edit using VI the file /etc/ld.so.conf.d/openssl-1.1.1.conf and add the following string to the file:
    /usr/local/ssl/lib
  5. Run the command below to reload the dynamic link:
    ldconfig -v
  6. Backup the original OpenSSL binary:
    mv /usr/bin/openssl /usr/bin/openssl.BACKUP
  7. Create using VI the file /etc/profile.d/openssl.sh and add the following content:
    #Set OPENSSL_PATH
    OPENSSL_PATH=/usr/local/ssl/bin
    export OPENSSL_PATH
    PATH=$PATH:$OPENSSL_PATH
    export PATH
  8. Run the commands below to complete the configuration of the OpenSSL:
    chmod +x /etc/profile.d/openssl.sh
    source /etc/profile.d/openssl.sh
    echo $PATH
    which openssl
    Apache 2.4.6 installation phase
  1. Login using privileged account
  2. Run the command below to install Apache 2.4.6:
    yum install httpd -y
  3. Updating Ownership and Permissions on Apache folders:
    chown root:root /usr/sbin/apachectl
    chown root:root /usr/sbin/httpd
    chmod 770 /usr/sbin/apachectl
    chmod 770 /usr/sbin/httpd
    chown -R root:root /etc/httpd
    chmod -R go-r /etc/httpd
    chown -R root:root /etc/httpd/logs
    chmod -R 700 /etc/httpd/logs
  4. Create folder for the web content:
    mkdir -p /www
  5. Updating Ownership and Permissions on the web content folder:
    chown -R root /www
    chmod -R 775 /www
  6. Fix the SELinux security context on the new web folder:
    semanage fcontext -a -t httpd_sys_content_t "/www(/.*)?"
    restorecon -F -R -v /www
    chcon -R -t httpd_sys_content_t /www
  7. Create folder for the first WordPress site:
    mkdir /www/WebSiteA
    Note: Replace WebSiteA with the relevant name
  8. Create folder for the secondWordPress site:
    mkdir /www/WebSiteB
    Note: Replace WebSiteB with the relevant name
  9. Create logs folder for the first WordPress site:
    mkdir /www/WebSiteA/logs
    Note: Replace WebSiteA with the relevant name
  10. Create logs folder for the second WordPress site:
    mkdir /www/WebSiteB/logs
    Note: Replace WebSiteB with the relevant name
  11. Configure permissions on the logs folder for the first WordPress site:
    chown -R apache:apache /www/WebSiteA/logs
    chmod -R 700 /www/WebSiteA/logs

    Note: Replace WebSiteA with the relevant name
  12. Configure permissions on the logs folder for the second WordPress site:
    chown -R apache:apache /www/WebSiteB/logs
    chmod -R 700 /www/WebSiteB/logs

    Note: Replace WebSiteB with the relevant name
  13. Fix the SELinux security context on the new web folder for the first WordPress site:
    semanage fcontext -a -t httpd_log_t "/www/WebSiteA/logs(/.*)?"
    restorecon -F -R -v /www/WebSiteA/logs
    chcon -R -t httpd_log_t /www/WebSiteA/logs

    Note: Replace WebSiteA with the relevant name
  14. Fix the SELinux security context on the new web folder for the second WordPress site:
    semanage fcontext -a -t httpd_log_t "/www/WebSiteB/logs(/.*)?"
    restorecon -F -R -v /www/WebSiteB/logs
    chcon -R -t httpd_log_t /www/WebSiteB/logs

    Note: Replace WebSiteB with the relevant name
  15. Create the following folders:
    mkdir /etc/httpd/sites-available
    mkdir /etc/httpd/sites-enabled
  16. Edit using VI the file /etc/httpd/conf/httpd.conf and change the following strings:
    From:
    LogLevel warn

    To:
    LogLevel notice

    From:
    DocumentRoot "/var/www/html"

    To:
    # DocumentRoot "/var/www/html"

    From:
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

    To:
    # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

  17. Comment out the entire sections below inside the /etc/httpd/conf/httpd.conf
    <Directory />
    <Directory "/var/www">
    <Directory "/var/www/html">
    <Directory "/var/www/cgi-bin">
  18. Add the following sections to the end of the /etc/httpd/conf/httpd.conf file:
    IncludeOptional sites-enabled/*.conf
    # Configure custom error message:
    ErrorDocument 400 "The requested URL was not found on this server."
    ErrorDocument 401 "The requested URL was not found on this server."
    ErrorDocument 403 "The requested URL was not found on this server."
    ErrorDocument 404 "The requested URL was not found on this server."
    ErrorDocument 405 "The requested URL was not found on this server."
    ErrorDocument 408 "The requested URL was not found on this server."
    ErrorDocument 410 "The requested URL was not found on this server."
    ErrorDocument 411 "The requested URL was not found on this server."
    ErrorDocument 412 "The requested URL was not found on this server."
    ErrorDocument 413 "The requested URL was not found on this server."
    ErrorDocument 414 "The requested URL was not found on this server."
    ErrorDocument 415 "The requested URL was not found on this server."
    ErrorDocument 500 "The requested URL was not found on this server."
    # Configure Server Tokens
    ServerTokens Prod
    # Disable Server Signature
    ServerSignature Off
    # Disable Tracing
    TraceEnable Off
    # Maximum size of the request body.
    LimitRequestBody 4000000
    # Maximum number of request headers in a request.
    LimitRequestFields 40
    # Maximum size of request header lines.
    LimitRequestFieldSize 4000
    # Maximum size of the request line.
    LimitRequestLine 4000
    MaxRequestsPerChild 10000
    # Configure clickjacking protection
    Header always append X-Frame-Options SAMEORIGIN
  19. Disable the configuration files below by renaming them:
    mv /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/autoindex.conf.bak
    mv /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/userdir.conf.bak
  20. Comment out the lines inside the /etc/httpd/conf.modules.d/00-base.conf file below to disable default modules:
    LoadModule status_module modules/mod_status.so
    LoadModule info_module modules/mod_info.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule include_module modules/mod_include.so
    LoadModule userdir_module modules/mod_userdir.so
    LoadModule env_module modules/mod_env.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule actions_module modules/mod_actions.so
  21. Comment out the lines inside the /etc/httpd/conf.modules.d/01-cgi.conf file below to disable default modules:
    LoadModule cgi_module modules/mod_cgi.so
  22. Using VI, create configuration file for the first WordPress site called /etc/httpd/sites-available/websitea.com.conf with the following content:
    <VirtualHost *:80>
    ServerAdmin admin@websitea.com
    ServerName www.websitea.com
    ServerAlias websitea.com
    DocumentRoot /www/WebSiteA
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    <Directory /www/WebSiteA>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    Require all granted
    Order allow,deny
    Allow from all
    <LimitExcept GET POST>
    deny from all
    </limitexcept>
    </Directory>
    ErrorLog /www/WebSiteA/logs/error.log
    CustomLog /www/WebSiteA/logs/access.log combined
    </VirtualHost>

    Note: Replace WebSiteA with the relevant name
  23. Using VI, create configuration file for the second WordPress site called /etc/httpd/sites-available/websiteb.com.conf with the following content:
    <VirtualHost *:80>
    ServerAdmin admin@websiteb.com
    ServerName www.websiteb.com
    ServerAlias websiteb.com
    DocumentRoot /www/WebSiteB
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    <Directory /www/WebSiteB>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    Require all granted
    Order allow,deny
    Allow from all
    <LimitExcept GET POST>
    deny from all
    </limitexcept>
    </Directory>
    ErrorLog /www/WebSiteB/logs/error.log
    CustomLog /www/WebSiteB/logs/access.log combined
    </VirtualHost>

    Note: Replace WebSiteB with the relevant name
  24. Run the commands below to enable the new virtual host files:
    ln -s /etc/httpd/sites-available/websitea.com.conf /etc/httpd/sites-enabled/websitea.com.conf
    ln -s /etc/httpd/sites-available/websiteb.com.conf /etc/httpd/sites-enabled/websiteb.com.conf

    Note 1: Replace WebSiteA with the relevant name
    Note 2: Replace WebSiteB with the relevant name
  25. Run the command below to configure Apache to load at startup:
    systemctl enable httpd
  26. To start the Apache service, run the command below:
    systemctl start httpd
  27. Run the commands below to enable HTTPD rule on the firewall:
    firewall-cmd --zone=public --add-service=http --permanent
    systemctl restart firewalld
    MariaDB installation phase
  1. Login using privileged account
  2. Install MariaDB:
    yum install -y mariadb-server mariadb
  3. Enable the MariaDB service:
    systemctl enable mariadb.service
  4. Start the MariaDB service:
    systemctl start mariadb.service
  5. Run the commands below to set ownership and permissions for the /etc/my.cnf file:
    chown root /etc/my.cnf
    chmod 644 /etc/my.cnf
  6. Edit using VI the file /etc/my.cnf and add the string below under the [mysqld] section:
    bind-address = 127.0.0.1
  7. Run the command below to secure the MariaDB installation:
    mysql_secure_installation
  8. When prompted for the current root password, press Enter (blank by default) -> Press Y to set the root password -> specify a new complex password (at least 14 characters, upper case, lower case, number, special characters) and document it -> Press Y to remove anonymous users -> Press Y to disallow root login remotely -> Press Y to remove the test database -> Press Y to reload privilege tables and exit the script.
  9. Restart the MariaDB service:
    systemctl restart mariadb.service
    PHP 7.3 installation phase
  1. Login using privileged account
  2. Run the commands below to install PHP 7.3:
    yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm -y
    yum-config-manager --enable remi-php73
    yum install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo -y
  3. Change the permissions on the php.ini file:
    chmod 640 /etc/php.ini
  4. Edit using VI the file /etc/php.ini and change the following strings:
    From:
    mysqli.default_host =

    To:
    mysqli.default_host = 127.0.0.1:3306

    From:
    allow_url_fopen = On

    To:
    allow_url_fopen = Off

From:
expose_php = On

To:
expose_php = Off

From:
memory_limit = 128M

To:
memory_limit = 8M

From:
post_max_size = 8M

To:
post_max_size = 2M

From:
upload_max_filesize = 2M

To:
upload_max_filesize = 1M

From:
disable_functions =

To:
disable_functions = fpassthru,crack_check,crack_closedict,crack_getlastmessage,crack_opendict,psockopen,php_ini_scanned_files,shell_exec,chown,dl,ctrl_dir,phpini,tmp,safe_mode,systemroot,server_software,get_current_user,HTTP_HOST,ini_restore,popen,pclose,exec,suExec,passthru,proc_open,proc_nice,proc_terminate,proc_get_status,proc_close,pfsockopen,leak,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellcmd,escapeshellarg,posix_ctermid,posix_getcwd,posix_getegid,posix_geteuid,posix_getgid,posix_getgrgid,posix_getgrnam,posix_getgroups,posix_getlogin,posix_getpgid,posix_getpgrp,posix_getpid,posix_getppid,posix_getpwnam,posix_getpwuid,posix_getrlimit,system,posix_getsid,posix_getuid,posix_isatty,posix_setegid,posix_seteuid,posix_setgid,posix_times,posix_ttyname,posix_uname,posix_access,posix_get_last_error,posix_mknod,posix_strerror,posix_initgroups

  5. Restart the Apache service:
    systemctl restart httpd.service
    WordPress 5.x installation phase

    1. Login using privileged account.
    2. Run the command below to log in to MariaDB:
      /usr/bin/mysql -uroot -p
      Note: When prompted, specify the password for the MariaDB root account.
    3. Run the following commands from the MariaDB prompt:
      CREATE USER 'blgusr'@'localhost' IDENTIFIED BY 'A3fg1j7x!s2gEq';
      CREATE USER 'hswjm'@'localhost' IDENTIFIED BY 'K9mq2vXz!7wLp4';
      CREATE DATABASE m6gf42s;
      CREATE DATABASE b7mf3aq;
      GRANT ALL PRIVILEGES ON m6gf42s.* TO "blgusr"@"localhost" IDENTIFIED BY "A3fg1j7x!s2gEq";
      GRANT ALL PRIVILEGES ON b7mf3aq.* TO "hswjm"@"localhost" IDENTIFIED BY "K9mq2vXz!7wLp4";
      FLUSH PRIVILEGES;
      quit

      Note 1: Replace “blgusr” with a username to access the first database.
      Note 2: Replace “A3fg1j7x!s2gEq” with a complex password for the account that will access the first database (at least 14 characters, upper case, lower case, number, special characters).
      Note 3: Replace “hswjm” with a username to access the second database.
      Note 4: Replace “K9mq2vXz!7wLp4” with a complex password for the account that will access the second database (at least 14 characters, upper case, lower case, number, special characters).
      Note 5: Replace “m6gf42s” with the first WordPress database name.
      Note 6: Replace “b7mf3aq” with the second WordPress database name.
    4. Run the commands below to download the latest build of WordPress:
      cd /usr/local/src
      wget https://wordpress.org/latest.zip
      unzip latest.zip -d /www/WebSiteA
      unzip latest.zip -d /www/WebSiteB

      Note 1: Replace WebSiteA with the relevant name
      Note 2: Replace WebSiteB with the relevant name
    5. Fix the SELinux security context on the new web folder for the first WordPress site:
      semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteA(/.*)?"
      restorecon -F -R -v /www/WebSiteA
      chcon -R -t httpd_sys_content_t /www/WebSiteA
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wp-content(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wp-content
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wp-content

      Note: Replace WebSiteA with the relevant name
    6. Fix the SELinux security context on the new web folder for the second WordPress site:
      semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteB(/.*)?"
      restorecon -F -R -v /www/WebSiteB
      chcon -R -t httpd_sys_content_t /www/WebSiteB
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteB/wp-content(/.*)?"
      restorecon -F -R -v /www/WebSiteB/wp-content
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteB/wp-content

      Note: Replace WebSiteB with the relevant name
    7. Create using VI the file /www/WebSiteA/config.php with the following content:
      <?php
      define('DB_NAME', 'm6gf42s');
      define('DB_USER', 'blgusr');
      define('DB_PASSWORD', 'A3fg1j7x!s2gEq');
      define('DB_HOST', 'localhost');
      $table_prefix = 'm6gf42s_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');
      define('FS_METHOD', 'direct');
      ?>

      Note 1: Make sure there are no spaces, newlines, or other characters before the opening ‘<?php’ tag or after the closing ‘?>’ tag.
      Note 2: Replace “blgusr” with the MariaDB account used to access the first database.
      Note 3: Replace “A3fg1j7x!s2gEq” with a complex password (at least 14 characters).
      Note 4: Replace “m6gf42s” with the first WordPress database name.
      Note 5: In order to generate random values for the AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY, use the web site below:
      http://api.wordpress.org/secret-key/1.1/
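      For example, the keys can also be fetched from the command line; appending salt/ to the same URL is assumed to return values for the SALT constants as well:
      curl -s http://api.wordpress.org/secret-key/1.1/salt/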
    8. Create using VI the file /www/WebSiteB/config.php with the following content:
      <?php
      define('DB_NAME', 'b7mf3aq');
      define('DB_USER', 'hswjm');
      define('DB_PASSWORD', 'Gv7k2R!p9QwZx4');
      define('DB_HOST', 'localhost');
      $table_prefix = 'b7mf3aq_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');
      define('FS_METHOD', 'direct');
      ?>

      Note 1: Make sure there are no spaces, newlines, or other characters before the opening ‘<?php’ tag or after the closing ‘?>’ tag.
      Note 2: Replace “hswjm” with the MariaDB account used to access the second database.
      Note 3: Replace “Gv7k2R!p9QwZx4” with a complex password (at least 14 characters).
      Note 4: Replace “b7mf3aq” with the second WordPress database name.
      Note 5: In order to generate random values for the AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY, use the web site below:
      http://api.wordpress.org/secret-key/1.1/
    9. Copy the wp-config.php file:
      cp /www/WebSiteA/wordpress/wp-config-sample.php /www/WebSiteA/wordpress/wp-config.php
      cp /www/WebSiteB/wordpress/wp-config-sample.php /www/WebSiteB/wordpress/wp-config.php

      Note 1: Replace WebSiteA with the relevant name
      Note 2: Replace WebSiteB with the relevant name
    10. Edit using VI the file /www/WebSiteA/wordpress/wp-config.php
      Add the following lines before the string “That’s all, stop editing! Happy blogging”:
      /* Multisite */
      define('WP_ALLOW_MULTISITE', true);
      include('/www/WebSiteA/config.php');

      Remove or comment the following sections:
      define('DB_NAME', 'putyourdbnamehere');
      define('DB_USER', 'usernamehere');
      define('DB_PASSWORD', 'yourpasswordhere');
      define('DB_HOST', 'localhost');
      $table_prefix = 'wp_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');

      Note: Replace WebSiteA with the relevant name
    11. Edit using VI the file /www/WebSiteB/wordpress/wp-config.php
      Add the following lines before the string “That’s all, stop editing! Happy blogging”:
      /* Multisite */
      define('WP_ALLOW_MULTISITE', true);
      include('/www/WebSiteB/config.php');

      Remove or comment the following sections:
      define('DB_NAME', 'putyourdbnamehere');
      define('DB_USER', 'usernamehere');
      define('DB_PASSWORD', 'yourpasswordhere');
      define('DB_HOST', 'localhost');
      $table_prefix = 'wp_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');

      Note: Replace WebSiteB with the relevant name
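      As an alternative to editing by hand, the sections above can be commented out with sed (a sketch that assumes the file still contains the stock sample values; back up the file first, and repeat for the second site). PHP treats lines starting with // as comments:
      cp /www/WebSiteA/wordpress/wp-config.php /www/WebSiteA/wordpress/wp-config.php.bak
      sed -i -E "/^define\('DB_(NAME|USER|PASSWORD|HOST)'/s|^|// |" /www/WebSiteA/wordpress/wp-config.php
      sed -i '/^\$table_prefix/s|^|// |' /www/WebSiteA/wordpress/wp-config.php
      sed -i "/put your unique phrase here/s|^|// |" /www/WebSiteA/wordpress/wp-config.php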
    12. Create using VI the file /www/WebSiteA/wordpress/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
      Header set X-XSS-Protection "1; mode=block"
      Header set X-Content-Type-Options nosniff
      Header set Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:"

      Note: Replace WebSiteA with the relevant name
    13. Create using VI the file /www/WebSiteA/wordpress/wp-content/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteA with the relevant name
    14. Create using VI the file /www/WebSiteA/wordpress/wp-includes/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteA with the relevant name
    15. Set ownership and permissions on the .htaccess files below:
      chown apache:apache /www/WebSiteA/wordpress/.htaccess
      chown apache:apache /www/WebSiteA/wordpress/wp-content/.htaccess
      chown apache:apache /www/WebSiteA/wordpress/wp-includes/.htaccess
      chmod 644 /www/WebSiteA/wordpress/.htaccess
      chmod 644 /www/WebSiteA/wordpress/wp-content/.htaccess
      chmod 644 /www/WebSiteA/wordpress/wp-includes/.htaccess

      Note: Replace WebSiteA with the relevant name
    16. Create using VI the file /www/WebSiteB/wordpress/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
      Header set X-XSS-Protection "1; mode=block"
      Header set X-Content-Type-Options nosniff
      Header set Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:"

      Note: Replace WebSiteB with the relevant name
    17. Create using VI the file /www/WebSiteB/wordpress/wp-content/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteB with the relevant name
    18. Create using VI the file /www/WebSiteB/wordpress/wp-includes/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteB with the relevant name
    19. Set ownership and permissions on the .htaccess files below:
      chown apache:apache /www/WebSiteB/wordpress/.htaccess
      chown apache:apache /www/WebSiteB/wordpress/wp-content/.htaccess
      chown apache:apache /www/WebSiteB/wordpress/wp-includes/.htaccess
      chmod 644 /www/WebSiteB/wordpress/.htaccess
      chmod 644 /www/WebSiteB/wordpress/wp-content/.htaccess
      chmod 644 /www/WebSiteB/wordpress/wp-includes/.htaccess

      Note: Replace WebSiteB with the relevant name
    20. Remove default content from the first WordPress site:
      rm -f /www/WebSiteA/wordpress/license.txt
      rm -f /www/WebSiteA/wordpress/readme.html
      rm -f /www/WebSiteA/wordpress/wp-config-sample.php
      rm -f /www/WebSiteA/wordpress/wp-content/plugins/hello.php
    21. Remove default content from the second WordPress site:
      rm -f /www/WebSiteB/wordpress/license.txt
      rm -f /www/WebSiteB/wordpress/readme.html
      rm -f /www/WebSiteB/wordpress/wp-config-sample.php
      rm -f /www/WebSiteB/wordpress/wp-content/plugins/hello.php
    22. Edit using VI the file /etc/httpd/sites-available/websitea.com.conf
      Replace the value of the string, from:
      DocumentRoot /www/WebSiteA
      To:
      DocumentRoot /www/WebSiteA/wordpress
      Replace the value of the string, from:
      <Directory /www/WebSiteA>
      To:
      <Directory /www/WebSiteA/wordpress>
      Note: Replace WebSiteA with the relevant name
    23. Edit using VI the file /etc/httpd/sites-available/websiteb.com.conf
      Replace the value of the string, from:
      DocumentRoot /www/WebSiteB
      To:
      DocumentRoot /www/WebSiteB/wordpress
      Replace the value of the string, from:
      <Directory /www/WebSiteB>
      To:
      <Directory /www/WebSiteB/wordpress>
      Note: Replace WebSiteB with the relevant name
    24. Restart the Apache service:
      systemctl restart httpd.service
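      Optional check (a sketch): validate the configuration syntax before the restart, and confirm afterwards that the custom headers from the .htaccess files are being served (this assumes mod_headers is loaded, which is the default on CentOS/RHEL):
      httpd -t
      curl -sI http://www.websitea.com/ | grep -Ei 'x-xss-protection|x-content-type-options|content-security-policy'

      Note: Replace websitea.com with the relevant DNS name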
    25. Open a web browser from a client machine, and enter the URL below:
      http://Server_FQDN/wp-admin/install.php
      Note: Replace Server_FQDN with the relevant DNS name
    26. Select language and click Continue
    27. Specify the following information:
        Site Title
        Username – replace the default “admin”
        Password
        E-mail
    28. Click on “Install WordPress” button, and close the web browser.
    29. Change ownership and permissions on the files and folders below:
      chown -R apache:apache /www/WebSiteA/wordpress
      find /www/WebSiteA/wordpress/ -type d -exec chmod 755 {} \;
      find /www/WebSiteA/wordpress/ -type f -exec chmod 644 {} \;
      chmod 400 /www/WebSiteA/wordpress/wp-config.php
      chown apache:apache /www/WebSiteA/config.php
      chmod 644 /www/WebSiteA/config.php

      Note: Replace WebSiteA with the relevant name
    30. Change ownership and permissions on the files and folders below:
      chown -R apache:apache /www/WebSiteB/wordpress
      find /www/WebSiteB/wordpress/ -type d -exec chmod 755 {} \;
      find /www/WebSiteB/wordpress/ -type f -exec chmod 644 {} \;
      chmod 400 /www/WebSiteB/wordpress/wp-config.php
      chown apache:apache /www/WebSiteB/config.php
      chmod 644 /www/WebSiteB/config.php

      Note: Replace WebSiteB with the relevant name
    31. Download “WordPress Firewall” plugin from:
      http://www.seoegghead.com/software/wordpress-firewall.seo
    32. Copy the “WordPress Firewall” plugin file “wordpress-firewall.php” using PSCP (or SCP) into /www/WebSiteA/wordpress/wp-content/plugins
      Note: Replace WebSiteA with the relevant name
    33. Copy the “WordPress Firewall” plugin file “wordpress-firewall.php” using PSCP (or SCP) into /www/WebSiteB/wordpress/wp-content/plugins
    34. Open a web browser from a client machine, and enter the URL below:
      http://Server_FQDN/wp-login.php
      Note: Replace Server_FQDN with the relevant DNS name
    35. From WordPress dashboard, click on “settings” -> make sure that “Anyone can register” is left unchecked -> put a new value inside the “Tagline” field -> click on “Save changes”.
    36. From the left pane, click on Plugins -> Add New -> search, install and activate the following plugins:
        Acunetix WP Security
        Antispam Bee
        WP Limit Login Attempts
        Login LockDown
        WP Security Audit Log
    37. From the list of installed plugins, locate and activate the Firewall plugin
    38. From the upper pane, click on “Log Out”.
    39. Delete the file wp-admin/install.php from each site’s wordpress folder, as shown below.
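      Assuming the paths used throughout this guide, the following commands accomplish this:
      rm -f /www/WebSiteA/wordpress/wp-admin/install.php
      rm -f /www/WebSiteB/wordpress/wp-admin/install.php

      Note 1: Replace WebSiteA with the relevant name
      Note 2: Replace WebSiteB with the relevant name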
      SSL Configuration Phase
    1. Login using privileged account
    2. To add support for SSL certificates, run the command below:
      yum install mod_ssl -y
    3. Run the command below to change the permissions on the certificates folder:
      chmod 700 /etc/pki/CA/private
    4. Run the command below to generate a key pair for the first WordPress site:
      openssl genrsa -des3 -out /etc/pki/CA/private/websitea-server.key 2048
      Note 1: Specify a complex pass phrase for the private key (and document it)
      Note 2: Replace websitea with the relevant name
    5. Run the command below to generate a key pair for the second WordPress site:
      openssl genrsa -des3 -out /etc/pki/CA/private/websiteb-server.key 2048
      Note 1: Specify a complex pass phrase for the private key (and document it)
      Note 2: Replace websiteb with the relevant name
    6. Run the command below to generate the CSR for the first WordPress site:
      openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout /etc/pki/CA/private/websitea-server.key -out /tmp/websitea-apache.csr
      Note 1: The command above should be written as one line.
      Note 2: The -nodes flag re-creates the private key from step 4 without a pass phrase, so the Apache service can start without prompting for it.
      Note 3: Replace websitea with the relevant name
    7. Run the command below to generate the CSR for the second WordPress site:
      openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout /etc/pki/CA/private/websiteb-server.key -out /tmp/websiteb-apache.csr
      Note 1: The command above should be written as one line.
      Note 2: The -nodes flag re-creates the private key from step 5 without a pass phrase, so the Apache service can start without prompting for it.
      Note 3: Replace websiteb with the relevant name
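      The CSRs are normally submitted to a certificate authority, and the signed certificates are saved to the paths referenced in the next steps (/etc/ssl/certs/websitea.crt and /etc/ssl/certs/websiteb.crt). For a test environment only, a self-signed certificate can be generated instead (a sketch):
      openssl x509 -req -days 365 -sha256 -in /tmp/websitea-apache.csr -signkey /etc/pki/CA/private/websitea-server.key -out /etc/ssl/certs/websitea.crt
      Note: Replace websitea with the relevant name, and repeat for the second site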
    8. Edit using VI the file /etc/httpd/sites-available/websitea.com.conf and add the following:
      <VirtualHost *:443>
      ServerAdmin admin@websitea.com
      ServerName www.websitea.com
      ServerAlias websitea.com
      DocumentRoot /www/WebSiteA/wordpress
      <Directory />
      Options FollowSymLinks
      AllowOverride None
      </Directory>
      <Directory /www/WebSiteA/wordpress>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride all
      Require all granted
      Order allow,deny
      Allow from all
      <LimitExcept GET POST>
      deny from all
      </LimitExcept>
      </Directory>
      SSLCertificateFile /etc/ssl/certs/websitea.crt
      SSLCertificateKeyFile /etc/pki/CA/private/websitea-server.key
      SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS:!aNULL:!EDH:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
      SSLHonorCipherOrder On
      # Disable SSLv2 and SSLv3
      SSLProtocol ALL -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
      # Disable SSL Compression
      SSLCompression Off
      SSLEngine on
      ErrorLog /www/WebSiteA/logs/ssl_error.log
      CustomLog /www/WebSiteA/logs/ssl_access.log combined
      </VirtualHost>

      Note: Replace WebSiteA with the relevant name
    9. Edit using VI the file /etc/httpd/sites-available/websiteb.com.conf and add the following:
      <VirtualHost *:443>
      ServerAdmin admin@websiteb.com
      ServerName www.websiteb.com
      ServerAlias websiteb.com
      DocumentRoot /www/WebSiteB/wordpress
      <Directory />
      Options FollowSymLinks
      AllowOverride None
      </Directory>
      <Directory /www/WebSiteB/wordpress>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride all
      Require all granted
      Order allow,deny
      Allow from all
      <LimitExcept GET POST>
      deny from all
      </LimitExcept>
      </Directory>
      SSLCertificateFile /etc/ssl/certs/websiteb.crt
      SSLCertificateKeyFile /etc/pki/CA/private/websiteb-server.key
      SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS:!aNULL:!EDH:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
      SSLHonorCipherOrder On
      # Disable SSLv2 and SSLv3
      SSLProtocol ALL -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
      # Disable SSL Compression
      SSLCompression Off
      SSLEngine on
      ErrorLog /www/WebSiteB/logs/ssl_error.log
      CustomLog /www/WebSiteB/logs/ssl_access.log combined
      </VirtualHost>

      Note: Replace WebSiteB with the relevant name
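      Optionally (a sketch, not part of the original procedure), plain HTTP requests can be redirected to HTTPS by adjusting each site’s port 80 VirtualHost, for example:
      <VirtualHost *:80>
      ServerName www.websitea.com
      ServerAlias websitea.com
      Redirect permanent / https://www.websitea.com/
      </VirtualHost>

      Note: Replace websitea with the relevant name, and repeat for the second site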
    10. Edit using VI the file /etc/httpd/conf.d/ssl.conf and comment out the following directives:
      <VirtualHost _default_:443>
      ErrorLog logs/ssl_error_log
      TransferLog logs/ssl_access_log
      LogLevel warn
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3
      SSLCipherSuite HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA
      SSLCertificateFile
      SSLCertificateKeyFile
    11. Restart the Apache service by running the command below:
      systemctl restart httpd
    12. Run the commands below to enable the HTTPS rule on the firewall:
      firewall-cmd --zone=public --add-service=https --permanent
      systemctl restart firewalld
    13. Run the command below to change the permissions on the certificates folder:
      chmod 600 /etc/pki/CA/private
    14. In case the server was configured with an SSL certificate, add the following line to the /www/WebSiteA/config.php file:
      define('FORCE_SSL_ADMIN', true);
      Note 1: FORCE_SSL_ADMIN forces SSL for the login and admin pages (it replaces the FORCE_SSL_LOGIN constant, which was deprecated in WordPress 4.0).
      Note 2: Replace WebSiteA with the relevant name
    15. In case the server was configured with an SSL certificate, add the following line to the /www/WebSiteB/config.php file:
      define('FORCE_SSL_ADMIN', true);
      Note 1: FORCE_SSL_ADMIN forces SSL for the login and admin pages (it replaces the FORCE_SSL_LOGIN constant, which was deprecated in WordPress 4.0).
      Note 2: Replace WebSiteB with the relevant name
      WordPress upgrade process
    1. Run the commands below to change the SELinux permissions:
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wordpress(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wordpress
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wordpress

      Note: Replace WebSiteA with the relevant name
    2. Login to the WordPress admin portal:
      http://Server_FQDN/wp-login.php
      Note: Replace Server_FQDN with the relevant DNS name
    3. When prompted, select the option to upgrade WordPress
    4. Once the upgrade process completes successfully, log off the WordPress admin portal
    5. Run the commands below to change the SELinux permissions:
      semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteA/wordpress(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wordpress
      chcon -R -t httpd_sys_content_t /www/WebSiteA/wordpress
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wordpress/wp-content(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wordpress/wp-content
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wordpress/wp-content

      Note: Replace WebSiteA with the relevant name
    6. Log off the SSH console
      Check your site using the following test sites:
      • https://www.ssllabs.com/ssltest/
      • https://dnsflagday.net/
      • https://securityheaders.com/
      • https://search.google.com/test/mobile-friendly

    Why should SMBs deploy Software Defined Backup Solutions

    You might be thinking, shouldn’t the question be: why should SMBs (Small to Medium-sized Businesses) deploy backup solutions at all? Well, we’re going with the assumption that you’ve already accepted that backup solutions, whether cloud-based backups or on-premises backup appliances, are very important for SMBs. If you’re still dubious about the significance of data backup solutions for SMBs, then allow me to shed some brief light on the subject before venturing into the main topic.

    Why should SMBs deploy backup solutions

    If you Google “data loss cost implications for SMBs”, you’ll find a ton of statistics showing how costly data loss can be. One such statistic, for your reference, comes from a report by the National Archives and Records Administration in Washington. It states that 93% of businesses that lost their data for 10 days or more filed for bankruptcy within one year, and 50% filed for bankruptcy immediately. That’s how devastating data loss can be, and that’s exactly why SMBs need to deploy backup solutions.

    With that established, let’s move to the main point of interest here: why should SMBs focus on software defined backup solutions rather than physical infrastructure or backup appliances?

    Software Defined Backup Solutions – Simpler, Easier, Less Costly and Reliable

    I think software defined backup solutions are the best choice for SMBs. Backup appliances certainly have their perks. However, from an SMB’s perspective, it’s better to look elsewhere.

    Why do I say that software based backup is the best option for SMBs? Here are some of my reasons:

    Software based Backups are Simpler and deliver faster deployment

    Consider the process of acquiring a backup appliance. You have to order it and wait for it to be delivered. Once it arrives, you have to work through a detailed setup manual. After you make your way through all of that, the appliance is finally running, and it starts consuming resources: power, cooling, maintenance and so on. You have to make sure there’s a professional keeping an eye on it, and that it’s safe from physical harm as well. A backup appliance can be quite delicate; if someone accidentally knocks it to the ground, the impact will be on your pocket.

    Suffice it to say, backup appliances take quite a bit of time to acquire and deploy. Backup appliances may be essential for large enterprises, but SMBs can manage with software defined backup solutions instead.

    To set up a software defined backup solution, you just go through a series of purchase procedures that are far simpler and less time-consuming than acquiring a physical backup appliance. Once you conclude the payment procedure, you can be up and running within an hour or two. It’s easier, it’s simpler and it’s a lot faster.

    Software based Backup Solutions are Cost Effective

    As an SMB, cost is a major thing to consider in all matters. Software Defined Backup solutions are very economical. Compared to the initial cost of backup appliances, software defined backup solutions cost a lot less.

    As mentioned earlier, backup appliances incur quite a few recurring costs, such as maintenance, power and cooling; software defined backup solutions do not. You can begin with the storage capacity you immediately need and increase it as you go. Payment models like pay-as-you-go make the cost attribute of software defined backup solutions a lot more appealing.

    Software Based Backup Solutions offer Enhanced Data Security

    Let’s say the backup appliance is accidentally damaged or experiences hardware failure; all of the data within it may never be recovered. If the appliance goes down, all the backed-up data goes down with it. And how common are accidents at a workplace? Something as simple as spilled coffee is sometimes all it takes to start a disaster.

    However, you don’t have to worry about such things with a software defined backup solution that keeps all your backed-up data in the cloud. With cloud backups, the security of your backed-up data is in the hands of the cloud service provider. These three reasons are only the gist of why businesses should set up software defined backup solutions that simplify backup to the cloud. By effectively leveraging cloud technology, SMBs can ensure that their data is well protected and recoverable.

    This article was written by George Williams. George is a Senior Technical Content Marketer at StoneFly Inc.

    Why Millennials Are Blasé About Privacy

    Millennials don’t seem to care that Facebook and other companies harvest their data for profit. At least that’s the premise of a recent opinion piece in the New York Post. It suggests that millennials are resigned to the fact that, in order to have the many advantages the new tech world provides, there has to be a sacrifice. If you are a millennial, I would be interested in your reaction to this premise and the others that follow.

    Millennials seem more comfortable with the notion that if a product is free then you are the product, and allow themselves to be an “open book” for all to see. As will be revealed later, the opinion piece argues that this is not true of previous generations, who appear to be more guarded with their privacy. Of course, previous generations had fewer threats to their privacy, along with markedly less access to information, entertainment and communication (just to name a few).

    So it is not necessarily fair to single out millennials as if they were some alien outliers, although, like aliens, they come from and live in a different world than their predecessors. After all, book burning was non-existent before Gutenberg’s printing press, and there was no need for fallout shelters until the world went nuclear. In fact, you could make a case that the dangerous, crazy world millennials now inherit makes the exposure of their personal information to the public seem tame by comparison. Not to mention that heavy engagement with social media and the like is a needed distraction from modern life!

    Besides, no one would have guessed some fifteen years ago that Mark Zuckerberg’s dorm room doodle would morph into the behemoth of a business model it is today – replete with its invasive algorithms. Who could have imagined that social media companies could learn our political leanings, our likes and dislikes, our religious affiliations, and our sexual orientations and proclivities?  If I, or some other legal or law enforcement entity want to retrace my activities on a given day – that is easily and readily accessible from my smartphone.

    As millennials blithely roll over to the tech gods when it comes to filleting themselves publicly, the article takes them (and others) to task for handwringing and breathlessly expressing surprise and outrage at Cambridge Analytica for simply working within the leeway given to it. Of course, if the company had helped Hillary Clinton win the White House instead of purportedly boosting the prospects of the odious ogre, Trump, there likely wouldn’t have been the same angst – or so the piece posits.

    Be that as it may, the question must be asked: what did Cambridge Analytica do that countless other companies haven’t done? I mean, why should it be treated any differently by Facebook because it’s a political firm and not an avaricious advertising scavenger? The other Silicon Valley savants – Google, Apple, and Microsoft – all monetize your information. They are eager to invite advertisers, researchers, and government agencies to discover your treasure trove of personal information through them.

    And millennials, as well as those of other generations, are only too willing, it seems, to provide such information – and in massive amounts. Indeed, they seem to relish a race to the bottom over who can post the most content, photos and the like, and they seem ambivalent about the inevitable fallout. “So what?” they say, “I’ve got nothing to hide.”

    The article questions whether those of previous generations would have been so forthcoming, suggesting that the so-called Greatest Generation would have eschewed the telephone if it had meant the government could eavesdrop on their conversations with impunity. Millennials, by contrast, appear to view the lack of privacy and the co-opting of personal information as the price for the plethora of pleasures that the digital medium supplies.

    As Wired magazine founder Kevin Kelly said in his 2016 book, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future:

    If today’s social media has taught us anything about ourselves as a species, it is that the human impulse to share overwhelms the human impulse for privacy.

    What do you think? Is it a fair assessment of the current state of affairs?

    This article was originally published at BestVPN.com.

    Your Internet Privacy Is at Risk, But You Can Salvage It All

    In what has to be the most ironic turn of events, companies collectively pay cybersecurity experts billions of dollars every year to keep their businesses safe and away from prying eyes. Once they have attained the level of security and privacy they want, they turn around and infringe upon the privacy of the people.

    This has been the model many of them have operated under for a while now, and they don’t seem to be slowing down anytime soon. We would say the government should fight against this, but not when it has a hand in the pie too.

    In this piece, we discuss the various ways these organizations have been shredding your privacy to bits, what the future of internet privacy is shaping up to be, and how to take back control of your own data privacy.

    How Your Internet Privacy Is Being Violated

    A lot of the simple operations you perform on the internet every day mean more to some data collectors than you know. In fact, they are what these collectors use to decide what to offer you, what to hold back from you and so much more.

    Going by the available technology, here are some of the frameworks that allow the collection and use of your information.

    • Big data analytics: When you hear “big data”, what is being referred to is a large body of data meant for later analysis. Via a combination of efforts from humans, bots and algorithms, this data is sifted through with the sole aim of finding patterns, trends and behaviors. These are then mapped to individuals or demographics and used to build predictive models for the future.

    • Internet of Things: Everyday things (such as your printer, refrigerator, lights, AC unit and much more) can now be connected to the internet, enabling them to work with one another for interoperability and remote access. Thus, you can view live video feeds of your home from anywhere in the world, or have your refrigerator tell you which supplies you are running low on. What you don’t know is that as your IoT units collect this data, they are not just feeding it to you; they are also sending back a lot more than you realize to the companies that developed them.

    • Machine learning: Machines were taught to do more than just be machines. Instead of being given a fixed set of commands to run, they are now given specialized instructions that aid their learning. These machines are then released into data sets to practice their newfound freedom. Guess what they do? They mine data from almost any source they can lay hands on, rigorously analyze it and use it to diverse ends.

    The Significance of These Data Collections

    All of the above might look like normal procedures for achieving the purposes these technologies were built for. However, they can cause more harm than good.

    For one, companies use these datasets to pigeonhole consumers.

    As of this writing, machine learning data is already being used by some credit card companies to determine who should be allowed a credit card and who shouldn’t. It is even more ridiculous to think this decision can be based on something as simple as the type of car accessory a customer would opt for.

    As if that were not enough, machine learning is approaching the point where it will soon be able to diagnose diseases. That is not even the scary part: the diagnosis will be based on social media data.

    The companies don’t even need to see you physically to know what diseases you might have. Talk about prodding into the most sensitive areas of your life, ones you might not even have shared with family and friends.

    That, and we have not even talked about how marketers will chase you around with ads based on what you’ve searched for, push suggested content based on your patterns, and keep you from ever seeing outside the box.

    Putting an End to The Nonsense

    You don’t have to put up with all of this. Technology is meant to be enjoyed, and you shouldn’t be punished for using it by being exploited.

    One helpful tip is to layer all your connections over a VPN. This will help make you anonymous on the network, preventing data collectors and monitors from tying your data stream to your computer.

    Your IoT devices will also benefit from a VPN if you can share one over your router. This helps make them more secure, since they cannot use a VPN otherwise.

    Private browser networks such as Tor will do you a lot of good when browsing the web. If you prefer conventional browsers, though, don’t forget to install an ad blocker before you go online again. Ad blockers help prevent marketers and companies from tracking you around the web after you have looked at some content.

    Don’t forget to switch from Google to search engines that leave your data with you. DuckDuckGo and Qwant are some of the options on this list.

    Combine all of that, and you have a shot at getting your decency and privacy back on the internet.

    Cloud Services – Evolution and Future Trends

    Cloud services are no longer a buzzword; they are an established fact. Small and large organizations have been leading this revolution in the IT industry for almost a decade, some migrating small environments and conducting experiments, others deploying their entire production environments using this model.

    It is common to think of cloud services as a continuation of the data center environment, and this is in fact where the concept evolved. And yet, traditional computing services, server farms, storage and even virtualization are fundamentally different from a true cloud. Let’s evaluate the differences:

    In the past, we called any compute service a “cloud” if it had the following characteristics, as defined by NIST:

    • On-Demand Self-Service
    • Broad Network Access
    • Resource Pooling
    • Rapid Elasticity
    • Measured Service

    When looking deeper into the various cloud service models, such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service), we find that things are not always black or white: in some cases we come across services that we know for a fact are cloud services, yet we can’t say they include all of the above characteristics.

    A good example is file sharing services such as Dropbox. As private customers, we are not exposed to data that would enable us to measure the service (in terms of performance, or of billing versus storage capacity cost).

    If we choose to deploy a “private cloud” inside our organizational data center, based on infrastructure such as VMWARE, OpenStack or the like, we expect all of the above characteristics in our on-premises environment as well.

     

    Let’s differentiate between cloud and hosting services

    In the current IT industry, many companies offer compute services on a spectrum between cloud services and hosting services.

    Hosting companies (or managed service providers) usually offer the customer the following capabilities:

    • Compute environments – Such as physical servers (in case of special hardware requirements), virtual servers, storage and network equipment (Routers, Firewalls, VPN Gateway, etc.)
    • Managed services – Such as web hosting for marketing or commercial web sites, email services, file sharing services and organizational systems such as CRM as a service.
    • Backup and DR as a service.
    • Managed support/IT services.

    Hosting companies might offer the customer the ability to increase the number of servers, and in some cases even to choose servers in data centers abroad (in case we would like to keep the data/servers close to the end customer).

    For SMBs or enterprise organizations making their first move to the cloud, or for customers who wish to outsource their IT services to external providers, there isn’t much of a difference between choosing a hosting/managed service and choosing a public cloud service.

    The differences between hosting and cloud services begin when trying to deploy entire environments in the cloud based on architectures that emphasize service and platform (SaaS and PaaS) rather than infrastructure as a service (IaaS).

    In this configuration, systems are built around dynamic scaling capabilities: environments are deployed for a short time, and servers and infrastructure are raised for a specific purpose and cease to exist a few minutes after the entire process completes.

    This model is called “Cloud Native Applications”. It allows us to avoid committing to a pre-defined amount of infrastructure, specific task management, compatibility work, server health checks and so on; what is the role of each server or platform if it can be destroyed within a second? In this model the infrastructure is not important, only the service the system is meant to provide.

    Unlike hard-coded infrastructure management, there is a new concept: “Infrastructure as Code”. Environments are written as “recipes”, sent to the cloud provider using APIs, and created instantly and on the fly, as the sketch below illustrates.
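    As a small illustration (a sketch only; the AMI ID and tag values are hypothetical), such a “recipe” can be as simple as an API call that creates tagged, disposable servers on the fly:

    aws ec2 run-instances \
      --image-id ami-0abcdef1234567890 \
      --instance-type t3.micro \
      --count 2 \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Environment,Value=Dev},{Key=Project,Value=Demo}]'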

    A few examples of the efficiency of this model: a large American service provider destroys its entire IT environment in the cloud and deploys an up-to-date mirror environment within a few minutes, instead of updating each and every server. A different American service provider automatically increases the number of servers in advance of peak hours, based on application monitoring, and after peak hours all the new servers vanish.

    This change is not magic, but the result of planning systems and applications efficiently for the cloud, training dedicated teams with the relevant capabilities, and understanding the cloud provider’s services, billing, prioritization and the constant changes in the management interfaces.

     

    Process of migrating systems to the public cloud

    Infrastructure as a Service (IaaS) allows organizations to perform a “Lift & Shift” (copying existing systems to the cloud with minor changes) from the on-premises environment to the public cloud as part of their migration to the cloud.

    Most organizations quickly find out that the “Lift & Shift” strategy is indeed easier as a first stage, but in the long term it is a poor economic decision, carrying the same challenges organizations struggle with today: wasted hardware resources, un-optimized operating systems and code, distributed management difficulties, etc.

    At later stages, organizations that migrated systems to the public cloud begin tuning their cloud environments: they measure resource usage on their virtual servers and adopt the VM instance type that fits actual use in terms of CPU, memory and storage.

    An AWS presentation illustrates the cost evolution organizations go through when migrating to the public cloud.

    The future is here

    Today, startup companies and enterprise organizations develop applications for the cloud agnostically to the hardware or infrastructure underneath.

    In the past, organizations migrated from developing on physical servers to virtual servers; today, they are migrating from monolithic applications to micro-service based applications, and even to Serverless applications (code running inside a managed compute environment, without the need to manage infrastructure resources such as servers and storage).

    These changes in the development world are crucial to understanding what a cloud service is. Today we rely less on the NIST definition, which providers offering Infrastructure as a Service (like most hosting providers) can satisfy; instead, a cloud service is characterized by the following capabilities:

    • A collection of APIs
    • Billing for services/resources based on their actual usage
    • Services managed using APIs (such as the ability to provision, decommission, start/stop, etc.)

     

    The bottom line

    Today there are many providers who wrap VMWARE infrastructure with a friendly user interface, allowing the customer to choose the VM type (in terms of CPU/memory) and the number of servers to consume; but this does not scale well enough, and it doesn’t give the customer the flexibility to scale up or down by hundreds of servers automatically, within seconds, across multiple geographical regions.

    A cloud provider that supports “Cloud Native Applications” enables the customer to connect its automation mechanisms and its build and deployment processes to the cloud provider’s infrastructure using APIs, in order to provision and change compute environments, deploy micro-services based systems, and even deploy and run Serverless infrastructure.

    The next time you are considering a service provider, whether a hosting provider (sophisticated as it might be) or a public cloud provider capable of running hundreds of servers across multiple geographic regions within minutes, hands-free, supporting micro-services and Serverless applications, with API connectivity, simply present the service provider with your requirements and choose the most suitable one.

     

     

    This article was written by Eyal Estrin, cloud security architect and Vitaly Unic, application security architect.