
What makes a good cloud architect?

Virtually any organization active in the public cloud needs at least one cloud architect who can see the big picture and help design solutions.

So, what makes a cloud architect a good cloud architect?

In a word – multidisciplinary.

Customer-Oriented

While the position requires good technical skills, a good cloud architect must also have good customer-facing skills. A cloud architect needs to understand the business needs of everyone from the end users (usually connecting from the Internet) to the technology teams. That means being able to speak many “languages,” translating from one to another while navigating the delicate nuances of each. All in the same conversation.

At the end of the day, the technology is just a means to serve your customers.

Sometimes a customer may ask for something not technical at all (“Draw me a sheep…”) and sometimes for something very technical (“I want to expose an API that allows reading and updating a backend database”).

A good cloud architect knows how to turn a drawing of a sheep into a full-blown architecture diagram, complete with components, protocols, and more. In other words, a good cloud architect translates a business or customer requirement into a technical requirement.

Technical Skills

Here are a few of the technical skills good cloud architects should have under their belts.

  • Operating systems – Know how to deploy and troubleshoot problems related to virtual machines, based on both Windows and Linux.
  • Cloud services – Be familiar with at least one public cloud provider’s services (such as AWS, Azure, GCP, Oracle Cloud, etc.). Better still, be familiar with at least two, since the world is heading toward multi-cloud environments.
  • Networking – Be familiar with network-related concepts such as OSI model, TCP/IP, IP and subnetting, ACLs, HTTP, routing, DNS, etc.
  • Storage – Be familiar with storage-related concepts such as object storage, block storage, file storage, snapshots, SMB, NFS, etc.
  • Database – Be familiar with database-related concepts such as relational database, NoSQL database, etc.
  • Architecture – Be familiar with concepts such as three-tier architecture, micro-services, serverless, twelve-factor app, API, etc.
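Several of the skills above lend themselves to quick experiments. For the networking bullet, for example, subnetting can be explored directly with Python’s standard library (a minimal sketch; the CIDR ranges are arbitrary examples):

```python
import ipaddress

# Example: carve a VPC-sized CIDR block into four equal subnets,
# a routine task when planning cloud network address space.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=18))

for net in subnets:
    # num_addresses counts every address, including network and broadcast
    print(net, "-", net.num_addresses, "addresses")
```

Running this lists 10.0.0.0/18 through 10.0.192.0/18, with 16,384 addresses each.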

Information Security

A good cloud architect can read an architecture diagram and knows which questions to ask and which security controls to embed inside a given solution.

  • Identity management – Be familiar with concepts such as directory services, Identity and access management (IAM), Active Directory, Kerberos, SAML, OAuth, federation, authentication, authorization, etc.
  • Auditing – Be familiar with concepts such as audit trail, access logs, configuration changes, etc.
  • Cryptography – Be familiar with concepts such as TLS, public key authentication, encryption in transit & at rest, tokenization, hashing algorithms, etc.
  • Application Security – Be familiar with concepts such as input validation, OWASP Top10, SDLC, SQL Injection, etc.
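For the cryptography bullet, the one-way nature of hashing is easy to demonstrate with Python’s standard library (a minimal illustration; the input string is arbitrary):

```python
import hashlib

# Hashing is one-way: the same input always yields the same digest,
# which is why password stores keep (salted) hashes, not passwords.
digest = hashlib.sha256(b"my-secret-value").hexdigest()
print(digest)

# A one-character change produces a completely different digest.
digest2 = hashlib.sha256(b"my-secret-valuE").hexdigest()
print(digest == digest2)  # False
```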

Laws, Regulation and Standards

In our dynamic world, a good cloud architect needs at least a basic understanding of the following topics:

  • Laws and Regulation – Be familiar with privacy regulations such as GDPR, CCPA, etc., and how they affect your organization’s cloud environments and products
  • Standards – Be familiar with standards such as ISO 27001 (Information Security Management), ISO 27017 (Cloud Security), ISO 27018 (Protection of PII in public clouds), ISO 27701 (Privacy), SOC 2, CSA Security Trust Assurance and Risk (STAR), etc.
  • Contractual agreements – Be able to read contracts between customers and public cloud providers, and know which topics need to appear in a typical contract (SLA, business continuity, etc.)

Code

Good cloud architects, like good DevOps engineers, are not afraid to get their hands dirty and are able to read and write code, mostly for automation purposes.

The required skills vary from scenario to scenario, but in most cases include:

  • CLI – Be able to run command-line tools, from querying existing environment settings to updating or deploying new components.
  • Scripting – Be familiar with at least one scripting language, such as PowerShell, Bash, Python, JavaScript, etc.
  • Infrastructure as Code – Be familiar with at least one declarative language, such as HashiCorp Terraform, AWS CloudFormation, Azure Resource Manager, Google Cloud Deployment Manager, Red Hat Ansible, etc.
  • Programming languages – Be familiar with at least one programming language, such as Java, Microsoft .NET, Ruby, etc.
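As a small taste of the scripting skill above, a common automation task is parsing the JSON that CLI tools emit. The inventory below is a hypothetical, heavily simplified stand-in for real CLI output (actual tools such as the AWS CLI return much richer structures):

```python
import json

# Hypothetical, simplified inventory, as a CLI tool might emit with --output json.
cli_output = """
{"instances": [
  {"id": "i-0001", "state": "running", "type": "t3.micro"},
  {"id": "i-0002", "state": "stopped", "type": "m5.large"},
  {"id": "i-0003", "state": "running", "type": "m5.large"}
]}
"""

data = json.loads(cli_output)
# Filter down to the running instances only.
running = [i["id"] for i in data["instances"] if i["state"] == "running"]
print(f"{len(running)} running instances: {running}")
```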

Sales

A good cloud architect needs to be able to “sell” a solution to various audiences. Again, the required skills vary from scenario to scenario.

Summary

Recruiting a good cloud architect is indeed challenging. The role requires multidisciplinary skills – from soft skills (being customer-oriented and a salesperson) to deep technical skills (technology, cloud services, information security, etc.)

There is no alternative to years of hands-on experience. The more areas of experience cloud architects have, the better they will succeed at the job.

References

  • What is a cloud architect? A vital role for success in the cloud – https://www.cio.com/article/3282794/what-is-a-cloud-architect-a-vital-role-for-success-in-the-cloud.html
  • Want to Become a Cloud Architect? Here’s How – https://www.businessnewsdaily.com/10767-how-to-become-a-cloud-architect.html

The Public Cloud is Coming to Your Local Data Center

For a long time, public cloud providers have given users (almost) unlimited access to compute resources (virtual servers, storage, databases, etc.) inside their end-to-end managed data centers. Recently, however, the need for local, on-premises solutions has been growing.

In scenarios where network latency is critical, or where sensitive or critical data must be stored inside a local data center, public cloud providers have built server racks that bring familiar cloud infrastructure – virtual servers, storage, and network equipment – on premises, while offering the same user interface and the same APIs for controlling components using a CLI or SDK.

Managing the lower infrastructure layers (monitoring of hardware/software/licenses and infrastructure updates) is done remotely by the public cloud providers, which, in some cases, requires constant inbound Internet connectivity.

This solution allows customers to enjoy all the benefits of the public cloud (minus the scale), transparently expand on-premises environments to the public cloud, continue storing and processing data inside local data centers as long as required, and, in cases where there is demand for large compute power, migrate environments (or deploy new environments) to the public cloud.

The solution is suitable for military and defense users, or organizations with large amounts of data sets which cannot be moved to the public cloud in a reasonable amount of time. Below is a comparison of three solutions currently available:

| Capability | Azure Stack Hub | AWS Outposts | Oracle Private Cloud at Customer |
| --- | --- | --- | --- |
| Work disconnected from the public cloud / Internet | Fully supported / Partially supported | Requires constant connectivity to a cloud region | Requires remote connectivity for Oracle support (infrastructure monitoring and software updates) |
| VM deployment | Fully supported | Fully supported | Fully supported |
| Containers / Kubernetes deployment | Fully supported | Fully supported | Fully supported |
| Local object storage | Fully supported | Will be supported in 2020 | Fully supported |
| Local block storage | Fully supported | Fully supported | Fully supported |
| Local managed database deployment | Fully supported (MySQL, PostgreSQL) | — | Fully supported (Oracle Database) |
| Local data analytics deployment | — | Fully supported (Amazon EMR) | — |
| Local load balancing services | Fully supported | Fully supported | Fully supported |
| Built-in VPN connectivity to the solution | Fully supported | — | — |
| Connectivity between the solution and on-premises resources | Fully supported | — | — |
| Built-in encryption at rest | Fully supported (Key Vault) | Fully supported (AWS KMS) | — |
| Maximum physical cores (per rack) | 100 | 96 | — |
| Maximum storage capacity (per rack) | 5TB | 55TB | 200TB |

(“—” = not specified)

Summary

The private cloud solutions noted here are not identical in terms of their capabilities. At least for the initial installation and support, a partner who specializes in this field is a must.

Support for the well-known services from public cloud environments (virtual servers, storage, database, etc.) will expand over time, as these solutions become more commonly used by organizations or hosting providers.

These solutions are not meant for every customer. However, they provide a suitable solution in scenarios where it is not possible to use the public cloud – for regulatory or military/defense reasons, for example – or when organizations are planning a long-term migration to the public cloud a few years in advance. Such plans can be driven by legacy applications not built for the cloud, network latency issues, or large data sets that cannot be copied to the cloud in a reasonable amount of time.

Cloud-native as the Future of Data Loss Prevention

Data loss prevention (DLP) is one of the most important tools that enterprises have to protect themselves from modern security threats like data exfiltration, data leakage, and other types of sensitive data and secrets exposure. Many organizations seem to understand this, with the DLP market expected to grow worldwide in the coming years. However, not all approaches to DLP are created equal. DLP solutions can vary in the scope of remediation options they provide as well as the security layers that they apply to. Traditionally, data loss prevention has been an on-premise or endpoint solution meant to enforce policies on devices connected over specific networks. As cloud adoption accelerates, though, the utility of these traditional approaches to DLP will substantially decrease.

Established data loss prevention solution providers have attempted to address these gaps with developments like endpoint DLP and cloud access security brokers (CASBs) which provide security teams with visibility of devices and programs running outside of their walls or sanctioned environments. While both solutions minimize security blind spots, at least relative to network layer and on-prem solutions, they can result in inconsistent enforcement. Endpoint DLPs, for example, do not provide visibility at the application layer, meaning that policy enforcement is limited to managing what programs and data are installed on a device. CASBs can be somewhat more sophisticated in determining what cloud applications are permissible on a device or network, but may still face similar shortfalls surrounding behavior and data within cloud applications.

Cloud adoption was expected to grow nearly 17% between 2019 and 2020; however, as more enterprises embrace cloud-first strategies for workforce management and business continuity during the COVID-19 pandemic, we’re likely to see even more aggressive cloud adoption. With more data in the cloud, the need for policy remediation and data visibility at the application layer will only increase and organizations will begin to seek cloud-native approaches to cloud security.

What is cloud-native data loss prevention?

The explosion of cloud technologies in the past decade has brought new architectural models for applications and computing systems. The concept of a cloud-native architecture, while not new, is a development that’s taken off in the last five years. But what exactly does cloud-native mean, and how can it apply to security products like data loss prevention (DLP)?

Cloud-native describes a growing class of platforms that are built in the cloud, for the cloud. True cloud-native data loss prevention is defined by the following features:

  • Agentless. Cloud-native DLP solutions aren’t deployed as software programs that require installation, rather they integrate with the applications they secure through APIs. This makes deployment easy and updates to such platforms effortless, without getting end-users or IT involved. 
  • API driven. Central to cloud-native data loss prevention is the API driven nature of such solutions. Connecting with cloud platforms via API means that visibility and security policies immediately apply at the application layer. API-driven solutions can derive platform-specific context & metadata, as well as provide granular, platform-specific actions, versus broad-brush blocking on the network.
  • Agnostic. True cloud-native solutions are platform, endpoint, and network agnostic in that they’re capable of integrating with cloud platforms quickly and can provide single pane of glass visibility across the cloud.
  • Automated. True cloud-native solutions don’t just provide visibility into the cloud, but help automate policies whenever possible. The sheer volume of data that moves through cloud systems combined with the always-on nature of cloud applications means that incidents can happen at any time and will require immediate remediation. Automation ensures that security teams can respond to these as quickly as possible.
  • Accurate. Finally, in order to help security teams process the massive amounts of data in the cloud, cloud-native DLP must be accurate. The accuracy of such platforms is often enabled by the same systems that make them automated — an effective use of machine learning that can quickly and accurately identify when business-critical data has been exposed.
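To make the “automated” and “accurate” points concrete, here is a toy content scanner. This is not how any particular vendor’s detection works – cloud-native DLP platforms rely on machine learning rather than bare regexes – but it illustrates the core idea of programmatically scanning application content for exposed secrets:

```python
import re

# Toy detectors; production DLP uses ML models and far richer patterns.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key_like": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan(text):
    """Return a list of (detector_name, matched_text) findings."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

message = "key AKIAABCDEFGHIJKLMNOP leaked in channel"
print(scan(message))
```

In a real cloud-native platform, a scan like this would run against message and file events delivered through each application’s API, with findings triggering automated remediation.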

What are the advantages of cloud-native DLP?

When you consider the capabilities listed above, cloud-native DLP is designed to help organizations get a handle on protecting the massive volumes of data moving in and out of data silos daily. With organizations understanding that the security of their data in the cloud is their responsibility, security teams are increasingly investing in tools designed to help them address visibility and policy blindspots. While it might be the case that cloud-native data loss prevention platforms aren’t the only security tools companies choose to invest in, it’s clear that they’ll be one of the most essential parts of their security toolkit.

About Nightfall

Nightfall is the industry’s first cloud-native DLP platform that discovers, classifies, and protects data via machine learning. Nightfall is designed to work with popular SaaS applications like Slack & GitHub as well as IaaS platforms like AWS. You can schedule a demo with us below to see the Nightfall platform in action.

This article was originally posted on Nightfall.ai.

How to Achieve Long Term Cost Savings Using Cloud Services

The relatively high cost of cloud computing resources, compared to on-premise solutions, is a major challenge for organizations migrating to public cloud services. In this post, we will review several available plans for long-term cost saving of compute resources.

Background

Pay-As-You-Go, or Pay on-Demand, is the most common option: you pay for actual usage when consuming cloud resources. This method is suitable when the required compute power is changing or unpredictable. Good examples are services migrated from on-premises to the public cloud (lift & shift), new environments (Dev/Test), and more.

The second most common pricing option is called Spot (Amazon EC2 Spot Instances, Azure Spot Virtual Machines, or Google Preemptible Virtual Machines). These options can deliver a discount of up to 90% and are best when there is a demand for large amounts of compute power and the service is not sensitive to disruptions: when the capacity is required by another paying customer, the cloud provider reclaims the machine, with only a 30-second to 2-minute notification. This method is suitable for image/video processing, batch processing, HPC services, etc.

Reserved Instance

This is the most common pricing option for saving costs. Users commit to one to three years of usage in advance, with a potential savings of up to 70%.

Reserved Instances are available with various payment methods:

  • “All Upfront” – you pay the entire server cost in advance for the entire commitment period
  • “Partial Upfront” – you pay part of the cost in advance and the remainder in monthly installments until the end of the commitment period
  • “No Upfront” – you pay nothing in advance, only a fixed monthly price until the end of the commitment period

There are also options for more flexibility with Reserved Instances. These include Standard RIs, where you commit to a certain instance configuration (instance family, operating system, payment method, etc.), and Convertible RIs, which allow you to change the instance configuration (instance family, operating system, etc.) during the commitment period.
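The commitment trade-off is easy to quantify. The numbers below are made up for illustration and are not any provider’s actual prices:

```python
# Illustrative numbers only; real prices vary by instance type and region.
on_demand_hourly = 0.10        # hypothetical pay-as-you-go rate, $/hour
ri_discount = 0.40             # hypothetical Reserved Instance discount
hours_per_year = 24 * 365

on_demand_yearly = on_demand_hourly * hours_per_year
reserved_yearly = on_demand_yearly * (1 - ri_discount)
print(f"On-demand: ${on_demand_yearly:,.2f}/year, "
      f"reserved: ${reserved_yearly:,.2f}/year")

# A commitment only pays off if the server actually runs enough hours.
break_even_hours = reserved_yearly / on_demand_hourly
print(f"Break-even: {break_even_hours:,.0f} hours/year "
      f"({break_even_hours / hours_per_year:.0%} utilization)")
```

With these made-up numbers, a server that runs less than about 60% of the year would be cheaper on demand – which is why understanding your actual usage pattern comes first.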

Additional information about Reserved Instance options can be found in each cloud provider’s pricing documentation.

It is important to note that these cost-saving options are not limited to virtual servers. It is possible to purchase a commitment for managed services, such as Amazon RDS Reserved Instances, Azure SQL Database reserved capacity, Azure Blob storage reserved capacity, and more.

AWS Savings Plans

AWS Savings Plans are a flexible pricing option, similar to Reserved Instances, with discounts of up to 72%.

These plans include two alternatives:

  • Compute Savings Plans – Commit to a given amount of compute consumption in advance, with the flexibility to choose, and change, instance family, instance size (ratio between CPU/memory), region, availability zone, and operating system. Compute Savings Plans cover compute resources from virtual machines (EC2 instances), through AWS Fargate, up to AWS Lambda.
  • EC2 Instance Savings Plans – Save on virtual server costs. This plan is limited to virtual servers from a specific instance family in a specific region, but it is still possible to change instance size (ratio between CPU/memory), availability zone, and operating system.

Additional information can be found at: https://aws.amazon.com/savingsplans/faq/

Google Sustained Use Discounts

This plan encourages long-term use of Google compute resources, such as virtual servers or Google Kubernetes Engine, by rewarding any resource that runs for more than 25% of the month. It grants an automatic discount of between 20% and 30% off the list price. No action needs to be taken; the discount is applied automatically once the plan’s minimum consumption level is reached.
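The shape of this incentive can be sketched with made-up numbers. Google’s real sustained-use discount uses incremental usage tiers (see the documentation for the actual mechanics); the threshold matches the 25%-of-month figure above, but the flat discount here is purely illustrative:

```python
# Illustrative sketch only – the real discount is tiered, not flat.
def monthly_cost(list_price_per_hour, hours_used, hours_in_month=730,
                 threshold=0.25, flat_discount=0.25):
    """Apply a made-up flat discount once usage passes the threshold."""
    usage_fraction = hours_used / hours_in_month
    base = list_price_per_hour * hours_used
    if usage_fraction > threshold:
        return base * (1 - flat_discount)
    return base

print(monthly_cost(0.10, 100))  # below 25% of the month: no discount
print(monthly_cost(0.10, 730))  # full month: discount applied
```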

Additional information can be found at: https://cloud.google.com/compute/docs/sustained-use-discounts

Conclusion

The first step toward enjoying long-term cost savings is understanding your compute demands. Studying up and staying up to date on vendors’ various pricing plans and options, then matching those to your needs and environments, is the key to achieving the most cost-effective public cloud solution.

Best Practices for Deploying New Environments in the Cloud for the First Time

When organizations take their first steps with public cloud services, they tend to focus on a single, immediate target.

My recommendation – think scale!

Plan a couple of steps ahead. Instead of looking at a single server that serves just a few customers, think about a large environment comprised of hundreds or thousands of servers, serving 10,000 customers concurrently.

Planning will allow you to manage the environment (infrastructure, information security, and budget) when you do reach a scale of thousands of concurrent customers. The more we plan the deployment of new environments in advance – according to their business purposes and the resources required for each environment – the easier it will be to scale up while maintaining a high level of security, budget and change management control, and more.

In this three-part blog series, we will review some of the most important topics that will help avoid mistakes while building new cloud environments for the first time.

Resource allocation planning

The first step in resource allocation planning is to decide how to divide resources: based on organizational structure (sales, HR, infrastructure, etc.) or based on environments (production, Dev, testing, etc.)

To avoid mixing resources (or access rights) between various environments, the best practice is to separate the environments as follows:

  • Shared resources account (security products, auditing, billing management, etc.)
  • Development environment account (consider creating a separate account for test environment purposes)
  • Production environment account

Separating different accounts or environments can be done using:

Tagging resources

Even when deploying a single server inside a network environment (AWS VPC, Azure Resource Group, GCP VPC), it is important to tag resources. Tags identify which resources belong to which projects / departments / environments, for billing purposes.

Common tagging examples:

  • Project
  • Department
  • Environment (Prod, Dev, Test)

Beyond tagging, it is recommended to add a description to resources that support this kind of metadata, in order to locate resources by their intended use.
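Once resources are tagged consistently, filtering and hygiene checks become trivial to script. The inventory below is hypothetical; in practice the resource list would come from the provider’s API:

```python
# Hypothetical inventory; real resource listings come from the provider's API.
resources = [
    {"id": "vm-01", "tags": {"Project": "webshop", "Environment": "Prod"}},
    {"id": "vm-02", "tags": {"Project": "webshop", "Environment": "Dev"}},
    {"id": "db-01", "tags": {"Project": "billing"}},  # missing Environment
    {"id": "bucket-01", "tags": {}},                  # untagged
]

def by_tag(items, key, value):
    """IDs of resources whose tag `key` equals `value` (e.g. for billing)."""
    return [r["id"] for r in items if r["tags"].get(key) == value]

def missing_tag(items, key):
    """IDs of resources that lack tag `key` – candidates for cleanup."""
    return [r["id"] for r in items if key not in r["tags"]]

print("Prod resources:", by_tag(resources, "Environment", "Prod"))
print("Missing Environment tag:", missing_tag(resources, "Environment"))
```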

Authentication, Authorization and Password Policy

To ease the management of cloud accounts (and, in the future, multiple accounts for the various environments), the best practice is to follow the rules below:

  • Central authentication – If the organization isn’t using Active Directory for central account management and access rights, the alternative is to use managed services such as AWS IAM, Google Cloud IAM, Azure AD, Oracle Cloud IAM, etc.

If a managed IAM service is chosen, it is critical to set the password policy according to the organization’s password policy (minimum password length, password complexity, password history, etc.)

  • If a central directory service is used by the organization, it is recommended to connect and sync the managed IAM service in the cloud with the organization’s central directory service on premises (federated authentication).
  • It is crucial to protect privileged accounts in the cloud environment (such as the AWS Root Account, Azure Global Admin, Azure Subscription Owner, GCP Project Owner, Oracle Cloud Service Administrator, etc.) by limiting their use to the minimum required, enforcing complex passwords and password rotation every few months, enabling multi-factor authentication, and auditing privileged account activity.
  • Access to resources should be defined according to the least privilege principle.
  • Access to resources should be set to groups instead of specific users.
  • Access to resources should be based on roles in AWS, Azure, GCP, Oracle Cloud, etc.
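As an illustration of the password policy point above, such a policy reduces to a handful of checks. The thresholds here are illustrative placeholders, not a recommendation – use your organization’s actual policy values:

```python
import re

# Sketch of password policy rules; the thresholds are illustrative only.
def check_password(password, min_length=14):
    """Return a list of policy violations (empty list = password passes)."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"\d", password):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special character")
    return problems

print(check_password("hunter2"))                 # weak: several violations
print(check_password("Tr1cky&Long-Passphrase")) # passes these checks
```

Cloud IAM services let you configure equivalent rules declaratively; a script like this is mainly useful for pre-validating credentials in automation.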

Audit Trail

It is important to enable auditing in all cloud environments in order to gain insight into who accessed which resources and what actions were performed in the cloud environment. This serves both security and change management purposes.

Common managed audit trail services:

  • AWS CloudTrail – It is recommended to enable auditing in all regions and forward the audit logs to a central S3 bucket in a central AWS account (which will be accessible only to a limited number of user accounts).
  • Azure Monitor – When working with Azure, it is recommended to enable Azure Monitor in the first phase, in order to audit all access to resources and actions performed inside the subscription. Later on, when the environment expands, you may consider services such as Azure Security Center and Azure Sentinel for auditing purposes.
  • Google Cloud Logging – It is recommended to enable auditing on all GCP projects and forward the audit logs to a central GCP project (which will be accessible only to a limited number of user accounts).
  • Oracle Cloud Infrastructure Audit service – It is recommended to enable auditing on all compartments and forward the audit logs to the Root compartment (which will be accessible only to a limited number of user accounts).

Budget Control

It is crucial to set a budget and budget alerts for any cloud account in the early stages of working with a cloud environment. This helps avoid scenarios in which high resource consumption happens due to human error, such as purchasing or consuming expensive resources, or due to “Denial of Wallet” scenarios, where external attackers breach an organization’s cloud account and deploy servers for Bitcoin mining.

Common examples of budget control management for various cloud providers:

  • AWS Consolidated Billing – Configure a central account, among all the AWS accounts in the organization, to which billing data is forwarded (and which will be accessible only to a limited number of user accounts).
  • GCP Cloud Billing Account – Central repository for storing all billing data from all GCP projects.
  • Azure Cost Management – An interface for configuring budgets and budget alerts for all Azure subscriptions in the organization. It is possible to consolidate multiple Azure subscriptions into Management Groups in order to centrally control budgets for all subscriptions.
  • Budget on Oracle Cloud Infrastructure – An interface for configuring budget and budget alerts for all compartments.
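Underneath these managed interfaces, a budget alert is essentially a threshold comparison, as this minimal sketch shows (the threshold fractions and dollar amounts are arbitrary examples):

```python
# Minimal sketch of a budget alert check; cloud providers offer this as a
# managed feature, but the core logic is a threshold comparison.
def budget_alerts(month_to_date_spend, budget, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds (fractions of budget) already crossed."""
    used = month_to_date_spend / budget
    return [t for t in thresholds if used >= t]

print(budget_alerts(420.0, 500.0))  # 84% of budget used
print(budget_alerts(90.0, 500.0))   # 18% used: no alerts yet
```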

Secure access to cloud environments

To avoid inbound access from the Internet directly to resources in cloud environments (virtual servers, databases, storage, etc.), it is highly recommended to deploy a bastion host that is accessible from the Internet (SSH or RDP traffic) and allows access to and management of the resources inside the cloud environment.

Common guidelines for deploying Bastion Host:

As usage of cloud environments expands, consider deploying a VPN tunnel from the corporate network (site-to-site VPN) or allowing client VPN access from the Internet to the cloud environment (such as AWS Client VPN endpoint, Azure Point-to-Site VPN, Oracle Cloud SSL VPN).

Managing compute resources (Virtual Machines and Containers)

When deploying virtual machines in a cloud environment, it is highly recommended to follow these guidelines:

  • Choose an existing image from a pre-defined list in the cloud providers’ marketplace (operating system flavor, operating system build, and sometimes an image that includes additional software inside the base image).
  • Configure the image according to organizational or application demands.
  • Update all software versions inside the image.
  • Store an up-to-date version of the image (“Golden Image”) inside the central image repository in the cloud environment (for reuse).
  • In case the information inside the virtual machines is critical, consider using managed backup services (such as AWS Backup or Azure Backup).
  • When deploying Windows servers, it is crucial to set complex passwords for the local Administrator’s account, and when possible, join the Windows machine to the corporate domain.
  • When deploying Linux servers, it is crucial to use SSH Key authentication and store the private key(s) in a secure location.
  • Whenever possible, encrypt data at rest for all block volumes (the server’s hard drives / volumes).
  • It is highly recommended to connect the servers to a managed vulnerability assessment service, in order to detect software vulnerabilities (services such as Amazon Inspector or Azure Security Center).
  • It is highly recommended to connect the servers to a managed patch management service in order to ease the work of patch management (services such as AWS Systems Manager Patch Manager, Azure Automation Update Management, or Google OS Patch Management).

When deploying containers in a cloud environment, it is highly recommended to follow these guidelines:

Storing sensitive information

It is highly recommended to avoid storing sensitive information, such as credentials, encryption keys, secrets, API keys, etc., in clear text inside virtual machines, containers, text files or on the local desktop.

Sensitive information should be stored inside managed vault services such as:

Object Storage

When using Object Storage, it is recommended to follow the following guidelines:

  • Avoid allowing public access to services such as Amazon S3, Azure Blob Storage, Google Cloud Storage, Oracle Cloud Object Storage, etc.
  • Enable access auditing on Object Storage and store the access logs in a central account in the cloud environment (which will be accessible only to a limited number of user accounts).
  • It is highly recommended to encrypt all data at rest inside Object Storage, and, when there is a business or regulatory requirement, to encrypt the data using customer-managed keys.
  • It is highly recommended to enforce HTTPS/TLS for access to object storage (users, computers and applications).
  • Avoid creating object storage bucket names that contain sensitive information, since bucket names are globally unique and are published in DNS servers worldwide.

Networking

Advanced use of cloud environments

Prefer managed services over self-maintained servers. Managed services allow consumption of a service rather than maintaining servers, operating systems, updates/patches, backup, and availability – assuming a managed service in cluster or replica mode is chosen.

  • Use Infrastructure as Code (IaC) in order to ease environment deployments, reduce human error, and standardize deployments across multiple environments (Prod, Dev, Test).

Common Infrastructure as Code alternatives:

Summary

To sum up:

Plan. Know what you need. Think scale.

If you use the best practices outlined here, taking off to the cloud for the first time will be an easier, safer and smoother ride than you might expect.


Top Six Cloud Myths Debunked

Organizations considering migrating to cloud services encounter many myths along the way. In most cases, the myths are based on the FUD factor (fear, uncertainty, and doubt).

In this post, I will review some of the common myths about the cloud and see whether they stand up to reality.

Myth #1: The cloud is less secure than on premises, or the cloud is more secure than on premises

The truth is that they are both right and both wrong. The real answer lies somewhere in the middle.

Managed services (SaaS) such as SAP SuccessFactors, Oracle CRM, Office 365, Salesforce, etc., cannot be compared directly to similar services provided on premises – the delivery models are fundamentally different.

On the one hand, the customer shifts the burden of security and maintenance to the cloud provider, and on the other, mature cloud providers (such as those mentioned above), invest huge amounts of money (when compared to most organizations) in information security, penetration testing, audit trails and constant training to their support and information security teams.

The comparison of IaaS solutions to on-premises models is different. According to the shared responsibility model, customers get access from the operating system and above, and they are solely responsible for everything that happens inside the operating system: backups, hardening, updates, authorization, access control, and defense against attacks.

The IaaS model is similar to traditional virtualization in the local data center, but top IaaS providers also offer various managed services that ease the burden of server maintenance (managed databases, backup services, patch management, vulnerability management, etc.).

Bottom line – it is possible to reach the “sweet spot” where using the cloud makes an organization more secure than staying on premises, as long as we are familiar with the cloud provider’s services and capabilities, and as long as we keep learning how to make the most of those services on an ongoing basis.

Myth #2: The cloud is more expensive than on premise, or the cloud is less expensive than on premise

Again, the truth can be found somewhere in the middle.

In order to make an accurate comparison of on premise and cloud solutions, we need to take into consideration several factors. These include the cost of hardware, several years of maintenance, licenses, backup and archive storage, system availability and most important – manpower costs for system maintenance, including training of IT, development and security teams.

When comparing managed services, such as managed databases vs. manual maintenance of on premise databases, the calculation looks something like this: the cloud saves the costs of maintenance, hardening, patching/updating, and even backups, if they are part of the managed service. That translates into significant savings compared to on premise environments, and allows organizations to consume services without the burden of maintaining the systems.

When comparing IaaS environments, the picture changes. The costs of cloud servers in a pay-as-you-go model are, in most cases, higher than comparable on-premises costs (comparing the same amount of vCPU and memory). In order to cut costs in the IaaS model, we need to understand whether we are dealing with a high-performance workload with variable run time, or with servers operating 24×7 for long periods of time. If we are dealing with the long term, it is highly recommended to purchase reserved instances for 1 or 3 years in advance.

Another alternative for saving server costs in an IaaS model is the Spot model, which can save up to 90% of the price, assuming the workload is not fault-sensitive and can be recovered automatically (batch processing, image processing, etc.). The best alternative for saving server costs is to re-architect our systems (as much as possible), migrating to a micro-services architecture or using Serverless services, and cut resource and monthly costs to the minimum required.
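As a rough illustration of the trade-off between the three pricing models, here is the arithmetic for a single always-on server. The hourly rate and discount percentages below are hypothetical placeholders, not real cloud prices.

```python
# Illustrative monthly cost comparison for one VM under three pricing models.
# All rates are made-up placeholders, not actual cloud provider prices.

HOURS_PER_MONTH = 730

on_demand_hourly = 0.10   # assumed pay-as-you-go rate, USD/hour
reserved_discount = 0.40  # assumed ~40% saving for a 1-year commitment
spot_discount = 0.90      # Spot capacity can be up to ~90% cheaper

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Cost of running one instance for a full month at the given rate."""
    return hourly_rate * hours

on_demand = monthly_cost(on_demand_hourly)
reserved = monthly_cost(on_demand_hourly * (1 - reserved_discount))
spot = monthly_cost(on_demand_hourly * (1 - spot_discount))

print(f"on-demand: ${on_demand:.2f}/month")  # on-demand: $73.00/month
print(f"reserved:  ${reserved:.2f}/month")   # reserved:  $43.80/month
print(f"spot:      ${spot:.2f}/month")       # spot:      $7.30/month
```

The Spot figure only makes sense for interruptible workloads; for 24×7 servers, the reserved commitment is the realistic saving.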

Myth #3: The cloud is complex, or cloud migration is always complex

Migrating existing services from on premise to managed services in a SaaS model varies from one cloud provider to another, which makes it hard to generalize.

Many SaaS vendors publish guidelines and tools to assist organizations with the migration process. Some examples are SalesForce, Oracle CRM, SAP, Office 365, Google G Suite, etc.

When migrating to PaaS services, there are many guidelines and tools to assist organizations with the migration process. Some examples include AWS Database Migration Service, Azure Database Migration Service, Google BigQuery Data Transfer Service, etc.

Migrating systems to IaaS model requires training IT personnel on how cloud providers implement infrastructure services, such as VM deployment, setting network access rules, connecting to storage, settings permissions, etc.

Organizations that train their IT, networking and information security teams to work with IaaS and PaaS environments will find the migration process easier. There are many low-cost online courses to help close the required knowledge gap.

If you want to migrate quickly (“fast and dirty”), you can always migrate systems using the “lift & shift” method, at least during a first phase, although it is not considered a cost-effective solution; sometimes similar hardware in on premise environments is cheaper than similar hardware in an IaaS environment. This method does, however, get the migrated environments into the cloud, where the organization can later right-size resources and change the system architecture, for example by replacing servers with managed services. Bottom line – It all begins with organizations willing to adapt to working in cloud environments and, of course, with management support for the cloud migration.

Myth #4: Multi-Cloud will prevent Vendor Lock-In

When organizations take their first steps toward working with public cloud solutions, it makes sense to choose a single IaaS provider, in order to allow the organization to train employees, plan a cloud migration strategy and begin the actual migration and deployment of the new environments.

The fear of vendor lock-in, or of the cloud provider going bankrupt, is not unreasonable. However, the most practical compensating control is to choose one of the hyper-scale cloud providers, which in itself mitigates the risk of the provider going bankrupt.

Theoretically, selecting multiple IaaS providers might allow migration between providers, but in reality, moving to multi-cloud environments creates many challenges: enforcing central authentication, understanding how each cloud provider implements services differently (storage, network, compute, etc.), deploying new environments over multiple cloud providers’ infrastructure, enforcing logging/auditing, and centrally handling incident response processes across providers.

If you want to mitigate the risk of vendor lock-in and keep the ability to move environments between cloud providers, you need to plan your infrastructure architecture ahead of time, from the very beginning, and base the architecture on containers or Kubernetes. As long as services are wrapped in containers, you will be able to deploy and run them on multiple cloud providers. Also take into consideration the integration with each cloud provider’s ecosystem, such as storage, monitoring, message queuing services, etc. Bottom line – Deploying production environments over multiple cloud providers requires a deep understanding of each cloud ecosystem. Instead of resolving vendor lock-in risk, it can create high overhead for the organization, which may not be justified relative to the risk itself. Moving to container-based architectures can, however, ease the organization’s ability to work with multiple cloud providers.

Myth #5: Auditing cloud environments is hard

Correct. But only partially.

Cloud migration requires customers to understand that they may not be able to conduct on-site audits of the cloud providers’ data centers, as we used to do with hosting providers in the past. On the other hand, mature cloud vendors provide compensating audit controls, such as independent audit reports (e.g. SOC 2), certifications (e.g. ISO 27001) and built-in logging and monitoring services.

Bottom line – It is possible, and highly recommended, to constantly audit cloud environments. Choosing one of the mature cloud providers gives you various compensating controls to assure that cloud environments are secure and comply with standards and regulations.

Myth #6: Migration to the cloud will cut manpower and cause employee job security issues

This is perhaps one of the most common myths. But it is inaccurate.

It is true that IT and information security teams will need to undergo training to work with various services and adapt existing knowledge from the on premise environments to cloud environments. But here lies the great potential.

If in the past we used to have deep knowledge in a specific field, such as operating systems, networking, storage, databases, information security, etc., today organizations migrating to the cloud are looking for employees with multidisciplinary knowledge.

The migration from on premise models to the cloud will require organizations to streamline. Although migration to SaaS or managed services requires fewer IT personnel, migration to IaaS/PaaS environments requires a shift in the mindset of IT teams. They will need to adapt existing knowledge from manual work (server maintenance, databases, environment deployments, etc.) to automation: writing code (without needing to become professional developers), deploying environments based on Infrastructure as Code, and so on. This ability to adapt will be in high demand. Organizations will seek professional IT personnel, and existing IT teams who adapt to the changing world will become an even more valuable asset to their organizations.

Smart use of cloud services

Many organizations are beginning to embrace system migration to the cloud as part of a global IT strategy to use public clouds.
The migration from on premise to the public cloud allows organizations to choose from a variety of alternatives, each with its own benefits, knowledge requirements and pricing model (as compared to the on premise licensing model).
In this article, I shall review the different methods, with their pros and cons, and try to explain which method suits which scenario.

Lift & Shift Migration
In this method, the customer chooses an existing system from the on premise environment, reviews the resources required to run it in a cloud environment (number of CPUs, amount of memory and required disk space) and the operating system version (assuming the cloud provider has an image for that operating system), checks with the software vendor that the system can run in a cloud environment (licensing issues), and deploys all software components on a virtual server in the cloud environment (or migrates the entire system, assuming the cloud provider offers a managed service for system migration).

This is the most common method, and the simplest one (compared to the other alternatives), for migrating to the cloud, and most cloud vendors (Infrastructure as a Service) support it. We should take into consideration, however, that this method is considered expensive (in terms of cost and resource usage) compared to purchasing physical hardware 3-5 years in advance in the on premise environment.
The most common ways to lower the server cost are:
• Resizing the server (number of CPUs/amount of memory) to its actual usage
• Purchasing reserved instances for 1 or 3 years in advance
• Using Spot instances for servers that do not require 24×7 availability, or for applications that can survive temporary downtime without harming the entire service

Moving to Micro-Services and Containers
In this method, the organization migrates from a monolithic application (a system where all components rely on each other and must be deployed together) to micro-services based development, where each component runs independently (usually inside its own container) and can be replaced, upgraded and scaled as needed, independently of the rest of the system components.
It is possible to run containers on virtual servers (where management, updates and scaling are entirely the customer’s responsibility) or as part of a managed service (such as a managed Kubernetes cluster service).
This method requires development teams to know how to package their applications inside containers, take care of scaling and monitoring of container activity (and the communication between containers), and handle security topics (such as who can access a container, or whether the communication between containers is encrypted).
This method is suitable for organizations that wish to change their current application architecture or begin developing new applications. Modern applications are developed today as containers, which allows the customer to migrate from on premise environments to the public cloud and, with proper adjustments, between different cloud providers (once we solve topics such as connectivity to the current cloud vendor’s services – message queuing, storage, logging, etc.).
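As a minimal sketch (standard library only), a micro-service is just a small, independently deployable process. The health endpoint below is the kind of probe an orchestrator such as Kubernetes uses to decide whether a container is alive; the path, port and JSON shape are illustrative choices, not a standard.

```python
# Minimal sketch of one micro-service: a single small HTTP process with a
# health-check endpoint, the kind of unit packaged into its own container.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ServiceHandler(BaseHTTPRequestHandler):
    """One independently deployable responsibility plus a health probe."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet; a real service logs to stdout

# Bind to an ephemeral port and serve in the background, as a stand-in for
# "run this process inside its own container".
server = HTTPServer(("127.0.0.1", 0), ServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    print(resp.status, json.loads(resp.read()))  # 200 {'status': 'ok'}
```

Because each such process owns one responsibility, it can be replaced or scaled without redeploying the rest of the system.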

Moving to Serverless / Function as a Service
In this method, the customer is not in charge of operating system maintenance, system availability or scale. Because the entire infrastructure is managed by the cloud vendor, the vendor takes care of scaling as required by the application.
This method is suitable for event-based services with short running times (a few seconds to a few minutes). As part of moving to modern applications, the typical flow is to choose a supported development language, upload the code to a managed cloud environment (Serverless), select the desired compute power (the amount of memory, which affects the number of CPUs) and create triggers for running the function.
It is possible to embed Serverless capabilities, as part of modern micro-services architecture.
The pricing model for this method is based on the amount of time the function runs and the amount of memory allocated to it.
Common use cases for Serverless – image processing, data analysis from IoT devices, etc.
This method is not suitable for every application (due to the short running intervals), and not every development language is currently supported by every cloud vendor.
For example:
• AWS Lambda (currently) natively supports: Java, Go, PowerShell, Node.js, C#, Python, Ruby
• Azure Functions (currently) natively supports: Java, JavaScript, C#, PowerShell, Python, TypeScript
• Google Cloud Functions (currently) natively supports: Python, Go, Node.js
• Oracle Functions (currently) natively supports: Java, Python, Node.js, Go, Ruby
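Whatever the vendor, a Serverless function boils down to a stateless handler that the platform invokes once per event. The sketch below mimics an AWS Lambda-style handler signature; the event fields (`object_key`, `size_bytes`) are made up for illustration, not a real cloud event format.

```python
# Sketch of a Lambda-style function: a stateless handler invoked per event,
# while the cloud vendor owns the servers, availability and scaling.
# The event shape below is an invented example, not a real AWS schema.

def handler(event, context=None):
    """React to one event, e.g. an object-created notification that
    triggers image processing. Billing is per run time and memory."""
    name = event.get("object_key", "unknown")
    size = event.get("size_bytes", 0)
    return {
        "status": "processed",
        "object_key": name,
        "billable": size > 0,
    }

print(handler({"object_key": "photos/cat.jpg", "size_bytes": 52_341}))
# {'status': 'processed', 'object_key': 'photos/cat.jpg', 'billable': True}
```

The key property is statelessness: because the handler keeps nothing between invocations, the platform can run as many copies in parallel as the event rate demands.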

Migration to managed services (SaaS / PaaS)
In this method, the organization chooses an existing SaaS (such as Messaging, CRM, ERP, etc.) or existing PaaS (such as Database, Storage, etc.)
This method suits many scenarios in which the organization would like to consume an existing service without maintaining the infrastructure (operating system, storage, backup, security aspects, etc.). After choosing an existing service, the organization migrates data to the managed service, configures proper access rights, sometimes configures VPN connectivity between the on premise and the cloud environment, configures backup (where the service supports this capability) and begins consuming the service.
The pricing model differs between cloud vendors (sometimes it is based on monthly pricing, and sometimes on consumed compute power or storage space).
Mature, transparent cloud vendors provide accurate monthly billing information.
Mature cloud vendors also know how to handle privacy, legal and regulatory aspects (such as GDPR compliance and other privacy regulations) using data processing agreements.

Summary
In this article, I have reviewed the various methods of using cloud services wisely. As we can see, not every method suits every scenario or every organization, but there is no doubt that the future is heading toward the cloud and managed services.
My recommendation for organizations – focus on what brings your organization business value (banking, commerce, retail, education, etc.), train your IT and development teams for the coming technological changes, and shift the responsibility for maintaining your infrastructure to vendors who specialize in it.

To Tokenize Or Not Tokenize – The Truth About Scope And Compliance Risk

The hard-to-face reality is that billions of personal records are exposed each year. A commonly used, yet incomplete, solution is tokenization. Tokenizing sensitive data does not eliminate the need to achieve and certify PCI DSS compliance.

In order to completely descope from PCI, a business can partner with a data custodian (VGS) that handles 100% of data capture and vaulting – removing any compliance risk and completely avoiding data leaks.

Massive data leaks, at this point, are becoming a frequent occurrence – with headlines regularly popping up highlighting cybersecurity disasters that have impacted millions of consumers.

Earlier this summer, news of a cybersecurity disaster rattled North American consumers. The highly-publicized Capital One data breach of 2019 led to the sensitive data exposure of 100 million Americans and 6 million Canadians – including hundreds of thousands of social security numbers and bank account numbers.

Similarly, in July, we learned about the whopping $700 million settlement as a result of the Equifax data breach. Now, years later, the 147 million customers impacted by that disaster all get a piece of that pie.

It only seems like a matter of time until the next multi-million-dollar data breach settlement will be announced, and another consumer data-handling organization will have their feet publicly held to the fire.

From the improper configuration of web applications to the massive security risk involved in cloud storage generally, companies have wisely been seeking alternatives to storing their own sensitive user data and opening themselves up to data breach risk.

The rise of tokenization

In order to reduce data leakage risk as much as possible, many tech organizations have leveraged a method called tokenization. It’s a way to limit storing plain text sensitive data within a company’s own systems by using “tokens” to replace private user information like medical data or credit card numbers.

Unlike encryption, where a mathematical equation can “solve” the data replacement and reveal the original sensitive information, tokenization is not reversible. With no mathematical relationship to the original data point, tokenization is widely considered to be a safe way of transmitting and storing critical information.
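A toy sketch makes the distinction concrete: a token is a random value stored alongside the original in a vault, so there is nothing about the token that can be “solved” back to the data. The class and method names are illustrative; a real tokenization system adds access control, auditing and durable, secured storage.

```python
# Toy tokenization sketch: tokens are random values with NO mathematical
# relationship to the original data; the mapping lives only in the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, sensitive_value: str) -> str:
        # The token is random, not derived from the input in any way.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        # Only the tokenization system (the data custodian) can reverse it.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
# The application stores and passes around `token`; leaking the token alone
# reveals nothing about the card number.
assert vault.detokenize(token) == "4111 1111 1111 1111"
```

Note that tokenizing the same value twice yields different tokens, which is exactly why, unlike an encrypted ciphertext, a stolen token set cannot be attacked mathematically.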

However, tokenizing sensitive data does not eliminate the need to achieve and certify PCI DSS compliance – although it can reduce the number of system components to which PCI DSS compliance would apply.

With tokenization, sensitive data is mostly hidden. But, there are two points where tokenized data still remains within the scope of PCI DSS compliance: the data vault and the original point of capture.

Even when organizations store and use tokens instead of raw sensitive data, these systems remain within PCI scope because they are still part of the cardholder data environment. In this environment, a business is still responsible for any leaked data.

But what if businesses could offload this data risk fully, and enjoy the benefits of tokenization while keeping all the original data completely off their own systems?

Descoping entirely

In order to completely descope from PCI DSS compliance, a business can partner with a data custodian that handles 100% of data capture and vaulting – removing any compliance risk and completely avoiding data leaks.

VGS is an innovative data custodian that takes on their clients’ compliance risk, securely storing all sensitive data and removing businesses from PCI scope in full.

By leveraging synthetic data ‘aliases,’ which are generated in real-time to protect all sensitive customer information, VGS empowers businesses to use customer data just as they did before – but never actually touch the sensitive data themselves.

Startups that work with VGS remove their compliance risk and avoid the potential risk associated with data breaches because there’s nothing to steal in the first place.

By partnering with a data custodian, organizations can completely descope their systems and win over new customers with the peace of mind that their sensitive personal data is in safe hands.

When businesses implement VGS solutions to handle their sensitive data, they instantly inherit VGS’s best-in-class security posture, which enables them to fast-track their certifications like PCI, SOC2 and others.

With data security as one less thing to worry about, organizations can focus their time and resources on growing their core businesses.
This article was originally published in Very Good Security.

Threat Modeling for Data Protection

When evaluating the security of an application and its data model, ask the following questions:

  • What is the sensitivity of the data?
  • What are the regulatory, compliance, or privacy requirements for the data?
  • What is the attack vector that a data owner is hoping to mitigate?
  • What is the overall security posture of the environment, is it a hostile environment or a relatively trusted one?

When threat modeling, consider the following common data scenarios:

Data at rest (“DAR”)

In information technology, data at rest means inactive data that is stored physically in any digital form (e.g. databases/data warehouses, spreadsheets, archives, tapes, off-site backups, mobile devices, etc.).

  • Transparent Data Encryption (often abbreviated TDE) is a technology employed by Microsoft SQL Server, IBM DB2 and Oracle to encrypt the “table-space” files in a database. TDE offers encryption at the file level. It solves the problem of protecting data at rest by encrypting databases both on the hard drive and on backup media. It does not protect data in motion (DIM) nor data in use (DIU).
  • Mount-point encryption: This is another form of TDE, available for database systems that do not natively support table-space encryption. Several vendors offer mount-point encryption for Linux/Unix/Microsoft Windows file-system mount points. When a vendor does not support TDE, this type of encryption effectively encrypts the database table-space and stores the encryption keys separately from the file system. So, if the physical or logical storage medium is detached from the compute resource, the database table-space remains encrypted.

Data in Motion (“DIM”)

Data in motion considers the security of data that is being copied from one medium to another. Data in motion typically considers data being transmitted over a network transport. Web Applications represent common data in motion scenarios.

  • Transport Layer Security (TLS, the successor to SSL) is commonly used to encrypt internet protocol based network transports. TLS works by encrypting the layer 7 (“application layer”) payload of a given network stream using symmetric encryption.
  • Secure Shell/Secure File Transport (SSH, SCP, SFTP): SSH is a protocol used to securely login and access remote computers. SFTP runs over the SSH protocol (leveraging SSH security and authentication functionality) but is used for secure transfer of files. The SSH protocol utilizes public key cryptography to authenticate access to remote systems.
  • Virtual Private Networks (VPNs) A virtual private network (VPN) extends a private network across a public network, and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network.
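On the client side, the properties a hardened TLS transport should have can be shown with Python's standard `ssl` module: certificate and hostname verification on, and legacy protocol versions refused. Pinning the minimum version to TLS 1.2 is a common precaution; treat this as a minimal sketch, not a complete TLS deployment guide.

```python
# Minimal sketch: a hardened TLS client context using the Python stdlib.
# create_default_context() enables certificate and hostname verification;
# the minimum-version pin additionally refuses legacy protocols.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/TLS 1.0/1.1

# An HTTPS request over this context would look like:
#   urllib.request.urlopen("https://example.com", context=context)
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

Both checks matter: encryption without certificate and hostname verification still leaves the connection open to man-in-the-middle attacks.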

Data in Use (“DIU”)

Data in use happens whenever a computer application reads data from a storage medium into volatile memory.

  • Full memory encryption: Encryption to prevent data visibility in the event of theft, loss, or unauthorized access. Encryption is commonly used to protect data in motion and data at rest, and is increasingly recognized as an optimal method for protecting data in use. There have been multiple approaches to encrypting data in use within memory. Microsoft’s Xbox has a capability to provide memory encryption, and the company PrivateCore has a commercial software product, vCage, that provides attestation along with full memory encryption for x86 servers.
  • RAM Enclaves: enable an enclave of protected data to be secured with encryption in RAM. Enclave data is encrypted while in RAM but available as clear text inside the CPU and CPU cache, when written to disk, when traversing networks etc. Intel Corporation has introduced the concept of “enclaves” as part of its Software Guard Extensions in technical papers published in 2013.
  • 2013 papers: from Workshop on Hardware and Architectural Support for Security and Privacy 2013
  • Innovative Instructions and Software Model for Isolated Execution
  • Innovative Technology for CPU Based Attestation and Sealing

Where do traditional data protection techniques fall short?

TDE: Database and mount-point encryption both fall short of fully protecting data across the data’s entire lifecycle. For instance, TDE was designed to defend against theft of physical or virtual storage media only. An authorized system administrator, or an unauthorized user or process, can gain access to sensitive data either by running a legitimate query or by scraping RAM. TDE does not provide granular access control to data at rest once the data has been mounted.

TLS/SCP/SFTP/VPN, etc.: TCP/IP transport-layer encryption also falls short of protecting data across the entire data lifecycle. For example, TLS does not protect data at rest or in use. Quite often TLS is only enabled on internet-facing application load balancers, and TLS calls to web applications are plaintext on the datacenter or cloud side of the load balancer.

DIU: Data-in-use (full memory) encryption falls short of protecting data across the entire data lifecycle. DIU techniques are cutting-edge and not generally available; commodity compute architectures have only begun to support memory encryption. With DIU memory encryption, data is only encrypted while in memory; it is in plaintext while in the CPU and cache, when written to disk, and while traversing network transports.

Complementary or Alternative Approach: Tokenization

We need an alternative approach that addresses all the exposure gaps, 100% of the time. In information security, we really want a defense-in-depth strategy: layers of controls, so that if a single layer fails or is compromised, another layer can compensate for the failure.

Tokenization and format preserving encryption (FPE) are unique in that they protect sensitive data throughout the data lifecycle, across a data flow. Tokenization and FPE are portable and remain in force across mixed technology stacks, and they do not share the exposures of traditional data protection techniques.

How does this work? Fields of sensitive data are cryptographically transformed at the system of origin, that is during intake. A cryptographic transform of a sensitive field is applied, producing a non-sensitive token representation of the original data.

Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no extrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system.

Format preserving encryption takes this a step further and allows the data element to maintain its original format and data type. For instance, a 16-digit credit card number can be protected and the result is another 16-digit value. The value here is to reduce the overall impact of code changes to applications and databases while reducing the time to market of implementing end to end data protection.

In Closing

Use of tokenization or format preserving encryption to replace live data in systems results in minimized exposure of sensitive data to those applications, stores, people and processes. Replacing sensitive data results in reduced risk of compromise or accidental exposure and unauthorized access to sensitive data.

Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Moreover: in several cases removal of sensitive data from an organization’s applications, databases, business processes will result in reduced compliance and audit scope, resulting in significantly less complex and shorter audits.

This article was originally published in Very Good Security.

WordPress 5.x Hardening Guide for CentOS 7.6

This document explains the process of installing, configuring and hardening an Apache server (with OpenSSL compiled from source files), based on a CentOS 7.6 default installation (Linux firewall and SELinux enabled by default), including support for TLS v1.2 and PHP 7.3.

    Pre-Requirements
    • Linux server installed with CentOS 7.6 (64bit)
    • policycoreutils-python-* package installed
    • setools-libs-* package installed
    • libcgroup-* package installed
    • audit-libs-python-* package installed
    • libsemanage-python-* package installed
    • gcc* package installed
    • gcc-c++* package installed
    • autoconf* package installed
    • automake* package installed
    • libtool* package installed
    • perl-core package installed
    • zlib-devel package installed
    • expat-devel package installed
    • yum-utils package installed
    OpenSSL upgrade phase
  1. Login using privileged account
  2. Run the commands below to download the latest build of OpenSSL:
    cd /usr/local/src
    wget https://www.openssl.org/source/openssl-1.1.1.tar.gz
    tar -xvzf openssl-1.1.1.tar.gz
  3. Run the commands below to compile the latest build of OpenSSL:
    cd openssl-1.1.1
    ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl shared zlib
    make
    make test
    make install
  4. Edit using VI the file /etc/ld.so.conf.d/openssl-1.1.1.conf and add the following string to the file:
    /usr/local/ssl/lib
  5. Run the command below to reload the dynamic link:
    ldconfig -v
  6. Backup the original OpenSSL binary:
    mv /usr/bin/openssl /usr/bin/openssl.BACKUP
  7. Create using VI the file /etc/profile.d/openssl.sh and add the following content:
    #Set OPENSSL_PATH
    OPENSSL_PATH=/usr/local/ssl/bin
    export OPENSSL_PATH
    PATH=$PATH:$OPENSSL_PATH
    export PATH
  8. Run the commands below to complete the configuration of the OpenSSL:
    chmod +x /etc/profile.d/openssl.sh
    source /etc/profile.d/openssl.sh
    echo $PATH
    which openssl
    Apache 2.4.6 installation phase
  1. Login using privileged account
  2. Run the command below to install Apache 2.4.6:
    yum install httpd -y
  3. Updating Ownership and Permissions on Apache folders:
    chown root:root /usr/sbin/apachectl
    chown root:root /usr/sbin/httpd
    chmod 770 /usr/sbin/apachectl
    chmod 770 /usr/sbin/httpd
    chown -R root:root /etc/httpd
    chmod -R go-r /etc/httpd
    chown -R root:root /etc/httpd/logs
    chmod -R 700 /etc/httpd/logs
  4. Create folder for the web content:
    mkdir -p /www
  5. Updating Ownership and Permissions on the web content folder:
    chown -R root /www
    chmod -R 775 /www
  6. Fix the SELinux security context on the new web folder:
    semanage fcontext -a -t httpd_sys_content_t "/www(/.*)?"
    restorecon -F -R -v /www
    chcon -R -t httpd_sys_content_t /www
  7. Create folder for the first WordPress site:
    mkdir /www/WebSiteA
    Note: Replace WebSiteA with the relevant name
  8. Create folder for the second WordPress site:
    mkdir /www/WebSiteB
    Note: Replace WebSiteB with the relevant name
  9. Create logs folder for the first WordPress site:
    mkdir /www/WebSiteA/logs
    Note: Replace WebSiteA with the relevant name
  10. Create logs folder for the second WordPress site:
    mkdir /www/WebSiteB/logs
    Note: Replace WebSiteB with the relevant name
  11. Configure permissions on the logs folder for the first WordPress site:
    chown -R apache:apache /www/WebSiteA/logs
    chmod -R 700 /www/WebSiteA/logs

    Note: Replace WebSiteA with the relevant name
  12. Configure permissions on the logs folder for the second WordPress site:
    chown -R apache:apache /www/WebSiteB/logs
    chmod -R 700 /www/WebSiteB/logs

    Note: Replace WebSiteB with the relevant name
  13. Fix the SELinux security context on the new web folder for the first WordPress site:
    semanage fcontext -a -t httpd_log_t "/www/WebSiteA/logs(/.*)?"
    restorecon -F -R -v /www/WebSiteA/logs
    chcon -R -t httpd_log_t /www/WebSiteA/logs

    Note: Replace WebSiteA with the relevant name
  14. Fix the SELinux security context on the new web folder for the second WordPress site:
    semanage fcontext -a -t httpd_log_t "/www/WebSiteB/logs(/.*)?"
    restorecon -F -R -v /www/WebSiteB/logs
    chcon -R -t httpd_log_t /www/WebSiteB/logs

    Note: Replace WebSiteB with the relevant name
  15. Create the following folders:
    mkdir /etc/httpd/sites-available
    mkdir /etc/httpd/sites-enabled
  16. Edit using VI the file /etc/httpd/conf/httpd.conf and change the following strings:
    From:
    LogLevel warn

    To:
    LogLevel notice

    From:
    DocumentRoot "/var/www/html"

    To:
    # DocumentRoot "/var/www/html"

    From:
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

    To:
    # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

  17. Comment out the following sections entirely inside the /etc/httpd/conf/httpd.conf file:
    <Directory />
    <Directory "/var/www">
    <Directory "/var/www/html">
    <Directory "/var/www/cgi-bin">
  18. Add the following sections to the end of the /etc/httpd/conf/httpd.conf file:
    IncludeOptional sites-enabled/*.conf
    # Configure custom error message:
    ErrorDocument 400 "The requested URL was not found on this server."
    ErrorDocument 401 "The requested URL was not found on this server."
    ErrorDocument 403 "The requested URL was not found on this server."
    ErrorDocument 404 "The requested URL was not found on this server."
    ErrorDocument 405 "The requested URL was not found on this server."
    ErrorDocument 408 "The requested URL was not found on this server."
    ErrorDocument 410 "The requested URL was not found on this server."
    ErrorDocument 411 "The requested URL was not found on this server."
    ErrorDocument 412 "The requested URL was not found on this server."
    ErrorDocument 413 "The requested URL was not found on this server."
    ErrorDocument 414 "The requested URL was not found on this server."
    ErrorDocument 415 "The requested URL was not found on this server."
    ErrorDocument 500 "The requested URL was not found on this server."
    # Configure Server Tokens
    ServerTokens Prod
    # Disable Server Signature
    ServerSignature Off
    # Disable Tracing
    TraceEnable Off
    # Maximum size of the request body.
    LimitRequestBody 4000000
    # Maximum number of request headers in a request.
    LimitRequestFields 40
    # Maximum size of request header lines.
    LimitRequestFieldSize 4000
    # Maximum size of the request line.
    LimitRequestLine 4000
    MaxRequestsPerChild 10000
    # Configure clickjacking protection
    Header always append X-Frame-Options SAMEORIGIN
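After this many manual edits to httpd.conf, it is worth validating the syntax before restarting. A minimal check, assuming apachectl is on the PATH of the web server (the fallback message covers machines without Apache installed):

```shell
# Check configuration syntax before restarting Apache.
if command -v apachectl >/dev/null 2>&1; then
  result=$(apachectl configtest 2>&1 || true)
else
  result="apachectl not found, run this on the web server"
fi
echo "$result"
```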
  19. Disable the configuration files below by renaming them:
    mv /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/autoindex.conf.bak
    mv /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/userdir.conf.bak
  20. Comment out the lines below inside the /etc/httpd/conf.modules.d/00-base.conf file to disable default modules:
    LoadModule status_module modules/mod_status.so
    LoadModule info_module modules/mod_info.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule include_module modules/mod_include.so
    LoadModule userdir_module modules/mod_userdir.so
    LoadModule env_module modules/mod_env.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule actions_module modules/mod_actions.so
  21. Comment out the line below inside the /etc/httpd/conf.modules.d/01-cgi.conf file to disable the CGI module:
    LoadModule cgi_module modules/mod_cgi.so
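Commenting the module lines by hand works, but sed can do it mechanically. A sketch, demonstrated on a temporary copy so it can run anywhere; on the server, point it at /etc/httpd/conf.modules.d/00-base.conf instead:

```shell
# Demo: comment out a selected LoadModule line with sed.
f=$(mktemp)
printf '%s\n' \
  'LoadModule status_module modules/mod_status.so' \
  'LoadModule mpm_prefork_module modules/mod_mpm_prefork.so' > "$f"
# Prefix the status module line with '#', leaving other lines alone.
sed -i 's|^LoadModule status_module |#&|' "$f"
cat "$f"
```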
  22. Using VI, create configuration file for the first WordPress site called /etc/httpd/sites-available/websitea.com.conf with the following content:
    <VirtualHost *:80>
    ServerAdmin [email protected]websitea.com
    ServerName www.websitea.com
    ServerAlias websitea.com
    DocumentRoot /www/WebSiteA
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    <Directory /www/WebSiteA>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    Require all granted
    Order allow,deny
    Allow from all
    <LimitExcept GET POST>
    deny from all
    </LimitExcept>
    </Directory>
    ErrorLog /www/WebSiteA/logs/error.log
    CustomLog /www/WebSiteA/logs/access.log combined
    </VirtualHost>

    Note: Replace WebSiteA with the relevant name
  23. Using VI, create a configuration file for the second WordPress site called /etc/httpd/sites-available/websiteb.com.conf with the following content:
    <VirtualHost *:80>
    ServerAdmin [email protected]websiteb.com
    ServerName www.websiteb.com
    ServerAlias websiteb.com
    DocumentRoot /www/WebSiteB
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    <Directory /www/WebSiteB>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    Require all granted
    Order allow,deny
    Allow from all
    <LimitExcept GET POST>
    deny from all
    </LimitExcept>
    </Directory>
    ErrorLog /www/WebSiteB/logs/error.log
    CustomLog /www/WebSiteB/logs/access.log combined
    </VirtualHost>

    Note: Replace WebSiteB with the relevant name
  24. Run the commands below to enable the new virtual host files:
    ln -s /etc/httpd/sites-available/websitea.com.conf /etc/httpd/sites-enabled/websitea.com.conf
    ln -s /etc/httpd/sites-available/websiteb.com.conf /etc/httpd/sites-enabled/websiteb.com.conf

    Note 1: Replace WebSiteA with the relevant name
    Note 2: Replace WebSiteB with the relevant name
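Step 24 follows the classic sites-available/sites-enabled pattern. A sketch of the same step as a loop, demoed under a temporary directory so it can run anywhere; on the server, use /etc/httpd/sites-available and /etc/httpd/sites-enabled:

```shell
# Demo of the symlink-enable pattern (paths here are stand-ins).
ROOT=$(mktemp -d)
AVAIL="$ROOT/sites-available"
ENABLED="$ROOT/sites-enabled"
mkdir -p "$AVAIL" "$ENABLED"
for site in websitea.com websiteb.com; do
  touch "$AVAIL/$site.conf"                        # stand-in for the real vhost file
  ln -s "$AVAIL/$site.conf" "$ENABLED/$site.conf"  # enable the site
done
ls -l "$ENABLED"
```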
  25. Run the command below to configure Apache to load at startup:
    systemctl enable httpd
  26. To start the Apache service, run the command below:
    systemctl start httpd
  27. Run the commands below to enable the HTTP service on the firewall:
    firewall-cmd --zone=public --add-service=http --permanent
    systemctl restart firewalld
    MariaDB installation phase
  1. Login using privileged account
  2. Install MariaDB:
    yum install -y mariadb-server mariadb
  3. Enable the MariaDB service:
    systemctl enable mariadb.service
  4. Start the MariaDB service:
    systemctl start mariadb.service
  5. Run the commands below to set ownership and permissions on the /etc/my.cnf file:
    chown root /etc/my.cnf
    chmod 644 /etc/my.cnf
  6. Edit using VI the file /etc/my.cnf and add the line below under the [mysqld] section:
    bind-address = 127.0.0.1
  7. Run the command below to secure the MariaDB installation:
    mysql_secure_installation
  8. Specify the current MariaDB root account password (leave blank) -> press Y to set the root password -> specify a new complex password (at least 14 characters, upper case, lower case, number, special characters) and document it -> press Y to remove anonymous users -> press Y to disallow root login remotely -> press Y to remove the test database -> press Y to reload the privilege tables and exit the script.
  9. Restart the MariaDB service:
    systemctl restart mariadb.service
    PHP 7.3 installation phase
  1. Login using privileged account
  2. Run the commands below to install PHP 7.3:
    yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm -y
    yum-config-manager --enable remi-php73
    yum install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo -y
  3. Change the permissions on the php.ini file:
    chmod 640 /etc/php.ini
  4. Edit using VI the file /etc/php.ini and change the following strings:
    From:
    mysqli.default_host =

    To:
    mysqli.default_host = 127.0.0.1:3306

    From:
    allow_url_fopen = On

    To:
    allow_url_fopen = Off

    From:
    expose_php = On

    To:
    expose_php = Off

    From:
    memory_limit = 128M

    To:
    memory_limit = 8M

    From:
    post_max_size = 8M

    To:
    post_max_size = 2M

    From:
    upload_max_filesize = 2M

    To:
    upload_max_filesize = 1M

    From:
    disable_functions =

    To:
    disable_functions = fpassthru,crack_check,crack_closedict,crack_getlastmessage,crack_opendict,psockopen,php_ini_scanned_files,shell_exec,chown,hell-exec,dl,ctrl_dir,phpini,tmp,safe_mode,systemroot,server_software,get_current_user,HTTP_HOST,ini_restore,popen,pclose,exec,suExec,passthru,proc_open,proc_nice,proc_terminate,proc_get_status,proc_close,pfsockopen,leak,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellcmd,escapeshellarg,posix_ctermid,posix_getcwd,posix_getegid,posix_geteuid,posix_getgid,posix_getgrgid,posix_getgrnam,posix_getgroups,posix_getlogin,posix_getpgid,posix_getpgrp,posix_getpid,posix_getppid,posix_getpwnam,posix_getpwuid,posix_getrlimit,system,posix_getsid,posix_getuid,posix_isatty,posix_setegid,posix_seteuid,posix_setgid,posix_times,posix_ttyname,posix_uname,posix_access,posix_get_last_error,posix_mknod,posix_strerror,posix_initgroups

  5. Restart the Apache service:
    systemctl restart httpd.service
    WordPress 5.x installation phase

    1. Login using privileged account.
    2. Run the command below to log in to MariaDB:
      /usr/bin/mysql -uroot -p
      Note: When prompted, specify the password for the MariaDB root account.
    3. Run the following commands from the MariaDB prompt:
      CREATE USER 'blgusr'@'localhost' IDENTIFIED BY 'A3fg1j7x!s2gEq';
      CREATE USER 'hswjm'@'localhost' IDENTIFIED BY '[email protected]';
      CREATE DATABASE m6gf42s;
      CREATE DATABASE b7mf3aq;
      GRANT ALL PRIVILEGES ON m6gf42s.* TO "blgusr"@"localhost" IDENTIFIED BY "A3fg1j7x!s2gEq";
      GRANT ALL PRIVILEGES ON b7mf3aq.* TO "hswjm"@"localhost" IDENTIFIED BY "[email protected]";
      FLUSH PRIVILEGES;
      quit

      Note 1: Replace “blgusr” with a username to access the first database.
      Note 2: Replace “A3fg1j7x!s2gEq” with a complex password for the account that will access the first database (at least 14 characters, upper case, lower case, number, special characters).
      Note 3: Replace “hswjm” with a username to access the second database.
      Note 4: Replace “[email protected]” with a complex password for the account that will access the second database (at least 14 characters, upper case, lower case, number, special characters).
      Note 5: Replace “m6gf42s” with the first WordPress database name.
      Note 6: Replace “b7mf3aq” with the second WordPress database name.
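When the same user/database/grant pattern is repeated per site, it can help to generate the SQL once and review it before piping it into mysql. A sketch with a hypothetical make_grants helper; the usernames, passwords, and database names are the placeholders from the notes above (CHANGE_ME stands in for the second password, which you must replace):

```shell
# Hypothetical helper that prints the step-3 SQL for one database.
make_grants() {
  user=$1; pass=$2; db=$3
  printf "CREATE USER '%s'@'localhost' IDENTIFIED BY '%s';\n" "$user" "$pass"
  printf "CREATE DATABASE %s;\n" "$db"
  printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'localhost';\n" "$db" "$user"
}
sql=$(make_grants blgusr 'A3fg1j7x!s2gEq' m6gf42s
      make_grants hswjm  'CHANGE_ME'      b7mf3aq
      echo 'FLUSH PRIVILEGES;')
echo "$sql"
# On the server, review the output and then: echo "$sql" | /usr/bin/mysql -uroot -p
```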
    4. Run the commands below to download the latest build of WordPress:
      cd /usr/local/src
      wget https://wordpress.org/latest.zip
      unzip latest.zip -d /www/WebSiteA
      unzip latest.zip -d /www/WebSiteB

      Note 1: Replace WebSiteA with the relevant name
      Note 2: Replace WebSiteB with the relevant name
    5. Fix the SELinux security context on the new web folder for the first WordPress site:
      semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteA(/.*)?"
      restorecon -F -R -v /www/WebSiteA
      chcon -R -t httpd_sys_content_t /www/WebSiteA
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wp-content(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wp-content
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wp-content

      Note: Replace WebSiteA with the relevant name
    6. Fix the SELinux security context on the new web folder for the second WordPress site:
      semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteB(/.*)?"
      restorecon -F -R -v /www/WebSiteB
      chcon -R -t httpd_sys_content_t /www/WebSiteB
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteB/wp-content(/.*)?"
      restorecon -F -R -v /www/WebSiteB/wp-content
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteB/wp-content

      Note: Replace WebSiteB with the relevant name
    7. Create using VI the file /www/WebSiteA/config.php with the following content:
      <?php
      define('DB_NAME', 'm6gf42s');
      define('DB_USER', 'blgusr');
      define('DB_PASSWORD', 'A3fg1j7x!s2gEq');
      define('DB_HOST', 'localhost');
      $table_prefix = 'm6gf42s_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');
      define('FS_METHOD', 'direct');
      ?>

      Note 1: Make sure there are no spaces, newlines, or other characters before the opening '<?php' tag or after the closing '?>' tag.
      Note 2: Replace “blgusr” with MariaDB account to access the first database.
      Note 3: Replace “A3fg1j7x!s2gEq” with complex password (at least 14 characters).
      Note 4: Replace “m6gf42s” with the first WordPress database name.
      Note 5: In order to generate random values for the AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY, use the website below:
      http://api.wordpress.org/secret-key/1.1/
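If the machine you are working from cannot reach the secret-key web service, equivalent random values can be generated locally. A sketch assuming the openssl CLI is installed (base64 output never contains quotes, so it is safe inside the PHP string literals):

```shell
# Generate the eight WordPress key/salt defines offline with openssl.
salts=$(mktemp)
for key in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY \
           AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT; do
  printf "define('%s', '%s');\n" "$key" "$(openssl rand -base64 48 | tr -d '\n')"
done > "$salts"
cat "$salts"   # paste these lines into config.php in place of the placeholders
```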
    8. Create using VI the file /www/WebSiteB/config.php with the following content:
      <?php
      define('DB_NAME', 'b7mf3aq');
      define('DB_USER', 'hswjm');
      define('DB_PASSWORD', '[email protected]');
      define('DB_HOST', 'localhost');
      $table_prefix = 'b7mf3aq_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');
      define('FS_METHOD', 'direct');
      ?>

      Note 1: Make sure there are no spaces, newlines, or other characters before the opening '<?php' tag or after the closing '?>' tag.
      Note 2: Replace “hswjm” with MariaDB account to access the second database.
      Note 3: Replace “[email protected]” with complex password (at least 14 characters).
      Note 4: Replace “b7mf3aq” with the second WordPress database name.
      Note 5: In order to generate random values for the AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY, use the website below:
      http://api.wordpress.org/secret-key/1.1/
    9. Copy the wp-config.php file:
      cp /www/WebSiteA/wordpress/wp-config-sample.php /www/WebSiteA/wordpress/wp-config.php
      cp /www/WebSiteB/wordpress/wp-config-sample.php /www/WebSiteB/wordpress/wp-config.php

      Note 1: Replace WebSiteA with the relevant name
      Note 2: Replace WebSiteB with the relevant name
    10. Edit using VI, the file /www/WebSiteA/wordpress/wp-config.php
      Add the following lines before the string “That’s all, stop editing! Happy blogging”:
      /* Multisite */
      define('WP_ALLOW_MULTISITE', true);
      include('/www/WebSiteA/config.php');

      Remove or comment the following sections:
      define('DB_NAME', 'putyourdbnamehere');
      define('DB_USER', 'usernamehere');
      define('DB_PASSWORD', 'yourpasswordhere');
      define('DB_HOST', 'localhost');
      $table_prefix = 'wp_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');

      Note: Replace WebSiteA with the relevant name
    11. Edit using VI, the file /www/WebSiteB/wordpress/wp-config.php
      Add the following lines before the string “That’s all, stop editing! Happy blogging”:
      /* Multisite */
      define('WP_ALLOW_MULTISITE', true);
      include('/www/WebSiteB/config.php');

      Remove or comment the following sections:
      define('DB_NAME', 'putyourdbnamehere');
      define('DB_USER', 'usernamehere');
      define('DB_PASSWORD', 'yourpasswordhere');
      define('DB_HOST', 'localhost');
      $table_prefix = 'wp_';
      define('AUTH_KEY', 'put your unique phrase here');
      define('SECURE_AUTH_KEY', 'put your unique phrase here');
      define('LOGGED_IN_KEY', 'put your unique phrase here');
      define('NONCE_KEY', 'put your unique phrase here');
      define('AUTH_SALT', 'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT', 'put your unique phrase here');
      define('NONCE_SALT', 'put your unique phrase here');

      Note: Replace WebSiteB with the relevant name
    12. Create using VI the file /www/WebSiteA/wordpress/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
      Header set X-XSS-Protection "1; mode=block"
      Header set X-Content-Type-Options nosniff
      Header set Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:"

      Note: Replace WebSiteA with the relevant name
    13. Create using VI the file /www/WebSiteA/wordpress/wp-content/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteA with the relevant name
    14. Create using VI the file /www/WebSiteA/wordpress/wp-includes/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteA with the relevant name
    15. Set ownership and permissions on the .htaccess files below:
      chown apache:apache /www/WebSiteA/wordpress/.htaccess
      chown apache:apache /www/WebSiteA/wordpress/wp-content/.htaccess
      chown apache:apache /www/WebSiteA/wordpress/wp-includes/.htaccess
      chmod 644 /www/WebSiteA/wordpress/.htaccess
      chmod 644 /www/WebSiteA/wordpress/wp-content/.htaccess
      chmod 644 /www/WebSiteA/wordpress/wp-includes/.htaccess

      Note: Replace WebSiteA with the relevant name
    16. Create using VI the file /www/WebSiteB/wordpress/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
      Header set X-XSS-Protection "1; mode=block"
      Header set X-Content-Type-Options nosniff
      Header set Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:"

      Note: Replace WebSiteB with the relevant name
    17. Create using VI the file /www/WebSiteB/wordpress/wp-content/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteB with the relevant name
    18. Create using VI the file /www/WebSiteB/wordpress/wp-includes/.htaccess and add the following content:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      Note: Replace WebSiteB with the relevant name
    19. Set ownership and permissions on the .htaccess files below:
      chown apache:apache /www/WebSiteB/wordpress/.htaccess
      chown apache:apache /www/WebSiteB/wordpress/wp-content/.htaccess
      chown apache:apache /www/WebSiteB/wordpress/wp-includes/.htaccess
      chmod 644 /www/WebSiteB/wordpress/.htaccess
      chmod 644 /www/WebSiteB/wordpress/wp-content/.htaccess
      chmod 644 /www/WebSiteB/wordpress/wp-includes/.htaccess

      Note: Replace WebSiteB with the relevant name
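Steps 15 and 19 apply identical ownership and permissions to the three .htaccess files of each site. A sketch as a nested loop, demoed under a temporary directory (on the real server, set BASE=/www, run as root, and uncomment the chown):

```shell
# Demo of the .htaccess permission steps for both sites.
BASE="${BASE:-$(mktemp -d)}"
for site in WebSiteA WebSiteB; do
  for d in "" wp-content/ wp-includes/; do
    mkdir -p "$BASE/$site/wordpress/${d}"          # demo scaffolding only
    touch "$BASE/$site/wordpress/${d}.htaccess"
    chmod 644 "$BASE/$site/wordpress/${d}.htaccess"
    # chown apache:apache "$BASE/$site/wordpress/${d}.htaccess"  # root required
  done
done
```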
    20. Remove default content from the first WordPress site:
      rm -f /www/WebSiteA/wordpress/license.txt
      rm -f /www/WebSiteA/wordpress/readme.html
      rm -f /www/WebSiteA/wordpress/wp-config-sample.php
      rm -f /www/WebSiteA/wordpress/wp-content/plugins/hello.php
    21. Remove default content from the second WordPress site:
      rm -f /www/WebSiteB/wordpress/license.txt
      rm -f /www/WebSiteB/wordpress/readme.html
      rm -f /www/WebSiteB/wordpress/wp-config-sample.php
      rm -f /www/WebSiteB/wordpress/wp-content/plugins/hello.php
    22. Edit using VI the file /etc/httpd/sites-available/websitea.com.conf
      Replace the value of the string, from:
      DocumentRoot /www/WebSiteA
      To:
      DocumentRoot /www/WebSiteA/wordpress
      Replace the value of the string, from:
      <Directory /www/WebSiteA>
      To:
      <Directory /www/WebSiteA/wordpress>
      Note: Replace WebSiteA with the relevant name
    23. Edit using VI the file /etc/httpd/sites-available/websiteb.com.conf
      Replace the value of the string, from:
      DocumentRoot /www/WebSiteB
      To:
      DocumentRoot /www/WebSiteB/wordpress
      Replace the value of the string, from:
      <Directory /www/WebSiteB>
      To:
      <Directory /www/WebSiteB/wordpress>
      Note: Replace WebSiteB with the relevant name
    24. Restart the Apache service:
      systemctl restart httpd.service
    25. Open a web browser from a client machine, and enter the URL below:
      http://Server_FQDN/wp-admin/install.php
      Note: Replace Server_FQDN with the relevant DNS name
    26. Select language and click Continue
    27. Specify the following information:
        Site Title
        Username – replace the default “admin”
        Password
        E-mail
    28. Click on “Install WordPress” button, and close the web browser.
    29. Change ownership and permissions on the files and folders below:
      chown -R apache:apache /www/WebSiteA/wordpress
      find /www/WebSiteA/wordpress/ -type d -exec chmod -R 755 {} \;
      find /www/WebSiteA/wordpress/ -type f -exec chmod -R 644 {} \;
      chmod 400 /www/WebSiteA/wordpress/wp-config.php
      chown apache:apache /www/WebSiteA/config.php
      chmod 644 /www/WebSiteA/config.php

      Note: Replace WebSiteA with the relevant name
    30. Change ownership and permissions on the files and folders below:
      chown -R apache:apache /www/WebSiteB/wordpress
      find /www/WebSiteB/wordpress/ -type d -exec chmod -R 755 {} \;
      find /www/WebSiteB/wordpress/ -type f -exec chmod -R 644 {} \;
      chmod 400 /www/WebSiteB/wordpress/wp-config.php
      chown apache:apache /www/WebSiteB/config.php
      chmod 644 /www/WebSiteB/config.php

      Note: Replace WebSiteB with the relevant name
    31. Download “WordPress Firewall” plugin from:
      http://www.seoegghead.com/software/wordpress-firewall.seo
    32. Copy the “WordPress Firewall” plugin file “wordpress-firewall.php” using PSCP (or SCP) into /www/WebSiteA/wordpress/wp-content/plugins
      Note: Replace WebSiteA with the relevant name
    33. Copy the “WordPress Firewall” plugin file “wordpress-firewall.php” using PSCP (or SCP) into /www/WebSiteB/wordpress/wp-content/plugins
    34. Open a web browser from a client machine, and enter the URL below:
      http://Server_FQDN/wp-login.php
      Note: Replace Server_FQDN with the relevant DNS name
    35. From WordPress dashboard, click on “settings” -> make sure that “Anyone can register” is left unchecked -> put a new value inside the “Tagline” field -> click on “Save changes”.
    36. From the left pane, click on Plugins -> Add New -> search, install and activate the following plugins:
        Acunetix WP Security
        Antispam Bee
        WP Limit Login Attempts
        Login LockDown
        WP Security Audit Log
    37. From the list of installed plugins, locate and activate the Firewall plugin
    38. From the upper pane, click on “Log Out”.
    39. Delete the file wp-admin/install.php from each site's wordpress folder
      SSL configuration phase
    1. Login using privileged account
    2. To add support for SSL certificates, run the command below:
      yum install mod_ssl -y
    3. Run the command below to change the permissions on the certificates folder:
      chmod 700 /etc/pki/CA/private
    4. Run the command below to generate a key pair for the first WordPress site:
      openssl genrsa -des3 -out /etc/pki/CA/private/websitea-server.key 2048
      Note 1: Specify a complex pass phrase for the private key (and document it)
      Note 2: Replace websitea with the relevant name
    5. Run the command below to generate a key pair for the second WordPress site:
      openssl genrsa -des3 -out /etc/pki/CA/private/websiteb-server.key 2048
      Note 1: Specify a complex pass phrase for the private key (and document it)
      Note 2: Replace websiteb with the relevant name
    6. Run the command below to generate the CSR for the first WordPress site:
      openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout /etc/pki/CA/private/websitea-server.key -out /tmp/websitea-apache.csr
      Note 1: The command above should be written as one line.
      Note 2: Replace websitea with the relevant name
    7. Run the command below to generate the CSR for the second WordPress site:
      openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout /etc/pki/CA/private/websiteb-server.key -out /tmp/websiteb-apache.csr
      Note 1: The command above should be written as one line.
      Note 2: Replace websiteb with the relevant name
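Before sending a CSR to the certificate authority, it is worth confirming its subject. The demo below generates a throwaway key and CSR so the check runs anywhere; on the server, skip the generation and point -in at /tmp/websitea-apache.csr:

```shell
# Throwaway key + CSR, then inspect the CSR subject.
key=$(mktemp); csr=$(mktemp)
openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -keyout "$key" -out "$csr" -subj "/CN=www.websitea.com" 2>/dev/null
openssl req -noout -subject -in "$csr"
```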
    8. Edit using VI the file /etc/httpd/sites-available/websitea.com.conf and add the following:
      <VirtualHost *:443>
      ServerAdmin [email protected]websitea.com
      ServerName www.websitea.com
      ServerAlias websitea.com
      DocumentRoot /www/WebSiteA/wordpress
      <Directory />
      Options FollowSymLinks
      AllowOverride None
      </Directory>
      <Directory /www/WebSiteA/wordpress>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride all
      Require all granted
      Order allow,deny
      Allow from all
      <LimitExcept GET POST>
      deny from all
      </LimitExcept>
      </Directory>
      SSLCertificateFile /etc/ssl/certs/websitea.crt
      SSLCertificateKeyFile /etc/pki/CA/private/websitea-server.key
      SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS:!aNULL:!EDH:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
      SSLHonorCipherOrder On
      # Disable SSLv2 and SSLv3
      SSLProtocol ALL -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
      # Disable SSL Compression
      SSLCompression Off
      SSLEngine on
      ErrorLog /www/WebSiteA/logs/ssl_error.log
      CustomLog /www/WebSiteA/logs/ssl_access.log combined
      </VirtualHost>

      Note: Replace WebSiteA with the relevant name
    9. Edit using VI the file /etc/httpd/sites-available/websiteb.com.conf and add the following:
      <VirtualHost *:443>
      ServerAdmin [email protected]websiteb.com
      ServerName www.websiteb.com
      ServerAlias websiteb.com
      DocumentRoot /www/WebSiteB/wordpress
      <Directory />
      Options FollowSymLinks
      AllowOverride None
      </Directory>
      <Directory /www/WebSiteB/wordpress>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride all
      Require all granted
      Order allow,deny
      Allow from all
      <LimitExcept GET POST>
      deny from all
      </LimitExcept>
      </Directory>
      SSLCertificateFile /etc/ssl/certs/websiteb.crt
      SSLCertificateKeyFile /etc/pki/CA/private/websiteb-server.key
      SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS:!aNULL:!EDH:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
      SSLHonorCipherOrder On
      # Disable SSLv2 and SSLv3
      SSLProtocol ALL -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
      # Disable SSL Compression
      SSLCompression Off
      SSLEngine on
      ErrorLog /www/WebSiteB/logs/ssl_error.log
      CustomLog /www/WebSiteB/logs/ssl_access.log combined
      </VirtualHost>

      Note: Replace WebSiteB with the relevant name
    10. Edit using VI the file /etc/httpd/conf.d/ssl.conf and comment out the following directives:
      <VirtualHost _default_:443>
      ErrorLog logs/ssl_error_log
      TransferLog logs/ssl_access_log
      LogLevel warn
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3
      SSLCipherSuite HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA
      SSLCertificateFile
      SSLCertificateKeyFile
    11. To restart the Apache service, run the command below:
      systemctl restart httpd
    12. Run the commands below to enable the HTTPS service on the firewall:
      firewall-cmd --zone=public --add-service=https --permanent
      systemctl restart firewalld
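Once HTTPS is open on the firewall, the protocol configuration can be spot-checked from any client with openssl s_client; Server_FQDN is a placeholder, and the fallback message covers machines that cannot reach the server:

```shell
# Attempt a TLS 1.2 handshake against the server (placeholder FQDN).
out=$(echo | openssl s_client -connect Server_FQDN:443 -tls1_2 2>&1 \
      || echo "server not reachable from this machine")
echo "$out" | head -n 5
```

A handshake attempt with -ssl3 should fail if the SSLProtocol line above is in effect.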
    13. Run the command below to change the permissions on the certificates folder:
      chmod 600 /etc/pki/CA/private
    14. If the server was configured with an SSL certificate, add the following line to the /www/WebSiteA/config.php file:
      define('FORCE_SSL_LOGIN', true);
      Note: Replace WebSiteA with the relevant name
    15. If the server was configured with an SSL certificate, add the following line to the /www/WebSiteB/config.php file:
      define('FORCE_SSL_LOGIN', true);
      Note: Replace WebSiteB with the relevant name
      WordPress upgrade process
    1. Run the commands below to change the SELinux permissions:
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wordpress(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wordpress
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wordpress

      Note: Replace WebSiteA with the relevant name
    2. Login to the WordPress admin portal:
      http://Server_FQDN/wp-login.php
      Note: Replace Server_FQDN with the relevant DNS name
    3. When prompted, select the option to upgrade WordPress
    4. Once the upgrade process completes successfully, log off the WordPress admin portal
    5. Run the commands below to change the SELinux permissions:
      semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteA/wordpress(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wordpress
      chcon -R -t httpd_sys_content_t /www/WebSiteA/wordpress
      semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wordpress/wp-content(/.*)?"
      restorecon -F -R -v /www/WebSiteA/wordpress/wp-content
      chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wordpress/wp-content

      Note: Replace WebSiteA with the relevant name
    6. Log off the SSH console
      Check your site on the following test sites
      • https://www.ssllabs.com/ssltest/
      • https://dnsflagday.net/
      • https://securityheaders.com/
      • https://search.google.com/test/mobile-friendly