
Archive for the ‘Cloud computing’ Category

The Public Cloud is Coming to Your Local Data Center

For a long time, public cloud providers have given users (almost) unlimited access to compute resources (virtual servers, storage, databases, etc.) inside their end-to-end managed data centers. Recently, however, the need for local on-premise solutions has begun to be felt.

In scenarios where network latency is critical, or where sensitive or critical data must be stored inside a local data center, public cloud providers have built server racks that bring the familiar cloud infrastructure (virtual servers, storage and network equipment) on-premise, while exposing the same user interface and the same APIs for controlling components through the CLI or SDK.

Managing the lower infrastructure layers (monitoring of hardware, software and licenses, and infrastructure updates) is done remotely by the public cloud provider, which in some cases requires constant inbound Internet connectivity.

This solution allows customers to enjoy all the benefits of the public cloud (minus the scale), transparently expand on-premise environments to the public cloud, continue storing and processing data inside local data centers for as long as required, and, in cases where there is demand for large compute power, migrate environments (or deploy new ones) to the public cloud.

The solution is suitable for military and defense users, or for organizations with data sets too large to be moved to the public cloud in a reasonable amount of time. Below is a comparison of three solutions currently available:

| Capability | Azure Stack Hub | AWS Outposts | Oracle Private Cloud at Customer |
| --- | --- | --- | --- |
| Ability to work disconnected from the public cloud / Internet | Fully supported / Partially supported | Requires constant connectivity to a region in the cloud | Requires remote connectivity for Oracle support (infrastructure monitoring and software updates) |
| VM deployment support | Fully supported | Fully supported | Fully supported |
| Containers / Kubernetes deployment support | Fully supported | Fully supported | Fully supported |
| Object Storage supported locally | Fully supported | Will be supported in 2020 | Fully supported |
| Block Storage supported locally | Fully supported | Fully supported | Fully supported |
| Managed database deployment supported locally | | Fully supported (MySQL, PostgreSQL) | Fully supported (Oracle Database) |
| Data analytics deployment supported locally | | Fully supported (Amazon EMR) | |
| Load balancing services supported locally | Fully supported | Fully supported | Fully supported |
| Built-in VPN connectivity to the solution | Fully supported | | |
| Connectivity between the solution and on-premise resources | Fully supported | | |
| Built-in encryption services (data at rest) | Fully supported (Key Vault) | Fully supported (AWS KMS) | |
| Maximum number of physical cores (per rack) | 100 physical cores | 96 physical cores | |
| Maximum storage capacity (per rack) | 5 TB | 55 TB | 200 TB |

Summary

The private cloud solutions noted here are not identical in terms of their capabilities. At least for the initial installation and support, a partner who specializes in this field is a must.

Support for the well-known services from public cloud environments (virtual servers, storage, database, etc.) will expand over time, as these solutions become more commonly used by organizations or hosting providers.

These solutions are not meant for every customer. However, they provide a suitable answer in scenarios where it is not possible to use the public cloud (for regulatory or military/defense reasons, for example), or when organizations are planning a long-term migration to the public cloud a few years in advance. Such plans may be driven by legacy applications not built for the cloud, network latency issues, or a large amount of data that needs to be copied to the cloud.

Cloud-native as the Future of Data Loss Prevention

Data loss prevention (DLP) is one of the most important tools that enterprises have to protect themselves from modern security threats like data exfiltration, data leakage, and other types of sensitive data and secrets exposure. Many organizations seem to understand this, with the DLP market expected to grow worldwide in the coming years. However, not all approaches to DLP are created equal. DLP solutions can vary in the scope of remediation options they provide as well as the security layers that they apply to. Traditionally, data loss prevention has been an on-premise or endpoint solution meant to enforce policies on devices connected over specific networks. As cloud adoption accelerates, though, the utility of these traditional approaches to DLP will substantially decrease.

Established data loss prevention solution providers have attempted to address these gaps with developments like endpoint DLP and cloud access security brokers (CASBs) which provide security teams with visibility of devices and programs running outside of their walls or sanctioned environments. While both solutions minimize security blind spots, at least relative to network layer and on-prem solutions, they can result in inconsistent enforcement. Endpoint DLPs, for example, do not provide visibility at the application layer, meaning that policy enforcement is limited to managing what programs and data are installed on a device. CASBs can be somewhat more sophisticated in determining what cloud applications are permissible on a device or network, but may still face similar shortfalls surrounding behavior and data within cloud applications.

Cloud adoption was expected to grow nearly 17% between 2019 and 2020; however, as more enterprises embrace cloud-first strategies for workforce management and business continuity during the COVID-19 pandemic, we’re likely to see even more aggressive cloud adoption. With more data in the cloud, the need for policy remediation and data visibility at the application layer will only increase and organizations will begin to seek cloud-native approaches to cloud security.

What is cloud-native data loss prevention?

The explosion of cloud technologies in the past decade has brought new architectural models for applications and computing systems. The concept of a cloud-native architecture, while not new, is a development that’s taken off in the last five years. But what exactly does cloud-native mean, and how can it apply to security products like data loss prevention (DLP)?

Cloud-native describes a growing class of platforms that are built in the cloud, for the cloud. True cloud-native data loss prevention is defined by the following features:

  • Agentless. Cloud-native DLP solutions aren’t deployed as software programs that require installation; rather, they integrate with the applications they secure through APIs. This makes deployment easy and updates to such platforms effortless, without getting end-users or IT involved. 
  • API driven. Central to cloud-native data loss prevention is the API driven nature of such solutions. Connecting with cloud platforms via API means that visibility and security policies immediately apply at the application layer. API-driven solutions can derive platform-specific context & metadata, as well as provide granular, platform-specific actions, versus broad-brush blocking on the network.
  • Agnostic. True cloud-native solutions are platform, endpoint, and network agnostic in that they’re capable of integrating with cloud platforms quickly and can provide single pane of glass visibility across the cloud.
  • Automated. True cloud-native solutions don’t just provide visibility into the cloud, but help automate policies whenever possible. The sheer volume of data that moves through cloud systems combined with the always-on nature of cloud applications means that incidents can happen at any time and will require immediate remediation. Automation ensures that security teams can respond to these as quickly as possible.
  • Accurate. Finally, in order to help security teams process the massive amounts of data in the cloud, cloud-native DLP must be accurate. The accuracy of such platforms is often enabled by the same systems that make them automated — an effective use of machine learning that can quickly and accurately identify when business-critical data has been exposed.

What are the advantages of cloud-native DLP?

When you consider the capabilities listed above, cloud-native DLP is designed to help organizations get a handle on protecting the massive volumes of data moving in and out of data silos daily. With organizations understanding that the security of their data in the cloud is their responsibility, security teams are increasingly investing in tools designed to help them address visibility and policy blindspots. While it might be the case that cloud-native data loss prevention platforms aren’t the only security tools companies choose to invest in, it’s clear that they’ll be one of the most essential parts of their security toolkit.

About Nightfall

Nightfall is the industry’s first cloud-native DLP platform that discovers, classifies, and protects data via machine learning. Nightfall is designed to work with popular SaaS applications like Slack & GitHub as well as IaaS platforms like AWS. You can schedule a demo with us below to see the Nightfall platform in action.

This article was originally posted on Nightfall.ai.

How to Achieve Long Term Cost Savings Using Cloud Services

The relatively high cost of cloud computing resources, compared to on-premise solutions, is a major challenge for organizations migrating to public cloud services. In this post, we will review several available plans for long-term cost savings on compute resources.

Background

Pay-As-You-Go (or On-Demand) is the most common pricing option: you pay for actual usage when consuming cloud resources. This method is suitable when the required compute power is changing or unpredictable, for example for services migrated from on-premise to the public cloud (Lift & Shift), new environments (Dev/Test), and more.

The second most common pricing option is called Spot (Amazon EC2 Spot Instances, Azure Spot Virtual Machines or Google Preemptible Virtual Machines). These options can potentially deliver a discount of up to 90% and are best when there is demand for large amounts of compute power and the service is not sensitive to disruptions: the cloud provider can reclaim the machine whenever the capacity is required by another paying customer, giving a 30-second to 2-minute notice. This method is suitable for image/video processing, batch processing, HPC services, etc.
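For illustration, Spot capacity is requested through the same APIs as On-Demand capacity. Below is a minimal, hedged boto3 sketch (the AMI ID, region and instance type are placeholders, not a recommendation) that launches a one-time EC2 Spot Instance:

```python
# A minimal sketch: launching an EC2 Spot Instance with boto3.
# The AMI ID, region and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            # The instance is terminated when the provider reclaims the capacity
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```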

Reserved Instance

This is the most common pricing option for saving costs. Users commit to one or three years of usage in advance, with potential savings of up to 70%.

Reserved Instances are available with various payment methods. These include:

  • “All Upfront” – where you pay the entire server cost in advance for the entire commitment period
  • “Partial Upfront” – where you pay part of the server cost in advance and the rest in monthly installments until the end of the commitment period
  • “No Upfront” – where you pay a fixed monthly price for the server until the end of the commitment period

There are also options for more flexibility on Reserved Instance options. These include Standard RI, where you commit to a certain instance type (instance family type, operating system, payment method, etc.), and Convertible RI, where you are allowed to change the instance type (instance family type, operating system, etc.) during the commitment period.
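The offerings, payment options and offering classes described above can also be compared programmatically. A hedged boto3 sketch (the instance type, platform and duration filter are illustrative choices):

```python
# A hedged sketch: listing Reserved Instance offerings with boto3 so that
# payment options (All/Partial/No Upfront) and offering classes can be compared.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

offerings = ec2.describe_reserved_instances_offerings(
    InstanceType="m5.large",
    ProductDescription="Linux/UNIX",
    OfferingClass="convertible",        # or "standard"
    Filters=[{"Name": "duration", "Values": ["31536000"]}],  # 1 year, in seconds
    MaxResults=10,
)

for offer in offerings["ReservedInstancesOfferings"]:
    print(offer["OfferingType"],   # All Upfront / Partial Upfront / No Upfront
          offer["FixedPrice"],
          offer["Duration"])
```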

Additional information about Reserved Instance options can be found at:

It is important to note that these cost-saving options are not limited to virtual servers. It is possible to purchase a commitment for managed services, such as Amazon RDS Reserved Instances, Azure SQL Database reserved capacity, Azure Blob storage reserved capacity, and more.

AWS Savings Plans

AWS Savings Plans are a flexible pricing option, similar to Reserved Instances, which allows a discount of up to 72%.

These plans include two alternatives:

  • Compute Savings Plans – This plan allows you to commit to resource consumption in advance, with the flexibility to choose, and change, instance family type, instance size (ratio between CPU/memory), region, availability zone and operating system. The Compute Savings Plan covers compute resources from virtual machines (EC2 instances), through AWS Fargate and up to AWS Lambda.
  • EC2 Instance Savings Plans – This plan allows you to save on virtual servers’ costs. However, it is limited to virtual servers from a specific instance family type, in a specific region. It is still possible to change instance size (ratio between CPU/memory), availability zone and operating system.

Additional information can be found at: https://aws.amazon.com/savingsplans/faq/

Google Sustained Use Discounts

This plan rewards customers for sustained use of Google compute resources, such as virtual servers or Google Kubernetes Engine nodes, for any continuous period longer than 25% of the month. It grants an automatic discount of between 20% and 30% off the list price. No action needs to be taken; the discount is applied once the plan's minimum consumption level of compute resources is reached.

Additional information can be found at: https://cloud.google.com/compute/docs/sustained-use-discounts

Conclusion

The first step toward enjoying long-term cost savings is understanding your compute demands. Studying up and staying up to date on vendors’ various pricing plans and options, then matching those to your needs and environments, is the key to achieving the most cost-effective public cloud solution.

Best Practices for Deploying New Environments in the Cloud for the First Time

When organizations take their first steps in using public cloud services, they tend to focus on a single, immediate target.

My recommendation – think scale!

Plan a couple of steps ahead instead of looking at single server that serves just a few customers. Think about a large environment comprised of hundreds or thousands of servers, serving 10,000 customers concurrently.

Planning will allow you to manage the environment (infrastructure, information security and budget) when you do reach a scale of thousands of concurrent customers. The more we plan the deployment of new environments in advance, according to their business purpose and the resources each environment requires, the easier it will be to scale up while maintaining a high level of security, budget control, change management and more.

In this three-part blog series, we will review some of the most important topics that will help avoid mistakes while building new cloud environments for the first time.

Resource allocation planning

The first step in resource allocation planning is to decide how to divide resources: based on the organizational structure (sales, HR, infrastructure, etc.) or based on environments (production, Dev, testing, etc.).

In order to avoid mixing resources (or access rights) between various environments, the best practice is to separate the environments as follows:

  • Shared resources account (security products, auditing, billing management, etc.)
  • Development environment account (consider creating a separate account for test environment purposes)
  • Production environment account

Separating different accounts or environments can be done using:

Tagging resources

Even when deploying a single server inside a network environment (AWS VPC, Azure Resource Group, GCP VPC), it is important to tag resources. This allows identifying which resources belong to which projects / departments / environments, for billing purposes.

Common tagging examples:

  • Project
  • Department
  • Environment (Prod, Dev, Test)

Beyond tagging, it is recommended to add a description to resources that support this kind of metadata, in order to locate resources by their intended use.
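A minimal boto3 sketch of applying these tags (the resource IDs and tag values are placeholders):

```python
# A minimal sketch: tagging an EC2 instance and its volume with the
# Project / Department / Environment tags suggested above.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.create_tags(
    Resources=["i-0123456789abcdef0", "vol-0123456789abcdef0"],  # placeholders
    Tags=[
        {"Key": "Project", "Value": "web-portal"},
        {"Key": "Department", "Value": "sales"},
        {"Key": "Environment", "Value": "Dev"},
    ],
)
```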

Authentication, Authorization and Password Policy

In order to ease the management of cloud accounts (and, in the future, of multiple accounts for the various environments), the best practice is to follow the rules below:

  • Central authentication – In case the organization isn’t using Active Directory for central account management and access rights, the alternative is to use managed services such as AWS IAM, Google Cloud IAM, Azure AD, Oracle Cloud IAM, etc.

If a managed IAM service is chosen, it is critical to set a password policy that matches the organization's password policy (minimum password length, password complexity, password history, etc.); a sketch of doing this through the API follows this list.

  • If a central directory service is used by the organization, it is recommended to connect and sync the managed IAM service in the cloud to the organization's central directory service on premise (federated authentication).
  • It is crucial to protect privileged accounts in the cloud environment (such as the AWS Root Account, Azure Global Admin, Azure Subscription Owner, GCP Project Owner, Oracle Cloud Service Administrator, etc.) by limiting their use to the minimum required, enforcing complex passwords, rotating passwords every few months, enabling multi-factor authentication, and auditing the activity of privileged accounts.
  • Access to resources should be defined according to the least privilege principle.
  • Access to resources should be set to groups instead of specific users.
  • Access to resources should be based on roles in AWS, Azure, GCP, Oracle Cloud, etc.
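The password policy and group-based, least-privilege guidelines above can be expressed with a few API calls. A hedged boto3 sketch (the policy values, group name, user name and the ReadOnlyAccess managed policy are illustrative choices, not a recommendation):

```python
# A hedged sketch: enforcing an account password policy and granting access
# through a group (least privilege) with boto3.
import boto3

iam = boto3.client("iam")

# Align the account password policy with the organizational policy
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    PasswordReusePrevention=12,   # remember the last 12 passwords
    MaxPasswordAge=90,            # force rotation every 90 days
)

# Grant permissions to a group rather than to individual users
iam.create_group(GroupName="dev-read-only")
iam.attach_group_policy(
    GroupName="dev-read-only",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # illustrative policy
)
iam.add_user_to_group(GroupName="dev-read-only", UserName="alice")  # example user
```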

Audit Trail

It is important to enable auditing in all cloud environments, in order to gain insight into who accessed which resources and which actions were performed in the cloud environment. This serves both security and change management purposes.

Common managed audit trail services:

  • AWS CloudTrail – It is recommended to enable auditing in all regions and forward the audit logs to a central S3 bucket in a central AWS account (which should be accessible only to a limited number of user accounts); see the sketch after this list.
  • When working with Azure, it is recommended to enable Azure Monitor in the first phase, in order to audit all access to resources and actions performed inside the subscription. Later on, when the environment expands, you may consider services such as Azure Security Center and Azure Sentinel for auditing purposes.
  • Google Cloud Logging – It is recommended to enable auditing on all GCP projects and forward the audit logs to a central GCP project (which should be accessible only to a limited number of user accounts).
  • Oracle Cloud Infrastructure Audit service – It is recommended to enable auditing on all compartments and forward the audit logs to the Root compartment (which should be accessible only to a limited number of user accounts).
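A minimal boto3 sketch of the CloudTrail recommendation above (the trail and bucket names are placeholders):

```python
# A minimal sketch: creating a multi-region CloudTrail trail that ships
# audit logs to a central S3 bucket.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

cloudtrail.create_trail(
    Name="org-audit-trail",                    # assumed trail name
    S3BucketName="central-audit-logs-bucket",  # assumed central bucket
    IsMultiRegionTrail=True,                   # audit all regions
    EnableLogFileValidation=True,              # detect tampering with log files
)
cloudtrail.start_logging(Name="org-audit-trail")
```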

Budget Control

It is crucial to set a budget and budget alerts for any cloud account in the early stages of working with the cloud environment. This is important in order to avoid scenarios in which high resource consumption happens due to human error, such as purchasing or consuming expensive resources, or Denial of Wallet scenarios, where external attackers breach an organization's cloud account and deploy servers for Bitcoin mining.

Common examples of budget control management for various cloud providers:

  • AWS Consolidated Billing – Configure a central payer account for all the AWS accounts in the organization, to which billing data is forwarded (and which should be accessible only to a limited number of user accounts).
  • GCP Cloud Billing Account – Central repository for storing all billing data from all GCP projects.
  • Azure Cost Management – An interface for configuring budgets and budget alerts for all Azure subscriptions in the organization. It is possible to consolidate multiple Azure subscriptions into Management Groups in order to centrally control budgets for all subscriptions.
  • Budget on Oracle Cloud Infrastructure – An interface for configuring budget and budget alerts for all compartments.
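As an illustration of budget alerts, here is a hedged sketch using the AWS Budgets API (the account ID, amount and e-mail address are placeholders); the other providers' budget interfaces follow the same idea:

```python
# A hedged sketch: a monthly cost budget with an email alert at 80% of the limit.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-cloud-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,                 # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```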

Secure access to cloud environments

In order to avoid inbound access from the Internet to resources in cloud environments (virtual servers, databases, storage, etc.), it is highly recommended to deploy a bastion host, which will be accessible from the Internet (SSH or RDP traffic) and will allow access and management of resources inside the cloud environment.

Common guidelines for deploying Bastion Host:

As the usage of cloud environments expands, we can consider deploying a VPN tunnel from the corporate network (Site-to-Site VPN) or allowing client VPN access from the Internet to the cloud environment (such as an AWS Client VPN endpoint, Azure Point-to-Site VPN or Oracle Cloud SSL VPN).
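As a small illustration of the bastion-host pattern, the following hedged boto3 sketch (the VPC ID and corporate IP range are placeholders) creates a security group that only accepts SSH from the corporate network:

```python
# A minimal sketch: a security group that only allows SSH to the bastion host
# from the corporate network.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

sg = ec2.create_security_group(
    GroupName="bastion-ssh",
    Description="SSH access to the bastion host from the corporate network only",
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",  # example corporate range
                          "Description": "Corporate office"}],
        }
    ],
)
```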

Managing compute resources (Virtual Machines and Containers)

When choosing to deploy virtual machines in a cloud environment, it is highly recommended to follow these guidelines:

  • Choose an existing image from a pre-defined list in the cloud providers’ marketplace (operating system flavor, operating system build, and sometimes an image that includes additional software inside the base image).
  • Configure the image according to organizational or application demands.
  • Update all software versions inside the image.
  • Store an up-to-date version of the image (“Golden Image”) inside the central image repository in the cloud environment (for reuse).
  • In case the information inside the virtual machines is critical, consider using managed backup services (such as AWS Backup or Azure Backup).
  • When deploying Windows servers, it is crucial to set complex passwords for the local Administrator’s account, and when possible, join the Windows machine to the corporate domain.
  • When deploying Linux servers, it is crucial to use SSH Key authentication and store the private key(s) in a secure location.
  • Whenever possible, encrypt data at rest for all block volumes (the server’s hard drives / volumes).
  • It is highly recommended to connect the servers to a managed vulnerability assessment service, in order to detect software vulnerabilities (services such as Amazon Inspector or Azure Security Center).
  • It is highly recommended to connect the servers to a managed patch management service in order to ease the work of patch management (services such as AWS Systems Manager Patch Manager, Azure Automation Update Management or Google OS Patch Management).
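One of the guidelines above suggests keeping an up-to-date "Golden Image" for reuse. A hedged boto3 sketch (the instance ID and image name are placeholders) that captures a patched, configured server as an image and tags it:

```python
# A hedged sketch: capturing an updated server as a "Golden Image" (AMI)
# and tagging it for reuse.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",         # the patched, configured server
    Name="golden-image-base-2024-06",
    Description="Hardened and fully patched base image",
)
ec2.create_tags(
    Resources=[image["ImageId"]],
    Tags=[{"Key": "Environment", "Value": "Shared"},
          {"Key": "Purpose", "Value": "GoldenImage"}],
)
```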

When choosing to deploy containers in a cloud environment, it is highly recommended to follow these guidelines:

Storing sensitive information

It is highly recommended to avoid storing sensitive information, such as credentials, encryption keys, secrets, API keys, etc., in clear text inside virtual machines, containers, text files or on the local desktop.

Sensitive information should be stored inside managed vault services such as:
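For example, with AWS Secrets Manager (one of the managed vault options; the secret name and value below are placeholders), secrets are written once and read back at runtime instead of being kept in clear text on the server:

```python
# A minimal sketch: storing a database password in a managed vault and
# reading it back at runtime.
import boto3

secrets = boto3.client("secretsmanager", region_name="eu-west-1")

# Store the secret once (e.g. from a deployment pipeline)
secrets.create_secret(
    Name="prod/app/db-password",          # assumed secret name
    SecretString='{"username":"app","password":"CHANGE-ME"}',
)

# Applications retrieve it at runtime; nothing sensitive is stored on disk
value = secrets.get_secret_value(SecretId="prod/app/db-password")
print(value["SecretString"])
```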

Object Storage

When using Object Storage, it is recommended to follow these guidelines:

  • Avoid allowing public access to services such as Amazon S3, Azure Blob Storage, Google Cloud Storage, Oracle Cloud Object Storage, etc.
  • Enable access auditing on Object Storage and store the access logs in a central account in the cloud environment (which should be accessible only to a limited number of user accounts).
  • It is highly recommended to encrypt all data at rest inside Object Storage and, when there is a business or regulatory requirement, to encrypt the data using customer-managed keys.
  • It is highly recommended to enforce HTTPS/TLS for access to object storage (users, computers and applications).
  • Avoid putting sensitive information in object storage bucket names, since bucket names are globally unique and exposed through DNS.
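A hedged boto3 sketch of the guidelines above for an S3 bucket (the bucket name is a placeholder): block public access, enable default encryption at rest and require HTTPS/TLS through a bucket policy:

```python
# A hedged sketch: hardening a single S3 bucket per the guidelines above.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-private-bucket"   # placeholder

s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)
```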

Networking

Advanced use of cloud environments

Advanced use of the cloud means consuming managed services rather than maintaining servers, operating systems, updates/patches, backups and availability, assuming managed services in cluster or replica mode are chosen.

  • Use Infrastructure as Code (IaC) in order to ease environment deployments, reduce human error and standardize deployments across multiple environments (Prod, Dev, Test); see the sketch below.

Common Infrastructure as Code alternatives:
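Whichever tool is chosen, the idea is the same: the environment is described in a template and created through an API call, so the same "recipe" can be reused for Prod/Dev/Test. A hedged sketch that deploys a trivial CloudFormation template with boto3 (the stack name and template are illustrative only):

```python
# A hedged illustration of the Infrastructure as Code idea: the environment is
# described as a template and deployed through an API call. The template below
# only creates a tagged S3 bucket.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: Environment
          Value: Dev
"""

cloudformation = boto3.client("cloudformation", region_name="eu-west-1")
cloudformation.create_stack(
    StackName="dev-logs-stack",      # assumed stack name
    TemplateBody=TEMPLATE,
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="dev-logs-stack")
```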

Summary

To sum up:

Plan. Know what you need. Think scale.

If you use the best practices outlined here, taking off to the cloud for the first time will be an easier, safer and smoother ride than you might expect.

Additional references

Top Six Cloud Myths Debunked

Organizations considering migrating to cloud services encounter many myths along the way. In most cases, the myths are based on the FUD factor (Fear, Uncertainty and Doubt).

In this post, I will review some of the common myths about the cloud and see if they stand up to reality.

Myth #1: The cloud is less secure than on premise, or the cloud is more secure than on premise

The truth is that they are both right and both wrong. The real answer lies somewhere in the middle.

Managed services (SaaS) such as SAP SuccessFactors, Oracle CRM, Office 365, Salesforce, etc., cannot really be compared to similar services provided on premise.

On the one hand, the customer shifts the burden of security and maintenance to the cloud provider; on the other, mature cloud providers (such as those mentioned above) invest huge amounts of money (compared to most organizations) in information security, penetration testing, audit trails and constant training for their support and information security teams.

The comparison of IaaS solutions and on premise models is different. According to the shared responsibility model, customers get access from the operating system and above, and they are solely responsible for everything that happens inside the operating system. This includes backups, hardening, updates, authorization, access control and defense against attacks.

The IaaS model is similar to traditional virtualization in the local data center, but top IaaS providers offer access to various managed services in order to ease the burden of server maintenance (from managed databases, through backup services, to patch management, vulnerability management, etc.). Bottom line – It is possible to reach the “sweet spot” where using the cloud makes an organization more secure than staying on premise, as long as we are familiar with the cloud provider's services and capabilities and learn how to make the most of these services on an ongoing basis.

Myth #2: The cloud is more expensive than on premise or the cloud is less expensive than on premise

Again, the truth can be found somewhere in the middle.

In order to make an accurate comparison of on premise and cloud solutions, we need to take into consideration several factors. These include the cost of hardware, several years of maintenance, licenses, backup and archive storage, system availability and most important – manpower costs for system maintenance, including training of IT, development and security teams.

When comparing managed services, such as managed databases vs. manual maintenance of on premise databases, the calculation looks something like this: the cloud saves maintenance costs, hardening, patching/updating, and even backups, if they are part of the managed service. That translates into significant savings compared to on premise environments, and allows organizations to consume services without the burden of maintaining the systems.

When comparing IaaS environments, the picture changes. The costs of cloud servers, in a pay-as-you-go model, are in most cases higher than comparable on-premise models (when comparing the same amount of vCPU and memory). In order to cut costs in the IaaS model, we need to understand whether we are dealing with a high-performance workload with variable run time, or with servers operating 24×7 for a long period of time. If we are dealing with the long term, it is highly recommended to purchase Reserved Instances for 1 or 3 years in advance.

Another alternative for saving server costs in an IaaS model is to choose the Spot model and save up to 90% of the price, assuming the service itself is not fault-sensitive and can recover automatically, such as batch processing, image processing, etc. The best alternative for saving server costs requires re-architecting our systems (as much as possible) and migrating to systems built on a micro-service architecture, or using Serverless services, cutting resource and monthly costs to the minimum required.

Myth #3: The cloud is complex or Cloud migration is always complex

Migrating existing services from on premise to managed services in a SaaS model varies from one cloud provider to another, which makes it hard to generalize.

Many SaaS vendors publish guidelines and tools to assist organizations with the migration process. Some examples are SalesForce, Oracle CRM, SAP, Office 365, Google G Suite, etc.

When migrating to PaaS services, there are many guidelines and tools to assist organizations with the migration process. Some examples include AWS Database Migration Service, Azure Database Migration Service, Google BigQuery Data Transfer Service, etc.

Migrating systems to IaaS model requires training IT personnel on how cloud providers implement infrastructure services, such as VM deployment, setting network access rules, connecting to storage, settings permissions, etc.

Organizations who train their IT, networking and information security teams on working with IaaS and PaaS environments will be able to make the migration process easier. There are many low cost online courses to assist closing the required knowledge gap.

If you want to migrate really easily (“fast and dirty”), you can always choose to migrate systems using the “lift & shift” method, at least during the first phase, although it is not considered a cost-effective solution; sometimes similar hardware in an on premise environment is cheaper than the same hardware in an IaaS environment. But this method gives the organization access to the migrated environments, and later on it can adapt the resources the system actually needs and change the system architecture, for example by replacing servers with managed services. Bottom line – It all begins with the organization's willingness to adapt to working in cloud environments and, of course, management support for the cloud migration.

Myth #4: Multi-Cloud will prevent Vendor Lock-In

When organizations take their first steps toward working with public cloud solutions, it makes sense to choose a single IaaS provider, in order to allow the organization to train employees, plan a cloud migration strategy, and begin the actual cloud migration and the deployment of new environments.

The fear of vendor lock-in, or of the cloud provider going bankrupt, is not unreasonable. However, the most practical compensating control is to choose one of the hyper-scale cloud providers, which mitigates the risk of the cloud provider going bankrupt.

Theoretically, selecting multiple IaaS providers might allow migration between providers, but in reality, moving to multi-cloud environments creates many challenges. These include the requirement to enforce central authentication, requirements to understand how each cloud provider implements services differently (such as storage, network, compute, etc.), understanding how to deploy new environments over multiple cloud providers’ infrastructure, understanding how to enforce logging/auditing and how to centrally handle incident response processes over multiple providers, etc.

If we want to mitigate the risk of vendor lock-in and allow the organization to move environments between cloud providers, we need to plan our infrastructure architecture ahead of time, from the very beginning, and base the architecture on containers or Kubernetes. As long as services are wrapped in containers, you will be able to deploy and run them over multiple cloud providers, though you still need to take into consideration the integration with each cloud provider's ecosystem, such as storage, monitoring, message queuing services, etc. Bottom line – deploying production environments over multiple cloud providers requires a deep understanding of each cloud ecosystem. Instead of resolving vendor lock-in risks, it can create high overhead for the organization, which may not be justified relative to the risk of vendor lock-in. Moving to container-based architectures might ease the organization's ability to work with multiple cloud providers.

Myth #5: Auditing cloud environments is hard

Correct. But only partially.

Cloud migration requires customers to understand that they may not be able to conduct on premise audits of the cloud providers' data centers, as we used to do with hosting providers in the past. On the other hand, mature cloud vendors provide us with compensating audit controls, such as:

Bottom line – It is possible, and highly recommended, to constantly audit cloud environments. Choosing one of the mature cloud providers gives access to various compensating controls that help ensure cloud environments are secure and comply with standards and regulations.

Myth #6: Migration to the cloud will cut manpower and cause employee job security issues

This is perhaps one of the most common myths, but it is inaccurate.

It is true that IT and information security teams will need to undergo training to work with various services and adapt existing knowledge from the on premise environments to cloud environments. But here lies the great potential.

If in the past we used to have deep knowledge in a specific field, such as operating systems, networking, storage, databases, information security, etc., today organizations migrating to the cloud are looking for employees with multidisciplinary knowledge.

The migration from on premise models to the cloud will require organizations to streamline. Although migration to SaaS or managed services requires less IT personnel, the migration to IaaS/PaaS environments requires a shift in the mindset of IT teams. They will need to adapt existing knowledge from manual work (server maintenance, databases, environment deployments, etc.) to automation: writing code (without needing to become professional developers), deploying environments with Infrastructure as Code, and so on. This ability to adapt will be in high demand. Organizations will seek such professional IT personnel, and existing IT teams who adapt to the changing world will become an even more valuable asset to their organizations.

Smart use of cloud services

Many organizations are talking about, and beginning to embrace, system migration to the cloud as part of a global IT strategy to use public clouds.
The migration from on premise to the public cloud allows organizations to choose from a variety of alternatives, each with its own benefits, knowledge requirements and pricing model (as compared to the on premise licensing model).
In this article, I shall review the different methods, with their pros and cons, and try to explain which method suits which scenario.

Lift & Shift Migration
In this method, the customer chooses an existing system in the on premise environment, reviews the resources required to run it in a cloud environment (number of CPUs, amount of memory and required disk space) and the operating system version (assuming the cloud provider has an image for this operating system), checks with the software vendor that the system can run in a cloud environment (licensing issues), and deploys all software components on a virtual server in the cloud (or migrates the entire system, assuming the cloud provider offers a managed service for system migration).

This is the most common and simplest method (compared to the other alternatives) for migrating to the cloud, and most cloud vendors (Infrastructure as a Service) support it. However, we should take into consideration that, cost-wise, this method is considered expensive (in terms of cost and resource usage) compared to purchasing physical hardware for 3-5 years in advance in the on premise environment.
The most common ways to lower the server cost are:
• Resizing the server (number of CPUs / amount of memory) to match actual usage
• Purchasing a reserved instance for 1 or 3 years in advance
• Using Spot instances for servers that do not require 24×7 availability, or for applications that can survive temporary downtime without harming the entire service

Moving to Micro-Services and Containers
In this method, the organization migrates from a monolith application (a system where all components rely on each other and must be deployed together) to micro-services based development, where each component runs independently (usually inside its own container) and can be replaced, upgraded and scaled as needed, independently from the rest of the system components.
It is possible to run containers on virtual servers (where the entire management, update and scaling effort is the customer's responsibility) or as part of a managed service (such as a managed Kubernetes cluster service).
This method requires the development teams to know how to package their applications inside containers, take care of scaling and of monitoring container activity (and the communication between containers), and take care of security topics (such as who can access a container, or whether the communication between containers is encrypted, etc.)
This method is suitable for organizations who wish to change their current application architecture or begin developing new applications. Modern applications are developed today as containers, which allows the customer to migrate between the on premise environment and the public cloud, and, with proper adjustments, between different cloud providers (once topics such as connectivity to the current cloud vendor's services, e.g. message queuing, storage and logging, are solved).

Moving to Serverless / Function as a Service
In this method, the customer isn't in charge of operating system maintenance, system availability or scale. Because the entire infrastructure is managed by the cloud vendor, the vendor takes care of scaling according to the application's needs.
This method is suitable for event-based services with short running times (a few seconds to a few minutes). As part of moving to modern applications, the typical flow is choosing a development language, uploading the code to a managed cloud environment (Serverless), selecting the desired compute power (amount of memory, which affects the number of CPUs) and creating triggers that run the function.
It is possible to embed Serverless capabilities, as part of modern micro-services architecture.
The pricing model for this method is based on the amount of time the function was running and the amount of memory used for running the function.
Common use cases for Serverless – image processing, data analysis from IoT devices, etc.
This method is not suitable for every application (due to the short running intervals), and not every development language is currently supported by every cloud vendor.
For example:
• AWS Lambda (currently) natively supports the following languages: Java, Go, PowerShell, Node.JS, C#, Python, Ruby
• Azure Functions (currently) natively supports the following languages: Java, JavaScript, C#, PowerShell, Python, TypeScript
• Google Cloud Functions (currently) natively supports the following languages: Python, Go, Node.JS
• Oracle Functions (currently) natively supports the following languages: Java, Python, Node.JS, Go, Ruby
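For illustration, here is a minimal Python handler of the kind these services run. It follows the AWS Lambda/S3 event format and is only a sketch; the processing step is a placeholder:

```python
# A minimal sketch of a Serverless function: an AWS Lambda handler triggered
# by an event (for example an S3 upload), doing a short unit of work.
import json

def handler(event, context):
    # Each S3 event may contain several records (one per uploaded object)
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
        # ... short processing (e.g. creating a thumbnail) would go here ...
    return {"statusCode": 200, "body": json.dumps("processed")}
```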

Migration to managed services (SaaS / PaaS)
In this method, the organization chooses an existing SaaS (such as Messaging, CRM, ERP, etc.) or an existing PaaS (such as Database, Storage, etc.)
This method suits many scenarios in which the organization would like to consume an existing service without having to maintain the infrastructure (operating system, storage, backup, security aspects, etc.). After choosing a service, the organization migrates data to the managed service, configures proper access rights, sometimes configures VPN connectivity between the on premise and the cloud environment, configures backup (where the service supports this capability) and begins consuming the service.
The pricing model varies between cloud vendors (sometimes it is based on monthly pricing and sometimes on consumed compute power or consumed storage space).
Mature and transparent cloud vendors provide accurate monthly billing information.
Also, mature cloud vendors know how to handle privacy, legal and regulatory aspects (such as GDPR compliance and other privacy regulations) using data processing agreements.

Summary
In this article, I have reviewed the various methods of using cloud services wisely. As we can see, not every method suits every scenario or every organization, but there is no doubt that the future is heading toward the cloud and managed services.
My recommendation for organizations – focus on what brings your organization business value (such as banking, commerce, retail, education, etc.), train your IT and development teams for the coming technological changes, and migrate the responsibility for maintaining your organization's infrastructure to vendors who specialize in it.

Cloud Services – Evolution and Future Trends

Cloud services are no longer a buzzword; they are an established fact. Small and large organizations have been leading this revolution in the IT industry for almost a decade, some migrating small environments and conducting experiments, while others deploy their entire production environments using this model.

It is common to consider cloud services a continuation of the data center environment, and in fact this is where the concept evolved. And yet, traditional computing services, server farms, storage and even virtualization are fundamentally different from a true cloud. Let's evaluate the differences:

In the past, we used to call “cloud” any compute service with the following characteristics, as defined by NIST:

  • On-Demand Self-Service
  • Broad Network Access
  • Resource Pooling
  • Rapid Elasticity
  • Measured service

When looking deeper into the various cloud service models, such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service), we find that things are not always black or white: in some cases we come across services that we know for a fact are cloud services, yet we can't always say they include all the above characteristics.

A good example is file sharing services such as Dropbox: as private customers, we are not exposed to data that would enable us to measure the service (in terms of performance, or of billing versus storage capacity cost).

If we choose to deploy a “private cloud” inside our organizational data center, based on infrastructure such as VMware, OpenStack or the like, we expect all the above characteristics in our on premise environment as well.

 

Let's differentiate between cloud and hosting services

In the current IT industry there are many companies offering compute services, across the range between cloud services and hosting services.

Hosting companies (or managed service providers) usually offer the customer the following capabilities:

  • Compute environments – Such as physical servers (in case of special hardware requirements), virtual servers, storage and network equipment (Routers, Firewalls, VPN Gateway, etc.)
  • Managed services – Such as web hosting for marketing or commercial web sites, email services, file sharing services and organizational systems such as CRM as a service.
  • Backup and DR as a service.
  • Managed support/IT services.

Hosting companies might offer the customer the ability to increase the number of servers and, in some cases, even to choose servers in data centers abroad (in case we would like to place the data/servers close to the end customer).

For SMB or enterprise organizations making their first move to the cloud, or for customers who wish to outsource their IT services to external providers, there isn't much of a difference between choosing a hosting/managed service and choosing a public cloud service.

The differences between hosting and cloud services begin when trying to deploy entire environments in the cloud, based on architectures emphasizing service and platform (SaaS and PaaS), and less on infrastructure as a service (IaaS).

In this configuration, the system is developed around dynamic scaling capabilities: environments are deployed for a short amount of time, and servers and infrastructure are created for a specific purpose and cease to exist a few minutes after the entire process completes.

This model is called “Cloud Native Applications”. It allows us to avoid committing to a pre-defined amount of infrastructure, specific task management, compatibility, server health checks, etc. What is the role of each server or platform if it may be destroyed within a second? In this model the infrastructure is not what matters, only the service the system is meant to provide.

Unlike hard-coded infrastructure management, there is a new concept – “Infrastructure as Code”. Environments are written as “recipes”, sent to the cloud provider using APIs, and created instantly and on the fly.

A few examples of the efficiencies of this model: a large American service provider destroys its entire IT environment in the cloud and deploys an up-to-date mirror environment within a few minutes, instead of updating each and every server. Another American service provider automatically increases the number of servers ahead of peak hours, based on application monitoring, and after peak hours all the new servers vanish.

This change is not magic, but the result of cloud-efficient planning of systems and applications, training dedicated teams with the relevant capabilities, and understanding the cloud provider's services, billing, prioritization and the constant changes in the management interfaces.

 

Process of migrating systems to the public cloud

Infrastructure as a Service (IaaS) allows organizations to perform “Lift & Shift” (copying existing systems to the cloud with minor changes) from the on premise environment to the public cloud, as part of migration processes to the cloud.

Most organizations quickly find out that the “Lift & Shift” strategy is indeed easier as a first stage, but in the long term it is a very bad economic decision, carrying the same challenges that organizations struggle with today: wasted hardware resources, un-optimized operating systems and code running on servers, distributed management difficulties, etc.

At later stages, organizations that migrated systems to the public cloud begin to tune their cloud environments by measuring the resource usage of their virtual servers and adapting the VM instance type to the actual use in terms of CPU/memory/storage.

AWS presentations illustrate the cost evolution organizations go through as they migrate to the public cloud.

The future is here

Today, startup companies and enterprise organizations are developing applications for the cloud, agnostic to the hardware or infrastructure underneath.

In the past, organizations migrated from developing on physical servers to virtual servers; today, organizations are migrating from developing monolith applications to micro-service based applications and even Serverless applications (code running inside a managed compute environment, without the need to manage infrastructure resources such as servers, storage, etc.)

These changes in the development world are crucial to understanding what a cloud service is. Today we rely less on the NIST definition of a cloud service, or on providers offering Infrastructure as a Service (as most hosting providers do); instead, a cloud service is characterized by the following capabilities:

  • A collection of APIs
  • The ability to bill services/resources by their actual usage
  • Services managed using APIs (provision, decommission, start/stop, etc.) – see the short example below
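For example (with a placeholder instance ID), stopping and starting a server is just another API call:

```python
# A small illustration of the "everything is an API" idea: lifecycle
# operations on a server are plain API calls.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"   # placeholder

ec2.stop_instances(InstanceIds=[instance_id])     # take the server down
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.start_instances(InstanceIds=[instance_id])    # bring it back
```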

 

The bottom line

Today there are many providers who wrap VMware infrastructure with a friendly user interface, allowing the customer to choose the VM type (in terms of CPU/memory) and the number of servers to consume. But this does not scale far enough, and it doesn't give the customer the flexibility to scale up or scale down to hundreds of servers automatically, within seconds, over multiple geographical regions.

A cloud provider who supports “Cloud Native Applications” enables the customer to connect automation mechanisms and build/deployment processes to the cloud provider's infrastructure using APIs, in order to provision and change compute environments, deploy micro-services based systems, and even deploy and run Serverless workloads.

The next time you are considering a service provider, whether a hosting provider (sophisticated as it might be) or a public cloud provider capable of running hundreds of servers over multiple geographic regions within minutes, hands-free, supporting micro-services and Serverless applications, with API connectivity, simply present your requirements to the service provider and choose the most suitable one.

 

 

This article was written by Eyal Estrin, cloud security architect and Vitaly Unic, application security architect.

Benefits of using managed database as a service in the cloud

When using public cloud services for relational databases, you have two options:

  • IaaS solution – Install a database server on top of a virtual machine
  • PaaS solution – Connect to a managed database service

In the traditional data center, organizations had to maintain the operating system and the database by themselves.

The benefits are very clear – full control over the entire stack.

The downside – The organization needs to maintain availability, license cost and security (access control, patch level, hardening, auditing, etc.)

Today, all the major public cloud vendors offer managed services for databases in the cloud.

To connect to the database and begin working, all a customer needs is a DNS name, port number and credentials.
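For illustration, here is a hedged Python sketch of connecting to a managed PostgreSQL instance; the endpoint, database name and credentials are placeholders, and the psycopg2 client library is assumed to be installed:

```python
# A minimal sketch: connecting to a managed PostgreSQL database. All that is
# needed is the DNS name, port and credentials provided by the cloud service.
import psycopg2  # assumes the psycopg2 client library is installed

conn = psycopg2.connect(
    host="mydb.abc123xyz.eu-west-1.rds.amazonaws.com",  # DNS name of the managed DB
    port=5432,
    dbname="appdb",
    user="app_user",
    password="CHANGE-ME",
    sslmode="require",         # encrypt the connection in transit
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```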

The benefits of a managed database service are:

  • Easy administration – No need to maintain the operating system (including patch level for the OS and for the database, system hardening, backup, etc.)
  • Scalability – The number of virtual machines in the cluster will grow automatically according to load, in addition to the storage space required for the data
  • High availability – The cluster can be configured to span across multiple availability zones (physical data centers)
  • Performance – Usually the cloud provider installs the database on SSD storage
  • Security – Encryption at rest and in transit
  • Monitoring – Built-in the service
  • Cost – Pay only for what you use

Not all features available on the on-premises version of the database are available on the PaaS version, and not all common databases are available as managed service of the major cloud providers.

Amazon RDS

Amazon managed database services currently (as of April 2018) support the following database engines:

Azure Managed databases

Microsoft Azure managed database services currently (as of April 2018) support the following database engines:

Google Cloud SQL

Google managed database services currently (as of April 2018) support the following database engines:

Oracle Database Cloud Service

Oracle managed database services currently (as of April 2018) support the following database engines:

Cloud Providers' Service Limits

When working with cloud service providers, you may notice that at some point there are service / quota limitations.

Some limits are per account / subscription; some of them are per region and some limits are per pricing tier (free tier vs billable).

Here are some of the most common reasons for service / quota limitations:

  • Performance issues on the cloud provider's side – running a lot of virtual machines in the same data center requires a lot of resources from the cloud provider
  • Avoiding spikes in usage – protect from a situation where one customer consumes a lot of resources that might affect other customers and might eventually cause denial of service

For more information about default cloud service limits, see:

Default limitations can be changed by contacting the cloud service provider’s support and requesting a change to the default limitation.
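On AWS, for example, quotas can also be inspected and increase requests submitted through the Service Quotas API. A hedged boto3 sketch (the quota code shown is commonly cited as the EC2 On-Demand vCPU limit, but treat it as an example):

```python
# A hedged sketch: checking a quota's current value and requesting an increase.
import boto3

quotas = boto3.client("service-quotas", region_name="eu-west-1")

current = quotas.get_service_quota(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",   # example: Running On-Demand Standard instances (vCPUs)
)
print("Current limit:", current["Quota"]["Value"])

quotas.request_service_quota_increase(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",
    DesiredValue=256.0,
)
```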

For instructions on how to change the service limitations, see:

Best practices for using AWS access keys

AWS access keys enable us to use programmatic access (SDK) or AWS CLI services in a manner similar to using a username and password.

AWS access keys have account privileges – for better and for worse.

For example, if you save access keys (credentials) of a root account inside code, anyone who uses this code can totally damage your AWS account.

Many stories have been published about security breaches due to access key exposure, especially combined with open source version control systems such as GitHub and GitLab.
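A hedged illustration of the safer alternatives: keep keys out of code and let the SDK resolve credentials from the environment, an attached IAM role, or short-lived STS credentials (the role ARN below is a placeholder):

```python
# A hedged illustration: never embed access keys in code.
import boto3

# Bad (do NOT do this): hard-coded keys end up in version control
# s3 = boto3.client("s3", aws_access_key_id="AKIA...", aws_secret_access_key="...")

# Good: no credentials in code; boto3 uses environment variables, the shared
# credentials file, or an attached IAM role
s3 = boto3.client("s3")

# Also good: short-lived credentials from an assumed role (placeholder ARN)
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/deploy-role",  # placeholder
    RoleSessionName="ci-session",
)["Credentials"]
```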

In order to avoid security breaches, here is a list of best practices for securing your environment when using access keys: