How to Secure Cloud Infrastructure
This is a copy of an article I wrote for Atlantic.Net – All Rights Reserved
Demand for cloud computing has grown exponentially in 2020; the Covid-19 global pandemic has seen many businesses fast-track their cloud migration strategies and transition business-critical workloads to the cloud. Atlantic.Net has been providing managed hosting and cloud services for the last 25 years, and over that time our engineers have evolved the Atlantic Cloud Platform (ACP) into the award-winning platform our customers enjoy today.
Security has been at the forefront of the ACP design process over the years, allowing our engineers to create a high-performance public cloud service and private cloud hosting solutions that protect our clients’ investment. Security-defined infrastructure has quickly become the top priority for senior leadership teams within any modern digital organization, and our ability to deliver highly secure, robust, and reliable cloud computing platforms is an absolute necessity in today’s security-conscious global market.
In this whitepaper, we will examine how Atlantic.Net’s public and managed cloud services are secured with physical, technical, and administrative safeguards at all of our regional data center locations. We will share our insights on the available technology that protects our hyper-converged infrastructure and enables you to embrace the rapidly changing security landscape of cloud computing.
The security of public cloud infrastructure services revolves around the following key security concepts:
- Cloud Security as a Service
- Access Controls & Auditing
- Network access controls and firewalls
- Protections against Denial of Service Attacks
- Resource Sharing and Isolation
- Data Encryption and Key Management
- Security misconceptions
Cloud Security as a Service
A cloud provider’s security policy should be comprehensive and encompass the entirety of both public and private cloud platforms. Cloud security responsibilities should be transparent from the very beginning, and a strongly defined segregation of duties and ownership should apply to all who are administering the technical solutions.
Some of the larger cloud providers work to a one-size-fits-all approach. This requires the client to own the security configuration as well as the responsibility for data protection. Other cloud providers may offer several identity and access management tools, but they still push the security responsibility to the customer.
This approach may increase the likelihood of misconfigured cloud services, a security problem that can potentially lead to disastrous data breaches, like those commonly reported in the media. It is therefore essential to understand the responsibility matrix, sometimes known as a RACI matrix (responsible, accountable, consulted, and informed): ascertain what security controls the cloud provider is responsible for, and what security controls the client must look after.
Atlantic.Net approaches things a little differently. For all of our clients, Atlantic.Net is responsible for infrastructure security; this includes the network, storage, and servers. Our highly skilled engineering teams harden and update our cloud services to conform to industry best practices, allowing our customers to manage their VMs from a robust and reliable hosting platform.
A customer will typically be responsible for the operating system, applications, access control lists, and application certificate management; these may be considered typical day-to-day system administrator tasks. Atlantic.Net is responsible for the change control process needed for the cloud service, such as infrastructure upgrades, firmware updates, and security patching.
Some of our clients opt for the fully managed cloud service. With this option, our security operations center (SOC) will manage and maintain the servers. For our entire cloud infrastructure, we follow (and recommend) the principle of least privilege. This security control measure only grants users the essential privileges required to complete a particular task.
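The principle of least privilege can be sketched as a simple role-to-permission mapping with a default-deny check. The role names and permissions below are hypothetical examples, not Atlantic.Net's actual access model:

```python
# Minimal least-privilege sketch: each role is granted only the
# permissions required for its task; every other action is denied.
ROLE_PERMISSIONS = {
    "backup-operator": {"storage:read", "snapshot:create"},
    "web-admin":       {"vm:restart", "firewall:read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; allow only what the role explicitly needs."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Note the default: an unknown role or an unlisted action is always refused, so forgetting to grant a permission fails safe.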
The core elements of Atlantic.Net’s public cloud security are defined by the way we architect the infrastructure. Here are some of the ways we protect our customers:
- Public cloud with private networking – each of our clients has an isolated networking topology that spans the client’s internal local area network. Our cloud network is a managed service and can be configured to allow private access to other Atlantic.Net resources using SSH keys. You control the public and private IP allocations via the cloud console
- Cloud Interconnect – Console access is provided over the web using HTTPS. You can also connect directly to your server via SSH or RDP. For our managed cloud hosting, we can provide various secure connectivity methods for clients to consume our cloud services, such as a Virtual Private Network (VPN)
- Secured Virtual Appliances – For our managed cloud customers, additional safeguards are offered by Atlantic.Net, such as an Intrusion Prevention System (IPS). The IPS screens individual data packets and monitors trends against a threat database. Our next-gen Web Application Firewalls (WAF) act as an ingress/egress gateway to the network. This data is fed into a logging and monitoring toolset to provide real-time analytics for network and infrastructure services. The WAF integrates with the IPS; together they complement these services by adding additional network edge protection, DDoS protection, and SSL termination, all of which greatly enhance cloud security
- Carrier Network Security – Our carefully chosen network partners provide high-capacity and high-performance secured networking interconnects. The networks span all our data center locations, and we can tailor the service for your individual compliance needs, such as additional security requirements demanded by GDPR or HIPAA compliance regulations.
Network access controls and firewalls
Atlantic.Net’s cloud firewalls are the first line of defense to a client’s private and public cloud networks. The firewall permits access to network resources for ingress and egress traffic and distributes traffic to dedicated endpoints of the network. Firewall security rules are configurable to simply permit or deny network traffic from traversing into the network and can be zoned to allow different access privileges to different parts of the subnetwork.
Best practice for egress (outgoing) traffic recommends allowing all egress traffic, with the assumption that the infrastructure will only pass traffic to expected targets. This is sometimes referred to as the implied allow egress rule, which essentially means all externally bound traffic is permitted by default. For our managed private cloud hosting, egress access is completely configurable.
Best practice for ingress (incoming) traffic recommends restricting all ingress traffic to the network by default; this is sometimes referred to as the implied deny ingress rule. Configuration amendments can be made to change the exceptions that apply to your organization’s ingress and egress traffic. For our managed private cloud hosting, ingress access is completely configurable.
The Atlantic.Net private and public clouds can offer:
- Common exceptions – such as TCP port 443 (HTTPS), DNS resolution, and NTP (time servers)
- Exceptions to specific target machines – manually configured exceptions for specific servers that need to process ingress and egress traffic, such as web servers, cloud backup, content servers (managed cloud hosting only)
- Exceptions to application ports groups – such as WSUS, antivirus, Active Directory, WinRM, LDAP (managed cloud hosting only)
- Exceptions to specific IP ranges – such as a cluster of web-facing application nodes (managed cloud hosting only)
- Exceptions from specific source IP address – such as a database or an external 3rd party payment provider service (managed cloud hosting only)
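The implied deny ingress and implied allow egress rules described above can be sketched as a small rule evaluator. The exception ports are illustrative examples drawn from the list above, not a real Atlantic.Net configuration:

```python
# Sketch of default firewall behaviour: implied deny for ingress,
# implied allow for egress, with explicit ingress exceptions.
INGRESS_EXCEPTIONS = {443, 53, 123}   # HTTPS, DNS, NTP (example ports)

def permit(direction: str, port: int) -> bool:
    if direction == "egress":
        return True                    # implied allow egress rule
    return port in INGRESS_EXCEPTIONS  # implied deny ingress rule

permit("ingress", 443)   # True  - HTTPS exception
permit("ingress", 23)    # False - telnet blocked by default
permit("egress", 8080)   # True  - all outgoing traffic allowed
```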
For cloud services that require internet-facing connectivity, several safeguards can be applied to help improve cloud security access controls:
- HTTPS SSL/TLS – these security standards are essential for web content servers and work seamlessly with HTTPS load balancing. You are required to provide domain credentials and purchase an SSL certificate from a certificate authority, but once installed, the certificate provides the authenticity of your web server instances.
- Encryption of network traffic – It is important to remember that firewalls do not protect sensitive information or compliance data, such as login credentials and confidential files. You will need to ensure all network traffic is encrypted internally to prevent snooping and man-in-the-middle attacks.
- Firewalls – additional firewalls can be deployed with strict rules on ingress and egress traffic. This might be a software firewall or a web application firewall that resides between the frontend and the backend services.
- Bastion hosts – these servers provide an externally facing jump box into a secure virtual private network that has no external IP availability. SSH forwarding can be configured to route SSH traffic directly through the server, or the Bastion may be configured as a Windows terminal service that can manage the VPC and cloud assets directly. (managed cloud hosting only)
- SOCKS proxy – creating a SOCKS proxy adds a useful additional security layer. Operating below the application layer, a SOCKS proxy does not inspect HTTP traffic; instead, it relays arbitrary TCP/IP traffic and can enforce authentication, making it possible to traverse firewalls and connect to any number of servers through a single controlled point. (managed cloud hosting only)
- Limit CIDR ranges – Classless Inter-Domain Routing (CIDR) notation defines the size of an IP address block; choosing a small CIDR range restricts the number of IPv4 addresses and subnets available within the network. In production systems, this is heavily used to prevent unused IP address space from being “available” for exploitation. (managed cloud hosting only)
- IPsec VPN – connecting your existing on-premises network to our cloud network via IPsec adds greater levels of security by encrypting traffic in transit. Additional IPsec tunnels can be configured inside the VPC. (managed cloud hosting only)
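The CIDR-limiting idea above can be checked programmatically; Python's standard `ipaddress` module makes the subnet arithmetic explicit. The /28 block below is just an example range, not a real allocation:

```python
import ipaddress

# A /28 allocates only 16 addresses, keeping the usable IP space small
# so unused addresses are not left "available" for exploitation.
ALLOWED = ipaddress.ip_network("10.0.1.0/28")

def in_allowed_range(addr: str) -> bool:
    """Accept only addresses inside the deliberately small CIDR block."""
    return ipaddress.ip_address(addr) in ALLOWED

in_allowed_range("10.0.1.5")   # True  - inside the /28
in_allowed_range("10.0.2.5")   # False - outside the allowed block
print(ALLOWED.num_addresses)   # 16
```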
Protections against Denial of Service Attacks
One of the most serious threats to organizations that rely on revenue-generating websites is the risk of a denial of service attack. These cyberattacks have the potential to take websites offline for prolonged periods, resulting in the loss of revenue, reputational damage, and the possibility that customers will move to a competitor.
Distributed Denial of Service (DDoS) attacks typically attempt to exhaust cloud computing resources in three different ways:
- Bandwidth [Network or Layer 3 Attack] – an attack that monopolizes network bandwidth and floods a web server with too much traffic. This overloads the hardware resources and brings the servers down.
- Connection [SYN or Layer 4 attack] – in this type of DDoS attack, an attacker (usually a botnet) sends connection requests with spoofed source addresses, causing the web server to respond with a handshake and wait for replies until each session times out. Huge volumes of these half-open connections can bring a web server down and prevent legitimate traffic access.
- Application [HTTP GET or Layer 7 attack] – this form of attack occurs after the TCP handshake has completed. HTTP GET requests are made for multiple files on the server concurrently. This overwhelms the web server, causing legitimate requests to fail and resulting in a denial of service.
Cloud computing introduces several solutions to help eliminate the impact of DDoS. Hackers and exploiters will always target public IP address spaces, but, thanks to the size and scale of cloud computing infrastructure and specialist DDoS technology, cloud providers absorb, divert, and repel the overwhelming majority of DDoS attempts.
Additional edge network protections make this possible. An edge network is located on the periphery of the centralized virtual private network and feeds data in and out of the core network. Atlantic.Net can provide the following capabilities for edge protection for managed cloud hosting:
- Content Delivery Network (CDN) – a CDN is essentially a cached copy of data stored in a secured edge location. It contains a cache of regularly accessed data, such as web pages, specific images, and media. A CDN protects the internal network and applications, as data will always be delivered by the CDN if possible, resulting in minimal traffic routing directly to the backend services and thus improving security.
- Load Balancing – the load balancer can provide edge protection and can be configured to detect, drop, and terminate connections to prevent runaway resources. The web sockets have a configurable timeout, as well as bandwidth and connection protection protocols.
- TCP/SSL Proxy – a dedicated edge proxy can detect and drop UDP traffic floods and can prevent SYN floods by terminating the TCP/IP connectivity.
- Website optimization – Atlantic.Net web optimization streamlines the loading of your content by bundling JavaScript files, minimizing network connections, and dramatically compressing the size of your resources to boost performance. In addition, we utilize the extensive network built by our team to route traffic via the least congested areas.
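One common mechanism behind the connection protection described above is a token-bucket rate limiter, which absorbs short bursts while dropping connections that exceed a sustained rate. This is a minimal sketch with arbitrary example rates, not the actual edge implementation:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`; beyond that, drop connections
    that exceed `rate` tokens per second."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # over the limit: drop the connection
```

A limiter like this can sit in front of each backend: the first burst of requests passes, and sustained floods are shed before they exhaust server resources.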
Resource Sharing and Isolation
The ability to share and isolate cloud services is what makes cloud computing so appealing to our cloud customers. Sharing resources with authorized users is a wonderful benefit and offers significant flexibility within the workplace. It is great for accessing and sharing documents, storage, and system data.
Similarly, isolating resources by design can inherently bolster cloud security and protect cloud assets from compromise, and many businesses discover that these capabilities are often the first step towards regulatory compliance.
To secure the cloud assets, ensure that shared resources are correctly configured and that the principle of least privilege is followed when granting access to servers, users, and services. Public-facing services should be protected with stringent ingress firewall rules to a predefined IP range, and users should only have access to the data required. Securing cloud storage with SSH keys, complex passwords, and two-factor or multi-factor authentication is highly recommended.
For our managed cloud hosting customers, private networks can also share resources using direct peering at the Atlantic.Net data centers. This resource sharing typically includes:
- Sharing virtual machines’ internal IP addresses in all subnets – this essentially spans the network across Atlantic.Net regional data centers. VMs can share resources, and HA clusters and DR capabilities can be designed.
- Internal Load balancing IPs to all subnets – with this resource sharing capability, you can run active-active configurations wherein inbound traffic is load-balanced between servers in different regions. These configurations can use rules based on geo-location or server load.
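A load balancer's region-selection rule for the active-active setup above can be sketched as follows. The region names, load figures, and threshold are hypothetical, chosen only to illustrate the geo-location-plus-load logic:

```python
# Pick the target region for an inbound request: prefer the client's
# geo-matched region, but fail over to the least-loaded region when
# the preferred one is above a load threshold.
def choose_region(client_region: str, loads: dict,
                  threshold: float = 0.8) -> str:
    if loads.get(client_region, 1.0) < threshold:
        return client_region
    return min(loads, key=loads.get)   # least-loaded region wins

choose_region("us-east", {"us-east": 0.5, "us-west": 0.3})  # "us-east"
choose_region("us-east", {"us-east": 0.9, "us-west": 0.3})  # "us-west"
```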
The isolation of cloud services can be difficult to configure, but the security benefits outweigh the technical challenges of implementation. There are many ways that cloud services can be isolated, including:
- Isolation via multiple network interfaces – this approach has been popular on physical servers for decades and can be utilized when virtual network interfaces are bonded to separate domains. Traffic is controlled by static routes, resulting in an individual instance that can connect to a completely separate network. Data can be segregated for management, storage, and backup networks, etc.
- IP address isolation and VPN tunneling – this technique is most commonly used for increased privacy or systems with sensitive data payloads. VPN tunnels over SSH can be configured to connect backend services, creating a secured direct connection inside a subnet.
Atlantic.Net engineers can manage the entire resource sharing and isolation as part of our managed service offering.
Data Encryption and Key Management
For many businesses and organizations, data is one of the most valuable business assets. Data analytics can drive strategic business decisions, guide purchasing decisions, and even predict the growth of an organization. Data is often seen as the new currency of business, and because of this, it has become a prime target of hacking groups.
Encrypting data is one of the best techniques available to secure data integrity. Atlantic.Net offers AES256 encryption of data at rest to our managed cloud-hosting customers. AES is approved by the United States government and the National Security Agency (NSA) and is one of the most popular security standards for encryption. AES works by creating master keys and client keys that are automatically and periodically rotated to harden the encryption.
Our cloud-computing infrastructure is so powerful that on-the-fly decryption has no visible impact on the file system performance. If any unauthorized user ever gained access to the encrypted data, the data would be garbled and completely unusable.
AES-256 encryption is implemented at the storage system layer in a cipher mode of XTS-plain64, using a hash algorithm of sha256, and key size of 512-bits with half of the bits used for the cipher key and the other half used for the XTS key.
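The 512-bit key split described above works like this, with the first half feeding the AES cipher and the second half the XTS tweak. A minimal sketch; the key bytes are random placeholders for illustration:

```python
import os

key = os.urandom(64)        # 512-bit storage key
cipher_key = key[:32]       # first 256 bits: AES cipher key
xts_key    = key[32:]       # last 256 bits: XTS tweak key
```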
Atlantic.Net utilizes a centralized key management service (KMS) that is replicated in a peer-to-peer fashion. This KMS makes storing, encrypting, and decrypting data on a massive scale easily manageable.
The key used to encrypt data on a storage system is called the data encryption key (DEK). DEKs are generated on the storage systems and sent to the KMS where they are encrypted with the receiving system’s key-encryption key (KEK), then passed back to the originating storage system to be stored for future use.
The KMS automatically rotates KEKs at a regular interval. Our standard rotation period is 90 days. KEKs are stored as a key set. We keep one KEK active for encryption purposes and a set of historical KEKs for decryption purposes.
The KMS is itself protected by a master key called the key management service key that encrypts and decrypts all the KEKs in the system. The master key is only present in RAM on the KMS systems. When a KMS instance is restarted, it will obtain the master key from its peer instances.
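The DEK/KEK envelope scheme and KEK rotation described above can be sketched as follows. Note that this is a toy illustration: a real KMS wraps keys with AES, not the insecure XOR stand-in used here, and the key-set layout is an assumption for demonstration, not Atlantic.Net's actual implementation:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for AES key wrapping: XOR with the key (NOT secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class KMS:
    """Key set: one active KEK for encryption; historical KEKs are
    retained so previously wrapped DEKs can still be decrypted."""
    def __init__(self):
        self.keks = [os.urandom(32)]      # last entry is the active KEK

    def rotate(self):
        self.keks.append(os.urandom(32))  # e.g. every 90 days

    def wrap_dek(self, dek: bytes):
        version = len(self.keks) - 1      # record which KEK was used
        return version, xor(dek, self.keks[version])

    def unwrap_dek(self, version: int, wrapped: bytes) -> bytes:
        return xor(wrapped, self.keks[version])  # historical KEKs still work

# A storage system generates a DEK, has the KMS wrap it, and stores
# the wrapped copy alongside the KEK version for future use.
kms = KMS()
dek = os.urandom(32)
version, wrapped = kms.wrap_dek(dek)
kms.rotate()                                    # new active KEK
assert kms.unwrap_dek(version, wrapped) == dek  # old data still decryptable
```

The version number stored with each wrapped DEK is what lets rotation proceed without re-encrypting existing data: new DEKs use the active KEK, while old DEKs are unwrapped with the historical KEK that originally protected them.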
For our public cloud, we offer simple-to-use, highly secure SSH key management, and all public cloud traffic is encrypted with SSL/TLS over HTTPS.
Security Misconceptions
A common misconception of cloud computing is that the cloud is inherently insecure. While this may have been the case years ago, in the very early days of the cloud, in reality today’s public and private clouds are significantly more secure than traditional on-premises data centers. Atlantic.Net takes security extremely seriously, and we offer a robust and proven secure platform for each of our clients.
Another misconception is that cloud security is too complex to configure and maintain. In truth, other public cloud providers give you the tools to create your own security configuration, and many of the security services discussed in this whitepaper are subscription-based. At Atlantic.Net, however, the technology and configuration already exist; you merely sign up for the service.
Cloud security is indeed a shared responsibility between the client and the vendor; however, many of the mainstream providers adopt a cookie-cutter approach of one-security-model-fits-all. What makes Atlantic.Net different is that we treat security on an individual client basis and we can tailor the service to fit your needs.
Ready to get started with securing your public cloud? Contact Atlantic.Net’s cloud security experts today about how we can help secure your cloud infrastructure!