September 4, 2019 | By Brett Valentine | 4 min read

Network segmentation is a concept that dates back to the start of enterprise IT systems. The simplest demonstration of this is separating application and infrastructure components with a firewall. This concept is now a routine part of building data centers and application architectures. In fact, it’s nearly impossible to find examples of enterprises without some network segmentation model in place.

More recently, many have stated that microsegmentation is sufficient to secure these services. Microsegmentation techniques provide granular point-to-point traffic restrictions between services and can be user-session aware. But the modern concept of network segmentation is more than source and destination restrictions. Best practices for network segmentation require the following capabilities:

  • Intrusion detection and prevention systems (IDS and IPS) to detect and block malicious traffic based on known CVEs, behavior-based patterns and industry intelligence
  • Antivirus and malware detection to detect and block virus and malware behaviors within traffic
  • Sandboxing to execute traffic in a “safe” virtual environment and observe the results before passing it along if it proves to be valid
  • Web application firewalls to detect and block application-based threats
  • Distributed denial-of-service (DDoS) protection to block brute-force and denial-of-service attacks
  • SSL decryption and monitoring to gain visibility and be able to respond to traffic

In an on-premises scenario, next-generation firewalls provide most of these capabilities. Ideally, the firewalls only allow traffic on valid ports. Regardless, these firewalls can inspect traffic on all ports, including the open, valid ports (e.g., 80, 443), to ensure malicious behaviors are not being transmitted.

In an Amazon Web Services (AWS) environment, no single native service provides all of these capabilities between services, but several services can be combined to achieve them. These threats must be mitigated through careful security configuration.

How to Achieve Network Segmentation in AWS

Let’s assume an example application running on AWS has four components: content on S3, Lambda functions, custom data processing components running on EC2 instances and several RDS instances. These reflect three network segmentation zones: web, application and data.

Inbound traffic is sent to static or dynamic pages in S3. These pages initiate Lambda functions to manipulate and transform the data provided. The Lambda functions call custom complex logic served by systems running on EC2 instances. The Lambda functions and the EC2 systems interact with multiple RDS databases to enrich and store the data in various formats. In real life, these components would use a lot of other AWS configurations and policies, but it suffices for this discussion.
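
To make the flow concrete, here is a minimal, hypothetical sketch of the Lambda step under the assumptions above. The internal endpoint, event shape and the decision to call the EC2 tier over HTTPS are illustrative assumptions, not details from the original application:

```python
# Hypothetical Lambda handler: transform an inbound S3 object, call the
# custom logic on the EC2 application tier, and return the enriched result.
import json
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Pull the submitted object from the S3 web tier (standard S3 event shape)
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    payload = json.loads(obj["Body"].read())

    # Call the complex custom logic served from EC2 (placeholder endpoint)
    req = urllib.request.Request(
        "https://app.internal.example/process",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        enriched = json.loads(resp.read())

    # In the real application, this result would also be written to RDS
    return {"statusCode": 200, "body": json.dumps(enriched)}
```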

Consider the typical behavior of application developers: They leave security controls loose and get the work done as fast as possible. The diagram below shows this unsecured flow with an overlay of the desired network zones to be created:

The segmentation requirement needs multiple AWS configurations (a minimal provisioning sketch follows the list), including:

  • AWS Shield Advanced;
  • AWS WAF;
  • VPC – Private Subnet;
  • VPC – Public Subnet;
  • VPC – Internet Gateway;
  • VPC – Route Table;
  • VPC – Security Groups;
  • VPC – Network Load Balancer;
  • Virtual next-generation firewalls; and
  • Amazon CloudWatch.
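
As a rough illustration of how the VPC pieces fit together, the boto3 sketch below provisions the core scaffolding: a VPC, a public and a private subnet, an internet gateway and a route table. The region, CIDR blocks and the choice of imperative calls over CloudFormation are assumptions for illustration:

```python
# Hypothetical sketch: provisioning the core VPC scaffolding with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC with one public and one private subnet
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

# Internet gateway and a route table that only the public subnet uses
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=public_subnet)
# The private subnet keeps the VPC's main route table and stays
# unreachable from the internet.
```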

How Network Segmentation Works

The inbound traffic requests are first screened by AWS Shield, which blocks DDoS attacks and certain other disruption vectors. The request is then analyzed by AWS WAF to block things like SQL injection and scans for known CVEs, and to apply IP whitelisting (depending on the needs of the application). The inbound traffic is then sent to S3.
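
As a hedged example of the WAF step, the sketch below creates a WAFv2 web ACL that applies the AWS-managed SQL injection rule set to a CloudFront distribution fronting the S3 content. The ACL name and scope are illustrative assumptions (and WAFv2 itself postdates the classic WAF of this article's era):

```python
# Hypothetical sketch: a WAFv2 web ACL blocking SQL injection patterns
# via the AWS-managed SQLi rule group. Names are placeholders.
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="app-edge-acl",
    Scope="CLOUDFRONT",  # use "REGIONAL" for an ALB or API Gateway
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-sqli",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }},
        "OverrideAction": {"None": {}},  # required for managed rule groups
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "aws-sqli",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "app-edge-acl",
    },
)
```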

Next, Lambda functions manipulate and transform the data provided. All of this processing happens in publicly accessible AWS services; security for the next steps in the processing is enforced within a VPC.

The traffic from Lambda is sent through an internet gateway and then routed to a network load balancer. The load balancer distributes traffic across several virtual next-generation firewalls. Why do we need an LB and multiple firewalls? For redundancy and capacity, of course. These firewalls apply IDS/IPS, malware detection, sandboxing and, in some cases, SSL decryption for packet-level inspection by a security information and event management (SIEM) solution.
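
A minimal sketch of that load-balancing layer, assuming two firewall EC2 instances already exist; all IDs are placeholders:

```python
# Hypothetical sketch: an internal network load balancer spreading TCP/443
# traffic across two virtual firewall instances for redundancy and capacity.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

nlb_arn = elbv2.create_load_balancer(
    Name="fw-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0123456789abcdef0"],
)["LoadBalancers"][0]["LoadBalancerArn"]

tg_arn = elbv2.create_target_group(
    Name="fw-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]["TargetGroupArn"]

# Register both firewall instances as targets
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],
)

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```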

Next, the request is sent into the VPC. There, Security Group policies restrict the source, destination and ports for the traffic to ensure that only specific services can communicate, while the route table controls routing between the public subnets (i.e., externally accessible, in this case for the EC2 application servers) and the private subnets (i.e., the databases). All of the traffic processed within the VPC is captured in VPC Flow Logs and routed to the SIEM system, which is likely hosted on-premises or elsewhere.
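
To illustrate the Security Group and Flow Log pieces, this sketch locks the data tier down to MySQL traffic from the application tier's security group and ships VPC Flow Logs to CloudWatch Logs. The IDs, log group name and IAM role ARN are assumptions:

```python
# Hypothetical sketch: a database security group that only accepts traffic
# from the app tier, plus VPC Flow Logs delivered to CloudWatch Logs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

db_sg = ec2.create_security_group(
    GroupName="data-tier",
    Description="RDS instances: accept traffic only from the app tier",
    VpcId="vpc-0123456789abcdef0",
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Reference the app tier's security group instead of a CIDR range,
        # so only instances in that group can reach the databases.
        "UserIdGroupPairs": [{"GroupId": "sg-0app0app0app0app0"}],
    }],
)

# Capture everything the VPC processes and route it to CloudWatch Logs,
# where the SIEM can pick it up
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)
```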

This model, with appropriate policies applied at each component, can achieve all of the network segmentation requirements described above.

Complexity Considerations and a Call to Action for Vendors

This traffic routing is obviously much more complex than a traditional system. Complexity is costly, and it increases the opportunity for errors and configuration gaps, not to mention the operational burden.

This routing will also impact performance. If this model protects a time-sensitive transaction such as an e-commerce site, it needs to be evaluated and optimized. But given the speed and performance within AWS, most users’ browsers and network connections are likely too slow to notice a difference. For transactions that are not extremely time-sensitive, this model will work fine.

Still, these capabilities and the need to segment network traffic have existed for a long time. AWS and the various network security vendors need to establish a more complete solution to offer within a VPC.
