AWS Defense in Depth Overview

Layers of security bolster defenses for any application, database, or critical data. In traditional data centers, physical network isolation meant building walls for physical security. For cloud, the providers – AWS, Azure, and others – build the walls and fences and comply with standards like ITAR and SOC 1. This is the provider-owned and completely provider-controlled security they provide to users.

Up the cloud stack, users can add more layers of defense at the virtualization layer by creating logical segmentation, and at the application layer with application segmentation. Three key ways to add network security are the provider-owned, user-controlled features users can access: virtual private clouds (aka VLAN isolation), port filtering, and static, assignable public IP addresses.

AWS allows users to control certain features and services, but ultimately owns the features. The cloud user is responsible for setting up, maintaining, and updating these features. One example is port filtering on the host operating system. Port filtering prevents packets from ever reaching a virtual adapter; the hypervisor firewall enforces it through network mechanisms such as security groups or configuration files. Users can limit rules to only allow the ports needed for each application.

AWS Defense in Depth

AWS is responsible for security of the cloud. AWS users are responsible for security in the cloud.

Customer data and applications are completely controlled by AWS users. AWS provides security features including IAM, firewalls, port filtering (security groups), and network protection, but users must enable, maintain, and control those features.

AWS Shared Responsibility

AWS provider-owned/User Controlled Security

Identity and Access Management (IAM)

In AWS, the identity and access management (IAM) service allows users to create specific accounts for each person/role that needs AWS access.

In a new AWS account, the initial account is the “root account” with full access to all services and controls in the account. After configuring administrator roles and access, you should shift all administrative activities in the console to assigned roles. Before deleting the root access key, you can first deactivate it to test for any issues. You can then delete the root access key to prevent any outside access.

Force MFA for all AWS users

From the IAM console, you can add multi-factor authentication (MFA) for all users. First, enable MFA on the root account. Next, you can require all AWS users to configure MFA by attaching a “force MFA” IAM policy to each user. Note that once you enable “force MFA,” the user will be denied all other permissions until he or she sets up MFA and logs in using it.
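As an illustration, here is a trimmed version of the kind of “force MFA” policy AWS documents: it lets users manage their own MFA device but denies everything else until MFA is present. The statement names and the exact action list shown here are a sketch; adapt them to your account.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToManageTheirOwnMFA",
      "Effect": "Allow",
      "Action": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice"
      ],
      "Resource": "arn:aws:iam::*:user/${aws:username}"
    },
    {
      "Sid": "DenyAllExceptIAMIfNoMFA",
      "Effect": "Deny",
      "NotAction": "iam:*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```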

AWS MFA

Use IAM roles for all services

AWS IAM allows you to create roles that give users or AWS infrastructure the necessary permissions to access other AWS services. For example, EC2 roles can limit which users can launch an instance and which S3 resources an instance can interact with.
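As a sketch of how EC2 roles work, a role for EC2 starts with a trust policy that allows the EC2 service to assume the role; the permissions the instance actually receives (for example, limited S3 actions) come from separate permission policies attached to the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```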

AWS Key Management Service

AWS Key Management Service (KMS) is a service for creating and controlling encryption keys. KMS uses Hardware Security Modules (HSMs) to protect keys in AWS.

CloudTrail

CloudTrail is an AWS service that records API calls for your account and delivers log files. CloudTrail is not enabled by default. CloudTrail provides a history of AWS API calls for your account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services. CloudTrail API call history enables security analysis, resource change tracking, and compliance auditing.

AWS Config

AWS Config is a managed service that creates a resource inventory, configuration history, and configuration change notifications for security and governance. AWS Config lets you export a complete inventory of your AWS resources with all configuration details. AWS Config helps enable compliance auditing, security analysis, and resource change tracking.

AWS Trusted Advisor

AWS Trusted Advisor inspects the AWS environment and finds opportunities to save money, improve system performance and reliability, or help close security gaps.

Amazon Inspector

Amazon Inspector is an automated security assessment service that can assess applications for vulnerabilities or deviations from best practices. Amazon Inspector includes a knowledge base of hundreds of rules mapped to common security compliance standards (e.g., PCI DSS) and vulnerability definitions.

AWS Networking Security

Security Groups = act as virtual firewalls for inbound and outbound traffic to/from your EC2-VPC instances. Security group characteristics include:

  • By default, outbound traffic is allowed
  • Rules are permissive (you can’t deny access)
  • Add / remove rules at any time
  • You can copy the rules from an existing security group to a new security group
  • Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules

To create a security group rule, specify the following:

  1. The protocol to allow (such as TCP, UDP, or ICMP)
  2. For TCP, UDP, or a custom protocol: The range of ports to allow
  3. For ICMP: The ICMP type and code
  4. Choose one of the following options for the source (inbound rules) or destination (outbound rules):
  • An individual IP address, in CIDR notation (for example, 203.0.113.1/32)
  • An IP address range, in CIDR notation (for example, 203.0.113.0/24)
  • The name or ID of a security group – allows instances associated with the specified security group to access instances associated with this security group
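The rule model above can be sketched in a few lines of Python. This is a toy evaluator, not the AWS API; the class and field names are invented for illustration. Because security group rules are allow-only, traffic is permitted if any rule matches and dropped otherwise:

```python
import ipaddress

# Toy inbound security group rule: protocol, port range, and allowed source CIDR.
# Not the AWS API; names invented for illustration.
class SGRule:
    def __init__(self, protocol, from_port, to_port, source_cidr):
        self.protocol = protocol
        self.from_port = from_port
        self.to_port = to_port
        self.source = ipaddress.ip_network(source_cidr)

def allowed(rules, protocol, port, source_ip):
    """Security groups are allow-only: permit if any rule matches, else drop."""
    ip = ipaddress.ip_address(source_ip)
    return any(
        r.protocol == protocol
        and r.from_port <= port <= r.to_port
        and ip in r.source
        for r in rules
    )

rules = [
    SGRule("tcp", 443, 443, "0.0.0.0/0"),     # HTTPS from anywhere
    SGRule("tcp", 22, 22, "203.0.113.0/24"),  # SSH from the office range only
]

print(allowed(rules, "tcp", 443, "198.51.100.7"))  # True
print(allowed(rules, "tcp", 22, "198.51.100.7"))   # False
```

Note the absence of any “deny” branch: in a real security group, anything not explicitly allowed is simply dropped.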

Network access control lists (ACLs) = act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.
The following are the parts of a network ACL rule:

  • Rule number. Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that may contradict it.
  • Protocol. You can specify any protocol that has a standard protocol number. For more information, see Protocol Numbers. If you specify ICMP as the protocol, you can specify any or all of the ICMP types and codes.
  • [Inbound rules only] The source of the traffic (CIDR range) and the destination (listening) port or port range.
  • [Outbound rules only] The destination for the traffic (CIDR range) and the destination port or port range.
  • Choice of ALLOW or DENY for the specified traffic.

* NOTE: ACLs use rules similar to Security Groups, but ACLs filter traffic at the subnet level. It’s important to note that Security Groups are stateful, while network ACLs are stateless. *
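The evaluation order described above can be sketched as a toy Python function (names and rule values invented for illustration): rules are checked lowest-numbered first, the first match wins, and an unmatched packet falls through to the implicit deny (the `*` rule):

```python
import ipaddress

# Toy network ACL: (rule_number, protocol, cidr, from_port, to_port, action).
# Values invented for illustration.
acl = [
    (100, "tcp", "0.0.0.0/0", 80, 80, "ALLOW"),
    (200, "tcp", "198.51.100.0/24", 0, 65535, "DENY"),
    (300, "tcp", "0.0.0.0/0", 1024, 65535, "ALLOW"),
]

def evaluate(acl, protocol, address, port):
    """Evaluate lowest-numbered rule first; first match wins; default deny."""
    ip = ipaddress.ip_address(address)
    for number, proto, cidr, lo, hi, action in sorted(acl):
        if proto == protocol and ip in ipaddress.ip_network(cidr) and lo <= port <= hi:
            return action
    return "DENY"  # the implicit '*' rule

print(evaluate(acl, "tcp", "198.51.100.9", 80))    # ALLOW: rule 100 matches first
print(evaluate(acl, "tcp", "198.51.100.9", 4430))  # DENY: rule 200 beats rule 300
```

The second call shows why rule numbering matters: rule 300 would allow the traffic, but the lower-numbered rule 200 matches first and denies it.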

Elastic IP address (EIP) = static IP address associated with your AWS account. Use EIPs to mask the failure of an instance or software by rapidly remapping the address to another instance in your account.

An Elastic IP address is a public IP address, reachable from the Internet. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the Internet.

When you associate an Elastic IP address with an instance in EC2-Classic, a default VPC, or an instance in a nondefault VPC in which you assigned a public IP to the eth0 network interface during launch, the instance’s current public IP address is released back into the public IP address pool. If you disassociate an Elastic IP address from the instance, the instance is automatically assigned a new public IP address within a few minutes.

Further Reading: 10 AWS security blunders and how to avoid them. By Fahmida Y. Rashid Originally published on InfoWorld Nov 3, 2016

Next up, use your user-provided, user-owned features to add application layer security.

Services like SSL/TLS termination, load balancing, caching, proxies, and reverse proxies can also add application-layer security. Additionally, tailoring security policies to each application can be more effective than applying complex, blanket security policies across multiple applications.

Quick overview of Azure Defense in Depth

Layers of security bolster defenses for any application, database, or critical data. In traditional data centers, physical network isolation meant building walls for physical security. For cloud, the providers – AWS, Azure, and others – build the walls and fences and comply with standards like ISO 9000. This is the provider-owned and completely provider-controlled security they provide to users.

Up the cloud stack, users can add more layers of defense at the virtualization layer by creating logical segmentation, and at the application layer with application segmentation. Three key ways to add network security are the provider-owned, user-controlled features users can access: VLAN isolation, port filtering, and static, assignable public IP addresses.

Public cloud providers allow users to control certain features and services, but ultimately own the features. The cloud user is responsible for setting up, maintaining, and updating these features. One example is port filtering on the host operating system. Port filtering prevents packets from ever reaching a virtual adapter; the hypervisor firewall enforces it through network mechanisms such as security groups or configuration files. Users can limit rules to only allow the ports needed for each application.

In Azure, you can use the following Azure-provided, user-controlled features:

  • Azure Multi-Factor Authentication
  • Privileged Access Workstations (PAW)
  • Azure role-based access control (RBAC)
  • Network Security Groups (NSGs)
  • Azure Key Vault
  • Azure Disk Encryption
  • Security Center monitoring and compliance checking

Azure provider-owned/User Controlled Security

  1. Use Azure identity management and access control for each application (such as Azure AD), enable password management, and require multi-factor authentication (MFA) for users
  2. Use role-based access control (RBAC) to assign privileges to users
  3. Monitor account activity
  4. Add and control access to each Resource

View and Add access to each Azure Resource and Resource group

  • Select Resource groups in the navigation bar on the left.
  • Select the name of the resource group from the Resource groups blade.
  • Select Access control (IAM) from the left menu.

The Access control blade lists all users, groups, and applications that have been granted access to the resource group.

  • Select Add on the Access control blade.
  • Select the role that you wish to assign from the Select a role blade.
  • Select the user, group, or application in your directory that you wish to grant access to. You can search the directory with display names, email addresses, and object identifiers.
  • Select OK to create the assignment. The Adding user popup tracks the progress. After the role assignment is successfully added, it appears on the Users blade.

Azure Networking Security

Azure offers several networking security services:

  • Azure VPN Gateway
  • Azure Application Gateway
  • Azure Load Balancer
  • Azure ExpressRoute (direct connection through ISP)
  • Azure Traffic Manager
  • Azure Application Proxy

More on Network Access Control

Network access control is the act of limiting connectivity to and from specific devices or subnets within an Azure Virtual Network, ensuring your VMs and services are accessible only to users and devices you control.

  • Network Layer Control – basic network level access control (based on IP address and the TCP or UDP protocols), using Network Security Groups. A Network Security Group (NSG) is a basic stateful packet filtering firewall and it enables you to control access based on a 5-tuple. NSGs do not provide application layer inspection or authenticated access controls.
  • Route Control and Forced Tunneling – customize routing behavior for network traffic on your Azure Virtual Networks by configuring User Defined Routes in Azure.
    Forced tunneling = ensures services are not allowed to initiate a connection to devices on the Internet; all connections to the Internet are forced through your on-premises gateway. You can configure forced tunneling by taking advantage of User Defined Routes.
  • Network Security Groups = contains a list of access control list (ACL) rules that allow or deny network traffic to your VM instances in a Virtual Network
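The 5-tuple matching NSGs perform can be sketched with a toy Python evaluator (not the Azure API; rule values and names are invented for illustration). Each rule matches on source/destination address and port plus protocol, rules are evaluated by priority (lower number first), and the first match decides Allow or Deny:

```python
import ipaddress

# Toy NSG rules: priority plus the 5-tuple (src CIDR, src port, dst CIDR, dst port, protocol).
# Values invented for illustration; "*" matches anything.
rules = [
    (100, "10.0.0.0/8", "*", "10.1.0.0/16", 443, "Tcp", "Allow"),
    (4096, "*", "*", "*", "*", "*", "Deny"),  # catch-all deny
]

def _match(pattern, value):
    if pattern == "*":
        return True
    if isinstance(pattern, str) and "/" in pattern:
        return ipaddress.ip_address(value) in ipaddress.ip_network(pattern)
    return pattern == value

def decide(rules, src, sport, dst, dport, proto):
    """Evaluate by priority (lowest number first); the first matching rule decides."""
    for prio, s, sp, d, dp, p, action in sorted(rules, key=lambda r: r[0]):
        fields = [(s, src), (sp, sport), (d, dst), (dp, dport), (p, proto)]
        if all(_match(pat, val) for pat, val in fields):
            return action
    return "Deny"

print(decide(rules, "10.2.3.4", 50000, "10.1.0.9", 443, "Tcp"))   # Allow
print(decide(rules, "192.0.2.1", 50000, "10.1.0.9", 443, "Tcp"))  # Deny
```

Unlike AWS security groups, NSG rules can explicitly Deny, which is why the priority ordering matters.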

 

Next up, use your user-provided, user-owned features to add application layer security.

Services like SSL/TLS termination, load balancing, caching, proxies, and reverse proxies can also add application-layer security. Additionally, tailoring security policies to each application can be more effective than applying complex, blanket security policies across multiple applications.

5 concepts to truly understand cloud computing

No matter what your role is, it is worth noting the key differences between the traditional on-premises computing model and cloud computing.

There are plenty of requirements and opinions out there about who has the best IaaS cloud offering (see the Gartner Magic Quadrant or similar). Most requirements for IaaS cloud come down to on-demand, flexible infrastructure you can quickly order up with a credit card.

There are 5 core cloud concepts everyone in technology should know:

  1. Elasticity
  2. Scalability
  3. Fast, fat, and flat
  4. Pets vs. Cattle
  5. Key Differences in On-Prem and Cloud

Elasticity

Elasticity stems from concepts in physics and economics. In computing, it describes the ability to automatically provision and de-provision computing resources on demand as workloads change.

Elasticity should not be confused with efficiency or scalability.

Definitions vary, but these are key concepts:

  1. ODCA: “the configurability and expandability of the solution… the ability to scale up and scale down capacity based on subscriber workload.”
  2. NIST Definition of Cloud Computing : ”Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.”
  3. Rich Wolski : ”Elasticity measures the ability of the cloud to map a single user request to different resources.”

Scalability

Scalability is the ability to add resources to meet demand. In virtualization, there are 2 key types of scalability: horizontal and vertical:

  • Horizontal Scaling (In/Out)
    Scaling-out is a common approach in cloud, using commodity hardware to increase the number of resources performing an operation. Amazon AWS’ AutoScale service is a way to automate scaling a complete stacks in the cloud. With Horizontal Scaling, owners need to route requests between increased resources and demand – a load balancer is an ideal way to scale a web application. The only downside of Horizontal Scaling is the limits of application owners’ ability to mange the architecture design, security, and persistence of a huge scale.A good analogy for horizontal scaling is a railway system that needs to transport more and more goods every day. In order to cope with the demand, a railroad adds train tracks and locomotives to increase throughput. As the demand grows, the railroad will need to add tracks, signal directors (routing agents), and more railroad staff. The biggest limits are on the railroad’s ability to fund and manage the complexity.
  • Vertical Scaling (Up/Down)
    Vertical Scaling increases the size of the resources used to perform an operation. Instead of creating additional copies of an application, cloud users just add more resources. The largest downside to Vertical Scaling is the eventual limits to the system. Unlike Horizontal Scaling, Vertical Scaling will meet limits or bottlenecks.
    In the railway system analogy, scaling vertically grows the amount of goods each locomotive can haul. For example, scaling up adds more cargo cars and a larger, more powerful engine to provide more horsepower (add more RAM, CPU, disk space, etc.). Vertical scaling does not change the number of trains or add more rail lines, but there is a limit to how large trains can get when traveling through tunnels and over bridges.
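The routing problem horizontal scaling introduces can be sketched with a toy round-robin load balancer in Python (class and server names invented for illustration; real load balancers also track health checks and session state):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute requests evenly across a horizontally scaled server pool."""
    def __init__(self, servers):
        self._pool = cycle(servers)  # endless rotation through the pool

    def route(self, request):
        server = next(self._pool)
        return f"{server} handles {request}"

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
for i in range(4):
    print(lb.route(f"req-{i}"))
# req-3 wraps back around to web-1: scaling out just means adding servers to the pool
```

Adding capacity is then a one-line change to the server list, which is exactly what makes scaling out attractive compared with upgrading a single machine.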

Fast, Fat, and Flat

How can you differentiate real cloud infrastructure offerings? The foundation of a “real cloud” is the underlying level where basic IaaS is fast, fat, and flat.

Cloud Infrastructure is more than a virtualized environment. A true cloud IaaS offering will go beyond a virtualized set of components by building architecture that supports fast, fat, and flat infrastructure. Read Cohesive Networks CEO Patrick Kerpan’s oldie-but-goodie blog post.

Pets vs Cattle

Are your servers pets or cattle?

Cloud computing has an apt analogy for how we’ve changed our attitudes toward virtual servers: pets and cattle.

Pets are unique. We name them and pay close attention to them. When they get sick, we nurse them back to health.

Cattle, on the other hand, are only part of a herd. We brand them with numbers to track them. When they get sick, we can replace them.

One of the key benefits of cloud computing is the ability to quickly and easily deploy servers. We no longer need so many pets. We have access to on-demand and affordable cattle. When we need a herd of 15,000 virtual machines for a project, cloud computing allows us to scale out, not scale up.

*The origin of the “pets and cattle” metaphor, according to Randy Bias, is attributed to former Microsoft employee Bill Baker.

Cloud vs. On-Prem

On-premises (on-prem) systems are deployed in the traditional manner: organizations buy servers, install operating systems, and build the systems within their own offices or data centers. Users/owners are fully responsible for the servers, the physical buildings, and the electricity.

Hosted offerings are provided by a service provider within a “colocation” data center or facility. Hosted offerings can be contracted for limited times but are built for an organization specifically. When a hosting provider hosts solutions, they are responsible for whatever it is that they are offering – for a datacenter, they provide the electricity, physical security, and perhaps core networking functions. Ultimately, the organization pays for something that others are providing.

Cloud offerings are similar to hosted offerings, but are not custom-built for an organization. All cloud offerings are on-demand and self-service, accessible via the Internet, offer resource pooling, and provide rapid elasticity, measured service, and flexibility.

Key Differences in Cloud vs. On-Prem

  • Access
    On-premises solutions are located within an organization’s control, installed on a user’s computer, in a server closet, or in a data center. Cloud solutions must be accessed over the Internet and are generally hosted by a third-party vendor.
  • Pricing
    In traditional on-prem systems, organizations pay upfront with large capital expenditures to build data centers, buy servers, and purchase software licenses. These costs are capital expenses (CAPEX) and the organization pays them off over time. Once the systems are no longer useful – after an “end of life” or when the hardware breaks – the organization must buy new hardware. The full cost of on-prem, including hardware, depreciation, and ongoing maintenance, is calculated as total cost of ownership (TCO).
    In cloud, pricing is based on pay-as-you-go or on-demand usage. On-demand usage is viewed as a “utility” cost or operating expense (OPEX) compared to a large capital expenditure. The pricing difference has created a low-cost, low-barrier entry point for startups and smaller businesses.
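The CAPEX vs. OPEX comparison above amounts to simple arithmetic. As an illustration (all figures here are invented, not real prices), a quick monthly-cost sketch:

```python
# Hypothetical figures for illustration only; not real hardware or cloud prices.
server_capex = 12000.0         # up-front purchase, amortized over its useful life
useful_life_months = 36
onprem_monthly_upkeep = 150.0  # power, space, maintenance

cloud_hourly_rate = 0.25       # on-demand instance price
hours_per_month = 730

# CAPEX spread over the hardware's life, plus ongoing upkeep (a rough TCO view).
onprem_monthly = server_capex / useful_life_months + onprem_monthly_upkeep
# OPEX: pay only for the hours actually used.
cloud_monthly = cloud_hourly_rate * hours_per_month

print(f"on-prem: ${onprem_monthly:.2f}/month")  # $483.33/month
print(f"cloud:   ${cloud_monthly:.2f}/month")   # $182.50/month
```

The numbers flip, of course, at different utilization levels and instance sizes; the point is that cloud moves the decision from a one-time purchase to a recurring usage calculation.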

Interested in learning more? Contact our sales team about our 3 part VNS3 and cloud networking courses.

VNS3 highlighted in the AWS Partner SA Roundup – July 2017

See the full blog post on the AWS blog here.

Many AWS customers have a hybrid network topology where part of their infrastructure is on premises and part is within the AWS Cloud. Most IT experts and developers aren’t concerned with where the infrastructure resides—all they want is easy access to all their resources, remote or local, from their local networks.

So how do you manage all these networks as a single distributed network in a secure fashion? The configuration and maintenance of such a complex environment can be challenging.

Cohesive Networks, an APN Advanced Technology Partner, has a product called VNS3:vpn, which helps alleviate some of these challenges. The VNS3 product family helps you build and manage a secure, highly available, and self-healing network between multiple regions, cloud providers, and/or physical data centers. VNS3:vpn is available as an Amazon Machine Image (AMI) on the AWS Marketplace, and can be deployed on an Amazon EC2 instance inside your VPCs.

One of the interesting features of VNS3 is its ability to create meshed connectivity between multiple locations and run an overlay network on top. This effectively creates a single distributed network across locations by peering several remote VNS3 controllers.

Here is an example of a network architecture that uses VNS3 for peering:

The VNS3 controllers act as six machines in one, to address all your network needs:

  • Router
  • Switch
  • SSL/IPsec VPN concentrator
  • Firewall
  • Protocol redistributor
  • Extensible network functions virtualization (NFV)

The setup process is straightforward and well-documented with both how-to videos and detailed configuration guides.

Cohesive Networks also provides a web-based monitoring and management system called VNS3:ms on a separate server, where you can update your network topology, fail over between VNS3 controllers, and monitor your network and instances’ performance.

See the VNS3 family offerings from Cohesive Networks in AWS Marketplace , and start building your secured, cross-connected network. Also, be sure to head over to the Cohesive Networks website to learn more about the VNS3 product family.
