5 concepts to truly understand cloud computing

No matter what your role is, it is worth understanding the key differences between the traditional on-premises computing model and cloud computing.

There are plenty of opinions out there about who has the best IaaS cloud offering (see the Gartner Magic Quadrant or similar). At its core, though, most requirements for IaaS cloud come down to on-demand, flexible infrastructure you can quickly order up on a credit card.

There are 5 core cloud concepts everyone in technology should know:

  1. Elasticity
  2. Scalability
  3. Fast, fat, and flat
  4. Pets vs. Cattle
  5. Key Differences in On-Prem and Cloud

Elasticity

Elasticity stems from concepts in physics and economics. In computing, it describes the ability to automatically provision and de-provision computing resources on demand as workloads change.

Elasticity should not be confused with efficiency or scalability.

Definitions vary, but these are key concepts:

  1. ODCA: “the configurability and expandability of the solution… the ability to scale up and scale down capacity based on subscriber workload.”
  2. NIST Definition of Cloud Computing: “Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.”
  3. Rich Wolski: “Elasticity measures the ability of the cloud to map a single user request to different resources.”
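
As a concrete illustration, the sketch below uses an AWS target-tracking scaling policy so that instances are added and removed automatically as load changes. It is a minimal sketch, not a production setup; the Auto Scaling group name (“web-asg”) and the 50% CPU target are placeholder assumptions.

```python
# Minimal sketch: elasticity via an AWS target-tracking scaling policy (boto3).
# Assumes an Auto Scaling group named "web-asg" already exists (placeholder name).
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU around 50%: AWS provisions instances when load rises
# and de-provisions them when load falls -- elasticity in both directions.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```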

Scalability

Scalability is the ability to add resources to meet demand. In virtualization, there are two key types of scalability, horizontal and vertical:

  • Horizontal Scaling (In/Out)
    Scaling out is a common approach in the cloud: commodity hardware is used to increase the number of resources performing an operation. Amazon’s AWS Auto Scaling service is one way to automate scaling complete stacks in the cloud. With Horizontal Scaling, owners need a way to route requests across the added resources – a load balancer is an ideal way to scale a web application. The main downside of Horizontal Scaling is the limit of an application owner’s ability to manage the architecture design, security, and persistence at huge scale (see the sketch after this list).
    A good analogy for horizontal scaling is a railway system that needs to transport more and more goods every day. To cope with the demand, the railroad adds tracks and locomotives to increase throughput. As demand grows, the railroad must add more tracks, signal directors (routing agents), and railroad staff. The biggest limits are the railroad’s ability to fund and manage the complexity.
  • Vertical Scaling (Up/Down)
    Vertical Scaling increases the size of the resources used to perform an operation. Instead of creating additional copies of an application, cloud users just add more resources to the existing one. The largest downside is that, unlike Horizontal Scaling, Vertical Scaling will eventually hit hard limits or bottlenecks.
    In the railway analogy, scaling vertically grows each train rather than the number of trains. For example, scaling up adds more cargo cars and a larger, more powerful engine for extra horsepower (more RAM, CPU, disk space, etc.). Vertical Scaling does not add more trains or rail lines, but there is a limit to how large a train can get and still travel through tunnels and over bridges.
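
To make the horizontal/vertical distinction concrete, here is a hedged boto3 sketch: scaling out adds instances to a fleet, while scaling up resizes a single instance. The group name, instance ID, and instance types are placeholder assumptions.

```python
# Sketch: horizontal vs. vertical scaling with boto3 (names and IDs are placeholders).
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Horizontal (scale out): add more identical workers to the fleet.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=10,          # e.g. double the fleet from 5 to 10 instances
    HonorCooldown=False,
)

# Vertical (scale up): make one instance bigger. It must be stopped, resized,
# and started again -- the downtime hints at vertical scaling's limits.
instance_id = "i-0123456789abcdef0"  # placeholder
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},  # up from m5.large, for example
)
ec2.start_instances(InstanceIds=[instance_id])
```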

Fast, Fat, and Flat

How can you differentiate real cloud infrastructure offerings? The foundation of a “real cloud” is an underlying layer where basic IaaS is fast, fat, and flat.

Cloud infrastructure is more than a virtualized environment. A true cloud IaaS offering goes beyond a virtualized set of components by building an architecture that supports fast, fat, and flat infrastructure. Read CEO Patrick Kerpan’s oldie-but-goodie blog post.

Pets vs Cattle

Are your servers pets or cattle?

Cloud computing has an apt analogy for how we’ve changed our attitudes toward virtual servers: pets and cattle.

Pets are unique. We name them and pay close attention to them. When they get sick, we nurse them back to health.

Cattle, on the other hand, are just part of a herd. We brand them with numbers to track them. When they get sick, we replace them.

One of the key benefits of cloud computing is the ability to quickly and easily deploy servers. We no longer need so many pets. We have access to on-demand and affordable cattle. When we need a herd of 15,000 virtual machines for a project, cloud computing allows us to scale out, not scale up.

*The origin of the “pets and cattle” metaphor, according to Randy Bias, is attributed to former Microsoft employee Bill Baker.
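
In code, treating servers as cattle means launching many identical, disposable instances from one image rather than hand-tending any single box. A minimal sketch, assuming boto3 and a pre-built AMI (the image ID below is a placeholder):

```python
# Sketch: "cattle"-style provisioning -- many identical, replaceable instances.
# The AMI ID is a placeholder; in practice it would be a pre-baked golden image.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder golden image
    InstanceType="t3.micro",
    MinCount=50,
    MaxCount=50,
)

# No names, no nursing: if one gets "sick", terminate it and launch another.
instance_ids = [i["InstanceId"] for i in response["Instances"]]
print(f"Launched {len(instance_ids)} interchangeable instances")
```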

Cloud vs. On-Prem

On-premises (on-prem) systems are deployed in the traditional manner: organizations buy servers, install operating systems, and build the systems within their own offices or data center. Users/owners are fully responsible for the servers, the physical building, and the electricity.

Hosted offerings are provided by a service provider within a “colocation” data center or facility. Hosted offerings can be contracted for limited times but are built for an organization specifically. When a hosting provider hosts solutions, they are responsible for whatever it is that they are offering – for a datacenter, they provide the electricity, physical security, and perhaps core networking functions. Ultimately, the organization pays for something that others are providing.

Cloud offerings are similar to hosted offerings, but they are not custom-built for a single organization. All cloud offerings are on-demand and self-service, accessible via the Internet, and provide resource pooling, rapid elasticity, measured service, and flexibility.

Key Differences in Cloud vs. On-Prem

  • Access
    On-premises solutions stay within an organization’s control: installed on users’ computers, in a server closet, or in a data center. Cloud resources must be accessed over the internet and are generally hosted by a third-party vendor.
  • Pricing
    In traditional on-prem systems, organizations pay upfront with large capital expenditures to build data centers, buy servers, and purchase software licenses. These costs are capital expenditures (CAPEX) that the organization pays off over time. Once the systems are no longer useful – after an “end of life” date or when the hardware breaks – the organization must buy new hardware. The hardware, depreciation, and ongoing maintenance costs of on-prem systems are calculated as total cost of ownership (TCO).
    In cloud, pricing is based on pay-as-you-go or on-demand usage. On-demand usage is treated as a “utility” cost or operating expense (OPEX) rather than a large capital expenditure. The pricing difference has created a low-cost, low-barrier entry point for startups and smaller businesses (a rough cost comparison follows this list).
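
As a back-of-the-envelope illustration (every figure below is made up for the example, not a quote), the sketch compares a three-year on-prem TCO against three years of pay-as-you-go cloud usage:

```python
# Rough CAPEX-vs-OPEX sketch; all numbers are hypothetical assumptions.
YEARS = 3

# On-prem: large upfront capital expenditure plus ongoing maintenance.
servers_capex = 120_000          # hardware purchased up front
annual_maintenance = 15_000      # power, cooling, support contracts
onprem_tco = servers_capex + annual_maintenance * YEARS

# Cloud: no upfront purchase, just a monthly pay-as-you-go bill (OPEX).
monthly_cloud_bill = 4_000
cloud_cost = monthly_cloud_bill * 12 * YEARS

print(f"3-year on-prem TCO: ${onprem_tco:,}")
print(f"3-year cloud OPEX:  ${cloud_cost:,}")
```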

Interested in learning more? Contact our sales team about our 3-part VNS3 and cloud networking courses.

VNS3 highlighted in the AWS Partner SA Roundup – July 2017

See the full blog post on the AWS blog here.

Many AWS customers have a hybrid network topology where part of their infrastructure is on premises and part is within the AWS Cloud. Most IT experts and developers aren’t concerned with where the infrastructure resides—all they want is easy access to all their resources, remote or local, from their local networks.

So how do you manage all these networks as a single distributed network in a secure fashion? The configuration and maintenance of such a complex environment can be challenging.

Cohesive Networks, an APN Advanced Technology Partner, has a product called VNS3:vpn, which helps alleviate some of these challenges. The VNS3 product family helps you build and manage a secure, highly available, and self-healing network between multiple regions, cloud providers, and/or physical data centers. VNS3:vpn is available as an Amazon Machine Image (AMI) on the AWS Marketplace, and can be deployed on an Amazon EC2 instance inside your VPCs.
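
For illustration, launching the AMI into a VPC subnet can be scripted. The sketch below is a generic boto3 example, not Cohesive Networks’ documented procedure; the AMI, subnet, and security group IDs are placeholders, and disabling the EC2 source/destination check is the usual step for any instance that routes traffic on behalf of other hosts.

```python
# Sketch: launch a network-appliance AMI (e.g., a VNS3 controller) into a VPC.
# AMI ID, subnet ID, and security group ID are placeholders.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder Marketplace AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",    # placeholder VPC subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
controller_id = resp["Instances"][0]["InstanceId"]

# Instances that route or NAT traffic for other hosts normally need the
# EC2 source/destination check turned off.
ec2.modify_instance_attribute(
    InstanceId=controller_id,
    SourceDestCheck={"Value": False},
)
```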

One of the interesting features of VNS3 is its ability to create meshed connectivity between multiple locations and run an overlay network on top. This effectively creates a single distributed network across locations by peering several remote VNS3 controllers.

Here is an example of a network architecture that uses VNS3 for peering:

The VNS3 controllers act as six machines in one, to address all your network needs:

  • Router
  • Switch
  • SSL/IPsec VPN concentrator
  • Firewall
  • Protocol redistributor
  • Extensible network functions virtualization (NFV)

The setup process is straightforward and well-documented with both how-to videos and detailed configuration guides.

Cohesive Networks also provides a web-based monitoring and management system called VNS3:ms, which runs on a separate server and lets you update your network topology, fail over between VNS3 controllers, and monitor the performance of your network and instances.

See the VNS3 family offerings from Cohesive Networks in AWS Marketplace, and start building your secure, cross-connected network. Also, be sure to head over to the Cohesive Networks website to learn more about the VNS3 product family.
