Building Global Hybrid-Cloud Infrastructure Automation with Pipes and Python3
Building automation for foundational hybrid-cloud infrastructure deployments is hard. At Cohesive Networks we’ve helped our users build out networks that span datacenters, public clouds, IoT networks, even cell networks for bridging SMS communications. That means we often find ourselves stitching together cloud infrastructure using different APIs, in different clouds, assuming different credentials. It can seem a little daunting. If you’re in a single public cloud, life’s a little easier: each cloud has its own deployment manager and Infrastructure as Code (IaC) templating system that more or less suffices. The complexity arises when you need to, say, build some infrastructure in AWS, switch context and build in Azure, then switch back to AWS carrying state from the Azure infrastructure to continue building. The goal here is to step through an example architecture and simplify its deployment using pipes and Python 3.
The Hybrid-Cloud Architecture
So let’s run through a common hybrid-cloud architecture, break it down into discrete steps and see if we can’t simplify our lives.
Here’s a simplified architecture we see quite a bit: Account B in Cloud Y provides connectivity (down and across) to Account A in Cloud X. Account A would like its applications to have secure access to any applications running on premise. At Cohesive we call the routing plane in Account B the “federation plane” and the whole network the “federation network”, or “fednet” for short. It provides highly available connectivity to any new virtual private cloud (VPC) spun up in any of the clouds.
Automated VPC Cloud Construction
Ok, so let’s run through automating the construction of a new VPC in Cloud X that automatically has connectivity to on-premise applications. The steps for us look something like:
- Fetch network configuration for new VPC subnet (e.g. CIDR 10.1.0.0/16 with your favorite subnets)
- Create new VPC and virtual router/firewall (VNS3) for Account A in Cloud X
- Create fednet routing – routes for traffic “up” to 10.1.0.0/16.
- Create new VPC routing – route on-prem traffic (10.0.0.0/16) to the VNS3 controller interface, which carries it “down” to the fednet (sketched just after this list)
- Temporarily open API access for configuring VNS3 network routes and firewalls
- Connect the new VPC with the federation network by IPsec peering the controller in Account A with the federation network controllers
- Remove API access
- Tear down any configuration resources
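To make one of these concrete: on AWS, the new-VPC routing step boils down to a single route pointing the on-prem CIDR at the VNS3 controller’s network interface. Here’s a minimal boto3 sketch, with hypothetical resource ids:

```python
import boto3  # assumes AWS credentials are configured in the environment

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical ids for the new VPC's route table and the VNS3
# controller's network interface.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/16",          # on-prem traffic...
    NetworkInterfaceId="eni-0123456789abcdef0",  # ...sent "down" via VNS3
)
```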
Each step requires state from the previous step. This is really just a simple pipe: “echo parameters | fetch-network-config | create-new-vpc | …”. Ah, the beauty of simple ideas decades old. Each step accepts the previous step’s state, runs its task, enriches the data with any new state, and returns it. Now, I’ve always been partial to Python for its ease of use and readability in things like this. Unfortunately, it doesn’t have native pipelining functionality, so let’s build it. Python 3 also has some nice async features that we can take advantage of to optimize our automation.
The Python 3 Pipeline
A first pass at a simple pipelining function could be as simple as this (a sketch; the exact types and logging details are illustrative):
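```python
from copy import deepcopy
from typing import Callable, Dict, List, Tuple

# A sketch of a minimal pipeline. A step is a (name, function) pair:
# the name is for logging, and the function accepts the accumulated
# state dict and returns it, possibly enriched.
Step = Tuple[str, Callable[[Dict], Dict]]


def pipe(steps: List[Step], data: Dict) -> Dict:
    """Thread state through each step in order."""
    state = deepcopy(data)  # don't mutate the caller's data
    for name, func in steps:
        print(f"Running step: {name}")
        state = func(state)
    return state
```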
I like to use Python 3’s typing functionality because I think it makes for more maintainable code. Here it tells maintainers that a step is a tuple of length 2, where the first element is a string (the step’s name) and the second is a callable. We also copy the initial data so as not to mutate our caller’s data, and then simply loop through our functions. So a pipeline function for our architecture might look something like this (the step functions and templates shown are illustrative, not our production code):
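```python
from functools import partial

import boto3  # AWS SDK; assumes credentials are configured in the environment

ec2 = boto3.client("ec2", region_name="us-east-1")


def fetch_network_config(template: dict, data: dict) -> dict:
    # Hypothetical step: allocate/look up the new VPC's network layout.
    data.update(template)
    return data


def create_new_vpc(template: dict, data: dict) -> dict:
    # Steps can target whichever cloud they like; this one hits AWS.
    vpc = ec2.create_vpc(CidrBlock=data["vpc_cidr"])
    data["vpc_id"] = vpc["Vpc"]["VpcId"]
    return data


deployment = pipe(
    steps=[
        ("fetch-network-config", partial(fetch_network_config, {"vpc_cidr": "10.1.0.0/16"})),
        ("create-new-vpc", partial(create_new_vpc, {})),
        # ... fednet routes, new-VPC routes, IPsec peering, teardown, etc.
    ],
    data={},
)
```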
So what’s going on here?
- We initialize some cloud clients that have permissions for provisioning resources in our cloud environments
- We define each step with a name (for readability and logging purposes) and a function. functools.partial binds the parameters passed (the template) to the function provided as its first argument, leaving a callable that takes only the state. You’ll see each step can target whatever cloud we want using whatever IaC templates we like, and the state will be passed through. In fact, each step can do anything so long as it respects the expected function signature (i.e. it accepts and returns a dictionary).
A Little Optimization
This works quite nicely and it’s very simple. You might notice that some steps could be combined or run concurrently. Python 3’s asyncio makes that quite easy; again in sketch form:
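```python
import asyncio
from copy import deepcopy
from typing import Awaitable, Callable, Dict, List, Tuple, Union

# Steps are now async, and a step may carry a list of sub-steps to be
# run concurrently rather than a single function.
StepFunc = Callable[[Dict], Awaitable[Dict]]
Step = Tuple[str, Union[StepFunc, List[StepFunc]]]


async def pipe(steps: List[Step], data: Dict) -> Dict:
    state = deepcopy(data)
    for name, func in steps:
        print(f"Running step: {name}")
        if isinstance(func, list):
            # Run all sub-steps concurrently; each must be async.
            results = await asyncio.gather(*(f(state) for f in func))
        else:
            results = [await func(state)]
        # Merge each result's optional "outputs" into the pipeline state.
        for result in results:
            state.update(result.get("outputs", {}))
    return state
```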
Our new pipe function has a couple changes:
- It now accepts different kinds of steps, as indicated by the Union type annotation: a step may provide a list of functions rather than just one. Using asyncio.gather we can run each sub-step concurrently (each sub-step must implement the async/await paradigm)
- We also updated our step functions to return a dictionary that has an optional “outputs” key for passing along in the pipe.
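For example, a hypothetical sub-step might look like this, with asyncio.sleep standing in for a real awaitable cloud or VNS3 API call:

```python
import asyncio


async def create_fednet_routes(template: dict, data: dict) -> dict:
    # Hypothetical sub-step: push "up" routes for the new VPC's CIDR.
    await asyncio.sleep(0.1)  # stand-in for an async cloud/VNS3 API call
    return {"outputs": {"fednet_routes": [data["vpc_cidr"]]}}
```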
And that’s it. Pretty minor changes to take advantage of steps that can be run concurrently. Here’s what our deployment pipeline might look like now, with the route-creation steps grouped to run concurrently:
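```python
import asyncio
from functools import partial

# Illustrative composition: the step functions and templates here follow
# the async pattern sketched above. The two route steps run concurrently.
deployment = asyncio.run(pipe(
    steps=[
        ("fetch-network-config", partial(fetch_network_config, config_template)),
        ("create-new-vpc", partial(create_new_vpc, vpc_template)),
        ("create-routes", [
            partial(create_fednet_routes, fednet_template),  # "up" to 10.1.0.0/16
            partial(create_vpc_routes, route_template),      # "down" to the fednet
        ]),
        ("open-api-access", partial(open_api_access, firewall_template)),
        ("peer-with-fednet", partial(peer_with_fednet, peering_template)),
        ("close-api-access", partial(close_api_access, firewall_template)),
        ("teardown", partial(teardown, {})),
    ],
    data={},
))
```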
We’ve found this approach simple and effective. I’ll end by putting out a feeler: is there any interest in an open-source Python 3 library for hybrid-cloud deployment automation? It would be purposefully simple and pluggable, adopting only a few powerful idioms like a pipeline. Let us know!