Containers: at one point seemingly a fad, now a major component in large enterprises. Gartner estimates that half of global organizations will be running containers in production by 2020, and the once turbulent landscape of containerization solutions (with Kubernetes, Docker Swarm and Mesosphere going head to head in the orchestration space) has now begun to calm into a more stable portfolio of accepted, and supported, products.
Containers provide an elegant solution to an age-old problem in development, where underlying architectures and operating systems made it difficult to promote code from a developer laptop, to a dev/test environment, and finally on to production. Seemingly trivial differences between OS builds, SDK versions and hardening policies saw perfectly good code crash and burn when spun up, leading to additional rounds of testing and fixes, and ultimately costing more time. Containers solve this problem by abstracting all dependencies into a wrapper that travels alongside the application.
Along with providing fewer headaches for developers, containers bring benefits for the infrastructure team as well. A traditional application may have run across dozens of VMs, each with a quota of cores and memory, and each licensed for its associated OS. On top of this, every VM requires patching and monitoring. Whilst virtualization definitely provides more density than physical tin, adding capacity to a hypervisor platform still takes time, and OS licensing continues to squeeze budgets. In comparison to VMs, the greater density offered by containers reduces the infrastructure footprint required to run the application (and subsequently be protected and maintained), and reduces the required budget for licensing.
Whilst this is certainly a step in the right direction for reducing the infrastructure requiring maintenance, container platforms still have infrastructure running underneath them, and as these platforms grow, time and effort begins to creep back in to supporting an expanding infrastructure layer.
Above is a generalized OpenShift cluster design I produced last year for hosting a company’s public web presence. Highlighted in green are elements I would consider core and necessary infrastructure: domain controllers (DNS and Identity) and firewalls (security). Above this, in orange, is the OpenShift infrastructure: scoped to support ~60 production containers, it provides significantly better density than traditional VMs. Despite this, however, virtual machines remain – and the infrastructure team must support them as such. This leads to the need for more core infra – AV repositories, update management servers and jump boxes, to name but a few.
This was, at the time, entirely fit for purpose and implemented into production. However, with cloud advancing at the pace it does, myriad solutions have appeared in the marketplace that could help to reduce the underlying footprint, and abstract away the need for patching, maintenance and upgrades. For the above example, I’ll look at some specific solutions that could be used to transform this towards a serverless architecture, moving us away from maintenance-heavy IaaS components.
A key component of any container deployment is accessibility to a registry, allowing for centralized storage of container images. From here, the orchestration platform can roll these to the app nodes. Multiple public facing solutions exist such as docker hub, and self-hosted solutions are also deployable.
To fill this purpose within the Azure ecosystem, Azure Container Registry (ACR) provides this functionality: ACR integrates with existing orchestration systems both on premises and cloud based, and has the added benefit of network proximity to other Azure infrastructure, reducing deployment times for container platforms hosted on both Azure IaaS and PaaS.
ACR can be connected to VSTS and other CI/CD pipelines such as Jenkins, and has the benefit of being secured with Azure AD. At the other end of the pipeline, ACR can be connected to not only traditional container infrastructure, but also App Services and Service Fabric, providing a wide range of deployment targets and architectures.
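As a rough sketch of that workflow (the resource group, registry and image names below are illustrative placeholders, not from the design above), standing up a registry and pushing an image to it looks like this:

```shell
# Create a resource group and a Basic-tier container registry
# (all names here are hypothetical placeholders)
az group create --name demo-rg --location westeurope
az acr create --resource-group demo-rg --name demoregistry01 --sku Basic

# Authenticate the local Docker client against the registry
az acr login --name demoregistry01

# Tag a locally built image with the registry's login server, then push
docker tag webapp:1.0 demoregistry01.azurecr.io/webapp:1.0
docker push demoregistry01.azurecr.io/webapp:1.0
```

From here, an orchestrator, App Service or Service Fabric deployment can pull `demoregistry01.azurecr.io/webapp:1.0` directly, or a CI/CD pipeline can perform the tag-and-push step on every build.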
Often the most frustrating part of the initial stand-up of a container infrastructure is the initial deployment of the management nodes: alongside the initial set up, they must be carefully monitored and maintained, as the loss of the management nodes initially results in a loss of control of the cluster, and ultimately results in a complete loss of connectivity to the containers themselves. Azure Kubernetes Service (AKS) steps in to provide the Kubernetes Orchestration layer, without the need to look after any infrastructure. The management cluster is provisioned ready-built and free of charge, with cost incurred only on the app nodes used. A specific version of Kubernetes can be selected to ensure compatibility with current deployments (with upgrades being made available as they are tested), and the ability to provision Windows containers is now in preview.
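To illustrate (cluster and group names are placeholders, and the exact version numbers on offer vary by region and over time), provisioning a version-pinned AKS cluster is a short exercise – note that only the app nodes are specified, as the management plane is handled by the service:

```shell
# List the Kubernetes versions currently offered in the target region
az aks get-versions --location westeurope --output table

# Create a cluster pinned to a specific Kubernetes version;
# billing applies only to the three app nodes requested here
az aks create \
  --resource-group demo-rg \
  --name demo-aks \
  --node-count 3 \
  --kubernetes-version 1.14.8 \
  --generate-ssh-keys

# Merge credentials into ~/.kube/config and confirm the nodes are up
az aks get-credentials --resource-group demo-rg --name demo-aks
kubectl get nodes
```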
One key benefit of containers over VMs is their burstability: given a container can be spun up in seconds, they are ideally suited to handling fluctuating workloads. Whilst the orchestration platform can make efficient use of the app nodes available to it, when this capacity is exhausted there is a wait while an additional node is provisioned underneath.
This problem is mitigated however through the use of Azure Container Instances (ACI): ACI can provide containers to an orchestration platform without the delay of waiting for the underlying node, ideal for sudden bursts in demand. Such a scenario could be an end of season sale on a website, where maintaining maximum capacity at all times would not be cost effective, and demand cannot be accurately gauged prior to the event.
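As a minimal sketch of that burst scenario (again, all names and the image are hypothetical placeholders), a standalone container group can be launched in seconds with no underlying node to wait for, and torn down just as quickly once the sale ends:

```shell
# Launch a single container instance directly - no VM-based node
# to provision first, and billed only while it runs
az container create \
  --resource-group demo-rg \
  --name sale-burst-01 \
  --image demoregistry01.azurecr.io/webapp:1.0 \
  --cpu 1 \
  --memory 1.5 \
  --ports 80

# Remove the instance once demand subsides
az container delete --resource-group demo-rg --name sale-burst-01 --yes
```

For an AKS cluster, the same elasticity can be surfaced through the virtual node add-on, which schedules overflow pods onto ACI rather than requiring the commands above to be run by hand.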
If we refer back to my design from last year, and then factor in the solutions above, we suddenly see a dramatic drop in the need for VMs to be managed by the end customer – in fact, the entire container infrastructure has now been abstracted, as per below:
We’re now able to remove a significant portion of the IaaS, reduce management and maintenance, and improve scalability – a proper transformation to cloud, utilizing native services to make life easier.