The concept of lift and shift has been around for many years within IT: from the days when it meant packing datacenters into lorries and hurriedly racking and stacking at the other end, to the present day, when the term describes moving on-premises VMs into a cloud offering. Whilst the two situations may appear vastly different at first glance, they boil down to the same concept: taking what is already there, in terms of applications, services, quirks and bottlenecks, and moving it wholesale to another location. Whilst the majority of companies recognize this as a simple stop-gap while exiting a legacy hosting site, some fall into the trap of thinking the exercise will solve all the woes of legacy hosting, only to discover that very little changes, regardless of the tin now sitting, rather abstractly, below.
Before the days of cloud, all services, however trivial, required hosting. This evolved over time from physical boxes to smaller VMs, reducing the initial footprint. More recently, however, myriad solutions to this old problem have become available with the advent of serverless architectures and the ability to host scripts rather than their underlying infrastructure. This provides a new avenue for hosting these jobs, and with it the potential for significant cost savings in the era of hourly resource billing.
Common examples that I see on most migration projects revolve around maintenance jobs and backups. Servers within the environment are flagged during a migration assessment as either heavily underutilized or incredibly high on storage costs, usually because they are either a script hosting server (think Active Directory tidying scripts) or an archive store for old data (such as decommissioned VMs and user profiles). On traditional infrastructure this may have made practical sense; when migrating to cloud, however, these sorts of services are ideal candidates for simple transformation to a serverless architecture. This can lead to a reduction in cost, a reduction in the number of VMs to manage, and the chance to dip a toe into the expanded capabilities of a cloud platform, letting a customer get to grips with what could be done to truly transform and optimize their environment.
Below are some of the potential offerings available to customers in Azure, along with potential use cases for each. This is of course not exhaustive, but it provides a starting point for further discussion on the topic when it comes to a migration or transformation.
Several of our recently completed assessments have uncovered servers that exist primarily to tidy up Active Directory, or to archive off data as part of HR processes for leavers and maternity leave. Over time, these servers have built up an array of complex scripts that are critical to managing the environment yet comparatively rarely executed. These servers pose ideal candidates for transition to a serverless architecture, where billing occurs per script execution rather than constantly for a single server.
Azure Automation runbooks can be built from existing scripts with little rework. Runbooks can accept parameterized inputs, such as user account names, and from there execute a series of predefined tasks, such as disabling user accounts and gathering their profile data into low-cost storage. In a hybrid cloud scenario, this facility can also be extended to on-premises machines using Hybrid Worker machines: small-footprint VMs hosted on premises that extend cloud functionality beyond Azure.
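As a rough illustration of the shape such a runbook might take (Azure Automation also supports Python runbooks), the sketch below plans the steps for a parameterized leaver-offboarding job. The function name, the container name and the step wording are all hypothetical, and the real AD and storage calls are indicated only in comments:

```python
# Illustrative sketch of a parameterized offboarding runbook.
# The actual Azure/AD operations are represented as comments;
# only the step-planning logic is shown here.

def plan_offboarding(username: str, archive_container: str = "leaver-archive"):
    """Return the ordered steps a leaver-offboarding runbook would run."""
    if not username:
        raise ValueError("a user account name is required")
    archive_path = f"{archive_container}/{username.lower()}/profile.zip"
    return [
        f"disable account '{username}'",                      # e.g. an AD disable call
        f"export profile data for '{username}'",              # gather the user's data
        f"upload archive to low-cost storage at '{archive_path}'",
        f"record that '{username}' has been processed",       # feed back to HR
    ]

if __name__ == "__main__":
    for step in plan_offboarding("jbloggs"):
        print(step)
```

The key point is the single mandatory input: the same runbook then serves every leaver, with billing incurred only when it actually runs.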
Alongside standard PowerShell, Azure Automation also supports PowerShell Desired State Configuration (DSC), allowing newly provisioned machines to be brought into line with company standards, for example installing any server hardening, AV and monitoring tooling required. Manual intervention during deployments is thereby minimized, and services are available and accepted into operation much faster.
Many companies focusing on web presence and engagement have made significant investment in backend infrastructure to support their websites, with functionality far beyond a simple static, non-interactive web presence. These backend systems are performing functions such as document conversion and image resizing, all the way up to accepting various image inputs to create custom image mosaics through backend processing. Whilst a VM would have previously been required for these jobs, serverless options have been developed to once again reduce management overhead and provide rich reporting on job progress and performance.
Azure Functions can be written in multiple languages, ranging from standard C#/.NET through to Python, PHP or even Bash and PowerShell. Once written, these jobs can be hooked into pre-existing services, whether invoked as a webhook or listening on other services such as storage. A function can ingest, transform and output a variety of items, limited only by its underlying language. Some customers have also leveraged functions to deliver customized content to a web page based on the logged-in user, drawing on a function's ability to tie multiple services together. In addition, a function is entirely elastic: it can scale up to meet whatever performance is required, and conversely scale down to save considerably on cost during quiet hours.
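To make the image-resizing example concrete, here is a minimal sketch of the kind of pure transformation logic that might sit inside a Python function; the HTTP or storage trigger wiring and the imaging library call are omitted, and the function name is my own invention:

```python
def thumbnail_size(width: int, height: int, max_edge: int = 256) -> tuple[int, int]:
    """Compute output dimensions for a thumbnail, preserving aspect ratio.

    In an Azure Function this might be called from a blob- or HTTP-triggered
    handler before handing the actual pixel work to an imaging library.
    """
    if width <= 0 or height <= 0:
        raise ValueError("dimensions must be positive")
    if max(width, height) <= max_edge:
        return width, height  # already small enough; avoid upscaling
    scale = max_edge / max(width, height)
    return max(1, round(width * scale)), max(1, round(height * scale))

if __name__ == "__main__":
    print(thumbnail_size(1920, 1080))  # → (256, 144)
```

Because logic like this is stateless, it scales out naturally: each uploaded image simply becomes another function invocation.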
Whilst a slightly more obvious concept, Azure Storage is commonly overlooked when customers look to implement storage solutions to serve applications and business processes. I have recently had several customers looking to transfer vast archives of data attached to VMs in order to service regulatory requirements: were these transferred to a cloud offering as-is, the storage costs would prove significant due to the sheer volume of data to be stored, alongside the cost of keeping a VM online and maintained purely for data access.
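A back-of-envelope comparison shows why this matters. The per-unit rates below are placeholder assumptions, not real Azure prices, so substitute current pricing for any real assessment; the shape of the result, a fixed VM cost dwarfing cheap at-rest storage, is the point:

```python
# Hypothetical unit rates for illustration only -- NOT real Azure prices.
VM_PER_MONTH = 120.00          # assumed cost of an always-on VM
DISK_PER_GB_MONTH = 0.10       # assumed attached-disk rate
ARCHIVE_PER_GB_MONTH = 0.002   # assumed archive-tier blob rate

def monthly_cost_vm(data_gb: float) -> float:
    """Cost of keeping the archive on an always-on VM with attached disks."""
    return VM_PER_MONTH + data_gb * DISK_PER_GB_MONTH

def monthly_cost_archive(data_gb: float) -> float:
    """Cost of holding the same data in archive-tier blob storage."""
    return data_gb * ARCHIVE_PER_GB_MONTH

if __name__ == "__main__":
    gb = 10_000  # 10 TB of rarely accessed archive data
    print(f"VM + disks:   {monthly_cost_vm(gb):,.2f}/month")
    print(f"Archive tier: {monthly_cost_archive(gb):,.2f}/month")
```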
Azure Storage now offers both a “cool” tier and an “archive” tier, each offering a significant cost saving for rarely accessed data. Both tiers are accessed via the same means (HTTPS and the REST API) as the hot tier, allowing the vast majority of backup solutions to connect natively to this storage, although archive-tier data must first be rehydrated to an online tier before it can be read. Additionally, through the use of either Azure Automation or Functions, data to be archived off can automatically be added to these tiers without manual intervention.
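One way such an automated tiering job might decide where data belongs is a simple age-based rule. The thresholds below are illustrative assumptions of my own, not Azure defaults; an Automation runbook or Function would then apply the chosen tier via the storage API:

```python
def choose_tier(days_since_modified: int,
                cool_after: int = 30,
                archive_after: int = 180) -> str:
    """Pick a blob access tier based on how stale the data is.

    Thresholds (30 and 180 days) are assumed values for illustration;
    tune them to the organization's actual access patterns.
    """
    if days_since_modified < 0:
        raise ValueError("age cannot be negative")
    if days_since_modified >= archive_after:
        return "Archive"
    if days_since_modified >= cool_after:
        return "Cool"
    return "Hot"

if __name__ == "__main__":
    for age in (5, 45, 400):
        print(age, "days ->", choose_tier(age))
```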
Storing data at this level provides secure data retention, with all data encrypted at rest. Additionally, with Geo-Redundant Storage (GRS), the data is replicated to a secondary region, providing greatly improved data resiliency compared with traditional tape archiving.
Whilst the above offers only a quick first look at some of the Azure serverless capabilities, I hope it provides some insight into where the industry is heading, and into some of the discussions we hope to have with customers. Lift and shift may solve some immediate problems, but transformation of legacy is key, and the above provides some simple ways to begin that journey.