So it’s time to move one of your legacy web applications from your on-premise data centre to a public cloud provider such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). This may be a decision you made, or one that your leadership team made. In this post I’ll explain what you need to think about when figuring out how to migrate your web application to the cloud.
About Your Web Application
You have a legacy-style web application consisting of an EAR file running on an application server. It shares the application server with several other deployed applications, and it talks to a relational database that is also shared with other applications. The deployment architecture looks something like this:
Let’s assume you’re not using EJBs in your web application, since EJBs are tightly coupled to their host application server, and that you’re using a dependency injection framework such as Spring.
Do You Need To Migrate?
There are numerous benefits to cloud computing: faster time to market, lower infrastructure costs, and better scalability, to name a few. Often an organization wants to get away from the cost of expensive licenses for application servers and databases. Maybe the thought of someone else worrying about patching operating systems is a compelling reason. Are your infrastructure people fed up with dealing with disk drive failures and power supplies making weird noises? Those problems go away when you use a compute instance on a public cloud provider.
Which of these reasons are meaningful to your organization? It’s important to understand the reasons and the business value for migrating to the cloud so you can make decisions that are consistent with those goals.
What is Different About the Cloud?
Here’s the thing: you’re running on someone else’s servers, which use some sort of virtualization. This technology is exceptionally reliable and very cost-effective, but on rare occasions it does fail. The cloud provider may decide to move some virtual servers around, upgrade the virtualization layer, whatever, and in the process your server reboots. In rare situations your server instance fails outright. Now, you could roll up your sleeves, figure out why your cloud instance failed, and begin a lengthy investigation and resurrection process. Or you could just kill the old instance and start a new one. The latter is by far the more common approach. The bottom line is this: assume the instance your app is running on is ephemeral.
The cloud is elastic. A typical on-premise approach is to provision servers for maximum capacity, plus a bit more. This allows you to handle seasonal spikes in demand like Black Friday or Christmas. Of course, the big drawback of this is all the excess capacity goes unused (and paid for) for the rest of the year. With cloud-native services such as autoscaling, that spike triggers the automatic provisioning of more server instances. Then when demand subsides, autoscaling gracefully shuts down the extra instances. So you pay for only as much compute usage as you need when you need it.
I’m assuming by this point your organization has taken care of all the infrastructure and security issues. That is, there is a network infrastructure to land on (a Virtual Private Cloud), and Identity and Access Management roles and policies are in place. You have a way to connect to the cloud platform, whether it’s by a VPN, SSH, RDP, or whatever.
Lift and Shift
This is where you take your on-premise application and move it to the cloud provider with little or no change to the architecture. Most cloud providers offer tooling to automate this. Although it means you are still incurring software licensing fees (more of them during the transition period), it gets your application onto the cloud, where you can then make optimizations.
Lift and Shift and Tinker
This is the focus of this article. You move your application to the cloud provider, and in the process make some minor optimizations to take advantage of the provider’s cloud-native services. For example, rather than stand up a compute instance to run MS SQL Server, you use the provider’s database-as-a-service offering such as Amazon Relational Database Service (RDS). Instead of using an expensive application server, you use an open source equivalent such as Apache Tomcat.
Refactor and Re-architect
You re-imagine the design and architecture of your application using the services the cloud provider offers. For example, move from a monolithic, layered architecture to a microservices architecture. Perhaps even use serverless container offerings such as Amazon Elastic Container Service on AWS Fargate or Azure Container Instances.
This can be quite expensive, but can be worth the cost and effort if your on-premise architecture cannot meet scaling and elasticity requirements.
Break Out the WAR File
An EAR file is often just a thin wrapper around a WAR file; that wrapper holds the deployment configuration that is specific to the application server. We take the WAR file and run it in an open-source servlet container such as Apache Tomcat or Eclipse Jetty.
We’ll also run one application per servlet container. I know this may sound like a waste of capacity, given that Tomcat can host several WAR files as tenants. But we’ll see later why this is a more resilient strategy.
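If you’re going the container route, the one-app-per-container approach can be sketched with a Dockerfile based on the official Tomcat image. This is a minimal sketch, not a production build; `myapp.war` and the Tomcat version tag are placeholders for your own artifact and chosen baseline:

```dockerfile
# Sketch: one WAR per container, using the official Tomcat image.
# "myapp.war" is a placeholder for your application's WAR file.
FROM tomcat:9.0

# Remove the sample webapps so only our application is deployed
RUN rm -rf /usr/local/tomcat/webapps/*

# Deploy our WAR as the root context
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
```

One WAR per container means a misbehaving application can be killed and replaced without disturbing its neighbours, which is the resilience argument above.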
What You Need To Change in Your App
Remember when we talked about your instance being ephemeral? Suppose you need to spool up a new instance; what about the users who are already logged in? Their sessions will disappear when they begin using the new instance, so they’ll have to log in again. That’s not a very good user experience, particularly if your application involves a multi-step workflow using Spring Web Flow.
You can deal with this by using Spring Session. This library stores session information in an external repository rather than in the application server. Among other options, Spring Session lets you use Redis or a relational database as the backing store.
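With Spring Boot and the Spring Session Data Redis dependency on the classpath, externalizing sessions can be as simple as a few properties. This is a sketch; the host name is a placeholder, and property names assume Spring Boot 2.x:

```properties
# Sketch: store HTTP sessions in Redis via Spring Session
# (assumes spring-session-data-redis is on the classpath;
# the Redis host name below is a placeholder)
spring.session.store-type=redis
spring.redis.host=redis.internal.example.com
spring.redis.port=6379
```

With sessions in Redis, any instance of your application can serve any user’s request, so killing and replacing an instance no longer logs anyone out.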
Some organizations use shared database DataSources on the application server. This is where many applications share one DataSource connection pool, and an application makes a JNDI call to its host application server to fetch the DataSource. No more. In the cloud, each application uses its own DataSource with its own connection pool. Examples of connection pool libraries include HikariCP, c3p0, and Commons DBCP. A side benefit is that if you set the pool’s validation query to something like “SELECT 'my-app-name'”, your DBAs can readily see which app is causing contention issues in the database.
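In Spring Boot with HikariCP (the default pool in Boot 2.x), a per-application pool with an identifying validation query might look like the sketch below. The JDBC URL and pool size are illustrative placeholders:

```properties
# Sketch: a per-application HikariCP connection pool in Spring Boot
# (URL, credentials placeholder, and pool size are illustrative)
spring.datasource.url=jdbc:postgresql://db.internal.example.com:5432/myapp
spring.datasource.hikari.maximum-pool-size=10

# A validation query that identifies this app in the database's session list
spring.datasource.hikari.connection-test-query=SELECT 'my-app-name'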
If your configuration settings are defined in .properties (or .yml) files bundled with the application, that needs to change. If you have to kill the instance and spool up another one, any changes you made to those property or logging configuration files on the old instance will be lost; you’ll get whatever settings are in your version control system.
The way to solve this is with a configuration service such as Spring Cloud Config. Your configuration settings are stored in this service, and your application calls it periodically to retrieve them. With the clever use of annotations, Spring Cloud Config is trivial to implement in your application. Spring Cloud Config in turn fetches the settings from a Git repository, so right there you have visibility into who changed which settings and when.
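On the client side, pointing an application at a Config Server is a couple of properties. This is a sketch; the server URI is a placeholder, and it assumes the Spring Cloud Config client dependency is on the classpath:

```properties
# Sketch: fetch configuration from a Spring Cloud Config server at startup
# (assumes spring-cloud-starter-config on the classpath;
# the config server URI below is a placeholder)
spring.application.name=myapp
spring.cloud.config.uri=http://config.internal.example.com:8888
```

The `spring.application.name` value tells the Config Server which application’s settings to serve, so each app gets its own slice of the Git-backed configuration.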
Implement application monitoring to quickly determine whether your application is healthy or sick. Spring Boot Actuator or Micronaut Management & Monitoring provide simple health status web endpoints out of the box with minimal configuration. Use AWS CloudWatch or Azure Monitor or GCP Cloud Monitoring to periodically ping this health endpoint and raise an alarm when the number of unhealthy results exceeds a threshold.
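With Spring Boot Actuator, exposing a health endpoint is mostly a matter of adding the starter dependency and a line or two of configuration. A sketch, assuming Spring Boot 2.x property names:

```properties
# Sketch: expose the Actuator health endpoint over HTTP
# (assumes spring-boot-starter-actuator on the classpath)
management.endpoints.web.exposure.include=health
management.endpoint.health.show-details=never
```

Your monitoring service then periodically issues a GET to /actuator/health and treats anything other than a status of "UP" as a failure.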
What about the web server you were using as a reverse proxy? Configure an Application Load Balancer instead to front all requests to your app, and set it up with a TLS certificate.
If rapid application failover is a requirement, create a second instance of your application. Configure the load balancer so it is aware of both instances and spreads incoming user requests across the two. If one instance goes bad, the load balancer will know when it sees repeated failures from the health endpoint; it will mark that instance as unhealthy and redirect traffic to the known good one.
What About Authentication and Authorization?
During this period of transition you’re going to be running some workloads on the public cloud and others in your data centre. You’re going to have some sort of VPN connection between the cloud provider and your data centre, with your DNS configured to resolve queries in both. As far as your application is concerned, it can still use the same authentication and authorization server host names as it did prior to this cloud migration effort. If anything, you may need to tweak some DNS entries, but your application can authenticate and authorize with little change.
In the Cloud
Here is a diagram showing how you might deploy your application to AWS:
Four of the biggest items to consider when migrating your web application to the cloud are:
- Session management;
- Application configuration;
- Database connection pooling; and
- Health monitoring and failover.
Beyond this point, you may want to consider re-architecting your application to take advantage of the cloud-native services the provider offers. Maybe consider using Docker to containerize your application, or even Kubernetes. Perhaps avail yourself of serverless offerings such as Amazon Elastic Container Service on AWS Fargate.
The operational cost savings can be quite compelling, but the effort to get there can be costly. Be sure to weigh these against the expected business value, then use your best judgment.