How we automated the build and deployment of .NET Core applications on the AWS Serverless stack.

This is the fifth article in a series about implementing an AWS Serverless Web Application for global customers of a large enterprise. If you are lost, please read the first article.

In April 2017, I started working earnestly with a client to build a globally used serverless web application. The incumbent product was a SharePoint application written for internal staff, and the target state was a high-performing, globally used web application for external customers. Among the many goals of the architecture was automated continuous deployment to production. At that point, neither anyone in the client’s organization nor anyone we knew outside it had any experience with continuous delivery of a serverless application. We had multiple goals to achieve. Here are a few of the top ones.

  1. Seamless Developer Experience: Developers should find it easy to deploy Lambdas, API Gateways, DynamoDB tables, and static assets to test their code without disrupting any “higher” environments.
  2. 100% Automation: No manual steps for any deployment activities from dev to production environments.
  3. Support Legacy Requirements: We wanted full interoperability with our legacy process — an SDLC-like process with many gates, hand-offs, manual server setups, code branching.
  4. No Dedicated Capacity: In general, fixed CPU-servers were to be avoided. Only usage-priced resources were to be used.
  5. Microservice Deployment: Deploy small independent Lambdas backing API gateway resources instead of large code base with MVC.
  6. Safety: Deployments should not cause downtime inadvertently or have ambiguous paths causing any confusion or dangerous errors.
  7. Absolutely No Downtime: There should be absolutely no downtime when deploying anything to any environment.

In this article, I want to introduce at a very high level what I ended up proposing. This article builds on my previous article about “Microservice Composition”.

Requirements & Constraints

There are three main stakeholders who consumed this solution and drove the business requirements.

Management: Management’s main goal was to minimize risk as an existing application was re-engineered onto AWS Serverless. (Note: this is only in the context of build and deployment; the ultimate goal was a higher-performing global application.)

Technical Staff: Developers, testers, and operators wanted everything to be easy and very fast.

End Users: I consider the end users, my client’s customers, to be the implicit stakeholders. End users want features and upgrades quickly, and they drive the agility requirements.

The only constraint to speak of was our choice of AWS cloud infrastructure. Any deviation from using AWS would have required approvals. This was actually a good thing, because it forced us to use AWS best practices and distribute the application globally with ease.

Deploying to Distributed Architecture

One major consideration in our build and deploy process is the distributed nature of our application. Recall from a previous article that the entire serverless stack is distributed across multiple vertical functionalities and five horizontal technology tiers. This means that each intersection of a vertical and a horizontal became a deployable solution. In the rest of this article, I will describe how each of these deployable solutions is deployed along a vertical. But first, I must state two key decisions we made early on that made continuous delivery possible.

Backward Compatibility

The first decision we made early on was to never release a change that is not backward compatible with the rest of the stack. In other words, if we needed to change the key schema of a DynamoDB table, we vowed never to simply change it in place; instead, we would add a new key schema and allow upstream code to use the old and the new at the same time. This allowed us to make changes to small parts of the application without breaking the entire thing.
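As an illustration, here is a minimal sketch of what such a backward-compatible DynamoDB change might look like in CloudFormation: rather than altering the existing primary key (which DynamoDB does not allow in place anyway), the new key is introduced as a Global Secondary Index, so old and new access patterns coexist. The table, attribute, and index names are hypothetical, not taken from the actual project.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OrdersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: orders                  # hypothetical table name
      BillingMode: PAY_PER_REQUEST       # usage-priced, matching the "no dedicated capacity" goal
      AttributeDefinitions:
        - AttributeName: orderId         # existing partition key
          AttributeType: S
        - AttributeName: customerId      # new key attribute being introduced
          AttributeType: S
      KeySchema:
        - AttributeName: orderId
          KeyType: HASH
      GlobalSecondaryIndexes:
        # The "new key schema" is added as an index; existing readers of the
        # orderId key keep working while new code can query by customerId.
        - IndexName: by-customer
          KeySchema:
            - AttributeName: customerId
              KeyType: HASH
          Projection:
            ProjectionType: ALL
```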

Change Sequence

Another decision we made early on was to always release changes in a sequence from the bottom of our stack to the top. Let’s say we were going to release a change that included a DynamoDB change (a key-schema change), a Lambda function change (a modified function), an API Gateway change (a modified method), and a client application change (a new Angular widget). In this case, we would release the backward-compatible DynamoDB change first, followed by a new Lambda function version, followed by a new API deployment, and finally the change to the Angular application. This allowed us to “phase in” a change and roll back at any point if a problem was identified.
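To make the ordering concrete, here is a rough sketch of what an ordered release could look like in a CodeBuild buildspec, deploying one hypothetical CloudFormation stack per tier from the bottom up. The template, stack, and bucket names are placeholders, not the project’s actual pipeline definition.

```yaml
# buildspec.yml (sketch) - release tiers from the bottom of the stack to the top
version: 0.2
phases:
  build:
    commands:
      # 1. Data tier: backward-compatible DynamoDB change
      - aws cloudformation deploy --template-file dynamodb.yaml --stack-name orders-data
      # 2. New Lambda function version
      - aws cloudformation deploy --template-file lambda.yaml --stack-name orders-lambda --capabilities CAPABILITY_IAM
      # 3. New API Gateway deployment pointing at the new Lambda version
      - aws cloudformation deploy --template-file api.yaml --stack-name orders-api
      # 4. Finally, the Angular client assets
      - aws s3 sync ./dist s3://my-angular-app-assets --delete
```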

A day in the life of a Serverless Application

To propose the solution and communicate it to my client, I used a “storyline”. In this storyline, I simplified the communication down to changes to the Angular application and the Lambda functions, since these are the most familiar parts of the stack (a.k.a. the front-end and back-end code).

Day 1

Let’s say that on Day 1, the Lambda function is running version 1 and the Angular app is running version 1. Users connect to CloudFront from their browsers and download the Angular app, which makes API Gateway invocations that eventually run the Lambda functions.

[Figure: Day 1 in the life of a serverless app CI/CD]
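As a rough sketch of the front-end half of that Day 1 picture, the built Angular assets might sit in an S3 bucket served through a CloudFront distribution along the lines below; the bucket and resource names are made up, and the Lambda and API Gateway pieces appear in the Day 2 and Day 3 sketches.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AngularAssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-angular-app-assets      # hypothetical bucket for the built Angular app
  AngularDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: angular-assets
            DomainName: !GetAtt AngularAssetsBucket.RegionalDomainName
            S3OriginConfig:
              OriginAccessIdentity: ''       # simplified; a real setup would front the bucket with an OAI
        DefaultCacheBehavior:
          TargetOriginId: angular-assets
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```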

Day 2

Let’s say that on Day 2, a code change to the Lambda function is to be released. In this case, the developer would check code in to CodeCommit, along with the CloudFormation needed to deploy a new version of the Lambda function.

[Figure: Day 2 in the life of a serverless app CI/CD]

Then, the deployed structure would look somewhat like this on Day 2. The newly minted Lambda function version V2 would be tested in dev and QA, but not used in production yet.

[Figure: Day 2 deployed state in the life of a serverless app CI/CD]
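A minimal sketch of what that checked-in CloudFormation might contain is below: it packages the .NET Core handler from an artifact bucket and publishes an immutable Lambda version. The function, handler, bucket, and role names are hypothetical, and the runtime identifier depends on the .NET Core version actually in use.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  LambdaExecutionRoleArn:
    Type: String                          # IAM role managed elsewhere in the stack
Resources:
  GetOrdersFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: get-orders            # hypothetical function name
      Runtime: dotnetcore2.1              # depends on the .NET Core version in use
      Handler: GetOrders::GetOrders.Function::Handler
      Role: !Ref LambdaExecutionRoleArn
      Code:
        S3Bucket: my-artifact-bucket      # hypothetical build artifact bucket
        S3Key: get-orders/v2/package.zip  # new build output for V2
  GetOrdersVersionV2:
    Type: AWS::Lambda::Version
    Properties:
      FunctionName: !Ref GetOrdersFunction
      Description: V2 - modified function
```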

Day 3

After all testing is complete, on Day 3, the developers would let the API Gateway pipeline run and create a new deployment on the API Gateway.

[Figure: Day 3 in the life of a serverless app CI/CD]

After deployment, this newly deployed version points to Lambda V2, and Lambda V1 becomes the unused Lambda version. At this point, the Angular app starts talking to Lambda version 2.

[Figure: Day 3 deployed state in the life of a serverless app CI/CD]
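A hypothetical sketch of that Day 3 step: the API Gateway method integration is pointed at the new Lambda version’s ARN, and a fresh deployment pushes that wiring to the stage the Angular app calls. The REST API, resource, and version ARN are passed in as parameters here purely for illustration.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  RestApiId:
    Type: String                          # existing API Gateway REST API
  OrdersResourceId:
    Type: String                          # existing /orders resource
  GetOrdersVersionArn:
    Type: String                          # ARN of the Lambda version to promote (V2)
Resources:
  GetOrdersMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref RestApiId
      ResourceId: !Ref OrdersResourceId
      HttpMethod: GET
      AuthorizationType: NONE
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${GetOrdersVersionArn}/invocations
  ApiDeploymentDay3:
    Type: AWS::ApiGateway::Deployment
    DependsOn: GetOrdersMethod
    Properties:
      RestApiId: !Ref RestApiId
      StageName: prod                     # the stage the Angular app points at
```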

In Conclusion

While I have oversimplified things in this blog post, this is still the crux of the entire continuous deployment and delivery solution. It is the starting point of all conversations.

It is critical to set the expectation that there should be no downtime for applications, and no manual process involved beyond writing code. It is equally critical to communicate all aspects of the solution in a manner that traditional Ops staff and developers can consume and understand.

I enjoy working with my clients through FastUp on such business critical projects. I hope to write more on this topic in the coming weeks.