When companies decide to move to the cloud, they must weigh both business and technical factors. AWS managed services such as serverless technology offer an attractive application architecture choice: they let teams deliver features fast at a lower cost structure. However, doing so is far from simple and can come with a significant learning curve. Even the best-trained, most well-meaning engineers can make bad decisions, and business managers can underestimate complexity and costs. What can one do to reduce these risks? This article takes a narrow view of such risk and examines the problem through a development and operations lens. At FastUp we have applied the following best practices to our clients' serverless cloud operations, as well as to our own, with great success.

1) Dedicated AWS Accounts

When developers first start working with API Gateway, Lambda functions and DynamoDB tables, they typically create everything in the AWS Console or from "blueprints" and "quickstarts". That is a great starting point. Pretty soon, though, a single service balloons into multiple endpoints, functions and database tables, and operations start becoming complex. Communicating the technology architecture to peers and across teams becomes difficult. When there are bugs and problems, it is not easy to assign responsibility for fixes. Due to the distributed nature of AWS serverless, deployment history, audit logs and artifacts also become complicated, and ensuring and enforcing organizational security policies becomes hard. The following diagram shows how a single-account deployment may look.

Home brew AWS serverless architecture

Instead, create dedicated AWS accounts for specific purposes. Create a new AWS account, call it "dev", and use it for developers and for testing your applications. Create another dedicated AWS account that developers cannot access, or can access only minimally; call it "prod" and let your operators and security engineers run it. To release applications from the dev account to the prod account, use continuous delivery pipelines (such as AWS CodePipeline). Here is how a multi-account serverless deployment may look.

Multi-account AWS serverless architecture

This way, developers can build new features continuously without risking failures and mishaps in production, where your customers live. There are numerous other benefits to a multi-account AWS structure. (In Microsoft Azure, the equivalent is achieved through "Subscriptions".)
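The dev/prod split above can be made concrete in a deployment script. Below is a minimal sketch of how such a script might select the target account per stage; the account IDs and the cross-account role name are hypothetical placeholders, not real values.

```python
# Hypothetical account IDs -- substitute your organization's real ones.
ACCOUNTS = {
    "dev": "111111111111",   # developers build and test here
    "prod": "222222222222",  # locked down; only the pipeline deploys here
}

# Hypothetical name of the IAM role the pipeline assumes in each account.
DEPLOY_ROLE = "serverless-deploy"

def deploy_role_arn(stage: str) -> str:
    """Return the IAM role ARN a pipeline would assume to deploy to a stage."""
    account = ACCOUNTS[stage]
    return f"arn:aws:iam::{account}:role/{DEPLOY_ROLE}"

print(deploy_role_arn("prod"))
```

Keeping this mapping in one place means developers never hold prod credentials directly; only the pipeline's role assumption crosses the account boundary.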

2) Single Account Versioned Deployments

This is a simpler alternative to multi-account pipelines, particularly friendly to smaller organizations that do not have a dedicated operations or security department. There are always applications that do not change much or do not require high security: think of an anonymous survey application, or a file upload application that must be branded to marketing standards. In such cases, a developer can quickly whip up an app and deploy it in a single AWS account. To avoid the issues noted above, though, the developer should use AWS SAM to version their APIs and functions. This allows safe upgrades and rollbacks without much operational intervention. Here is how a single-account versioned app may look.
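To illustrate the versioning idea, here is a minimal sketch of a SAM template (rendered as a Python dict for readability) that publishes a new Lambda version on every deploy and shifts traffic to it gradually, so a bad release can be rolled back. The resource name, handler and code path are hypothetical; `AutoPublishAlias` and `DeploymentPreference` are the SAM properties that drive the versioning.

```python
import json

# Sketch of a versioned SAM function resource; names are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "SurveyFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",
                "Runtime": "python3.12",
                "CodeUri": "src/",
                # Publish a numbered Lambda version on each deploy and
                # point the "live" alias at it.
                "AutoPublishAlias": "live",
                # Shift traffic gradually; CodeDeploy rolls back on failure.
                "DeploymentPreference": {"Type": "Canary10Percent5Minutes"},
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Because each deploy creates a new immutable version behind the alias, rolling back is just repointing the alias, with no operational heroics required.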

Single-account versioned AWS serverless architecture

3) Automate Everything Using Pipelines

AWS has made it extremely simple to write code, deploy it and manage it. That makes it easy to get started, but there are still significant challenges when operating large applications with more than a handful of developers, operators and security personnel, as well as many users. This will surely improve as time goes by; until then, invest in automated pipelines for continuous delivery. AWS CodePipeline is a great choice. Start with a git push: when a developer pushes to CodeCommit, automatically trigger a pipeline that builds the code and gives the developer immediate positive (or negative) feedback. Once the code is built and deployed, automatically email your developers and testers an invitation to test the application. When the tests pass and the new features are ready for production, let your change managers push a button to promote them. Investing in 100% automation goes a long way toward securing and operating the AWS cloud: developer and operator permission requirements stay minimal, security requirements can be baked into the automation, and operational procedures can be completely automated. It may sound daunting at first, and there will be many opinions within your teams, but success will be worth the effort of wading through all that. Below is a picture of how a pipeline might look in AWS CodePipeline.

Automated continuous delivery

4) Automate the Storage Layer

Developers do not view the storage layer as a feature layer: changing a database column does not, by itself, deliver a new feature. Developers tend to invest their time in writing code for new features and in using the latest and greatest technologies, while storage is generally left to database specialists. The best practice is to automate the creation and management of the storage layer as well. Here, storage includes databases, file systems, S3 buckets, and managed services such as Amazon Cognito. All of these storage technologies can be coded as CloudFormation or Terraform templates and deployed automatically using pipelines. The main benefit of this approach is that specialists can focus on what they do best, data operations that support the business, while automation handles the mundane daily work. Such automation makes operations smooth, and security is built into the automation itself.
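As a small example of storage-as-code, here is a sketch of a CloudFormation template (again rendered as a Python dict) defining a single DynamoDB table. The table and attribute names are hypothetical; the point is that a pipeline deploys this file, so no human creates tables by hand.

```python
import json

# Sketch of the storage layer as code; names are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SurveyTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                # On-demand capacity: no capacity planning to automate away.
                "BillingMode": "PAY_PER_REQUEST",
                "AttributeDefinitions": [
                    {"AttributeName": "surveyId", "AttributeType": "S"}
                ],
                "KeySchema": [
                    {"AttributeName": "surveyId", "KeyType": "HASH"}
                ],
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Checking this template into source control gives the database specialists the same review, history and rollback story the application code already has.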

5) Backward Compatible Changes

Always make "backward compatible" changes instead of "destructive" ones. We have written in the past about how to compose highly distributed serverless applications. Think of a serverless application as a grid of services: each node in the grid is a distributed component serving a specific purpose. The grid is easy to construct and standardize, and it gives developers, managers and operators a convenient convention for talking about large, distributed serverless applications. The nodes can represent any AWS service, such as CloudFront, S3, API Gateway, Lambda, DynamoDB or Cognito, or even traditional services such as load balancers, EC2 instances and relational databases. The key point is to automate and orchestrate deployment to each node independently, in a safe, rollback-friendly way. When that is achieved, the application reaches serverless nirvana. Below is the grid diagram of a serverless application.

Serverless application grid

In the context of this diagram, continuous deployment must be carried out from the bottom up in a backward compatible manner. The bottom layer is the storage layer, and changes in it impact all layers above. Hence, when you make changes to DynamoDB (or whatever else lives in that layer), make the change as an addition to the existing design; do not make a destructive change. A destructive change disables or removes a prior design and replaces it with the new one, while a backward compatible change retains the old design while additionally supporting the new one. This way, developers can change any node in the grid with the guarantee that everything else will continue to work as before. Next, make a backward compatible change to the layer above: the compute layer, which accepts incoming requests and calculates the responses sent to clients. Continue upwards until all nodes in the grid are changed. This methodical approach lets developers make controlled changes without breaking the entire application. It may sound daunting in itself, but once instituted, the process is quite simple and provides a determinate path to change management.
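A concrete sketch of such an additive change, under the assumption that a new optional `displayName` attribute is being added to user records: old records written before the change simply lack the attribute, so the compute layer reads it with a fallback instead of requiring a migration. The field names are hypothetical.

```python
def display_name(record: dict) -> str:
    """Work with both old records (no displayName) and new ones."""
    # Additive change: fall back to the pre-existing "username" field
    # instead of assuming every record has the new attribute.
    return record.get("displayName", record["username"])

old_record = {"username": "alice"}                         # pre-change shape
new_record = {"username": "bob", "displayName": "Bob K."}  # post-change shape

print(display_name(old_record))  # alice
print(display_name(new_record))  # Bob K.
```

Because both record shapes keep working, the storage-layer change can ship first and the compute layer can adopt the new attribute at its own pace, which is exactly the bottom-up ordering described above.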

Conclusion

Migrating to the cloud and re-architecting for the cloud are great ways to improve many business and technical metrics of your application portfolio. The AWS cloud makes it very easy to get started and is becoming easier to manage day after day. Business managers must understand its good parts and make plans to deal with the bad. These five best practices should start developers down the right path to creating and managing serverless applications.