This is the second article in a series about implementing an AWS Serverless Web Application for the global client base of a large enterprise. If you are lost, please read the first article.

T minus 1: Secure the thing

On premises, I have seen (and been part of architecting) many applications where security was bolted on later. The attitude is, “yeah, let’s do the interesting stuff first and we can slap on some security afterwards”. In the serverless world, you can most definitely still do that, but it is much easier to build security in first. In our case, we were required to. There were several reasons for this.

1. This was going to be the first time we adopted a 100% cloud-native technology stack.
2. This was a very high visibility project throughout the company.
3. The application would be used globally by customers who were not managed in our primary enterprise identity store (Microsoft Active Directory).
4. We wanted to prevent inadvertent actions by our developers, who had yet to do any real work on the Amazon Web Services cloud.
5. We wanted everything to be automated, with almost no human intervention.

All these concerns and considerations required that, before we did any development, we plan for authentication, authorization and auditing at all levels for all actions. In this article, I want to share what we ended up doing for numbers 4 & 5 above.

Sandboxes and play areas

My first order of business was to enable our developers to get their feet wet with our technology stack — CloudFront, S3, API Gateway, Lambda, DynamoDB — while preventing them from messing around with production deployments. On premises, you could do this by cordoning off servers in networks that are simply not reachable from workstations. In the public cloud, there is no such network segregation by default: a person with credentials can do anything their permissions allow, to any resource those permissions cover.

To prevent inadvertent actions and mistakes, we decided on day 0 that we simply wouldn’t allow manual changes to any resource in the AWS console. If developers wanted to make a change, they would write CloudFormation templates and run them through a CodePipeline to deploy the resources. The only exception was a sandbox account (aka “play area”) in which a developer could use Visual Studio to write a .NET Core Lambda function and deploy it to AWS using the AWS Toolkit for Visual Studio. DynamoDB tables were pre-created, and API endpoint changes were tasked to a small administering team. So developers would develop, test locally, try their code out in this sandbox and, if everything checked out, commit it, at which point it would eventually make its way to an official environment.
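One way to express the sandbox guardrails is an IAM policy that permits Lambda deployments but keeps the pre-created DynamoDB tables and API Gateway read-only. The sketch below (as a CloudFormation managed policy) is illustrative only — the policy name, action list, and resource scoping are my assumptions, not the actual policy we used:

```yaml
# Sketch of a sandbox developer policy. Names and the exact action
# list are hypothetical; a real policy would scope Resource ARNs
# to the sandbox account's resources rather than "*".
SandboxDeveloperPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    ManagedPolicyName: sandbox-developer   # hypothetical name
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: AllowLambdaDeploys
          Effect: Allow
          Action:
            - lambda:CreateFunction
            - lambda:UpdateFunctionCode
            - lambda:UpdateFunctionConfiguration
            - lambda:InvokeFunction
            - lambda:GetFunction
            - lambda:ListFunctions
          Resource: "*"
        - Sid: ReadOnlyDataAndApi
          Effect: Allow
          Action:
            - dynamodb:GetItem
            - dynamodb:Query
            - dynamodb:Scan
            - dynamodb:DescribeTable
            - apigateway:GET
          Resource: "*"
```

The point of the shape, rather than the specifics: the allow list is deliberately narrow, and anything not granted (table creation, API endpoint changes) is implicitly denied, which is what kept the sandbox safe to experiment in.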

A sandbox environment for developers

Fig. 1: A sandbox environment for developers to play with Lambda functions and preview their functional code in action

Official Environments

Official environments were created very differently. (I will address the build & delivery pipelines in a separate article in this series; here, I will focus on the developer workflow.) For official environments, we declared all AWS infrastructure as code in CloudFormation templates. For example, the CloudFront content delivery network (the service that distributes static content all over the world for fast downloads) was declared entirely in one file in one CodeCommit repository. Any change to that file, once pushed to CodeCommit, would trigger a CodePipeline that applied the change to the first official environment, called dev01. From there, no one except an external AWS administrator (such as myself) could make changes through the AWS Management Console; none of the developers had anything beyond read-only access. Similarly, API Gateway had its own CodeCommit repository with its own pipeline, and so on all the way through DynamoDB. Absolutely none of the infrastructure was ever “hand crafted”.
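To make the “one file, one repository, one pipeline” idea concrete, here is a minimal sketch of what such a CloudFront template might look like. The bucket name, comment, and settings are assumptions for illustration, not taken from the actual repository:

```yaml
# Illustrative skeleton of a CloudFront template kept in its own
# CodeCommit repository. Every value below is a placeholder.
AWSTemplateFormatVersion: "2010-09-09"
Description: CDN for static web content, deployed only via CodePipeline

Resources:
  WebDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Comment: dev01 static content        # hypothetical environment
        DefaultRootObject: index.html
        Origins:
          - Id: StaticSiteBucket
            # Hypothetical bucket name for illustration
            DomainName: example-static-site.s3.amazonaws.com
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: StaticSiteBucket
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```

A push that edits this file is the only way the distribution changes: the pipeline runs the template through CloudFormation, and the console stays read-only for everyone but the administrator.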

Developers write .NET code that deploys through pipelines

Fig. 2: Developers write .NET Core code and CloudFormation infrastructure that makes its way to production via pipelines.

This total separation between a “developer workstation” and “official environments” gave us a level of safety and confidence that proved invaluable to all our development efforts.