I’m a lead developer at a startup, and I’ve been working at startups for the past four years. Over that time I’ve learned a lot about managing infrastructure through services like AWS, GCP, and Azure.
In this article, I’m going to talk about why we started using AWS Secrets Manager to manage the sensitive data — passwords, keys, tokens, and so on — used in our applications.
This is what is covered in this article:
- What am I talking about?
- What did our previous setup look like?
- Problems with the old setup
- How did we use secrets manager to solve the problems?
What Am I Talking About?
If you’re not familiar with a configuration server, it’s basically a service that stores the configuration data your applications need.
If you’re wondering what configuration data is, it’s any data needed to run an application that differs between environments like staging and production.
For example: if you’re using Stripe to charge your customers, you’ll need different API keys for your staging and production environments.
We keep a separate configuration server for each environment: a staging server for staging configuration data and, similarly, a production server for prod configuration data. At runtime, applications pull this data from the appropriate server and use it.
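To make the per-environment setup concrete, here is a minimal sketch of how an application might pick its configuration server. The `APP_ENV` variable name and the server URLs are illustrative assumptions, not our actual setup:

```python
import os

# Hypothetical mapping from environment name to configuration-server URL;
# both the env var name and the URLs are made up for illustration.
CONFIG_SERVERS = {
    "staging": "https://consul.staging.example.com",
    "production": "https://consul.prod.example.com",
}

def config_server_url(env=None):
    """Pick the configuration server for the current environment."""
    env = env or os.environ.get("APP_ENV", "staging")
    if env not in CONFIG_SERVERS:
        raise ValueError("unknown environment: %s" % env)
    return CONFIG_SERVERS[env]

print(config_server_url("production"))  # https://consul.prod.example.com
```

Defaulting to staging when `APP_ENV` is unset is one reasonable choice; failing loudly on an unknown environment avoids silently pointing prod code at staging data.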
The next question is: how do we get this data into a configuration server?
What Did Our Previous Setup Look Like?
There are multiple ways to store configuration data, depending on the service you’re using. In our case that service is Consul, and this is how we use it:
- We store each service’s configuration data as a JSON file in a GitHub repository, using staging and prod branches for the respective environments.
- Every time we make changes to this repository, we run a Jenkins job to update the data on the Consul server.
- Our microservices, which are orchestrated with Docker, pull the configuration data into their images at build time and use it at runtime.
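To show what the pull step above might look like, here is a small sketch of parsing a Consul KV response. Consul’s KV HTTP API (`GET /v1/kv/<key>`) returns a JSON array whose `Value` field is base64-encoded; the key name and config contents below are illustrative, and the payload is constructed locally rather than fetched over the network:

```python
import base64
import json

# A payload shaped like Consul's KV HTTP API response; the key name and
# config values are made up for illustration.
raw_response = json.dumps([{
    "Key": "services/billing/config",
    "Value": base64.b64encode(
        json.dumps({"STRIPE_KEY": "sk_test_123"}).encode()
    ).decode(),
}])

def parse_consul_kv(body):
    """Decode the first KV entry's base64 Value into a config dict."""
    entry = json.loads(body)[0]
    return json.loads(base64.b64decode(entry["Value"]))

config = parse_consul_kv(raw_response)
print(config["STRIPE_KEY"])  # sk_test_123
```

In a real build you would fetch `raw_response` from the Consul server over HTTP before decoding it the same way.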
Problems With Our Old Setup
Storing sensitive data as part of your code in a GitHub repository causes several problems.
- Sensitive data doesn’t belong in your codebase on GitHub, even in a private repository.
- It isn’t secure: if an attacker gains access to your GitHub account, all of your secrets are compromised.
How Did We Use Secrets Manager To Solve the Problems?
Many services provide encrypt/decrypt functionality for storing sensitive data. The idea is that you provide data in plain form, and the service encrypts it with a key.
When you request that data back, the service decrypts it and returns it in plain form. There are many details involved in this process — the encryption algorithm used, read/write access controls, limiting access to specific keys, and so on — but those are out of scope for this blog.
AWS Secrets Manager is one such service, and it works well with the rest of the AWS ecosystem.
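As a rough sketch of what reading from Secrets Manager looks like, the helper below calls the `get_secret_value` API and parses the `SecretString` as JSON. In production the `client` argument would be a real `boto3.client("secretsmanager")`; here a stand-in object with the same method shape is used so the flow can be shown without AWS credentials, and the secret name and contents are illustrative:

```python
import json

def get_secret(client, secret_id):
    """Fetch a secret and parse its SecretString as a JSON config dict.

    `client` is expected to behave like boto3.client("secretsmanager").
    """
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# Stand-in for the boto3 client, so the flow runs without AWS access.
class FakeSecretsManager:
    def get_secret_value(self, SecretId):
        return {
            "Name": SecretId,
            "SecretString": json.dumps({"DB_PASSWORD": "s3cret"}),
        }

secrets = get_secret(FakeSecretsManager(), "prod/billing/db")
print(secrets["DB_PASSWORD"])  # s3cret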
This Is What We Did To Solve the Problems Discussed Above
- Removed sensitive configuration data from our private GitHub repositories.
- Used KMS and Secrets Manager to store sensitive data. We add these entries manually, one by one, in the AWS console after logging in.
- Gave our applications/microservices read access to the data in Secrets Manager by creating IAM policies and attaching them to ECS task roles.
- Updated the Dockerfiles and Consul template files of our services to pull the data stored in Secrets Manager when building the Docker image.
- Note that we still store all other non-sensitive data in a Consul server. When we build Docker images, we pull data from both Consul and AWS Secrets Manager, merge them, and use the result.
- Our microservices now read sensitive data from AWS Secrets Manager instead of a Consul server. The secrets no longer sit in plain text in a repository; AWS keeps them encrypted at rest and behind access controls.
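The merge step in the list above can be sketched as a plain dictionary merge. The config keys and values here are illustrative; the one real design choice shown is that secrets take precedence over Consul data on key collisions, so a placeholder in Consul can never shadow the real secret:

```python
# Non-sensitive config (as it might come from Consul) and sensitive
# config (as it might come from Secrets Manager); values are made up.
consul_config = {"LOG_LEVEL": "info", "STRIPE_KEY": "placeholder"}
secret_config = {"STRIPE_KEY": "sk_live_abc", "DB_PASSWORD": "s3cret"}

def merge_config(non_sensitive, sensitive):
    """Merge both sources; sensitive values win on key collisions."""
    merged = dict(non_sensitive)
    merged.update(sensitive)
    return merged

config = merge_config(consul_config, secret_config)
print(config["STRIPE_KEY"])  # sk_live_abc
```

The merged dict is what ends up available to the service at runtime, combining Consul’s non-sensitive settings with the decrypted secrets.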
I hope this article has given you some insight into how sensitive data can be stored and accessed during the lifecycle of an application.