Save costs on our on-demand EC2 instances by shutting them down on a schedule defined in a tag.
The solution can be found in my GitHub repo.
We defined a tag format to drive the shutdown of our EC2 instances. As each instance runs different applications with varying workloads and access patterns, we needed a set of tag values to cover all of them. This is what we came up with:
Shut down at 12am on Saturday and start back up at 12am on Monday.
Shut down at 12am on Saturday and start up at 8am on Monday.
Shut down at 6pm every day (Monday to Sunday) and start up at 8am.
Shut down at 6pm Monday to Friday and start back up at 8am.
Shut down at 6pm and do not bring it back up.
Do not shut down; the instance needs to be up 24×7.
Nothing to do except send an email notifying that the instance is tagged for maintenance.
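To make the schedules above concrete, here is one way they could be expressed as tag values. These names are purely illustrative; the actual tag format lives in the GitHub repo.

```python
# Hypothetical mapping of tag values to the schedules described above.
# The real tag format is defined in the GitHub repo; these names are
# illustrative only.
SCHEDULE_TAGS = {
    "weekend-off":     "stop Sat 12am, start Mon 12am",
    "weekend-off-8am": "stop Sat 12am, start Mon 8am",
    "nightly":         "stop 6pm every day, start 8am",
    "office-hours":    "stop 6pm Mon-Fri, start 8am",
    "stop-only":       "stop 6pm, do not restart",
    "always-on":       "never stop (24x7)",
    "maintenance":     "no action; send a notification email",
}
```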
Lambda to the rescue. We used the Serverless Framework to package everything up, as it was an intuitive way to configure the function and hook into our existing CI/CD pipeline.
Python was the natural fit for this Lambda given the library ecosystem around AWS, especially boto3.
The Lambda fires at a defined interval, as configured in serverless.yml, and checks whether any of the above schedule constraints are met. If so, it performs the relevant operation (either a start or a stop) on those EC2 instances. The scheduling is handled through an integration with AWS EventBridge, which the Serverless Framework wires up under the hood.
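For reference, the schedule hook in serverless.yml looks something like this; the function and handler names here are assumptions, not the repo's actual names.

```yaml
# Illustrative serverless.yml fragment; function and handler names are
# hypothetical. The schedule event below is what the Serverless Framework
# turns into an EventBridge rule under the hood.
functions:
  ec2Scheduler:
    handler: handler.run
    events:
      - schedule: rate(1 hour)
```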
Did I tell you how awesome this framework is? Well, get ready to hear it a couple more times through this post!
In terms of the actual implementation, I will leave it to the reader to go through the GitHub repository; it is a mixture of pytz, datetime, and boto3 in action.
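As a rough sketch of what such a schedule check can look like, here is one way to map a tag value to a start/stop decision. The rule names, tag values, and function are my own illustration under assumed UTC hours, not the repo's actual implementation.

```python
from datetime import datetime, timezone

WEEKDAYS = range(0, 5)   # Monday=0 .. Friday=4
EVERY_DAY = range(0, 7)

# Hypothetical rules: tag value -> (active days, stop hour, start hour) in UTC.
SCHEDULES = {
    "office-hours": (WEEKDAYS, 18, 8),   # stop 6pm, start 8am, Mon-Fri
    "nightly":      (EVERY_DAY, 18, 8),  # stop 6pm, start 8am, every day
}

def desired_action(tag_value, now=None):
    """Return 'stop', 'start', or None for the current EventBridge tick."""
    now = now or datetime.now(timezone.utc)
    rule = SCHEDULES.get(tag_value)
    if rule is None:
        return None  # unknown or 24x7 tag: leave the instance alone
    days, stop_hour, start_hour = rule
    if now.weekday() not in days:
        return None
    if now.hour == stop_hour:
        return "stop"
    if now.hour == start_hour:
        return "start"
    return None
```

The real implementation would also need to convert the tag's local times to UTC with pytz before comparing, and then call boto3's `start_instances`/`stop_instances` on the matching instance IDs.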
Test driven development (TDD)
I am a huge advocate of TDD and so set out to implement my Lambda test-first. I stumbled upon an amazing library called moto. A huge shout-out to its maintainers; it is a great piece of work.
What it essentially lets you do is run a virtual AWS environment and make assertions against it. This was just what I needed and saved me a good several hours, as I did not have to go through deploy/test cycles to exercise my Lambda.
As I used a Pipfile to package my Lambda, it was easy to separate the dependencies needed for development from those needed in production.
As you can see below, moto and pyyaml are pulled in for testing only, while the rest are bundled up when deploying to production.
```toml
[dev-packages]
moto = "*"
pyyaml = "*"

[packages]
boto3 = "*"
pytz = "*"
```
Dockerize the tests
When integrating this into our existing CI/CD platform, I did not want to log in to all the build servers and install the Pipenv-related dependencies just to run the integration tests.
Docker to the rescue. The panacea for all things these days, ain't it!
I wrote a quick Dockerfile that installs all the required dependencies and runs the tests.
I also wanted to make sure the container runs in the UTC timezone. The Dockerfile used is as follows:
```dockerfile
FROM python:3.6-slim

ENV AWS_DEFAULT_REGION=ap-southeast-2
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN pip install --upgrade pip
RUN pip install pipenv

ADD Pipfile /home/ec2-stop-lambda/
ADD *.py /home/ec2-stop-lambda/
ADD serverless.yml /home/ec2-stop-lambda/

WORKDIR /home/ec2-stop-lambda
RUN pipenv install --dev
ENTRYPOINT exec pipenv run test
```
The Python slim variant of the base image was used to keep the size of the artefact to a minimum.
The Lambda now runs in all our environments and starts and stops our EC2 instances on the schedules we define, as expected.
Do drop a comment below if you have anything you would like to add on and thank you for reading!
Have a good week ahead people. Stay safe!