Recently I posted about how to gracefully handle the termination of AWS Spot Instances. After some time in production it turned out not to be as reliable as I had hoped. Sometimes the Ansible environment wasn't in order, or the script exited for no apparent reason. It was time for an update. The requirements were: it must be portable and independent, and it must provide a safe Ansible environment including boto for AWS API interaction. The solution was obviously a Docker container. But why would you create a ~120MB container just to run some Bash and Ansible? Mostly because of the advantages that Ansible »
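To give an idea of the approach (the actual image and playbook aren't shown in this excerpt, so the names below are illustrative assumptions), a self-contained Ansible-plus-boto image could be invoked along these lines:

```bash
# Hypothetical invocation: run a playbook from a self-contained Ansible + boto image.
# Image name, playbook path, and credential handling are illustrative, not the real setup.
docker run --rm \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION \
  -v "$(pwd)/playbooks:/playbooks:ro" \
  example/ansible-runner:latest \
  ansible-playbook /playbooks/handle-spot-termination.yml
```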
If you run your infrastructure on AWS you probably also use several of their services like CloudWatch, RDS or S3. Over a month this adds up to a big pile of paper with a lot of dollar signs on it. Also, from a security standpoint, I recommend having a separate AWS account for testing alongside the one that runs your live infrastructure. This way you lower the risk of accidentally damaging your live setup. So there is another pile of invoices for your second account. I don't like this pile of paper, and I want to see the impact of »
In November 2015 we started using AWS Elastic Container Service in production. I became deeply involved with this new AWS product and quickly found its up- and downsides (e.g. you still can't delete Task Definitions, wtf AWS, we have like 200 of them by now). We had to invent several tweaks in order to get all the microservices we run into containers. By now we mostly control ECS via its API, because the console is very confusing and overloaded. Customized ECS instances: We created a specific AMI that is used as the root partition of a new instance. Two »
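To sketch what controlling ECS via its API rather than the console looks like in practice (the cluster, family, and revision names below are made up, not our actual ones), the standard AWS CLI calls are along these lines:

```bash
# Register a new task definition revision from a JSON file (file name is illustrative)
aws ecs register-task-definition --cli-input-json file://taskdef.json

# List the active revisions of a task definition family
aws ecs list-task-definitions --family-prefix my-service --status ACTIVE

# Revisions can be deregistered, but (as noted above) not deleted
aws ecs deregister-task-definition --task-definition my-service:42
```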