Terraform RabbitMQ Autocluster

At mytaxi we handle a lot of MQTT traffic back and forth with the taxi driver app. Thousands of connections must be kept open for all online drivers. The system behind that has to be fast and reliable; otherwise customers might not be able to book a taxi. Our former RabbitMQ setup was running on Amazon Linux. While planning a RabbitMQ update we found that Amazon Linux only provides Erlang R14B04 in its repositories. So Amazon Linux was a dead end. We then turned to Docker and slightly modified the alpine-rabbitmq-autocluster image for our needs. Together with an AWS Autoscaling »
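
As a rough sketch of the peer-discovery idea behind the autocluster approach, the following Python snippet asks an Auto Scaling group for its members and resolves their private DNS names. The group name and the use of boto3 here are assumptions for the example, not the plugin's actual code or our real setup.

```python
# Sketch: discover RabbitMQ cluster peers from an AWS Auto Scaling group,
# roughly the peer-discovery idea the autocluster AWS backend builds on.
# The group name "rabbitmq-cluster" is a placeholder.
import boto3

ASG_NAME = "rabbitmq-cluster"  # hypothetical Auto Scaling group name

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Find the instance IDs that currently belong to the Auto Scaling group.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
instance_ids = [i["InstanceId"] for i in group["Instances"]]

# Resolve the private DNS names the RabbitMQ nodes would use to join each other.
reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
peers = [
    instance["PrivateDnsName"]
    for reservation in reservations
    for instance in reservation["Instances"]
]
print(peers)
```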

[UPDATE] Spot Instance Termination-Spotter

Recently I posted about how to gracefully handle the termination of AWS Spot Instances. After some time in production it turned out not to be as reliable as I wished. Sometimes the Ansible environment wasn't in order or the script exited for no apparent reason. It was time for an update. The requirements were: it must be portable and independent, and it must provide a safe Ansible environment including boto for AWS API interaction. The solution was obviously a Docker container. But why would you create a ~120MB container just to run some Bash and Ansible? Mostly because of the advantages that Ansible »

Show and tell: Netflix Ice

If you run your infrastructure at AWS you probably also use several of their services, like CloudWatch, RDS or S3. Over a month this adds up to a big pile of paper with a lot of dollar signs on it. Also, from a security standpoint, I recommend having a separate AWS account for testing alongside the one that runs your live infrastructure. This way you lower the risk of accidentally damaging your live setup. So there is another pile of invoices for your second account. I don't like these piles of paper, and I want to see the impact of »

Gracefully handle the termination of AWS Spot Instances

As I announced in my last post, I want to write today about the termination of AWS Spot Instances and how I set up a Termination-Spotter Service. If the price for a spot instance rises above the limit that you are willing to pay for it, you will lose this instance. However, you will not lose it all of a sudden: AWS gives you a two-minute warning before termination. This warning comes in the form of a metadata endpoint at http://169.254.169.254/latest/meta-data/spot/termination-time. This endpoint will become available when your instance has been marked for »
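
To make the mechanism concrete, here is a minimal Python sketch that polls that metadata endpoint until it answers. The five-second interval and the shutdown step in the final comment are assumptions for illustration, not the exact logic of the Termination-Spotter.

```python
# Sketch: poll the spot termination notice. The URL is the endpoint from the
# post; the polling interval and the shutdown hook are illustrative only.
import time
import urllib.error
import urllib.request

TERMINATION_URL = (
    "http://169.254.169.254/latest/meta-data/spot/termination-time"
)

def termination_pending() -> bool:
    """Return True once AWS has marked this instance for termination."""
    try:
        # The endpoint only answers with HTTP 200 after the instance has been
        # marked for termination; before that it returns 404.
        with urllib.request.urlopen(TERMINATION_URL, timeout=2) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False
    except urllib.error.URLError:
        return False

while not termination_pending():
    time.sleep(5)

# Roughly two minutes remain at this point: start draining connections,
# deregistering from load balancers, and so on.
print("Instance marked for termination, starting graceful shutdown")
```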

Running AWS ECS with Spot Instances

In November 2015 we started using AWS Elastic Container Service in production. I became deeply involved with this new AWS product and quickly found its upsides and downsides (e.g. you still can't delete Task Definitions; wtf AWS, we have about 200 of them by now). We had to invent several tweaks in order to get all the microservices we run into containers. By now we mostly control ECS via its API, because the console is very confusing and overloaded. Customized ECS instances: we created a specific AMI that is used as the root partition of a new instance. Two »
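
As an example of driving ECS through its API rather than the console, the following boto3 sketch lists the revisions of a task definition family and deregisters the older ones. The family name and the number of revisions kept are placeholders, not our real services or our actual cleanup policy.

```python
# Sketch: talk to ECS through its API instead of the console. Tidying up old
# task definition revisions is the kind of chore that is easier to script.
# "example-service" is a placeholder family name.
import boto3

ecs = boto3.client("ecs")

# List every revision of a task definition family, oldest first.
paginator = ecs.get_paginator("list_task_definitions")
revisions = []
for page in paginator.paginate(familyPrefix="example-service", sort="ASC"):
    revisions.extend(page["taskDefinitionArns"])

# Task definitions can't be deleted, only deregistered (set to INACTIVE),
# so keep the latest few revisions and deregister the rest.
for arn in revisions[:-5]:
    ecs.deregister_task_definition(taskDefinition=arn)
```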

SLAAC on a FortiGate 200D

SLAAC is my preferred form of address provisioning in IPv6. Interfaces derive their identifier from the MAC address [EUI-64 format, RFC 4291] and are thus unique. You can easily configure it on a router interface and instantly have IPv6 connectivity in the network behind it. Combined with a DNS server, you will have just as little complexity as with legacy IP. I will cover some exceptions regarding DNS later in this post. In my first real post here I want to cover the SLAAC settings for a FortiGate 200D with FortiOS 5.4. Configuring SLAAC is currently only possible on »
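
To illustrate the EUI-64 derivation mentioned above, here is a small Python sketch that builds the interface identifier from a MAC address as described in RFC 4291. The MAC address is just an example value and has nothing to do with the FortiGate configuration itself.

```python
# Sketch: derive an EUI-64 based IPv6 interface identifier from a 48-bit MAC
# address, the mechanism SLAAC hosts use by default (RFC 4291, Appendix A).

def eui64_interface_id(mac: str) -> str:
    """Return the EUI-64 interface identifier for a 48-bit MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    # Insert ff:fe in the middle of the MAC to stretch it to 64 bits...
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # ...and flip the universal/local bit in the first octet.
    eui64[0] ^= 0x02
    groups = [f"{eui64[i]:02x}{eui64[i + 1]:02x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# Example MAC 00:09:0f:aa:bb:cc -> 0209:0fff:feaa:bbcc, i.e. the host part of
# the address the interface configures for itself via SLAAC.
print(eui64_interface_id("00:09:0f:aa:bb:cc"))
```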

Fiat Lux

no design yet, no images, just light »
