Elasticsearch cluster on Docker, Terraformed for ECS

Elasticsearch is, at its core, a full-text search engine. I have had good experiences with it in the past, and many applications nowadays depend on it heavily. Whether you need plain search or facet-based search, ES is there for you. It can even power suggestions when a user misspells a query. There is also a plethora of open source tooling built around it, including the well-known ELK stack (Elasticsearch, Logstash, Kibana), famous among developers because logs are the only saviour in case of mishaps, and it uses ES as its datastore.
I will not go deep into Elasticsearch itself; the main objective of this post is containerizing an ES service.

The Need for Containers

Why containers, when I can boot up a machine, install ES on it, attach it to my network, and boom, it's ready to serve my base application?
To answer this, just ask yourself how much time it takes to bring all of that up. You have to go through a tedious installation procedure, and on top of that, what happens when load shoots up in the middle of the night, or while you are out having a good time?
I literally hate this shit, but it's part of the work, so why not automate it!
These days, instead of working hard, people are working intelligently: you shouldn't have to wake up and manually boot new EC2 instances and run all those installation scripts. Do the homework beforehand and make your clusters auto-scalable and purely immutable.
Containers help developers create a complete image with the tools they need, which they can then hand over to Ops to deploy to production. This keeps the development and production environments in sync. If you still have doubts, Google is there for you; there are many posts on the net, written by talented folks around the world, about the pros and cons of containers.
Locally, developers now have these tiny little docker-compose files to kick-start their development. You can build your own Docker image using a Dockerfile, or use any image available on Docker Hub. The main motive for using Docker Compose is to have different containers, built for different services (or microservices), come up at the same time. And if you have any experience with Docker, you know these services come up in seconds, unlike the old virtual-machine-based solutions. Containers give us the flexibility to decouple our base application's needs: database, cache, search, queue, etc.
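For instance, a bare-bones docker-compose.yml that brings up a hypothetical web app alongside Elasticsearch could look like the following (the service names, ports, and the elasticsearch:2.4 image tag are illustrative, not taken from the repo):
// docker-compose.yml sketch: app and ES coming up together
version: '2'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  elasticsearch:
    image: elasticsearch:2.4
    ports:
      - "9200:9200"
      - "9300:9300"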
Cool, but how can I deploy it to production?
This post is focused on AWS infrastructure. AWS has a service called ECS to manage your containers across many EC2 machines, which together form a cluster. You can decide how many instances you want, their autoscaling policy, and so on, through the cluster, its services, and the task definitions.
Still, it's a manual effort to create all of this. How can I make it portable and ready to use for any purpose I want, like we have with Docker Compose?
Terraform is all you need. It's an orchestration tool to manage your complete infrastructure; from VPC to EC2, everything you need can be written as code.
I have created a repo with such an Elasticsearch cluster; have a look around it, as that is what we are going to talk about: https://github.com/tkant/elasticsearch-terraform-ecs

About elasticsearch-terraform-ecs:

It is composed of a Dockerfile, which is nothing but the recipe for our container image. It exposes ports 9200 & 9300. You can also mount a folder into the container as a persistent data volume for Elasticsearch to use. Instead of a folder you could also use data-only containers (I am not a big fan of them, so I am happy with mounting a local folder).
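I won't reproduce the repo's exact Dockerfile here, but the key directives of such an image boil down to something like this (the base image tag is an assumption on my part):
// Dockerfile sketch: expose the ports and the data volume
FROM elasticsearch:2.4
EXPOSE 9200 9300
VOLUME /usr/share/elasticsearch/data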
To use this image locally, you can build it with docker build and run it with the following commands:
// docker build
docker build -t es_box github.com/tkant/elasticsearch-terraform-ecs
// docker run (note: flags go before the image name, and -v needs an absolute path)
mkdir es_data && docker run -p 9200:9200 -p 9300:9300 -v "$PWD/es_data":/usr/share/elasticsearch/data es_box
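Once the container is up, a quick sanity check (assuming the default port mapping above):
// verify ES is responding; this should print a JSON banner
curl http://localhost:9200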
If you are also a fan of Docker Compose, you may find this even easier to use. In docker-compose.yml you can link this container into another one; you then just pass the name of the link, which acts as a hostname ready to be used by the base container, as shown below:
// yml usage: "db" becomes a hostname inside the "web" container
web:
  links:
    - db
You just saved yourself from passing hardcoded IPs 😉
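With that link in place, the base container can reach ES by hostname. For example (assuming the web and db services above):
// from inside the web container, "db" resolves to the ES container
docker-compose exec web curl http://db:9200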
Great, that made my development environment much better. Let's now talk about pushing it to production.
To use my Terraform scripts, you need a VPC, two private subnets in two different availability zones, and basic routing and NACL rules enabled.
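Just to visualize those prerequisites, in Terraform terms they look roughly like this (the CIDRs, resource names, and availability zones are placeholders, not values from the repo):
// hypothetical VPC with two private subnets in separate AZs
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}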
Now push the Docker image to ECR (you can push it to your Docker Hub repo instead if you want). Then edit the variables in main.tf and variables.tf under the terraform folder as per your needs.
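The ECR push looks roughly like this (the account ID, region, and repo name are placeholders):
// authenticate to ECR, then tag and push the image
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag es_box:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/es_box:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/es_box:latest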

Persistent Shared Storage:

Elasticsearch stores its data at:
// ES Volume Mount Path
/usr/share/elasticsearch/data
This path is already exposed as a volume in the Dockerfile.
What we need here is a centralized, shared location for the Elasticsearch data. A container's storage is ephemeral, and it isn't shared with other containers. Again, you could use a separate data-only container, but let's go with a traditional NFS (Network File System) based solution, which is much more mature and reliable. In AWS we have yet another service for this, EFS (Elastic File System), which is essentially a managed NFS version 4.
So if you actually checked out the Terraform code linked above, you will find an EFS being created, and it is mounted on the containers' host machine (an EC2 instance) at:
// EFS mounted on the EC2 host at
/mnt/efs
This is done using the user-data script; the path is then further volume-mounted into the ES containers via the task definition. Pretty cool!
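The user-data side of it is essentially a classic NFS mount, something along these lines (the filesystem ID and region are placeholders):
// sketch of the user-data mount step
yum install -y nfs-utils
mkdir -p /mnt/efs
mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
And on the Terraform side, the task definition wires that host path into the container, roughly like this (the resource name and the container definitions file are hypothetical):
// hypothetical volume block in the ECS task definition
resource "aws_ecs_task_definition" "es" {
  family                = "es"
  container_definitions = file("es-task.json")

  volume {
    name      = "es-data"
    host_path = "/mnt/efs/es-data"
  }
}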
This solves our persistent data-storage problem. AWS says EFS is well suited for containers running on top of ECS, but personally I don't have any production-level experience with it, so I can't comment on that. If you face any problems with it, please do let us know by commenting on this post.

What this Terraform script creates:

  1. EFS: persistent storage for our Elasticsearch cluster
  2. ECS: the cluster and task definition
  3. ELB: an internal ELB for use with the private subnets
  4. SG: the required security groups
  5. IAM: the required IAM role
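To stand all of this up, it is the usual Terraform workflow, run from the terraform folder of the repo (this assumes you have already edited the variables as described earlier):
// plan and apply the infrastructure
cd terraform
terraform init
terraform plan
terraform apply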

I hope you find it useful!