Apache Mesos: Container Orchestration Comparison

The Need for Orchestration in Container-based Deployments

In modern deployments that make use of containers (especially cloud-based ones), it's common to need some degree of failover, load balancing, and, in general, service clustering. Deploying on medium- to large-scale platforms implies resource scheduling, which is only possible when an orchestration engine of some kind is in use. This is where container orchestrators enter the scene. In this brief article, we'll take a look at the most widely used container orchestration solutions, how they compare to each other, and the environments where each option is the right fit.

Knowing Our Actors: Docker Swarm, Google Kubernetes, and Marathon/Apache-Mesos

In order to better understand our actors, let's briefly list them and their core concepts first.

Swarm is Docker's in-house orchestration solution. Because it was designed by Docker and is already included as a core Docker feature, we can argue that Swarm is the most compatible and natural way to orchestrate Docker container infrastructures. Like many orchestration solutions (OpenStack, for example), Swarm includes a control layer, called "managers", and a worker layer, called "workers". All services on both layers run inside containers. Also included as a very important part of the whole orchestration solution is the discovery layer, container-based too, which runs a key-value store such as etcd or Consul. This discovery layer is the registry part of the solution (you can compare it to OpenStack Keystone or AWS IAM). It knows where container-based services are running; basically, it is the "container-based yellow pages" of the orchestration infrastructure, providing both state and discovery for your cluster. Your final deployed services run on the worker nodes (which you can compare to OpenStack compute nodes or the AWS EC2 service). Finally, you have services and tasks. Services are sets of Docker containers running across your Swarm infrastructure. Those services are composed of tasks, which are your individual Docker containers (mostly), but they can also include specific commands running alongside the containers.

In summary, a Swarm architecture contains the following components:

Managers: The control layer. This layer should be redundant in your architecture; normally, each manager is deployed on an independent node.
Discovery: Your state and service discovery layer. You can set up your discovery services on the same manager nodes or on an independent set of nodes.
Workers: All your end services will run on your worker nodes. This layer should have as many workers as you need.
Services: Your final deployed services, composed of tasks.
Tasks: The Docker containers and commands inside a service.

Note: You can begin with Swarm in an "all-in-one" setup, running everything at first on a single server.

Google offers its own solution for container orchestration too: Kubernetes. It is more complex than Swarm, but it offers more features as well. The first component of a Kubernetes setup is the master. The master is the control layer, which runs an "API" service that controls the whole orchestrator. While a single master can control the complete setup, in production environments multiple masters are the norm. The master runs many services inside containers, each with a very specific function inside the master. Note that, because the masters expose their API through REST, a load balancer solution is required in front of the masters in order to have true high availability and multi-server load balancing. Again, a discovery layer is needed here, with etcd being one of the big players. Your etcd layer should run on its own nodes, with at least two servers in an etcd cluster in order to provide redundancy. You can run etcd either inside containers or directly on the host operating system.
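To make the manager/worker/service/task model more concrete, here is a minimal, hypothetical Swarm stack file. The service name, image, port, and replica count are illustrative choices, not taken from the text:

```yaml
# web-stack.yml -- hypothetical example; names and values are illustrative.
version: "3.8"
services:
  web:
    image: nginx:alpine          # each replica becomes one task (a container)
    ports:
      - "8080:80"
    deploy:
      replicas: 3                # Swarm schedules 3 tasks across the cluster
      placement:
        constraints:
          - node.role == worker  # keep tasks off the manager nodes
```

Deployed with `docker stack deploy -c web-stack.yml web`, the managers would schedule the three tasks of this service across the available worker nodes.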
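Bootstrapping the redundant etcd layer typically looks like the following sketch for one member of a three-node cluster. The flags are standard etcd bootstrap options, but the node names and IP addresses are invented and must match your own topology:

```shell
# Run on node etcd-1 (10.0.1.11); repeat on each member with its own
# --name and URLs. Addresses below are examples only.
etcd --name etcd-1 \
  --initial-advertise-peer-urls http://10.0.1.11:2380 \
  --listen-peer-urls http://10.0.1.11:2380 \
  --listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.0.1.11:2379 \
  --initial-cluster-token my-etcd-cluster \
  --initial-cluster etcd-1=http://10.0.1.11:2380,etcd-2=http://10.0.1.12:2380,etcd-3=http://10.0.1.13:2380 \
  --initial-cluster-state new
```

Note that etcd reaches quorum by majority vote, so odd cluster sizes (three or five members) tolerate node failures better than a two-member cluster.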
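Because the masters expose their API through REST, any HTTP client can talk to them. As a hedged sketch: from inside a pod, the standard in-cluster service-account credentials and environment variables allow a plain `curl` call against the API server (the namespace queried here is just an example):

```shell
# Query the Kubernetes API server over REST from inside a pod.
# The token/CA paths and env vars are the standard in-cluster defaults.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces/default/pods"
```

It is exactly because every client, including `kubectl`, goes through this REST endpoint that a load balancer in front of multiple masters gives you high availability transparently.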
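The "container-based yellow pages" role of the discovery layer can be sketched in a few lines of Python. This is a toy in-memory stand-in for what a store like etcd or Consul provides, not their real APIs; all names here are invented for illustration:

```python
class DiscoveryRegistry:
    """Toy key-value service registry, mimicking the role etcd/Consul
    play in an orchestrator: tasks register their location, and other
    components look services up by name."""

    def __init__(self):
        self._services = {}  # service name -> list of "host:port" endpoints

    def register(self, name, endpoint):
        """A task announces where it is running."""
        self._services.setdefault(name, []).append(endpoint)

    def lookup(self, name):
        """Return all known endpoints for a service (empty if unknown)."""
        return list(self._services.get(name, []))

    def deregister(self, name, endpoint):
        """Remove an endpoint, e.g. when a task dies."""
        if endpoint in self._services.get(name, []):
            self._services[name].remove(endpoint)


registry = DiscoveryRegistry()
registry.register("web", "10.0.0.5:8080")
registry.register("web", "10.0.0.6:8080")
print(registry.lookup("web"))  # → ['10.0.0.5:8080', '10.0.0.6:8080']
```

A real discovery store adds the hard parts this sketch ignores: persistence, replication across nodes, and consensus so every manager sees the same state.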