Deploying and managing a small set of containerized applications does not pose major issues. However, as the environment scales, managing hundreds of containers requires a container orchestration solution. AWS offers at least a couple of container orchestration solutions that are consumed by many customers. AWS-managed orchestration platforms such as Amazon ECS and Amazon EKS play a vital role in enabling applications to run in a microservices-based architecture.
In this blog, we will discuss Amazon ECS, which is widely used by customers across different industries.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that allows you to run, stop, and manage Docker containers on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon ECS helps you run containers at scale and reduces the number of decisions you must make when deploying your microservices. One of Amazon ECS's greatest advantages is its integration with other AWS services such as Amazon S3, AWS IAM, Amazon Route 53, etc.
To understand the ECS architecture, it is important to understand the various building blocks of ECS and how it enables the deployment of containers.
ECS Cluster
An ECS cluster is a regional service and forms the logical entity in which containers run. It carves out the resources within the cluster and helps scale them dynamically.
ECS clusters provide flexibility in how the underlying AWS nodes are managed, via the EC2 launch type and the Fargate launch type. With the Fargate launch type, all cluster resources are managed by AWS ECS, reducing operational overhead, while with the EC2 launch type the instances are self-managed. Containers that run within a cluster are called tasks. Choosing the right launch type for your ECS cluster is important for building an application architecture that can scale and meet business demands.
The EC2 launch type is recommended for the following workloads:
- Workloads that are price sensitive and need cost optimization.
- Workloads with consistently high CPU and memory usage.
- Stateful applications that require access to persistent storage.
- Workloads where you want to manage the underlying AWS infrastructure for the cluster yourself.
The Fargate launch type is recommended for the following workloads:
- A large number of workloads that need low operational overhead.
- A small number of workloads that need occasional burst capacity.
- Workloads that are stateless and small in size.
- Batch processing workloads.
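To make the Fargate choice concrete, the sketch below builds the request parameters you might pass to boto3's `create_cluster` call for a Fargate-backed cluster. The cluster name, weights, and tags are illustrative assumptions, not values from this post.

```python
# Sketch of a cluster configuration favoring the Fargate launch type.
# The dict mirrors the parameters accepted by boto3's ecs.create_cluster;
# the cluster name and tag values are illustrative assumptions.

def build_cluster_request(name: str) -> dict:
    """Build a create_cluster request that uses Fargate capacity providers."""
    return {
        "clusterName": name,
        # FARGATE_SPOT can reduce cost for interruption-tolerant batch work.
        "capacityProviders": ["FARGATE", "FARGATE_SPOT"],
        "defaultCapacityProviderStrategy": [
            {"capacityProvider": "FARGATE", "weight": 1},
        ],
        "tags": [{"key": "environment", "value": "demo"}],
    }

request = build_cluster_request("orders-cluster")
# In a real deployment this dict would be passed to:
#   boto3.client("ecs").create_cluster(**request)
print(request["capacityProviders"])
```

Because Fargate manages the nodes, the request contains no instance types or AMIs; with the EC2 launch type you would instead manage an Auto Scaling group of container instances yourself.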
ECS Container Agent
The container agent is required to manage containers when the launch type is EC2-based. The ECS agent sends information about the tasks running on a cluster node and their resource utilization, and supports activities such as starting and stopping tasks as needed.
ECS Tasks
Tasks are the smallest unit of work in ECS. Tasks run on ECS container instances that are part of an ECS cluster.
ECS Task Definitions
This is a vital part of the ECS architecture that brings containers together to build an application. A task definition is a JSON-formatted text file that defines the containers that are grouped together to form a single microservice unit, including the CPU and memory configuration the containers need to run.
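As a sketch, here is a minimal task definition expressed as a Python dict that mirrors the JSON file you would register with ECS. The family name, image, and CPU/memory sizes are illustrative assumptions.

```python
# A minimal Fargate task definition as a Python dict; serializing it
# produces the JSON text file format that ECS expects. Values such as
# the family name, image, and sizes are illustrative.
import json

task_definition = {
    "family": "web-service",                  # logical name shared by revisions
    "networkMode": "awsvpc",                  # required network mode for Fargate
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                             # task-level CPU units (0.25 vCPU)
    "memory": "512",                          # task-level memory in MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:1.25",            # image pulled from a registry
            "essential": True,                # task stops if this container stops
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

# Serialize to the JSON document you would register with ECS.
print(json.dumps(task_definition, indent=2))
```

Note that CPU and memory are declared at the task level here; every container in the definition shares that allocation, which is exactly why grouping unrelated containers in one definition becomes a scaling problem, as discussed below.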
A task definition can group multiple containers under a single definition; these containers are deployed on the same compute capacity. In the picture below, three containers running Web, API, and Access have been grouped together. This leads to scaling issues and limits: if these three containers are deployed in the same task definition, scaling up just one of them on demand becomes a challenge.
It is a best practice to scale each container independently based on demand. Deploying your containers with independent task definitions per business function enables better scaling capabilities and improves your microservices architecture. It is strongly recommended that you do not define task definitions that group application containers of different business functions. In the picture above, assuming that Web, API, and Access serve different business functions, separating them is the scalable approach to increasing or decreasing the needed capacity on demand. You can still add additional containers within the same definition where they are needed for logging, observability, and features such as an Envoy proxy for other integrations.
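The split described above can be sketched as one task definition per business function, each carrying only the sidecars that must scale with it. All names and images here are illustrative assumptions.

```python
# Sketch: one task definition per business function (Web, API, Access),
# so each can scale independently. A logging sidecar stays inside each
# definition because it should scale with its main container. Family
# names and images are illustrative assumptions.

def make_task_definition(function_name: str, image: str) -> dict:
    """Build a per-function task definition with a logging sidecar."""
    log_sidecar = {
        "name": f"{function_name}-log-router",
        "image": "fluent/fluent-bit:2.2",     # example log-router image
        "essential": False,                   # task survives if the sidecar stops
    }
    return {
        "family": function_name,
        "containerDefinitions": [
            {"name": function_name, "image": image, "essential": True},
            log_sidecar,
        ],
    }

# Three independent definitions instead of one grouped definition:
definitions = [
    make_task_definition("web", "example/web:1.0"),
    make_task_definition("api", "example/api:1.0"),
    make_task_definition("access", "example/access:1.0"),
]
print([d["family"] for d in definitions])
```

With this layout, a traffic spike on the API tier can be absorbed by scaling only the `api` service, leaving the Web and Access task counts untouched.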
Service: Amazon ECS allows you to simultaneously run and maintain a specified number of tasks (based on a task definition) on an ECS cluster. If a task or container fails for some reason, the service makes sure a new task is launched and maintains the desired number of tasks per the scheduling strategy. Scaling an application requires you to understand the conditions under which it should scale out, whether due to demand or a scheduled activity. Amazon ECS leverages the Application Auto Scaling service to provide this functionality.
Application Auto Scaling allows you to automatically scale your scalable resources according to conditions that you define.
- Target tracking scaling – Scale the resources used by the service based on a target value for a specific CloudWatch metric that you define. For example, average CPU utilization reaching 70% triggers scaling and deploys additional tasks.
- Step scaling – Similar to the above policy, but you scale a resource based on a set of scaling adjustments, incrementally increasing or decreasing the number of tasks.
- Scheduled scaling – Scale a resource one time only or on a recurring schedule.
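The target tracking option can be sketched as the request you would send to Application Auto Scaling's `put_scaling_policy` API. The cluster and service names are illustrative assumptions; the 70% target matches the CPU example above.

```python
# Sketch of a target-tracking policy for an ECS service's desired task
# count, mirroring the parameters of Application Auto Scaling's
# put_scaling_policy API. Cluster and service names are illustrative.

def build_target_tracking_policy(cluster: str, service: str) -> dict:
    """Build a policy that keeps average service CPU near 70%."""
    return {
        "PolicyName": f"{service}-cpu-target-tracking",
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            # ECS adds tasks when average CPU exceeds 70% and removes
            # tasks when it drops back below the target.
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,   # seconds between scale-out actions
            "ScaleInCooldown": 120,   # slower scale-in to avoid flapping
        },
    }

policy = build_target_tracking_policy("orders-cluster", "web")
# This dict would be passed to:
#   boto3.client("application-autoscaling").put_scaling_policy(**policy)
print(policy["ResourceId"])
```

A step scaling policy would use the same `ResourceId` and `ScalableDimension` but replace the target-tracking configuration with explicit step adjustments per alarm breach range.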
Elastic Container Registry: Amazon ECR is the source for your Docker images. You pull Docker images from ECR to run containers within the cluster. It is recommended that you keep image sizes small, which reduces the time it takes to pull, unpack, and start the container service from ECR; this also matters for the performance of your services when they auto scale. Another important recommendation is to keep the ECR repository in the same AWS Region where you deploy the resources for your application.
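The same-Region recommendation is easy to follow because the Region is baked into every ECR image URI. The helper below constructs one; the account ID, Region, repository, and tag are illustrative assumptions.

```python
# ECR image URIs follow the pattern
#   <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>
# This helper builds one; the values used below are illustrative.

def ecr_image_uri(account_id: str, region: str, repository: str, tag: str) -> str:
    """Build a fully qualified ECR image URI for a given Region."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

# Keeping the repository in the same Region as the ECS cluster avoids
# cross-Region pulls, which slow scale-out and add data-transfer cost.
uri = ecr_image_uri("123456789012", "us-east-1", "web", "1.0")
print(uri)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:1.0
```

This is the string you would place in a task definition's `image` field when pulling from ECR instead of a public registry.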
You can aim to build a microservices architecture that can be deployed and scaled independently. However, there are challenges to address when these microservices need to communicate both within a VPC and outside it.
Do you have containerized applications in AWS that were built and deployed in a hurry? Are you looking at ways to align your deployments against best practices?
We at Codincity specialize in Well-Architected Reviews. Reach out to us for a conversation on this topic; we will enable you with the right solutions and recommendations for your environment.