Guidelines for creating an EKS cluster for ScalarDL Ledger
This document explains the requirements and recommendations for creating an Amazon Elastic Kubernetes Service (EKS) cluster for ScalarDL Ledger deployment. For details on how to deploy ScalarDL Ledger on an EKS cluster, see Deploy ScalarDL Ledger on Amazon EKS.
Before you begin
You must create an EKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an EKS cluster, see the official Amazon documentation at Creating an Amazon EKS cluster.
Requirements
When deploying ScalarDL Ledger, you must:
- Create the EKS cluster by using a supported Kubernetes version, as illustrated in the sketch after this list.
- Configure the EKS cluster based on the version of Kubernetes and your project's requirements.
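For example, if you create the cluster with eksctl, you can pin the Kubernetes version in the cluster configuration. The following is a minimal sketch; the cluster name, region, and version are placeholders, so replace them with values that match your project and a Kubernetes version that ScalarDL supports.

```yaml
# Hypothetical eksctl configuration (cluster.yaml); all values are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: scalardl-ledger-cluster
  region: us-west-2
  version: "1.28"   # use a Kubernetes version that ScalarDL supports
```

You can then create the cluster by running `eksctl create cluster -f cluster.yaml`.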
For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same EKS cluster as the ScalarDL Ledger deployment.
Recommendations (optional)
The following are some recommendations for deploying ScalarDL Ledger. These recommendations are not required, so you can choose whether or not to apply them based on your needs.
Create at least three worker nodes and three pods
To ensure that the EKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across those worker nodes. You can refer to the sample configurations of podAntiAffinity for spreading three pods across the worker nodes; a minimal sketch is also shown below.
If you place the worker nodes in different availability zones (AZs), you can withstand an AZ failure.
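The following is a minimal sketch of such a podAntiAffinity configuration. The Deployment name, the label (app.kubernetes.io/name: scalardl-ledger), and the image are assumptions for illustration; adjust them to match your actual deployment (for example, the values generated by the ScalarDL Helm chart).

```yaml
# Hypothetical Deployment fragment; names, labels, and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scalardl-ledger
spec:
  replicas: 3   # at least three pods for high availability
  selector:
    matchLabels:
      app.kubernetes.io/name: scalardl-ledger
  template:
    metadata:
      labels:
        app.kubernetes.io/name: scalardl-ledger
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: scalardl-ledger
              # Schedule at most one Ledger pod per worker node. To spread pods
              # across AZs instead, use topology.kubernetes.io/zone.
              topologyKey: kubernetes.io/hostname
      containers:
        - name: scalardl-ledger
          image: <your-scalardl-ledger-image>   # placeholder
```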
Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDL Ledger node group
From the perspective of commercial licenses, resources for one pod running ScalarDL Ledger are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Ledger pod, Kubernetes could deploy some of the following components to each worker node:
- ScalarDL Ledger pod (2vCPU / 4GB)
- Envoy proxy
- Monitoring components (if you deploy monitoring components such as kube-prometheus-stack)
- Kubernetes components
With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in Create at least three worker nodes and three pods.
Keep in mind that three nodes with at least 4vCPU / 8GB memory resources per node is the minimum environment for production. You should also consider the resources of the EKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, and the number of ScalarDL Ledger pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using a feature such as the Horizontal Pod Autoscaler (HPA), you should consider the maximum number of pods on each worker node when deciding the worker node resources.
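As a concrete illustration of the 2vCPU / 4GB limit mentioned above, the following sketch sets resource requests and limits on the Ledger container. The ledger.resources key is an assumption based on common Helm chart conventions; check the documentation for your chart or manifests for the actual key.

```yaml
# Hypothetical Helm values fragment; the key names are assumptions.
ledger:
  resources:
    requests:
      cpu: 2000m     # 2 vCPU
      memory: 4Gi
    limits:
      cpu: 2000m
      memory: 4Gi
```

Setting requests equal to limits gives the pod the Guaranteed QoS class, which makes node capacity planning more predictable.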
Configure Cluster Autoscaler in EKS
If you want to scale ScalarDL Ledger pods automatically by using the Horizontal Pod Autoscaler (HPA), you should also configure Cluster Autoscaler in EKS. For details, see the official Amazon documentation at Autoscaling.
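For reference, the following is a minimal HPA sketch that scales a hypothetical scalardl-ledger Deployment based on CPU utilization; the names and thresholds are placeholders. Note that the HPA requires a metrics source, such as metrics-server, in the cluster.

```yaml
# Hypothetical HorizontalPodAutoscaler; names and thresholds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scalardl-ledger
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: scalardl-ledger   # placeholder; match your Ledger Deployment name
  minReplicas: 3            # keep the three-pod minimum recommended above
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```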
In addition, if you configure Cluster Autoscaler, you should create the subnets in the Amazon Virtual Private Cloud (VPC) for EKS with a prefix (e.g., /24) that provides a sufficient number of IP addresses, so that EKS can continue to work without network issues after scaling.
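The following eksctl sketch ties these points together: a VPC CIDR large enough for per-AZ subnets (a /24 subnet provides 251 usable IP addresses, since AWS reserves 5 addresses per subnet), a node group sized per the recommendations above, and the tags and IAM policy that Cluster Autoscaler uses to discover node groups. All names, the region, and the CIDR are placeholders, and Cluster Autoscaler itself still needs to be deployed to the cluster separately.

```yaml
# Hypothetical eksctl configuration; all names and values are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: scalardl-ledger-cluster
  region: us-west-2
vpc:
  cidr: 10.0.0.0/16   # eksctl creates per-AZ subnets from this range
managedNodeGroups:
  - name: scalardl-ledger-ng
    instanceType: c5.xlarge   # 4 vCPU / 8 GiB, per the sizing recommendation above
    minSize: 3
    desiredCapacity: 3
    maxSize: 6
    iam:
      withAddonPolicies:
        autoScaler: true      # grants the IAM permissions Cluster Autoscaler needs
    tags:
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/scalardl-ledger-cluster: "owned"
```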