# Guidelines for creating an AKS cluster for ScalarDL Ledger and ScalarDL Auditor
This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger and ScalarDL Auditor deployment. For details on how to deploy ScalarDL Ledger and ScalarDL Auditor on an AKS cluster, see Deploy ScalarDL Ledger and ScalarDL Auditor on AKS.
## Before you begin
You must create an AKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an AKS cluster, refer to the official Microsoft documentation for the tool you use in your environment, such as Azure CLI, Azure PowerShell, or the Azure portal.
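For example, with Azure CLI you can check which Kubernetes versions AKS currently supports in your region and create a cluster with one of them. The following is a minimal sketch; the resource group, cluster name, region, and Kubernetes version are example values that you must replace with values for your environment.

```bash
# List the Kubernetes versions that AKS supports in the target region.
az aks get-versions --location eastus --output table

# Create an AKS cluster with one of the supported Kubernetes versions.
# The resource group, cluster name, and version are example values.
az aks create \
  --resource-group MyResourceGroup \
  --name scalardl-ledger-aks \
  --kubernetes-version 1.28.5 \
  --generate-ssh-keys
```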
## Requirements
When deploying ScalarDL Ledger and ScalarDL Auditor, you must:
- Create two AKS clusters by using a supported Kubernetes version.
  - One AKS cluster for ScalarDL Ledger
  - One AKS cluster for ScalarDL Auditor
- Configure the AKS clusters based on the version of Kubernetes and your project's requirements.
- Configure a virtual network (VNet) as follows.
  - Connect the VNet of AKS (for Ledger) and the VNet of AKS (for Auditor) by using virtual network peering. To do so, you must specify different, non-overlapping IP ranges for the VNet of AKS (for Ledger) and the VNet of AKS (for Auditor) when you create those VNets (see the sketch after this list).
  - Allow connections between Ledger and Auditor to make ScalarDL (Auditor mode) work properly.
  - For more details about these network requirements, refer to Configure Network Peering for ScalarDL Auditor Mode.
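The following Azure CLI sketch shows one way to satisfy these network requirements: two VNets with non-overlapping address spaces and bidirectional VNet peering between them. All resource names and CIDR ranges are example values, not requirements.

```bash
# Create a VNet for the Ledger cluster and one for the Auditor cluster.
# The address spaces must not overlap; otherwise, peering cannot route
# traffic between the two VNets.
az network vnet create \
  --resource-group MyResourceGroup \
  --name ledger-vnet \
  --address-prefixes 10.42.0.0/16 \
  --subnet-name ledger-subnet \
  --subnet-prefixes 10.42.1.0/24

az network vnet create \
  --resource-group MyResourceGroup \
  --name auditor-vnet \
  --address-prefixes 10.43.0.0/16 \
  --subnet-name auditor-subnet \
  --subnet-prefixes 10.43.1.0/24

# Peer the two VNets in both directions so that Ledger and Auditor
# can connect to each other.
az network vnet peering create \
  --resource-group MyResourceGroup \
  --name ledger-to-auditor \
  --vnet-name ledger-vnet \
  --remote-vnet auditor-vnet \
  --allow-vnet-access

az network vnet peering create \
  --resource-group MyResourceGroup \
  --name auditor-to-ledger \
  --vnet-name auditor-vnet \
  --remote-vnet ledger-vnet \
  --allow-vnet-access
```

When you then create each AKS cluster, you can place it in its own VNet by passing the ID of the corresponding subnet to the `--vnet-subnet-id` parameter of `az aks create`.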
For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same AKS clusters as the ScalarDL Ledger and ScalarDL Auditor deployments.
## Recommendations (optional)
The following are some recommendations for deploying ScalarDL Ledger and ScalarDL Auditor. These recommendations are not required, so you can choose whether to apply them based on your needs.
### Create at least three worker nodes and three pods per AKS cluster
To ensure that the AKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the ScalarDL Ledger sample configurations and ScalarDL Auditor sample configurations of `podAntiAffinity` for spreading three pods across the worker nodes.
If you place the worker nodes in different availability zones (AZs), you can withstand an AZ failure.
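As a sketch under the same assumptions as the earlier examples, the following Azure CLI command creates an AKS cluster whose default node pool has three worker nodes spread across three availability zones. The resource names and region are example values, and availability zone support depends on the region you choose.

```bash
# Create an AKS cluster with three worker nodes spread across
# three availability zones in the region.
az aks create \
  --resource-group MyResourceGroup \
  --name scalardl-ledger-aks \
  --location eastus \
  --node-count 3 \
  --zones 1 2 3 \
  --generate-ssh-keys
```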