Getting Started with Helm Charts (ScalarDB Cluster with TLS)
This tutorial explains how to get started with ScalarDB Cluster with TLS configurations by using Helm Charts on a Kubernetes cluster in a test environment. Before starting, you should already have a Mac or Linux environment for testing. In addition, although this tutorial mentions using minikube, the steps described should work in any Kubernetes cluster.
Requirements
- You need to have a license key (trial license or commercial license) for ScalarDB Cluster. If you don't have a license key, please contact us.
- You need to use ScalarDB Cluster 3.12 or later, which supports TLS.
What you'll create
In this tutorial, you'll deploy the following components on a Kubernetes cluster:
+----------------------------------------------------------------------------------------------------------------------------------------------------+
| [Kubernetes Cluster] |
| [Pod] [Pod] [Pod] |
| |
| +-------+ +------------------------+ |
| +---> | Envoy | ---+ +---> | ScalarDB Cluster node | ---+ |
| [Pod] | +-------+ | | +------------------------+ | |
| | | | | |
| +-----------+ +---------+ | +-------+ | +--------------------+ | +------------------------+ | +---------------+ |
| | Client | ---> | Service | ---+---> | Envoy | ---+---> | Service | ---+---> | ScalarDB Cluster node | ---+---> | PostgreSQL | |
| | (SQL CLI) | | (Envoy) | | +-------+ | | (ScalarDB Cluster) | | +------------------------+ | | (For ScalarDB Cluster) | |
| +-----------+ +---------+ | | +--------------------+ | | +---------------+ |
| | +-------+ | | +------------------------+ | |
| +---> | Envoy | ---+ +---> | ScalarDB Cluster node | ---+ |
| +-------+ +------------------------+ |
| |
+----------------------------------------------------------------------------------------------------------------------------------------------------+
You'll also create the following private key and certificate files for TLS connections.
+-------------------------------+
+---> | For Scalar Envoy |
| +-------------------------------+
| | envoy-key.pem |
| | envoy.pem |
+----------------------+ | +-------------------------------+
| Self-signed CA | ---(Sign certificates)---+
+----------------------+ | +-------------------------------+
| ca-key.pem | +---> | For ScalarDB Cluster |
| ca.pem | +-------------------------------+
+----------------------+ | scalardb-cluster-key.pem |
| scalardb-cluster.pem |
+-------------------------------+
You'll set each private key and certificate file as follows to enable TLS in each connection.
+--------------------------------+ +-----------------------------------------+ +-----------------------------------------+
| Client | ---(CRUD/SQL requests)---> | Envoy for ScalarDB Cluster | ---> | ScalarDB Cluster nodes |
+--------------------------------+ +-----------------------------------------+ +-----------------------------------------+
| ca.pem (to verify envoy.pem) | | envoy-key.pem | | scalardb-cluster-key.pem |
+--------------------------------+ | envoy.pem | | scalardb-cluster.pem |
| ca.pem (to verify scalardb-cluster.pem) | | ca.pem (used for health check) |
+-----------------------------------------+ +-----------------------------------------+
The following connections exist amongst the ScalarDB Cluster–related components:
- Client - Envoy for ScalarDB Cluster: When you execute a CRUD API or SQL API function, the client accesses Envoy for ScalarDB Cluster.
- Envoy for ScalarDB Cluster - ScalarDB Cluster: Envoy works as an L7 (gRPC) load balancer in front of ScalarDB Cluster.
- ScalarDB Cluster node - ScalarDB Cluster node: A ScalarDB Cluster node accesses other ScalarDB Cluster nodes. In other words, the cluster's internal communications exist amongst all ScalarDB Cluster nodes.
Step 1. Start a Kubernetes cluster and install tools
You need to prepare a Kubernetes cluster and install some tools (kubectl, helm, cfssl, and cfssljson). For more details on how to install them, see Getting Started with Scalar Helm Charts.
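If you're testing locally, one way to set this up is sketched below. This sketch assumes minikube and Homebrew; if you use another Kubernetes distribution or package manager, the commands and package names may differ.
# Start a local Kubernetes cluster (skip this if you already have one).
minikube start
# Install the tools; the cfssl formula provides both cfssl and cfssljson.
brew install kubectl helm cfssl
# Confirm that the tools are available.
kubectl version --client
helm version
cfssl version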
Step 2. Start the PostgreSQL containers
ScalarDB Cluster requires a backend database. In this tutorial, you'll use PostgreSQL.
You can deploy PostgreSQL on the Kubernetes cluster as follows:
-
Add the Bitnami helm repository.
helm repo add bitnami https://charts.bitnami.com/bitnami
-
Deploy PostgreSQL for ScalarDB Cluster.
helm install postgresql-scalardb-cluster bitnami/postgresql \
--set auth.postgresPassword=postgres \
--set primary.persistence.enabled=false \
  -n default
-
Check if the PostgreSQL containers are running.
kubectl get pod -n default
[Command execution result]
NAME READY STATUS RESTARTS AGE
postgresql-scalardb-cluster-0 1/1 Running 0 34s
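As an optional check, you can confirm that PostgreSQL accepts connections by running psql inside the pod. This sketch assumes the Bitnami PostgreSQL image, which includes the psql client.
# List the databases by using the postgres superuser password set above.
kubectl exec -it postgresql-scalardb-cluster-0 -n default -- env PGPASSWORD=postgres psql -U postgres -c '\l'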
Step 3. Create a working directory
You'll create some configuration files and private key and certificate files locally. Be sure to create a working directory for those files.
-
Create a working directory.
mkdir -p ${HOME}/scalardb-cluster-test/certs/
Step 4. Create private key and certificate files
You'll create private key and certificate files.
-
Change the working directory to ${HOME}/scalardb-cluster-test/certs/.
cd ${HOME}/scalardb-cluster-test/certs/
-
Create a JSON file that includes CA information.
cat << 'EOF' > ${HOME}/scalardb-cluster-test/certs/ca.json
{
"CN": "scalar-test-ca",
"key": {
"algo": "ecdsa",
"size": 256
},
"names": [
{
"C": "JP",
"ST": "Tokyo",
"L": "Shinjuku",
"O": "Scalar Test CA"
}
]
}
EOF
-
Create the CA private key and certificate files.
cfssl gencert -initca ca.json | cfssljson -bare ca
-
Create a JSON file that includes CA configurations.
cat << 'EOF' > ${HOME}/scalardb-cluster-test/certs/ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"scalar-test-ca": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
}
}
}
}
EOF
-
Create a JSON file that includes Envoy information.
cat << 'EOF' > ${HOME}/scalardb-cluster-test/certs/envoy.json
{
"CN": "scalar-envoy",
"hosts": [
"envoy.scalar.example.com",
"localhost"
],
"key": {
"algo": "ecdsa",
"size": 256
},
"names": [
{
"C": "JP",
"ST": "Tokyo",
"L": "Shinjuku",
"O": "Scalar Envoy Test"
}
]
}
EOF
-
Create a JSON file that includes ScalarDB Cluster information.
cat << 'EOF' > ${HOME}/scalardb-cluster-test/certs/scalardb-cluster.json
{
"CN": "scalardb-cluster",
"hosts": [
"cluster.scalardb.example.com",
"localhost"
],
"key": {
"algo": "ecdsa",
"size": 256
},
"names": [
{
"C": "JP",
"ST": "Tokyo",
"L": "Shinjuku",
"O": "ScalarDB Cluster Test"
}
]
}
EOF
-
Create private key and certificate files for Envoy.
cfssl gencert -ca ca.pem -ca-key ca-key.pem -config ca-config.json -profile scalar-test-ca envoy.json | cfssljson -bare envoy
-
Create private key and certificate files for ScalarDB Cluster.
cfssl gencert -ca ca.pem -ca-key ca-key.pem -config ca-config.json -profile scalar-test-ca scalardb-cluster.json | cfssljson -bare scalardb-cluster
-
Confirm that the private key and certificate files were created.
ls -1
[Command execution result]
ca-config.json
ca-key.pem
ca.csr
ca.json
ca.pem
envoy-key.pem
envoy.csr
envoy.json
envoy.pem
scalardb-cluster-key.pem
scalardb-cluster.csr
scalardb-cluster.json
scalardb-cluster.pem
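Optionally, you can inspect and verify the generated files with openssl before using them. The commands below assume a reasonably recent OpenSSL is installed locally.
# Show the subject and subject alternative names of the ScalarDB Cluster certificate.
openssl x509 -in scalardb-cluster.pem -noout -subject -ext subjectAltName
# Verify that both server certificates are signed by the self-signed CA.
openssl verify -CAfile ca.pem envoy.pem scalardb-cluster.pem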
Step 5. Deploy ScalarDB Cluster on the Kubernetes cluster by using Helm Charts
-
Add the Scalar Helm Charts repository.
helm repo add scalar-labs https://scalar-labs.github.io/helm-charts
-
Set your license key and certificate as environment variables. If you don't have a license key, please contact us. For details about the value of <CERT_PEM_FOR_YOUR_LICENSE_KEY>, see How to Configure a Product License Key.
SCALAR_DB_CLUSTER_LICENSE_KEY='<YOUR_LICENSE_KEY>'
SCALAR_DB_CLUSTER_LICENSE_CHECK_CERT_PEM='<CERT_PEM_FOR_YOUR_LICENSE_KEY>'
-
Create a custom values file for ScalarDB Cluster (scalardb-cluster-custom-values.yaml).
cat << 'EOF' > ${HOME}/scalardb-cluster-test/scalardb-cluster-custom-values.yaml
envoy:
  enabled: true
  tls:
    downstream:
      enabled: true
      certChainSecret: "envoy-tls-cert"
      privateKeySecret: "envoy-tls-key"
    upstream:
      enabled: true
      overrideAuthority: "cluster.scalardb.example.com"
      caRootCertSecret: "scalardb-cluster-tls-ca"
scalardbCluster:
  image:
    repository: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium"
  scalardbClusterNodeProperties: |
    ### Necessary configurations for deployment on Kubernetes
    scalar.db.cluster.membership.type=KUBERNETES
    scalar.db.cluster.membership.kubernetes.endpoint.namespace_name=${env:SCALAR_DB_CLUSTER_MEMBERSHIP_KUBERNETES_ENDPOINT_NAMESPACE_NAME}
    scalar.db.cluster.membership.kubernetes.endpoint.name=${env:SCALAR_DB_CLUSTER_MEMBERSHIP_KUBERNETES_ENDPOINT_NAME}
    ### Storage configurations
    scalar.db.contact_points=jdbc:postgresql://postgresql-scalardb-cluster.default.svc.cluster.local:5432/postgres
    scalar.db.username=${env:SCALAR_DB_CLUSTER_POSTGRES_USERNAME}
    scalar.db.password=${env:SCALAR_DB_CLUSTER_POSTGRES_PASSWORD}
    scalar.db.storage=jdbc
    ### SQL configurations
    scalar.db.sql.enabled=true
    ### Auth configurations
    scalar.db.cluster.auth.enabled=true
    scalar.db.cross_partition_scan.enabled=true
    ### TLS configurations
    scalar.db.cluster.tls.enabled=true
    scalar.db.cluster.tls.ca_root_cert_path=/tls/scalardb-cluster/certs/ca.crt
    scalar.db.cluster.node.tls.cert_chain_path=/tls/scalardb-cluster/certs/tls.crt
    scalar.db.cluster.node.tls.private_key_path=/tls/scalardb-cluster/certs/tls.key
    scalar.db.cluster.tls.override_authority=cluster.scalardb.example.com
    ### License key configurations
    scalar.db.cluster.node.licensing.license_key=${env:SCALAR_DB_CLUSTER_LICENSE_KEY}
    scalar.db.cluster.node.licensing.license_check_cert_pem=${env:SCALAR_DB_CLUSTER_LICENSE_CHECK_CERT_PEM}
  tls:
    enabled: true
    overrideAuthority: "cluster.scalardb.example.com"
    caRootCertSecret: "scalardb-cluster-tls-ca"
    certChainSecret: "scalardb-cluster-tls-cert"
    privateKeySecret: "scalardb-cluster-tls-key"
  secretName: "scalardb-credentials-secret"
EOF
-
Create a secret resource named scalardb-credentials-secret that includes credentials and license keys.
kubectl create secret generic scalardb-credentials-secret \
--from-literal=SCALAR_DB_CLUSTER_POSTGRES_USERNAME=postgres \
--from-literal=SCALAR_DB_CLUSTER_POSTGRES_PASSWORD=postgres \
--from-literal=SCALAR_DB_CLUSTER_LICENSE_KEY="${SCALAR_DB_CLUSTER_LICENSE_KEY}" \
--from-file=SCALAR_DB_CLUSTER_LICENSE_CHECK_CERT_PEM=<(echo ${SCALAR_DB_CLUSTER_LICENSE_CHECK_CERT_PEM} | sed 's/\\n/\
/g') \
  -n default
-
Create secret resources that include the private key and certificate files for Envoy.
kubectl create secret generic envoy-tls-cert --from-file=tls.crt=${HOME}/scalardb-cluster-test/certs/envoy.pem -n default
kubectl create secret generic envoy-tls-key --from-file=tls.key=${HOME}/scalardb-cluster-test/certs/envoy-key.pem -n default
-
Create secret resources that include the key, certificate, and CA certificate files for ScalarDB Cluster.
kubectl create secret generic scalardb-cluster-tls-ca --from-file=ca.crt=${HOME}/scalardb-cluster-test/certs/ca.pem -n default
kubectl create secret generic scalardb-cluster-tls-cert --from-file=tls.crt=${HOME}/scalardb-cluster-test/certs/scalardb-cluster.pem -n default
kubectl create secret generic scalardb-cluster-tls-key --from-file=tls.key=${HOME}/scalardb-cluster-test/certs/scalardb-cluster-key.pem -n default
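Optionally, you can confirm that the certificate files were stored correctly by decoding a secret and checking the certificate subject with openssl, as sketched below.
# Decode the certificate in the ScalarDB Cluster TLS secret and show its subject.
# (On older macOS versions, use `base64 -D` instead of `base64 -d`.)
kubectl get secret scalardb-cluster-tls-cert -n default -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject
# Do the same for the Envoy certificate.
kubectl get secret envoy-tls-cert -n default -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject
-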
Set the chart version of ScalarDB Cluster.
SCALAR_DB_CLUSTER_VERSION=3.12.2
SCALAR_DB_CLUSTER_CHART_VERSION=$(helm search repo scalar-labs/scalardb-cluster -l | grep -F "${SCALAR_DB_CLUSTER_VERSION}" | awk '{print $2}' | sort --version-sort -r | head -n 1)
-
Deploy ScalarDB Cluster.
helm install scalardb-cluster scalar-labs/scalardb-cluster -f ${HOME}/scalardb-cluster-test/scalardb-cluster-custom-values.yaml --version ${SCALAR_DB_CLUSTER_CHART_VERSION} -n default
-
Check if the ScalarDB Cluster pods are deployed.
kubectl get pod -n default
[Command execution result]
NAME READY STATUS RESTARTS AGE
postgresql-scalardb-cluster-0 1/1 Running 0 4m30s
scalardb-cluster-envoy-7cc948dfb-4rb8l 1/1 Running 0 18s
scalardb-cluster-envoy-7cc948dfb-hwt96 1/1 Running 0 18s
scalardb-cluster-envoy-7cc948dfb-rzbrx 1/1 Running 0 18s
scalardb-cluster-node-7c6959c79d-445kj 1/1 Running 0 18s
scalardb-cluster-node-7c6959c79d-4z54q 1/1 Running 0 18s
scalardb-cluster-node-7c6959c79d-vcv96 1/1 Running 0 18s
If the ScalarDB Cluster pods are deployed properly, the STATUS column for those pods will be displayed as Running.
-
Check if the ScalarDB Cluster services are deployed.
kubectl get svc -n default
[Command execution result]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h34m
postgresql-scalardb-cluster ClusterIP 10.96.92.27 <none> 5432/TCP 4m52s
postgresql-scalardb-cluster-hl ClusterIP None <none> 5432/TCP 4m52s
scalardb-cluster-envoy ClusterIP 10.96.250.175 <none> 60053/TCP 40s
scalardb-cluster-envoy-metrics ClusterIP 10.96.40.197 <none> 9001/TCP 40s
scalardb-cluster-headless ClusterIP None <none> 60053/TCP 40s
scalardb-cluster-metrics ClusterIP 10.96.199.135 <none> 9080/TCP 40s
If the ScalarDB Cluster services are deployed properly, you can see private IP addresses in the CLUSTER-IP column. The CLUSTER-IP values for postgresql-scalardb-cluster-hl and scalardb-cluster-headless are None since they have no IP addresses.
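As an optional check, you can confirm from your local machine that Envoy terminates TLS with the expected certificate. The sketch below uses the service name and port shown above together with kubectl port-forward and openssl s_client.
# Forward the Envoy service port to localhost in the background.
kubectl port-forward -n default svc/scalardb-cluster-envoy 60053:60053 &
PORT_FORWARD_PID=$!
# Fetch the certificate that Envoy presents and verify it against the self-signed CA.
# The server name must match the authority set in envoy.json.
openssl s_client -connect localhost:60053 -servername envoy.scalar.example.com -CAfile ${HOME}/scalardb-cluster-test/certs/ca.pem </dev/null
# Stop the port-forward.
kill ${PORT_FORWARD_PID}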
Step 6. Start a client container
You'll use the CA certificate file in a client container. Therefore, you'll need to create a secret resource and mount it to the client container.
-
Create a secret resource named client-ca-cert.
kubectl create secret generic client-ca-cert --from-file=ca.crt=${HOME}/scalardb-cluster-test/certs/ca.pem -n default
-
Create a manifest file for a client pod (scalardb-cluster-client-pod.yaml).
cat << 'EOF' > ${HOME}/scalardb-cluster-test/scalardb-cluster-client-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "scalardb-cluster-client"
spec:
  containers:
    - name: scalardb-cluster-client
      image: eclipse-temurin:8
      command: ['sleep']
      args: ['inf']
      env:
        - name: SCALAR_DB_CLUSTER_VERSION
          value: SCALAR_DB_CLUSTER_CLIENT_POD_SCALAR_DB_CLUSTER_VERSION
      volumeMounts:
        - name: "client-ca-cert"
          mountPath: "/certs/"
          readOnly: true
  volumes:
    - name: "client-ca-cert"
      secret:
        secretName: "client-ca-cert"
  restartPolicy: Never
EOF
-
Set the ScalarDB Cluster version in the manifest file.
sed -i s/SCALAR_DB_CLUSTER_CLIENT_POD_SCALAR_DB_CLUSTER_VERSION/${SCALAR_DB_CLUSTER_VERSION}/ ${HOME}/scalardb-cluster-test/scalardb-cluster-client-pod.yaml
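Note that -i without an argument is GNU sed syntax. If you're on macOS, which ships BSD sed, pass an empty backup suffix instead.
# macOS (BSD sed) equivalent of the command above.
sed -i '' s/SCALAR_DB_CLUSTER_CLIENT_POD_SCALAR_DB_CLUSTER_VERSION/${SCALAR_DB_CLUSTER_VERSION}/ ${HOME}/scalardb-cluster-test/scalardb-cluster-client-pod.yaml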
-
Deploy the client pod.
kubectl apply -f ${HOME}/scalardb-cluster-test/scalardb-cluster-client-pod.yaml -n default
-
Check if the client container is running.
kubectl get pod scalardb-cluster-client -n default
[Command execution result]
NAME READY STATUS RESTARTS AGE
scalardb-cluster-client 1/1 Running 0 26s
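You can also confirm that the CA certificate was mounted from the client-ca-cert secret as expected.
# The output should include ca.crt.
kubectl exec scalardb-cluster-client -n default -- ls /certs/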
Step 7. Run the ScalarDB Cluster SQL CLI in the client container
-
Run bash in the client container.
kubectl exec -it scalardb-cluster-client -n default -- bash
The commands in the following steps must be run in the client container.
-
Download the ScalarDB Cluster SQL CLI from Releases.
curl -OL https://github.com/scalar-labs/scalardb/releases/download/v${SCALAR_DB_CLUSTER_VERSION}/scalardb-cluster-sql-cli-${SCALAR_DB_CLUSTER_VERSION}-all.jar
-
Create a database.properties file and add configurations.
cat << 'EOF' > /database.properties
# ScalarDB Cluster configurations
scalar.db.sql.connection_mode=cluster
scalar.db.sql.cluster_mode.contact_points=indirect:scalardb-cluster-envoy.default.svc.cluster.local
# Auth configurations
scalar.db.cluster.auth.enabled=true
scalar.db.sql.cluster_mode.username=admin
scalar.db.sql.cluster_mode.password=admin
# TLS configurations
scalar.db.cluster.tls.enabled=true
scalar.db.cluster.tls.ca_root_cert_path=/certs/ca.crt
scalar.db.cluster.tls.override_authority=envoy.scalar.example.com
EOF
-
Run the ScalarDB Cluster SQL CLI.
java -jar /scalardb-cluster-sql-cli-${SCALAR_DB_CLUSTER_VERSION}-all.jar --config /database.properties
-
Create a sample namespace named ns.
CREATE NAMESPACE ns;
-
Create a sample table named tbl under the namespace ns.
CREATE TABLE ns.tbl (a INT, b INT, c INT, PRIMARY KEY(a, b));
-
Insert sample records.
INSERT INTO ns.tbl VALUES (1,2,3), (4,5,6), (7,8,9);
-
Select the sample records that you inserted.
SELECT * FROM ns.tbl;
[Command execution result]
0: scalardb> SELECT * FROM ns.tbl;
+---+---+---+
| a | b | c |
+---+---+---+
| 7 | 8 | 9 |
| 1 | 2 | 3 |
| 4 | 5 | 6 |
+---+---+---+
3 rows selected (0.059 seconds)
-
Press Ctrl + D to exit from the ScalarDB Cluster SQL CLI.
^D
-
Exit from the client container.
exit
Step 8. Delete all resources
After completing the ScalarDB Cluster tests on the Kubernetes cluster, remove all resources.
-
Uninstall ScalarDB Cluster and PostgreSQL.
helm uninstall -n default scalardb-cluster postgresql-scalardb-cluster
-
Remove the client container.
kubectl delete pod scalardb-cluster-client --grace-period 0 -n default
-
Remove the secret resources.
kubectl delete secrets scalardb-credentials-secret scalardb-cluster-tls-key scalardb-cluster-tls-cert scalardb-cluster-tls-ca envoy-tls-key envoy-tls-cert client-ca-cert -n default
-
Remove the working directory and sample files (configuration file, private key, and certificate).
cd ${HOME}
rm -rf ${HOME}/scalardb-cluster-test/
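If you started a dedicated minikube cluster only for this tutorial, you can also delete the whole cluster.
# Delete the local minikube cluster (only if it was created solely for this test).
minikube delete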
Further reading
You can see how to get started with monitoring or logging for Scalar products in the following tutorials: