Getting Started with Helm Charts (Monitoring using Prometheus Operator)
This document explains how to get started with monitoring Scalar products on Kubernetes using Prometheus Operator (kube-prometheus-stack). We assume that you already have a Mac or Linux environment for testing. We use minikube in this document, but the steps should work in any Kubernetes cluster.
What we create
We will deploy the following components on a Kubernetes cluster.
+--------------------------------------------------------------------------------------------------+
| +------------------------------------------------------+ +-----------------+ |
| | kube-prometheus-stack | | Scalar Products | |
| | | | | |
| | +--------------+ +--------------+ +--------------+ | -----(Monitor)----> | +-----------+ | |
| | | Prometheus | | Alertmanager | | Grafana | | | | ScalarDB | | |
| | +-------+------+ +------+-------+ +------+-------+ | | +-----------+ | |
| | | | | | | +-----------+ | |
| | +----------------+-----------------+ | | | ScalarDL | | |
| | | | | +-----------+ | |
| +--------------------------+---------------------------+ +-----------------+ |
| | |
| | Kubernetes |
+----------------------------+---------------------------------------------------------------------+
| <- expose to localhost (127.0.0.1) or use load balancer etc to access
|
(Access Dashboard through HTTP)
|
+----+----+
| Browser |
+---------+
Step 1. Start a Kubernetes cluster
First, you need to prepare a Kubernetes cluster. If you use a minikube environment, please refer to the Getting Started with Scalar Helm Charts document. If you have already started a Kubernetes cluster, you can skip this step.
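If you don't have a cluster yet and just want a quick local environment for this guide, a minimal sketch (assuming minikube and a container runtime such as Docker are already installed) is:

  # Start a local single-node Kubernetes cluster for testing
  minikube start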
Step 2. Prepare a custom values file
- Save the sample file scalar-prometheus-custom-values.yaml for kube-prometheus-stack.

- Add custom values in the scalar-prometheus-custom-values.yaml as follows.

  - Settings
    - prometheus.service.type to LoadBalancer
    - alertmanager.service.type to LoadBalancer
    - grafana.service.type to LoadBalancer
    - grafana.service.port to 3000

  - Example

    alertmanager:
      service:
        type: LoadBalancer
    ...
    grafana:
      service:
        type: LoadBalancer
        port: 3000
    ...
    prometheus:
      service:
        type: LoadBalancer
    ...

  - Note:
    - If you want to customize the Prometheus Operator deployment by using Helm Charts, you'll need to set the following configurations to monitor Scalar products:
      - Set serviceMonitorSelectorNilUsesHelmValues and ruleSelectorNilUsesHelmValues to false (true by default) so that Prometheus Operator can detect ServiceMonitor and PrometheusRule for Scalar products.
    - If you want to use Scalar Manager, you'll need to set the following configurations to enable Scalar Manager to collect CPU and memory resources:
      - Set kubeStateMetrics.enabled, nodeExporter.enabled, and kubelet.enabled to true.
    - If you want to use Scalar Manager, you'll need to set the following configurations to enable Scalar Manager to embed Grafana:
      - Set grafana.ini.security.allow_embedding and grafana.ini.auth.anonymous.enabled to true.
      - Set grafana.ini.auth.anonymous.org_name to the organization you are using. If you're using the sample custom values, the value is Main Org..
      - Set grafana.ini.auth.anonymous.org_role to Editor.
    - A sketch of how these notes can be expressed in the values file is shown after this list.
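The notes above can be expressed in scalar-prometheus-custom-values.yaml roughly as in the following sketch. This assumes the standard kube-prometheus-stack value layout (for example, that serviceMonitorSelectorNilUsesHelmValues and ruleSelectorNilUsesHelmValues live under prometheus.prometheusSpec and that Grafana settings are passed to the Grafana subchart); please check the values.yaml of your chart version for the exact keys.

  prometheus:
    prometheusSpec:
      # Let Prometheus Operator detect ServiceMonitor and PrometheusRule resources
      # for Scalar products (both settings default to true).
      serviceMonitorSelectorNilUsesHelmValues: false
      ruleSelectorNilUsesHelmValues: false

  # Required by Scalar Manager to collect CPU and memory metrics.
  kubeStateMetrics:
    enabled: true
  nodeExporter:
    enabled: true
  kubelet:
    enabled: true

  grafana:
    grafana.ini:
      security:
        # Allow Scalar Manager to embed Grafana dashboards.
        allow_embedding: true
      auth.anonymous:
        enabled: true
        org_name: "Main Org."
        org_role: Editor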
Step 3. Deploy kube-prometheus-stack
- Add the prometheus-community helm repository.

  helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

- Create a namespace monitoring on the Kubernetes cluster.

  kubectl create namespace monitoring

- Deploy the kube-prometheus-stack.

  helm install scalar-monitoring prometheus-community/kube-prometheus-stack -n monitoring -f scalar-prometheus-custom-values.yaml
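To confirm that the kube-prometheus-stack deployment succeeded, you can list the pods in the monitoring namespace; this is just a sanity check, and the exact pod names depend on the release name (scalar-monitoring in this guide).

  # All pods should eventually reach the Running state
  kubectl get pods -n monitoring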
Step 4. Deploy (or Upgrade) Scalar products using Helm Charts
- Note:
  - The following explains the minimum steps. If you want to know more details about the deployment of ScalarDB and ScalarDL, please refer to the Getting Started documents for each product.

- To enable Prometheus monitoring of Scalar products, set the following configurations to true in the custom values file.

  - Configurations
    - *.prometheusRule.enabled
    - *.grafanaDashboard.enabled
    - *.serviceMonitor.enabled

  - Sample configuration files
    - ScalarDB (scalardb-custom-values.yaml)

      envoy:
        prometheusRule:
          enabled: true
        grafanaDashboard:
          enabled: true
        serviceMonitor:
          enabled: true
      scalardb:
        prometheusRule:
          enabled: true
        grafanaDashboard:
          enabled: true
        serviceMonitor:
          enabled: true

    - ScalarDL Ledger (scalardl-ledger-custom-values.yaml)

      envoy:
        prometheusRule:
          enabled: true
        grafanaDashboard:
          enabled: true
        serviceMonitor:
          enabled: true
      ledger:
        prometheusRule:
          enabled: true
        grafanaDashboard:
          enabled: true
        serviceMonitor:
          enabled: true

    - ScalarDL Auditor (scalardl-auditor-custom-values.yaml)

      envoy:
        prometheusRule:
          enabled: true
        grafanaDashboard:
          enabled: true
        serviceMonitor:
          enabled: true
      auditor:
        prometheusRule:
          enabled: true
        grafanaDashboard:
          enabled: true
        serviceMonitor:
          enabled: true

- Deploy (or upgrade) Scalar products using Helm Charts with the above custom values files.

  - Examples
    - ScalarDB

      helm install scalardb scalar-labs/scalardb -f ./scalardb-custom-values.yaml
      helm upgrade scalardb scalar-labs/scalardb -f ./scalardb-custom-values.yaml

    - ScalarDL Ledger

      helm install scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml
      helm upgrade scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml

    - ScalarDL Auditor

      helm install scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml
      helm upgrade scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml
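If you want to confirm that the monitoring resources were created by the deployment above, one non-authoritative way is to list the custom resources that the Scalar charts create for Prometheus Operator (the namespace depends on where each chart was installed).

  # ServiceMonitor and PrometheusRule resources created for the Scalar products
  kubectl get servicemonitors,prometheusrules -A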
Step 5. Access Dashboards
If you use minikube
- To expose each service resource as your localhost (127.0.0.1), open another terminal, and run the minikube tunnel command.

  minikube tunnel

  After running the minikube tunnel command, you can see the EXTERNAL-IP of each service resource as 127.0.0.1.

  kubectl get svc -n monitoring scalar-monitoring-kube-pro-prometheus scalar-monitoring-kube-pro-alertmanager scalar-monitoring-grafana

  [Command execution result]

  NAME                                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
  scalar-monitoring-kube-pro-prometheus      LoadBalancer   10.98.11.12    127.0.0.1     9090:30550/TCP   26m
  scalar-monitoring-kube-pro-alertmanager    LoadBalancer   10.98.151.66   127.0.0.1     9093:31684/TCP   26m
  scalar-monitoring-grafana                  LoadBalancer   10.103.19.4    127.0.0.1     3000:31948/TCP   26m

- Access each dashboard.

  - Prometheus: http://localhost:9090/
  - Alertmanager: http://localhost:9093/
  - Grafana: http://localhost:3000/
    - Note:
      - You can get the Grafana user and password as follows.
        - user

          kubectl get secrets scalar-monitoring-grafana -n monitoring -o jsonpath='{.data.admin-user}' | base64 -d

        - password

          kubectl get secrets scalar-monitoring-grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d
If you use a Kubernetes cluster other than minikube
If you use a Kubernetes cluster other than minikube, you need to access each LoadBalancer service in the manner appropriate for your Kubernetes cluster, for example, by using a load balancer provided by your cloud service or by using the kubectl port-forward command.
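For example, a port-forward based approach (a sketch that assumes the service names shown in Step 5 and that kubectl can reach your cluster) could look like this:

  # Run each command in its own terminal; each one blocks while forwarding
  kubectl port-forward -n monitoring svc/scalar-monitoring-kube-pro-prometheus 9090:9090
  kubectl port-forward -n monitoring svc/scalar-monitoring-kube-pro-alertmanager 9093:9093
  kubectl port-forward -n monitoring svc/scalar-monitoring-grafana 3000:3000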
Step 6. Delete all resources
After completing the monitoring tests on the Kubernetes cluster, remove all resources.
- Terminate the minikube tunnel command (if you use minikube).

  Ctrl + C

- Uninstall kube-prometheus-stack.

  helm uninstall scalar-monitoring -n monitoring

- Delete minikube (optional / if you use minikube).

  minikube delete --all

  - Note:
    - If you have deployed ScalarDB or ScalarDL, you need to remove them before deleting minikube (see the example below).
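As a sketch, assuming the release names used in Step 4, the Scalar products can be removed with helm uninstall before deleting the cluster:

  # Adjust the release names and namespaces to match your deployment
  helm uninstall scalardb
  helm uninstall scalardl-ledger
  helm uninstall scalardl-auditor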