Kubernetes Dashboard - Deploy and Visualize your Kubernetes Cluster

March 18, 2024

The Kubernetes Dashboard is a web-based Kubernetes user interface. It has tons of features including the overview of the cluster, managing cluster resources, troubleshooting deployed applications, and health checking. Compared to operating with a command line interface, it provides a visual representation or a control panel of your cluster.

From an operational perspective, the Kubernetes Dashboard removes obstacles for people who are not familiar with the command line and allows them to participate in infrastructure support. For example, a first-tier support team can monitor the infrastructure through the dashboard and give instant responses.

Figure 1 Introduction | Landing page of Kubernetes Dashboard.

In this article, I will walk through some commonly used views and sub-views to show what you can get from the dashboard from a DevOps perspective. These are the kinds of views used daily in Kubernetes operations. There is also a section highlighting key features.

Installation

The Kubernetes Dashboard is not deployed by default during cluster creation. For the latest version, the official repository provides a Helm-based installation, which has been the only supported installation method since version 7.0.0. Older versions can still be installed from manifests. In this article, we will use the Helm-based installation.

To install:

  1. Use any SSH client to connect to your Kubernetes master node.
  2. Install `kubectl` and `helm` on the master node, if they have not been installed yet.
Figure 2 Installation | kubectl and helm version on the master node.
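You can verify that both tools are available by checking their versions (Figure 2 shows the expected output):
    kubectl version --client
    helm version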
  3. Add the Helm repository `kubernetes-dashboard`.
    helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
Figure 3 Installation | Add the Helm repository.
  4. Deploy a Helm release named `kubernetes-dashboard` using the `kubernetes-dashboard` chart. It will create a namespace and the corresponding resources on your cluster.
    helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Figure 4 Installation | Helm release installation.
Figure 5 Installation | Kubernetes workloads created by the Helm release.

At this point, you have deployed the Kubernetes Dashboard on your cluster. However, by default, the dashboard web service is of type `ClusterIP`, which means it is only accessible from within the cluster.

There are many ways to expose the dashboard service externally, such as port forwarding, changing the service type to `NodePort` and accessing it through a node port, or setting up an `Ingress`. Choose the approach that fits your use case.
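
A quick, temporary option is port forwarding. As a sketch, assuming the chart's proxy service is named `kubernetes-dashboard-kong-proxy` and serves HTTPS on port 443 (you can confirm with `kubectl get svc -n kubernetes-dashboard`):

kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

The dashboard is then reachable at `https://localhost:8443` for as long as the command runs.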

In our demo, we will change the service `kubernetes-dashboard-kong-proxy` from `ClusterIP` to `NodePort` and enable its HTTP connection. We can then access the dashboard at `<Master node IP>:<NodePort>`.

  5. Create a YAML file named `values.yaml` with the following content (see Figure 6; a sketch of such a file is shown below).
Figure 6 Installation | Values to pass for Helm Upgrade.
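The exact keys depend on the chart version, so treat the following as a rough, unverified sketch (it assumes the chart's bundled Kong proxy exposes these settings) rather than a copy of Figure 6:
    kong:
      proxy:
        type: NodePort    # expose the proxy on a node port instead of ClusterIP
        http:
          enabled: true   # allow plain-HTTP access to the dashboard proxy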
  6. Upgrade the Helm release with the YAML file we created in step 5. It will update the existing service.
    helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -f values.yaml -n kubernetes-dashboard
Figure 7 Installation | Upgrade the Helm release with values.yaml.
Figure 8 Installation | Updated service type and exposed with a node port.
Figure 9 Installation | Access dashboard through node port 32559.
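To find the node port that was assigned on your cluster, you can inspect the service:
    kubectl -n kubernetes-dashboard get svc kubernetes-dashboard-kong-proxy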

You can now see the sign-in page of the Kubernetes Dashboard. It uses Kubernetes authentication: you log in with a bearer token generated for a service account. Therefore, we need to create a service account, a cluster role binding, and a secret, and then get the token from the secret.

  7. Create a YAML file named `user.yaml` with the following content. It creates a service account, binds it to the `cluster-admin` role, and creates a long-lived token secret for it.
Figure 10 Installation | Necessary resources for dashboard authentication.
# Service account used to sign in to the dashboard
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Bind the service account to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
# Long-lived token secret for the service account
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
  8. Create the resources by applying `user.yaml`.

kubectl apply -f user.yaml

Figure 11 Installation | Create resources by applying 'user.yaml'.

  9. Get the access token from the newly created secret.

kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath="{.data.token}" | base64 -d

Figure 12 Installation | Generate a bearer token.
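Alternatively, if your cluster runs Kubernetes 1.24 or later, you can request a short-lived token for the service account directly, without reading it from the secret:
    kubectl -n kubernetes-dashboard create token admin-user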
  10. Use the token generated in step 9 to log in to the dashboard.
Figure 13 Installation | Login using the bearer token.
Figure 14 Installation | Dashboard landing page.

Overview

There are five groups of views on the dashboard: Workloads, Service, Config and Storage, Cluster, and Custom Resource Definitions.

Workloads

The Workloads view provides an overview of the applications deployed on the cluster in their various forms. Its sub-views include Deployments, Pods, Daemon Sets, and more. Under each sub-view, you can see metrics for the running workloads, such as CPU and memory usage.

Take Deployments as an example:

Figure 15 Workloads Overview | Deployment sub-view.

The upper part shows CPU and memory usage for the pods belonging to the deployments in the current namespace selection; the charts cover roughly the most recent 10 to 15 minutes of data. The lower part lists the deployments in the current namespace selection, with key information about each one, such as its name, image, number of running pods, and labels.

You can also click on a specific deployment to see its detailed overview, which shows the metadata, deployment strategy, pod status, conditions, and attached replica sets. The information here updates in real time as changes occur.

Figure 16 Workloads Overview | Deployment detailed overview.
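
If you also work from the command line, roughly the same information can be retrieved with `kubectl`; for example (substitute your own deployment name and namespace):

kubectl get deployments -n default
kubectl describe deployment my-deployment -n default
kubectl top pods -n default    # CPU/memory metrics; requires the metrics-server add-on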

Service

The Service view provides an overview of how applications are exposed and routed at the network level. The sub-views are Ingresses, Ingress Classes, and Services, each showing a list of resources with their core information.

Using the Services sub-view as an example, you can get the name, labels, service type, cluster IP, and internal and external endpoints. 

Figure 17 Service Overview | Services sub-view.

You can click any Service to see its dedicated overview, which shows the metadata, service information, and the attached pods. It is one of the most useful views on the dashboard because it makes the relationship between a service and its pods easy to see, which helps verify that the pod selector configuration works correctly.

Figure 18 Service Overview | Service detailed overview.
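
The command-line counterpart for checking that a selector matches the intended pods is to inspect the service's endpoints, for example (names are placeholders):

kubectl describe service my-service -n default    # shows the Selector and Endpoints fields
kubectl get endpoints my-service -n default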

Config and Storage

This view provides an overview of Config Maps, Persistent Volume Claims, Secrets, and Storage Classes. You can find every configuration and storage item here.

The Secrets sub-view is a list of secrets under the current namespace selection, and it simply shows the name, labels, and the secret type. 

Figure 19 Config and Storage Overview | Secrets sub-view.

As with other sub-views, you can click on any secret to see its detailed information. Take the secret `admin-user` that we created in the Installation section as an example: all of its data is shown hidden by default. After expanding the token field, you will see the same value that we retrieved with `kubectl get secret admin-user`.

Figure 20 Config and Storage Overview | Secret `admin-user` detailed overview.

Figure 21 Config and Storage Overview | Expanded token in secret `admin-user`.

Cluster

The Cluster view is the overview of ten cluster-level resources. They are Cluster Role Bindings, Cluster Roles, Events, Namespaces, Network Policies, Nodes, Persistent Volumes, Role Bindings, Roles, and Service Accounts.

We will use the Nodes and Namespaces sub-views as examples.

The Namespaces sub-view shows all the namespaces in the cluster. The page is simple but gives you a quick view of the different groups of resources.

Figure 22 Cluster Overview | Namespace sub-view.

By clicking on any one of the namespaces, you will see more information, such as namespace-level resource quotas and limits.

Figure 23 Cluster Overview | Namespace detailed overview.
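
The same information is available from the command line with, for example:

kubectl describe namespace kubernetes-dashboard    # includes resource quotas and resource limits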

Additionally, if you would like a general overview of resources at the namespace level, you can select a specific namespace from the pull-down menu in the dashboard header.

Figure 24 Cluster Overview | Namespace selection.

The Nodes sub-view lists all the master and worker nodes in the cluster. It shows each node's name, labels, CPU and memory requests, limits, and capacity, and the current number of running pods. Like the Deployments sub-view under Workloads, it has a metrics section at the top and the node list below.

Figure 25 Cluster Overview | Node sub-view.

In the detailed overview of a specific node, you can see all its basic information, metrics, resource allocation, node conditions, and the list of running pods.

Figure 26 Cluster Overview | Node detailed overview.
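
This view is roughly the visual equivalent of the following commands (replace `my-node` with one of your node names):

kubectl get nodes -o wide
kubectl describe node my-node    # capacity, allocated resources, conditions, and running pods
kubectl top node my-node         # live CPU/memory usage; requires the metrics-server add-on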

Key features

The Kubernetes Dashboard is not only a display dashboard but also a portal for interacting with your cluster. In this section, I will show some key features that benefit daily Kubernetes operations, whether you are a DevOps engineer, a site reliability engineer, or a software engineer.

Quick creation

You can create Kubernetes resources from the portal by pasting YAML or JSON content or by uploading a YAML or JSON file. You can also create a deployment directly by filling out a form.

Figure 27 Quick Creation | Providing YAML or JSON content.
Figure 28 Quick Creation | Uploading YAML or JSON file.
Figure 29 Quick Creation | Creating from form.
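
As an illustration (this nginx manifest is my own example, not taken from the figures), the content you paste or upload could be a minimal Deployment like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80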

We will try to create a deployment using the form.

Figure 30 Quick Creation | Create a deployment using the form.
Figure 31 Quick Creation | Preview the deployment in YAML format.

You can deploy it by clicking the Deploy button. It will redirect you to the Workloads Overview page.

Figure 32 Quick Creation | Deployment created.

Logs tailing

Logs are among the most important things we need from a Kubernetes cluster, for bug tracking, performance checking, and many other purposes. The Kubernetes Dashboard provides a feature on every Pod's detailed overview page that is equivalent to the `kubectl logs` command.

Figure 33 Logs tailing | Start tailing the log.

Once you click the button, you will see a black console panel showing the pod logs in live mode.

Figure 34 Logs tailing | Pod logs in live mode.
Figure 35 Logs tailing | Logs of other containers or download the logs.

If your pod has multiple containers, you can tail the logs of a specific container by selecting it from the pull-down list at (1) in Figure 35, or download the logs using the button at (2) in Figure 35.
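
The equivalent `kubectl` commands look like this (pod, container, and namespace names are placeholders):

kubectl logs -f my-pod -n default                    # follow the logs live
kubectl logs -f my-pod -c my-container -n default    # follow a specific container
kubectl logs my-pod -n default > my-pod.log          # save the logs to a file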

Pod interacting

In some cases, you may need to exec into a pod for testing or a temporary modification. The Kubernetes Dashboard lets you interact with the selected pod directly through the browser, so you do not need an SSH client or a configured cluster context; it is equivalent to the `kubectl exec` command.

Figure 36 Pod interacting | Exec into the pod.

Once you click the button, you will see a shell box where you can run commands. If you want to discard the changes made to the Pod from the shell, simply delete the Pod and let it be recreated, provided that the Pod is controlled by a Replica Set.

Figure 37 Pod interacting | Shell box.
Figure 38 Pod interacting | Using Shell mode.
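
From the command line, the same interaction would look roughly like this (pod and namespace names are placeholders):

kubectl exec -it my-pod -n default -- /bin/sh    # open an interactive shell in the pod
kubectl delete pod my-pod -n default             # discard changes; the Replica Set recreates the pod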

Conclusion

You have learned how to use the Kubernetes Dashboard to simplify your daily duties as a Kubernetes cluster administrator. The dashboard is equivalent to many `kubectl` commands, but in a visual way. Dig into the dashboard and find the best way to use it; it may bring more benefits than you expect.