Deploy a Hosted Cluster

Introduction

This first section of the lab introduces the basics of using Red Hat Advanced Cluster Management for Kubernetes (RHACM) and the Multicluster Engine operator to deploy an OpenShift on OpenShift cluster using Hosted Control Planes as the provider.

Goals

  • Deploy a Hosted Control Planes-based OpenShift cluster.

  • Review the hosted cluster environment.

As a reminder, here are your credentials for the OpenShift Console:

Your OpenShift cluster console is available sample_openshift_cluster_console_url[here^].

Administrator login is available with:

  • User: sample_openshift_cluster_admin_username

  • Password: sample_openshift_cluster_admin_password
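
If you prefer a terminal, you can also log in with the oc CLI from the bastion host. A minimal sketch, assuming the API endpoint follows the usual api.<cluster-domain>:6443 pattern (substitute the real URL and the credentials above):

    # Log in to the hosting cluster (replace the placeholders
    # with the API URL and credentials above)
    oc login https://api.<cluster-domain>:6443 \
        -u <admin-username> -p <admin-password>

    # Confirm the login worked
    oc whoami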

Prerequisites for Deployment

  1. Click on the link above to launch the OpenShift console in a new browser tab.

    console login
  2. When you log in, you will be presented with a pop-up window that promotes the ease of managing clusters with RHACM. Click the x in the corner to close the window.

    cluster create popup
  3. Your initial landing page will be the ACM default view for All Clusters. Currently, the only cluster being managed is the local-cluster that we are running on.

  4. Click on the local-cluster to find out more information about it.

    acm default window
  5. The Overview tab for the local cluster shows the status of the environment, the console URL for direct login, details about the hosting environment, the current release, and the number of nodes and applications currently deployed on the cluster.

    local cluster overview
    For the purpose of this lab, please note that we are hosting the environment in Google Cloud. This is not a supported environment for production use at this time. For production use, please see the release notes available here.
  6. Click on the Add-ons tab. In the list you will see the hypershift-addon, which is required for the deployment of Hosted Control Planes.

    local cluster addons
  7. Now click on Credentials in the left-side menu. You should see that there is a credential available called kubevirt-secret. This secret contains both a pull secret and a public SSH key from the bastion host, which will allow you to deploy the hosted cluster and manage it once it is deployed.

    view credentials
  8. When you are done viewing the credentials, click on Infrastructure in the left-side menu and then Clusters to return to the Clusters view, where we will begin the next section.

    infrastructure clusters
  9. One additional action we need to perform is to create a project into which our hosted clusters will be deployed. Click on the All Clusters menu item at the top and select local-cluster from the drop-down menu.

    all clusters dropdown
  10. You will be presented with the Overview screen for the hosting cluster.

    hosting cluster overview
  11. In the left-side menu, click on Home, then Projects, and then the blue Create Project button at the top of the screen.

    create project screen
  12. You will be presented with a Create Project window. Name your project clusters and click the blue Create button. (A CLI equivalent is sketched after this list.)

    create project screen
  13. With this complete, we can return to our RHACM console by clicking on the local-cluster menu item at the top and selecting All Clusters from the drop-down menu.

    return to acm
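
For reference, the clusters project created in step 12 can also be created from a terminal on the hosting cluster. A minimal sketch (the lab itself only requires the console steps above):

    # Create the project that will hold the hosted cluster resources
    oc new-project clusters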

Deploy Cluster

  1. Click on the blue Create cluster button to begin the deployment process.

    create cluster
  2. You will be presented with the Infrastructure window. Here you are given a choice of cloud providers where you can deploy your OpenShift cluster. Find the tile for Red Hat OpenShift Virtualization and click on it.

    infrastructure provider
  3. This will take you to a page titled Control plane type - OpenShift Virtualization. There is a tile here called Hosted, which mentions the benefits of using Hosted Control Planes to deploy your OpenShift cluster. Click it.

    control plane type hosted
  4. This will bring you to the Create cluster window where you will have a number of options to fill out for creating your new hosted cluster.

    5. Please fill out the form with the following details (an equivalent hcp CLI invocation is sketched after this list):

    1. Infrastructure provider credential: kubevirt-secret

    2. Cluster name: my-hosted-cluster

    3. Cluster set: default

    4. Release image: 4.16.z

    5. Etcd storage class: ocs-storagecluster-ceph-rbd

  6. Click on Next when you have completed filling out the options.

    create cluster details
  7. You will be presented with the page to configure your initial cluster node pools.

  8. Please fill out the form with the following details:

    1. Node pool name: my-node-pool

    2. Node pool replica: 2

    3. Core: 2

    4. Memory (GiB): 8

    5. Auto repair: True

  9. Click on Next when you have completed filling out the options.

    create node pools
  10. The next screen allows you to review the deployment configuration, summarizing the Cluster details and the Node pools that you have configured. If all looks good upon review, click the Create button to begin provisioning your hosted cluster.

    create cluster review
  11. You will see a message that the cluster is starting deployment, and then you will be taken to the overview page for your hosted cluster. Please be patient while the cluster deploys.

    hosted cluster deploying
  12. After 15-20 minutes, the cluster deployment will be complete and you can start to explore your hosted cluster.

    hosted cluster deploy complete
    Because we are hosting this lab in Google Cloud, you will see two items without a green check once the hosted cluster deployment is complete. Neither item will adversely affect this lab going forward, but be sure to follow the recommended configuration guides when deploying a cluster for production or your own testing use.
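
For reference, the same deployment can be driven entirely from a terminal with the hcp CLI. A minimal sketch that mirrors the form values above; the pull-secret and SSH key file paths are assumptions, and flag names can vary between versions, so check hcp create cluster kubevirt --help before running it:

    # Create a hosted cluster on OpenShift Virtualization (KubeVirt),
    # mirroring the console form filled out above
    hcp create cluster kubevirt \
        --name my-hosted-cluster \
        --namespace clusters \
        --release-image <4.16.z-release-image> \
        --node-pool-replicas 2 \
        --cores 2 \
        --memory 8Gi \
        --etcd-storage-class ocs-storagecluster-ceph-rbd \
        --pull-secret ./pull-secret.json \
        --ssh-key ~/.ssh/id_rsa.pub

    # Watch the HostedCluster resource progress on the hosting cluster
    oc get hostedcluster -n clusters -w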

Explore the Cluster

With our cluster deployed, we can start exploring by scrolling down the page. There we are presented with information about the cluster itself, including how to log in, the node pools where application workloads will run, and the pods that support the control plane. Come on, let's explore.

Control Plane Pods

In the value proposition section, it was brought to your attention that there are no dedicated control plane nodes in a hosted OpenShift environment. All of the control plane processes run within containers, within a project on the hosting cluster. Let us take some time to explore these pods.
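
Before clicking through the console, here is roughly how the same pods look from the hosting cluster CLI. A minimal sketch, assuming the default clusters namespace and the cluster name my-hosted-cluster used in this lab (the control plane lands in a project named <namespace>-<cluster-name>):

    # List the hosted control plane pods for my-hosted-cluster
    oc get pods -n clusters-my-hosted-cluster

    # List the controllers (Deployments and StatefulSets) behind them
    oc get deployments,statefulsets -n clusters-my-hosted-cluster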

  1. Click on the Control plane pods link provided near the bottom of the deployment summary and above the Cluster node pools section.

    control plane pods
  2. This will launch a new tab and take you to the project clusters-my-hosted-cluster, assuming that is what you named your cluster when we deployed it.

    pods my hosted cluster
  3. The pods here mostly belong to ReplicaSets for the processes that run the hosted cluster control plane.

  4. Scroll down until you see the etcd pods. What do you notice that is different about them?

    etcd pods
  5. Notice that they are managed by a StatefulSet. Click on the etcd-0 pod to see more.

  6. When you click on the pod, you will be brought to its details page. Notice that you can see the node it is assigned to. Looking at the other etcd pods will show that they are assigned to different nodes through anti-affinity rules. There is also a PodDisruptionBudget set so that the maximum number unavailable at any time is 1; losing more than that would make the cluster unavailable.

    etcd0 details
  7. Press the back button on your browser to return to the list of pods available in the clusters-my-hosted-cluster project.

  8. Scroll to the very bottom of this list. There are two more pods that stand out. What do you notice that is different about them?

    nodepool pods
  9. These are the VirtualMachineInstance pods that represent the two virtual worker nodes in our node pool. Notice that the amount of reserved memory is much larger than that of any other pod in this project.

  10. Click on one of the pods to bring up its details page. Notice that, like etcd, you can see the node it is assigned to, as well as its pod disruption budget. You will also find a node selector used to find nodes that can host VMs, as well as a difference in the restart policy. (These objects can also be inspected from the CLI, as sketched after this list.)

    vmi pods
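
The objects called out in the steps above can also be inspected from the hosting cluster CLI. A minimal sketch, assuming the names used in this lab:

    # etcd for the hosted cluster runs as a StatefulSet
    oc get statefulset etcd -n clusters-my-hosted-cluster

    # The PodDisruptionBudgets, including the one capping
    # unavailable etcd pods at 1
    oc get pdb -n clusters-my-hosted-cluster

    # The VirtualMachineInstances backing the two worker nodes
    oc get vmi -n clusters-my-hosted-cluster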

Once you are done exploring the pods running in the clusters-my-hosted-cluster project, close out that browser tab to return to the overview for my-hosted-cluster.

Cluster Details and Status

The next major section of the overview page provides details about the cluster we deployed. This includes the status of the cluster, details about its infrastructure, and where we can manage future operations like upgrades. It also includes the information we need to access the cluster: the cluster API address, the console URL we can use to log in, and the credentials to do so, conveniently hidden for security's sake.

hosted cluster details
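
The same details can be read from the hosting cluster CLI. A minimal sketch, assuming the HostedCluster lives in the clusters namespace and the generated password follows the usual <cluster-name>-kubeadmin-password secret naming (verify with oc get secrets -n clusters if yours differs):

    # Status, version, and progress of the hosted cluster
    oc get hostedcluster my-hosted-cluster -n clusters

    # Reveal the generated kubeadmin password
    oc extract secret/my-hosted-cluster-kubeadmin-password \
        -n clusters --to=-
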
  1. Click the eye icon to reveal the credentials; it will provide the information for the kubeadmin user and a randomly generated password.

    reveal creds
  2. Click on the copy icon next to the password to copy it to the clipboard, then click on the Console URL right above it.

    copy password
  3. A new tab will open. You will first be presented with some security certificate prompts. Click the Advanced button, and then the link it displays, to bypass these warnings.

    browser security
    You may have to do this more than once.
  4. Once you bypass the certificate prompts, you will be presented with the login page for the OpenShift console. Here you will use the username kubeadmin and the password that we copied a few moments ago to log in. (A CLI alternative using a generated kubeconfig is sketched after this list.)

    hosted cluster login prompt
  5. You will be presented with the OpenShift console home page, just as if you were logging into any other OpenShift cluster. You can see that the environment was newly provisioned as there are 59 days left on a default self-support trial, and that the infrastructure provider is listed as KubeVirt. You may continue to explore the environment at your leisure. When you are finished, simply close the tab to return to the remaining lab tasks.

    hosted cluster console home
  6. Now that you are back on the hosted cluster overview, scroll further down the page to the Status section, where you will see a summary describing the number of nodes and applications currently running in the cluster.

    hosted cluster status
  7. Click on the large number above applications; it will bring you to a page that shows all of the applications currently running in the cluster. At this point they are all administrative applications responsible for providing cluster services. You can also use the blue Create application button to configure centralized application deployments from ACM to your hosted cluster fleet using ArgoCD or a subscription model.

    hosted cluster applications
  8. When you are done exploring this page, click the back button on your browser to return to the my-hosted-cluster overview.
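
As mentioned in step 4, you can also reach the hosted cluster from a terminal instead of the console. A minimal sketch, assuming the hcp binary is available on a host that is logged in to the hosting cluster:

    # Generate a kubeconfig for the hosted cluster
    hcp create kubeconfig --name my-hosted-cluster \
        > my-hosted-cluster.kubeconfig

    # Query the hosted cluster directly with it
    oc --kubeconfig my-hosted-cluster.kubeconfig get nodes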

Nodes and Node Pools

  1. From the my-hosted-cluster overview page, click on the Nodes tab at the top.

    cluster overview nodes tab
  2. You will be taken to a page that shows the nodes in your node pool and some basic information about their resources. If you recall, we configured two during the deployment. Click on the first cluster node to explore it further.

    cluster overview nodes tab
  3. A new tab will open and bring you to the compute section of your hosted cluster where you can see the node details.

    hosted cluster node details
  4. Close this tab and return to the my-hosted-cluster overview page. If you scroll about halfway down the page, you can see the Control plane pods link we explored earlier and the Cluster node pools section. Click on the three-dot menu on the right, and select the option for Manage node pool.

    manage cluster nodepools
  5. You will be presented with a Manage node pool pop-up window where you can manually scale the node pool to more than 2 nodes. Close this window when you are done viewing it.

    scale up nodepool
Do not scale the cluster at this time! We will work with node pools later in the lab.
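
For later reference only, scaling is also a single CLI operation against the NodePool resource on the hosting cluster. A minimal sketch, assuming the names used in this lab:

    # List the node pools for the hosted clusters
    oc get nodepool -n clusters

    # Example only; do NOT run this during the lab,
    # we work with node pools later
    oc scale nodepool my-node-pool -n clusters --replicas=3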

Summary

In this module of our lab we have deployed a hosted OpenShift cluster on OpenShift by using Hosted Control Planes. We explored the pods that make up the control plane, and the virtual machines that make up the worker node pool.