Steps
1. Log into the vSphere Client, select "Menu -> Workload Management", and click "Enable"
Full details on how to enable and set up the Supervisor Cluster can be found in the following docs
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html
Make sure you enable Harbor as the Registry using the link below
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html
A prerequisite for Workload Management is to have NSX-T 3.0 installed and enabled. https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html
Once that is all done, the "Workload Management" page will look like this. Enablement can take around 30 minutes to complete
2. As a vSphere administrator, you can create namespaces on a Supervisor Cluster and configure them with resource quotas and storage, as well as set permissions for DevOps engineer users. Once you configure a namespace, you can provide it to DevOps engineers, who run vSphere Pods and Kubernetes clusters created through the VMware Tanzu™ Kubernetes Grid™ Service.
To do this, follow the link below
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-1544C9FE-0B23-434E-B823-C59EFC2F7309.html
Note: Make a note of this namespace, as we are going to need to connect to it shortly. In the examples below we have a namespace called "ns1"
3. With a vSphere namespace created, we can now download the required CLI tools
Note: You can get the files from the Namespace summary page as shown below under the heading "Link to CLI Tools"
Once downloaded, put the contents of the .zip file in your OS's executable search path
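For example, on macOS or Linux the steps look roughly like this. This is a sketch only: the .zip name and the bin directory layout are assumptions based on what the download contained at the time of writing, and /usr/local/bin is just one possible target on your PATH.
$ unzip vsphere-plugin.zip
$ sudo mv bin/kubectl bin/kubectl-vsphere /usr/local/bin/   # the zip ships kubectl plus the kubectl-vsphere plugin
$ chmod +x /usr/local/bin/kubectl /usr/local/bin/kubectl-vsphere
$ kubectl vsphere --help   # verify the plugin is found on your PATH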
4. Now we are ready to log in. To do that we will use a command as follows
kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS --vsphere-username VCENTER-SSO-USER
Example:
$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local
Password:
Logged in successfully.
You have access to the following contexts:
ns1
wcp.haas-yyy.pez.pivotal.io
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
Full instructions are at the following URL
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-F5114388-1838-4B3B-8A8D-4AE17F33526A.html
5. At this point we need to switch to the namespace we created in step 2, which is "ns1"
$ kubectl config use-context ns1
Switched to context "ns1".
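To double-check which contexts are available and which one is active (marked with an asterisk), standard kubectl works here:
$ kubectl config get-contexts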
6. Get a list of the available content images and the Kubernetes version each image provides
Command: kubectl get virtualmachineimages
$ kubectl get virtualmachineimages
NAME                                                         AGE
ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd    35m
Version Information can be retrieved as follows:
$ kubectl describe virtualmachineimage ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Name:         ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Namespace:
Labels:       <none>
Annotations:  vmware-system.compatibilityoffering:
                [{"requires": {"k8s.io/configmap": [{"predicate": {"operation": "anyOf", "arguments": [{"operation": "not", "arguments": [{"operation": "i...
              vmware-system.guest.kubernetes.addons.calico:
                {"type": "inline", "value": "---\n# Source: calico/templates/calico-config.yaml\n# This ConfigMap is used to configure a self-hosted Calic...
              vmware-system.guest.kubernetes.addons.pvcsi:
                {"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: {{ .PVCSINamespace }}\n---\nkind: ServiceAccount\napiVers...
              vmware-system.guest.kubernetes.addons.vmware-guest-cluster:
                {"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: vmware-system-cloud-provider\n---\napiVersion: v1\nkind: ...
              vmware-system.guest.kubernetes.distribution.image.version:
                {"kubernetes": {"version": "1.16.8+vmware.1", "imageRepository": "vmware.io"}, "compatibility-7.0.0.10100": {"isCompatible": "true"}, "dis...
API Version:  vmoperator.vmware.com/v1alpha1
Kind:         VirtualMachineImage
Metadata:
  Creation Timestamp:  2020-04-22T04:52:42Z
  Generation:          1
  Resource Version:    28324
  Self Link:           /apis/vmoperator.vmware.com/v1alpha1/virtualmachineimages/ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
  UID:                 9b2a8248-d315-4b50-806f-f135459801a8
Spec:
  Image Source Type:  Content Library
  Type:               ovf
Events:               <none>
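If all you want is the image names with their distribution version strings, a rough shortcut is to grep the describe output for the version annotation. This is just a convenience sketch using standard kubectl and grep:
$ kubectl describe virtualmachineimages | grep -E '^Name:|"version"'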
7. Create a YAML file with the required configuration parameters to define the cluster
A few things to note:
- Make sure your storageClass name matches the storage class name you used during setup
- Make sure your distribution version matches a name from the output of step 6
Both can be verified with the commands shown just after the YAML below
apiVersion: run.tanzu.vmware.com/v1alpha1   #TKG API endpoint
kind: TanzuKubernetesCluster                #required parameter
metadata:
  name: tkg-cluster-1                       #cluster name, user defined
  namespace: ns1                            #supervisor namespace
spec:
  distribution:
    version: v1.16                          #resolved kubernetes version
  topology:
    controlPlane:
      count: 1                              #number of control plane nodes
      class: best-effort-small              #vmclass for control plane nodes
      storageClass: pacific-gold-storage-policy   #storageclass for control plane
    workers:
      count: 3                              #number of worker nodes
      class: best-effort-small              #vmclass for worker nodes
      storageClass: pacific-gold-storage-policy   #storageclass for worker nodes
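To check the two notes above, both VM classes and storage classes are visible as Kubernetes resources while your context is still "ns1". Assuming your Supervisor namespace is configured as in step 2:
$ kubectl get virtualmachineclasses
$ kubectl get storageclasses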
More information on what goes into your YAML is defined here
Configuration Parameters for Provisioning Tanzu Kubernetes Clusters
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-4E68C7F2-C948-489A-A909-C7A1F3DC545F.html
8. Provision the Tanzu Kubernetes cluster using the following kubectl command against the manifest file above
Command: kubectl apply -f CLUSTER-NAME.yaml
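For example, assuming the manifest above was saved as tkg-cluster-1.yaml (the file name is arbitrary), kubectl echoes the resource it created:
$ kubectl apply -f tkg-cluster-1.yaml
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 created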
While the cluster is being created you can check its status as follows
Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME                                                        CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   15m   running

NAME                                     PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1   provisioned

NAME                                                                   PROVIDERID                                       PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7             vsphere://420c7807-d2f2-0461-8232-ec33e07632fa   running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp                                                    provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm                                                    provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c                                                    provisioning

NAME                                                                                AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7              14m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp    6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm    6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c    6m4s
9. Run the following command and make sure the Tanzu Kubernetes cluster is running; this may take some time
Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME                                                        CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   18m   running

NAME                                     PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1   provisioned

NAME                                                                   PROVIDERID                                       PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7             vsphere://420c7807-d2f2-0461-8232-ec33e07632fa   running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp   vsphere://420ca6ec-9793-7f23-2cd9-67b46c4cc49d   provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm   vsphere://420c9dd0-4fee-deb1-5673-dabc52b822ca   provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c   vsphere://420cf11f-24e4-83dd-be10-7c87e5486f1c   provisioned

NAME                                                                                AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7              18m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp    9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm    9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c    9m59s
10. For a more concise view of which Tanzu Kubernetes Clusters you have and their status, this command is all you need
Command: kubectl get tanzukubernetescluster
$ kubectl get tanzukubernetescluster
NAME            CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   20m   running
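If you need more detail on a single cluster (conditions, node status, addons), describe works here just as it does for any other Kubernetes resource:
Command: kubectl describe tanzukubernetescluster tkg-cluster-1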
11. Now let's log in to a Tanzu Kubernetes Cluster using its name as follows
kubectl vsphere login --tanzu-kubernetes-cluster-name TKG-CLUSTER-NAME --vsphere-username VCENTER-SSO-USER --server SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS --insecure-skip-tls-verify
Example:
$ kubectl vsphere login --tanzu-kubernetes-cluster-name tkg-cluster-1 --vsphere-username administrator@vsphere.local --server wcp.haas-yyy.pez.pivotal.io --insecure-skip-tls-verify
Password:
Logged in successfully.
You have access to the following contexts:
ns1
tkg-cluster-1
wcp.haas-yyy.pez.pivotal.io
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
12. Let's switch to the correct context here, which is our newly created Kubernetes cluster
$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".
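As a quick sanity check that we are now talking to the new cluster rather than the Supervisor, list its nodes; with the manifest from step 7 you should see one control plane node and three workers:
Command: kubectl get nodes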
13. If your applications fail to run with the error “container has runAsNonRoot and the image will run as root”, add the RBAC cluster roles from here:
https://github.com/dstamen/Kubernetes/blob/master/demo-applications/allow-runasnonroot-clusterrole.yaml
PSP (Pod Security Policy) is enabled by default in Tanzu Kubernetes Clusters, so a PSP binding needs to be applied before dropping a deployment on the cluster, as shown in the link above
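For a lab environment, VMware's docs show a one-line binding that grants the default privileged PSP to all authenticated users. This is convenient but far too permissive for production, so treat it as a quick unblock only:
$ kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated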
14. Now let's deploy a simple nginx deployment using the YAML file below
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2   # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
15. Apply the YAML config to create the Service and Deployment
$ kubectl create -f nginx-deployment.yaml
service/nginx created
deployment.apps/nginx created
16. Verify everything was deployed successfully as shown below
$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-574b87c764-2zrp2   1/1     Running   0          74s
pod/nginx-574b87c764-p8d45   1/1     Running   0          74s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1      <none>          443/TCP        29m
service/nginx        LoadBalancer   10.111.0.106   10.193.191.68   80:31921/TCP   75s
service/supervisor   ClusterIP      None           <none>          6443/TCP       29m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/2     2            2           75s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-574b87c764   2         2         2       75s
To access NGINX, use the external IP address of the service "service/nginx" on port 80
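For example, you can pull that IP straight from the Service object and curl it (this assumes curl is available on your machine):
$ curl http://$(kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')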
17. Finally, let's return to the vSphere Client and see where the Tanzu Kubernetes Cluster we created exists. It will be inside the vSphere namespace "ns1", which is where we drove our install of the Tanzu Kubernetes Cluster from.
More Information
Introducing vSphere 7: Modern Applications & Kubernetes
https://blogs.vmware.com/vsphere/2020/03/vsphere-7-kubernetes-tanzu.html
How to Get vSphere with Kubernetes
https://blogs.vmware.com/vsphere/2020/04/how-to-get-vsphere-with-kubernetes.html
vSphere with Kubernetes 101 Whitepaper
https://blogs.vmware.com/vsphere/2020/03/vsphere-with-kubernetes-101.html