Tuesday, 28 April 2020

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes

In the vSphere environment, the persistent volume objects are backed by virtual disks that reside on datastores. Datastores are represented by storage policies. After the vSphere administrator creates a storage policy, for example gold, and assigns it to a namespace in a Supervisor Cluster, the storage policy appears as a matching Kubernetes storage class in the Supervisor Namespace and any available Tanzu Kubernetes clusters.

In the example below we will show how to deploy a single-instance stateful MySQL application pod on vSphere 7 with Kubernetes. For an introduction to vSphere 7 with Kubernetes, see the blog post linked below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

Steps 

1. If you followed the blog post above, you will have a Namespace as shown in the image below. The namespace we are using is called "ns1".



2. Click on "ns1" and ensure you have added storage using the "Storage" card



3. Now let's connect to our Supervisor Cluster so we can then switch to the Namespace "ns1"

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS \
  --vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

4. At this point we need to switch to the Namespace "ns1" we configured at step 2.

$ kubectl config use-context ns1
Switched to context "ns1".

5. Use one of the following commands to verify that the storage class is the one we added to the Namespace at step 2, in this case "pacific-gold-storage-policy".
  
$ kubectl get storageclass
NAME                          PROVISIONER              AGE
pacific-gold-storage-policy   csi.vsphere.vmware.com   5d20h

$ kubectl describe namespace ns1
Name:         ns1
Labels:       vSphereClusterID=domain-c8
Annotations:  ncp/extpoolid: domain-c8:1d3e6bfb-af68-4494-a9bf-c8560a7a6aef-ippool-10-193-191-129-10-193-191-190
              ncp/snat_ip: 10.193.191.141
              ncp/subnet-0: 10.244.0.240/28
              ncp/subnet-1: 10.244.1.16/28
              vmware-system-resource-pool: resgroup-67
              vmware-system-vm-folder: group-v68
Status:       Active

Resource Quotas
 Name:                                                                     ns1-storagequota
 Resource                                                                  Used  Hard
 --------                                                                  ---   ---
 pacific-gold-storage-policy.storageclass.storage.k8s.io/requests.storage  20Gi  9223372036854775807

No resource limits.

As a DevOps engineer, you can use the storage class in your persistent volume claim specifications. You can then deploy an application that uses storage from the persistent volume claim.

6. At this point we can create a Persistent Volume Claim using YAML as follows. In the example below we reference the storage class name "pacific-gold-storage-policy".

Note: We are using a Supervisor Cluster Namespace here for our Stateful MySQL application but the storage class name will also appear in any Tanzu Kubernetes clusters you have created.

Example:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created

7. Let's view the PVC we just created
  
$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
mysql-pv-claim   Bound    pvc-a60f2787-ccf4-4142-8bf5-14082ae33403   20Gi       RWO            pacific-gold-storage-policy   39s

8. Now let's create a Deployment that will mount this PVC we created above using the name "mysql-pv-claim"

Example:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

$ kubectl apply -f mysql-deployment.yaml
service/mysql created
deployment.apps/mysql created

9. Let's verify we have a running Deployment with a MySQL POD as shown below
  
$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/mysql-c85f7f79c-gskkr   1/1     Running   0          78s
pod/nginx                   1/1     Running   0          3d21h

NAME                                          TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
service/mysql                                 ClusterIP      None          <none>          3306/TCP         79s
service/tkg-cluster-1-60657ac113b7b5a0ebaab   LoadBalancer   10.96.0.253   10.193.191.68   80:32078/TCP     5d19h
service/tkg-cluster-1-control-plane-service   LoadBalancer   10.96.0.222   10.193.191.66   6443:30659/TCP   5d19h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           79s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-c85f7f79c   1         1         1       79s

10. If we return to vSphere client we will see our MySQL Stateful deployment as shown below


11. We can also view the PVC we created in the vSphere client
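
The same information is available from kubectl if you prefer the command line. A quick check (not part of the original steps) is to describe the claim and, if your role allows listing cluster-scoped objects, the persistent volume backing it:

$ kubectl describe pvc mysql-pv-claim
$ kubectl get pv

The volume should show the 20Gi capacity and the "pacific-gold-storage-policy" storage class we requested.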



12. Finally, let's connect to the MySQL database, which is done as follows:

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.47 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| performance_schema  |
+---------------------+
4 rows in set (0.02 sec)

mysql>
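
Because /var/lib/mysql sits on the persistent volume, the data survives pod restarts. A quick way to prove it (an illustrative check, not part of the original steps, reusing the "mysql" service name and root password from the Deployment above) is to create a database, delete the pod, and query again once the Deployment has recreated it:

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword -e "CREATE DATABASE demo;"

$ kubectl delete pod -l app=mysql

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword -e "SHOW DATABASES;"

The "demo" database should still be listed after the new pod comes up.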


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html

Display Storage Classes in a Supervisor Namespace or Tanzu Kubernetes Cluster
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52.html#GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52

Thursday, 23 April 2020

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"

VMware recently announced the general availability of vSphere 7. Among many new features is the integration of Kubernetes into vSphere. In this blog post we will see what is required to create our first Kubernetes Guest cluster and deploy the simplest of workloads.



Steps

1. Log into the vCenter client and select "Menu -> Workload Management" and click on "Enable"

Full details on how to enable and set up the Supervisor Cluster can be found in the following docs

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html

Make sure you enable Harbor as the Registry using this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html

A prerequisite for Workload Management is to have NSX-T 3.0 installed and enabled. https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html

Once everything is done, the "Workload Management" page will look like this. Enablement can take around 30 minutes to complete.



2. As a vSphere administrator, you can create namespaces on a Supervisor Cluster and configure them with resource quotas and storage, as well as set permissions for DevOps engineer users. Once you configure a namespace, you can provide it to DevOps engineers, who run vSphere Pods and Kubernetes clusters created through the VMware Tanzu™ Kubernetes Grid™ Service.

To do this follow this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-1544C9FE-0B23-434E-B823-C59EFC2F7309.html

Note: Make a note of this Namespace as we are going to need to connect to it shortly. In the examples below we have a namespace called "ns1"

3. With a vSphere namespace created we can now download the required CLI

Note: You can get the files from the Namespace summary page as shown below under the heading "Link to CLI Tools"



Once downloaded, put the contents of the .zip file in your OS's executable search path.
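
On macOS or Linux that might look something like the following (the zip name and paths here are illustrative; use the file names the CLI Tools page gives you):

$ unzip vsphere-plugin.zip -d vsphere-plugin
$ sudo mv vsphere-plugin/bin/* /usr/local/bin/
$ kubectl vsphere --help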

4. Now we are ready to log in. To do that we will use a command as follows

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS \
  --vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-253.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

Full instructions are at the following URL

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-F5114388-1838-4B3B-8A8D-4AE17F33526A.html

5. At this point we need to switch to the Namespace we created at step 2, which is "ns1".

$ kubectl config use-context ns1
Switched to context "ns1".

6. Get a list of the available content images and the Kubernetes version that the image provides

Command: kubectl get virtualmachineimages
  
$ kubectl get virtualmachineimages
NAME                                                        AGE
ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd   35m

Version Information can be retrieved as follows:
  
$ kubectl describe virtualmachineimage ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Name:         ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Namespace:
Labels:       <none>
Annotations:  vmware-system.compatibilityoffering:
                [{"requires": {"k8s.io/configmap": [{"predicate": {"operation": "anyOf", "arguments": [{"operation": "not", "arguments": [{"operation": "i...
              vmware-system.guest.kubernetes.addons.calico:
                {"type": "inline", "value": "---\n# Source: calico/templates/calico-config.yaml\n# This ConfigMap is used to configure a self-hosted Calic...
              vmware-system.guest.kubernetes.addons.pvcsi:
                {"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: {{ .PVCSINamespace }}\n---\nkind: ServiceAccount\napiVers...
              vmware-system.guest.kubernetes.addons.vmware-guest-cluster:
                {"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: vmware-system-cloud-provider\n---\napiVersion: v1\nkind: ...
              vmware-system.guest.kubernetes.distribution.image.version:
                {"kubernetes": {"version": "1.16.8+vmware.1", "imageRepository": "vmware.io"}, "compatibility-7.0.0.10100": {"isCompatible": "true"}, "dis...
API Version:  vmoperator.vmware.com/v1alpha1
Kind:         VirtualMachineImage
Metadata:
  Creation Timestamp:  2020-04-22T04:52:42Z
  Generation:          1
  Resource Version:    28324
  Self Link:           /apis/vmoperator.vmware.com/v1alpha1/virtualmachineimages/ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
  UID:                 9b2a8248-d315-4b50-806f-f135459801a8
Spec:
  Image Source Type:  Content Library
  Type:               ovf
Events:               <none>


7. Create a YAML file with the required configuration parameters to define the cluster

A few things to note:
  1. Make sure your storageClass name matches the storage class name you used during setup
  2. Make sure your distribution version matches a name from the output of step 6
Example:

apiVersion: run.tanzu.vmware.com/v1alpha1               #TKG API endpoint
kind: TanzuKubernetesCluster                            #required parameter
metadata:
  name: tkg-cluster-1                                   #cluster name, user defined
  namespace: ns1                                        #supervisor namespace
spec:
  distribution:
    version: v1.16                                      #resolved kubernetes version
  topology:
    controlPlane:
      count: 1                                          #number of control plane nodes
      class: best-effort-small                          #vmclass for control plane nodes
      storageClass: pacific-gold-storage-policy         #storageclass for control plane
    workers:
      count: 3                                          #number of worker nodes
      class: best-effort-small                          #vmclass for worker nodes
      storageClass: pacific-gold-storage-policy         #storageclass for worker nodes
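
If you want to double-check which VM classes (such as best-effort-small used above) are available before applying this, they can be listed in the Supervisor Namespace in much the same way as the virtual machine images in step 6 (assuming the standard VM Operator resources are present):

$ kubectl get virtualmachineclasses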

More information on what goes into your YAML is available here

Configuration Parameters for Provisioning Tanzu Kubernetes Clusters
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-4E68C7F2-C948-489A-A909-C7A1F3DC545F.html

8. Provision the Tanzu Kubernetes cluster using the following kubectl command against the manifest file above

Command: kubectl apply -f CLUSTER-NAME.yaml

While the cluster is being created, you can check the status as follows

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME                                                        CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   15m   running

NAME                                     PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1   provisioned

NAME                                                                   PROVIDERID                                       PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7             vsphere://420c7807-d2f2-0461-8232-ec33e07632fa   running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp                                                    provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm                                                    provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c                                                    provisioning

NAME                                                                               AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7             14m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp   6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm   6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c   6m4s

9. Run the following command and make sure the Tanzu Kubernetes cluster is running; this may take some time.

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME                                                        CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   18m   running

NAME                                     PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1   provisioned

NAME                                                                   PROVIDERID                                       PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7             vsphere://420c7807-d2f2-0461-8232-ec33e07632fa   running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp   vsphere://420ca6ec-9793-7f23-2cd9-67b46c4cc49d   provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm   vsphere://420c9dd0-4fee-deb1-5673-dabc52b822ca   provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c   vsphere://420cf11f-24e4-83dd-be10-7c87e5486f1c   provisioned

NAME                                                                               AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7             18m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp   9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm   9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c   9m59s

10. For a more concise view of the Tanzu Kubernetes Clusters you have and their status, the following command is useful.

Command: kubectl get tanzukubernetescluster
  
$ kubectl get tanzukubernetescluster
NAME            CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   20m   running 

11. Now let's log in to the Tanzu Kubernetes Cluster using its name as follows

kubectl vsphere login --tanzu-kubernetes-cluster-name TKG-CLUSTER-NAME --vsphere-username VCENTER-SSO-USER --server SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS --insecure-skip-tls-verify

Example:

$ kubectl vsphere login --tanzu-kubernetes-cluster-name tkg-cluster-1 --vsphere-username administrator@vsphere.local --server wcp.haas-yyy.pez.pivotal.io --insecure-skip-tls-verify

Password:

Logged in successfully.

You have access to the following contexts:
   ns1
   tkg-cluster-1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

12. Let's switch to the correct context, which is our newly created Kubernetes cluster.

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

13. If your applications fail to run with the error “container has runAsNonRoot and the image will run as root”, add the RBAC cluster roles from here:

https://github.com/dstamen/Kubernetes/blob/master/demo-applications/allow-runasnonroot-clusterrole.yaml

PSP (Pod Security Policy) admission is enabled by default in Tanzu Kubernetes Clusters, so a PSP-related binding needs to be applied before dropping a deployment on the cluster, as shown in the link above.
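
One common approach (taken from the VMware documentation rather than this post; treat it as an illustrative sketch) is to bind the built-in privileged PSP cluster role to all authenticated users:

$ kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated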

14. Now let's deploy a simple nginx Deployment using the following YAML file

apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: LoadBalancer

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

15. Apply the YAML config to create the Deployment

$ kubectl create -f nginx-deployment.yaml
service/nginx created
deployment.apps/nginx created

16. Verify everything was deployed successfully as shown below
  
$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-574b87c764-2zrp2   1/1     Running   0          74s
pod/nginx-574b87c764-p8d45   1/1     Running   0          74s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1      <none>          443/TCP        29m
service/nginx        LoadBalancer   10.111.0.106   10.193.191.68   80:31921/TCP   75s
service/supervisor   ClusterIP      None           <none>          6443/TCP       29m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/2     2            2           75s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-574b87c764   2         2         2       75s

To access NGINX, use the external IP address of the service "service/nginx" on port 80.
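
For example, using the EXTERNAL-IP from the output above:

$ curl http://10.193.191.68

which should return the default "Welcome to nginx!" page.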



17. Finally, let's return to the vSphere client and see where the Tanzu Kubernetes Cluster we created exists. It will be inside the vSphere namespace "ns1", which is where we drove our install of the Tanzu Kubernetes Cluster from.





More Information

Introducing vSphere 7: Modern Applications & Kubernetes
https://blogs.vmware.com/vsphere/2020/03/vsphere-7-kubernetes-tanzu.html

How to Get vSphere with Kubernetes
https://blogs.vmware.com/vsphere/2020/04/how-to-get-vsphere-with-kubernetes.html

vSphere with Kubernetes 101 Whitepaper
https://blogs.vmware.com/vsphere/2020/03/vsphere-with-kubernetes-101.html



Thursday, 16 April 2020

Ever wondered if Cloud Foundry can run on Kubernetes?

Well, yes, it's possible and available to test now via the repo below. In this post we will show the requirements to install cf-for-k8s and what we can do with it, as it stands now, once installed.

https://github.com/cloudfoundry/cf-for-k8s

Before we get started it's important to note the following, taken directly from the GitHub repo itself.

"This is a highly experimental project to deploy the new CF Kubernetes-centric components on Kubernetes. It is not meant for use in production and is subject to change in the future"

Steps

1. First we need a k8s cluster. I am using k8s on vSphere using VMware Enterprise PKS but you can use GKE or any other cluster that supports the minimum requirements.

To deploy cf-for-k8s as is, the cluster should:
  • be running version 1.14.x, 1.15.x, or 1.16.x
  • have a minimum of 5 nodes
  • have a minimum of 3 CPU, 7.5GB memory per node
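
A quick way to sanity-check an existing cluster against these requirements (illustrative commands only):

$ kubectl version --short      # server version should be 1.14.x, 1.15.x or 1.16.x
$ kubectl get nodes            # expect at least 5 nodes
$ kubectl describe nodes | grep -A 5 Capacity    # confirm CPU / memory per node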
2. There are also some IaaS requirements as shown below.



  • Supports LoadBalancer services
  • Defines a default StorageClass 


3. Finally, pushing source-code-based apps to Cloud Foundry requires an OCI-compliant registry. I am using GCR but Docker Hub also works.

    Under the hood, cf-for-k8s uses Cloud Native Buildpacks to detect and build the app source code into an OCI-compliant image and pushes the app image to the registry. Though cf-for-k8s has been tested with Google Container Registry and Dockerhub.com, it should work with any external OCI-compliant registry.

    So if you, like me, are using GCR and following along, you will need to create an IAM service account with storage privileges for GCR. Assuming you want to create a new service account on GCP, follow these steps, ensuring you set your GCP project id as shown below.

    $ export GCP_PROJECT_ID={project-id-in-gcp}

    $ gcloud iam service-accounts create push-image

    $ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --role roles/storage.admin

    $ gcloud iam service-accounts keys create \
      --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      gcr-storage-admin.json

    4. So to install cf-for-k8s we simply follow the detailed steps below.

    https://github.com/cloudfoundry/cf-for-k8s/blob/master/docs/deploy.md

    Note: We are using GCR, so the generate-values script we run looks as follows; it injects our GCR IAM account key into the YAML values file, assuming we performed the step above.

    $ ./hack/generate-values.sh -d DOMAIN -g ./gcr-push-storage-admin.json > /tmp/cf-values.yml

    5. In about 8 minutes you should have Cloud Foundry running on your Kubernetes cluster. Let's run a series of commands to verify that.

    - Here we see a set of Cloud Foundry namespaces named "cf-{name}"
      
    $ kubectl get ns
    NAME                   STATUS   AGE
    cf-blobstore           Active   8d
    cf-db                  Active   8d
    cf-system              Active   8d
    cf-workloads           Active   8d
    cf-workloads-staging   Active   8d
    console                Active   122m
    default                Active   47d
    istio-system           Active   8d
    kpack                  Active   8d
    kube-node-lease        Active   47d
    kube-public            Active   47d
    kube-system            Active   47d
    metacontroller         Active   8d
    pks-system             Active   47d
    vmware-system-tmc      Active   12d
    

    - Let's check the Cloud Foundry system is up and running by inspecting the status of the PODS as shown below
      
    $ kubectl get pods -n cf-system
    NAME                                     READY   STATUS    RESTARTS   AGE
    capi-api-server-6d89f44d5b-krsck         5/5     Running   2          8d
    capi-api-server-6d89f44d5b-pwv4b         5/5     Running   2          8d
    capi-clock-6c9f6bfd7-nmjrd               2/2     Running   0          8d
    capi-deployment-updater-79b4dc76-g2x6s   2/2     Running   0          8d
    capi-kpack-watcher-6c67984798-2x5n2      2/2     Running   0          8d
    capi-worker-7f8d499494-cd8fx             2/2     Running   0          8d
    cfroutesync-6fb9749-cbv6w                2/2     Running   0          8d
    eirini-6959464957-25ttx                  2/2     Running   0          8d
    fluentd-4l9ml                            2/2     Running   3          8d
    fluentd-mf8x6                            2/2     Running   3          8d
    fluentd-smss9                            2/2     Running   3          8d
    fluentd-vfzhl                            2/2     Running   3          8d
    fluentd-vpn4c                            2/2     Running   3          8d
    log-cache-559846dbc6-p85tk               5/5     Running   5          8d
    metric-proxy-76595fd7c-x9x5s             2/2     Running   0          8d
    uaa-79d77dbb77-gxss8                     2/2     Running   2          8d
    

    - Let's view the ingress gateway resources in the "istio-system" namespace
      
    $ kubectl get all -n istio-system
    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/istio-citadel-bc7957fc4-nn8kx             1/1     Running   0          8d
    pod/istio-galley-6478b6947d-6dl9h             2/2     Running   0          8d
    pod/istio-ingressgateway-fcgvg                2/2     Running   0          8d
    pod/istio-ingressgateway-jzkpj                2/2     Running   0          8d
    pod/istio-ingressgateway-ptjzz                2/2     Running   0          8d
    pod/istio-ingressgateway-rtwk4                2/2     Running   0          8d
    pod/istio-ingressgateway-tvz8p                2/2     Running   0          8d
    pod/istio-pilot-67955bdf6f-nrhzp              2/2     Running   0          8d
    pod/istio-policy-6b786c6f65-m7tj5             2/2     Running   3          8d
    pod/istio-sidecar-injector-5669cc5894-tq55v   1/1     Running   0          8d
    pod/istio-telemetry-77b745cd6b-wn2dx          2/2     Running   3          8d
    
    NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                                                                      AGE
    service/istio-citadel            ClusterIP      10.100.200.216   <none>          8060/TCP,15014/TCP                                                                                                           8d
    service/istio-galley             ClusterIP      10.100.200.214   <none>          443/TCP,15014/TCP,9901/TCP,15019/TCP                                                                                         8d
    service/istio-ingressgateway     LoadBalancer   10.100.200.105   10.195.93.142   15020:31515/TCP,80:31666/TCP,443:30812/TCP,15029:31219/TCP,15030:31566/TCP,15031:30615/TCP,15032:30206/TCP,15443:32555/TCP   8d
    service/istio-pilot              ClusterIP      10.100.200.182   <none>          15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                       8d
    service/istio-policy             ClusterIP      10.100.200.98    <none>          9091/TCP,15004/TCP,15014/TCP                                                                                                 8d
    service/istio-sidecar-injector   ClusterIP      10.100.200.160   <none>          443/TCP                                                                                                                      8d
    service/istio-telemetry          ClusterIP      10.100.200.5     <none>          9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                       8d
    
    NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/istio-ingressgateway   5         5         5       5            5           <none>          8d
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-citadel            1/1     1            1           8d
    deployment.apps/istio-galley             1/1     1            1           8d
    deployment.apps/istio-pilot              1/1     1            1           8d
    deployment.apps/istio-policy             1/1     1            1           8d
    deployment.apps/istio-sidecar-injector   1/1     1            1           8d
    deployment.apps/istio-telemetry          1/1     1            1           8d
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-citadel-bc7957fc4             1         1         1       8d
    replicaset.apps/istio-galley-6478b6947d             1         1         1       8d
    replicaset.apps/istio-pilot-67955bdf6f              1         1         1       8d
    replicaset.apps/istio-policy-6b786c6f65             1         1         1       8d
    replicaset.apps/istio-sidecar-injector-5669cc5894   1         1         1       8d
    replicaset.apps/istio-telemetry-77b745cd6b          1         1         1       8d
    
    NAME                                                  REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    horizontalpodautoscaler.autoscaling/istio-pilot       Deployment/istio-pilot       0%/80%    1         5         1          8d
    horizontalpodautoscaler.autoscaling/istio-policy      Deployment/istio-policy      2%/80%    1         5         1          8d
    horizontalpodautoscaler.autoscaling/istio-telemetry   Deployment/istio-telemetry   7%/80%    1         5         1          8d
    

    You can use kapp to verify your install as follows:

    $ kapp list
    Target cluster 'https://cfk8s.mydomain:8443' (nodes: 46431ba8-2048-41ea-a5c9-84c3a3716f6e, 4+)

    Apps in namespace 'default'

    Name  Label                                 Namespaces                                                                                                  Lcs   Lca
    cf    kapp.k14s.io/app=1586305498771951000  (cluster),cf-blobstore,cf-db,cf-system,cf-workloads,cf-workloads-staging,istio-system,kpack,metacontroller  true  8d

    Lcs: Last Change Successful
    Lca: Last Change Age

    1 apps

    Succeeded

    6. Now that Cloud Foundry is running, we need to configure DNS with your provider so that the wildcard subdomain of your system domain and the wildcard subdomain of all app domains point to the external IP of the Istio ingress gateway service. You can retrieve the external IP of this service by running a command as follows

    $ kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

    Note: The DNS A record wildcard entry would look as follows ensuring you use the DOMAIN you told the install script you were using

    DNS entry should be mapped to : *.{DOMAIN}
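
    For example, with a domain of "mydomain" and the ingress IP returned by the command above (10.195.93.142 in this environment), the wildcard record would look roughly like this in a BIND-style zone file (illustrative only; use your DNS provider's equivalent):

    *.mydomain.   300   IN   A   10.195.93.142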

    7. Once done, we can use dig to verify we have set up our DNS wildcard entry correctly. We are looking for an ANSWER section which maps to the IP address we retrieved from the istio-ingressgateway service above.

    $ dig api.mydomain

    ; <<>> DiG 9.10.6 <<>> api.mydomain
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58127
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;api.mydomain. IN A

    ;; ANSWER SECTION:
    api.mydomain. 60 IN A 10.0.0.1

    ;; Query time: 216 msec
    ;; SERVER: 10.10.6.6#53(10.10.6.7)
    ;; WHEN: Thu Apr 16 11:46:59 AEST 2020
    ;; MSG SIZE  rcvd: 83

    8. So now we are ready to log in using the Cloud Foundry CLI. Make sure you're using the latest version as shown below.

    $ cf version
    cf version 6.50.0+4f0c3a2ce.2020-03-03

    Note: You can install Cloud Foundry CLI as follows

    https://github.com/cloudfoundry/cli

    9. Ok, so we are ready to target the API endpoint and log in. As you may have guessed, the API endpoint is "api.{DOMAIN}", so go ahead and do that as shown below. If this fails it means you have to revisit steps 6 and 7 above.

    $ cf api https://api.mydomain --skip-ssl-validation
    Setting api endpoint to https://api.mydomain...
    OK

    api endpoint:   https://api.mydomain
    api version:    2.148.0

    10. Now we need the admin password to log in via UAA; this was generated for us when we ran the generate-values script above to produce our install YAML. You can run a simple command as follows against the YAML file to get the password.

    $ head cf-values.yml
    #@data/values
    ---
    system_domain: "mydomain"
    app_domains:
    #@overlay/append
    - "mydomain"
    cf_admin_password: 5nxm5bnl23jf5f0aivbs

    cf_blobstore:
      secret_key: 04gihynpr0x4dpptc5a5
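
    If you just want the admin password on its own, a simple one-liner against the generated values file also works (adjust the path to wherever you wrote cf-values.yml):

    $ grep cf_admin_password cf-values.yml | awk '{print $2}'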

    11. To log in I use a script as follows, which also creates a space for me and targets it so I can push applications into it.

    cf auth admin 5nxm5bnl23jf5f0aivbs
    cf target -o system
    cf create-space development
    cf target -s development

    The output, whether we run this as a script or just type each command one at a time, will look as follows.

    API endpoint: https://api.mydomain
    Authenticating...
    OK

    Use 'cf target' to view or set your target org and space.
    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development
    Creating space development in org system as admin...
    OK

    Space development already exists

    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development

    12. If we type in "cf apps" we will see we have no applications deployed which is expected.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    No apps found

    13. So let's deploy our first application. In this example we will use a Node.js Cloud Foundry application which exists at the following GitHub repo. We will deploy it from its source code only. To do that we will clone it onto our file system as shown below.

    https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    $ git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    14. Edit cf-sample-app-nodejs/manifest.yml to look as follows by removing the random-route entry

    ---
    applications:
    - name: cf-nodejs
      memory: 512M
      instances: 1

    15. Now, to push the Node app we are going to use two terminal windows: one to actually push the app and the other to view the logs.


    16. Now, in the first terminal window, issue this command, ensuring the cloned app from above exists in the directory you are in, as shown by the path it references.

    $ cf push test-node-app -p ./cf-sample-app-nodejs

    17. In the second terminal window issue this command.

    $ cf logs test-node-app

    18. You should see log output while the application is being pushed.



    19. Wait for the "cf push" to complete as shown below

    ....

    Waiting for app to start...

    name:                test-node-app
    requested state:     started
    isolation segment:   placeholder
    routes:              test-node-app.system.run.haas-210.pez.pivotal.io
    last uploaded:       Thu 16 Apr 13:04:59 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T03:05:13Z   0.0%   0 of 1G   0 of 1G


    Verify we have deployed our Node app and it has a fully qualified URL for us to access it as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           1/1         1G       1G     test-node-app.mydomain

    ** Browser **



    Ok, so what actually happened on our k8s cluster to get this application deployed? A series of steps was performed, which is why "cf push" blocks until they have all happened. At a high level these are the 3 main steps:
    1. CAPI uploads the code and puts it in an internal blob store
    2. kpack builds the image and stores it in the registry you defined at install time (GCR for us)
    3. Eirini schedules the pod
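
    If you are curious, you can watch steps 2 and 3 happen from kubectl while the push is in flight (illustrative commands; the staging namespace was listed back in step 5):

    $ kubectl get pods -n cf-workloads-staging     # kpack build pods creating the app image
    $ kubectl get pods -n cf-workloads             # the pod Eirini schedules once staging completes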

    GCR "cf-workloads" folder


    kpack is where lots of the magic actually occurs. kpack is based on the CNCF sandbox project known as Cloud Native Buildpacks and can create OCI-compliant images from source code and/or artifacts automatically for you. CNB/kpack doesn't stop there; to find out more I suggest going to the following links.

    https://tanzu.vmware.com/content/blog/introducing-kpack-a-kubernetes-native-container-build-service

    https://buildpacks.io/

    Buildpacks provide a higher-level abstraction for building apps compared to Dockerfiles.

    Specifically, buildpacks:
    • Provide a balance of control that reduces the operational burden on developers and supports enterprise operators who manage apps at scale.
    • Ensure that apps meet security and compliance requirements without developer intervention.
    • Provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
    • Rely on compatibility guarantees to safely apply patches without rebuilding artifacts and without unintentionally changing application behavior.
    20. Let's run a series of kubectl commands to see what was created. All of our apps get deployed to the namespace "cf-workloads".

    - What PODs are running in cf-workloads
      
    $ kubectl get pods -n cf-workloads
    NAME                                     READY   STATUS    RESTARTS   AGE
    test-node-app-development-c346b24349-0   2/2     Running   0          26m 
    

    - You will notice we have a POD running with 2 containers, but we also have a Service which is used internally to route to one or more PODs using ClusterIP, as shown below
      
    $ kubectl get svc -n cf-workloads
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    s-1999c874-e300-45e1-b5ff-1a69b7649dd6   ClusterIP   10.100.200.26   <none>        8080/TCP   27m
    

    - Each POD has two containers, named as follows.

    opi: This is your actual container instance running your code
    istio-proxy: This, as the name suggests, is a sidecar proxy container which, among other things, routes requests to the opi container when required
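
    You can confirm the container names yourself with a standard jsonpath query (using the pod name from the listing above), which should print the two containers described here:

    $ kubectl get pod test-node-app-development-c346b24349-0 -n cf-workloads -o jsonpath='{.spec.containers[*].name}'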

    21. Ok so let's scale our application to run 2 instances. To do that we simply use Cloud Foundry CLI as follows

    $ cf scale test-node-app -i 2
    Scaling app test-node-app in org system / space development as admin...
    OK

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    And, using kubectl, we see that as expected we end up with another POD created for the second instance
      
    $ kubectl get pods -n cf-workloads
    NAME                                     READY   STATUS    RESTARTS   AGE
    test-node-app-development-c346b24349-0   2/2     Running   0          44m
    test-node-app-development-c346b24349-1   2/2     Running   0          112s
    

    If we dig a bit deeper we will see that a StatefulSet backs the application deployment, as shown below
      
    $ kubectl get all -n cf-workloads
    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/test-node-app-development-c346b24349-0   2/2     Running   0          53m
    pod/test-node-app-development-c346b24349-1   2/2     Running   0          10m
    
    NAME                                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/s-1999c874-e300-45e1-b5ff-1a69b7649dd6   ClusterIP   10.100.200.26   <none>        8080/TCP   53m
    
    NAME                                                    READY   AGE
    statefulset.apps/test-node-app-development-c346b24349   2/2     53m
    

    Ok, so as you may have guessed, we can deploy many different types of apps because kpack supports multiple languages, including Java, Go, Python, etc.

    22. Let's deploy a Go application as follows.

    $ git clone https://github.com/swisscom/cf-sample-app-go

    $ cf push my-go-app -m 64M -p ./cf-sample-app-go
    Pushing app my-go-app to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:       my-go-app
      path:       /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/cf-sample-app-go
    + memory:     64M
      routes:
    +   my-go-app.mydomain

    Creating app my-go-app...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.43 KiB / 1.43 KiB [====================================================================================] 100.00% 1s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                my-go-app
    requested state:     started
    isolation segment:   placeholder
    routes:              my-go-app.mydomain
    last uploaded:       Thu 16 Apr 14:06:25 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   64M
         state     since                  cpu    memory     disk      details
    #0   running   2020-04-16T04:06:43Z   0.0%   0 of 64M   0 of 1G

    We can invoke the application using "curl" or something more modern like "HTTPie"

    $ http http://my-go-app.mydomain
    HTTP/1.1 200 OK
    content-length: 59
    content-type: text/plain; charset=utf-8
    date: Thu, 16 Apr 2020 04:09:46 GMT
    server: istio-envoy
    x-envoy-upstream-service-time: 6

    Congratulations! Welcome to the Swisscom Application Cloud!

    If we tailed the logs using "cf logs my-go-app" we would have seen that kpack intelligently determines this is a Go app and uses the Go buildpack to compile the code and produce a container image.

    ...
    2020-04-16T14:05:27.52+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Warning: Image "gcr.io/fe-papicella/cf-workloads/f0072cfa-0e7e-41da-9bf7-d34b2997fb94" not found
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Compiler Buildpack 0.0.83
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go 1.13.7: Contributing to layer
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Downloading from https://buildpacks.cloudfoundry.org/dependencies/go/go-1.13.7-bionic-5bb47c26.tgz
    2020-04-16T14:05:35.13+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Verifying checksum
    2020-04-16T14:05:35.63+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Expanding to /layers/org.cloudfoundry.go-compiler/go
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Mod Buildpack 0.0.84
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Setting environment variables
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    2020-04-16T14:05:41.68+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT github.com/swisscom/cf-sample-app-go
    2020-04-16T14:05:41.69+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    ...

    Using "cf apps" we now have two applications deployed as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    my-go-app       started           1/1         64M      1G     my-go-app.mydomain
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    23. Finally, kpack and the buildpacks ecosystem can also deploy already-built artifacts. The Java buildpack is capable of not only deploying from source but can also use a fat Spring Boot JAR file, for example, as shown below. In this example we have packaged the artifact we wish to deploy as "PivotalMySQLWeb-1.0.0-SNAPSHOT.jar".

    $ cf push piv-mysql-web -p PivotalMySQLWeb-1.0.0-SNAPSHOT.jar -i 1 -m 1g
    Pushing app piv-mysql-web to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:        piv-mysql-web
      path:        /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/PivotalMySQLWeb-1.0.0-SNAPSHOT.jar
    + instances:   1
    + memory:      1G
      routes:
    +   piv-mysql-web.mydomain

    Creating app piv-mysql-web...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.03 MiB / 1.03 MiB [====================================================================================] 100.00% 2s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T04:17:43Z   0.0%   0 of 1G   0 of 1G


    Of course the usual commands you expect from the CF CLI still exist. Here are some examples.

    $ cf app piv-mysql-web
    Showing health and status for app piv-mysql-web in org system / space development as admin...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory         disk      details
    #0   running   2020-04-16T04:17:43Z   0.1%   195.8M of 1G   0 of 1G

    $ cf env piv-mysql-web
    Getting env variables for app piv-mysql-web in org system / space development as admin...
    OK

    System-Provided:

    {
     "VCAP_APPLICATION": {
      "application_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "application_name": "piv-mysql-web",
      "application_uris": [
       "piv-mysql-web.mydomain"
      ],
      "application_version": "750d9530-e756-4b74-ac86-75b61c60fe2d",
      "cf_api": "https://api. mydomain",
      "limits": {
       "disk": 1024,
       "fds": 16384,
       "mem": 1024
      },
      "name": "piv-mysql-web",
      "organization_id": "8ae94610-513c-435b-884f-86daf81229c8",
      "organization_name": "system",
      "process_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "process_type": "web",
      "space_id": "7f3d78ae-34d4-42e4-8ab8-b34e46e8ad1f",
      "space_name": "development",
      "uris": [
       "piv-mysql-web. mydomain"
      ],
      "users": null,
      "version": "750d9530-e756-4b74-ac86-75b61c60fe2d"
     }
    }

    No user-defined env variables have been set

    No running env variables have been set

    No staging env variables have been set

    So what about some sort of UI? That brings us to step 24.

    24. Let's start by installing helm using a script as follows

    #!/usr/bin/env bash

    echo "install helm"
    # installs helm with bash commands for easier command line integration
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
    # add a service account within a namespace to segregate tiller
    kubectl --namespace kube-system create sa tiller
    # create a cluster role binding for tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole cluster-admin \
        --serviceaccount=kube-system:tiller

    echo "initialize helm"
    # initializes helm within the tiller service account
    helm init --service-account tiller
    # updates the repos for Helm repo integration
    helm repo update

    echo "verify helm"
    # verify that helm is installed in the cluster
    kubectl get deploy,svc tiller-deploy -n kube-system

    Once installed you can verify helm is working by using "helm ls" which should come back with no output as you haven't installed anything with helm yet.

    25. Run the following to install Stratos, an open-source web UI for Cloud Foundry

    For more information on Stratos visit this URL - https://github.com/cloudfoundry/stratos
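
    Note: The install below pulls the chart from the Stratos Helm repository, so if you have not added that repo yet you will need something like the following first (repo URL as documented on the Stratos GitHub page):

    $ helm repo add stratos https://cloudfoundry.github.io/stratos
    $ helm repo update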

    $ helm install stratos/console --namespace=console --name my-console --set console.service.type=LoadBalancer
    NAME:   my-console
    LAST DEPLOYED: Thu Apr 16 09:48:19 2020
    NAMESPACE: console
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1/Deployment
    NAME        READY  UP-TO-DATE  AVAILABLE  AGE
    stratos-db  0/1    1           0          2s

    ==> v1/Job
    NAME                   COMPLETIONS  DURATION  AGE
    stratos-config-init-1  0/1          2s        2s

    ==> v1/PersistentVolumeClaim
    NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    console-mariadb                   Bound   pvc-4ff20e21-1852-445f-854f-894bc42227ce  1Gi       RWO           fast          2s
    my-console-encryption-key-volume  Bound   pvc-095bb7ed-7be9-4d93-b63a-a8af569361b6  20Mi      RWO           fast          2s

    ==> v1/Pod(related)
    NAME                         READY  STATUS             RESTARTS  AGE
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s

    ==> v1/Role
    NAME              AGE
    config-init-role  2s

    ==> v1/RoleBinding
    NAME                              AGE
    config-init-secrets-role-binding  2s

    ==> v1/Secret
    NAME                  TYPE    DATA  AGE
    my-console-db-secret  Opaque  5     2s
    my-console-secret     Opaque  5     2s

    ==> v1/Service
    NAME                TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
    my-console-mariadb  ClusterIP     10.100.200.162  <none>         3306/TCP       2s
    my-console-ui-ext   LoadBalancer  10.100.200.171  10.195.93.143  443:31524/TCP  2s

    ==> v1/ServiceAccount
    NAME         SECRETS  AGE
    config-init  1        2s

    ==> v1/StatefulSet
    NAME     READY  AGE
    stratos  0/1    2s

    26. You can verify it installed in a few ways, as shown below.

    - Use helm with "helm ls"
      
    $ helm ls
    NAME       REVISION UPDATED                  STATUS   CHART         APP VERSION NAMESPACE
    my-console 1        Thu Apr 16 09:48:19 2020 DEPLOYED console-3.0.0 3.0.0       console
    

    - Verify everything is running using "kubectl get all -n console"
      
    $ k get all -n console
    NAME                              READY   STATUS              RESTARTS   AGE
    pod/stratos-0                     0/2     ContainerCreating   0          40s
    pod/stratos-config-init-1-2t47x   0/1     Completed           0          40s
    pod/stratos-db-69ddf7f5f7-gb8xm   0/1     Running             0          40s
    
    NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)         AGE
    service/my-console-mariadb   ClusterIP      10.100.200.162   <none>          3306/TCP        40s
    service/my-console-ui-ext    LoadBalancer   10.100.200.171   10.195.1.1    443:31524/TCP   40s
    
    NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/stratos-db   0/1     1            0           41s
    
    NAME                                    DESIRED   CURRENT   READY   AGE
    replicaset.apps/stratos-db-69ddf7f5f7   1         1         0       41s
    
    NAME                       READY   AGE
    statefulset.apps/stratos   0/1     41s
    
    NAME                              COMPLETIONS   DURATION   AGE
    job.batch/stratos-config-init-1   1/1           27s        42s
    

    27. Now to open up the UI web app we just need the external IP from "service/my-console-ui-ext" as per the output above.

    Navigate to https://{external-ip}:443
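
    The external IP can also be pulled straight from kubectl, in the same way we did for the Istio ingress gateway earlier:

    $ kubectl get svc -n console my-console-ui-ext -o jsonpath='{.status.loadBalancer.ingress[*].ip}'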

    28. Create a local user to log in, using a password you set and the username "admin".

    Note: The password is just to get into the UI. It can be anything you want it to be.



    29. Now we need to click on "Endpoints" and register a Cloud Foundry endpoint using the same login details we used with the Cloud Foundry API earlier at step 11.

    Note: The API endpoint is the one you used at step 9; make sure to skip SSL validation.

    Once connected there are our deployed applications.



    Summary 

    In this post we explored what running Cloud Foundry on Kubernetes looks like. For those familiar with Cloud Foundry or Tanzu Application Service (formerly known as Pivotal Application Service), from a development perspective everything is the same, using familiar CF CLI commands. What changes is the footprint: running Cloud Foundry is much less complicated, and it runs on Kubernetes itself, meaning there are more places to run Cloud Foundry than ever before, plus the ability to leverage community-based Kubernetes projects, further simplifying Cloud Foundry.

    For more information see the links below.

    More Information

    GitHub Repo
    https://github.com/cloudfoundry/cf-for-k8s

    VMware Tanzu Application Service for Kubernetes (Beta)
    https://network.pivotal.io/products/tas-for-kubernetes/