It took some time, but I was finally able to test PKS with NSX-T rather than using Flannel.
While there is a bit of initial setup to install NSX-T and PKS and then configure PKS to use NSX-T networking, rolling out multiple Kubernetes clusters, each with its own unique networking, is greatly simplified by NSX-T. Here I am going to show what happens after pushing a workload to my PKS K8s cluster.
Before we can do anything we need the following...
Pre Steps
1. Ensure you have NSX-T set up and its dashboard UI available as follows
2. Ensure you have PKS installed. In this example I have it installed on vSphere, which at the time of this blog is the only supported platform for use with NSX-T
The PKS tile needs to be configured to use NSX-T, which is done on this page of the tile configuration
3. You can see from the NSX-T manager UI that we have a Load Balancer set up as shown below. Navigate to "Load Balancing -> Load Balancers"
This Load Balancer is backed by a few "Virtual Servers", one for HTTP (port 80) and the other for HTTPS (port 443), which can be seen when you select the Virtual Servers link
From here we have logical switches created for each of the Kubernetes namespaces. Two are for our load balancer, and the other three are for the three K8s namespaces (default, kube-public, kube-system)
Here is how we verify the namespaces we have in our K8s cluster
pasapicella@pas-macbook:~/pivotal $ kubectl get ns
NAME          STATUS    AGE
default       Active    5h
kube-public   Active    5h
kube-system   Active    5h
All of the logical switches are connected by a set of T1 Logical Routers
For these to be accessible, the T1 Logical Routers are linked to the T0 Logical Router via a set of router ports
Now let's push a basic K8s workload and see what NSX-T and PKS give us out of the box...
Steps
Let's create our K8s cluster using the PKS CLI. You will need a PKS CLI user, which can be created by following this doc
https://docs.pivotal.io/runtimes/pks/1-1/manage-users.html
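As a rough sketch of what that doc walks through, the PKS CLI user is created against the PKS UAA instance using the uaac gem. The admin client secret, email, and password below are placeholders you would substitute with your own values:

```shell
# Target the UAA server on the PKS API VM (port 8443)
uaac target https://api.pks.haas-148.pez.pivotal.io:8443 --skip-ssl-validation

# Authenticate as the UAA admin client (secret comes from the PKS tile credentials)
uaac token client get admin -s <admin-client-secret>

# Create the user and grant it the PKS cluster admin scope
uaac user add pas --emails pas@example.com -p <password>
uaac member add pks.clusters.admin pas
```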
1. Login using the PKS CLI as follows
$ pks login -k -a api.pks.haas-148.pez.pivotal.io -u pas -p ****
2. Create a cluster as shown below
$ pks create-cluster apples --external-hostname apples.haas-148.pez.pivotal.io --plan small
Name: apples
Plan Name: small
UUID: d9f258e3-247c-4b4c-9055-629871be896c
Last Action: CREATE
Last Action State: in progress
Last Action Description: Creating cluster
Kubernetes Master Host: apples.haas-148.pez.pivotal.io
Kubernetes Master Port: 8443
Worker Instances: 3
Kubernetes Master IP(s): In Progress
3. Wait for the cluster creation to complete as follows
$ pks cluster apples
Name: apples
Plan Name: small
UUID: d9f258e3-247c-4b4c-9055-629871be896c
Last Action: CREATE
Last Action State: succeeded
Last Action Description: Instance provisioning completed
Kubernetes Master Host: apples.haas-148.pez.pivotal.io
Kubernetes Master Port: 8443
Worker Instances: 3
Kubernetes Master IP(s): 10.1.1.10
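Once the cluster is up, kubectl needs credentials for it. A quick sketch, assuming the same "apples" cluster name, using the PKS CLI's get-credentials command (it writes a context into ~/.kube/config and switches to it):

```shell
# Fetch cluster credentials and set the kubectl context to "apples"
pks get-credentials apples

# Confirm kubectl is now talking to the new cluster
kubectl cluster-info
```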
The PKS CLI is essentially telling BOSH to go ahead and, based on the small plan, create a fully functional K8s cluster, from the VMs to all the processes that run on them, and once it's up, to keep it running in the event of failure.
Here is an example of one of the worker VMs of the cluster shown in the vSphere Web Client
4. Using the following YAML file, let's push a workload to our K8s cluster
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fortune-service
    deployment: pks-workshop
  name: fortune-service
spec:
  ports:
  - port: 80
    name: ui
  - port: 9080
    name: backend
  - port: 6379
    name: redis
  type: LoadBalancer
  selector:
    app: fortune
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: fortune
    deployment: pks-workshop
  name: fortune
spec:
  containers:
  - image: azwickey/fortune-ui:latest
    name: fortune-ui
    ports:
    - containerPort: 80
      protocol: TCP
  - image: azwickey/fortune-backend-jee:latest
    name: fortune-backend
    ports:
    - containerPort: 9080
      protocol: TCP
  - image: redis
    name: redis
    ports:
    - containerPort: 6379
      protocol: TCP
5. Push the workload as follows once the above YAML is saved to a file
$ kubectl create -f fortune-teller.yml
service "fortune-service" created
pod "fortune" created
6. Verify the pods are running as follows
$ kubectl get all
NAME         READY     STATUS    RESTARTS   AGE
po/fortune   3/3       Running   0          35s

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                      AGE
svc/fortune-service   LoadBalancer   10.100.200.232   10.195.3.134   80:30591/TCP,9080:32487/TCP,6379:32360/TCP   36s
svc/kubernetes        ClusterIP      10.100.200.1     <none>         443/TCP                                      5h
Great, so now let's head back to our NSX-T manager UI and see what has been created. From the above output you can see a LoadBalancer service was created and an external IP address assigned
7. The first thing you will notice is that in "Virtual Servers" we have some new entries for each of our container ports, as shown below
and ...
Finally, the load balancer we previously had in place shows our "Virtual Servers" added to its config and routable
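With the virtual servers in place, we can verify the NSX-T load balancer end to end from the command line. A small sketch, assuming the fortune-ui container answers HTTP on port 80 at the external IP shown in the service output above:

```shell
# Pull the external IP NSX-T assigned to the LoadBalancer service
kubectl get svc fortune-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# 10.195.3.134 (per the service output above)

# Hit the UI port through the NSX-T virtual server
curl -s http://10.195.3.134:80/
```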
More Information
Pivotal Container Service
https://docs.pivotal.io/runtimes/pks/1-1/
VMware NSX-T
https://docs.vmware.com/en/VMware-NSX-T/index.html