https://github.com/cloudfoundry/cf-for-k8s
Before we get started it's important to note that the following is taken directly from the GitHub repo itself:
"This is a highly experimental project to deploy the new CF Kubernetes-centric components on Kubernetes. It is not meant for use in production and is subject to change in the future"
Steps
1. First we need a Kubernetes cluster. I am using k8s on vSphere with VMware Enterprise PKS, but you can use GKE or any other cluster that meets the minimum requirements.
To deploy cf-for-k8s as is, the cluster should:
- be running version 1.14.x, 1.15.x, or 1.16.x
- have a minimum of 5 nodes
- have a minimum of 3 CPU, 7.5GB memory per node
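Before moving on, you can sanity-check the cluster against the requirements above. A minimal sketch using standard kubectl commands (output will vary with your cluster):

$ kubectl version --short          # server version should be 1.14.x, 1.15.x or 1.16.x
$ kubectl get nodes                # expect at least 5 nodes in the Ready state
$ kubectl describe nodes | grep -A 5 Capacity    # confirm CPU/memory per node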
2. The cluster must support LoadBalancer services.
3. Finally, pushing source-code-based apps to Cloud Foundry requires an OCI-compliant registry. I am using GCR, but Docker Hub also works.
Under the hood, cf-for-k8s uses Cloud Native Buildpacks to detect and build the app source code into an OCI-compliant image and push the app image to the registry. Though cf-for-k8s has been tested with Google Container Registry and Docker Hub, it should work with any external OCI-compliant registry.
So if you, like me, are using GCR and following along, you will need to create an IAM account with storage privileges for GCR. Assuming you want to create a new IAM account on GCP, follow these steps, ensuring you set your GCP project ID as shown below.
$ export GCP_PROJECT_ID={project-id-in-gcp}
$ gcloud iam service-accounts create push-image
$ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
--role roles/storage.admin
$ gcloud iam service-accounts keys create \
--iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
gcr-storage-admin.json
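If you want to double-check that the key belongs to the right service account, a quick optional check:

$ gcloud iam service-accounts keys list \
    --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com"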
4. To install cf-for-k8s we simply follow the detailed steps documented here.
https://github.com/cloudfoundry/cf-for-k8s/blob/master/docs/deploy.md
Note: Since we are using GCR, the generate-values script we run looks as follows; the -g flag injects the GCR IAM account key we created above into the YML file.
$ ./hack/generate-values.sh -d DOMAIN -g ./gcr-storage-admin.json > /tmp/cf-values.yml
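For reference, the actual install from the deploy docs boils down to rendering the templates with ytt and applying them with kapp. At the time of writing it looked roughly like this (check deploy.md above for the current form):

$ ytt -f config -f /tmp/cf-values.yml > /tmp/cf-for-k8s-rendered.yml
$ kapp deploy -a cf -f /tmp/cf-for-k8s-rendered.yml -y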
5. In about 8 minutes or so you should have Cloud Foundry running on your Kubernetes cluster. Let's run a series of commands to verify that.
- Here we see a set of Cloud Foundry namespaces named "cf-{name}"
$ kubectl get ns
NAME                   STATUS   AGE
cf-blobstore           Active   8d
cf-db                  Active   8d
cf-system              Active   8d
cf-workloads           Active   8d
cf-workloads-staging   Active   8d
console                Active   122m
default                Active   47d
istio-system           Active   8d
kpack                  Active   8d
kube-node-lease        Active   47d
kube-public            Active   47d
kube-system            Active   47d
metacontroller         Active   8d
pks-system             Active   47d
vmware-system-tmc      Active   12d
- Let's check that the Cloud Foundry system is up and running by inspecting the status of the pods as shown below.
$ kubectl get pods -n cf-system
NAME                                     READY   STATUS    RESTARTS   AGE
capi-api-server-6d89f44d5b-krsck         5/5     Running   2          8d
capi-api-server-6d89f44d5b-pwv4b         5/5     Running   2          8d
capi-clock-6c9f6bfd7-nmjrd               2/2     Running   0          8d
capi-deployment-updater-79b4dc76-g2x6s   2/2     Running   0          8d
capi-kpack-watcher-6c67984798-2x5n2      2/2     Running   0          8d
capi-worker-7f8d499494-cd8fx             2/2     Running   0          8d
cfroutesync-6fb9749-cbv6w                2/2     Running   0          8d
eirini-6959464957-25ttx                  2/2     Running   0          8d
fluentd-4l9ml                            2/2     Running   3          8d
fluentd-mf8x6                            2/2     Running   3          8d
fluentd-smss9                            2/2     Running   3          8d
fluentd-vfzhl                            2/2     Running   3          8d
fluentd-vpn4c                            2/2     Running   3          8d
log-cache-559846dbc6-p85tk               5/5     Running   5          8d
metric-proxy-76595fd7c-x9x5s             2/2     Running   0          8d
uaa-79d77dbb77-gxss8                     2/2     Running   2          8d
- Let's view the ingress gateway resources in the namespace "istio-system"
$ kubectl get all -n istio-system
NAME                                          READY   STATUS    RESTARTS   AGE
pod/istio-citadel-bc7957fc4-nn8kx             1/1     Running   0          8d
pod/istio-galley-6478b6947d-6dl9h             2/2     Running   0          8d
pod/istio-ingressgateway-fcgvg                2/2     Running   0          8d
pod/istio-ingressgateway-jzkpj                2/2     Running   0          8d
pod/istio-ingressgateway-ptjzz                2/2     Running   0          8d
pod/istio-ingressgateway-rtwk4                2/2     Running   0          8d
pod/istio-ingressgateway-tvz8p                2/2     Running   0          8d
pod/istio-pilot-67955bdf6f-nrhzp              2/2     Running   0          8d
pod/istio-policy-6b786c6f65-m7tj5             2/2     Running   3          8d
pod/istio-sidecar-injector-5669cc5894-tq55v   1/1     Running   0          8d
pod/istio-telemetry-77b745cd6b-wn2dx          2/2     Running   3          8d

NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                                                                      AGE
service/istio-citadel            ClusterIP      10.100.200.216   <none>          8060/TCP,15014/TCP                                                                                                           8d
service/istio-galley             ClusterIP      10.100.200.214   <none>          443/TCP,15014/TCP,9901/TCP,15019/TCP                                                                                         8d
service/istio-ingressgateway     LoadBalancer   10.100.200.105   10.195.93.142   15020:31515/TCP,80:31666/TCP,443:30812/TCP,15029:31219/TCP,15030:31566/TCP,15031:30615/TCP,15032:30206/TCP,15443:32555/TCP   8d
service/istio-pilot              ClusterIP      10.100.200.182   <none>          15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                       8d
service/istio-policy             ClusterIP      10.100.200.98    <none>          9091/TCP,15004/TCP,15014/TCP                                                                                                 8d
service/istio-sidecar-injector   ClusterIP      10.100.200.160   <none>          443/TCP                                                                                                                      8d
service/istio-telemetry          ClusterIP      10.100.200.5     <none>          9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                       8d

NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/istio-ingressgateway   5         5         5       5            5           <none>          8d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-citadel            1/1     1            1           8d
deployment.apps/istio-galley             1/1     1            1           8d
deployment.apps/istio-pilot              1/1     1            1           8d
deployment.apps/istio-policy             1/1     1            1           8d
deployment.apps/istio-sidecar-injector   1/1     1            1           8d
deployment.apps/istio-telemetry          1/1     1            1           8d

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-citadel-bc7957fc4             1         1         1       8d
replicaset.apps/istio-galley-6478b6947d             1         1         1       8d
replicaset.apps/istio-pilot-67955bdf6f              1         1         1       8d
replicaset.apps/istio-policy-6b786c6f65             1         1         1       8d
replicaset.apps/istio-sidecar-injector-5669cc5894   1         1         1       8d
replicaset.apps/istio-telemetry-77b745cd6b          1         1         1       8d

NAME                                                  REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-pilot       Deployment/istio-pilot       0%/80%    1         5         1          8d
horizontalpodautoscaler.autoscaling/istio-policy      Deployment/istio-policy      2%/80%    1         5         1          8d
horizontalpodautoscaler.autoscaling/istio-telemetry   Deployment/istio-telemetry   7%/80%    1         5         1          8d
You can use kapp to verify your install as follows:
$ kapp list
Target cluster 'https://cfk8s.mydomain:8443' (nodes: 46431ba8-2048-41ea-a5c9-84c3a3716f6e, 4+)
Apps in namespace 'default'
Name Label Namespaces Lcs Lca
cf kapp.k14s.io/app=1586305498771951000 (cluster),cf-blobstore,cf-db,cf-system,cf-workloads,cf-workloads-staging,istio-system,kpack,metacontroller true 8d
Lcs: Last Change Successful
Lca: Last Change Age
1 apps
Succeeded
6. Now that Cloud Foundry is running, we need to configure DNS on our IaaS provider so that the wildcard subdomain of the system domain and the wildcard subdomain of all apps domains point to the external IP of the Istio ingress gateway service. You can retrieve the external IP of this service by running the following command.
$ kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Note: The DNS wildcard A record entry would look as follows, ensuring you use the DOMAIN you passed to the generate-values script.
DNS entry should be mapped to: *.{DOMAIN}
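If your DNS zone happens to be hosted in Google Cloud DNS, a sketch of creating the wildcard record could look like the following ("my-zone" is a placeholder for your managed zone name; adjust the domain and TTL to taste):

$ IP=$(kubectl get svc -n istio-system istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
$ gcloud dns record-sets transaction start --zone=my-zone
$ gcloud dns record-sets transaction add "$IP" --name "*.mydomain." \
    --ttl 300 --type A --zone=my-zone
$ gcloud dns record-sets transaction execute --zone=my-zone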
7. Once done, we can use dig to verify we have set up our DNS wildcard entry correctly. We are looking for an ANSWER section which maps to the IP address we retrieved in step 6.
$ dig api.mydomain
; <<>> DiG 9.10.6 <<>> api.mydomain
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58127
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;api.mydomain. IN A
;; ANSWER SECTION:
api.mydomain. 60 IN A 10.0.0.1
;; Query time: 216 msec
;; SERVER: 10.10.6.6#53(10.10.6.7)
;; WHEN: Thu Apr 16 11:46:59 AEST 2020
;; MSG SIZE rcvd: 83
8. So now we are ready to log in using the Cloud Foundry CLI. Make sure you're using the latest version as shown below.
$ cf version
cf version 6.50.0+4f0c3a2ce.2020-03-03
Note: You can install Cloud Foundry CLI as follows
https://github.com/cloudfoundry/cli
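On macOS, for example, one option is Homebrew using the cloudfoundry tap:

$ brew install cloudfoundry/tap/cf-cli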
9. Ok so we are ready to target the API endpoint and log in. As you may have guessed, the API endpoint is "api.{DOMAIN}", so go ahead and target it as shown below. If this fails it means you have to re-visit steps 6 and 7 above.
$ cf api https://api.mydomain --skip-ssl-validation
Setting api endpoint to https://api.mydomain...
OK
api endpoint: https://api.mydomain
api version: 2.148.0
10. Now we need the admin password to log in through UAA. This was generated for us when we ran the generate-values script above to produce our install YML. You can run a simple command as follows against the YML file to get the password.
$ head /tmp/cf-values.yml
#@data/values
---
system_domain: "mydomain"
app_domains:
#@overlay/append
- "mydomain"
cf_admin_password: 5nxm5bnl23jf5f0aivbs
cf_blobstore:
secret_key: 04gihynpr0x4dpptc5a5
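Rather than eyeballing the file, you can also pull the password out directly. A simple sketch with grep/awk, assuming the flat key layout shown above:

$ grep cf_admin_password /tmp/cf-values.yml | awk '{print $2}'
5nxm5bnl23jf5f0aivbs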
11. To log in I use a script as follows, which also creates a space which I then target so I can push applications into it.
cf auth admin 5nxm5bnl23jf5f0aivbs
cf target -o system
cf create-space development
cf target -s development
The output, whether we run this script or just type each command one at a time, will look as follows.
API endpoint: https://api.mydomain
Authenticating...
OK
Use 'cf target' to view or set your target org and space.
api endpoint: https://api.mydomain
api version: 2.148.0
user: admin
org: system
space: development
Creating space development in org system as admin...
OK
Space development already exists
api endpoint: https://api.mydomain
api version: 2.148.0
user: admin
org: system
space: development
12. If we type in "cf apps" we will see we have no applications deployed, which is expected.
$ cf apps
Getting apps in org system / space development as admin...
OK
No apps found
13. So let's deploy our first application. In this example we will use a Node.js Cloud Foundry application which lives at the following GitHub repo. We will deploy it using its source code only. To do that we clone it onto our file system as shown below.
https://github.com/cloudfoundry-samples/cf-sample-app-nodejs
$ git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs
14. Edit cf-sample-app-nodejs/manifest.yml to look as follows by removing the random-route entry.
---
applications:
- name: cf-nodejs
memory: 512M
instances: 1
15. Now to push the Node app we are going to use two terminal windows: one to actually push the app and the other to view the logs.
16. In the first terminal window issue this command, ensuring the app cloned above exists relative to the directory you are in, as shown by the path it references.
$ cf push test-node-app -p ./cf-sample-app-nodejs
17. In the second terminal window issue this command.
$ cf logs test-node-app
18. You should see log output while the application is being pushed.
19. Wait for the "cf push" to complete as shown below
....
Waiting for app to start...
name: test-node-app
requested state: started
isolation segment: placeholder
routes: test-node-app.system.run.haas-210.pez.pivotal.io
last uploaded: Thu 16 Apr 13:04:59 AEST 2020
stack:
buildpacks:
type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-04-16T03:05:13Z 0.0% 0 of 1G 0 of 1G
20. Verify we have deployed our Node app and that it has a fully qualified URL for us to access it, as shown below.
$ cf apps
Getting apps in org system / space development as admin...
OK
name requested state instances memory disk urls
test-node-app started 1/1 1G 1G test-node-app.mydomain
(Screenshot: the test-node-app URL opened in a browser.)
Ok so what actually happened on our k8s cluster to get this application deployed? A series of steps was performed, which is why "cf push" blocks until they have all completed. At a high level these are the 3 main steps:
- CAPI uploads the code and puts it in its internal blobstore
- kpack builds the image and stores in the registry you defined at install time (GCR for us)
- Eirini schedules the pod
GCR "cf-workloads" folder
kpack is where lots of the magic actually occurs. kpack is based on the CNCF sandbox project known as Cloud Native Buildpacks and can create OCI-compliant images from source code and/or artifacts automatically for you. CNB/kpack doesn't just stop there; to find out more I suggest going to the following links.
https://tanzu.vmware.com/content/blog/introducing-kpack-a-kubernetes-native-container-build-service
https://buildpacks.io/
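Out of curiosity, you can watch kpack at work during a push. A sketch, assuming cf-for-k8s stages builds via kpack's Image and Build custom resources in the cf-workloads-staging namespace:

$ kubectl get images -n cf-workloads-staging    # one kpack Image resource per app
$ kubectl get builds -n cf-workloads-staging    # individual builds and their status
$ kubectl describe build <build-name> -n cf-workloads-staging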
Buildpacks provide a higher-level abstraction for building apps compared to Dockerfiles.
Specifically, buildpacks:
- Provide a balance of control that reduces the operational burden on developers and supports enterprise operators who manage apps at scale.
- Ensure that apps meet security and compliance requirements without developer intervention.
- Provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
- Rely on compatibility guarantees to safely apply patches without rebuilding artifacts and without unintentionally changing application behavior.
- What pods are running in cf-workloads?
$ kubectl get pods -n cf-workloads
NAME                                     READY   STATUS    RESTARTS   AGE
test-node-app-development-c346b24349-0   2/2     Running   0          26m
- You will notice we have a pod running with 2 containers, BUT we also have a Service which is used internally to route to one or more pods using ClusterIP, as shown below.
$ kubectl get svc -n cf-workloads
NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
s-1999c874-e300-45e1-b5ff-1a69b7649dd6   ClusterIP   10.100.200.26   <none>        8080/TCP   27m
- Each pod has two containers, named as follows:
opi: This is your actual container instance running your code.
istio-proxy: This, as the name suggests, is a sidecar proxy container which among other things routes requests to the opi container when required.
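You can confirm the container pair straight from the pod spec. For example, using the pod name from the earlier output:

$ kubectl get pod test-node-app-development-c346b24349-0 -n cf-workloads \
    -o jsonpath='{.spec.containers[*].name}'
opi istio-proxy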
21. Ok so let's scale our application to run 2 instances. To do that we simply use the Cloud Foundry CLI as follows.
$ cf scale test-node-app -i 2
Scaling app test-node-app in org system / space development as admin...
OK
$ cf apps
Getting apps in org system / space development as admin...
OK
name requested state instances memory disk urls
test-node-app started 2/2 1G 1G test-node-app.mydomain
And using kubectl, as expected, we end up with another pod created for the second instance.
$ kubectl get pods -n cf-workloads
NAME                                     READY   STATUS    RESTARTS   AGE
test-node-app-development-c346b24349-0   2/2     Running   0          44m
test-node-app-development-c346b24349-1   2/2     Running   0          112s
If we dig a bit deeper we will see that a StatefulSet backs the application deployment, as shown below.
$ kubectl get all -n cf-workloads
NAME                                         READY   STATUS    RESTARTS   AGE
pod/test-node-app-development-c346b24349-0   2/2     Running   0          53m
pod/test-node-app-development-c346b24349-1   2/2     Running   0          10m

NAME                                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/s-1999c874-e300-45e1-b5ff-1a69b7649dd6   ClusterIP   10.100.200.26   <none>        8080/TCP   53m

NAME                                                    READY   AGE
statefulset.apps/test-node-app-development-c346b24349   2/2     53m
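Eirini derives one StatefulSet per CF app, and the linkage back to the app is carried in labels on the StatefulSet. A quick way to peek at them (the label keys are Eirini conventions and may vary by version):

$ kubectl get statefulsets -n cf-workloads --show-labels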
Ok so as you may have guessed, we can deploy many different types of apps because kpack supports multiple languages including Java, Go, Python, etc.
22. Let's deploy a Go application as follows.
$ git clone https://github.com/swisscom/cf-sample-app-go
$ cf push my-go-app -m 64M -p ./cf-sample-app-go
Pushing app my-go-app to org system / space development as admin...
Getting app info...
Creating app with these attributes...
+ name: my-go-app
path: /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/cf-sample-app-go
+ memory: 64M
routes:
+ my-go-app.mydomain
Creating app my-go-app...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...
1.43 KiB / 1.43 KiB [====================================================================================] 100.00% 1s
Waiting for API to complete processing files...
Staging app and tracing logs...
Waiting for app to start...
name: my-go-app
requested state: started
isolation segment: placeholder
routes: my-go-app.mydomain
last uploaded: Thu 16 Apr 14:06:25 AEST 2020
stack:
buildpacks:
type: web
instances: 1/1
memory usage: 64M
state since cpu memory disk details
#0 running 2020-04-16T04:06:43Z 0.0% 0 of 64M 0 of 1G
We can invoke the application using "curl" or something more modern like "HTTPie".
$ http http://my-go-app.mydomain
HTTP/1.1 200 OK
content-length: 59
content-type: text/plain; charset=utf-8
date: Thu, 16 Apr 2020 04:09:46 GMT
server: istio-envoy
x-envoy-upstream-service-time: 6
Congratulations! Welcome to the Swisscom Application Cloud!
If we tailed the logs using "cf logs my-go-app" we would have seen that kpack intelligently determines this is a Go app and uses the Go buildpack to compile the code and produce a container image.
...
2020-04-16T14:05:27.52+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Warning: Image "gcr.io/fe-papicella/cf-workloads/f0072cfa-0e7e-41da-9bf7-d34b2997fb94" not found
2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Compiler Buildpack 0.0.83
2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go 1.13.7: Contributing to layer
2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Downloading from https://buildpacks.cloudfoundry.org/dependencies/go/go-1.13.7-bionic-5bb47c26.tgz
2020-04-16T14:05:35.13+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Verifying checksum
2020-04-16T14:05:35.63+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Expanding to /layers/org.cloudfoundry.go-compiler/go
2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Mod Buildpack 0.0.84
2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Setting environment variables
2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
2020-04-16T14:05:41.68+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT github.com/swisscom/cf-sample-app-go
2020-04-16T14:05:41.69+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
...
Using "cf apps" we now have two applications deployed as shown below.
$ cf apps
Getting apps in org system / space development as admin...
OK
name requested state instances memory disk urls
my-go-app started 1/1 64M 1G my-go-app.mydomain
test-node-app started 2/2 1G 1G test-node-app.mydomain
23. Finally, kpack and the buildpacks ecosystem can also deploy already-built artifacts. The Java buildpack, for example, is capable of not only deploying from source but can also use a fat Spring Boot JAR file, as shown below. In this example we have packaged the artifact we wish to deploy as "PivotalMySQLWeb-1.0.0-SNAPSHOT.jar".
$ cf push piv-mysql-web -p PivotalMySQLWeb-1.0.0-SNAPSHOT.jar -i 1 -m 1g
Pushing app piv-mysql-web to org system / space development as admin...
Getting app info...
Creating app with these attributes...
+ name: piv-mysql-web
path: /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/PivotalMySQLWeb-1.0.0-SNAPSHOT.jar
+ instances: 1
+ memory: 1G
routes:
+ piv-mysql-web.mydomain
Creating app piv-mysql-web...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...
1.03 MiB / 1.03 MiB [====================================================================================] 100.00% 2s
Waiting for API to complete processing files...
Staging app and tracing logs...
Waiting for app to start...
name: piv-mysql-web
requested state: started
isolation segment: placeholder
routes: piv-mysql-web.mydomain
last uploaded: Thu 16 Apr 14:17:22 AEST 2020
stack:
buildpacks:
type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-04-16T04:17:43Z 0.0% 0 of 1G 0 of 1G
Of course the usual commands you expect from the CF CLI still exist. Here are some examples.
$ cf app piv-mysql-web
Showing health and status for app piv-mysql-web in org system / space development as admin...
name: piv-mysql-web
requested state: started
isolation segment: placeholder
routes: piv-mysql-web.mydomain
last uploaded: Thu 16 Apr 14:17:22 AEST 2020
stack:
buildpacks:
type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-04-16T04:17:43Z 0.1% 195.8M of 1G 0 of 1G
$ cf env piv-mysql-web
Getting env variables for app piv-mysql-web in org system / space development as admin...
OK
System-Provided:
{
"VCAP_APPLICATION": {
"application_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
"application_name": "piv-mysql-web",
"application_uris": [
"piv-mysql-web.mydomain"
],
"application_version": "750d9530-e756-4b74-ac86-75b61c60fe2d",
"cf_api": "https://api. mydomain",
"limits": {
"disk": 1024,
"fds": 16384,
"mem": 1024
},
"name": "piv-mysql-web",
"organization_id": "8ae94610-513c-435b-884f-86daf81229c8",
"organization_name": "system",
"process_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
"process_type": "web",
"space_id": "7f3d78ae-34d4-42e4-8ab8-b34e46e8ad1f",
"space_name": "development",
"uris": [
"piv-mysql-web. mydomain"
],
"users": null,
"version": "750d9530-e756-4b74-ac86-75b61c60fe2d"
}
}
No user-defined env variables have been set
No running env variables have been set
No staging env variables have been set
So what about some sort of UI? That brings us to step 24.
24. Let's start by installing Helm (v2, which uses Tiller) using a script as follows.
#!/usr/bin/env bash
echo "install helm"
# installs helm with bash commands for easier command line integration
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# add a service account within a namespace to segregate tiller
kubectl --namespace kube-system create sa tiller
# create a cluster role binding for tiller
kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
echo "initialize helm"
# initialized helm within the tiller service account
helm init --service-account tiller
# updates the repos for Helm repo integration
helm repo update
echo "verify helm"
# verify that helm is installed in the cluster
kubectl get deploy,svc tiller-deploy -n kube-system
Once installed, you can verify Helm is working by using "helm ls", which should come back with no output as you haven't installed anything with Helm yet.
25. Run the following to install Stratos, an open-source web UI for Cloud Foundry.
For more information on Stratos visit this URL - https://github.com/cloudfoundry/stratos
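Note: the "stratos/console" chart reference assumes the Stratos Helm repository has been added first, which per the Stratos docs looks like this:

$ helm repo add stratos https://cloudfoundry.github.io/stratos
$ helm repo update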
$ helm install stratos/console --namespace=console --name my-console --set console.service.type=LoadBalancer
NAME: my-console
LAST DEPLOYED: Thu Apr 16 09:48:19 2020
NAMESPACE: console
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
stratos-db 0/1 1 0 2s
==> v1/Job
NAME COMPLETIONS DURATION AGE
stratos-config-init-1 0/1 2s 2s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
console-mariadb Bound pvc-4ff20e21-1852-445f-854f-894bc42227ce 1Gi RWO fast 2s
my-console-encryption-key-volume Bound pvc-095bb7ed-7be9-4d93-b63a-a8af569361b6 20Mi RWO fast 2s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
stratos-config-init-1-2t47x 0/1 ContainerCreating 0 2s
stratos-config-init-1-2t47x 0/1 ContainerCreating 0 2s
stratos-config-init-1-2t47x 0/1 ContainerCreating 0 2s
==> v1/Role
NAME AGE
config-init-role 2s
==> v1/RoleBinding
NAME AGE
config-init-secrets-role-binding 2s
==> v1/Secret
NAME TYPE DATA AGE
my-console-db-secret Opaque 5 2s
my-console-secret Opaque 5 2s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-console-mariadb ClusterIP 10.100.200.162 <none> 3306/TCP 2s
my-console-ui-ext LoadBalancer 10.100.200.171 10.195.93.143 443:31524/TCP 2s
==> v1/ServiceAccount
NAME SECRETS AGE
config-init 1 2s
==> v1/StatefulSet
NAME READY AGE
stratos 0/1 2s
26. You can verify it installed in a few ways as shown below.
- Use helm with "helm ls"
$ helm ls
NAME         REVISION   UPDATED                    STATUS     CHART           APP VERSION   NAMESPACE
my-console   1          Thu Apr 16 09:48:19 2020   DEPLOYED   console-3.0.0   3.0.0         console
- Verify everything is running using "kubectl get all -n console"
$ kubectl get all -n console
NAME                              READY   STATUS              RESTARTS   AGE
pod/stratos-0                     0/2     ContainerCreating   0          40s
pod/stratos-config-init-1-2t47x   0/1     Completed           0          40s
pod/stratos-db-69ddf7f5f7-gb8xm   0/1     Running             0          40s

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/my-console-mariadb   ClusterIP      10.100.200.162   <none>        3306/TCP        40s
service/my-console-ui-ext    LoadBalancer   10.100.200.171   10.195.1.1    443:31524/TCP   40s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/stratos-db   0/1     1            0           41s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/stratos-db-69ddf7f5f7   1         1         0       41s

NAME                       READY   AGE
statefulset.apps/stratos   0/1     41s

NAME                              COMPLETIONS   DURATION   AGE
job.batch/stratos-config-init-1   1/1           27s        42s
27. Now to open the web UI we just need the external IP from "service/my-console-ui-ext" as per the output above.
Navigate to https://{external-ip}:443
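You can also grab that external IP the same way we did for the ingress gateway earlier:

$ kubectl get svc -n console my-console-ui-ext -o jsonpath='{.status.loadBalancer.ingress[*].ip}'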
28. Create a local user to log in, using a password you set and the username "admin".
Note: The password is just to get into the UI. It can be anything you want it to be.
29. Now we need to click on "Endpoints" and register a Cloud Foundry endpoint using the same login details we used with the Cloud Foundry API earlier at step 11.
Note: The API endpoint is what you used at step 9 and make sure to skip SSL validation
Once connected there are our deployed applications.
Summary
In this post we explored what running Cloud Foundry on Kubernetes looks like. For those familiar with Cloud Foundry or Tanzu Application Service (formerly known as Pivotal Application Service), everything from a development perspective stays the same, using the familiar CF CLI commands. What changes is the footprint: running Cloud Foundry is much less complicated and it runs on Kubernetes itself, meaning even more places to run Cloud Foundry than ever before, plus the ability to leverage community-based Kubernetes projects that further simplify Cloud Foundry.
For more information see the links below.
More Information
GitHub Repo
https://github.com/cloudfoundry/cf-for-k8s
VMware Tanzu Application Service for Kubernetes (Beta)
https://network.pivotal.io/products/tas-for-kubernetes/