
Thursday, 18 June 2020

GitHub Actions to deploy Spring Boot application to Tanzu Application Service for Kubernetes

In this demo I show how to deploy a simple Spring Boot application onto Tanzu Application Service for Kubernetes (TAS4K8s) using GitHub Actions.

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
  
$ kapp list
Target cluster 'https://35.189.13.31' (nodes: gke-tanzu-gke-lab-f67-np-f67b23a0f590-abbca04e-5sqc, 8+)

Apps in namespace 'default'

Name                        Namespaces                                             Lcs    Lca
certmanager-cluster-issuer  (cluster)                                              true   8d
externaldns                 (cluster),external-dns                                 true   8d
harbor-cert                 harbor                                                 true   8d
tas                         (cluster),cf-blobstore,cf-db,cf-system,                false  8d
                            cf-workloads,cf-workloads-staging,istio-system,kpack,
                            metacontroller
tas4k8s-cert                cf-system                                              true   8d

Lcs: Last Change Successful
Lca: Last Change Age

5 apps

Succeeded

The demo exists on GitHub at the following URL. To follow along, simply use your own GitHub repository and make the changes detailed below. The example is for a Spring Boot application, so your workflow YAML will differ for non-Java applications, but there are many starter templates to choose from for other programming languages.

https://github.com/papicella/github-boot-demo



GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

1. Create a folder at the root of your project source code as follows.

$ mkdir -p .github/workflows

2. In ".github/workflows" folder, add a .yml or .yaml file for your workflow. For example, ".github/workflows/maven.yml"

3. Use the "Workflow syntax for GitHub Actions" reference documentation to choose events to trigger an action, add actions, and customize your workflow. In this example the YML "maven.yml" looks as follows.

maven.yml
  
name: Java CI with Maven and CD with CF CLI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11.0.5
      uses: actions/setup-java@v1
      with:
        java-version: 11.0.5
    - name: Build with Maven
      run: mvn -B package --file pom.xml
    - name: push to TAS4K8s
      env:
        CF_USERNAME: ${{ secrets.CF_USERNAME }}
        CF_PASSWORD: ${{ secrets.CF_PASSWORD }}
      run: |
        curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
        ./cf api https://api.tas.lab.pasapples.me --skip-ssl-validation
        ./cf auth $CF_USERNAME $CF_PASSWORD
        ./cf target -o apples-org -s development
        ./cf push -f manifest.yaml

A few things to note about the workflow syntax for the GitHub Action above:

  • We are using a Maven action sample which will fire on a push or pull request on the master branch
  • We are using JDK 11 rather than Java 8
  • Three steps exist here:
    • Setup JDK
    • Maven build/package
    • CF CLI push to TAS4K8s using the JAR artifact built by Maven
  • We download the CF CLI into the Ubuntu runner image
  • We have masked the username and password using Secrets
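Before committing, you can dry-run the deploy step locally with the same commands the workflow runs; a minimal sketch, assuming CF_USERNAME and CF_PASSWORD are exported in your shell:

$ curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
$ ./cf api https://api.tas.lab.pasapples.me --skip-ssl-validation
$ ./cf auth "$CF_USERNAME" "$CF_PASSWORD"
$ ./cf target -o apples-org -s development
$ ./cf push -f manifest.yaml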

4. Next, add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application to TAS4K8s.

---
applications:
  - name: github-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar
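The path must match the artifact the Maven build produces; a quick local sanity check, assuming Maven is installed:

$ mvn -B package --file pom.xml
$ ls -lh target/demo-0.0.1-SNAPSHOT.jar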

5. Now we need to add Secrets to the GitHub repo, which are referenced in our "maven.yml" file. In our case they are as follows.
  • CF_USERNAME 
  • CF_PASSWORD
In your GitHub repository click on the "Settings" tab, then on the left-hand navigation bar click on "Secrets", and define your username and password for your TAS4K8s instance as shown below.
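Once defined, you can verify the secret names were registered (values are never returned) using the GitHub REST API; a sketch, assuming a personal access token with repo scope in $GITHUB_TOKEN:

$ curl -s -H "Authorization: token $GITHUB_TOKEN" \
    https://api.github.com/repos/papicella/github-boot-demo/actions/secrets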



6. At this point that is all we need to test our GitHub Action. Here in IntelliJ IDEA I issue a commit/push to trigger the GitHub Action.
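The equivalent from the command line is just a commit and push to master:

$ git add .github/workflows/maven.yml manifest.yaml
$ git commit -m "add GitHub Actions workflow"
$ git push origin master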



7. If all went well, the "Actions" tab in your GitHub repo will show you the status and logs as follows.






8. Finally our application will be deployed to TAS4K8s as shown below, and we can invoke it using HTTPie or cURL, for example.
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name                       requested state   instances   memory   disk   urls
github-TAS4K8s-boot-demo   started           1/1         1G       1G     github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
my-springboot-app          started           1/1         1G       1G     my-springboot-app.apps.tas.lab.pasapples.me
test-node-app              started           1/1         1G       1G     test-node-app.apps.tas.lab.pasapples.me

$ cf app github-TAS4K8s-boot-demo
Showing health and status for app github-TAS4K8s-boot-demo in org apples-org / space development as pas...

name:                github-TAS4K8s-boot-demo
requested state:     started
isolation segment:   placeholder
routes:              github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
last uploaded:       Thu 18 Jun 12:03:19 AEST 2020
stack:
buildpacks:

type:           web
instances:      1/1
memory usage:   1024M
     state     since                  cpu    memory         disk      details
#0   running   2020-06-18T02:03:32Z   0.2%   136.5M of 1G   0 of 1G

$ http http://github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Thu, 18 Jun 2020 02:07:39 GMT
server: istio-envoy
x-envoy-upstream-service-time: 141

Thu Jun 18 02:07:39 GMT 2020



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitHub Actions
https://github.com/features/actions

GitHub Marketplace - Actions
https://github.com/marketplace?type=actions

Tuesday, 16 June 2020

Deploying a Spring Boot application to Tanzu Application Service for Kubernetes using GitLab

In this demo I show how to deploy a simple Spring Boot application onto Tanzu Application Service for Kubernetes (TAS4K8s) using a GitLab pipeline.

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
  
$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  10d
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

Ensure you have GitLab running. In this example it's installed on a Kubernetes cluster, but it doesn't have to be. All that matters here is that GitLab can access the API endpoint of your TAS4K8s install.
  
$ helm ls -A
NAME   NAMESPACE REVISION UPDATED                               STATUS   CHART        APP VERSION
gitlab gitlab    2        2020-05-15 13:22:15.470219 +1000 AEST deployed gitlab-3.3.4 12.10.5

1. First let's create a basic Spring Boot application with a simple RESTful endpoint as shown below. It's easiest to use the Spring Initializr to create this application. I simply used the Web and Lombok dependencies as shown below.

Note: Make sure you select Java version 11.

Spring Initializr web interface


Using the built-in Spring Initializr in IntelliJ IDEA.


Here is my simple RESTful controller, which simply outputs today's date.
  
package com.example.demo;

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Date;

@RestController
@Slf4j
public class FrontEnd {
    @GetMapping("/")
    public String index () {
        log.info("An INFO Message");
        return new Date().toString();
    }
}

2. Create an empty project in GitLab using the name "gitlab-TAS4K8s-boot-demo"



3. At this point we add our project files from step #1 above into the empty GitLab project repository. We do that as follows.

$ cd "existing project folder from step #1"
$ git init
$ git remote add origin http://gitlab.ci.run.haas-236.pez.pivotal.io/root/gitlab-tas4k8s-boot-demo.git
$ git add .
$ git commit -m "Initial commit"
$ git push -u origin master

Once done we now have our GitLab project repository with the files we created as part of the project setup.


4. It's always worth running the code locally to make sure it's working, which you can do as follows.

RUN:

$ ./mvnw spring-boot:run

CURL:

$ curl http://localhost:8080/
Tue Jun 16 10:46:26 AEST 2020

HTTPie:

$ http :8080/
HTTP/1.1 200
Connection: keep-alive
Content-Length: 29
Content-Type: text/plain;charset=UTF-8
Date: Tue, 16 Jun 2020 00:46:40 GMT
Keep-Alive: timeout=60

Tue Jun 16 10:46:40 AEST 2020

5. Our GitLab project has no pipelines defined, so let's create one in the project root directory using the default pipeline file name ".gitlab-ci.yml".

image: openjdk:11-jdk

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
  - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
  - ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation
  - ./cf auth $CF_USERNAME $CF_PASSWORD
  - ./cf target -o apples-org -s development
  - ./cf push -f manifest.yaml
  only:
  - master
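Before pushing, you can sanity-check the pipeline YAML against GitLab's CI lint API; a sketch, assuming jq is installed and using the GitLab host from this demo:

$ jq --null-input --arg yaml "$(cat .gitlab-ci.yml)" '{content: $yaml}' \
    | curl -s --header "Content-Type: application/json" --data @- \
        http://gitlab.ci.run.haas-236.pez.pivotal.io/api/v4/ci/lint
# add --header "PRIVATE-TOKEN: <token>" if your GitLab instance requires authentication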


Note: We have not defined any tests in our pipeline; we should, but we haven't written any for this example.

6. For this pipeline to work we will need to do the following

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: gitlab-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

- Alter the API endpoint to match your TAS4K8s endpoint

- ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation

- Alter the target to use your ORG and SPACE within TAS4K8s.

- ./cf target -o apples-org -s development

This command shows what your CF CLI is currently targeting, so you can be sure you edit the pipeline with the correct details.
  
$ cf target
api endpoint:   https://api.system.run.haas-236.pez.pivotal.io
api version:    2.150.0
user:           pas
org:            apples-org
space:          development

7. For the ".gitlab-ci.yml" to work we need to define two ENV variables for our username and password. Those two are as follows which is our login credentials to TAS4K8s

  • CF_USERNAME 
  • CF_PASSWORD

To do that, navigate to "Project Settings -> CI/CD -> Variables" and fill in the appropriate details as shown below.
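Tip: rather than using a personal login, you could create a dedicated CI user scoped to the target org and space; a hedged sketch, assuming admin access (the user name and password here are examples):

$ cf create-user gitlab-ci 's0me-str0ng-passw0rd'
$ cf set-space-role gitlab-ci apples-org development SpaceDeveloper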



8. Now let's add the two new files using git, add a commit message, and push the changes.

$ git add .gitlab-ci.yml
$ git add manifest.yaml
$ git commit -m "add pipeline configuration"
$ git push -u origin master

9. Navigate in the GitLab UI to "CI/CD -> Pipelines" and we should see our pipeline starting to run.








10. If everything went well!!!



11. Finally our application will be deployed to TAS4K8s as shown below.
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name                       requested state   instances   memory   disk   urls
gitlab-TAS4K8s-boot-demo   started           1/1         1G       1G     gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
gitlab-tas4k8s-demo        started           1/1         1G       1G     gitlab-tas4k8s-demo.apps.system.run.haas-236.pez.pivotal.io
test-node-app              started           1/1         1G       1G     test-node-app.apps.system.run.haas-236.pez.pivotal.io

$ cf app gitlab-TAS4K8s-boot-demo
Showing health and status for app gitlab-TAS4K8s-boot-demo in org apples-org / space development as pas...

name:                gitlab-TAS4K8s-boot-demo
requested state:     started
isolation segment:   placeholder
routes:              gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
last uploaded:       Tue 16 Jun 11:29:03 AEST 2020
stack:
buildpacks:

type:           web
instances:      1/1
memory usage:   1024M
     state     since                  cpu    memory         disk      details
#0   running   2020-06-16T01:29:16Z   0.1%   118.2M of 1G   0 of 1G

12. Access it as follows.

$ http http://gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Tue, 16 Jun 2020 01:35:28 GMT
server: istio-envoy
x-envoy-upstream-service-time: 198

Tue Jun 16 01:35:28 GMT 2020

Of course, if you wanted to create an API-style service using OpenAPI, you could use the source code at the repo below rather than the simple demo shown here.

https://github.com/papicella/spring-book-service



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitLab
https://about.gitlab.com/

Friday, 5 June 2020

Installing a UI for Tanzu Application Service for Kubernetes

Having installed Tanzu Application Service for Kubernetes a few times, a UI is something I must have. In this post I show how to get Stratos deployed and running against Tanzu Application Service for Kubernetes (TAS4K8s) beta 0.2.0.

Steps

Note: It's assumed you have TAS4K8s deployed and running, as per the output of "kapp" below.

$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  2h
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

1. First let's create a namespace to install Stratos into.

$ kubectl create namespace console
namespace/console created

2. Using Helm 3, install Stratos as shown below.

$ helm repo add stratos https://cloudfoundry.github.io/stratos
$ helm install my-console --namespace=console stratos/console --set console.service.type=LoadBalancer
NAME: my-console
LAST DEPLOYED: Fri Jun  5 13:18:22 2020
NAMESPACE: console
STATUS: deployed
REVISION: 1
TEST SUITE: None

3. You can verify it installed correctly in a few ways, as shown below.

- Check using "helm ls -A"

$ helm ls -A
NAME       NAMESPACE REVISION UPDATED                               STATUS   CHART         APP VERSION
my-console console   1        2020-06-05 13:18:22.785689 +1000 AEST deployed console-3.2.1 3.2.1

- Check everything in the namespace "console" is up and running

$ kubectl get all -n console
NAME                              READY   STATUS      RESTARTS   AGE
pod/stratos-0                     2/2     Running     0          34m
pod/stratos-config-init-1-mxqbw   0/1     Completed   0          34m
pod/stratos-db-7fc9b7b6b7-sp4lf   1/1     Running     0          34m

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)         AGE
service/my-console-mariadb   ClusterIP      10.100.200.65    <none>          3306/TCP        34m
service/my-console-ui-ext    LoadBalancer   10.100.200.216   10.195.75.164   443:32286/TCP   34m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/stratos-db   1/1     1            1           34m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/stratos-db-7fc9b7b6b7   1         1         1       34m

NAME                       READY   AGE
statefulset.apps/stratos   1/1     34m

NAME                              COMPLETIONS   DURATION   AGE
job.batch/stratos-config-init-1   1/1           28s        34m
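You can also wait for the pods to become ready before opening the UI; a sketch using the resource names from the listing above (assuming the chart's default rolling update strategy):

$ kubectl -n console rollout status deployment/stratos-db
$ kubectl -n console rollout status statefulset/stratos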
4. To invoke the UI run a script as follows.

Script:

export IP=`kubectl -n console get service my-console-ui-ext -ojsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo "Stratos URL: https://$IP:443"
echo ""

Output:

$ ./get-stratos-url.sh

Stratos URL: https://10.195.75.164:443

5. Invoking the URL above will take you to a screen as follows, where you select the "Local Admin" account.



6. Set a password and click the "Finish" button.


7. At this point we need the API endpoint for our TAS4K8s install. The easiest way to get that is to run the following command while logged in with the CF CLI.

$ cf api
api endpoint:   https://api.system.run.haas-236.pez.pivotal.io
api version:    2.150.0

8. Click on the "Register an Endpoint" + button as shown below


9. Select "Cloud Foundry" as the type you wish to register.

10. Enter details as shown below and click on "Register" button.


11. At this point you should connect to Cloud Foundry using your admin credentials for the TAS4K8s instance as shown below.


12. Once connected you're good to go and can start deploying some applications.




Monday, 1 June 2020

Targeting specific namespaces with kubectl

A note for myself, given the kubectl CLI does not accept multiple namespaces in a single call.

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get pod;'

OR use "get all" if you want to see all resources

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get all;'
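The trick here is shell brace expansion: the quoted fragments plus the {...} list expand into one complete kubectl command per namespace before eval runs them. Any list of namespaces works the same way, for example:

$ eval 'kubectl --namespace='{default,kube-system}' get pod;'
# expands to: kubectl --namespace=default get pod; kubectl --namespace=kube-system get pod;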
  
$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get pod;'
NAME                                         READY   STATUS      RESTARTS   AGE
ccdb-migrate-995n7                           0/2     Completed   1          3d23h
cf-api-clock-7595b76c78-94trp                2/2     Running     2          3d23h
cf-api-deployment-updater-758f646489-k5498   2/2     Running     2          3d23h
cf-api-kpack-watcher-6fb8f7b4bf-xh2mg        2/2     Running     0          3d23h
cf-api-server-5dc58fb9d-8d2nc                5/5     Running     5          3d23h
cf-api-server-5dc58fb9d-ghwkn                5/5     Running     4          3d23h
cf-api-worker-7fffdbcdc7-fqpnc               2/2     Running     2          3d23h
cfroutesync-75dff99567-kc8qt                 2/2     Running     0          3d23h
eirini-5cddc6d89b-57dgc                      2/2     Running     0          3d23h
fluentd-4fsp8                                2/2     Running     2          3d23h
fluentd-5vfnv                                2/2     Running     1          3d23h
fluentd-gq2kr                                2/2     Running     2          3d23h
fluentd-hnjgm                                2/2     Running     2          3d23h
fluentd-j6d5n                                2/2     Running     1          3d23h
fluentd-wbzcj                                2/2     Running     2          3d23h
log-cache-7fd48cd767-fj9k8                   5/5     Running     5          3d23h
metric-proxy-695797b958-j7tns                2/2     Running     0          3d23h
uaa-67bd4bfb7d-v72v6                         2/2     Running     2          3d23h
NAME                               READY   STATUS    RESTARTS   AGE
kpack-controller-595b8c5fd-x4kgf   1/1     Running   0          3d23h
kpack-webhook-6fdffdf676-g8v9q     1/1     Running   0          3d23h
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-589c85d7dc-677fz            1/1     Running   0          3d23h
istio-galley-6c7b88477-fk9km              2/2     Running   0          3d23h
istio-ingressgateway-25g8s                2/2     Running   0          3d23h
istio-ingressgateway-49txj                2/2     Running   0          3d23h
istio-ingressgateway-9qsqj                2/2     Running   0          3d23h
istio-ingressgateway-dlbcr                2/2     Running   0          3d23h
istio-ingressgateway-jdn42                2/2     Running   0          3d23h
istio-ingressgateway-jnx2m                2/2     Running   0          3d23h
istio-pilot-767fc6d466-8bzt8              2/2     Running   0          3d23h
istio-policy-66f4f99b44-qhw92             2/2     Running   1          3d23h
istio-sidecar-injector-6985796b87-2hvxw   1/1     Running   0          3d23h
istio-telemetry-d6599c76f-ps6xd           2/2     Running   1          3d23h

Thursday, 7 May 2020

Paketo Buildpacks - Cloud Native Buildpacks providing language runtime support for applications on Kubernetes or Cloud Foundry

Paketo Buildpacks are modular Buildpacks, written in Go. Paketo Buildpacks provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

Paketo Buildpacks implement the Cloud Native Buildpacks specification, an emerging standard for building app container images. You can use Paketo Buildpacks with tools such as the CNB pack CLI, kpack, Tekton, and Skaffold, in addition to a number of cloud platforms.

Here's how simple they are to use.

Steps

1. First, to get started you need a few things installed; the most important are the pack CLI and Docker up and running, to allow you to create OCI-compliant images locally from your source code.

Prerequisites:

    Pack CLI
    Docker

2. Verify pack is installed as follows

$ pack version
0.10.0+git-06d9983.build-259

3. In this example below I am going to use the source code of a Spring Boot application of mine. The GitHub URL for that is as follows, so you can clone it if you want to follow along with this demo.

https://github.com/papicella/msa-apifirst
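To follow along, clone it into your working directory first:

$ git clone https://github.com/papicella/msa-apifirst.git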

4. Build my OCI compliant image as follows.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:base
base: Pulling from paketo-buildpacks/builder
Digest: sha256:1bb775a178ed4c54246ab71f323d2a5af0e4b70c83b0dc84f974694b0221d636
Status: Image is up to date for gcr.io/paketo-buildpacks/builder:base
base-cnb: Pulling from paketo-buildpacks/run
Digest: sha256:d70bf0fe11d84277997c4a7da94b2867a90d6c0f55add4e19b7c565d5087206f
Status: Image is up to date for gcr.io/paketo-buildpacks/run:base-cnb
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1
[detector] paketo-buildpacks/executable-jar    1.2.2
[detector] paketo-buildpacks/apache-tomcat     1.1.2
[detector] paketo-buildpacks/dist-zip          1.2.2
[detector] paketo-buildpacks/spring-boot       1.5.2
===> ANALYZING
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:openssl-security-provider" from app image
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:security-providers-configurer" from app image

...

[builder] Paketo Maven Buildpack 1.2.1
[builder]     Set $BP_MAVEN_SETTINGS to configure the contents of a settings.xml file. Default .
[builder]     Set $BP_MAVEN_BUILD_ARGUMENTS to configure the arguments passed to the build system. Default -Dmaven.test.skip=true package.
[builder]     Set $BP_MAVEN_BUILT_MODULE to configure the module to find application artifact in. Default .
[builder]     Set $BP_MAVEN_BUILT_ARTIFACT to configure the built application artifact. Default target/*.[jw]ar.
[builder]     Creating cache directory /home/cnb/.m2
[builder]   Compiled Application: Reusing cached layer
[builder]   Removing source code
[builder]
[builder] Paketo Executable JAR Buildpack 1.2.2
[builder]   Process types:
[builder]     executable-jar: java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     task:           java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     web:            java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]
[builder] Paketo Spring Boot Buildpack 1.5.2
[builder]   Image labels:
[builder]     org.opencontainers.image.title
[builder]     org.opencontainers.image.version
[builder]     org.springframework.boot.spring-configuration-metadata.json
[builder]     org.springframework.boot.version
===> EXPORTING
[exporter] Reusing layer 'launcher'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Reusing layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Reusing 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (726b340b596b):
[exporter]       index.docker.io/library/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:application'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:cache'
[exporter] Reusing cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image msa-apifirst-paketo

5. Now let's run our application locally as shown below.

$ docker run --rm -p 8080:8080 msa-apifirst-paketo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=113348K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx423227K (Head Room: 0%, Loaded Class Count: 17598, Thread Count: 250, Total Memory: 1073741824)
Adding Security Providers to JVM

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2020-05-07 09:48:04.153  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Starting MsaApifirstApplication on 486f85c54667 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2020-05-07 09:48:04.160  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : No active profile set, falling back to default profiles: default

...

2020-05-07 09:48:15.515  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Started MsaApifirstApplication in 12.156 seconds (JVM running for 12.975)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.680  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.682  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.684  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.688  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=4, name=siena, status=inactive)

6. Access the API endpoint using curl or HTTPie as shown below

$ http :8080/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Thu, 07 May 2020 09:49:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://localhost:8080/customers/1"
        },
        "self": {
            "href": "http://localhost:8080/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

It also has a Swagger UI endpoint as follows.

http://localhost:8080/swagger-ui.html

7. Now you will see, as per below, that you have a locally built OCI-compliant image. Don't be alarmed by the creation date: Cloud Native Buildpacks pin the image creation time to a fixed date (hence "40 years ago") so that builds are reproducible.

$ docker images | grep msa-apifirst-paketo
msa-apifirst-paketo                       latest              726b340b596b        40 years ago        286MB

8. Now you can push this OCI-compliant image to a container registry; here I am using Docker Hub.

$ pack build pasapples/msa-apifirst-paketo:latest --publish --path ./msa-apifirst
cflinuxfs3: Pulling from cloudfoundry/cnb
Digest: sha256:30af1eb2c8a6f38f42d7305acb721493cd58b7f203705dc03a3f4b21f8439ce0
Status: Image is up to date for cloudfoundry/cnb:cflinuxfs3
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1

...

===> EXPORTING
[exporter] Adding layer 'launcher'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Adding layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Adding 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (sha256:097c7f67ac3dfc4e83d53c6b3e61ada8dd3d2c1baab2eb860945eba46814dba5):
[exporter]       index.docker.io/pasapples/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Adding cache layer 'paketo-buildpacks/maven:application'
[exporter] Adding cache layer 'paketo-buildpacks/maven:cache'
[exporter] Adding cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image pasapples/msa-apifirst-paketo:latest

Docker Hub showing the pushed OCI-compliant image


9. If you wanted to deploy your application to Kubernetes you could do that as follows.

$ kubectl create deployment msa-apifirst-paketo --image=pasapples/msa-apifirst-paketo
$ kubectl expose deployment msa-apifirst-paketo --type=LoadBalancer --port=8080
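Once the load balancer is provisioned, you can grab the external IP and hit the same endpoint as before; a sketch reusing the jsonpath pattern from the Stratos post above:

$ export IP=$(kubectl get service msa-apifirst-paketo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ http http://$IP:8080/customers/1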

10. Finally, you can select from three different builders, as per below. We used the "base" builder in our example above.
  • gcr.io/paketo-buildpacks/builder:full-cf
  • gcr.io/paketo-buildpacks/builder:base
  • gcr.io/paketo-buildpacks/builder:tiny
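To see exactly which buildpacks and stack a given builder carries, pack can inspect it (assuming your pack version includes this command):

$ pack inspect-builder gcr.io/paketo-buildpacks/builder:base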

More Information

Paketo Buildpacks
https://paketo.io/

Tuesday, 5 May 2020

Creating my first Tanzu Kubernetes Grid 1.0 workload cluster on AWS

With Tanzu Kubernetes Grid you can run the same K8s across data center, public cloud and edge for a consistent, secure experience for all development teams. Here is a step-by-step guide to getting this working on AWS, one of the first two supported IaaS platforms, the other being vSphere.

Steps

Before we get started we need to download a few bits and pieces, all described here.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-set-up-tkg.html

Once you've done that, make sure you have the tkg CLI as follows.

$ tkg version
Client:
Version: v1.0.0
Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e

You will also need the following:
  • kubectl is installed.
  • Docker is installed and running, if you are installing Tanzu Kubernetes Grid on Linux.
  • Docker Desktop is installed and running, if you are installing Tanzu Kubernetes Grid on Mac OS.
  • System time is synchronized with a Network Time Protocol (NTP) server.

Once that is done, follow this link for the AWS prerequisites and other required downloads.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws.html

1. Start by setting some AWS environment variables for your account. Ensure you select a region supported by TKG; in my case I am using a US region.

export AWS_ACCESS_KEY_ID=YYYY
export AWS_SECRET_ACCESS_KEY=ZZZZ
export AWS_REGION=us-east-1

2. Run the following clusterawsadm command to create a CloudFormation stack.

$ ./clusterawsadm alpha bootstrap create-stack
Attempting to create CloudFormation stack cluster-api-provider-aws-sigs-k8s-io

Following resources are in the stack:

Resource                  |Type                                                                                |Status
AWS::IAM::Group           |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE
AWS::IAM::InstanceProfile |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/nodes.cluster-api-provider-aws.sigs.k8s.io         |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/controllers.cluster-api-provider-aws.sigs.k8s.io   |CREATE_COMPLETE
AWS::IAM::Role            |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::Role            |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::Role            |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::User            |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE

In the AWS console you should see the stack created as follows.


3. Ensure an SSH key pair exists in your region, as shown below.

$ aws ec2 describe-key-pairs --key-name us-east-key
{
    "KeyPairs": [
        {
            "KeyFingerprint": "71:44:e3:f9:0e:93:1f:e7:1e:c4:ba:58:e8:65:92:3e:dc:e6:27:42",
            "KeyName": "us-east-key"
        }
    ]
}
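If the key pair doesn't exist yet in that region, you can create one first; the key name here matches the one used above:

$ aws ec2 create-key-pair --key-name us-east-key --query 'KeyMaterial' --output text > us-east-key.pem
$ chmod 400 us-east-key.pem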

4. Set Your AWS Credentials as Environment Variables for Use by Cluster API

$ export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)

$ export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)

$ export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)

$ export AWS_B64ENCODED_CREDENTIALS=$(./clusterawsadm alpha bootstrap encode-aws-credentials)

5. Set the correct AMI for your region.

List here: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/rn/VMware-Tanzu-Kubernetes-Grid-10-Release-Notes.html#amis

$ export AWS_AMI_ID=ami-0cdd7837e1fdd81f8

6. Deploy the Management Cluster to Amazon EC2 with the Installer Interface

$ tkg init --ui

Follow the docs link below to fill in the desired details; most of the defaults should work.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws-ui.html

Once complete:

$ ./tkg init --ui
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T091728980865562.log

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider aws:v0.5.2
Generating cluster configuration...
Setting up bootstrapper...
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Start creating management cluster...
Installing providers on management cluster...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster pasaws-tkg-man-cluster as 'pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster'.

Management cluster created!


You can now create your first workload cluster by running the following:

  tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]


On the AWS console EC2 instances page you will see a few VMs that represent the management cluster, as shown below.


7. Show the management cluster as follows

$ tkg get management-cluster
+--------------------------+-----------------------------------------------------+
| MANAGEMENT CLUSTER NAME  | CONTEXT NAME                                        |
+--------------------------+-----------------------------------------------------+
| pasaws-tkg-man-cluster * | pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster |
+--------------------------+-----------------------------------------------------+

8. You can connect to the management cluster as follows to look at what is running.

$ kubectl config use-context pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster
Switched to context "pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster".

9. Deploy a dev cluster with multiple worker nodes as shown below. This should take about 10 minutes or so.

$ tkg create cluster apples-aws-tkg --plan=dev --worker-machine-count 2
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T101702293042678.log
Creating workload cluster 'apples-aws-tkg'...

Context set for workload cluster apples-aws-tkg as apples-aws-tkg-admin@apples-aws-tkg

Waiting for cluster nodes to be available...

Workload cluster 'apples-aws-tkg' created

On the AWS console EC2 instances page you will see a few more VMs that represent our new TKG workload cluster.


10. View what workload clusters are under management and have been created.

$ tkg get clusters
+----------------+-------------+
| NAME           | STATUS      |
+----------------+-------------+
| apples-aws-tkg | Provisioned |
+----------------+-------------+

11. To connect to the workload cluster we just created, use the following commands.

$ tkg get credentials apples-aws-tkg
Credentials of workload cluster apples-aws-tkg have been saved
You can now access the cluster by switching the context to apples-aws-tkg-admin@apples-aws-tkg under /Users/papicella/.kube/config

$ kubectl config use-context apples-aws-tkg-admin@apples-aws-tkg
Switched to context "apples-aws-tkg-admin@apples-aws-tkg".

$ kubectl cluster-info
Kubernetes master is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443
KubeDNS is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The following link will also be helpful
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-tanzu-k8s-clusters-connect.html

12. View your cluster nodes as shown below.
  
$ kubectl get nodes
NAME                         STATUS   ROLES    AGE     VERSION
ip-10-0-0-12.ec2.internal    Ready    <none>   6h24m   v1.17.3+vmware.2
ip-10-0-0-143.ec2.internal   Ready    master   6h25m   v1.17.3+vmware.2
ip-10-0-0-63.ec2.internal    Ready    <none>   6h24m   v1.17.3+vmware.2
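As a quick smoke test of the new cluster you could deploy something simple; the image and names here are examples, not part of the original demo:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --type=LoadBalancer --port=80
$ kubectl get svc nginx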

Now you're ready to deploy workloads into your TKG workload cluster and/or create as many clusters as you need. For more information use the links below.


More Information

VMware Tanzu Kubernetes Grid
https://tanzu.vmware.com/kubernetes-grid

VMware Tanzu Kubernetes Grid 1.0 Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-index.html


Tuesday, 28 April 2020

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes

In the vSphere environment, the persistent volume objects are backed by virtual disks that reside on datastores. Datastores are represented by storage policies. After the vSphere administrator creates a storage policy, for example gold, and assigns it to a namespace in a Supervisor Cluster, the storage policy appears as a matching Kubernetes storage class in the Supervisor Namespace and any available Tanzu Kubernetes clusters.

In the example below we will show how to get a single-instance stateful MySQL application pod running on vSphere 7 with Kubernetes. For an introduction to vSphere 7 with Kubernetes, see the blog link below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

Steps 

1. If you followed the blog above you will have a Namespace as shown in the image below. The namespace we are using is called "ns1".



2. Click on "ns1" and ensure you have added storage using the "Storage" card



3. Now let's connect to our supervisor cluster and switch to the Namespace "ns1"

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS \
  --vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context `

4. At this point we need to switch to the Namespace we created at step 2 which is "ns1".

$ kubectl config use-context ns1
Switched to context "ns1".

5. Use one of the following commands to verify that the storage class is the one which we added to the Namespace as per #2 above, in this case "pacific-gold-storage-policy".
  
$ kubectl get storageclass
NAME                          PROVISIONER              AGE
pacific-gold-storage-policy   csi.vsphere.vmware.com   5d20h

$ kubectl describe namespace ns1
Name:         ns1
Labels:       vSphereClusterID=domain-c8
Annotations:  ncp/extpoolid: domain-c8:1d3e6bfb-af68-4494-a9bf-c8560a7a6aef-ippool-10-193-191-129-10-193-191-190
              ncp/snat_ip: 10.193.191.141
              ncp/subnet-0: 10.244.0.240/28
              ncp/subnet-1: 10.244.1.16/28
              vmware-system-resource-pool: resgroup-67
              vmware-system-vm-folder: group-v68
Status:       Active

Resource Quotas
 Name:                                                                     ns1-storagequota
 Resource                                                                  Used  Hard
 --------                                                                  ---   ---
 pacific-gold-storage-policy.storageclass.storage.k8s.io/requests.storage  20Gi  9223372036854775807

No resource limits.

As a DevOps engineer, you can use the storage class in your persistent volume claim specifications. You can then deploy an application that uses storage from the persistent volume claim.

6. At this point we can create a Persistent Volume Claim using YAML as follows. In the example below we reference the storage class name "pacific-gold-storage-policy".

Note: We are using a Supervisor Cluster Namespace here for our Stateful MySQL application but the storage class name will also appear in any Tanzu Kubernetes clusters you have created.

Example:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created

7. Let's view the PVC we just created
  
$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
mysql-pv-claim   Bound    pvc-a60f2787-ccf4-4142-8bf5-14082ae33403   20Gi       RWO            pacific-gold-storage-policy   39s

8. Now let's create a Deployment that mounts the PVC we created above, using the claim name "mysql-pv-claim".

Example:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

$ kubectl apply -f mysql-deployment.yaml
service/mysql created
deployment.apps/mysql created

9. Let's verify we have a running Deployment with a MySQL pod, as shown below.
  
$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/mysql-c85f7f79c-gskkr   1/1     Running   0          78s
pod/nginx                   1/1     Running   0          3d21h

NAME                                          TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
service/mysql                                 ClusterIP      None          <none>          3306/TCP         79s
service/tkg-cluster-1-60657ac113b7b5a0ebaab   LoadBalancer   10.96.0.253   10.193.191.68   80:32078/TCP     5d19h
service/tkg-cluster-1-control-plane-service   LoadBalancer   10.96.0.222   10.193.191.66   6443:30659/TCP   5d19h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           79s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-c85f7f79c   1         1         1       79s

10. If we return to the vSphere client we will see our MySQL stateful deployment, as shown below.


11. We can also view the PVC we have created in the vSphere client.



12. Finally, let's connect to the MySQL database, which is done as follows.

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.47 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| performance_schema  |
+---------------------+
4 rows in set (0.02 sec)

mysql>
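To convince yourself the data really lives on the persistent volume, a hedged sketch: create a database, delete the pod, and check that the replacement pod (which re-attaches the same PVC) still sees it. The database name "demo" is just an example.

$ POD=$(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec $POD -- mysql -ppassword -e "CREATE DATABASE demo;"
$ kubectl delete pod $POD   # the Deployment recreates it with the same PVC
$ POD=$(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec $POD -- mysql -ppassword -e "SHOW DATABASES;"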


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html

Display Storage Classes in a Supervisor Namespace or Tanzu Kubernetes Cluster
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52.html#GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52