Thursday 7 May 2020

Paketo Buildpacks - Cloud Native Buildpacks providing language runtime support for applications on Kubernetes or Cloud Foundry

Paketo Buildpacks are modular buildpacks, written in Go, that provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

Paketo Buildpacks implement the Cloud Native Buildpacks specification, an emerging standard for building app container images. You can use Paketo Buildpacks with tools such as the CNB pack CLI, kpack, Tekton, and Skaffold, in addition to a number of cloud platforms.

Here is how simple they are to use.

Steps

1. To get started you need a few things installed. The most important are the Pack CLI and a running Docker daemon, which together allow you to locally create OCI-compliant images from your source code.

Prerequisites:

    Pack CLI
    Docker

2. Verify pack is installed as follows

$ pack version
0.10.0+git-06d9983.build-259

3. In this example I am going to use a Spring Boot application of mine. The GitHub URL is below, so you can clone the repo if you want to follow along with this demo.

https://github.com/papicella/msa-apifirst

4. Build the OCI-compliant image as follows.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:base
base: Pulling from paketo-buildpacks/builder
Digest: sha256:1bb775a178ed4c54246ab71f323d2a5af0e4b70c83b0dc84f974694b0221d636
Status: Image is up to date for gcr.io/paketo-buildpacks/builder:base
base-cnb: Pulling from paketo-buildpacks/run
Digest: sha256:d70bf0fe11d84277997c4a7da94b2867a90d6c0f55add4e19b7c565d5087206f
Status: Image is up to date for gcr.io/paketo-buildpacks/run:base-cnb
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1
[detector] paketo-buildpacks/executable-jar    1.2.2
[detector] paketo-buildpacks/apache-tomcat     1.1.2
[detector] paketo-buildpacks/dist-zip          1.2.2
[detector] paketo-buildpacks/spring-boot       1.5.2
===> ANALYZING
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:openssl-security-provider" from app image
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:security-providers-configurer" from app image

...

[builder] Paketo Maven Buildpack 1.2.1
[builder]     Set $BP_MAVEN_SETTINGS to configure the contents of a settings.xml file. Default .
[builder]     Set $BP_MAVEN_BUILD_ARGUMENTS to configure the arguments passed to the build system. Default -Dmaven.test.skip=true package.
[builder]     Set $BP_MAVEN_BUILT_MODULE to configure the module to find application artifact in. Default .
[builder]     Set $BP_MAVEN_BUILT_ARTIFACT to configure the built application artifact. Default target/*.[jw]ar.
[builder]     Creating cache directory /home/cnb/.m2
[builder]   Compiled Application: Reusing cached layer
[builder]   Removing source code
[builder]
[builder] Paketo Executable JAR Buildpack 1.2.2
[builder]   Process types:
[builder]     executable-jar: java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     task:           java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     web:            java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]
[builder] Paketo Spring Boot Buildpack 1.5.2
[builder]   Image labels:
[builder]     org.opencontainers.image.title
[builder]     org.opencontainers.image.version
[builder]     org.springframework.boot.spring-configuration-metadata.json
[builder]     org.springframework.boot.version
===> EXPORTING
[exporter] Reusing layer 'launcher'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Reusing layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Reusing 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (726b340b596b):
[exporter]       index.docker.io/library/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:application'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:cache'
[exporter] Reusing cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image msa-apifirst-paketo
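The build log above lists several BP_MAVEN_* variables the Maven buildpack understands. These can be set at build time with pack's --env flag; the sketch below just re-passes the documented default build arguments, guarded so it degrades gracefully if pack is not on the PATH.

```shell
# Pass buildpack configuration at build time via pack's --env flag
# (variable name and default value taken from the Maven buildpack log above).
if command -v pack >/dev/null 2>&1; then
  pack build msa-apifirst-paketo \
    -p ./msa-apifirst \
    --builder gcr.io/paketo-buildpacks/builder:base \
    --env BP_MAVEN_BUILD_ARGUMENTS="-Dmaven.test.skip=true package" \
    || echo "pack build did not complete; command shown for reference"
else
  echo "pack CLI not installed; command shown for reference"
fi
```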

5. Now let's run our application locally as shown below

$ docker run --rm -p 8080:8080 msa-apifirst-paketo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=113348K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx423227K (Head Room: 0%, Loaded Class Count: 17598, Thread Count: 250, Total Memory: 1073741824)
Adding Security Providers to JVM

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2020-05-07 09:48:04.153  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Starting MsaApifirstApplication on 486f85c54667 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2020-05-07 09:48:04.160  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : No active profile set, falling back to default profiles: default

...

2020-05-07 09:48:15.515  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Started MsaApifirstApplication in 12.156 seconds (JVM running for 12.975)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.680  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.682  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.684  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.688  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=4, name=siena, status=inactive)

6. Access the API endpoint using curl or HTTPie as shown below

$ http :8080/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Thu, 07 May 2020 09:49:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://localhost:8080/customers/1"
        },
        "self": {
            "href": "http://localhost:8080/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}
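The same request can be made with plain curl, assuming the container from step 5 is still running; the snippet below checks for a listener on localhost:8080 first so it is safe to run either way.

```shell
# Equivalent request using curl; -s silences progress output and
# --max-time keeps the check quick if nothing is listening.
if curl -s --max-time 2 http://localhost:8080/customers/1; then
  echo
else
  echo "application not running on localhost:8080; command shown for reference"
fi
```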

It also has a Swagger UI endpoint as follows

http://localhost:8080/swagger-ui.html

7. Now, as shown below, you have a locally built OCI-compliant image. Note the "40 years ago" created date: buildpacks pin image creation timestamps to a fixed epoch so that builds are reproducible.

$ docker images | grep msa-apifirst-paketo
msa-apifirst-paketo                       latest              726b340b596b        40 years ago        286MB

8. Now you can push this OCI-compliant image to a container registry; here I am using Docker Hub

$ pack build pasapples/msa-apifirst-paketo:latest --publish --path ./msa-apifirst
cflinuxfs3: Pulling from cloudfoundry/cnb
Digest: sha256:30af1eb2c8a6f38f42d7305acb721493cd58b7f203705dc03a3f4b21f8439ce0
Status: Image is up to date for cloudfoundry/cnb:cflinuxfs3
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1

...

===> EXPORTING
[exporter] Adding layer 'launcher'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Adding layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Adding 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (sha256:097c7f67ac3dfc4e83d53c6b3e61ada8dd3d2c1baab2eb860945eba46814dba5):
[exporter]       index.docker.io/pasapples/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Adding cache layer 'paketo-buildpacks/maven:application'
[exporter] Adding cache layer 'paketo-buildpacks/maven:cache'
[exporter] Adding cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image pasapples/msa-apifirst-paketo:latest

Docker Hub showing the pushed OCI-compliant image


9. If you wanted to deploy your application to Kubernetes you could do so as follows.

$ kubectl create deployment msa-apifirst-paketo --image=pasapples/msa-apifirst-paketo
$ kubectl expose deployment msa-apifirst-paketo --type=LoadBalancer --port=8080
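After running the two commands above you can confirm the rollout succeeded and look up the LoadBalancer address of the service. This is a sketch assuming a reachable cluster; it falls through with a message otherwise.

```shell
# Check the deployment rolled out, then look up the service's external address.
if command -v kubectl >/dev/null 2>&1; then
  kubectl rollout status deployment/msa-apifirst-paketo --timeout=120s \
    || echo "deployment not found or no cluster reachable"
  kubectl get service msa-apifirst-paketo || true
else
  echo "kubectl not installed; commands shown for reference"
fi
```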

10. Finally, you can select from three different builders, listed below. We used the "base" builder in the example above
  • gcr.io/paketo-buildpacks/builder:full-cf
  • gcr.io/paketo-buildpacks/builder:base
  • gcr.io/paketo-buildpacks/builder:tiny
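To avoid passing --builder on every build, pack lets you set a default builder. The command name below matches the pack CLI of this era (0.10); newer releases moved it under `pack config default-builder`, so treat the exact spelling as version-dependent.

```shell
# Set a default builder so subsequent `pack build` calls can omit --builder.
if command -v pack >/dev/null 2>&1; then
  pack set-default-builder gcr.io/paketo-buildpacks/builder:base \
    || echo "command not available in this pack version; shown for reference"
else
  echo "pack CLI not installed; command shown for reference"
fi
```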

More Information

Paketo Buildpacks
https://paketo.io/

Tuesday 5 May 2020

Creating my first Tanzu Kubernetes Grid 1.0 workload cluster on AWS

With Tanzu Kubernetes Grid you can run the same Kubernetes across data center, public cloud, and edge for a consistent, secure experience for all development teams. Here is a step-by-step guide to getting this working on AWS, which is one of the first two supported IaaS platforms, the other being vSphere.

Steps

Before we get started we need to download a few bits and pieces, all described here:

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-set-up-tkg.html

Once you have done that, make sure you have the tkg CLI as follows

$ tkg version
Client:
Version: v1.0.0
Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e

You will also need the following
  • kubectl is installed.
  • Docker is installed and running, if you are installing Tanzu Kubernetes Grid on Linux.
  • Docker Desktop is installed and running, if you are installing Tanzu Kubernetes Grid on Mac OS.
  • System time is synchronized with a Network Time Protocol (NTP) server.
Once that is done, follow this link for the AWS prerequisites and other required downloads

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws.html

1. Start by setting some AWS environment variables for your account. Ensure you select a region supported by TKG; in my case I am using a US region

export AWS_ACCESS_KEY_ID=YYYY
export AWS_SECRET_ACCESS_KEY=ZZZZ
export AWS_REGION=us-east-1
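Before continuing it is worth confirming the exported credentials actually work. A quick sanity check, assuming the AWS CLI is installed:

```shell
# Sanity-check that the exported credentials resolve to a valid identity.
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity || echo "credentials not set or invalid"
else
  echo "AWS CLI not installed; command shown for reference"
fi
```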

2. Run the following clusterawsadm command to create a CloudFormation stack.

$ ./clusterawsadm alpha bootstrap create-stack
Attempting to create CloudFormation stack cluster-api-provider-aws-sigs-k8s-io

Following resources are in the stack:

Resource                  |Type                                                                                |Status
AWS::IAM::Group           |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE
AWS::IAM::InstanceProfile |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/nodes.cluster-api-provider-aws.sigs.k8s.io         |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/controllers.cluster-api-provider-aws.sigs.k8s.io   |CREATE_COMPLETE
AWS::IAM::Role            |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::Role            |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::Role            |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::User            |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE

In the AWS console you should see the stack created as follows


3. Ensure an SSH key pair exists in your region, as shown below

$ aws ec2 describe-key-pairs --key-name us-east-key
{
    "KeyPairs": [
        {
            "KeyFingerprint": "71:44:e3:f9:0e:93:1f:e7:1e:c4:ba:58:e8:65:92:3e:dc:e6:27:42",
            "KeyName": "us-east-key"
        }
    ]
}

4. Set Your AWS Credentials as Environment Variables for Use by Cluster API

$ export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)

$ export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)

$ export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)

$ export AWS_B64ENCODED_CREDENTIALS=$(./clusterawsadm alpha bootstrap encode-aws-credentials)
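The jq field-extraction pattern used above can be sanity-checked locally against a dummy payload (the access key values here are placeholders, not real credentials):

```shell
# Demonstrate the same jq field extraction on a dummy payload.
SAMPLE='{"AccessKey":{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLEKEY"}}'
if command -v jq >/dev/null 2>&1; then
  KEY_ID=$(echo "$SAMPLE" | jq -r .AccessKey.AccessKeyId)
  echo "$KEY_ID"
else
  KEY_ID="AKIAEXAMPLE"   # jq unavailable; fall back so the demo still completes
  echo "jq not installed; command shown for reference"
fi
```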

5. Set the correct AMI for your region.

List here: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/rn/VMware-Tanzu-Kubernetes-Grid-10-Release-Notes.html#amis

$ export AWS_AMI_ID=ami-0cdd7837e1fdd81f8

6. Deploy the Management Cluster to Amazon EC2 with the Installer Interface

$ tkg init --ui

Follow the docs link below to fill in the desired details; most of the defaults should work

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws-ui.html

Once complete:

$ ./tkg init --ui
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T091728980865562.log

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider aws:v0.5.2
Generating cluster configuration...
Setting up bootstrapper...
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Start creating management cluster...
Installing providers on management cluster...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster pasaws-tkg-man-cluster as 'pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster'.

Management cluster created!


You can now create your first workload cluster by running the following:

  tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]


In the AWS console EC2 instances page you will see a few VMs that represent the management cluster, as shown below


7. Show the management cluster as follows

$ tkg get management-cluster
+--------------------------+-----------------------------------------------------+
| MANAGEMENT CLUSTER NAME  | CONTEXT NAME                                        |
+--------------------------+-----------------------------------------------------+
| pasaws-tkg-man-cluster * | pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster |
+--------------------------+-----------------------------------------------------+

9. You can connect to the management cluster as follows to look at what is running

$ kubectl config use-context pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster
Switched to context "pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster".

10. Deploy a Dev cluster with Multiple Worker Nodes as shown below. This should take about 10 minutes or so.

$ tkg create cluster apples-aws-tkg --plan=dev --worker-machine-count 2
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T101702293042678.log
Creating workload cluster 'apples-aws-tkg'...

Context set for workload cluster apples-aws-tkg as apples-aws-tkg-admin@apples-aws-tkg

Waiting for cluster nodes to be available...

Workload cluster 'apples-aws-tkg' created

In the AWS console EC2 instances page you will see a few more VMs that represent our new TKG workload cluster


11. View what workload clusters are under management and have been created

$ tkg get clusters
+----------------+-------------+
| NAME           | STATUS      |
+----------------+-------------+
| apples-aws-tkg | Provisioned |
+----------------+-------------+

12. To connect to the workload cluster we just created, use the following commands

$ tkg get credentials apples-aws-tkg
Credentials of workload cluster apples-aws-tkg have been saved
You can now access the cluster by switching the context to apples-aws-tkg-admin@apples-aws-tkg under /Users/papicella/.kube/config

$ kubectl config use-context apples-aws-tkg-admin@apples-aws-tkg
Switched to context "apples-aws-tkg-admin@apples-aws-tkg".

$ kubectl cluster-info
Kubernetes master is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443
KubeDNS is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The following link will also be helpful
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-tanzu-k8s-clusters-connect.html

13. View your cluster nodes as shown below
  
$ kubectl get nodes
NAME                         STATUS   ROLES    AGE     VERSION
ip-10-0-0-12.ec2.internal    Ready    <none>   6h24m   v1.17.3+vmware.2
ip-10-0-0-143.ec2.internal   Ready    master   6h25m   v1.17.3+vmware.2
ip-10-0-0-63.ec2.internal    Ready    <none>   6h24m   v1.17.3+vmware.2
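As a quick smoke test of the new cluster you can deploy something simple. The nginx deployment below is purely illustrative (the name `nginx-smoke-test` is my own), and the snippet is guarded so it degrades gracefully without a reachable cluster.

```shell
# Deploy a throwaway nginx workload to confirm the cluster schedules pods.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deployment nginx-smoke-test --image=nginx \
    || echo "no cluster reachable; command shown for reference"
  kubectl get pods -l app=nginx-smoke-test || true
else
  echo "kubectl not installed; commands shown for reference"
fi
```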

Now you're ready to deploy workloads into your TKG workload cluster and/or create as many clusters as you need. For more information use the links below.


More Information

VMware Tanzu Kubernetes Grid
https://tanzu.vmware.com/kubernetes-grid

VMware Tanzu Kubernetes Grid 1.0 Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-index.html