Tuesday 24 September 2019

Basic VMware Harbor Registry usage for Pivotal Container Service (PKS)

VMware Harbor Registry is an enterprise-class registry server that stores and distributes container images. Harbor allows you to store and manage images for use with Enterprise Pivotal Container Service (Enterprise PKS).

In this simple example we show the bare minimum needed to get an image stored in Harbor deployed onto your PKS cluster. First, here is what we need in place to run this basic demo.

Required Steps

1. PKS installed with Harbor Registry tile added as shown below


2. VMware Harbor Registry integrated with Enterprise PKS as per the link below. The most important step is "Import the CA Certificate Used to Sign the Harbor Certificate and Key to BOSH". You must complete that step prior to creating a PKS cluster

https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html

3. A PKS cluster created. You must have completed step #2 before you create the cluster

https://docs.pivotal.io/pks/1-4/create-cluster.html

$ pks cluster oranges

Name:                     oranges
Plan Name:                small
UUID:                     21998d0d-b9f8-437c-850c-6ee0ed33d781
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   oranges.run.yyyy.bbbb.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             4
Kubernetes Master IP(s):  1.1.1.1
Network Profile Name:
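If you haven't already, fetch the cluster credentials so kubectl targets this cluster. A minimal sketch using the PKS CLI (your cluster name will differ):

$ pks get-credentials oranges
$ kubectl config current-context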

4. Docker Desktop installed on your local machine



Steps

1. First, let's log into Harbor and create a new project. Make sure you record the username and password you have assigned to the project. In this example I make the project public.




Details

  • Project Name: cto_apj
  • Username: pas
  • Password: ****

2. Next, in order to connect to our registry from our local laptop, we need to trust the registry's certificate.

The VMware Harbor registry isn't running on a public domain and it is using a self-signed certificate, so we need our macOS client (I am using Docker for Mac) to trust that self-signed certificate. This link shows how to add a self-signed registry certificate on Linux and Mac clients:

https://blog.container-solutions.com/adding-self-signed-registry-certs-docker-mac

You can download the self-signed cert from Pivotal Ops Manager as shown below


With all that in place, the following command is all I need to run

$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
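If you want to sanity-check the certificate before trusting it, openssl can print its subject, issuer and expiry. Note that Docker for Mac generally needs a restart before the daemon picks up a newly trusted certificate:

$ openssl x509 -in ca.crt -noout -subject -issuer -enddate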

3. Now let's log into the registry using a command as follows

$ docker login harbor.haas-bbb.yyyy.pivotal.io -u pas
Password:
Login Succeeded

4. Now I have an image sitting on Docker Hub, so let's tag it and push it to our VMware Harbor registry as shown below

$ docker tag pasapples/customer-api:latest harbor.haas-bbb.yyyy.pivotal.io/cto_apj/customer-api:latest
$ docker push harbor.haas-bbb.yyyy.pivotal.io/cto_apj/customer-api:latest


5. Now let's create a new secret for accessing the container registry

$ kubectl create secret docker-registry regcred --docker-server=harbor.haas-bbb.yyyy.pivotal.io --docker-username=pas --docker-password=**** --docker-email=papicella@pivotal.io
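To confirm the secret landed correctly, you can dump it back out (the .dockerconfigjson payload is base64 encoded):

$ kubectl get secret regcred --output=yaml

The project here is public, so pulls would work without credentials, but the deployment below references this secret via imagePullSecrets so the same YAML also works against a private project.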

6. Now let's deploy this image to our PKS cluster using a deployment YAML file as follows

customer-api.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: customer-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: customer-api
    spec:
      imagePullSecrets:
        - name: regcred # the secret created in step 5; required if the Harbor project is private
      containers:
        - name: customer-api
          image: harbor.haas-bbb.yyyy.pivotal.io/cto_apj/customer-api:latest
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: customer-api-service
  labels:
    name: customer-api-service
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: customer-api
  type: LoadBalancer

7. Deploy as follows

$ kubectl create -f customer-api.yaml
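If you prefer to block until the rollout finishes rather than polling, standard kubectl can do that for you:

$ kubectl rollout status deployment/customer-api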

8. You should see the POD and SERVICE running as follows

$ kubectl get pods | grep customer-api
customer-api-7b8fcd5778-czh46                    1/1     Running   0          58s

$ kubectl get svc | grep customer-api
customer-api-service            LoadBalancer   10.100.2.2    10.195.1.1.80.5   80:31156/TCP 
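Once the EXTERNAL-IP is assigned, you can hit the API directly from your laptop. The path below is purely an assumption for illustration; substitute whichever endpoints your customer-api actually exposes:

$ curl http://<EXTERNAL-IP>/customers     # hypothetical endpoint, adjust for your app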


More Information

PKS Release Notes 1.4
https://docs.pivotal.io/pks/1-4/release-notes.html

VMware Harbor Registry
https://docs.vmware.com/en/VMware-Enterprise-PKS/1.4/vmware-harbor-registry/GUID-index.html

Wednesday 11 September 2019

Taking kpack, a Kubernetes Native Container Build Service for a test drive

We wanted Build Service to combine the Cloud Native Buildpacks experience with the declarative model of Kubernetes, and extend the K8s workflow in an idiomatic fashion. With this goal in mind, we leveraged custom resource definitions to extend the K8s API. This way, we could use Kubernetes technology to create a composable, declarative architecture to power build service. The Custom Resource Definitions (CRDs) are coordinated by Custom Controllers to automate container image builds and keep them up to date based on user-provided configuration.

So with that in mind, let's go and deploy kpack on a GKE cluster and build our first image...



Steps

1. Install v0.0.3 of kpack into your Kube cluster

$ kubectl apply -f <(curl -L https://github.com/pivotal/kpack/releases/download/v0.0.3/release.yaml)

...

namespace/kpack created
customresourcedefinition.apiextensions.k8s.io/builds.build.pivotal.io created
customresourcedefinition.apiextensions.k8s.io/builders.build.pivotal.io created
clusterrole.rbac.authorization.k8s.io/kpack-admin created
clusterrolebinding.rbac.authorization.k8s.io/kpack-controller-admin created
deployment.apps/kpack-controller created
customresourcedefinition.apiextensions.k8s.io/images.build.pivotal.io created
serviceaccount/controller created
customresourcedefinition.apiextensions.k8s.io/sourceresolvers.build.pivotal.io created
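Before creating any kpack resources it's worth confirming the controller itself came up in the kpack namespace the release created:

$ kubectl get pods --namespace kpack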

2. Let's verify which custom resource definitions (CRDs) have been installed

$ kubectl api-resources --api-group build.pivotal.io
NAME              SHORTNAMES                    APIGROUP           NAMESPACED   KIND
builders          cnbbuilder,cnbbuilders,bldr   build.pivotal.io   true         Builder
builds            cnbbuild,cnbbuilds,bld        build.pivotal.io   true         Build
images            cnbimage,cnbimages            build.pivotal.io   true         Image
sourceresolvers                                 build.pivotal.io   true         SourceResolver

3. Create a builder resource as follows

builder-resource.yaml

apiVersion: build.pivotal.io/v1alpha1
kind: Builder
metadata:
  name: sample-builder
spec:
  image: cloudfoundry/cnb:bionic
  updatePolicy: polling

$ kubectl create -f builder-resource.yaml
builder.build.pivotal.io/sample-builder created

$ kubectl get builds,images,builders,sourceresolvers
NAME                                      AGE
builder.build.pivotal.io/sample-builder   42s
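You can also describe the builder to check that kpack resolved the builder image and its buildpacks (the fully qualified resource name avoids any clash with other CRDs that define a "builder" kind):

$ kubectl describe builders.build.pivotal.io sample-builder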

4. Create a secret for push access to the desired docker registry

docker-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: basic-docker-user-pass
  annotations:
    build.pivotal.io/docker: index.docker.io
type: kubernetes.io/basic-auth
stringData:
  username: papicella
  password:

$ kubectl create -f docker-secret.yaml
secret/basic-docker-user-pass created

5. Create a secret for pull access from the desired git repository. The example below is for a GitHub repository

git-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: basic-git-user-pass
  annotations:
    build.pivotal.io/git: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: papicella
  password:

$ kubectl create -f git-secret.yaml
secret/basic-git-user-pass created

6. Create a service account that uses the docker registry secret and the git repository secret.

service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
  - name: basic-docker-user-pass
  - name: basic-git-user-pass

$ kubectl create -f service-account.yaml
serviceaccount/service-account created
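As a quick check, describe the service account; both secrets should be listed against it:

$ kubectl describe serviceaccount service-account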

7. Install the logs utility. In order to view the build logs for each image as it's created, right now you have to use a utility that you build from the kpack GitHub repo source files itself. Follow the steps below to get it built

$ export GOPATH=`pwd`
$ git clone https://github.com/pivotal/kpack $GOPATH/src/github.com/pivotal/kpack
$ cd $GOPATH/src/github.com/pivotal/kpack
$ dep ensure -v
$ go build ./cmd/logs

You will now have a "logs" executable in the current directory, which we will use shortly

8. Create an image as follows. The GitHub repo I am using here is public, so it will work with no problem at all

pbs-demo-sample-image.yaml

apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: pbs-demo-image
spec:
  tag: pasapples/pbs-demo-image
  serviceAccount: service-account
  builderRef: sample-builder
  cacheSize: "1.5Gi" # Optional, if not set then the caching feature is disabled
  failedBuildHistoryLimit: 5 # Optional, if not present defaults to 10
  successBuildHistoryLimit: 5 # Optional, if not present defaults to 10
  source:
    git:
      url: https://github.com/papicella/pbs-demo
      revision: master
  build: # Optional
    env:
      - name: BP_JAVA_VERSION
        value: 11.*
    resources:
      limits:
        cpu: 100m
        memory: 1G
      requests:
        cpu: 50m
        memory: 512M

$ kubectl create -f pbs-demo-sample-image.yaml
image.build.pivotal.io/pbs-demo-image created

9. At this point we can view the created image and the current Cloud Native Buildpacks builds being run, using two commands as follows.

$ kubectl get images
NAME             LATESTIMAGE   READY
pbs-demo-image                 Unknown

$ kubectl get cnbbuilds
NAME                           IMAGE   SUCCEEDED
pbs-demo-image-build-1-pvh6k           Unknown

Note: "Unknown" is normal here, as the build has not yet completed
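Rather than re-running these commands, kubectl can stream updates as the build progresses; the watch flag works for these custom resources just like any built-in resource:

$ kubectl get cnbbuilds --watch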

10. Now, using our freshly built "logs" utility, let's view the current build logs

$ ./logs -image pbs-demo-image
{"level":"info","ts":1568175056.446671,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
source-init:main.go:277: Successfully cloned "https://github.com/papicella/pbs-demo" @ "cee67e26d55b6d2735afd7fa3e0b81e251e0d5ce" in path "/workspace"
2019/09/11 04:11:23 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
======== Results ========
skip: org.cloudfoundry.archiveexpanding@1.0.0-RC03
pass: org.cloudfoundry.openjdk@1.0.0-RC03
pass: org.cloudfoundry.buildsystem@1.0.0-RC03
pass: org.cloudfoundry.jvmapplication@1.0.0-RC03
pass: org.cloudfoundry.tomcat@1.0.0-RC03
pass: org.cloudfoundry.springboot@1.0.0-RC03
pass: org.cloudfoundry.distzip@1.0.0-RC03
skip: org.cloudfoundry.procfile@1.0.0-RC03
skip: org.cloudfoundry.azureapplicationinsights@1.0.0-RC03
skip: org.cloudfoundry.debug@1.0.0-RC03
skip: org.cloudfoundry.googlestackdriver@1.0.0-RC03
skip: org.cloudfoundry.jdbc@1.0.0-RC03
skip: org.cloudfoundry.jmx@1.0.0-RC03
pass: org.cloudfoundry.springautoreconfiguration@1.0.0-RC03
Resolving plan... (try #1)
Success! (7)
Cache '/cache': metadata not found, nothing to restore
Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:40fe8aa932037faad697c3934667241eef620aac1d09fc7bb5ec5a75d5921e3e'
Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'

......

11. This first build will take some time, given it has to download all the Maven dependencies. Meanwhile, you may be wondering how to determine how many builds have been run, so that we can view the logs of any build for the image we just created. To do that, run a command as follows

$ kubectl get pods --show-labels | grep pbs-demo-image
pbs-demo-image-build-1-pvh6k-build-pod   0/1     Init:6/9   0          6m29s   image.build.pivotal.io/buildNumber=1,image.build.pivotal.io/image=pbs-demo-image

12. From the output above you can clearly see we have just a single build. To view the logs of a particular build we use its build number, shown in the buildNumber label above, as follows

$ ./logs -image pbs-demo-image -build {ID}

...

13. Now if we wait at least 5 minutes, since the first build always takes time just for the required dependencies to be downloaded, it will eventually complete, and the following commands show that it has completed

$ kubectl get images
NAME             LATESTIMAGE                                                                                                        READY
pbs-demo-image   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

$ kubectl get cnbbuilds
NAME                           IMAGE                                                                                                              SUCCEEDED
pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

14. Now let's make a code change to our source code and issue a git commit. In the example below I am using IntelliJ IDEA for my code change/commit


15. Now let's see if a new build is kicked off, as it should be. Run the following command

$ kubectl get cnbbuilds
NAME                           IMAGE                                                                                                              SUCCEEDED
pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True
pbs-demo-image-build-2-stl8w                                                                                                                      Unknown


16. Now let's confirm that this new build is in fact build number 2, using a command as follows

$ kubectl get pods --show-labels | grep pbs-demo-image
pbs-demo-image-build-1-pvh6k-build-pod   0/1     Completed   0          21m     image.build.pivotal.io/buildNumber=1,image.build.pivotal.io/image=pbs-demo-image
pbs-demo-image-build-2-stl8w-build-pod   0/1     Init:6/9    0          2m15s   image.build.pivotal.io/buildNumber=2,image.build.pivotal.io/image=pbs-demo-image

17. Let's view the logs for build 2 as follows

$ ./logs -image pbs-demo-image -build 2
{"level":"info","ts":1568176191.088838,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
source-init:main.go:277: Successfully cloned "https://github.com/papicella/pbs-demo" @ "e2830bbcfb32bfdd72bf5d4b17428c405f46f3c1" in path "/workspace"
2019/09/11 04:29:55 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
======== Results ========
skip: org.cloudfoundry.archiveexpanding@1.0.0-RC03
pass: org.cloudfoundry.openjdk@1.0.0-RC03
pass: org.cloudfoundry.buildsystem@1.0.0-RC03
pass: org.cloudfoundry.jvmapplication@1.0.0-RC03
pass: org.cloudfoundry.tomcat@1.0.0-RC03
pass: org.cloudfoundry.springboot@1.0.0-RC03
pass: org.cloudfoundry.distzip@1.0.0-RC03
skip: org.cloudfoundry.procfile@1.0.0-RC03
skip: org.cloudfoundry.azureapplicationinsights@1.0.0-RC03
skip: org.cloudfoundry.debug@1.0.0-RC03
skip: org.cloudfoundry.googlestackdriver@1.0.0-RC03
skip: org.cloudfoundry.jdbc@1.0.0-RC03
skip: org.cloudfoundry.jmx@1.0.0-RC03
pass: org.cloudfoundry.springautoreconfiguration@1.0.0-RC03
Resolving plan... (try #1)
Success! (7)
Restoring cached layer 'org.cloudfoundry.openjdk:openjdk-jdk'
Restoring cached layer 'org.cloudfoundry.openjdk:90c33cf3f2ed0bd773f648815de7347e69cfbb3416ef3bf41616ab1c4aa0f5a8'
Restoring cached layer 'org.cloudfoundry.buildsystem:build-system-cache'
Restoring cached layer 'org.cloudfoundry.jvmapplication:executable-jar'
Restoring cached layer 'org.cloudfoundry.springboot:spring-boot'
Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe'
Using cached layer 'org.cloudfoundry.openjdk:90c33cf3f2ed0bd773f648815de7347e69cfbb3416ef3bf41616ab1c4aa0f5a8'
Using cached layer 'org.cloudfoundry.openjdk:openjdk-jdk'
Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
Using cached layer 'org.cloudfoundry.buildsystem:build-system-cache'
Using cached launch layer 'org.cloudfoundry.jvmapplication:executable-jar'
Rewriting metadata for layer 'org.cloudfoundry.jvmapplication:executable-jar'
Using cached launch layer 'org.cloudfoundry.springboot:spring-boot'
Rewriting metadata for layer 'org.cloudfoundry.springboot:spring-boot'
Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'

Cloud Foundry OpenJDK Buildpack 1.0.0-RC03
  OpenJDK JDK 11.0.4: Reusing cached layer
  OpenJDK JRE 11.0.4: Reusing cached layer

Cloud Foundry Build System Buildpack 1.0.0-RC03
    Using wrapper
    Linking Cache to /home/cnb/.m2
  Compiled Application: Contributing to layer
    Executing /workspace/mvnw -Dmaven.test.skip=true package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------< com.example:pbs-demo >------------------------
[INFO] Building pbs-demo 0.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ pbs-demo ---

...

18. This build won't take as long as the first, since this time we don't have to pull down the Maven dependencies, and we also avoid recreating layers that have not changed since the first OCI-compliant image, which is something Cloud Native Buildpacks does for us nicely. Once complete, you now have two builds as follows

$ kubectl get cnbbuilds
NAME                           IMAGE                                                                                                              SUCCEEDED
pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True
pbs-demo-image-build-2-stl8w   index.docker.io/pasapples/pbs-demo-image@sha256:a22c64754cb7addc3f7e9a9335b094adf466b5f8035227691e81403d0c9c177f   True

19. Now let's run this locally, given I have Docker Desktop running. First we pull down the created image, which in this case is the LATEST build, build 2



$ docker pull pasapples/pbs-demo-image
Using default tag: latest
latest: Pulling from pasapples/pbs-demo-image
35c102085707: Already exists
251f5509d51d: Already exists
8e829fe70a46: Already exists
6001e1789921: Already exists
76a30c9e6d47: Pull complete
8538f1fe6188: Pull complete
2a899c7e684d: Pull complete
0ea0c38329cb: Pull complete
bb281735f842: Pull complete
664d87aab7ff: Pull complete
f4b03070a779: Pull complete
682af613b7ca: Pull complete
b893e5904080: Pull complete
Digest: sha256:a22c64754cb7addc3f7e9a9335b094adf466b5f8035227691e81403d0c9c177f
Status: Downloaded newer image for pasapples/pbs-demo-image:latest
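Cloud Native Buildpacks also record build metadata as labels on the image, which you can inspect locally. The label key below is my assumption based on the CNB lifecycle at the time and may vary between versions:

$ docker inspect pasapples/pbs-demo-image --format '{{ index .Config.Labels "io.buildpacks.build.metadata" }}'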

20. Now let's run it

$ docker run -p 8080:8080 pasapples/pbs-demo-image

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.6.RELEASE)

2019-09-11 04:40:41.747  WARN 1 --- [           main] pertySourceApplicationContextInitializer : Skipping 'cloud' property source addition because not in a cloud
2019-09-11 04:40:41.751  WARN 1 --- [           main] nfigurationApplicationContextInitializer : Skipping reconfiguration because not in a cloud
2019-09-11 04:40:41.760  INFO 1 --- [           main] com.example.pbsdemo.PbsDemoApplication   : Starting PbsDemoApplication on 5975633400c4 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)

...

2019-09-11 04:40:50.255  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-09-11 04:40:50.259  INFO 1 --- [           main] com.example.pbsdemo.PbsDemoApplication   : Started PbsDemoApplication in 8.93 seconds (JVM running for 9.509)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.323  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.326  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.329  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.331  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=4, name=siena, status=inactive)

21. Invoke it through a browser as follows

http://localhost:8080/swagger-ui.html


22. Finally, let's run this application on our k8s cluster itself. Start by creating a basic deployment YAML file as follows

run-pbs-image-k8s-yaml.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pbs-demo-image
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: pbs-demo-image
    spec:
      containers:
        - name: pbs-demo-image
          image: pasapples/pbs-demo-image
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: pbs-demo-image-service
  labels:
    name: pbs-demo-image-service
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: pbs-demo-image
  type: LoadBalancer

23. Apply your config

$ kubectl create -f run-pbs-image-k8s-yaml.yaml
deployment.extensions/pbs-demo-image created
service/pbs-demo-image-service created

24. Check that we have running pods and the LB service created

$ kubectl get all
NAME                                         READY   STATUS      RESTARTS   AGE
pod/pbs-demo-image-build-1-pvh6k-build-pod   0/1     Completed   0          39m
pod/pbs-demo-image-build-2-stl8w-build-pod   0/1     Completed   0          19m
pod/pbs-demo-image-f5c9d989-l2hg5            1/1     Running     0          48s
pod/pbs-demo-image-f5c9d989-pfxzs            1/1     Running     0          48s

NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes               ClusterIP      10.101.0.1      <none>        443/TCP        86m
service/pbs-demo-image-service   LoadBalancer   10.101.15.197   <pending>     80:30769/TCP   49s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pbs-demo-image   2/2     2            2           49s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/pbs-demo-image-f5c9d989   2         2         2       50s
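Once the EXTERNAL-IP moves from <pending> to a real address, the same Swagger UI from step 21 should be reachable through the load balancer:

$ kubectl get svc pbs-demo-image-service --watch
$ curl http://<EXTERNAL-IP>/swagger-ui.html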


More Information

Introducing kpack, a Kubernetes Native Container Build Service
https://content.pivotal.io/blog/introducing-kpack-a-kubernetes-native-container-build-service

Cloud Native Buildpacks
https://buildpacks.io/