
Thursday 6 August 2020

Configure a MySQL Marketplace service for the new Tanzu Application Service on Kubernetes using Container Services Manager for VMware Tanzu

The following post shows how to configure a MySQL marketplace service for the new Tanzu Application Service for Kubernetes (BETA version 0.3.0). For instructions on how to install Container Services Manager for VMware Tanzu (KSM), see the post below.

http://www.clue2solve.io/tanzu/2020/07/14/install-ksm-and-configure-the-cf-marketplace.html

Steps

It's assumed you have already installed KSM into your Kubernetes cluster as shown below. If not, please refer to the documentation to get this done first.


$ kubectl get all -n ksm
NAME                                  READY   STATUS    RESTARTS   AGE
pod/ksm-chartmuseum-78d5d5bfb-2ggdg   1/1     Running   0          15d
pod/ksm-ksm-broker-6db696894c-blvpp   1/1     Running   0          15d
pod/ksm-ksm-broker-6db696894c-mnshg   1/1     Running   0          15d
pod/ksm-ksm-daemon-587b6fd549-cc7sv   1/1     Running   1          15d
pod/ksm-ksm-daemon-587b6fd549-fgqx5   1/1     Running   1          15d
pod/ksm-postgresql-0                  1/1     Running   0          15d

NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/ksm-chartmuseum           ClusterIP      10.100.200.107   <none>          8080/TCP       15d
service/ksm-ksm-broker            LoadBalancer   10.100.200.229   10.195.93.188   80:30086/TCP   15d
service/ksm-ksm-daemon            LoadBalancer   10.100.200.222   10.195.93.179   80:31410/TCP   15d
service/ksm-postgresql            ClusterIP      10.100.200.213   <none>          5432/TCP       15d
service/ksm-postgresql-headless   ClusterIP      None             <none>          5432/TCP       15d

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ksm-chartmuseum   1/1     1            1           15d
deployment.apps/ksm-ksm-broker    2/2     2            2           15d
deployment.apps/ksm-ksm-daemon    2/2     2            2           15d

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/ksm-chartmuseum-78d5d5bfb   1         1         1       15d
replicaset.apps/ksm-ksm-broker-6db696894c   2         2         2       15d
replicaset.apps/ksm-ksm-broker-8645dfcf98   0         0         0       15d
replicaset.apps/ksm-ksm-daemon-587b6fd549   2         2         2       15d

NAME                              READY   AGE
statefulset.apps/ksm-postgresql   1/1     15d

1. Let's start by getting the broker IP address which, when KSM is installed using a LoadBalancer service type, can be retrieved as shown below.

$ kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}'
10.195.93.188

2. Upgrade your Helm release by running the following, using the IP address retrieved above.

$ export BROKER_IP=$(kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
$ helm upgrade ksm ./ksm -n ksm --reuse-values \
            --set cf.brokerUrl="http://$BROKER_IP" \
            --set cf.brokerName=KSM \
            --set cf.apiAddress="https://api.system.run.haas-210.pez.pivotal.io" \
            --set cf.username="admin" \
            --set cf.password="admin-password"

3. Next we configure the ksm CLI, which can be downloaded from VMware Tanzu Network. The following script exports the environment variables the CLI expects.

configure-ksm-cli.sh

export KSM_IP=$(kubectl get service ksm-ksm-daemon -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
export KSM_TARGET=http://$KSM_IP:$(kubectl get svc ksm-ksm-daemon -n ksm -o=jsonpath='{@.spec.ports[0].port}')
export KSM_USER=admin
export KSM_PASSWORD=$(kubectl get secret -n ksm ksm-ksm-daemon -o=jsonpath='{@.data.SECURITY_USER_PASSWORD}' | base64 --decode)
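
These variables need to be present in the shell you run the ksm CLI from, so source the script rather than executing it in a subshell:

$ source ./configure-ksm-cli.sh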

4. Verify ksm CLI is configured correctly

$ ksm version
Client Version [0.10.80]
Server Version [0.10.80]

5. Create a YAML file for the KSM service account and ClusterRoleBinding using the following YAML:

ksm-sa.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ksm-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ksm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ksm-admin
    namespace: kube-system

Apply as follows

$ kubectl apply -f ksm-sa.yml
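
Optionally, confirm the service account exists before generating credentials for it:

$ kubectl get serviceaccount ksm-admin -n kube-system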

6. You need a cluster credential file in order to register a Kubernetes cluster with KSM and set it as the default. The following script generates one.

cluster-creds.sh

export kube_config="/Users/papicella/.kube/config"

cluster=`grep current $kube_config|sed "s/ //g"|cut -d ":" -f 2`

echo "Using cluster $cluster"

export server=`grep -B 2 "name: $cluster" $kube_config \
  |grep server|sed "s/ //g"|sed "s/^[^:]*://g"`

export certificate=`grep -B 2 "name: $cluster" $kube_config \
  |grep certificate|sed "s/ //g"|sed "s/.*://"`

export secret_name=$(kubectl get serviceaccount ksm-admin \
   --namespace=kube-system -o jsonpath='{.secrets[0].name}')

export secret_val=$(kubectl --namespace=kube-system get secret $secret_name \
   -o jsonpath='{.data.token}')

export secret_val=$(echo ${secret_val} | base64 --decode)

cat > cluster-creds.yaml << EOF
token: ${secret_val}
server: ${server}
caData: ${certificate}
EOF

echo ""
echo "ready to roll!!!!"
echo ""

Before running this script, make sure you are targeting the K8s cluster you intend to use. You can verify that as follows:

$ kubectl config current-context
tas4k8s
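
With the right context targeted, make the script executable and run it; it writes cluster-creds.yaml into the current directory (the output below reflects the tas4k8s context shown above):

$ chmod +x cluster-creds.sh
$ ./cluster-creds.sh
Using cluster tas4k8s

ready to roll!!!!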
 
7. Now that we have a "cluster-creds.yaml" file, we can go ahead and register the Kubernetes cluster with KSM as follows.

$ ksm cluster register ksm-svcs ./cluster-creds.yaml
$ ksm cluster set-default ksm-svcs

Verify as follows:

$ ksm cluster list
CLUSTER NAME   IP ADDRESS                                         DEFAULT
ksm-svcs       https://tas4k8s.run.haas-210.pez.pivotal.io:8443   true

8. Now we can go ahead and create a Marketplace offering for MySQL. To do that we will use the Bitnami MySQL chart as shown below.

$ git clone https://github.com/bitnami/charts.git
$ cd ./charts/bitnami/mysql

** Create bind.yaml as follows. This is required so that service bindings from Tanzu Application Service inject the JSON credentials we expect at bind time. **

$ cat bind.yaml
template: |
  local filterfunc(j) = std.length(std.findSubstr("mysql", j.name)) > 0;
  local s1 = std.filter(filterfunc, $.services);
  {
    hostname: s1[0].status.loadBalancer.ingress[0].ip,
    name: s1[0].name,
    jdbcUrl: "jdbc:mysql://" + self.hostname + "/my_db?user=" + self.username + "&password=" + self.password + "&useSSL=false",
    uri: "mysql://" + self.username + ":" + self.password + "@" + self.hostname + ":" + self.port + "/my_db?reconnect=true",
    password: $.secrets[0].data['mysql-root-password'],
    port: 3306,
    username: "root"
  }
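
For reference, at bind time this template produces credentials shaped roughly like the following (values here are illustrative, borrowed from the service instance created later in this post; the password comes from the chart's mysql-root-password secret):

{
  "hostname": "10.195.93.192",
  "name": "k-wqo5mubw-mysql",
  "jdbcUrl": "jdbc:mysql://10.195.93.192/my_db?user=root&password=<mysql-root-password>&useSSL=false",
  "uri": "mysql://root:<mysql-root-password>@10.195.93.192:3306/my_db?reconnect=true",
  "password": "<mysql-root-password>",
  "port": 3306,
  "username": "root"
}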

$ helm package .
$ cd ..
$ ksm offer save ./mysql ./mysql/mysql-6.14.7.tgz

Verify MySQL is now part of the offer list as follows
  
$ ksm offer list
MARKETPLACE NAME	INCLUDED CHARTS	VERSION	PLANS
rabbitmq        	rabbitmq       	6.18.1 	[persistent ephemeral]
mysql           	mysql          	6.14.7 	[default]

9. Now we need to log in as an admin user.
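
If you are currently targeted as a different user, a login along the following lines (API endpoint as used earlier in this post; add --skip-ssl-validation if your environment uses self-signed certificates) will get you there:

$ cf login -a https://api.system.run.haas-210.pez.pivotal.io -u admin -o system -s development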

Verify you are logged in as the admin user using the CF CLI:

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           admin
org:            system
space:          development

10. At this point you can see the KSM service broker registered with TAS4K8s as follows

$ cf service-brokers
Getting service brokers as admin...

name   url
KSM    http://10.195.93.188

11. Enable access to the MySQL service as follows

$ cf enable-service-access mysql

Verify it's enabled:

$ cf service-access
Getting service access as admin...
broker: KSM
   service    plan         access   orgs
   mysql      default      all
   rabbitmq   ephemeral    all
   rabbitmq   persistent   all

12. At this point it's best to log out as admin and log back in as a non-admin user.

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           pas
org:            apples-org
space:          development

13. Create a MySQL service as follows. I'm passing in some JSON to indicate that my K8s cluster supports the LoadBalancer service type, so the broker uses that when creating the service.

$ cf create-service mysql default pas-mysql -c '{"service":{"type":"LoadBalancer"}}'

14. Check that the service has been created correctly; this will take a few minutes.

$ cf services
Getting services in org apples-org / space development as pas...

name        service    plan        bound apps          last operation     broker   upgrade available
pas-mysql   mysql      default     my-springboot-app   create succeeded   KSM      no
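
The cf services summary above is enough, but while the instance provisions you can also inspect its last operation directly:

$ cf service pas-mysql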

15. Your service is created in its own K8s namespace, although that behaviour may change at some point.

$ kubectl get all -n ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
NAME                            READY   STATUS    RESTARTS   AGE
pod/k-wqo5mubw-mysql-master-0   1/1     Running   0          15d
pod/k-wqo5mubw-mysql-slave-0    1/1     Running   0          15d

NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
service/k-wqo5mubw-mysql         LoadBalancer   10.100.200.12    10.195.93.192   3306:30563/TCP   15d
service/k-wqo5mubw-mysql-slave   LoadBalancer   10.100.200.130   10.195.93.191   3306:31982/TCP   15d

NAME                                       READY   AGE
statefulset.apps/k-wqo5mubw-mysql-master   1/1     15d
statefulset.apps/k-wqo5mubw-mysql-slave    1/1     15d
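
If you need to locate that namespace yourself, note that the name above follows a ksm-<service-instance-guid> pattern (an observation from this environment rather than a documented contract), so the instance GUID is one way to find it:

$ cf service pas-mysql --guid
2e526124-11a3-4d38-966c-b3ffd45471d7

$ kubectl get ns | grep ksm-2e526124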

16. At this point we can test the new MySQL service using a Spring Boot application.

The following GitHub repo can be used for that. Ignore its steps for creating a service, as you have already done that; binding is summarised below.
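
The binding itself boils down to the usual two commands (the app name here is the one shown bound in the cf services output above):

$ cf bind-service my-springboot-app pas-mysql
$ cf restage my-springboot-app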




Finally, to define service plans, see the KSM documentation linked below.

More Information

Container Services Manager (KSM)

Tanzu Application Service for Kubernetes

Monday 3 August 2020

Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Strimzi, a CNCF sandbox project, provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. In this post we will take a look at how to get this running on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and consume the Kafka cluster from a Spring Boot application.

A K8s cluster is all you need to follow along with this example. I am using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), but you can use any K8s cluster you have, such as GKE, AKS, EKS etc.

Steps

1. Installing Strimzi is pretty straightforward and can be done as follows. I am using the namespace "kafka", which needs to be created prior to running this command.

kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

2. Verify that the operator was installed correctly and we have a running pod as shown below.
  
$ kubectl get pods -n kafka
NAME                                                    READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-6c9d899778-4mdtg               1/1     Running   0          6d22h

3. Next let's ensure we have a default storage class for the cluster as shown below.

$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   47d
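
If no class is marked as default in your cluster, you can set one (the class name fast is just the one from this environment):

$ kubectl patch storageclass fast \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'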

4. At this point we are ready to create a Kafka cluster. For this example we will create a 3-node cluster defined in YAML as follows.

kafka-persistent-MULTI_NODE.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      external:
        type: loadbalancer
        tls: false
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

A few things to note:
  • We have enabled external access to the cluster using the LoadBalancer listener type, which means your K8s cluster needs to support that service type.
  • The cluster creates persistent volume claims dynamically, so ensure the default storage class from #3 above is in place.
  • We have disabled TLS given this is a demo.

5. Create the Kafka cluster as shown below, ensuring we target the namespace "kafka".

$ kubectl apply -f kafka-persistent-MULTI_NODE.yaml -n kafka

6. Now we can view the status of our cluster in one of two ways, as shown below. You will need to wait a few minutes for everything to start up.

Option 1:
  
$ kubectl get Kafka -n kafka
NAME                   DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
apples-kafka-cluster   3                        3

Option 2:
  
$ kubectl get all -n kafka
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc   3/3     Running   0          6d21h
pod/apples-kafka-cluster-kafka-0                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-1                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-2                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-0                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-1                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-2                        1/1     Running   0          6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg               1/1     Running   0          6d23h

NAME                                                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/apples-kafka-cluster-kafka-0                    LoadBalancer   10.100.200.90    10.195.93.200   9094:30362/TCP               6d21h
service/apples-kafka-cluster-kafka-1                    LoadBalancer   10.100.200.179   10.195.93.197   9094:32022/TCP               6d21h
service/apples-kafka-cluster-kafka-2                    LoadBalancer   10.100.200.155   10.195.93.201   9094:32277/TCP               6d21h
service/apples-kafka-cluster-kafka-bootstrap            ClusterIP      10.100.200.77    <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-brokers              ClusterIP      None             <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-external-bootstrap   LoadBalancer   10.100.200.58    10.195.93.196   9094:30735/TCP               6d21h
service/apples-kafka-cluster-zookeeper-client           ClusterIP      10.100.200.22    <none>          2181/TCP                     6d21h
service/apples-kafka-cluster-zookeeper-nodes            ClusterIP      None             <none>          2181/TCP,2888/TCP,3888/TCP   6d21h

NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apples-kafka-cluster-entity-operator   1/1     1            1           6d21h
deployment.apps/strimzi-cluster-operator               1/1     1            1           6d23h

NAME                                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd   1         1         1       6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778               1         1         1       6d23h

NAME                                              READY   AGE
statefulset.apps/apples-kafka-cluster-kafka       3/3     6d21h
statefulset.apps/apples-kafka-cluster-zookeeper   3/3     6d21h

7. Our entry point into the cluster is a service of type LoadBalancer, which we asked for in our Kafka cluster YAML config. To find the IP address we can run a command as follows, using the cluster name from above.

$ kubectl get service -n kafka apples-kafka-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.195.93.196

Note: Make a note of this IP address as we will need it shortly.

8. Let's create a Kafka topic using YAML as follows. In this YAML we explicitly set the namespace "kafka".

create-kafka-topic.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824


9. Create a Kafka topic as shown below.

$ kubectl apply -f create-kafka-topic.yaml

10. We can view the Kafka topics as shown below.
  
$ kubectl get KafkaTopic -n kafka
NAME                                                          PARTITIONS   REPLICATION FACTOR
apples-topic                                                  1            1
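
As a cross-check you can also list topics from inside one of the broker pods. This assumes the standard Kafka scripts shipped in the Strimzi broker image and the plain listener on port 9092 (both should hold for this setup, but worth verifying in your own environment):

$ kubectl exec -n kafka apples-kafka-cluster-kafka-0 -c kafka -- \
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --list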

11. At this point we are ready to send messages to our topic "apples-topic" as well as consume them. To do that we are going to use two Spring Boot applications, both of which exist on GitHub.


Download or clone those onto your file system. 

12. With both downloaded you will need to set spring.kafka.bootstrap-servers to the IP address we retrieved in #7 above. That needs to be done in both of the repos downloaded/cloned from GitHub above. The file to edit in each repo is as follows.

File: src/main/resources/application.yml 

Example:

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094

Note: Make sure you do this for both downloaded repo application.yml files

13. Now let's run the producer and consumer Spring Boot applications, each in a separate terminal window, using the following command. The producer will use port 8080 while the consumer uses port 8081.

$ ./mvnw spring-boot:run

Producer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-producer$ ./mvnw spring-boot:run

...
2020-08-03 11:41:46.742  INFO 34025 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-03 11:41:46.754  INFO 34025 --- [           main] a.a.t.k.DemoKafkaProducerApplication     : Started DemoKafkaProducerApplication in 1.775 seconds (JVM running for 2.102)

Consumer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-consumer$ ./mvnw spring-boot:run

...
2020-08-03 11:43:53.423  INFO 34056 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path ''
2020-08-03 11:43:53.440  INFO 34056 --- [           main] a.a.t.k.DemoKafkaConsumerApplication     : Started DemoKafkaConsumerApplication in 1.666 seconds (JVM running for 1.936)

14. Start by opening up the Producer UI by navigating to http://localhost:8080/



15. Without adding any messages yet, also open up the Consumer UI by navigating to http://localhost:8081/



Note: This application will automatically refresh the page every 2 seconds to show which messages have been sent to the Kafka Topic

16. Return to the Producer UI http://localhost:8080/ and add two messages using whatever text you like as shown below.


17. Return to the Consumer UI http://localhost:8081/ to verify the two messages sent to the Kafka topic have been consumed



18. Both of these Spring Boot applications use "Spring for Apache Kafka".
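
The only Kafka-specific dependency each project needs is the Spring for Apache Kafka library, which in a Maven pom.xml looks something like this (the version is managed by the Spring Boot parent):

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>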


Both Spring Boot applications use an application.yml to bootstrap access to the Kafka cluster.
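
On the consumer side the same file typically also carries a consumer group id, which a topic-subscribing @KafkaListener needs. A minimal sketch (the group id value is illustrative, not taken from the repos):

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094
    consumer:
      group-id: apples-demo-group
      auto-offset-reset: earliest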

The producer Spring Boot application uses a KafkaTemplate to send messages to our Kafka topic as shown below.
  
@Controller
@Slf4j
public class TopicMessageController {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    final private String topicName = "apples-topic";

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessageAddSuccess", "N");
        return "home";
    }

    @PostMapping("/addentry")
    public String addNewTopicMessage (@RequestParam(value="message") String message, Model model){

        kafkaTemplate.send(topicName, message);

        log.info("Sent single message: " + message);
        model.addAttribute("message", message);
        model.addAttribute("topicMessageAddSuccess", "Y");

        return "home";
    }
}                                                

The consumer Spring Boot application is configured with a KafkaListener as shown below.
  
@Controller
@Slf4j
public class TopicConsumerController {

    private static ArrayList<String> topicMessages = new ArrayList<String>();

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessages", topicMessages);
        model.addAttribute("topicMessagesCount", topicMessages.size());

        return "home";
    }

    @KafkaListener(topics = "apples-topic")
    public void listen(String message) {
        log.info("Received Message: " + message);
        topicMessages.add(message);
    }
}                                              

In this post we did not set up any client authentication against the cluster for the producer or consumer, given this was just a demo.





More Information

Spring for Apache Kafka

CNCF Sandbox projects

Strimzi