
Tuesday, 29 December 2020

Loading Australian Football League (AFL) Data into the Elastic Stack with some cool visualisations

I decided to load some AFL data into the Elastic Stack and do some basic visualisations. I loaded data for all home and away plus finals games since 2017, so four seasons in total. Follow along below if you want to do the same.

Steps

Note: We already have an Elasticsearch cluster running for this demo.

$ curl -u "elastic:welcome1" localhost:9200
{
  "name" : "node1",
  "cluster_name" : "apples-cluster",
  "cluster_uuid" : "hJrp2eJaRGCfBt7Zg_-EJQ",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}  

First I need the data loaded into the Elastic Stack. I did that using the Squiggle API, which you can do as follows.

1. I use HTTPie rather than curl.

http "https://api.squiggle.com.au/?q=games;complete=100" > games-2017-2020.json

2. Now this data needs to be altered slightly so I can bulk load it into the Elasticsearch cluster. I use jq to do this as follows.

$ cat games-2017-2020.json | jq -c '.games[] | {"index": {"_id": .id}}, .' > converted-games-2017-2020.json

Snippet of what the JSON file now looks like:

{"index":{"_id":1}}

{"round":1,"hgoals":14,"roundname":"Round 1","hteamid":3,"hscore":89,"winner":"Richmond","ateam":"Richmond","hbehinds":5,"venue":"M.C.G.","year":2017,"complete":100,"id":1,"localtime":"2017-03-23 19:20:00","agoals":20,"date":"2017-03-23 19:20:00","hteam":"Carlton","updated":"2017-04-15 15:59:16","tz":"+11:00","ascore":132,"ateamid":14,"winnerteamid":14,"is_grand_final":0,"abehinds":12,"is_final":0}

{"index":{"_id":2}}

{"date":"2017-03-24 19:50:00","agoals":15,"ateamid":18,"winnerteamid":18,"hteam":"Collingwood","updated":"2017-04-15 15:59:16","tz":"+11:00","ascore":100,"is_grand_final":0,"abehinds":10,"is_final":0,"round":1,"hgoals":12,"hscore":86,"winner":"Western Bulldogs","ateam":"Western Bulldogs","roundname":"Round 1","hteamid":4,"hbehinds":14,"venue":"M.C.G.","year":2017,"complete":100,"id":2,"localtime":"2017-03-24 19:50:00"}

{"index":{"_id":3}}

{"hscore":82,"ateam":"Port Adelaide","winner":"Port Adelaide","roundname":"Round 1","hteamid":16,"round":1,"hgoals":12,"complete":100,"id":3,"localtime":"2017-03-25 16:35:00","venue":"S.C.G.","hbehinds":10,"year":2017,"ateamid":13,"winnerteamid":13,"updated":"2017-04-15 15:59:16","hteam":"Sydney","tz":"+11:00","ascore":110,"date":"2017-03-25 16:35:00","agoals":17,"is_final":0,"is_grand_final":0,"abehinds":8}

Load the data into the Elasticsearch cluster as follows.

$ curl -u "elastic:welcome1" -H "Content-Type: application/json" -XPOST "localhost:9200/afl_games/_bulk?pretty&refresh"  --data-binary "@converted-games-2017-2020.json"

3. Using Dev Tools in Kibana we can run a query as follows.

Question: Get each team's winning games for the 2020 season before finals - the final ladder.

Query:

GET afl_games/_search
{
  "size": 0, 
  "query": {
      "bool": {
        "must": [
          {
            "match": {
              "year": 2020
            }
          },
          {
            "match": {
              "is_final": 0
            }
          }
        ]
      }
    }, 
    "aggs": {
      "group_by_winner": {
        "terms": {
          "field": "winner.keyword",
          "size": 20
        }
      }
    }
} 

Results:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 153,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "group_by_winner" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "Brisbane Lions",
          "doc_count" : 14
        },
        {
          "key" : "Port Adelaide",
          "doc_count" : 14
        },
        {
          "key" : "Geelong",
          "doc_count" : 12
        },
        {
          "key" : "Richmond",
          "doc_count" : 12
        },
        {
          "key" : "West Coast",
          "doc_count" : 12
        },
        {
          "key" : "St Kilda",
          "doc_count" : 10
        },
        {
          "key" : "Western Bulldogs",
          "doc_count" : 10
        },
        {
          "key" : "Collingwood",
          "doc_count" : 9
        },
        {
          "key" : "Melbourne",
          "doc_count" : 9
        },
        {
          "key" : "Greater Western Sydney",
          "doc_count" : 8
        },
        {
          "key" : "Carlton",
          "doc_count" : 7
        },
        {
          "key" : "Fremantle",
          "doc_count" : 7
        },
        {
          "key" : "Essendon",
          "doc_count" : 6
        },
        {
          "key" : "Gold Coast",
          "doc_count" : 5
        },
        {
          "key" : "Hawthorn",
          "doc_count" : 5
        },
        {
          "key" : "Sydney",
          "doc_count" : 5
        },
        {
          "key" : "Adelaide",
          "doc_count" : 3
        },
        {
          "key" : "North Melbourne",
          "doc_count" : 3
        }
      ]
    }
  }
}

4. Finally, use Kibana Lens to easily visualize this data in a Kibana Dashboard.


Of course you could do much more, such as loading more data from Squiggle, and with the power of Kibana you are free to create your own visualisations.

More Information

Squiggle API

https://api.squiggle.com.au/

Getting Started with the Elastic Stack

https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html

Tuesday, 22 December 2020

VMware Solutions Hub - Elastic Cloud on Kubernetes - the official Elasticsearch Operator from the creators

Proud to have worked with the VMware Tanzu team and the Elastic team to add this to the VMware Solutions Hub page, clearly highlighting what the Elastic Stack on Kubernetes really means.

Do you need to run your Elastic Stack on a certified Kubernetes distribution, bolstered by the global Kubernetes community, allowing you to focus on delivering innovative applications powered by Elastic?

If so, click below to get started:

https://tanzu.vmware.com/solutions-hub/data-management/elastic

More Information

https://tanzu.vmware.com/solutions-hub/data-management/elastic

Wednesday, 28 October 2020

How to Become a Kubernetes Admin from the Comfort of Your vSphere

My talk at VMworld 2020 with Olive Power can be found here.

Talk Details

In this session, we will walk through the integration of VMware vSphere and Kubernetes, and how this union of technologies can fundamentally change how virtual infrastructure and operational engineers view the management of Kubernetes platforms. We will demonstrate the capability of vSphere to host Kubernetes clusters internally, allocate capacity to those clusters, and monitor them side by side with virtual machines (VMs). We will talk about how extended vSphere functionality eases the transition of enterprises to running yet another platform (Kubernetes) by treating all managed endpoints—be they VMs, Kubernetes clusters or pods—as one platform. We want to demonstrate that platforms for running modern applications can be facilitated through the intuitive interface of vSphere and its ecosystem of automation tooling.

https://www.vmworld.com/en/video-library/search.html#text=%22KUB2038%22&year=2020

Thursday, 3 September 2020

java-cfenv : A library for accessing Cloud Foundry Services on the new Tanzu Application Service for Kubernetes

The Spring Cloud Connectors library has been with us since the launch of Cloud Foundry itself back in 2011. This library creates the required Spring beans, such as database connections, from the VCAP_SERVICES environment variable bound to a pushed Cloud Foundry application. The Java buildpack then replaces the bean definitions you had in your application with those created by the connector library through a feature called 'auto-reconfiguration'.

Auto-reconfiguration is great for getting started. However, it is not so great when you want more control, for example changing the size of the connection pool associated with a DataSource.

With the upcoming Tanzu Application Service for Kubernetes, the original Cloud Foundry buildpacks are replaced with the new Tanzu Buildpacks, which are based on the Cloud Native Buildpacks CNCF Sandbox project. As a result, auto-reconfiguration is no longer included in the Java cloud native buildpacks, which means auto-configuration for backing services is no longer available.

So is there another option for this? The answer is "Java CFEnv". It provides a simple API for retrieving credentials from the JSON strings contained inside the VCAP_SERVICES environment variable.

https://github.com/pivotal-cf/java-cfenv



So if you are after exactly how it worked previously, all you need to do is add this Maven dependency to your project as shown below.

  
        <dependency>
            <groupId>io.pivotal.cfenv</groupId>
            <artifactId>java-cfenv-boot</artifactId>
        </dependency>

Of course this new library is much more flexible than that. The class CfEnv is the entry point to the API for accessing Cloud Foundry environment variables, and you are also free to use the Spring Expression Language to invoke methods on the bean of type CfEnv to set properties, for example, plus more.
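As a rough sketch of the API (not taken from the original post, and the service name "my-mysql" is just an illustrative placeholder), looking up credentials for a bound service with CfEnv looks roughly like this:

import io.pivotal.cfenv.core.CfCredentials;
import io.pivotal.cfenv.core.CfEnv;

public class CfEnvExample {

    public static void main(String[] args) {
        // CfEnv parses the VCAP_SERVICES environment variable of the running app
        CfEnv cfEnv = new CfEnv();

        // Look up a bound service by name - "my-mysql" is a placeholder for your service instance
        CfCredentials credentials = cfEnv.findCredentialsByName("my-mysql");

        // Typical credential fields exposed by CfCredentials
        String uri = credentials.getUri();
        String host = credentials.getHost();
        String username = credentials.getUsername();

        System.out.println("Connecting to " + host + " as " + username + " via " + uri);
    }
}

With the java-cfenv-boot dependency shown above on the classpath, well-known service types are also mapped to Spring Boot properties (for example spring.datasource.*) automatically, so in many cases no code is needed at all.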

For more information, read the full blog post below.

https://spring.io/blog/2019/02/15/introducing-java-cfenv-a-new-library-for-accessing-cloud-foundry-services

Finally, this Spring Boot application is an example of using the new library in an application deployed to the new Tanzu Application Service for Kubernetes.

https://github.com/papicella/spring-book-service


More Information

1. Introducing java-cfenv: A new library for accessing Cloud Foundry Services

https://spring.io/blog/2019/02/15/introducing-java-cfenv-a-new-library-for-accessing-cloud-foundry-services

2. Java CFEnv GitHub Repo

https://github.com/pivotal-cf/java-cfenv#pushing-your-application-to-cloud-foundry

Thursday, 6 August 2020

Configure a MySQL Marketplace service for the new Tanzu Application Service on Kubernetes using Container Services Manager for VMware Tanzu

The following post shows how to configure a MySQL service in the new Tanzu Application Service (BETA version 0.3.0). For instructions on how to install the Container Services Manager for VMware Tanzu (KSM), see the post below.

http://www.clue2solve.io/tanzu/2020/07/14/install-ksm-and-configure-the-cf-marketplace.html

Steps

It's assumed you have already installed KSM into your Kubernetes cluster as shown below. If not, please refer to the documentation to get this done first.


$ kubectl get all -n ksm
NAME                                  READY   STATUS    RESTARTS   AGE
pod/ksm-chartmuseum-78d5d5bfb-2ggdg   1/1     Running   0          15d
pod/ksm-ksm-broker-6db696894c-blvpp   1/1     Running   0          15d
pod/ksm-ksm-broker-6db696894c-mnshg   1/1     Running   0          15d
pod/ksm-ksm-daemon-587b6fd549-cc7sv   1/1     Running   1          15d
pod/ksm-ksm-daemon-587b6fd549-fgqx5   1/1     Running   1          15d
pod/ksm-postgresql-0                  1/1     Running   0          15d

NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/ksm-chartmuseum           ClusterIP      10.100.200.107   <none>          8080/TCP       15d
service/ksm-ksm-broker            LoadBalancer   10.100.200.229   10.195.93.188   80:30086/TCP   15d
service/ksm-ksm-daemon            LoadBalancer   10.100.200.222   10.195.93.179   80:31410/TCP   15d
service/ksm-postgresql            ClusterIP      10.100.200.213   <none>          5432/TCP       15d
service/ksm-postgresql-headless   ClusterIP      None             <none>          5432/TCP       15d

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ksm-chartmuseum   1/1     1            1           15d
deployment.apps/ksm-ksm-broker    2/2     2            2           15d
deployment.apps/ksm-ksm-daemon    2/2     2            2           15d

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/ksm-chartmuseum-78d5d5bfb   1         1         1       15d
replicaset.apps/ksm-ksm-broker-6db696894c   2         2         2       15d
replicaset.apps/ksm-ksm-broker-8645dfcf98   0         0         0       15d
replicaset.apps/ksm-ksm-daemon-587b6fd549   2         2         2       15d

NAME                              READY   AGE
statefulset.apps/ksm-postgresql   1/1     15d

1. Let's start by getting the broker IP address, which, when installed using a LoadBalancer service type, can be retrieved as shown below.

$ kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}'
10.195.93.188

2. Upgrade your Helm release by running the following using the IP address from above

$ export BROKER_IP=$(kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
$ helm upgrade ksm ./ksm -n ksm --reuse-values \
            --set cf.brokerUrl="http://$BROKER_IP" \
            --set cf.brokerName=KSM \
            --set cf.apiAddress="https://api.system.run.haas-210.pez.pivotal.io" \
            --set cf.username="admin" \
            --set cf.password="admin-password"

3. Next we configure the ksm CLI. You can download the CLI from here

configure-ksm-cli.sh

export KSM_IP=$(kubectl get service ksm-ksm-daemon -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
export KSM_TARGET=http://$KSM_IP:$(kubectl get svc ksm-ksm-daemon -n ksm -o=jsonpath='{@.spec.ports[0].port}')
export KSM_USER=admin
export KSM_PASSWORD=$(kubectl get secret -n ksm ksm-ksm-daemon -o=jsonpath='{@.data.SECURITY_USER_PASSWORD}' | base64 --decode)

4. Verify ksm CLI is configured correctly

$ ksm version
Client Version [0.10.80]
Server Version [0.10.80]

5. Create a YAML file for the KSM service account and ClusterRoleBinding using the following YAML:

ksm-sa.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ksm-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ksm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ksm-admin
    namespace: kube-system

Apply as follows

$ kubectl apply -f ksm-sa.yml

6. You need a cluster credential file to register and set the default Kubernetes cluster. That is done as follows.

cluster-creds.sh

export kube_config="/Users/papicella/.kube/config"

cluster=`grep current $kube_config|sed "s/ //g"|cut -d ":" -f 2`

echo "Using cluster $cluster"

export server=`grep -B 2 "name: $cluster" $kube_config \
  |grep server|sed "s/ //g"|sed "s/^[^:]*://g"`

export certificate=`grep -B 2 "name: $cluster" $kube_config \
  |grep certificate|sed "s/ //g"|sed "s/.*://"`

export secret_name=$(kubectl get serviceaccount ksm-admin \
   --namespace=kube-system -o jsonpath='{.secrets[0].name}')

export secret_val=$(kubectl --namespace=kube-system get secret $secret_name \
   -o jsonpath='{.data.token}')

export secret_val=$(echo ${secret_val} | base64 --decode)

cat > cluster-creds.yaml << EOF
token: ${secret_val}
server: ${server}
caData: ${certificate}
EOF

echo ""
echo "ready to roll!!!!"
echo ""

Before running this script, it's best to make sure you have targeted the correct K8s cluster. You can run the following command to verify that.

$ kubectl config current-context
tas4k8s
 
7. Now that we have a "cluster-creds.yaml" file, we can go ahead and register the Kubernetes cluster with KSM as follows.

$ ksm cluster register ksm-svcs ./cluster-creds.yaml
$ ksm cluster set-default ksm-svcs

Verify as follows:

$ ksm cluster list
CLUSTER NAME   IP ADDRESS                                          DEFAULT
ksm-svcs       https://tas4k8s.run.haas-210.pez.pivotal.io:8443    true

8. Now we can go ahead and create a Marketplace offering for MySQL. To do that we will use the Bitnami MySQL chart as shown below

$ git clone https://github.com/bitnami/charts.git
$ cd ./charts/bitnami/mysql

** Create bind.yaml as follows. This is required so our service binding from Tanzu Application Service injects the JSON we are expecting at bind time **

$ cat bind.yaml
template: |
  local filterfunc(j) = std.length(std.findSubstr("mysql", j.name)) > 0;
  local s1 = std.filter(filterfunc, $.services);
  {
    hostname: s1[0].status.loadBalancer.ingress[0].ip,
    name: s1[0].name,
    jdbcUrl: "jdbc:mysql://" + self.hostname + "/my_db?user=" + self.username + "&password=" + self.password + "&useSSL=false",
    uri: "mysql://" + self.username + ":" + self.password + "@" + self.hostname + ":" + self.port + "/my_db?reconnect=true",
    password: $.secrets[0].data['mysql-root-password'],
    port: 3306,
    username: "root"
  }

$ helm package .
$ cd ..
$ ksm offer save ./mysql ./mysql/mysql-6.14.7.tgz

Verify MySQL is now part of the offer list as follows
  
$ ksm offer list
MARKETPLACE NAME	INCLUDED CHARTS	VERSION	PLANS
rabbitmq        	rabbitmq       	6.18.1 	[persistent ephemeral]
mysql           	mysql          	6.14.7 	[default]

9. Now we need to log in as an ADMIN user

Verify you are logged in as the admin user using the CF CLI:

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           admin
org:            system
space:          development

10. At this point you can see the KSM service broker registered with TAS4K8s as follows

$ cf service-brokers
Getting service brokers as admin...

name   url
KSM    http://10.195.93.188

11. Enable access to the MySQL service as follows

$ cf enable-service-access mysql

Verify it's enabled:

$ cf service-access
Getting service access as admin...
broker: KSM
   service    plan         access   orgs
   mysql      default      all
   rabbitmq   ephemeral    all
   rabbitmq   persistent   all

12. At this point it's best to log out of admin and log back in as a user that is not admin

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           pas
org:            apples-org
space:          development

13. Create a MySQL service as follows. I am passing in some JSON to indicate that my K8s cluster supports a LoadBalancer type, so that is used as part of the creation of the service.

$ cf create-service mysql default pas-mysql -c '{"service":{"type":"LoadBalancer"}}'

14. Check that the service has been created correctly. It will take a few minutes.

$ cf services
Getting services in org apples-org / space development as pas...

name        service    plan        bound apps          last operation     broker   upgrade available
pas-mysql   mysql      default     my-springboot-app   create succeeded   KSM      no

15. Your service is created in its own K8s namespace, BUT that may not be the case at some point in the future.

$ kubectl get all -n ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
NAME                            READY   STATUS    RESTARTS   AGE
pod/k-wqo5mubw-mysql-master-0   1/1     Running   0          15d
pod/k-wqo5mubw-mysql-slave-0    1/1     Running   0          15d

NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
service/k-wqo5mubw-mysql         LoadBalancer   10.100.200.12    10.195.93.192   3306:30563/TCP   15d
service/k-wqo5mubw-mysql-slave   LoadBalancer   10.100.200.130   10.195.93.191   3306:31982/TCP   15d

NAME                                       READY   AGE
statefulset.apps/k-wqo5mubw-mysql-master   1/1     15d
statefulset.apps/k-wqo5mubw-mysql-slave    1/1     15d

16. At this point we can test the new MySQL service we created, using a Spring Boot application.

The following GitHub repo can be used for that. Ignore the steps to create a service as you have already done that.
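To make it concrete, here is a minimal sketch (not part of the original repo) of how such an application could log the credentials injected by the bind.yaml template above. It assumes the java-cfenv dependency from the September post above is on the classpath, that the service instance is bound to the app under the name "pas-mysql", and that CfCredentials exposes the raw credential map via getMap():

import io.pivotal.cfenv.core.CfCredentials;
import io.pivotal.cfenv.core.CfEnv;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BindingInspector {

    // Logs the credentials KSM injects at bind time, shaped by the bind.yaml template above.
    // "pas-mysql" is the service instance name used earlier in this post.
    @Bean
    public CommandLineRunner showMySqlBinding() {
        return args -> {
            CfEnv cfEnv = new CfEnv();
            CfCredentials credentials = cfEnv.findCredentialsByName("pas-mysql");

            // These keys line up with the bind.yaml template: hostname, jdbcUrl, username, password
            System.out.println("hostname : " + credentials.getMap().get("hostname"));
            System.out.println("jdbcUrl  : " + credentials.getMap().get("jdbcUrl"));
            System.out.println("username : " + credentials.getUsername());
        };
    }
}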




Finally, to define service plans, see the link below.

More Information

Container Services Manager (KSM)

Tanzu Application Service for Kubernetes

Monday, 3 August 2020

Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Strimzi, a CNCF Sandbox project, provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. In this post we will take a look at how to get this running on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and consume the Kafka cluster from a Spring Boot application.

If you have a K8s cluster, that's all you need to follow along with this example. I am using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), but you can use any K8s cluster you have, such as GKE, AKS, EKS etc.

Steps

1. Installing Strimzi is pretty straightforward, so we can do that as follows. I am using the namespace "kafka", which needs to be created prior to running this command.

kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

2. Verify that the operator was installed correctly and we have a running POD as shown below
  
$ kubectl get pods -n kafka
NAME                                                    READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-6c9d899778-4mdtg               1/1     Running   0          6d22h

3. Next let's ensure we have a default storage class for the cluster as shown below.

$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   47d

4. Now at this point we are ready to create a Kafka cluster. For this example we will create a 3 node cluster defined in YML as follows.

kafka-persistent-MULTI_NODE.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      external:
        type: loadbalancer
        tls: false
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

A few things to note:
  • We have enabled access to the cluster using the type LoadBalancer, which means your K8s cluster needs to support such a type
  • We need to create dynamic persistence claims in the cluster, so ensure #3 above is in place
  • We have disabled TLS given this is a demo

5. Create the Kafka cluster as shown below, ensuring we target the namespace "kafka"

$ kubectl apply -f kafka-persistent-MULTI_NODE.yaml -n kafka

6. Now we can view the status/creation of our cluster in one of two ways as shown below. You will need to wait a few minutes for everything to start up.

Option 1:
  
$ kubectl get Kafka -n kafka
NAME                   DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
apples-kafka-cluster   3                        3

Option 2:
  
$ kubectl get all -n kafka
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc   3/3     Running   0          6d21h
pod/apples-kafka-cluster-kafka-0                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-1                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-2                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-0                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-1                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-2                        1/1     Running   0          6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg               1/1     Running   0          6d23h

NAME                                                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/apples-kafka-cluster-kafka-0                    LoadBalancer   10.100.200.90    10.195.93.200   9094:30362/TCP               6d21h
service/apples-kafka-cluster-kafka-1                    LoadBalancer   10.100.200.179   10.195.93.197   9094:32022/TCP               6d21h
service/apples-kafka-cluster-kafka-2                    LoadBalancer   10.100.200.155   10.195.93.201   9094:32277/TCP               6d21h
service/apples-kafka-cluster-kafka-bootstrap            ClusterIP      10.100.200.77    <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-brokers              ClusterIP      None             <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-external-bootstrap   LoadBalancer   10.100.200.58    10.195.93.196   9094:30735/TCP               6d21h
service/apples-kafka-cluster-zookeeper-client           ClusterIP      10.100.200.22    <none>          2181/TCP                     6d21h
service/apples-kafka-cluster-zookeeper-nodes            ClusterIP      None             <none>          2181/TCP,2888/TCP,3888/TCP   6d21h

NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apples-kafka-cluster-entity-operator   1/1     1            1           6d21h
deployment.apps/strimzi-cluster-operator               1/1     1            1           6d23h

NAME                                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd   1         1         1       6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778               1         1         1       6d23h

NAME                                              READY   AGE
statefulset.apps/apples-kafka-cluster-kafka       3/3     6d21h
statefulset.apps/apples-kafka-cluster-zookeeper   3/3     6d21h

7. Our entry point into the cluster is a service of type LoadBalancer, which we asked for as per our Kafka cluster YML config. To find the IP address we can run a command as follows using the cluster name from above.

$ kubectl get service -n kafka apples-kafka-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.195.93.196

Note: Make a note of this IP address as we will need it shortly.

8. Let's create a Kafka Topic using YML as follows. In this YML we actually ensure we are using the namespace "kafka".  

create-kafka-topic.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824


9. Create a Kafka topic as shown below.

$ kubectl apply -f create-kafka-topic.yaml

10. We can view the Kafka topics as shown below.
  
$ kubectl get KafkaTopic -n kafka
NAME                                                          PARTITIONS   REPLICATION FACTOR
apples-topic                                                  1            1

11. Now at this point we are ready to send some messages to our topic "apples-topic" as well as consume messages. To do that we are going to use a Spring Boot application, in fact two of them, which exist on GitHub.


Download or clone those onto your file system. 

12. With both downloaded, you will need to set spring.kafka.bootstrap-servers to the IP address we retrieved in #7 above. That needs to be done in both of the downloaded/cloned repos above. The file we need to edit in both repos is as follows.

File: src/main/resources/application.yml 

Example:

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094

Note: Make sure you do this for both downloaded repo application.yml files

13. Now let's run the producer and consumer Spring Boot applications using a command as follows in separate terminal windows. One will use port 8080 while the other uses port 8081.

$ ./mvnw spring-boot:run

Producer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-producer$ ./mvnw spring-boot:run

...
2020-08-03 11:41:46.742  INFO 34025 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-03 11:41:46.754  INFO 34025 --- [           main] a.a.t.k.DemoKafkaProducerApplication     : Started DemoKafkaProducerApplication in 1.775 seconds (JVM running for 2.102)

Consumer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-consumer$ ./mvnw spring-boot:run

...
2020-08-03 11:43:53.423  INFO 34056 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path ''
2020-08-03 11:43:53.440  INFO 34056 --- [           main] a.a.t.k.DemoKafkaConsumerApplication     : Started DemoKafkaConsumerApplication in 1.666 seconds (JVM running for 1.936)

14. Start by opening up the Producer UI by navigating to http://localhost:8080/



15. Without adding any messages yet, also open up the Consumer UI by navigating to http://localhost:8081/



Note: This application will automatically refresh the page every 2 seconds to show which messages have been sent to the Kafka Topic

16. Return to the Producer UI http://localhost:8080/ and add two messages using whatever text you like as shown below.


17. Return to the Consumer UI http://localhost:8081/ to verify the two messages sent to the Kafka topic have been consumed.



18. Both of these Spring Boot applications are using "Spring for Apache Kafka".


Both Spring Boot applications use an application.yml to bootstrap access to the Kafka cluster.

The producer Spring Boot application uses a KafkaTemplate to send messages to our Kafka topic as shown below.
  
@Controller
@Slf4j
public class TopicMessageController {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    final private String topicName = "apples-topic";

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessageAddSuccess", "N");
        return "home";
    }

    @PostMapping("/addentry")
    public String addNewTopicMessage (@RequestParam(value="message") String message, Model model){

        kafkaTemplate.send(topicName, message);

        log.info("Sent single message: " + message);
        model.addAttribute("message", message);
        model.addAttribute("topicMessageAddSuccess", "Y");

        return "home";
    }
}                                                

The consumer Spring Boot application is configured with a KafkaListener as shown below.
  
@Controller
@Slf4j
public class TopicConsumerController {

    private static ArrayList<String> topicMessages = new ArrayList<String>();

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessages", topicMessages);
        model.addAttribute("topicMessagesCount", topicMessages.size());

        return "home";
    }

    @KafkaListener(topics = "apples-topic")
    public void listen(String message) {
        log.info("Received Message: " + message);
        topicMessages.add(message);
    }
}                                              

In this post we did not set up any client authentication against the cluster for the producer or consumer, given this was just a demo.





More Information

Spring for Apache Kafka

CNCF Sandbox projects

Strimzi

Friday, 17 July 2020

Stumbled upon this today : Lens | The Kubernetes IDE

Lens is the only IDE you’ll ever need to take control of your Kubernetes clusters. It is a standalone application for MacOS, Windows and Linux operating systems. It is open source and free.

I installed it today and was impressed. Below are some screenshots of the new Tanzu Application Service running on my Kubernetes cluster using the Lens IDE. Simply point it at your kube config for the cluster you wish to examine.

On Mac OS X it's installed as follows

$ brew cask install lens






More Information

https://github.com/lensapp/lens


Tuesday, 14 July 2020

Spring Data Elasticsearch using Elastic Cloud on Kubernetes (ECK) on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

VMware Tanzu Kubernetes Grid Integrated Edition (formerly known as VMware Enterprise PKS) is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management.

In this post I show how to get Elastic Cloud on Kubernetes (ECK) up and running on VMware Tanzu Kubernetes Grid Integrated Edition and how to access it from a Spring Boot application using Spring Data Elasticsearch.

With ECK, users now have a seamless way of deploying, managing, and operating the Elastic Stack on Kubernetes.

If you have a K8s cluster, that's all you need to follow along.

Steps

1. Let's install ECK on our cluster. We do that as follows.

Note: The latest version is 1.1, BUT I am installing a slightly older one here.

$ kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml

2. Make sure the operator is up and running as shown below
  
$ kubectl get all -n elastic-system
NAME                     READY   STATUS    RESTARTS   AGE
pod/elastic-operator-0   1/1     Running   0          26d

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/elastic-webhook-server   ClusterIP   10.100.200.55   <none>        443/TCP   26d

NAME                                READY   AGE
statefulset.apps/elastic-operator   1/1     26d

3. We can also see a CRD for Elasticsearch as shown below.

elasticsearches.elasticsearch.k8s.elastic.co
  
$ kubectl get crd
NAME                                           CREATED AT
apmservers.apm.k8s.elastic.co                  2020-06-17T00:37:32Z
clusterlogsinks.pksapi.io                      2020-06-16T23:04:43Z
clustermetricsinks.pksapi.io                   2020-06-16T23:04:44Z
elasticsearches.elasticsearch.k8s.elastic.co   2020-06-17T00:37:33Z
kibanas.kibana.k8s.elastic.co                  2020-06-17T00:37:34Z
loadbalancers.vmware.com                       2020-06-16T22:51:52Z
logsinks.pksapi.io                             2020-06-16T23:04:43Z
metricsinks.pksapi.io                          2020-06-16T23:04:44Z
nsxerrors.nsx.vmware.com                       2020-06-16T22:51:52Z
nsxlbmonitors.vmware.com                       2020-06-16T22:51:52Z
nsxlocks.nsx.vmware.com                        2020-06-16T22:51:51Z

4. We are now ready to create our first Elasticsearch cluster. To do that, create a YML file as shown below.

create-elastic-cluster-from-operator.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false

From the YML a few things to note:

  • We are creating two pods for our Elasticsearch cluster
  • We are using a K8s LoadBalancer to expose access to the cluster through HTTP
  • We are using version 7.7.0 but this is not the latest Elasticsearch version
  • We have disabled the use of TLS given this is just a demo

5. Apply that as shown below.

$ kubectl apply -f create-elastic-cluster-from-operator.yaml

6. After about a minute we should have our Elasticsearch cluster running. The following commands show that
  
$ kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.7.0     Ready   47h

$ kubectl get all -n default
NAME                                   READY   STATUS    RESTARTS   AGE
pod/quickstart-es-default-0            1/1     Running   0          47h
pod/quickstart-es-default-1            1/1     Running   0          47h

NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/kubernetes              ClusterIP      10.100.200.1    <none>          443/TCP          27d
service/quickstart-es-default   ClusterIP      None            <none>          <none>           47h
service/quickstart-es-http      LoadBalancer   10.100.200.92   10.195.93.137   9200:30590/TCP   47h

NAME                                     READY   AGE
statefulset.apps/quickstart-es-default   2/2     47h

7. Let's deploy a Kibana instance. To do that create a YML as shown below

create-kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: quickstart
    namespace: default
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP

8. Apply that as shown below.

$ kubectl apply -f create-kibana.yaml

9. To verify everything is up and running we can run a command as follows
  
$ kubectl get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/kibana-sample-kb-f8fcb88d5-jdzh5   1/1     Running   0          2d
pod/quickstart-es-default-0            1/1     Running   0          2d
pod/quickstart-es-default-1            1/1     Running   0          2d

NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/kibana-sample-kb-http   LoadBalancer   10.100.200.46   10.195.93.174   5601:32459/TCP   2d
service/kubernetes              ClusterIP      10.100.200.1    <none>          443/TCP          27d
service/quickstart-es-default   ClusterIP      None            <none>          <none>           2d
service/quickstart-es-http      LoadBalancer   10.100.200.92   10.195.93.137   9200:30590/TCP   2d

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana-sample-kb   1/1     1            1           2d

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-sample-kb-f8fcb88d5   1         1         1       2d

NAME                                     READY   AGE
statefulset.apps/quickstart-es-default   2/2     2d

10. So to access our cluster we will need to obtain the following, which we can do using a script as follows. This was tested on Mac OSX.

What do we need?

  • Elasticsearch password
  • IP address of the LoadBalancer service we created


access.sh

export PASSWORD=`kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'`
export IP=`kubectl get svc quickstart-es-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo $IP
echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200"

echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200/_cat/health?v"

Output:

10.195.93.137

{
  "name" : "quickstart-es-default-1",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "Bbpb7Pu7SmaQaCmEY2Er8g",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

.....

11. Ideally I would load some data into the Elasticsearch cluster BUT let's do that as part of a sample application using "Spring Data Elasticsearch". Clone the demo project as shown below.

$ git clone https://github.com/papicella/boot-elastic-demo.git
Cloning into 'boot-elastic-demo'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 36 (delta 1), reused 36 (delta 1), pack-reused 0
Unpacking objects: 100% (36/36), done.

12. Edit "./src/main/resources/application.yml" with your details for the Elasticsearch cluster above.

spring:
  elasticsearch:
    rest:
      username: elastic
      password: {PASSWORD}
      uris: http://{IP}:9200

13. Package as follows

$ ./mvnw -DskipTests package

14. Run as follows

$ ./mvnw spring-boot:run

....
2020-07-14 11:10:11.947  INFO 76260 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-07-14 11:10:11.954  INFO 76260 --- [           main] c.e.e.demo.BootElasticDemoApplication    : Started BootElasticDemoApplication in 2.495 seconds (JVM running for 2.778)
....

15. Access application using "http://localhost:8080/"




16. If we look at our code we will see the data was loaded into the Elasticsearch cluster using a Java class called "LoadData.java". Ideally the data would already exist in the cluster, but for demo purposes we load some data as part of the Spring Boot application and clear it prior to each application run, given it's just a demo.

2020-07-14 11:12:33.109  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OjThSnMBLjyTRl7lZsDL', make='holden', model='commodore', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:33.584  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OzThSnMBLjyTRl7laMCo', make='holden', model='astra', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.189  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PDThSnMBLjyTRl7lasCC', make='nissan', model='skyline', bodystyles=[BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.744  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PTThSnMBLjyTRl7lbMDe', make='nissan', model='pathfinder', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:35.227  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PjThSnMBLjyTRl7lb8AL', make='ford', model='falcon', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:36.737  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QDThSnMBLjyTRl7lcMDu', make='ford', model='territory', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.266  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QTThSnMBLjyTRl7ldsDU', make='toyota', model='camry', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.777  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QjThSnMBLjyTRl7leMDk', make='toyota', model='corolla', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.285  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QzThSnMBLjyTRl7lesDj', make='kia', model='sorento', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.800  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='RDThSnMBLjyTRl7lfMDg', make='kia', model='sportage', bodystyles=[BodyStyle{type='4-door'}]}

LoadData.java
  
package com.example.elastic.demo;

import com.example.elastic.demo.indices.BodyStyle;
import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;

import static java.util.Arrays.asList;

@Configuration
@Slf4j
public class LoadData {
    @Bean
    public CommandLineRunner initElasticsearchData(CarRepository carRepository) {
        return args -> {
            carRepository.deleteAll();
            log.info("Pre loading " + carRepository.save(new Car("holden", "commodore", asList(new BodyStyle("2-door"), new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("holden", "astra", asList(new BodyStyle("2-door"), new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "skyline", asList(new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "pathfinder", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "falcon", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "territory", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "camry", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "corolla", asList(new BodyStyle("2-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sorento", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sportage", asList(new BodyStyle("4-door")))));
        };
    }
}

17. Our CarRepository interface is defined as follows

CarRepository.java
  
package com.example.elastic.demo.repo;

import com.example.elastic.demo.indices.Car;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface CarRepository extends ElasticsearchRepository <Car, String> {

    Page<Car> findByMakeContaining(String make, Pageable page);

}
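The Car document class itself is not shown in the post. As a rough sketch only (the real class in the repo may differ), based on the index name "vehicle" used in the query below and the fields seen in the LoadData output, it could look something like this:

package com.example.elastic.demo.indices;

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

// Sketch only: field names follow the LoadData output above,
// and the index name matches the "vehicle" index queried below.
@Document(indexName = "vehicle")
public class Car {

    @Id
    private String id;

    private String make;
    private String model;
    private List<BodyStyle> bodystyles;

    public Car(String make, String model, List<BodyStyle> bodystyles) {
        this.make = make;
        this.model = model;
        this.bodystyles = bodystyles;
    }

    // getters, setters and toString() omitted for brevity
}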

18. So let's also view this data using "curl" and Kibana as shown below.

curl -X GET -u "elastic:{PASSWORD}" "http://{IP}:9200/vehicle/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "sort": [
    { "_id": "asc" }
  ]
}
'

Output:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OjThSnMBLjyTRl7lZsDL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "commodore",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "OjThSnMBLjyTRl7lZsDL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OzThSnMBLjyTRl7laMCo",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "astra",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "OzThSnMBLjyTRl7laMCo"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PDThSnMBLjyTRl7lasCC",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "skyline",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "PDThSnMBLjyTRl7lasCC"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PTThSnMBLjyTRl7lbMDe",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "pathfinder",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PTThSnMBLjyTRl7lbMDe"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PjThSnMBLjyTRl7lb8AL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "falcon",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PjThSnMBLjyTRl7lb8AL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QDThSnMBLjyTRl7lcMDu",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "territory",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QDThSnMBLjyTRl7lcMDu"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QTThSnMBLjyTRl7ldsDU",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "camry",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QTThSnMBLjyTRl7ldsDU"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QjThSnMBLjyTRl7leMDk",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "corolla",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QjThSnMBLjyTRl7leMDk"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QzThSnMBLjyTRl7lesDj",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sorento",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QzThSnMBLjyTRl7lesDj"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "RDThSnMBLjyTRl7lfMDg",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sportage",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "RDThSnMBLjyTRl7lfMDg"
        ]
      }
    ]
  }
}

Kibana

Obtain the Kibana HTTP IP as shown below and log in using the username "elastic" and the password we obtained previously.

$ kubectl get svc kibana-sample-kb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.195.93.174




Finally, maybe you want to deploy the application to Kubernetes. To do that, take a look at the Cloud Native Buildpacks CNCF project and/or Tanzu Build Service to turn your code into a container image stored in a registry.



More Information

Spring Data Elasticsearch
https://spring.io/projects/spring-data-elasticsearch

VMware Tanzu Kubernetes Grid Integrated Edition Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid-Integrated-Edition/index.html