Thursday 27 April 2017

Accessing a Pivotal MySQL service instance within Pivotal Cloud Foundry

Recently at a hackathon we used the Pivotal MySQL service rather than a ClearDB MySQL service. As a result we could not connect to our instance from a third-party tool, as the service instance is locked down. There are various ways to access the MySQL service; to me the best two options are as follows.

1. Cloud Foundry CLI MySQL Plugin

cf-mysql-plugin makes it easy to connect the mysql command line client to any MySQL-compatible database used by Cloud Foundry apps. Use it to

  • inspect databases for debugging purposes
  • manually adjust schema or contents in development environments
  • dump and restore databases

Install it as explained in the link below:

  https://github.com/andreasf/cf-mysql-plugin

Using It

1. First ensure you are logged into a Pivotal Cloud Foundry instance. You can verify that as follows

pasapicella@pas-macbook:~$ cf target -o ben.farrelly-org -s hackathon
API endpoint:   https://api.run.pivotal.io
API version:    2.78.0
User:           papicella@pivotal.io
Org:            ben.farrelly-org
Space:          hackathon

2. Verify you have a MySQL instance provisioned

pasapicella@pas-macbook:~$ cf services
Getting services in org ben.farrelly-org / space hackathon as papicella@pivotal.io...
OK

name        service   plan    bound apps                                                     last operation
nab-mysql   p-mysql   100mb   nabhackathon-beacon, nabhackathon-merchant, pivotal-mysqlweb   create succeeded

3. Log in as shown below

pasapicella@pas-macbook:~$ cf mysql nab-mysql

...

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| cf_53318c9c_caec_49be_9e33_075fade26183 |
| information_schema                      |
+-----------------------------------------+
2 rows in set (0.30 sec)

mysql> use cf_53318c9c_caec_49be_9e33_075fade26183;
Database changed

mysql> show tables;
+---------------------------------------------------+
| Tables_in_cf_53318c9c_caec_49be_9e33_075fade26183 |
+---------------------------------------------------+
| beacon                                            |
| beacon_product                                    |
| customer                                          |
| customer_registration                             |
| merchant                                          |
| payment                                           |
| payment_product                                   |
| product                                           |
+---------------------------------------------------+
8 rows in set (0.29 sec)

2. Pivotal MySQL*Web

Pivotal MySQL*Web is a browser-based SQL tool, rendered with a Bootstrap UI, for MySQL PCF service instances. It allows you to run SQL commands and view schema objects from a browser-based interface. Its features include:

  • Multiple Command SQL worksheet for DDL and DML
  • Run Explain Plan across SQL Statements
  • View/Run DDL command against Tables/Views/Indexes/Constraints
  • Command History
  • Auto Bind to Pivotal MySQL Services bound to the Application within Pivotal Cloud Foundry 
  • Manage JDBC Connections
  • Load SQL File into SQL Worksheet from Local File System
  • SQL Worksheet with syntax highlighting support
  • HTTP GET request to auto login without a login form
  • Export SQL query results in JSON or CSV formats
  • Generate DDL for schema objects


It does this by being deployed within Pivotal Cloud Foundry as an application instance, and it auto binds to the MySQL service for you if you choose to bind it as part of the "cf push" with a manifest.yml which looks as follows

---
applications:
- name: pivotal-mysqlweb
  memory: 512M
  instances: 1
  host: pivotal-mysqlweb-${random-word}
  path: ./target/PivotalMySQLWeb-0.0.1-SNAPSHOT.jar
  services:
    - pas-mysql

Install it as explained in the link below:

  https://github.com/pivotal-cf/PivotalMySQLWeb
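
If you also want to reach the same bound p-mysql instance from your own application code, the credentials it needs are delivered through VCAP_SERVICES. Below is a minimal sketch, assuming the service is bound under the label "p-mysql" and that its credentials block exposes a "jdbcUrl" field (field names vary by broker version, so check your own binding), with Jackson and the MySQL JDBC driver on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BoundMySqlSketch
{
    public static void main(String[] args) throws Exception
    {
        // Pull the credentials for the first bound p-mysql instance out of VCAP_SERVICES
        JsonNode creds = new ObjectMapper()
                .readTree(System.getenv("VCAP_SERVICES"))
                .path("p-mysql").get(0).path("credentials");

        // Open a plain JDBC connection and list the tables in the bound schema
        try (Connection conn = DriverManager.getConnection(creds.path("jdbcUrl").asText());
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("show tables"))
        {
            while (rs.next())
            {
                System.out.println(rs.getString(1));
            }
        }
    }
}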


Wednesday 26 April 2017

Cross-origin resource sharing (CORS) from Spring Boot Rest Controllers

I was involved in a hackathon recently and, after creating a few Spring Boot APIs for the UI team to consume, they ran into errors around Cross-Origin Resource Sharing (CORS). For security reasons, browsers prohibit AJAX calls to resources residing outside the current origin.

I have seen this before, and Spring Boot has support to ensure you can control which resources can be accessed outside of the current origin. It's as simple as an annotation, "@CrossOrigin", as shown below. In this example every request handled by this REST controller supports resource calls residing outside the current origin.
  
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@CrossOrigin
@RestController
@RequestMapping(value = "/beacon")
public class BeaconRest
{
    private static Log logger = LogFactory.getLog(BeaconRest.class);

    @Autowired
    private BeaconRepository beaconRepository;

    @RequestMapping(value = "/all",
            method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Beacon> allBeacons()
    {
        logger.info("Invoking /beacon/all RESTful method");
        return beaconRepository.findAll();
    }
}

Of course it's much more flexible than that, as the annotation accepts options such as the allowed origins, methods and headers, and you can read more about it here.

https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/cors.html
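
As an example of those options, below is a hedged sketch of a global CORS configuration (instead of the per-class annotation) that restricts the "/beacon" endpoints to a single UI origin. The origin, allowed methods and max age shown are placeholders, not values from the hackathon project.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
public class CorsConfig
{
    @Bean
    public WebMvcConfigurerAdapter corsConfigurer()
    {
        return new WebMvcConfigurerAdapter()
        {
            @Override
            public void addCorsMappings(CorsRegistry registry)
            {
                // Only the UI origin, only GETs, and cache the pre-flight response for an hour
                registry.addMapping("/beacon/**")
                        .allowedOrigins("https://ui.example.com")   // placeholder origin
                        .allowedMethods("GET")
                        .maxAge(3600);
            }
        };
    }
}

The same attributes (origins, methods, maxAge and so on) can also be set directly on the @CrossOrigin annotation at class or method level.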

Thursday 13 April 2017

Spring Boot Application for Pivotal Cloud Cache Service

I previously blogged about the Pivotal Cloud Cache service in Pivotal Cloud Foundry as follows

http://theblasfrompas.blogspot.com.au/2017/04/getting-started-with-pivotal-cloud.html

In that post I promised it would be followed by a Spring Boot application using the PCC service, to show what the code would look like. That demo exists at the GitHub URL below.

https://github.com/papicella/SpringBootPCCDemo

The GitHub URL above shows how you can clone, package and then push this application to PCF against your own PCC service instance, using the "Spring Cloud GemFire Connector".
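
For a feel of the client-side code, here is a minimal sketch using the plain GemFire/Geode client API rather than the Spring Cloud GemFire Connector the demo itself uses. The locator host and port are placeholders in the style of a PCC service key entry ("host[port]"), and PCC authentication (the developer/operator credentials in the service key) is omitted.

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class PccClientSketch
{
    public static void main(String[] args)
    {
        // Connect to the cluster via a locator taken from the PCC service key
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 55221)   // placeholder values
                .create();

        // PROXY region: no local storage, all operations go to the cache servers
        Region<String, String> region = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("demoregion");

        region.put("pas", "value pas");
        System.out.println(region.get("pas"));

        cache.close();
    }
}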



More Information

Pivotal Cloud Cache Docs
http://docs.pivotal.io/p-cloud-cache/index.html



Monday 10 April 2017

Getting Started with Pivotal Cloud Cache on Pivotal Cloud Foundry

Recently we announced the new cache service Pivotal Cloud Cache (PCC) for Pivotal Cloud Foundry (PCF). In short, Pivotal Cloud Cache is an opinionated, distributed, highly available, high-speed key/value caching service. PCC can be easily horizontally scaled for capacity and performance.

In this post we will show how you would provision a service, log in to the Pulse UI dashboard, connect using GFSH, etc. I won't create a Spring Boot application to use the service at this stage BUT that will follow in a post soon enough.

Steps

1. First you will need the PCC service; if it's been installed it will look like this


2. Now let's view the current plans we have in place as shown below

pasapicella@pas-macbook:~$ cf marketplace -s p-cloudcache
Getting service plan information for service p-cloudcache as papicella@pivotal.io...
OK

service plan   description          free or paid
extra-small    Plan 1 Description   free
extra-large    Plan 5 Description   free

3. Now let's create a service as shown below

pasapicella@pas-macbook:~$ cf create-service p-cloudcache extra-small pas-pcc
Creating service instance pas-pcc in org pivot-papicella / space development as papicella@pivotal.io...
OK

Create in progress. Use 'cf services' or 'cf service pas-pcc' to check operation status.

4. At this point it will asynchronously create the GemFire cluster, which is essentially what PCC is. For more information on GemFire see the docs link here.

You can check the progress one of two ways.

1. Using Pivotal Apps manager as shown below


2. Using a command as follows

pasapicella@pas-macbook:~$ cf service pas-pcc

Service instance: pas-pcc
Service: p-cloudcache
Bound apps:
Tags:
Plan: extra-small
Description: Pivotal CloudCache offers the ability to deploy a GemFire cluster as a service in Pivotal Cloud Foundry.
Documentation url: http://docs.pivotal.io/gemfire/index.html
Dashboard: http://gemfire-yyyyy.run.pez.pivotal.io/pulse

Last Operation
Status: create in progress
Message: Instance provisioning in progress
Started: 2017-04-10T01:34:58Z
Updated: 2017-04-10T01:36:59Z

5. Once complete it will look as follows


6. Now in order to log into both GFSH and Pulse we are going to need to create a service key for the service we just created, which we do as shown below.

pasapicella@pas-macbook:~/pivotal/PCF/services/PCC$ cf create-service-key pas-pcc pas-pcc-key
Creating service key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...
OK

7. Retrieve service keys as shown below

pasapicella@pas-macbook:~$ cf service-key pas-pcc pas-pcc-key
Getting key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...

{
 "locators": [
  "0.0.0.0[55221]",
  "0.0.0.0[55221]",
  "0.0.0.0[55221]"
 ],
 "urls": {
  "gfsh": "http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1",
  "pulse": "http://gemfire-yyyy.run.pez.pivotal.io/pulse"
 },
 "users": [
  {
   "password": "password",
   "username": "developer"
  },
  {
   "password": "password",
   "username": "operator"
  }
 ]
}

8. Now let's log into Pulse. The URL is available as part of the output above

Login Page


Pulse Dashboard: The dashboard page shows how many locators and cache server members we have as part of this default cluster



9. Now let's log into GFSH. Once again the URL is as per the output above

- First we will need to download Pivotal GemFire so we have the GFSH client. Download the ZIP at the link below and extract it to your file system

  https://network.pivotal.io/products/pivotal-gemfire

- Invoke as follows using the path to the extracted ZIP file

$GEMFIRE_HOME/bin/gfsh

pasapicella@pas-macbook:~/pivotal/software/gemfire/pivotal-gemfire-9.0.3/bin$ ./gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    9.0.3

Monitor and Manage Pivotal GemFire
gfsh>connect --use-http --url=http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1 --user=operator --password=password
Successfully connected to: GemFire Manager HTTP service @ http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1

gfsh>

10. Now let's create a region which we will use to store some cache data

$ create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
  
gfsh>create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
              Member                | Status
----------------------------------- | ---------------------------------------------------------------------
cacheserver-PCF-PEZ-Heritage-RP04-1 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-1"
cacheserver-PCF-PEZ-Heritage-RP04-0 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-0"
cacheserver-PCF-PEZ-Heritage-RP04-2 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-2"
cacheserver-PCF-PEZ-Heritage-RP04-3 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-3" 

Note: The region types you can create are explained in the Pivotal GemFire docs, but basically in the example above we create a partitioned region, where primary and backup data is distributed among the cache servers. As you can see, we asked for a single backup copy of each region entry to be placed on a separate cache server for redundancy.

http://gemfire.docs.pivotal.io/geode/developing/region_options/region_types.html#region_types

11. If we return to the Pulse Dashboard UI we will see from the "Data Browser" tab that we have a region


12. Now let's just add some data: a few entries which are simple String key/value pairs
  
gfsh>put --region=/demoregion --key=1 --value="value 1"
Result      : true
Key Class   : java.lang.String
Key         : 1
Value Class : java.lang.String
Old Value   : <NULL>


gfsh>put --region=/demoregion --key=2 --value="value 2"
Result      : true
Key Class   : java.lang.String
Key         : 2
Value Class : java.lang.String
Old Value   : <NULL>


gfsh>put --region=/demoregion --key=3 --value="value 3"
Result      : true
Key Class   : java.lang.String
Key         : 3
Value Class : java.lang.String
Old Value   : <NULL>

13. Finally let's query the data we have in the cache
  
gfsh>query --query="select * from /demoregion"

Result     : true
startCount : 0
endCount   : 20
Rows       : 3

Result
-------
value 3
value 1
value 2

NEXT_STEP_NAME : END

14. We can return to Pulse and invoke the same query from the "Data Browser" tab as shown below.



Of course storing data in a cache isn't useful unless we actually have an application on PCF that can use the cache, BUT that will come in a separate post. Basically we will BIND to this service, connect as a GemFire client using the locators we are given as part of the service key, and then extract the cache data we have just created above by invoking a query.
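
As a preview, a minimal sketch of that client-side query using the plain GemFire/Geode client API might look like the following. The locator host and port are placeholders standing in for the values from the service key, and the PCC credentials handling is omitted.

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.SelectResults;

public class PccQuerySketch
{
    public static void main(String[] args) throws Exception
    {
        // Connect via a locator taken from the PCC service key
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 55221)   // placeholder values
                .create();

        // The same OQL statement we ran in GFSH and Pulse above
        SelectResults<?> results = (SelectResults<?>) cache.getQueryService()
                .newQuery("select * from /demoregion")
                .execute();

        results.forEach(System.out::println);

        cache.close();
    }
}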

More Information

Download PCC for PCF
https://network.pivotal.io/products/cloud-cache

Data Sheet for PCC
https://content.pivotal.io/datasheets/pivotal-cloud-cache

Tuesday 4 April 2017

Pivotal Cloud Foundry Cloud Service Brokers for AWS, Azure and GCP

Pivotal Cloud Foundry (PCF) has cloud service brokers for the public clouds we support, which include AWS, Azure and GCP. You can download and install those service brokers on premise or off premise, giving you the capability to use cloud services where it makes sense for your on premise or off premise cloud native applications.

https://network.pivotal.io/

The three cloud service brokers are as follows:





In the example below we have a PCF install running on vSphere with the AWS service broker tile installed, as shown in the Ops Manager UI


Once installed, this PCF instance can then provision AWS services, and you can do that in one of two ways.

1. Using Apps Manager UI as shown below


2. Use the CF CLI tool, invoking "cf marketplace" to list the services and then "cf create-service" to actually create an instance of the service.



Once provisioned within a SPACE of PCF you can then bind and use the service from applications as you normally would: consume the service by reading the VCAP_SERVICES ENV variable, and essentially access AWS services from your on premise installation of PCF as in the example above.
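
As a rough sketch of what "reading the VCAP_SERVICES ENV variable" looks like in application code, the snippet below simply dumps each bound service instance and its credentials block using Jackson. The service labels and credential fields depend entirely on which broker and plan you provisioned.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VcapServicesPeek
{
    public static void main(String[] args) throws Exception
    {
        // VCAP_SERVICES is injected by Cloud Foundry into every application with bound services
        String vcap = System.getenv("VCAP_SERVICES");

        JsonNode root = new ObjectMapper().readTree(vcap);

        // Top-level keys are service labels (for example the AWS broker's service names),
        // each holding an array of bound instances with a "credentials" block
        root.fields().forEachRemaining(entry ->
                entry.getValue().forEach(instance ->
                        System.out.println(entry.getKey() + " -> "
                                + instance.path("name").asText() + " : "
                                + instance.path("credentials"))));
    }
}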

More Information

GCP service broker:
https://network.pivotal.io/products/gcp-service-broker

AWS service broker:
https://network.pivotal.io/products/pcf-service-broker-for-aws

Azure service broker:
https://network.pivotal.io/products/microsoft-azure-service-broker


Manually running a BOSH errand for Pivotal Cloud Foundry on GCP

Pivotal Ops Manager has various errands it runs for different deployments within a PCF instance. These errands can be switched off manually when installing new tiles or upgrading the platform; in fact, in PCF 1.10 the errands themselves will only run if they need to run, making upgrades a lot faster.

Below I am going to show you how you would manually run an errand if you needed to on a PCF instance running on GCP. These instructions would work for PCF running on AWS, Azure or even vSphere, so they're not specific to PCF on GCP.

1. First login to your Ops Manager VM itself

pasapicella@pas-macbook:~/pivotal/GCP/install/10/opsmanager$ ./ssh-opsman.sh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-66-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Apr  3 23:38:57 UTC 2017

  System load:  0.0                Processes:           141
  Usage of /:   14.7% of 78.71GB   Users logged in:     0
  Memory usage: 68%                IP address for eth0: 0.0.0.0
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

5 packages can be updated.
0 updates are security updates.

Your Hardware Enablement Stack (HWE) is supported until April 2019.

*** System restart required ***
Last login: Mon Apr  3 23:38:59 2017 from 110.175.56.52
ubuntu@om-pcf-110:~$

2. Target the Bosh director which would look like this

ubuntu@om-pcf-110:~$ bosh --ca-cert /var/tempest/workspaces/default/root_ca_certificate target 10.0.0.10
Target set to 'p-bosh'

Note: You may be asked to log in if you have not yet logged in to the BOSH director. You can determine the login details from the Ops Manager UI as follows

- Log into Ops Manager UI
- Click on the tile for the "Ops Manager Director", which would be specific to your IaaS provider; in the example below that is GCP


- Click on the credentials tab


3. Target the correct deployment. In the example below I am targeting the Elastic Runtime deployment.

ubuntu@om-pcf-110:~$ bosh deployment /var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml
Deployment set to '/var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml'

Note: You can list out the deployment names using "bosh deployments"

4. List out the errands as shown below using "bosh errands"

ubuntu@om-pcf-110:~$ bosh errands
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

+-----------------------------+
| Name                        |
+-----------------------------+
| smoke-tests                 |
| push-apps-manager           |
| notifications               |
| notifications-ui            |
| push-pivotal-account        |
| autoscaling                 |
| autoscaling-register-broker |
| nfsbrokerpush               |
| bootstrap                   |
| mysql-rejoin-unsafe         |
+-----------------------------+

5. Now in this example we are going to run the errand "push-apps-manager" and we do it as shown below

$ bosh run errand push-apps-manager

** Output **

ubuntu@om-pcf-110:~$ bosh run errand push-apps-manager
Acting as user 'director' on deployment 'cf-c099637fab39369d6ba0' on 'p-bosh'
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

Director task 621
  Started preparing deployment > Preparing deployment

  Started preparing package compilation > Finding packages to compile. Done (00:00:01)

     Done preparing deployment > Preparing deployment (00:00:05)

  Started creating missing vms > push-apps-manager/32218933-7511-4c0d-b512-731ca69c4254 (0)

...

+ '[' '!' -z 'Invitations deploy log: ' ']'
+ printf '** Invitations deploy log:  \n'
+ printf '*************************************************************************************************\n'
+ cat /var/vcap/packages/invitations/invitations.log

Errand 'push-apps-manager' completed successfully (exit code 0)
ubuntu@om-pcf-110:~$