I recently set up verbose GC logging for an application deployed to Pivotal Cloud Foundry (PCF) and specified a file to write the GC logging info to. Below shows how you can view application files using the CF CLI.
1. Start by invoking the following to show your deployed applications
[Tue Dec 16 09:32:10 papicella@:~/cf/APJ-vcloud ] $ cf apps
Getting apps in org ANZ / space development as pas...
OK
name requested state instances memory disk urls
pas-playjava started 1/1 512M 1G pas-playjava.apj.fe.pivotal.io
pcfhawq started 1/1 512M 1G pcfhawq.apj.fe.pivotal.io
apples-spring-music started 1/1 512M 1G apples-spring-music.apj.fe.pivotal.io
pas-petclinic started 1/1 512M 1G pas-petclinic.apj.fe.pivotal.io
2. Now let's view the files for the application
[Tue Dec 16 09:33:29 papicella@:~/cf/APJ-vcloud ] $ cf files apples-spring-music
Getting files for app apples-spring-music in org ANZ / space development as pas...
OK
.bash_logout 220B
.bashrc 3.0K
.profile 675B
app/ -
logs/ -
run.pid 3B
staging_info.yml 495B
tmp/ -
3. Now let's view the contents of a specific file by providing the full path to the file, in this case our GC log file.
[Tue Dec 16 09:33:41 papicella@:~/cf/APJ-vcloud ] $ cf files apples-spring-music /app/apples_gc.log
Getting files for app apples-spring-music in org ANZ / space development as pas...
OK
OpenJDK 64-Bit Server VM (25.40-b06) for linux-amd64 JRE (1.8.0_25--vagrant_2014_10_17_04_37-b17), built on Oct 17 2014 04:40:49 by "vagrant" with gcc 4.4.3
Memory: 4k page, physical 16434516k(1028892k free), swap 16434488k(16434476k free)
CommandLine flags: -XX:InitialHeapSize=391468032 -XX:MaxHeapSize=391468032 -XX:MaxMetaspaceSize=67108864 -XX:MetaspaceSize=67108864 -XX:OnOutOfMemoryError=/home/vcap/app/.java-buildpack/open_jdk_jre/bin/killjava.sh -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:ThreadStackSize=995 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
1.522: [GC (Allocation Failure) 95744K->16737K(367104K), 0.0590876 secs]
1.786: [GC (Allocation Failure) 112481K->23072K(367104K), 0.0735813 secs]
2.075: [GC (Allocation Failure) 118816K->32499K(367104K), 0.0531070 secs]
2.315: [GC (Allocation Failure) 128243K->45124K(367104K), 0.0428136 secs]
2.893: [GC (Allocation Failure) 140868K->53805K(367104K), 0.0375078 secs]
4.143: [GC (Allocation Failure) 149549K->63701K(335360K), 0.1507024 secs]
5.686: [GC (Allocation Failure) 127701K->69319K(331776K), 0.0703850 secs]
7.060: [GC (Allocation Failure) 133319K->70962K(348672K), 0.0121269 secs]
8.458: [GC (Allocation Failure) 130866K->69734K(322560K), 0.0228917 secs]
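Each of the -XX:+PrintGC lines above has the form `timestamp: [GC (cause) before->after(heap), pause secs]`. As a rough illustration (the class and method names below are my own, not part of the buildpack or the app), a few lines of Java can pull the pause time out of such a line:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogPause {
    // Matches lines like:
    // 1.522: [GC (Allocation Failure) 95744K->16737K(367104K), 0.0590876 secs]
    private static final Pattern GC_LINE = Pattern.compile(
            "(\\d+\\.\\d+): \\[GC \\(([^)]+)\\) (\\d+)K->(\\d+)K\\((\\d+)K\\), (\\d+\\.\\d+) secs\\]");

    // Returns the pause time in seconds, or -1.0 if the line is not a GC line.
    public static double pauseSeconds(String line) {
        Matcher m = GC_LINE.matcher(line.trim());
        return m.matches() ? Double.parseDouble(m.group(6)) : -1.0;
    }

    public static void main(String[] args) {
        String line = "1.522: [GC (Allocation Failure) 95744K->16737K(367104K), 0.0590876 secs]";
        System.out.println("pause = " + pauseSeconds(line) + " secs");
    }
}
```

Feeding the whole log through this and summing the results gives total GC pause time over the app's lifetime.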
Tuesday, 16 December 2014
Monday, 8 December 2014
Typesafe Activator, Play Framework applications deployed to Pivotal Cloud Foundry
I decided to quickly build a Play Framework Scala application using Typesafe Activator and deploy it to Pivotal Cloud Foundry. You can read more about Typesafe Activator below.
https://typesafe.com/activator
Here are the steps to deploy a Scala Play Framework application created using Typesafe Activator. I created a basic hello world Scala application with the Play Framework; the focus here is on what is needed to get it deployed on Pivotal Cloud Foundry.
Note: This assumes we have created an application named "hello-play-scala" and are in that directory as we create the files for deployment.
1. Create a distribution ZIP file as follows once you have finished developing your application
> ./activator dist
2. Create a manifest file as follows, referring to the dist ZIP file created in step 1 above.
applications:
- name: pas-helloworld-scala
memory: 756M
instances: 1
host: pas-helloworld-scala
domain: apj.fe.pivotal.io
path: ./target/universal/hello-play-scala-1.0-SNAPSHOT.zip
3. Create a build.sh file and make it executable. This simple shell script calls sbt/activator:
java -jar activator-launch-1.2.12.jar dist
4. Deploy as shown below.
[Mon Dec 08 10:31:14 papicella@:~/vmware/software/scala/apps/hello-play-scala ] $ cf push -f manifest.yml
Using manifest file manifest.yml
Creating app pas-helloworld-scala in org ANZ / space development as pas...
OK
Using route pas-helloworld-scala.apj.fe.pivotal.io
Binding pas-helloworld-scala.apj.fe.pivotal.io to pas-helloworld-scala...
OK
Uploading pas-helloworld-scala...
Uploading app files from: target/universal/hello-play-scala-1.0-SNAPSHOT.zip
Uploading 1.1M, 131 files
OK
Starting app pas-helloworld-scala in org ANZ / space development as pas...
OK
-----> Downloaded app package (26M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/cloudfoundry/java-buildpack.git#7cdcf1a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (0.9s)
-----> Downloading Play Framework Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
-----> Uploading droplet (57M)
1 of 1 instances running
App started
Showing health and status for app pas-helloworld-scala in org ANZ / space development as pas...
OK
requested state: started
instances: 1/1
usage: 756M x 1 instances
urls: pas-helloworld-scala.apj.fe.pivotal.io
state since cpu memory disk
#0 running 2014-12-08 10:32:27 AM 0.0% 164.6M of 756M 118.8M of 1G
5. Finally access in a browser
Wednesday, 3 December 2014
Deploying Spring Boot Applications to Pivotal Cloud Foundry from STS
The example below shows how to use STS (Spring Tool Suite) to deploy a Spring Boot web application directly from the IDE itself. I created a basic Spring Boot web application using the template engine Thymeleaf. The application isn't that fancy; it simply displays a products page of some mocked-up Products. This blog entry just shows how you could deploy it to Pivotal Cloud Foundry from the IDE itself.
1. First create a Pivotal Cloud Foundry Server connection. The image below shows the connection and a single application.
2. Right click on your Spring Boot application and select "Configure -> Enable as cloud foundry app"
3. Drag and Drop The project onto the Cloud Foundry Connection.
4. At this point a dialog appears asking for an application name as shown below.
5. Click Next
6. Select deployment options and click Next
7. Bind to existing services if you need to
8. Click next
9. Click finish
At this point it will push the application to your Cloud Foundry instance.
Once complete, the Console window in STS will show something like the following:
Checking application - SpringBootWebCloudFoundry
Generating application archive
Creating application
Pushing application
Application successfully pushed
Starting and staging application
Got staging request for app with id bb3c63f5-c32d-4e27-a834-04076f2af35a
Updated app with guid bb3c63f5-c32d-4e27-a834-04076f2af35a ({"state"=>"STARTED"})
-----> Downloaded app package (12M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/cloudfoundry/java-buildpack.git#7cdcf1a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (0.9s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
-----> Uploading droplet (43M)
Starting app instance (index 0) with guid bb3c63f5-c32d-4e27-a834-04076f2af35a
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.1.9.RELEASE)
2014-12-03 11:09:50.434 INFO 32 --- [ main] loudProfileApplicationContextInitializer : Adding 'cloud' to list of active profiles
2014-12-03 11:09:50.447 INFO 32 --- [ main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext
2014-12-03 11:09:50.497 INFO 32 --- [ main] nfigurationApplicationContextInitializer : Adding cloud service auto-reconfiguration to ApplicationContext
2014-12-03 11:09:50.521 INFO 32 --- [ main] apples.sts.web.Application : Starting Application on 187dfn5m5ve with PID 32 (/home/vcap/app started by vcap in /home/vcap/app)
2014-12-03 11:09:50.577 INFO 32 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@374d2f77: startup date [Wed Dec 03 11:09:50 UTC 2014]; root of context hierarchy
2014-12-03 11:09:50.930 WARN 32 --- [ main] .i.s.PathMatchingResourcePatternResolver : Skipping [/home/vcap/app/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.4.0_RELEASE.jar] because it does not denote a directory
2014-12-03 11:09:51.600 WARN 32 --- [ main] .i.s.PathMatchingResourcePatternResolver : Skipping [/home/vcap/app/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.4.0_RELEASE.jar] because it does not denote a directory
2014-12-03 11:09:52.349 INFO 32 --- [ main] urceCloudServiceBeanFactoryPostProcessor : Auto-reconfiguring beans of type javax.sql.DataSource
2014-12-03 11:09:52.358 INFO 32 --- [ main] urceCloudServiceBeanFactoryPostProcessor : No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
2014-12-03 11:09:53.109 INFO 32 --- [ main] .t.TomcatEmbeddedServletContainerFactory : Server initialized with port: 61097
2014-12-03 11:09:53.391 INFO 32 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
2014-12-03 11:09:53.393 INFO 32 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/7.0.56
2014-12-03 11:09:53.523 INFO 32 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2014-12-03 11:09:53.524 INFO 32 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2950 ms
2014-12-03 11:09:54.201 INFO 32 --- [ost-startStop-1] o.s.b.c.e.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
2014-12-03 11:09:54.205 INFO 32 --- [ost-startStop-1] o.s.b.c.embedded.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2014-12-03 11:09:54.521 INFO 32 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:54.611 INFO 32 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.String apples.sts.web.WelcomeController.welcome(org.springframework.ui.Model)
2014-12-03 11:09:54.612 INFO 32 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/products],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.String apples.sts.web.ProductController.listProducts(org.springframework.ui.Model)
2014-12-03 11:09:54.615 INFO 32 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[text/html],custom=[]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest)
2014-12-03 11:09:54.616 INFO 32 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public org.springframework.http.ResponseEntity> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2014-12-03 11:09:54.640 INFO 32 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:54.641 INFO 32 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:55.077 INFO 32 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2014-12-03 11:09:55.156 INFO 32 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 61097/http
2014-12-03 11:09:55.167 INFO 32 --- [ main] apples.sts.web.Application : Started Application in 5.918 seconds (JVM running for 6.712)
You can also view the deployed application details in STS by double clicking on it as shown below.
Sunday, 23 November 2014
PCFHawq*Web: a browser-based web application for the PHD service within Cloud Foundry
Pivotal HD for Pivotal CF delivers Pivotal HD, Pivotal's leading Hadoop distribution, as a Pivotal CF service. Pivotal HD is a commercially supported distribution of Apache Hadoop. The Pivotal HD Data Service includes HDFS, YARN and MapReduce. It also includes HAWQ, Pivotal's high-performance SQL database on HDFS, and GemFire XD, Pivotal's in-memory OLTP SQL processing engine.
https://network.pivotal.io/products/pivotal-hd-service
Pivotal PCFHawq*Web is a browser-based schema administration tool for HAWQ within Pivotal Cloud Foundry 1.3. It supports auto-binding to a PHD service but can also run stand-alone outside of PCF. If you don't bind the application to a PHD instance it presents a login page allowing you to connect to HAWQ manually; when bound to a PHD service it connects automatically using the VCAP_SERVICES credentials. It supports the following features:
- Browse tables/views/external tables
- Save query results in CSV or JSON format
- SQL Worksheet to load/execute SQL DML/DDL statements
Below is the GitHub project for this tool.
More Info
https://github.com/papicella/PCFHawqWeb
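As a hedged illustration of how an app could read HAWQ credentials out of VCAP_SERVICES when bound to a PHD service (the JSON field names below are illustrative assumptions, not taken from the actual service; a real app would use a JSON parser rather than a regex):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VcapCreds {
    // Pull a named string field out of a VCAP_SERVICES-style JSON blob.
    // A regex keeps this sketch dependency-free; prefer a JSON parser in real code.
    static String field(String json, String name) {
        Matcher m = Pattern
                .compile("\"" + Pattern.quote(name) + "\"\\s*:\\s*\"([^\"]+)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Fallback blob for running outside PCF; the field names here
        // (hawq_server, hawq_port) are assumptions for the sketch only.
        String vcap = System.getenv("VCAP_SERVICES");
        if (vcap == null) {
            vcap = "{\"p-hd\":[{\"credentials\":"
                 + "{\"hawq_server\":\"10.0.0.1\",\"hawq_port\":\"5432\"}}]}";
        }
        System.out.println(field(vcap, "hawq_server") + ":" + field(vcap, "hawq_port"));
    }
}
```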
Friday, 7 November 2014
Starting a Pivotal GemFireXD Distributed System from IntelliJ IDEA
The example below shows how you can start a Pivotal GemFireXD distributed system from IntelliJ IDEA. Here we will start a locator, which has Pulse enabled, as well as one member. We use the following class methods to achieve this from an IDE such as IntelliJ:
FabricServiceManager.getFabricLocatorInstance()
FabricServiceManager.getFabricServerInstance()
1. Add the following to your maven POM file to ensure the correct libraries are present.
<dependency>
    <groupId>com.pivotal.gemfirexd</groupId>
    <artifactId>gemfirexd</artifactId>
    <version>1.3.0</version>
</dependency>
<dependency>
    <groupId>com.pivotal.gemfirexd</groupId>
    <artifactId>gemfirexd-client</artifactId>
    <version>1.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <version>8.0.14</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-logging-juli</artifactId>
    <version>8.0.14</version>
</dependency>
2. Create a start locator class as follows
package pivotal.au.gemfirexd.demos.startup;

import com.pivotal.gemfirexd.FabricLocator;
import com.pivotal.gemfirexd.FabricServiceManager;

import java.sql.SQLException;
import java.util.Properties;

/**
 * Created by papicella on 4/11/2014.
 */
public class StartLocator1 {
    public static void main(String[] args) throws SQLException, InterruptedException {
        Properties serverProps = new Properties();
        serverProps.setProperty("sys-disk-dir", "./gfxd/locator1");
        serverProps.setProperty("server-bind-address", "localhost");
        serverProps.setProperty("jmx-manager-start", "true");
        serverProps.setProperty("jmx-manager-http-port", "7075");
        serverProps.setProperty("jmx-manager-bind-address", "localhost");

        FabricLocator locator = FabricServiceManager.getFabricLocatorInstance();
        locator.start("localhost", 41111, serverProps);
        locator.startNetworkServer("127.0.0.1", 1527, null);

        System.out.println("Locator started ... ");

        // Keep the JVM alive while the locator runs.
        Object lock = new Object();
        synchronized (lock) {
            while (true) {
                lock.wait();
            }
        }
    }
}
3. Edit the run configuration to ensure you specify the GEMFIREXD ENV variable as shown below.
Note: This is needed to ensure Pulse can start when the locator starts
4. Create a start server class as follows.
package pivotal.au.gemfirexd.demos.startup;

import com.pivotal.gemfirexd.FabricServer;
import com.pivotal.gemfirexd.FabricServiceManager;

import java.sql.SQLException;
import java.util.Properties;

public class StartServer1 {
    public static void main(String[] args) throws SQLException, InterruptedException {
        FabricServer server = FabricServiceManager.getFabricServerInstance();

        Properties serverProps = new Properties();
        serverProps.setProperty("server-groups", "mygroup");
        serverProps.setProperty("persist-dd", "false");
        serverProps.setProperty("sys-disk-dir", "./gfxd/server1");
        serverProps.setProperty("host-data", "true");
        serverProps.setProperty("locators", "localhost[41111]");

        server.start(serverProps);
        server.startNetworkServer("127.0.0.1", 1528, null);

        // Keep the JVM alive while the server runs.
        Object lock = new Object();
        synchronized (lock) {
            while (true) {
                lock.wait();
            }
        }
    }
}
5. Start the locator by running "StartLocator1" class.
6. Start one server by running "StartServer1" class.
7. Connect to pulse to verify you have a 2 node distributed system with one locator and one member.
Using URL: http://localhost:7075/pulse/Login.html
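With the locator and server up, any JDBC client can reach the network server the locator started on port 1527. A minimal sketch under the assumption that gemfirexd-client.jar is on the classpath and the cluster above is running (the class name here is my own; SYS.MEMBERS is GemFireXD's membership system table):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GfxdJdbcCheck {
    // Thin-client JDBC URL for the network server started on the given host/port.
    static String url(String host, int port) {
        return "jdbc:gemfirexd://" + host + ":" + port + "/";
    }

    public static void main(String[] args) throws Exception {
        String jdbcUrl = url("127.0.0.1", 1527);
        if (args.length == 0) {
            // No argument: just show the URL. Pass any argument to actually
            // connect (requires gemfirexd-client.jar and the running cluster).
            System.out.println("JDBC URL: " + jdbcUrl);
            return;
        }
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM sys.members")) {
            rs.next();
            System.out.println("members in distributed system: " + rs.getInt(1));
        }
    }
}
```

With one locator and one member running, the count reported should be 2, matching what Pulse shows.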
Tuesday, 4 November 2014
Starting a Pivotal GemFireXD server from Java
The FabricServer interface provides an easy way to start an embedded GemFire XD server
process in an existing Java application.
In short code as follows will get you started. Use this in DEV/TEST scenarios not for production use.
package pivotal.au.gemfirexd.demos.startup;

import com.pivotal.gemfirexd.FabricServer;
import com.pivotal.gemfirexd.FabricServiceManager;

import java.sql.SQLException;
import java.util.Properties;

public class StartServer1 {
    public static void main(String[] args) throws SQLException, InterruptedException {
        FabricServer server = FabricServiceManager.getFabricServerInstance();

        Properties serverProps = new Properties();
        serverProps.setProperty("server-groups", "mygroup");
        serverProps.setProperty("persist-dd", "false");
        serverProps.setProperty("sys-disk-dir", "./gfxd/server1");
        serverProps.setProperty("host-data", "true");

        server.start(serverProps);
        server.startNetworkServer("127.0.0.1", 1527, null);

        // Keep the JVM alive while the server runs.
        Object lock = new Object();
        synchronized (lock) {
            while (true) {
                lock.wait();
            }
        }
    }
}
More Information
http://gemfirexd.docs.pivotal.io/latest/userguide/index.html#developers_guide/topics/server-side/fabricserver.html
Thursday, 30 October 2014
Creating a WebLogic 12c Data Source Connection to Pivotal GemFireXD 1.3
I am going to show how you would create a WebLogic data source to Pivotal GemFireXD 1.3. In this example I am using the developer edition of WebLogic, known as "Free Oracle WebLogic Server 12c (12.1.3) Zip Distribution and Installers for Developers". You can download / configure it as follows.
http://www.oracle.com/technetwork/middleware/downloads/index-087510.html
Note: I am assuming you have WebLogic 12c running with GemFireXD also running. I am also assuming a WLS install directory as follows, with a domain called "mydomain"
/Users/papicella/vmware/software/weblogic/wls12130
1. Ensure you have the GemFireXD client driver copied into your WLS domain lib directory as follows, prior to starting WLS
/Users/papicella/vmware/software/weblogic/wls12130/user_projects/domains/mydomain/lib/gemfirexd-client.jar
2. Navigate to the WebLogic Console as follows
http://localhost:7001/console/
3. Login using your server credentials
4. From the Domain Structure tree navigate to "Services -> Data Sources"
5. Click on "New -> Generic Data Source"
6. Fill in the form as follows
Name: GemFireXD-DataSource
JNDI Name: jdbc/gemfirexd-ds
Type: Select "Other" from the drop down list box
7. Click "Next"
8. Click "Next"
9. Uncheck "Supports Global Transactions" and click next
10. Enter the following details for credentials. The GemFireXD cluster is not set up for authentication, so this is just a placeholder username/password to allow us to proceed.
Username: app
Password: app
11. Click "Next"
12. Enter the following CONFIG parameters for your GemFireXD Cluster
Driver Class Name: com.pivotal.gemfirexd.jdbc.ClientDriver
URL: jdbc:gemfirexd://localhost:1527/
Test Table Name: sysibm.sysdummy1
Leave the rest at their default values; it's vital you don't alter the defaults here.
13. Click the "Test Configuration" button at this point to verify you can connect. If successful, you will see a message as follows.
14. Click "Next"
15. Check the server you wish to target this Data Source at. If you don't do this, the Data Source will not be deployed or accessible. In a DEV-only WLS install you only have "myserver" to select.
16. Click "Finish"
It should show you're all done and that no restarts are required. To access the Data Source you need to use JNDI with the path "jdbc/gemfirexd-ds"
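From application code running in the same WebLogic server (a servlet, EJB, and so on), the data source is then obtained via that JNDI path. A minimal sketch under that assumption; the class name is my own, and it reuses the same sysibm.sysdummy1 test table configured above:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class GemFireXdJndiLookup {
    // JNDI path configured for the data source in the steps above.
    static final String JNDI_NAME = "jdbc/gemfirexd-ds";

    // Must run inside the container so that new InitialContext()
    // resolves against WebLogic's JNDI tree.
    public static int ping() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup(JNDI_NAME);
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1 FROM sysibm.sysdummy1")) {
            rs.next();
            return rs.getInt(1); // 1 if the round trip to GemFireXD worked
        }
    }
}
```

Because the connection comes from the container-managed pool, there is no driver class or URL in application code; moving clusters only means editing the data source in the console.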
Thursday, 23 October 2014
Using the tc Server build pack for Pivotal Cloud Foundry 1.3
On Pivotal Network, at the link below, you will find various buildpacks beyond those shipped with PCF that you can download, apply to PCF, and use for your applications.
https://network.pivotal.io/products/pivotal-cf
I am going to show how you would take one of these buildpacks, install it, and then consume it from an application. In this demo I am going to use "tc server buildpack (offline) v2.4".
1. Log in as an admin user and upload the buildpack as shown below. I am adding this buildpack in the last position, which is position 6.
[Tue Oct 21 20:36:01 papicella@:~/cf/buildpacks ] $ cf create-buildpack tc_server_buildpack_offline tc-server-buildpack-offline-v2.4.zip 6
Creating buildpack tc_server_buildpack_offline...
OK
Uploading buildpack tc_server_buildpack_offline...
OK
2. View buildpacks, which should show the one we just uploaded above.
[Thu Oct 23 11:15:18 papicella@:~/cf/APJ1 ] $ cf buildpacks
Getting buildpacks...
buildpack position enabled locked filename
java_buildpack_offline 1 true false java-buildpack-offline-v2.4.zip
ruby_buildpack 2 true false ruby_buildpack-offline-v1.1.0.zip
nodejs_buildpack 3 true false nodejs_buildpack-offline-v1.0.1.zip
python_buildpack 4 true false python_buildpack-offline-v1.0.1.zip
go_buildpack 4 true false go_buildpack-offline-v1.0.1.zip
php_buildpack 5 true false php_buildpack-offline-v1.0.1.zip
tc_server_buildpack_offline 6 true false tc-server-buildpack-offline-v2.4.zip
3. Push the application using the buildpack uploaded above. Below is a simple manifest which refers to the buildpack I uploaded.
manifest.yml
applications:
- name: pcfhawq
memory: 512M
instances: 1
host: pcfhawq
domain: yyyy.fe.dddd.com
path: ./pcfhawq.war
buildpack: tc_server_buildpack_offline
services:
- phd-dev
[Thu Oct 23 11:36:26 papicella@:~/cf/buildpacks ] $ cf push -f manifest-apj1.yml
Using manifest file manifest-apj1.yml
Creating app pcfhawq-web in org pas-org / space apple as pas...
OK
Creating route yyyy.apj1.dddd.gopivotal.com...
OK
Binding pcfhawq-web.yyyy.fe.dddd.com to pcfhawq-web...
OK
Uploading pcfhawq-web...
Uploading app files from: pcfhawq.war
Uploading 644.1K, 181 files
OK
Binding service phd-dev to app pcfhawq-web in org pas-org / space apple as pas...
OK
Starting app pcfhawq-web in org pas-org / space apple as pas...
OK
-----> Downloaded app package (5.6M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/pivotal-cf/tc-server-buildpack.git#396ad0a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.3s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
Modifying /WEB-INF/web.xml for Auto Reconfiguration
-----> Downloading Tc Server Instance 2.9.6_RELEASE from http://download.run.pivotal.io/tc-server/tc-server-2.9.6_RELEASE.tar.gz (found in cache)
Instantiating tc Server in .java-buildpack/tc_server (3.4s)
-----> Downloading Tc Server Lifecycle Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Access Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Uploading droplet (45M)
1 of 1 instances running
App started
Showing health and status for app pcfhawq-web in org pas-org / space apple as pas...
OK
requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pcfhawq-web.yyyy.fe.dddd.com
state since cpu memory disk
#0 running 2014-10-23 11:37:56 AM 0.0% 398.6M of 1G 109.2M of 1G
4. Verify within the DEV console that the application is using the buildpack you targeted.
More Information
Buildpacks
http://docs.pivotal.io/pivotalcf/buildpacks/index.html
https://network.pivotal.io/products/pivotal-cf
I am going to show how you would take one of these build packs , install it and then consume it from an application. In this demo I am going to use "tc server buildpack (offline) v2.4"
1. Log in as admin user and upload the build pack as shown below. I am adding this build pack in the last position which is position 6.
[Tue Oct 21 20:36:01 papicella@:~/cf/buildpacks ] $ cf create-buildpack tc_server_buildpack_offline tc-server-buildpack-offline-v2.4.zip 6
Creating buildpack tc_server_buildpack_offline...
OK
Uploading buildpack tc_server_buildpack_offline...
OK
2. View buildpacks, which should show the one we just uploaded above.
[Thu Oct 23 11:15:18 papicella@:~/cf/APJ1 ] $ cf buildpacks
Getting buildpacks...
buildpack position enabled locked filename
java_buildpack_offline 1 true false java-buildpack-offline-v2.4.zip
ruby_buildpack 2 true false ruby_buildpack-offline-v1.1.0.zip
nodejs_buildpack 3 true false nodejs_buildpack-offline-v1.0.1.zip
python_buildpack 4 true false python_buildpack-offline-v1.0.1.zip
go_buildpack 4 true false go_buildpack-offline-v1.0.1.zip
php_buildpack 5 true false php_buildpack-offline-v1.0.1.zip
tc_server_buildpack_offline 6 true false tc-server-buildpack-offline-v2.4.zip
3. Push the application using the buildpack uploaded above. Below is a simple manifest that refers to the buildpack I uploaded.
manifest.yml

applications:
- name: pcfhawq
  memory: 512M
  instances: 1
  host: pcfhawq
  domain: yyyy.fe.dddd.com
  path: ./pcfhawq.war
  buildpack: tc_server_buildpack_offline
  services:
  - phd-dev
[Thu Oct 23 11:36:26 papicella@:~/cf/buildpacks ] $ cf push -f manifest-apj1.yml
Using manifest file manifest-apj1.yml
Creating app pcfhawq-web in org pas-org / space apple as pas...
OK
Creating route yyyy.apj1.dddd.gopivotal.com...
OK
Binding pcfhawq-web.yyyy.fe.dddd.com to pcfhawq-web...
OK
Uploading pcfhawq-web...
Uploading app files from: pcfhawq.war
Uploading 644.1K, 181 files
OK
Binding service phd-dev to app pcfhawq-web in org pas-org / space apple as pas...
OK
Starting app pcfhawq-web in org pas-org / space apple as pas...
OK
-----> Downloaded app package (5.6M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/pivotal-cf/tc-server-buildpack.git#396ad0a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.3s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
Modifying /WEB-INF/web.xml for Auto Reconfiguration
-----> Downloading Tc Server Instance 2.9.6_RELEASE from http://download.run.pivotal.io/tc-server/tc-server-2.9.6_RELEASE.tar.gz (found in cache)
Instantiating tc Server in .java-buildpack/tc_server (3.4s)
-----> Downloading Tc Server Lifecycle Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Access Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Uploading droplet (45M)
1 of 1 instances running
App started
Showing health and status for app pcfhawq-web in org pas-org / space apple as pas...
OK
requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pcfhawq-web.yyyy.fe.dddd.com
state since cpu memory disk
#0 running 2014-10-23 11:37:56 AM 0.0% 398.6M of 1G 109.2M of 1G
4. Verify within the Developer Console that the application is using the buildpack you targeted.
More Information
Buildpacks
http://docs.pivotal.io/pivotalcf/buildpacks/index.html
https://network.pivotal.io/products/pivotal-cf
Tuesday, 21 October 2014
Monday, 20 October 2014
Connecting to Pivotal Cloud Foundry Ops Metrics using Java VisualVM
The Pivotal Ops Metrics tool is a JMX extension for Elastic Runtime. Pivotal Ops Metrics collects and exposes system data from Cloud Foundry components via a JMX endpoint. Use this system data to monitor your installation and assist in troubleshooting. Below is the tile once installed and available with Pivotal Cloud Foundry Ops Manager.
Once installed and configured, metrics for Cloud Foundry components automatically report to the JMX endpoint. Your JMX client uses the credentials supplied to connect to the IP address of the Pivotal Ops Metrics JMX Provider at port 44444.
1. Start jvisualvm
2. Under Plugins, ensure the VisualVM-MBeans plugin is installed as shown below, or install it to be able to view the MBeans.
3. Create a JMX connection as shown below
4. Finally the CF MBeans can be viewed as shown below.
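For a programmatic JMX client (rather than jvisualvm), the provider is addressed with an RMI-form JMX service URL on port 44444. A minimal sketch; the IP address is a placeholder and the standard /jmxrmi path is an assumption, not something confirmed by the Ops Metrics docs:

```java
import javax.management.remote.JMXServiceURL;

public class OpsMetricsUrl {

    // Builds the RMI-form JMX service URL for a given Ops Metrics JMX Provider host.
    // Port 44444 is the documented provider port; the /jmxrmi path assumes the
    // standard RMI connector naming.
    public static JMXServiceURL serviceUrl(String host) throws Exception {
        return new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + host + ":44444/jmxrmi");
    }

    public static void main(String[] args) throws Exception {
        // 10.0.0.10 is a placeholder IP for the JMX Provider
        System.out.println(serviceUrl("10.0.0.10"));
        // A client would then connect with the supplied credentials, e.g.:
        // JMXConnectorFactory.connect(serviceUrl(host), environmentWithCredentials);
    }
}
```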
More Information
Deploying Pivotal Ops Metrics
http://docs.pivotal.io/pivotalcf/customizing/deploy-metrics.html
SQLShell accessing Pivotal GemFire XD 1.3
I stumbled upon SQLShell recently via the URL below. Below I will show how you can connect to Pivotal GemFireXD using SQLShell. I used this to export query results as CSV.
http://software.clapper.org/sqlshell/users-guide.html
Note: These instructions assume SQLShell is already installed and are for Mac OS X.
1. Create a file at $HOME/.sqlshell/config as shown below. I just took the sample it ships with and added a driver alias for "gemfirexd".
# ---------------------------------------------------------------------------
# initialization file for SQLShell
[settings]
#colspacing: 2
[drivers]
# Driver aliases.
postgresql = org.postgresql.Driver
postgres = org.postgresql.Driver
mysql = com.mysql.jdbc.Driver
sqlite = org.sqlite.JDBC
sqlite3 = org.sqlite.JDBC
oracle = oracle.jdbc.driver.OracleDriver
access = sun.jdbc.odbc.JdbcOdbcDriver
gemfirexd = com.pivotal.gemfirexd.jdbc.ClientDriver
[vars]
historyDir: ${env.HOME}/.sqlshell
[db_postgres]
aliases: post
url: jdbc:postgresql://localhost:5432/sampledb
driver: postgres
user: ${system.user.name}
password:
history: $vars.historyDir/postgres.hist
[db_mysql]
#aliases:
driver: mysql
url: jdbc:mysql://localhost:3306/sampledb
user: ${system.user.name}
password:
history: $vars.historyDir/mysql.hist
[db_sqlite3]
aliases: sqlite3
url: jdbc:sqlite:/tmp/sample.db
driver: sqlite
history: $vars.historyDir/sqlite3.hist
[db_oracle]
aliases: ora
schema: example
url: jdbc:oracle:thin:@localhost:1521:sampledb
user: ${system.user.name}
password:
driver: oracle
history: $vars.historyDir/scrgskd
[db_access]
driver: access
url: jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=/tmp/sample.mdb;DriverID=22}
2. Add the Pivotal GemFireXD client driver "gemfirexd-client.jar" to "/Applications/sqlshell/lib"
3. With the Pivotal GemFireXD cluster up and running, connect and run some commands as shown below.
[Mon Oct 20 11:56:10 papicella@:~/vmware/software/sqlshell ] $ sqlshell gemfirexd,jdbc:gemfirexd://localhost:1527
SQLShell, version 0.8.1 (2012/03/16 09:43:31)
Copyright (c) 2009-2011 Brian M. Clapper
Using JLine
Type "help" for help. Type ".about" for more information.
sqlshell> .set schema APP
sqlshell> .show tables
ALL_EMPS
APPLES_OFFHEAP
CUSTOMERS
DEPT
EMP
EMPLOYEES
EMPS_IN_DEPT_10
EMPS_IN_DEPT_20
EMPS_IN_DEPT_30
EMPS_IN_DEPT_40
OFFICES
ORDERDETAILS
ORDERS
PAYMENTS
PERSON
PRODUCTLINES
PRODUCTS
TEST_ASYNC
TEST_ASYNC2
TEST_CALLBACKLISTENER
sqlshell> select * from dept;
Execution time: 0.21 seconds
Retrieval time: 0.6 seconds
7 rows returned.
DEPTNO DNAME      LOC
------ ---------- --------
10     ACCOUNTING NEW YORK
20     RESEARCH   DALLAS
30     SALES      CHICAGO
40     OPERATIONS BRISBANE
50     MARKETING  ADELAIDE
60     DEV        PERTH
70     SUPPORT    SYDNEY
sqlshell> .capture to /tmp/results.csv
Capturing result sets to: /tmp/results.csv
sqlshell> select * from emp where deptno = 10;
Execution time: 0.18 seconds
Retrieval time: 0.5 seconds
3 rows returned.
EMPNO ENAME  JOB       MGR  HIREDATE              SAL  COMM DEPTNO
----- ------ --------- ---- --------------------- ---- ---- ------
7782  CLARK  MANAGER   7839 1981/06/09 00:00:00.0 2450 NULL 10
7839  KING   PRESIDENT NULL 1981/11/17 00:00:00.0 5000 NULL 10
7934  MILLER CLERK     7782 1982/01/23 00:00:00.0 1300 NULL 10
sqlshell> .capture off
No longer capturing query results.
sqlshell>
Wednesday, 8 October 2014
Spring XD Pivotal Gemfire Sink Demo
Spring XD is a unified, distributed, and extensible system for data ingestion, real-time analytics, batch processing, and data export. The project's goal is to simplify the development of big data applications.
There are two implementations of the GemFire sink: gemfire-server and gemfire-json-server. They are identical except that the latter converts JSON string payloads to a JSON document format proprietary to GemFire, providing JSON field access and query capabilities. If you are not using JSON, the gemfire-server module will write the payload to the configured region using Java serialization.
In the example below we connect to an existing GemFire 7.0.2 cluster using a locator and add some JSON trade symbols to an existing region in the cluster.
1. Start a GemFire cluster with an existing region as shown below. The following cache.xml files are for "server1" and "server2" of the cluster. They are identical configs, just using different ports.
server1 cache.xml
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">
<cache>
  <cache-server bind-address="localhost" port="40404" hostname-for-clients="localhost"/>
  <region name="springxd-region">
    <region-attributes data-policy="partition">
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>
  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>
server2 cache.xml
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">
<cache>
  <cache-server bind-address="localhost" port="40405" hostname-for-clients="localhost"/>
  <region name="springxd-region">
    <region-attributes data-policy="partition">
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>
  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>
2. Verify using GFSH that you have two members, a locator, and a region, as follows
$ gfsh
v7.0.2.10
Monitor and Manage GemFire
gfsh>connect --locator=localhost[10334];
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=10.98.94.88, port=1099] ..
Successfully connected to: [host=10.98.94.88, port=1099]
gfsh>list members;
Name     | Id
-------- | ---------------------------------------
server1  | 10.98.94.88(server1:10161)<v1>:15610
server2  | 10.98.94.88(server2:10164)<v2>:39300
locator1 | localhost(locator1:10159:locator):42885
gfsh>list regions;
List of regions
---------------
springxd-region
3. Start single node SpringXD server
[Wed Oct 08 14:51:06 papicella@:~/vmware/software/spring/spring-xd/spring-xd-1.0.1.RELEASE ] $ xd-singlenode
1.0.1.RELEASE eXtreme Data
Started : SingleNodeApplication
Documentation: https://github.com/spring-projects/spring-xd/wiki
....
4. Start SpringXD shell
$ xd-shell
eXtreme Data 1.0.1.RELEASE | Admin Server Target: http://localhost:9393
Welcome to the Spring XD shell. For assistance hit TAB or type "help".
xd:>
5. Create a stream as follows
xd:>stream create --name gemfiredemo --definition "http --port=9090 | gemfire-json-server --host=localhost --port=10334 --useLocator=true --regionName=springxd-region --keyExpression=payload.getField('symbol')" --deploy
Created and deployed new stream 'gemfiredemo'
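The --keyExpression=payload.getField('symbol') part of the stream definition derives each region key from the incoming JSON payload. A rough stdlib-only sketch of that extraction (extractSymbol is a hypothetical stand-in; the real sink evaluates a SpEL expression against a GemFire JSON document):

```java
public class KeyExpressionSketch {

    // Crude string-based field lookup standing in for payload.getField('symbol')
    public static String extractSymbol(String json) {
        String marker = "\"symbol\":\"";
        int start = json.indexOf(marker) + marker.length();
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) {
        System.out.println(extractSymbol("{\"symbol\":\"ORCL\",\"price\":38}")); // ORCL
        System.out.println(extractSymbol("{\"symbol\":\"VMW\",\"price\":94}"));  // VMW
    }
}
```

Each posted trade therefore lands in springxd-region keyed by its ticker symbol.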
6. Post some entries via HTTP which will be inserted into the GemFire Region
xd:>http post --target http://localhost:9090 --data {"symbol":"ORCL","price":38}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"ORCL","price":38}
> 200 OK
xd:>http post --target http://localhost:9090 --data {"symbol":"VMW","price":94}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"VMW","price":94}
> 200 OK
7. Verify via GFSH that the data has been inserted into the GemFire region. JSON data in GemFire regions is stored using PDX.
gfsh>query --query="select * from /springxd-region";
Result     : true
startCount : 0
endCount   : 20
Rows       : 2
symbol | price
------ | -----
ORCL   | 38
VMW    | 94
NEXT_STEP_NAME : END
More Information
SpringXD
http://projects.spring.io/spring-xd/
GemFire Sinks
http://docs.spring.io/spring-xd/docs/1.0.1.RELEASE/reference/html/#gemfire-server
Friday, 26 September 2014
Pivotal GemFire 8 - Starting a Locator / Server directly from IntelliJ 13.x
With the introduction of Pivotal GemFire 8, developers can start and stop GemFire locators and servers directly from Java code, allowing them to integrate GemFire management within their IDE. Developers can develop, test, and run GemFire applications entirely within the IDE of their choice, making them much more productive, using very simple launcher APIs.
The locator is a Pivotal GemFire process that tells new, connecting members where running members are located and provides load balancing for server use. A GemFire server is a Pivotal GemFire process that runs as a long-lived, configurable member of a distributed system. The GemFire server is used primarily for hosting long-lived data regions and for running standard GemFire processes such as the server in a client/server configuration.
In this post I am going to show how to use the following classes to launch a Pivotal GemFire locator and server from code directly within IntelliJ IDEA, allowing you to develop and test GemFire applications from your IDE of choice.
Note: In this post we use IntelliJ IDEA 13.x
com.gemstone.gemfire.distributed.LocatorLauncher API
com.gemstone.gemfire.distributed.ServerLauncher API
1. Add the GemFire 8 Maven repository to your project to ensure the required JAR files are pulled in.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>gemfire-compression</groupId>
    <artifactId>gemfire-compression</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <gemfire.version>8.0.0</gemfire.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>com.gemstone.gemfire</groupId>
            <artifactId>gemfire</artifactId>
            <version>${gemfire.version}</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
    <repositories>
        <repository>
            <id>gemstone-release</id>
            <name>GemStone Maven RELEASE Repository</name>
            <url>http://dist.gemstone.com.s3.amazonaws.com/maven/release</url>
        </repository>
    </repositories>
</project>
2. Create a class as follows to start a locator
package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.LocatorLauncher;

import java.util.concurrent.TimeUnit;

public class StartLocator {
    public static void main(String[] args) {
        LocatorLauncher locatorLauncher = new LocatorLauncher.Builder()
                .set("jmx-manager", "true")
                .set("jmx-manager-start", "true")
                .set("jmx-manager-http-port", "8083")
                .set("jmx-manager-ssl", "false")
                .setMemberName("locator")
                .setPort(10334)
                .setBindAddress("localhost")
                .build();

        System.out.println("Attempting to start Locator");
        locatorLauncher.start();
        locatorLauncher.waitOnStatusResponse(30, 5, TimeUnit.SECONDS);
        System.out.println("Locator successfully started");
    }
}
3. Create a class as follows to start a single cache server; you could create as many of these as you need.
package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.ServerLauncher;

public class StartMember {
    public static void main(String[] args) {
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .set("locators", "localhost[10334]")
                .set("cache-xml-file", "cache.xml")
                .set("log-level", "info")
                .build();

        System.out.println("Attempting to start cache server");
        serverLauncher.start();
        System.out.println("Cache server successfully started");
    }
}
4. Create a cache.xml with a dummy region
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN"
  "http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
  <cache-server bind-address="localhost" port="0" hostname-for-clients="localhost"/>
  <region name="CompressedRegion">
    <region-attributes data-policy="partition">
      <key-constraint>java.lang.String</key-constraint>
      <value-constraint>java.lang.String</value-constraint>
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>
  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>
5. Edit the run configurations for StartLocator.java to include GEMFIRE env variable as shown below.
6. Run StartLocator.java as shown below.
7. Run StartMember.java as shown below.
8. Finally from the IDE run a script called verify.sh to view the cluster member/regions to ensure it worked.
verify.sh
#!/bin/bash
. ./setup.sh
gfsh <<EOF
connect --locator=localhost[10334];
list members;
list regions;
exit;
EOF
Output
More Information
Pivotal GemFire Locator Processes
http://gemfire.docs.pivotal.io/latest/userguide/deploying/topics/running_the_locator.html
Pivotal GemFire Server Processes
http://gemfire.docs.pivotal.io/latest/userguide/deploying/topics/running_the_cacheserver.html
Thursday, 11 September 2014
Creating a Pivotal GemFireXD Data Source Connection from IntelliJ IDEA 13.x
In order to create a Pivotal GemFireXD Data Source connection from IntelliJ 13.x, follow the steps below. You will need to define a GemFireXD driver prior to creating the Data Source itself.
1. Bring up the Databases panel.
2. Define a GemFireXD Driver as follows
3. Once defined, select it using the following options. You are using the driver you created in step 2 above.
+ -> Data Source -> com.pivotal.gemfirexd.jdbc.ClientDriver
4. Create a Connection as shown below. You will need a running GemFireXD cluster at this point in order to connect.
5. Once connected you can browse objects as shown below.
6. Finally we can run DML/DDL directly from IntelliJ as shown below.
Thursday, 4 September 2014
Variable in list with Postgres JDBC and Greenplum
I previously blogged on how to create a variable JDBC IN list with Oracle. Here is how you would do it with Pivotal Greenplum. It is much easier, with no need to write a function: in the Greenplum demo below we use the any function combined with string_to_array.
http://theblasfrompas.blogspot.com.au/2008/02/variable-in-list-with-oracle-jdbc-and.html
Code as follows
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Created by papicella on 4/09/2014.
 */
public class VariableInListGreenplum {

    private Connection getConnection() throws SQLException, ClassNotFoundException {
        Class.forName("org.postgresql.Driver");
        return DriverManager.getConnection(
                "jdbc:postgresql://127.0.0.1:5432/apples", "pas", "pas");
    }

    public void run() throws SQLException {
        Connection conn = null;
        PreparedStatement stmt = null;
        ResultSet rset = null;

        String queryInList =
                "SELECT DEPTNO, " +
                "       DNAME, " +
                "       LOC " +
                "FROM scott.DEPT " +
                "WHERE DEPTNO = any(string_to_array(?, ', ')) ";

        try {
            conn = getConnection();
            stmt = conn.prepareStatement(queryInList);
            stmt.setString(1, "10, 20, 30");
            rset = stmt.executeQuery();
            while (rset.next()) {
                System.out.println("Dept [" + rset.getInt(1) + ", " + rset.getString(2) + "]");
            }
        } catch (Exception e) {
            System.out.println("Exception occurred");
            e.printStackTrace();
        } finally {
            // Close resources in reverse order of creation
            if (rset != null) {
                rset.close();
            }
            if (stmt != null) {
                stmt.close();
            }
            if (conn != null) {
                conn.close();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        VariableInListGreenplum test = new VariableInListGreenplum();
        test.run();
    }
}
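Because the whole IN list travels as a single bound string, a dynamic list of values just needs to be joined with the same delimiter that string_to_array splits on. A small sketch (toParam is a hypothetical helper, not part of any library):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class InListParam {

    // Builds the single string bound to the '?' in
    // "WHERE DEPTNO = any(string_to_array(?, ', '))".
    // The join delimiter must match the second argument of string_to_array.
    public static String toParam(List<Integer> ids) {
        return ids.stream().map(String::valueOf).collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(toParam(Arrays.asList(10, 20, 30))); // 10, 20, 30
    }
}
```

stmt.setString(1, toParam(deptNos)) would then bind however many department numbers the caller supplies, with no change to the SQL text.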
Wednesday, 3 September 2014
REST with Pivotal GemFire 8.0
Pivotal GemFire 8.0 now includes REST support. You can read more about it as follows
http://gemfire.docs.pivotal.io/latest/userguide/gemfire_rest/book_intro.html#concept_7628F498DB534A2D8A99748F5DA5DC94
Here is how to set it up, with some quick examples showing how it works with some region data.
In the example below I have PDX set up for the cache servers.
1. First, you need to enable the REST API on a cache server node as shown below. Set gemfire.start-dev-rest-api to TRUE; you could use a gemfire.properties file, but here we just pass it to GFSH as part of the server start command.
start server --name=server1 --classpath=$CLASSPATH --server-port=40411 --cache-xml-file=./server1/cache.xml --properties-file=./server1/gemfire.properties --locators=localhost[10334] --dir=server1 --initial-heap=1g --max-heap=1g --J=-Dgemfire.http-service-port=7070 --J=-Dgemfire.http-service-bind-address=localhost --J=-Dgemfire.start-dev-rest-api=true
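As noted above, the three REST-related settings could equally live in the server's gemfire.properties file instead of being passed as --J flags. A sketch, assuming the property names mirror the -Dgemfire.* system properties in the start command:

```properties
# gemfire.properties (alternative to the --J=-Dgemfire.* flags above)
http-service-port=7070
http-service-bind-address=localhost
start-dev-rest-api=true
```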
2. Once started we can quickly ensure we have the REST server up on port 7070 as shown below.
[Wed Sep 03 12:39:18 papicella@:~/ant-demos/gemfire/80/demo ] $ netstat -an | grep 7070
tcp4 0 0 127.0.0.1.7070 *.* LISTEN
3. Next test that you can access the REST server. The command below will list all the regions available in the cluster.
[Wed Sep 03 12:52:44 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Location: http://localhost:7070/gemfire-api/v1
Accept-Charset: big5, big5-hkscs, euc-jp, euc-kr, gb18030, gb2312, gbk, ibm-thai, ibm00858, ibm01140, ibm01141, ibm01142, ibm01143, ibm01144, ibm01145, ibm01146, ibm01147, ibm01148, ibm01149, ibm037, ibm1026, ibm1047, ibm273, ibm277, ibm278, ibm280, ibm284, ibm285, ibm290, ibm297, ibm420, ibm424, ibm437, ibm500, ibm775, ibm850, ibm852, ibm855, ibm857, ibm860, ibm861, ibm862, ibm863, ibm864, ibm865, ibm866, ibm868, ibm869, ibm870, ibm871, ibm918, iso-2022-cn, iso-2022-jp, iso-2022-jp-2, iso-2022-kr, iso-8859-1, iso-8859-13, iso-8859-15, iso-8859-2, iso-8859-3, iso-8859-4, iso-8859-5, iso-8859-6, iso-8859-7, iso-8859-8, iso-8859-9, jis_x0201, jis_x0212-1990, koi8-r, koi8-u, shift_jis, tis-620, us-ascii, utf-16, utf-16be, utf-16le, utf-32, utf-32be, utf-32le, utf-8, windows-1250, windows-1251, windows-1252, windows-1253, windows-1254, windows-1255, windows-1256, windows-1257, windows-1258, windows-31j, x-big5-hkscs-2001, x-big5-solaris, x-compound_text, x-euc-jp-linux, x-euc-tw, x-eucjp-open, x-ibm1006, x-ibm1025, x-ibm1046, x-ibm1097, x-ibm1098, x-ibm1112, x-ibm1122, x-ibm1123, x-ibm1124, x-ibm1364, x-ibm1381, x-ibm1383, x-ibm300, x-ibm33722, x-ibm737, x-ibm833, x-ibm834, x-ibm856, x-ibm874, x-ibm875, x-ibm921, x-ibm922, x-ibm930, x-ibm933, x-ibm935, x-ibm937, x-ibm939, x-ibm942, x-ibm942c, x-ibm943, x-ibm943c, x-ibm948, x-ibm949, x-ibm949c, x-ibm950, x-ibm964, x-ibm970, x-iscii91, x-iso-2022-cn-cns, x-iso-2022-cn-gb, x-iso-8859-11, x-jis0208, x-jisautodetect, x-johab, x-macarabic, x-maccentraleurope, x-maccroatian, x-maccyrillic, x-macdingbat, x-macgreek, x-machebrew, x-maciceland, x-macroman, x-macromania, x-macsymbol, x-macthai, x-macturkish, x-macukraine, x-ms932_0213, x-ms950-hkscs, x-ms950-hkscs-xp, x-mswin-936, x-pck, x-sjis_0213, x-utf-16le-bom, x-utf-32be-bom, x-utf-32le-bom, x-windows-50220, x-windows-50221, x-windows-874, x-windows-949, x-windows-950, x-windows-iso2022jp
Content-Type: application/json
Content-Length: 493
Date: Wed, 03 Sep 2014 02:52:46 GMT
{
"regions" : [ {
"name" : "demoRegion",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
}, {
"name" : "departments",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
}, {
"name" : "employees",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
}, {
"name" : "complex",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
  } ]
}
4. We have a couple of regions in this cluster and once again I am using the classic DEPT/EMP regions here. Below are some simple REST commands on the "departments" region.
View all DEPARTMENT region entries
[Wed Sep 03 12:53:38 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/20,10,30,40
Content-Type: application/json
Content-Length: 225
Date: Wed, 03 Sep 2014 02:53:40 GMT
{
"departments" : [ {
"deptno" : 20,
"name" : "RESEARCH"
}, {
"deptno" : 10,
"name" : "ACCOUNTING"
}, {
"deptno" : 30,
"name" : "SALES"
}, {
"deptno" : 40,
"name" : "OPERATIONS"
} ]
}
VIEW a single region entry by KEY
[Wed Sep 03 12:55:34 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments/10
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/10
Content-Type: application/json
Content-Length: 44
Date: Wed, 03 Sep 2014 02:55:36 GMT
{
"deptno" : 10,
"name" : "ACCOUNTING"
}
VIEW multiple entries by KEY
[Wed Sep 03 12:56:25 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments/10,30
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/10,30
Content-Type: application/json
Content-Length: 123
Date: Wed, 03 Sep 2014 02:56:28 GMT
{
"departments" : [ {
"deptno" : 10,
"name" : "ACCOUNTING"
}, {
"deptno" : 30,
"name" : "SALES"
} ]
}
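The entry URLs used above compose in one way: the base endpoint, the region name, then an optional comma-separated key list. A tiny sketch of that composition (entryUrl is a hypothetical helper, not part of the GemFire API):

```java
public class GemFireRestUrls {

    // Dev REST endpoint from the server start command above
    static final String BASE = "http://localhost:7070/gemfire-api/v1";

    // Entries are addressed as /{region} or /{region}/{key}[,{key}...],
    // e.g. /departments or /departments/10,30
    public static String entryUrl(String region, String... keys) {
        return BASE + "/" + region
                + (keys.length == 0 ? "" : "/" + String.join(",", keys));
    }

    public static void main(String[] args) {
        System.out.println(entryUrl("departments"));             // all entries
        System.out.println(entryUrl("departments", "10", "30")); // two entries by key
    }
}
```

The same URLs can then be handed to curl or any HTTP client, exactly as in the examples above.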
5. We can even use the Spring REST shell as shown below.
Obtain rest-shell using the link below.
https://github.com/spring-projects/rest-shell
6. Open a browser and enter the following URL to browse the Swagger-enabled REST APIs:
http://localhost:7070/gemfire-api/docs/index.html
7. Perform an operation as shown below.
The GemFire REST API is documented here:
http://gemfire.docs.pivotal.io/latest/userguide/gemfire_rest/book_intro.html#concept_7628F498DB534A2D8A99748F5DA5DC94
Here is how to set it up, along with some quick examples showing how it works with some region data.
In the example below, PDX is set up for the cache servers as follows.
<!DOCTYPE cache PUBLIC "-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN" "http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
  <pdx read-serialized="true">
    <pdx-serializer>
      <class-name>com.gemstone.gemfire.pdx.ReflectionBasedAutoSerializer</class-name>
      <parameter name="classes">
        <string>org\.pivotal\.pas\.beans\..*</string>
      </parameter>
    </pdx-serializer>
  </pdx>
  .....
1. Firstly you need to enable the REST API on a cache server node as shown below. Basically, set gemfire.start-dev-rest-api to TRUE; you could use a gemfire.properties file, but here we just pass it to GFSH as part of the server start command.
start server --name=server1 --classpath=$CLASSPATH --server-port=40411 --cache-xml-file=./server1/cache.xml --properties-file=./server1/gemfire.properties --locators=localhost[10334] --dir=server1 --initial-heap=1g --max-heap=1g --J=-Dgemfire.http-service-port=7070 --J=-Dgemfire.http-service-bind-address=localhost --J=-Dgemfire.start-dev-rest-api=true
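As an alternative to passing --J system properties, the same settings could live in the gemfire.properties file already referenced by the start command. A minimal sketch (property names as per the GemFire docs; adjust the port and bind address to suit your environment):

```properties
# Enable the developer REST API on this cache server
start-dev-rest-api=true
# Port and bind address for the embedded HTTP service
http-service-port=7070
http-service-bind-address=localhost
```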
2. Once started, we can quickly verify that the REST server is listening on port 7070 as shown below.
[Wed Sep 03 12:39:18 papicella@:~/ant-demos/gemfire/80/demo ] $ netstat -an | grep 7070
tcp4 0 0 127.0.0.1.7070 *.* LISTEN
3. Next, test that you can access the REST server. The command below lists all the regions available in the cluster.
[Wed Sep 03 12:52:44 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Location: http://localhost:7070/gemfire-api/v1
Accept-Charset: big5, big5-hkscs, euc-jp, euc-kr, gb18030, gb2312, gbk, ... (long charset list truncated)
Content-Type: application/json
Content-Length: 493
Date: Wed, 03 Sep 2014 02:52:46 GMT
{
"regions" : [ {
"name" : "demoRegion",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
}, {
"name" : "departments",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
}, {
"name" : "employees",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
}, {
"name" : "complex",
"type" : "PARTITION",
"key-constraint" : null,
"value-constraint" : null
} ]
}
4. We have a few regions in this cluster, and once again I am using the classic DEPT/EMP regions here. Some simple REST commands on the "/departments" region are shown below.
View all DEPARTMENT region entries
[Wed Sep 03 12:53:38 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/20,10,30,40
Content-Type: application/json
Content-Length: 225
Date: Wed, 03 Sep 2014 02:53:40 GMT
{
"departments" : [ {
"deptno" : 20,
"name" : "RESEARCH"
}, {
"deptno" : 10,
"name" : "ACCOUNTING"
}, {
"deptno" : 30,
"name" : "SALES"
}, {
"deptno" : 40,
"name" : "OPERATIONS"
} ]
}
VIEW a single region entry by KEY
[Wed Sep 03 12:55:34 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments/10
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/10
Content-Type: application/json
Content-Length: 44
Date: Wed, 03 Sep 2014 02:55:36 GMT
{
"deptno" : 10,
"name" : "ACCOUNTING"
}
VIEW multiple entries by KEY
[Wed Sep 03 12:56:25 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments/10,30
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/10,30
Content-Type: application/json
Content-Length: 123
Date: Wed, 03 Sep 2014 02:56:28 GMT
{
"departments" : [ {
"deptno" : 10,
"name" : "ACCOUNTING"
}, {
"deptno" : 30,
"name" : "SALES"
} ]
}
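The same endpoints support writes as well as reads. Below is a quick sketch of create, update, and delete against the "/departments" region; the deptno 50 / MARKETING entry is made-up sample data, and the REST server from step 1 must be running for the calls to succeed.

```shell
BASE="http://localhost:7070/gemfire-api/v1"

# Hypothetical sample entry used for illustration only.
PAYLOAD='{ "deptno" : 50, "name" : "MARKETING" }'

# Sanity-check the JSON locally before sending anything.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# The calls below need the REST server from step 1; skip them if it is down.
if curl -s -o /dev/null "$BASE"; then
  # POST creates a new entry under the given key.
  curl -i -X POST "$BASE/departments?key=50" \
       -H "Content-Type: application/json" -d "$PAYLOAD"

  # PUT replaces the value stored under an existing key.
  curl -i -X PUT "$BASE/departments/50" \
       -H "Content-Type: application/json" \
       -d '{ "deptno" : 50, "name" : "MARKETING EMEA" }'

  # DELETE removes the entry for the key.
  curl -i -X DELETE "$BASE/departments/50"
else
  echo "REST server not reachable at $BASE"
fi
```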
5. We can even use the Spring REST shell as shown below.
Obtain rest-shell using the link below.
https://github.com/spring-projects/rest-shell
[Wed Sep 03 13:06:22 papicella@:~ ] $ rest-shell

 ___ ___  __ _____  __  _  _  _ _  __
| _ \ __/' _/_   _/' _/| || | / / | \ \
| v / _|`._`. | | `._`.| >< | / / /  > >
|_|_\___|___/ |_| |___/|_||_| |_/_/ /_/
1.2.1.RELEASE

Welcome to the REST shell. For assistance hit TAB or type "help".

http://localhost:8080:> baseUri http://localhost:7070/
Base URI set to 'http://localhost:7070'
http://localhost:7070:> follow gemfire-api
http://localhost:7070/gemfire-api:> follow v1
http://localhost:7070/gemfire-api/v1:> follow departments
http://localhost:7070/gemfire-api/v1/departments:> get 20
> GET http://localhost:7070/gemfire-api/v1/departments/20

< 200 OK
< Server: Apache-Coyote/1.1
< Content-Location: http://localhost:7070/gemfire-api/v1/departments/20
< Content-Type: application/json
< Content-Length: 42
< Date: Wed, 03 Sep 2014 03:07:17 GMT
<
{
  "deptno" : 20,
  "name" : "RESEARCH"
}
http://localhost:7070/gemfire-api/v1/departments:>
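Beyond key-based access, the REST API can also run ad-hoc OQL queries through its /queries/adhoc endpoint. A quick sketch (the WHERE clause is sample data only; the REST server from step 1 must be running for the call itself to succeed):

```shell
# OQL query against the departments region; the predicate is made up.
OQL='SELECT d FROM /departments d WHERE d.deptno > 10'
echo "running: $OQL"

# -G sends the data as query parameters; --data-urlencode handles encoding.
curl -i -G "http://localhost:7070/gemfire-api/v1/queries/adhoc" \
     --data-urlencode "q=$OQL" || echo "REST server not reachable"
```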
6. Open a browser and enter the following URL to browse the Swagger-enabled REST APIs:
http://localhost:7070/gemfire-api/docs/index.html
7. Perform an operation directly from the Swagger UI.