I am going to show how you would create a WebLogic data source to Pivotal GemFireXD 1.3. In this example I am using the developer edition of WebLogic, known as "Free Oracle WebLogic Server 12c (12.1.3) Zip Distribution and Installers for Developers". You can download and configure it from the link below.
http://www.oracle.com/technetwork/middleware/downloads/index-087510.html
Note: I am assuming you have WebLogic 12c running with GemFireXD also running. I am also assuming a WLS install directory as follows, with a domain called "mydomain".
/Users/papicella/vmware/software/weblogic/wls12130
1. Ensure you have the GemFireXD client driver copied into your WLS domain lib directory as follows, prior to starting WLS
/Users/papicella/vmware/software/weblogic/wls12130/user_projects/domains/mydomain/lib/gemfirexd-client.jar
2. Navigate to the WebLogic Console as follows
http://localhost:7001/console/
3. Login using your server credentials
4. From the Domain Structure tree navigate to "Services -> Data Sources"
5. Click on "New -> Generic Data Source"
6. Fill in the form as follows
Name: GemFireXD-DataSource
JNDI Name: jdbc/gemfirexd-ds
Type: Select "Other" from the drop-down list
7. Click "Next"
8. Click "Next"
9. Uncheck "Supports Global Transactions" and click next
10. Enter the following details for credentials. The GemFireXD cluster is not set up for authentication, so this is just a placeholder username/password to allow us to proceed.
Username: app
Password: app
11. Click "Next"
12. Enter the following connection parameters for your GemFireXD cluster
Driver Class Name: com.pivotal.gemfirexd.jdbc.ClientDriver
URL: jdbc:gemfirexd://localhost:1527/
Test Table Name: sysibm.sysdummy1
Leave the rest at their default values; it's important not to alter them here.
13. Click the "Test Configuration" button at this point to verify you can connect, if Successful you will see a message as follows
14. Click "Next"
15. Check the server you wish to target this data source at. If you don't do this, the data source will not be deployed or accessible. In a developer-only WLS install, "myserver" is the only server available to select.
16. Click "Finish"
The console should report that you're all done and no restarts are required. To access the data source you need to use JNDI with the name "jdbc/gemfirexd-ds", as in the sketch below.
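For example, from server-side code deployed to WebLogic (a servlet or EJB), a plain JNDI lookup is enough to obtain the data source. The class below is a minimal sketch only, assuming it runs inside the container (so InitialContext needs no provider URL); it simply reuses the test table from step 12.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class GemFireXDDataSourceDemo {

    // Look up the data source by the JNDI name configured above and run the
    // same test query WebLogic uses ("sysibm.sysdummy1").
    public static void testDataSource() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/gemfirexd-ds");

        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from sysibm.sysdummy1")) {
            while (rs.next()) {
                System.out.println("sysdummy1 returned: " + rs.getString(1));
            }
        }
    }
}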
Thursday, 30 October 2014
Thursday, 23 October 2014
Using the tc Server build pack for Pivotal Cloud Foundry 1.3
On Pivotal Network you will find various buildpacks, beyond those shipped with the platform, that you can download and add to PCF for your applications, using the link below.
https://network.pivotal.io/products/pivotal-cf
I am going to show how you would take one of these buildpacks, install it, and then consume it from an application. In this demo I am going to use "tc server buildpack (offline) v2.4".
1. Log in as the admin user and upload the buildpack as shown below. I am adding this buildpack in the last position, which is position 6.
[Tue Oct 21 20:36:01 papicella@:~/cf/buildpacks ] $ cf create-buildpack tc_server_buildpack_offline tc-server-buildpack-offline-v2.4.zip 6
Creating buildpack tc_server_buildpack_offline...
OK
Uploading buildpack tc_server_buildpack_offline...
OK
2. View buildpacks, which should show the one we just uploaded above.
[Thu Oct 23 11:15:18 papicella@:~/cf/APJ1 ] $ cf buildpacks
Getting buildpacks...
buildpack position enabled locked filename
java_buildpack_offline 1 true false java-buildpack-offline-v2.4.zip
ruby_buildpack 2 true false ruby_buildpack-offline-v1.1.0.zip
nodejs_buildpack 3 true false nodejs_buildpack-offline-v1.0.1.zip
python_buildpack 4 true false python_buildpack-offline-v1.0.1.zip
go_buildpack 4 true false go_buildpack-offline-v1.0.1.zip
php_buildpack 5 true false php_buildpack-offline-v1.0.1.zip
tc_server_buildpack_offline 6 true false tc-server-buildpack-offline-v2.4.zip
3. Push the application using the buildpack uploaded above. Below is a simple manifest which refers to the buildpack I uploaded.
manifest.yml
applications:
- name: pcfhawq
  memory: 512M
  instances: 1
  host: pcfhawq
  domain: yyyy.fe.dddd.com
  path: ./pcfhawq.war
  buildpack: tc_server_buildpack_offline
  services:
  - phd-dev
[Thu Oct 23 11:36:26 papicella@:~/cf/buildpacks ] $ cf push -f manifest-apj1.yml
Using manifest file manifest-apj1.yml
Creating app pcfhawq-web in org pas-org / space apple as pas...
OK
Creating route yyyy.apj1.dddd.gopivotal.com...
OK
Binding pcfhawq-web.yyyy.fe.dddd.com to pcfhawq-web...
OK
Uploading pcfhawq-web...
Uploading app files from: pcfhawq.war
Uploading 644.1K, 181 files
OK
Binding service phd-dev to app pcfhawq-web in org pas-org / space apple as pas...
OK
Starting app pcfhawq-web in org pas-org / space apple as pas...
OK
-----> Downloaded app package (5.6M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/pivotal-cf/tc-server-buildpack.git#396ad0a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.3s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
Modifying /WEB-INF/web.xml for Auto Reconfiguration
-----> Downloading Tc Server Instance 2.9.6_RELEASE from http://download.run.pivotal.io/tc-server/tc-server-2.9.6_RELEASE.tar.gz (found in cache)
Instantiating tc Server in .java-buildpack/tc_server (3.4s)
-----> Downloading Tc Server Lifecycle Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Access Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Uploading droplet (45M)
1 of 1 instances running
App started
Showing health and status for app pcfhawq-web in org pas-org / space apple as pas...
OK
requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pcfhawq-web.yyyy.fe.dddd.com
state since cpu memory disk
#0 running 2014-10-23 11:37:56 AM 0.0% 398.6M of 1G 109.2M of 1G
4. Verify within the developer console that the application is using the buildpack you targeted.
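As a side note, once the app is running on the tc Server buildpack, the service bound in the manifest (phd-dev) is exposed to it through the standard Cloud Foundry VCAP_SERVICES environment variable. The servlet below is a hypothetical sketch only (it is not part of the pcfhawq application), showing how a WAR deployed this way could dump that variable.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/env")
public class VcapServicesServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Cloud Foundry injects bound service credentials (e.g. phd-dev) here
        String vcapServices = System.getenv("VCAP_SERVICES");
        resp.setContentType("text/plain");
        resp.getWriter().println(
                vcapServices == null ? "Not running on Cloud Foundry" : vcapServices);
    }
}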
More Information
Buildpacks
http://docs.pivotal.io/pivotalcf/buildpacks/index.html
Tuesday, 21 October 2014
Monday, 20 October 2014
Connecting to Pivotal Cloud Foundry Ops Metrics using Java VisualVM
The Pivotal Ops Metrics tool is a JMX extension for Elastic Runtime. Pivotal Ops Metrics collects and exposes system data from Cloud Foundry components via a JMX endpoint. Use this system data to monitor your installation and assist in troubleshooting. Below is the tile once installed and available with Pivotal Cloud Foundry Ops Manager.
Once installed and configured, metrics for Cloud Foundry components automatically report to the JMX endpoint. Your JMX client uses the credentials supplied to connect to the IP address of the Pivotal Ops Metrics JMX Provider at port 44444
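jvisualvm (shown below) is one option; any JMX client can connect programmatically as well. The class below is an illustrative sketch only: the host IP, username and password are placeholders, and it assumes the Ops Metrics provider exposes a standard RMI connector on port 44444.

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class OpsMetricsMBeanList {

    public static void main(String[] args) throws Exception {
        // Placeholder IP and credentials -- use the Ops Metrics JMX Provider IP
        // and the username/password configured in the tile.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://192.168.0.10:44444/jmxrmi");

        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "changeme"});

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Print every MBean object name the endpoint exposes,
            // including the Cloud Foundry component metrics
            for (ObjectName name : mbsc.queryNames(null, null)) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}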
1. Start jvisualvm
2. Under Plugins, ensure you have the VisualVM-MBeans plugin installed as shown below, or install it to be able to view the MBeans.
3. Create a JMX connection as shown below
4. Finally the CF MBeans can be viewed as shown below.
More Information
Deploying Pivotal Ops Metrics
http://docs.pivotal.io/pivotalcf/customizing/deploy-metrics.html
SQLShell accessing Pivotal GemFire XD 1.3
I stumbled upon SQLShell recently via the URL below. Here I will show how you can connect to Pivotal GemFireXD using SQLShell. I used it to export query results as CSV output.
http://software.clapper.org/sqlshell/users-guide.html
Note: This assumes SQLShell is already installed; the instructions below are for Mac OS X.
1. Create a file at $HOME/.sqlshell/config as shown below; I just took the sample it ships with. Notice the added driver alias for "gemfirexd".
# ---------------------------------------------------------------------------
# initialization file for SQLShell
[settings]
#colspacing: 2
[drivers]
# Driver aliases.
postgresql = org.postgresql.Driver
postgres = org.postgresql.Driver
mysql = com.mysql.jdbc.Driver
sqlite = org.sqlite.JDBC
sqlite3 = org.sqlite.JDBC
oracle = oracle.jdbc.driver.OracleDriver
access = sun.jdbc.odbc.JdbcOdbcDriver
gemfirexd = com.pivotal.gemfirexd.jdbc.ClientDriver
[vars]
historyDir: ${env.HOME}/.sqlshell
[db_postgres]
aliases: post
url: jdbc:postgresql://localhost:5432/sampledb
driver: postgres
user: ${system.user.name}
password:
history: $vars.historyDir/postgres.hist
[db_mysql]
#aliases:
driver: mysql
url: jdbc:mysql://localhost:3306/sampledb
user: ${system.user.name}
password:
history: $vars.historyDir/mysql.hist
[db_sqlite3]
aliases: sqlite3
url: jdbc:sqlite:/tmp/sample.db
driver: sqlite
history: $vars.historyDir/sqlite3.hist
[db_oracle]
aliases: ora
schema: example
url: jdbc:oracle:thin:@localhost:1521:sampledb
user: ${system.user.name}
password:
driver: oracle
history: $vars.historyDir/scrgskd
[db_access]
driver: access
url: jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=/tmp/sample.mdb;DriverID=22}
2. Add the Pivotal GemFireXD client driver "gemfirexd-client.jar" to "/Applications/sqlshell/lib"
3. With the Pivotal GemFireXD cluster up and running, connect and run some commands as shown below.
[Mon Oct 20 11:56:10 papicella@:~/vmware/software/sqlshell ] $ sqlshell gemfirexd,jdbc:gemfirexd://localhost:1527
SQLShell, version 0.8.1 (2012/03/16 09:43:31)
Copyright (c) 2009-2011 Brian M. Clapper
Using JLine
Type "help" for help. Type ".about" for more information.

sqlshell> .set schema APP
sqlshell> .show tables
ALL_EMPS           APPLES_OFFHEAP     CUSTOMERS          DEPT
EMP                EMPLOYEES          EMPS_IN_DEPT_10    EMPS_IN_DEPT_20
EMPS_IN_DEPT_30    EMPS_IN_DEPT_40    OFFICES            ORDERDETAILS
ORDERS             PAYMENTS           PERSON             PRODUCTLINES
PRODUCTS           TEST_ASYNC         TEST_ASYNC2        TEST_CALLBACKLISTENER

sqlshell> select * from dept;
Execution time: 0.21 seconds
Retrieval time: 0.6 seconds
7 rows returned.

DEPTNO  DNAME       LOC
------  ----------  --------
10      ACCOUNTING  NEW YORK
20      RESEARCH    DALLAS
30      SALES       CHICAGO
40      OPERATIONS  BRISBANE
50      MARKETING   ADELAIDE
60      DEV         PERTH
70      SUPPORT     SYDNEY

sqlshell> .capture to /tmp/results.csv
Capturing result sets to: /tmp/results.csv

sqlshell> select * from emp where deptno = 10;
Execution time: 0.18 seconds
Retrieval time: 0.5 seconds
3 rows returned.

EMPNO  ENAME   JOB        MGR   HIREDATE               SAL   COMM  DEPTNO
-----  ------  ---------  ----  ---------------------  ----  ----  ------
7782   CLARK   MANAGER    7839  1981/06/09 00:00:00.0  2450  NULL  10
7839   KING    PRESIDENT  NULL  1981/11/17 00:00:00.0  5000  NULL  10
7934   MILLER  CLERK      7782  1982/01/23 00:00:00.0  1300  NULL  10

sqlshell> .capture off
No longer capturing query results.
sqlshell>
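SQLShell's .capture command is the simplest way to produce the CSV above; for completeness, the same export could also be scripted with a plain JDBC client against the GemFireXD thin-client driver. The class below is a minimal sketch only, assuming gemfirexd-client.jar is on the classpath; the query and output path simply mirror the session above.

import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ExportDeptToCsv {

    public static void main(String[] args) throws Exception {
        // Same thin-client driver and URL used with SQLShell above
        Class.forName("com.pivotal.gemfirexd.jdbc.ClientDriver");

        try (Connection conn =
                     DriverManager.getConnection("jdbc:gemfirexd://localhost:1527/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from dept");
             PrintWriter out = new PrintWriter(new FileWriter("/tmp/results.csv"))) {

            ResultSetMetaData meta = rs.getMetaData();
            int cols = meta.getColumnCount();

            // Header row
            StringBuilder header = new StringBuilder();
            for (int i = 1; i <= cols; i++) {
                if (i > 1) header.append(",");
                header.append(meta.getColumnName(i));
            }
            out.println(header);

            // Data rows (no quoting/escaping -- fine for this simple demo data)
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) row.append(",");
                    row.append(rs.getString(i));
                }
                out.println(row);
            }
        }
    }
}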
Wednesday, 8 October 2014
Spring XD Pivotal Gemfire Sink Demo
Spring XD is a unified, distributed, and extensible system for data ingestion, real time analytics, batch processing, and data export. The project's goal is to simplify the development of big data applications.
There are two implementations of the GemFire sink: gemfire-server and gemfire-json-server. They are identical except that the latter converts JSON string payloads to a JSON document format proprietary to GemFire and provides JSON field access and query capabilities. If you are not using JSON, the gemfire-server module will write the payload to the configured region using Java serialization.
In the example below we show how to connect to an existing GemFire 7.0.2 cluster using a locator and add some JSON trade symbols to an existing region in the cluster.
1. Start a GemFire cluster with an existing region as shown below. The following cache.xml files are for "server1" and "server2" of the cluster. They are identical configs, just using different ports.
server1 cache.xml
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">
<cache>
  <cache-server bind-address="localhost" port="40404" hostname-for-clients="localhost"/>
  <region name="springxd-region">
    <region-attributes data-policy="partition">
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>
  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>
server2 cache.xml
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">
<cache>
  <cache-server bind-address="localhost" port="40405" hostname-for-clients="localhost"/>
  <region name="springxd-region">
    <region-attributes data-policy="partition">
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>
  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>
2. Verify using GFSH you have 2 members , a locator and a region as follows
$ gfsh
v7.0.2.10
Monitor and Manage GemFire

gfsh>connect --locator=localhost[10334];
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=10.98.94.88, port=1099] ..
Successfully connected to: [host=10.98.94.88, port=1099]

gfsh>list members;
  Name   | Id
-------- | ---------------------------------------
server1  | 10.98.94.88(server1:10161)<v1>:15610
server2  | 10.98.94.88(server2:10164)<v2>:39300
locator1 | localhost(locator1:10159:locator):42885

gfsh>list regions;
List of regions
---------------
springxd-region
3. Start single node SpringXD server
[Wed Oct 08 14:51:06 papicella@:~/vmware/software/spring/spring-xd/spring-xd-1.0.1.RELEASE ] $ xd-singlenode
1.0.1.RELEASE   eXtreme Data

Started : SingleNodeApplication
Documentation: https://github.com/spring-projects/spring-xd/wiki
....
4. Start SpringXD shell
$ xd-shell
eXtreme Data
1.0.1.RELEASE | Admin Server Target: http://localhost:9393

Welcome to the Spring XD shell. For assistance hit TAB or type "help".
xd:>
5. Create a stream as follows
xd:>stream create --name gemfiredemo --definition "http --port=9090 | gemfire-json-server --host=localhost --port=10334 --useLocator=true --regionName=springxd-region --keyExpression=payload.getField('symbol')" --deploy
Created and deployed new stream 'gemfiredemo'
6. Post some entries via HTTP which will be inserted into the GemFire Region
xd:>http post --target http://localhost:9090 --data {"symbol":"ORCL","price":38}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"ORCL","price":38}
> 200 OK

xd:>http post --target http://localhost:9090 --data {"symbol":"VMW","price":94}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"VMW","price":94}
> 200 OK
7. Verify via GFSH that data has been inserted into the GemFire region. JSON data is stored in GemFire regions using PDX.
gfsh>query --query="select * from /springxd-region";

Result     : true
startCount : 0
endCount   : 20
Rows       : 2

symbol | price
------ | -----
ORCL   | 38
VMW    | 94

NEXT_STEP_NAME : END
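To read these entries back from application code rather than GFSH, a GemFire 7 Java client can connect through the same locator and fetch the documents. The class below is an illustrative sketch only, assuming the GemFire client jar is on the classpath and that the JSON payloads come back to the client as PdxInstance values.

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.pdx.PdxInstance;

public class ReadTradeFromRegion {

    public static void main(String[] args) {
        // Connect via the same locator the stream definition uses: localhost[10334]
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                .setPdxReadSerialized(true)   // keep documents as PdxInstance on read
                .create();

        Region<String, PdxInstance> region = cache
                .<String, PdxInstance>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("springxd-region");

        // Keys are the trade symbols, per the stream's keyExpression
        PdxInstance trade = region.get("ORCL");
        if (trade != null) {
            System.out.println("ORCL price: " + trade.getField("price"));
        }

        cache.close();
    }
}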
More Information
SpringXD
http://projects.spring.io/spring-xd/
GemFire Sinks
http://docs.spring.io/spring-xd/docs/1.0.1.RELEASE/reference/html/#gemfire-server