
Monday 26 December 2011

vFabric SQLFire - Modelling the classic DEPT/EMP tables

VMware vFabric SQLFire is a memory-optimized, distributed database management system designed for applications that have demanding scalability and availability requirements. Here we will show a quick demo of how to get up and running with the classic DEPT/EMP RDBMS tables, only this time in memory using SQLFire.

Note: It's assumed SQLFire is installed in a directory as follows:

/Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_10

1. Set up your environment as shown below (Mac example).

export PATH=/Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_10/bin:$PATH

2. Ensure you have a valid JDK in your path as shown below

[Mon Dec 26 22:01:26 papicella@:~/vmware/software/sqlfire/vFabric_SQLFire_10/pasdemos/deptemp ] $ java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03-383-11A511)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02-383, mixed mode)

3. In your current directory create two folders called "server1" and "server2"

mkdir server1
mkdir server2

4. Start two servers as shown below; they will discover each other using multicast port 12333

sqlf server start -dir=server1 -client-port=1527 -mcast-port=12333 &
sqlf server start -dir=server2 -client-port=1528 -mcast-port=12333 &

Output:

[Mon Dec 26 22:11:49 papicella@:~/vmware/software/sqlfire/vFabric_SQLFire_10/pasdemos/deptemp ] $ ./start-servers.sh
[Mon Dec 26 22:11:51 papicella@:~/vmware/software/sqlfire/vFabric_SQLFire_10/pasdemos/deptemp ] $ Starting SQLFire Server using multicast for peer discovery: 239.192.81.1[12333]
Starting network server for SQLFire Server at address localhost/127.0.0.1[1527]
Starting SQLFire Server using multicast for peer discovery: 239.192.81.1[12333]
Starting network server for SQLFire Server at address localhost/127.0.0.1[1528]
SQLFire Server pid: 1032 status: running
  Distributed system now has 2 members.
  Other members: 192-168-1-3.tpgi.com.au(1031:datastore):14506/50489
Logs generated in /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_10/pasdemos/deptemp/server2/sqlfserver.log
SQLFire Server pid: 1031 status: running
  Distributed system now has 2 members.
  Other members: 192-168-1-3.tpgi.com.au(1032:datastore):23687/50492
Logs generated in /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_10/pasdemos/deptemp/server1/sqlfserver.log

5. Connect as shown below

[Mon Dec 26 22:13:06 papicella@:~/vmware/software/sqlfire/vFabric_SQLFire_10/pasdemos/deptemp ] $ sqlf
sqlf version 10.4
sqlf> connect client 'localhost:1527';
sqlf>
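
Applications can make the same connection over JDBC using the SQLFire thin client. Below is a minimal sketch, assuming the SQLFire client JAR is on the classpath and that the driver class com.vmware.sqlfire.jdbc.ClientDriver and the jdbc:sqlfire://host:port/ URL format apply to your release; the class name SQLFireConnectDemo is just for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SQLFireConnectDemo
{
  public static void main(String[] args) throws Exception
  {
    // Load the SQLFire thin client JDBC driver (class name assumed)
    Class.forName("com.vmware.sqlfire.jdbc.ClientDriver");

    // Connect to the network server started on port 1527 above
    Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527/");

    // Simple sanity query against the SQLFire system catalog
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("select count(*) from sys.systables");
    while (rs.next())
    {
      System.out.println("System catalog reports " + rs.getInt(1) + " tables");
    }

    rs.close();
    stmt.close();
    conn.close();
  }
}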

6. In a separate terminal window create a file called "deptempPERSIST.sql" with contents as follows.

AUTOCOMMIT OFF;

create diskstore STORE1;

CREATE TABLE DEPT (
 DEPTNO INTEGER NOT NULL CONSTRAINT DEPTNO_PK PRIMARY KEY,
 DNAME VARCHAR(14),
 LOC VARCHAR(13))
 partition by column (deptno)
 persistent 'STORE1'
 REDUNDANCY 1;

INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK');
INSERT INTO DEPT VALUES (20, 'RESEARCH',   'DALLAS');
INSERT INTO DEPT VALUES (30, 'SALES',      'CHICAGO');
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON');

CREATE TABLE EMP (
 EMPNO INTEGER NOT NULL CONSTRAINT EMP_PK Primary Key,
 ENAME VARCHAR(10),
 JOB VARCHAR(9),
 MGR INTEGER,
 HIREDATE DATE,
 SAL INTEGER,
 COMM INTEGER,
 DEPTNO INTEGER)
 partition by column (deptno)
 COLOCATE WITH (DEPT)
 persistent 'STORE1'
 REDUNDANCY 1;

INSERT INTO EMP VALUES
        (7369, 'SMITH',  'CLERK',     7902, '1980-12-17',  800, NULL, 20);
INSERT INTO EMP VALUES
        (7499, 'ALLEN',  'SALESMAN',  7698, '1981-02-21', 1600,  300, 30);
INSERT INTO EMP VALUES
        (7521, 'WARD',   'SALESMAN',  7698, '1981-02-22', 1250,  500, 30);
INSERT INTO EMP VALUES
        (7566, 'JONES',  'MANAGER',   7839, '1981-04-02',  2975, NULL, 20);
INSERT INTO EMP VALUES
        (7654, 'MARTIN', 'SALESMAN',  7698, '1981-09-28', 1250, 1400, 30);
INSERT INTO EMP VALUES
        (7698, 'BLAKE',  'MANAGER',   7839, '1981-05-01',  2850, NULL, 30);
INSERT INTO EMP VALUES
        (7782, 'CLARK',  'MANAGER',   7839, '1981-06-09',  2450, NULL, 10);
INSERT INTO EMP VALUES
        (7788, 'SCOTT',  'ANALYST',   7566, '1982-12-09', 3000, NULL, 20);
INSERT INTO EMP VALUES
        (7839, 'KING',   'PRESIDENT', NULL, '1981-11-17', 5000, NULL, 10);
INSERT INTO EMP VALUES
        (7844, 'TURNER', 'SALESMAN',  7698, '1981-09-08',  1500,    0, 30);
INSERT INTO EMP VALUES
        (7876, 'ADAMS',  'CLERK',     7788, '1983-01-12', 1100, NULL, 20);
INSERT INTO EMP VALUES
        (7900, 'JAMES',  'CLERK',     7698, '1981-12-03',   950, NULL, 30);
INSERT INTO EMP VALUES
        (7902, 'FORD',   'ANALYST',   7566, '1981-12-03',  3000, NULL, 20);
INSERT INTO EMP VALUES
        (7934, 'MILLER', 'CLERK',     7782, '1982-01-23', 1300, NULL, 10);

COMMIT;

ALTER TABLE EMP
   ADD CONSTRAINT EMP_FK Foreign Key (DEPTNO)
   REFERENCES DEPT (DEPTNO);

COMMIT;

7. Return to the sqlf session from step #5 and run the file "deptempPERSIST.sql" as shown below

sqlf> run 'deptempPERSIST.sql';

8. Query and verify you now have DEPT / EMP tables with data as shown below.
sqlf> select * from dept;
DEPTNO     |DNAME         |LOC          
----------------------------------------
10         |ACCOUNTING    |NEW YORK     
20         |RESEARCH      |DALLAS       
30         |SALES         |CHICAGO      
40         |OPERATIONS    |BOSTON       

4 rows selected
sqlf> select * from emp;
EMPNO      |ENAME     |JOB      |MGR        |HIREDATE  |SAL        |COMM       |DEPTNO     
-------------------------------------------------------------------------------------------
7934       |MILLER    |CLERK    |7782       |1982-01-23|1300       |NULL       |10         
7782       |CLARK     |MANAGER  |7839       |1981-06-09|2450       |NULL       |10         
7839       |KING      |PRESIDENT|NULL       |1981-11-17|5000       |NULL       |10         
7902       |FORD      |ANALYST  |7566       |1981-12-03|3000       |NULL       |20         
7876       |ADAMS     |CLERK    |7788       |1983-01-12|1100       |NULL       |20         
7788       |SCOTT     |ANALYST  |7566       |1982-12-09|3000       |NULL       |20         
7369       |SMITH     |CLERK    |7902       |1980-12-17|800        |NULL       |20         
7566       |JONES     |MANAGER  |7839       |1981-04-02|2975       |NULL       |20         
7521       |WARD      |SALESMAN |7698       |1981-02-22|1250       |500        |30         
7499       |ALLEN     |SALESMAN |7698       |1981-02-21|1600       |300        |30         
7900       |JAMES     |CLERK    |7698       |1981-12-03|950        |NULL       |30         
7844       |TURNER    |SALESMAN |7698       |1981-09-08|1500       |0          |30         
7654       |MARTIN    |SALESMAN |7698       |1981-09-28|1250       |1400       |30         
7698       |BLAKE     |MANAGER  |7839       |1981-05-01|2850       |NULL       |30         

14 rows selected

9. Check how our DEPT/EMP tables are stored within the distributed system.
sqlf> select substr(tablename, 1, 10) as tablename, datapolicy from sys.systables where tableschemaname='APP';
TABLENAME |DATAPOLICY              
-----------------------------------
EMP       |PERSISTENT_PARTITION    
DEPT      |PERSISTENT_PARTITION    

2 rows selected  

Here the data is not only partitioned but also persisted to local disk to ensure it can be rehydrated when the members are brought down and restarted.

10. Verify where our data is stored and ensure that it's partitioned by DEPTNO and that the EMP records are colocated with the DEPT data.

sqlf> select substr(dsid(), 1, 48) as dsid, deptno, dname from dept;
DSID                                            |DEPTNO     |DNAME         
---------------------------------------------------------------------------
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |SALES         
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |40         |OPERATIONS    
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |10         |ACCOUNTING    
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |20         |RESEARCH      

4 rows selected
sqlf> select substr(dsid(), 1, 48) as dsid, deptno, ename from emp;
DSID                                            |DEPTNO     |ENAME     
-----------------------------------------------------------------------
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |WARD      
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |ALLEN     
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |JAMES     
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |TURNER    
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |MARTIN    
192-168-1-3.tpgi.com.au(1031)<v0>:14506/50489   |30         |BLAKE     
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |10         |MILLER    
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |10         |CLARK     
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |10         |KING      
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |20         |FORD      
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |20         |ADAMS     
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |20         |SCOTT     
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |20         |SMITH     
192-168-1-3.tpgi.com.au(1032)<v1>:23687/50492   |20         |JONES     

14 rows selected
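
Because EMP is partitioned by DEPTNO and colocated with DEPT, joins on DEPTNO can be satisfied locally on each member. Here is a minimal JDBC sketch of such a join, using the same assumed driver class and URL as the connection example above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DeptEmpJoinDemo
{
  public static void main(String[] args) throws Exception
  {
    Class.forName("com.vmware.sqlfire.jdbc.ClientDriver"); // assumed thin client driver class
    Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527/");

    // Join DEPT and EMP on the partitioning/colocation key DEPTNO
    PreparedStatement ps = conn.prepareStatement(
        "select d.dname, e.ename, e.job from dept d, emp e " +
        "where d.deptno = e.deptno and d.deptno = ? order by e.ename");
    ps.setInt(1, 20);

    ResultSet rs = ps.executeQuery();
    while (rs.next())
    {
      System.out.println(rs.getString(1) + " | " + rs.getString(2) + " | " + rs.getString(3));
    }

    rs.close();
    ps.close();
    conn.close();
  }
}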

For more information on vFabric SQLFire visit the link below.

http://www.vmware.com/products/application-platform/vfabric-sqlfire/overview.html

Monday 12 December 2011

GemFire Write-Behind with an Oracle RDBMS

Many in-memory data fabrics use an Oracle RDBMS to persist data and make it available as required. In this demo we explore an effective way to use GemFire write-behind functionality with an Oracle RDBMS and strategies for increasing throughput. GemFire's write-behind technology simply translates what in-memory data management does uniquely well (low-latency, high-throughput small data updates) into what disk-based data management does best (higher-latency, high-throughput large data updates).

Here is an example config of such a setup.

cache XML file
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE cache PUBLIC
    "-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN" 
"http://www.gemstone.com/dtd/cache6_6.dtd">
<cache>
  <gateway-hub id="DBWriterHub" port="-1" startup-policy="none">
     <gateway id="DBWriter">
        <gateway-listener>
           <class-name>pas.au.gemfire.demo.cachewriter.DBGatewayListener</class-name>
        </gateway-listener>
        <!-- up to 10 seconds can elapse between sending batches of up to 5000 events -->
        <gateway-queue batch-size="5000" batch-time-interval="10000"/>
     </gateway>
  </gateway-hub>
  <region name="firstRegion" refid="PARTITION_REDUNDANT"> 
    <region-attributes enable-gateway="true" hub-id="DBWriterHub">
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk" />
      </eviction-attributes>
    </region-attributes>
  </region>
  <function-service>
    <function>
      <class-name>pas.au.gemfire.demo.cachewriter.SizeFunction</class-name>
    </function>
  </function-service>
  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>
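
To see the listener in action, a peer member can load the cache XML above and put entries into "firstRegion"; each create/update/destroy is queued on the gateway and handed to DBGatewayListener in batches. The sketch below assumes the cache XML is saved as cache.xml and that the demo's JdbcBatch domain class has a no-arg constructor (its fields are not shown in this post).

package pas.au.gemfire.demo.cachewriter;

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.Region;

import pas.au.gemfire.demo.cachewriter.dao.jdbcbatch.JdbcBatch;

public class WriteBehindClientDemo
{
  public static void main(String[] args)
  {
    // Start a peer member using the declarative cache configuration above;
    // the gateway hub, queue and region are created from that XML.
    Cache cache = new CacheFactory()
        .set("cache-xml-file", "cache.xml")
        .create();

    Region<String, JdbcBatch> region = cache.getRegion("firstRegion");

    // Each put is queued by the gateway and delivered to DBGatewayListener
    // in batches of up to 5000 events or every 10 seconds.
    for (int i = 0; i < 10; i++)
    {
      JdbcBatch value = new JdbcBatch(); // constructor assumed for illustration
      region.put("key-" + i, value);
    }

    cache.close();
  }
}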

Gateway Listener Code
package pas.au.gemfire.demo.cachewriter;

import com.gemstone.gemfire.cache.Declarable;
import com.gemstone.gemfire.cache.Operation;
import com.gemstone.gemfire.cache.util.GatewayEvent;
import com.gemstone.gemfire.cache.util.GatewayEventListener;

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import pas.au.gemfire.demo.cachewriter.dao.jdbcbatch.JdbcBatch;
import pas.au.gemfire.demo.cachewriter.dao.jdbcbatch.JdbcBatchDAO;
import pas.au.gemfire.demo.cachewriter.dao.jdbcbatch.JdbcBatchDAOImpl;

public class DBGatewayListener implements GatewayEventListener, Declarable 
{
  private Logger logger = Logger.getLogger(this.getClass().getSimpleName());
  private ApplicationContext context;
  private static final String BEAN_NAME = "jdbcBatchDAO";
  private JdbcBatchDAO jdbcBatchDAO;

  public DBGatewayListener()
  {
    context = new ClassPathXmlApplicationContext("application-config.xml");
    jdbcBatchDAO = (JdbcBatchDAOImpl) context.getBean(BEAN_NAME);
  }

  @Override
  public boolean processEvents(List<GatewayEvent> list)
  {
    logger.log(Level.INFO, String.format("Size of List<GatewayEvent> = %s", list.size()));

    // Partition the batch of gateway events by operation type
    List<JdbcBatch> newEntries = new ArrayList<JdbcBatch>();
    List<JdbcBatch> updatedEntries = new ArrayList<JdbcBatch>();
    List<String> destroyedEntries = new ArrayList<String>();
    int possibleDuplicates = 0;

    for (GatewayEvent ge : list)
    {
      if (ge.getPossibleDuplicate())
      {
        possibleDuplicates++;
      }

      if (ge.getOperation().equals(Operation.UPDATE))
      {
        updatedEntries.add((JdbcBatch) ge.getDeserializedValue());
      }
      else if (ge.getOperation().equals(Operation.CREATE))
      {
        newEntries.add((JdbcBatch) ge.getDeserializedValue());
      }
      else if (ge.getOperation().equals(Operation.DESTROY))
      {
        destroyedEntries.add(ge.getKey().toString());
      }
    }

    // Push each non-empty group to the database as a single JDBC batch
    if (newEntries.size() > 0)
    {
      jdbcBatchDAO.storeInsertBatch(newEntries);
    }

    if (updatedEntries.size() > 0)
    {
      jdbcBatchDAO.storeUpdateBatch(updatedEntries);
    }

    if (destroyedEntries.size() > 0)
    {
      jdbcBatchDAO.storeDeleteBatch(destroyedEntries);
    }

    logger.log(Level.INFO,
        String.format("New Entries = [%s], Updated Entries = [%s], Destroyed Entries = [%s], Possible Duplicates = [%s]",
            newEntries.size(),
            updatedEntries.size(),
            destroyedEntries.size(),
            possibleDuplicates));
    
    return true;
  }

  @Override
  public void close()
  {
  }

  @Override
  public void init(Properties properties)
  {
  }
}
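
The JdbcBatchDAOImpl referenced by the listener is not shown above. Below is a minimal, hypothetical sketch of what its storeInsertBatch method could look like using Spring's JdbcTemplate and JDBC batching against Oracle; the class name JdbcBatchDAOSketch, the table and column names, and the JdbcBatch getters are all assumptions for illustration, not the actual demo code.

package pas.au.gemfire.demo.cachewriter.dao.jdbcbatch;

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcBatchDAOSketch
{
  private JdbcTemplate jdbcTemplate;

  public void setJdbcTemplate(JdbcTemplate jdbcTemplate)
  {
    this.jdbcTemplate = jdbcTemplate;
  }

  public void storeInsertBatch(final List<JdbcBatch> newEntries)
  {
    // One JDBC batch per gateway batch keeps round trips to Oracle low,
    // which is where the write-behind throughput gain comes from.
    jdbcTemplate.batchUpdate(
        "insert into batch_table (id, payload) values (?, ?)", // hypothetical table/columns
        new BatchPreparedStatementSetter()
        {
          public void setValues(PreparedStatement ps, int i) throws SQLException
          {
            JdbcBatch entry = newEntries.get(i);
            ps.setString(1, entry.getId());      // assumed getter
            ps.setString(2, entry.getPayload()); // assumed getter
          }

          public int getBatchSize()
          {
            return newEntries.size();
          }
        });
  }

  // storeUpdateBatch and storeDeleteBatch would follow the same pattern
}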

For more information on vFabric GemFire use the link below.

http://www.vmware.com/products/application-platform/vfabric-gemfire/overview.html