
Thursday 26 September 2013

Functions in GemFire

You can execute data-independent functions or data-dependent functions in vFabric GemFire. You run data-independent functions by targeting a specific member or specific members in your distributed system, or by targeting logical member groups on which to execute your function. If you are executing a data-dependent function, you specify a region on which to execute the function.
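For example, from a client the two targeting styles look like the sketch below (a sketch only, assuming an existing ClientCache; the region name matches the cache.xml later in this post).

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.execute.Execution;
import com.gemstone.gemfire.cache.execute.FunctionService;

public class TargetingExamples
{
    // data-independent: target a server the client pool is connected to
    public static Execution onServer(ClientCache cache)
    {
        return FunctionService.onServer(cache);
    }

    // data-dependent: target the members hosting the region's data
    public static Execution onRegion(ClientCache cache)
    {
        Region<String, Object> region = cache.getRegion("testRegionFunction");
        return FunctionService.onRegion(region);
    }
}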

Let's start with a very basic function which, in this example, takes a parameter of type Map and inserts it into the region using the Region.putAll method.

ServerPutAllFunction.java
  
package pivotal.au.gemfire.performance.functions;

import java.util.Map;
import java.util.Properties;

import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.Declarable;
import com.gemstone.gemfire.cache.GemFireCache;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.execute.FunctionAdapter;
import com.gemstone.gemfire.cache.execute.FunctionContext;

public class ServerPutAllFunction extends FunctionAdapter implements Declarable 
{
    private static final long serialVersionUID = 1L;

    public static final String ID = "serverputall-function";

    private GemFireCache cache;

    public ServerPutAllFunction()
    {
        this.cache = CacheFactory.getAnyInstance();
    }

    @Override
    public void init(Properties props) 
    {
        // no declarative properties are needed for this function
    }

    @Override
    public void execute(FunctionContext context) 
    {
        // the client sends the batch of entries as the function argument
        @SuppressWarnings("unchecked")
        Map<Object, Object> testMap = (Map<Object, Object>) context.getArguments();
        Region<Object, Object> region = this.cache.getRegion("testRegionFunction");
        region.putAll(testMap);
        // return the number of entries inserted to the caller
        context.getResultSender().lastResult(testMap.size());
    }

    @Override
    public String getId() 
    {
        return ID;
    }
}

cache.xml showing how to define the function
  
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE cache PUBLIC "-//GemStone Systems, Inc.//GemFire Declarative Cache 7.0//EN"
        "http://www.gemstone.com/dtd/cache7_0.dtd">
<cache is-server="true">

    <cache-server port="0" notify-by-subscription="true"/>    
    <pdx read-serialized="false">
        <pdx-serializer>
            <class-name>com.gemstone.gemfire.pdx.ReflectionBasedAutoSerializer</class-name>
            <parameter name="classes">
               <string>pivotal\.au\.gemfire\.performance\..*</string>
            </parameter>
        </pdx-serializer>
    </pdx>

    <region name="testRegion">
        <region-attributes data-policy="partition"
                           statistics-enabled="true"
                           concurrency-level="16">
            <partition-attributes redundant-copies="1" total-num-buckets="113"/>
        </region-attributes>
    </region>

    <region name="testRegionFunction">
        <region-attributes data-policy="partition"
                           statistics-enabled="true"
                           concurrency-level="16">
            <partition-attributes redundant-copies="1" total-num-buckets="113"/>
        </region-attributes>
    </region>
    
    <function-service>
        <function>
            <class-name>pivotal.au.gemfire.performance.functions.SizeFunction</class-name>
        </function>
        <function>
            <class-name>pivotal.au.gemfire.performance.functions.ServerPutAllFunction</class-name>
        </function>
    </function-service>
   
</cache>
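
As an alternative to the function-service element above, functions can be registered programmatically on each server member at startup. A minimal sketch (RegisterFunctions is a hypothetical bootstrap helper, not part of the example project):

package pivotal.au.gemfire.performance.functions;

import com.gemstone.gemfire.cache.execute.FunctionService;

public class RegisterFunctions
{
    public static void registerAll()
    {
        // equivalent to declaring the function in cache.xml
        FunctionService.registerFunction(new ServerPutAllFunction());
    }
}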

Now let's connect with gfsh and display the functions available on all members.
  
gfsh>connect --locator=localhost[10334]
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=192-168-1-5.tpgi.com.au, port=1099] ..
Successfully connected to: [host=192-168-1-5.tpgi.com.au, port=1099]

Cluster-1 gfsh>list functions
Member  | Function
------- | ---------------------
server1 | serverputall-function
server1 | size-function
server2 | serverputall-function
server2 | size-function

Calling the function from a client is done as follows. This run() method comes from a multi-threaded test client that batches entries and sends each batch to the server.
  
public void run() {
    // buffer BATCH_SIZE entries at a time and ship each batch to the
    // server via the ServerPutAllFunction; SAMPLE_SIZE, nThreads,
    // increment, BATCH_SIZE, counter and cache are fields of the
    // enclosing test client
    Map<String, Test> buffer = new HashMap<String, Test>();
    Execution execution = null;

    int dataSize = SAMPLE_SIZE / nThreads;
    System.out.printf("Start: %d  End: %d \n", (dataSize * (increment - 1)), (dataSize * increment));

    for (int i = (dataSize * (increment - 1)); i < (dataSize * increment); i++) {
        buffer.put(String.valueOf(i), new Test(i, "name" + i));
        if (buffer.size() % BATCH_SIZE == 0) {
            try {
                execution = FunctionService.onServer(cache).withArgs(buffer);
                ResultCollector<?, ?> collector = execution.execute(ServerPutAllFunction.ID);
                collector.getResult(); // block until the putAll completes

                counter += buffer.size();
                buffer.clear();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // flush any remaining entries that did not fill a complete batch
    if (!buffer.isEmpty()) {
        execution = FunctionService.onServer(cache).withArgs(buffer);
        ResultCollector<?, ?> collector = execution.execute(ServerPutAllFunction.ID);
        collector.getResult();
        counter += buffer.size();
    }
}
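
For comparison, the same batch could be shipped with a data-dependent call by routing the execution through the region rather than a server. A sketch only, reusing the buffer and cache from the snippet above and assuming the client defines a proxy region for testRegionFunction:

// data-dependent variant: GemFire routes the call to the members that
// host the region's data instead of an arbitrary connected server
Region<String, Test> region = cache.getRegion("testRegionFunction");
Execution execution = FunctionService.onRegion(region).withArgs(buffer);
ResultCollector<?, ?> collector = execution.execute(ServerPutAllFunction.ID);
collector.getResult(); // block until the function completes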

Monday 23 September 2013

Check which JDKs are installed on Mac OS X

Note to self:


[Mon Sep 23 08:16:42 papicella@:~ ] $ /usr/libexec/java_home -V
Matching Java Virtual Machines (4):
    1.7.0_25, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home
    1.7.0_12, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_12.jdk/Contents/Home
    1.6.0_51-b11-457, x86_64: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
    1.6.0_51-b11-457, i386: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

/Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home

Wednesday 18 September 2013

Using Scala to connect to Greenplum

In this demo we use a basic Scala program to connect to Greenplum using the PostgreSQL JDBC driver.

1. Create a Scala program as shown below.

ScalaGreenplumDemo.scala
  
import java.sql.DriverManager
import java.sql.Connection

object ScalaGreenplumDemo {

  def main(args: Array[String]) {
    val driver = "org.postgresql.Driver"
    val url = "jdbc:postgresql://127.0.0.1:5432/gpadmin"
    val username = "pas"
    val password = "pas"

    var connection:Connection = null

    try {
      // make the connection
      Class.forName(driver)
      connection = DriverManager.getConnection(url, username, password)

      // create the statement, and run the select query
      val statement = connection.createStatement()
      val resultSet = statement.executeQuery("select * from scott.emp;")
      while ( resultSet.next() ) {
        val empno = resultSet.getString("empno")
        val ename = resultSet.getString("ename")
        println("empno=" + empno + ", ename = " + ename)
      }
    } catch {
      case e: Throwable => e.printStackTrace
    } finally {
      // close the connection only if it was opened; calling close()
      // unconditionally would throw a NullPointerException when the
      // connection could not be established
      if (connection != null) connection.close()
    }
  }

}

2. Compile it, ensuring the PostgreSQL JDBC JAR is on the classpath; in this example it sits in the directory we are running from.

export CUR_DIR=`pwd`
scalac -classpath $CUR_DIR/postgresql-9.2-1002.jdbc4.jar:. ScalaGreenplumDemo.scala

3. Run it as follows

export CUR_DIR=`pwd`
scala -classpath $CUR_DIR/postgresql-9.2-1002.jdbc4.jar:. ScalaGreenplumDemo

Output

empno=7499, ename = ALLEN
empno=7521, ename = WARD
empno=7698, ename = BLAKE
empno=7782, ename = CLARK
empno=7788, ename = SCOTT
empno=7844, ename = TURNER
empno=7876, ename = ADAMS
empno=7900, ename = JAMES
empno=7902, ename = FORD
empno=7934, ename = MILLER
empno=7111, ename = LUCIA
empno=7113, ename = SIENA
empno=7369, ename = SMITH
empno=7566, ename = JONES
empno=7654, ename = MARTIN
empno=7839, ename = KING
empno=7933, ename = PAS
empno=7112, ename = LUCAS

Monday 16 September 2013

Spring JDBC with PivotalHD and Hawq

HAWQ enables SQL on Hadoop, which means we can use frameworks such as Spring JDBC. In this example we use the Pivotal HD single-node VM with data from a HAWQ append-only table, as shown below.

  
gpadmin=# \dt
                             List of relations
   Schema    |            Name             | Type  |  Owner  |   Storage   
-------------+-----------------------------+-------+---------+-------------
 retail_demo | categories_dim_hawq         | table | gpadmin | append only
 retail_demo | customer_addresses_dim_hawq | table | gpadmin | append only
 retail_demo | customers_dim_hawq          | table | gpadmin | append only
 retail_demo | date_dim_hawq               | table | gpadmin | append only
 retail_demo | email_addresses_dim_hawq    | table | gpadmin | append only
 retail_demo | order_lineitems_hawq        | table | gpadmin | append only
 retail_demo | orders_hawq                 | table | gpadmin | append only
 retail_demo | payment_methods_hawq        | table | gpadmin | append only
 retail_demo | products_dim_hawq           | table | gpadmin | append only
(9 rows)

gpadmin=# select * from customers_dim_hawq limit 5;
 customer_id | first_name | last_name | gender 
-------------+------------+-----------+--------
 11371       | Delphine   | Williams  | F
 5480        | Everett    | Johnson   | M
 26030       | Dominique  | Davis     | M
 41922       | Brice      | Martinez  | M
 47265       | Iva        | Wilson    | F
(5 rows)

Time: 57.334 ms 

Code

Customer.java (POJO)
  
package pivotal.au.hawq.beans;

public class Customer {

 public String customerId;
 public String firstName;
 public String lastName;
 public String gender;
 
 public Customer() 
 {
 }

 public Customer(String customerId, String firstName, String lastName,
   String gender) {
  super();
  this.customerId = customerId;
  this.firstName = firstName;
  this.lastName = lastName;
  this.gender = gender;
 }

..... getters/setters etc ....  

DAO : Constants.java
  
package pivotal.au.hawq.dao;

public interface Constants 
{
   public static final String SELECT_CUSTOMER = "select * from retail_demo.customers_dim_hawq where customer_id = ?";
   
   public static final String SELECT_FIRST_FIVE_CUSTOMERS = "select * from retail_demo.customers_dim_hawq limit 5";
   
}  

DAO : CustomerDAO.java
  
package pivotal.au.hawq.dao;

import java.util.List;

import pivotal.au.hawq.beans.Customer;

public interface CustomerDAO 
{
   public Customer selectCustomer (String customerId);
   
   public List<Customer> firstFiveCustomers();
   
}  

DAO : CustomerDAOImpl.java
  
package pivotal.au.hawq.dao;

import java.util.List;
import javax.sql.DataSource;

import org.springframework.jdbc.core.BeanPropertyRowMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import pivotal.au.hawq.beans.Customer;

public class CustomerDAOImpl implements CustomerDAO 
{
 private JdbcTemplate jdbcTemplate;
 
 public void setDataSource(DataSource dataSource)
 {
     this.jdbcTemplate = new JdbcTemplate(dataSource);
 }

 public Customer selectCustomer(String customerId) 
 {
     return (Customer) jdbcTemplate.queryForObject
            (Constants.SELECT_CUSTOMER, 
              new Object[] { customerId }, 
              new BeanPropertyRowMapper<Customer>( Customer.class));
 }
 
 public List<Customer> firstFiveCustomers() 
 {
  // BeanPropertyRowMapper maps columns such as customer_id onto the
  // camel-cased Customer properties (customerId and so on)
  return jdbcTemplate.query(Constants.SELECT_FIRST_FIVE_CUSTOMERS, 
                            new BeanPropertyRowMapper<Customer>(Customer.class)); 
 }

}  

application-context.xml
  
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:jdbc="http://www.springframework.org/schema/jdbc"
 xmlns:context="http://www.springframework.org/schema/context"
 xsi:schemaLocation="http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd
  http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd">

 <context:property-placeholder location="classpath:/jdbc.properties"/>
 
 <bean id="pivotalHDDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="${jdbc.driverClassName}" />
  <property name="url" value="${jdbc.url}" />
  <property name="username" value="${jdbc.username}" />
  <property name="password" value="${jdbc.password}" />
 </bean>
 
 <bean id="customerDAOImpl" class="pivotal.au.hawq.dao.CustomerDAOImpl">
      <property name="dataSource" ref="pivotalHDDataSource" />
   </bean>
</beans>  

jdbc.properties

jdbc.driverClassName=org.postgresql.Driver
jdbc.url=jdbc:postgresql://172.16.62.142:5432/gpadmin
jdbc.username=gpadmin
jdbc.password=gpadmin

TestCustomerDAO.java
  
package pivotal.au.hawq.dao.test;

import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import pivotal.au.hawq.beans.Customer;
import pivotal.au.hawq.dao.CustomerDAO;

public class TestCustomerDAO 
{
 private Logger logger = Logger.getLogger(this.getClass().getSimpleName());
 private ApplicationContext context;
 private static final String BEAN_NAME = "customerDAOImpl";
 private CustomerDAO customerDAO;
 
 public TestCustomerDAO() 
 {
     context = new ClassPathXmlApplicationContext("application-context.xml");
     customerDAO = (CustomerDAO) context.getBean(BEAN_NAME);  
     logger.log (Level.INFO, "Obtained customerDAOImpl BEAN...");
 }

 public void run()
 {
  System.out.println("Select single customer from HAWQ -> ");
  Customer customer = customerDAO.selectCustomer("59047");
  System.out.println(customer.toString());
  
  System.out.println("Select five customers from HAWQ -> ");
  
  List<Customer> customers = customerDAO.firstFiveCustomers();
  
  for (Customer cust: customers)
  {
   System.out.println(cust.toString());
  }
  
 }
 
 public static void main(String[] args) 
 {
  TestCustomerDAO test = new TestCustomerDAO();
  test.run();
 }

}  

Output

log4j:WARN No appenders could be found for logger (org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
Sep 16, 2013 9:59:09 PM pivotal.au.hawq.dao.test.TestCustomerDAO
INFO: Obtained customerDAOImpl BEAN...
Select single customer from HAWQ -> 
Customer [customerId=59047, firstName=Olivia, lastName=Anderson, gender=F]
Select five customers from HAWQ -> 
Customer [customerId=11371, firstName=Delphine, lastName=Williams, gender=F]
Customer [customerId=5480, firstName=Everett, lastName=Johnson, gender=M]
Customer [customerId=26030, firstName=Dominique, lastName=Davis, gender=M]
Customer [customerId=41922, firstName=Brice, lastName=Martinez, gender=M]
Customer [customerId=47265, firstName=Iva, lastName=Wilson, gender=F]

More Information

Here is the high-level page describing the Pivotal HD & HAWQ technology.
http://blog.gopivotal.com/products/pivotal-hd-ga

This page dives deeper into the PHD VM with a walkthrough covering data loading, MapReduce and SQL queries.
http://pivotalhd.cfapps.io/getting-started/pivotalhd-vm.html

Finally, the following link is the direct download location of the VM discussed above.
http://bitcast-a.v1.o1.sjc1.bitgravity.com/greenplum/pivotal-sw/pivotalhd_singlenodevm_101_v1.7z




Friday 13 September 2013

Greenplum external table accessing GemFire region

To complete the Greenplum external table blogs, the last one I created is an external table querying GemFire region data, as shown below.

Previous Blogs

http://theblasfrompas.blogspot.com.au/2013/09/greenplum-command-based-web-external.html

http://theblasfrompas.blogspot.com.au/2013/09/using-oracles-sqlplus-with-greenplum.html

External Table Querying GemFire region

run.sh

cd /Users/gpadmin/demos/gemfire-externaltable

source env_setup.sh

java -cp $CLASSPATH vmware.au.se.gf7.query.QueryGemFireEmps

exttab.sql
  
drop external table emps_from_gemfire;

CREATE EXTERNAL WEB TABLE emps_from_gemfire
(empno int, name character varying(20), job character varying(20), deptno int)
EXECUTE '/Users/gpadmin/demos/gemfire-externaltable/run.sh' on host
FORMAT 'TEXT' (DELIMITER ',');
Output
  
gpadmin=# select * from emps_from_gemfire;
 empno |   name   |    job    | deptno 
-------+----------+-----------+--------
  7369 | SMITH    | CLERK     |     20
  7380 | BLACK    | CLERK     |     40
  7377 | ADAM     | CLERK     |     20
  7371 | APICELLA | SALESMAN  |     10
  7374 | LUCAS    | SALESMAN  |     10
  7381 | BROWN    | SALESMAN  |     40
  7373 | SIENA    | CLERK     |     40
  7376 | ADRIAN   | CLERK     |     20
  7370 | APPLES   | MANAGER   |     10
  7375 | ROB      | CLERK     |     30
  7379 | FRANK    | CLERK     |     10
  7372 | LUCIA    | PRESIDENT |     30
  7378 | SALLY    | MANAGER   |     20
(13 rows)

Time: 1078.169 ms

Java Code
  
package vmware.au.se.gf7.query;

import java.util.Collection;
import java.util.Iterator;

import vmware.au.se.gf7.deptemp.beans.Employee;

import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.query.FunctionDomainException;
import com.gemstone.gemfire.cache.query.NameResolutionException;
import com.gemstone.gemfire.cache.query.Query;
import com.gemstone.gemfire.cache.query.QueryInvocationTargetException;
import com.gemstone.gemfire.cache.query.QueryService;
import com.gemstone.gemfire.cache.query.SelectResults;
import com.gemstone.gemfire.cache.query.TypeMismatchException;

public class QueryGemFireEmps 
{

 public static final String REGION_NAME = "employees";
 public ClientCache cache = null;
 
 public QueryGemFireEmps() 
 {
    cache = new ClientCacheFactory()
          .set("name", "GreenPlumGemFireClient")
          .set("cache-xml-file", "client.xml")
          .set("log-level", "error")
          .create(); 
 }

 public void run () throws FunctionDomainException, TypeMismatchException, NameResolutionException, QueryInvocationTargetException
 { 
  QueryService queryService = cache.getQueryService();
  Query query = queryService.newQuery("SELECT * FROM /" + REGION_NAME);
  
  Object result = query.execute();
  Collection<?> collection = ((SelectResults<?>)result).asList();
  Iterator<?> iter = collection.iterator();
  
  while (iter.hasNext())
  { 
            Employee emp = (Employee) iter.next();
   System.out.println(emp.toCSVFormat());
  }   
  cache.close();
 }
 
 public static void main(String[] args) throws Exception
 {
  QueryGemFireEmps emps = new QueryGemFireEmps();
  emps.run();
 }

}
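
The Employee bean used above is not shown in this post. A minimal sketch of what it could look like (the field names and toCSVFormat implementation are assumptions matching the external table's column order):

package vmware.au.se.gf7.deptemp.beans;

import java.io.Serializable;

public class Employee implements Serializable
{
    private static final long serialVersionUID = 1L;

    private int empno;
    private String name;
    private String job;
    private int deptno;

    public Employee() {}

    // one CSV line in the external table column order: empno, name, job, deptno
    public String toCSVFormat()
    {
        return empno + "," + name + "," + job + "," + deptno;
    }
}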


Tuesday 10 September 2013

Using Oracle's SQLPlus with Greenplum external tables

I previously blogged about using a Java client to access Oracle tables from Greenplum external tables. That demo works with any JDBC-accessible RDBMS, so it is a generic solution.

http://theblasfrompas.blogspot.com.au/2013/09/greenplum-command-based-web-external.html

Here is a simpler example, this time using SQL*Plus, since I know I am only accessing Oracle tables here.

run.sh
  
cd /Users/gpadmin/demos/sqlplus-externaltable

source setup.sh

sqlplus -s scott/tiger@10.32.243.155:1521/ora11gr2 <<!
@emps.sql
!
emps.sql
  
set feedback off
set heading off
set pagesize 0

select empno||'|'||ename||'|'||deptno
from emp
/ 
exttab.sql
  
drop external table oracle_emps_with_sqlplus;

CREATE EXTERNAL WEB TABLE oracle_emps_with_sqlplus
(empno int, ename character varying(20), deptno int)
EXECUTE '/Users/gpadmin/demos/sqlplus-externaltable/run.sh' on host
FORMAT 'TEXT' (DELIMITER '|');
Example
  
gpadmin=# select * from oracle_emps_with_sqlplus where deptno = 30;
 empno | ename  | deptno 
-------+--------+--------
  7499 | ALLEN  |     30
  7521 | WARD   |     30
  7654 | MARTIN |     30
  7698 | BLAKE  |     30
  7844 | TURNER |     30
  7900 | JAMES  |     30
(6 rows)

Time: 88.829 ms

Greenplum : Command-based Web External Tables accessing Oracle

Command-based external tables allow you to execute a shell command or script to return data, which makes them ideal for consuming data from external systems. In this example we create an external table that connects to Oracle using JDBC to query EMP data.

The example below is based on a Java class which can connect to any RDBMS and run any SQL query, with a comma-separated list of output columns passed as a parameter.

1. Create a script which simply runs a Java class returning data from a query against any RDBMS.

run.sh

java -cp $CLASSPATH:/Users/gpadmin/demos/jdbc-externaltable/ojdbc6.jar:/Users/gpadmin/demos/jdbc-externaltable/jdbc_external.jar:. pivotal.au.greenplum.externaltable.InvokeQuery "oracle.jdbc.OracleDriver" "jdbc:oracle:thin:@10.32.243.125:1521/ora11gr2" "scott" "tiger" "|" "empno,ename,deptno" "select empno,ename,deptno from emp"

Usage as follows:

java -cp $CLASSPATH pivotal.au.greenplum.externaltable.InvokeQuery {driver-class} {url} {username} {passwd} {delimiter} {column-list} {query}

  • driver-class: JDBC driver class to load at runtime
  • url: JDBC connection URL
  • username: database username
  • passwd: database password
  • delimiter: delimiter used to separate output fields
  • column-list: comma-separated list of columns to output to the console
  • query: SQL query to run against the external RDBMS

2. Create an external table as shown below.

CREATE EXTERNAL WEB TABLE oracle_emps 
(empno int, ename character varying(20), deptno int)
EXECUTE '/Users/gpadmin/demos/jdbc-externaltable/run.sh' on host
FORMAT 'TEXT' (DELIMITER '|');

3. Run the DDL in Greenplum (here via exttab.sql), ensuring the fully qualified path to run.sh on the file system is used.
  
gpadmin=# \i exttab.sql;
DROP EXTERNAL TABLE
Time: 13.052 ms
CREATE EXTERNAL TABLE
Time: 5.190 ms  

4. Access as follows
  
gpadmin=# select * from oracle_emps;
 empno | ename  | deptno 
-------+--------+--------
  7369 | SMITH  |     20
  7499 | ALLEN  |     30
  7521 | WARD   |     30
  7566 | JONES  |     20
  7654 | MARTIN |     30
  7698 | BLAKE  |     30
  7782 | CLARK  |     10
  7788 | SCOTT  |     20
  7839 | KING   |     10
  7844 | TURNER |     30
  7876 | ADAMS  |     20
  7900 | JAMES  |     30
  7902 | FORD   |     20
  7934 | MILLER |     10
(14 rows)

Time: 557.705 ms
gpadmin=# select * from oracle_emps where deptno = 20;
 empno | ename | deptno 
-------+-------+--------
  7369 | SMITH |     20
  7566 | JONES |     20
  7788 | SCOTT |     20
  7876 | ADAMS |     20
  7902 | FORD  |     20
(5 rows)

Time: 543.120 ms  

Finally, the Java class is defined as follows.
  
package pivotal.au.greenplum.externaltable;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InvokeQuery 
{
    private String url;
    private String username;
    private String passwd;
    private String columnList;
    private String delimiter;
    private String driverClassName;
    private String query;
    private Connection conn = null;
    
 public InvokeQuery() 
 {
 }
 
 private void run (String[] args) throws SQLException
 {
  driverClassName = args[0];
  url = args[1];
  username = args[2];
  passwd = args[3];
  delimiter = args[4];
  columnList = args[5];
  query = args[6];
  
  Statement stmt = null;
  ResultSet rset = null;
  
  String[] columns = splitColumns(columnList);
  
  try 
  {
   conn = getConnection(driverClassName);
   stmt = conn.createStatement();
   rset = stmt.executeQuery(query);
   int size = columns.length;
   while (rset.next())
   { 
    int i = 0;
    // go through columns and output data
    for (String column: columns)
    {
     i++;
     if (i < size)
     {
      System.out.print(rset.getString(column) + "" + delimeter);
     }
     else
     {
      System.out.print(rset.getString(column) + "\n");
     }
    }
   }
   
  } 
  catch (Exception e) 
  {
   e.printStackTrace();
  } 
  finally
  {
   if (rset != null)
   {
    rset.close();
   }
   
   if (stmt != null)
   {
    stmt.close();
   }
   
   if (conn != null)
   {
    conn.close();
   }
  }
 }
 
 private String[] splitColumns (String columnList)
 {
  return columnList.split(",");
 }
  
 private Connection getConnection(String driverClass) throws SQLException, ClassNotFoundException
 {
  Class.forName(driverClass); 
  conn = DriverManager.getConnection(url, username, passwd);
  return conn; 
 }
 
 public static void main(String[] args) throws Exception 
 {
  InvokeQuery invokeQuery = new InvokeQuery();
  
  if (args.length != 7)
  {
    System.out.println("Less then 7 arguments provided, usage as follows");
    System.out.println("\nUsage: \n\n\tjava pivotal.au.greenplum.externaltable.InvokeQuery <driver-class> <url> <username> <passwd> <delimeter> <colum-list> <query> \n");
    System.exit(-1);
  }
  
  invokeQuery.run(args);
  
 }

}  

Thursday 5 September 2013

Pivotal SQLFire - JDBC Client Failover URL using secondary-locators

When you use the thin client to connect to a SQLFire locator member (rather than directly to a SQLFire server), the thin client driver can provide automatic failover if the initial connection to the distributed system is lost. Note, however, that this assumes the initial connection to the specified SQLFire locator succeeds. To improve the chances of establishing an initial connection to a SQLFire system, you can optionally specify the address of a secondary locator in addition to the primary locator, using the secondary-locators connection property. For example:

jdbc:sqlfire://locator1:port1/;secondary-locators=locator2:port2
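
From JDBC this looks like the sketch below (the driver class comes from sqlfireclient.jar; the hostnames and ports are placeholders for your own locators):

import java.sql.Connection;
import java.sql.DriverManager;

public class FailoverConnect
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("com.vmware.sqlfire.jdbc.ClientDriver");

        // if locator1 cannot be reached, the driver tries locator2 instead
        Connection conn = DriverManager.getConnection(
            "jdbc:sqlfire://locator1:1527/;secondary-locators=locator2:1528");

        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}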

Pivotal SQLFire : Using -sync=false for Server nodes


Specifying -sync=false (the default for locators) causes the sqlf command to return control after the member reaches the "waiting" state. With -sync=true (the default for servers), the sqlf command does not return control until all dependent members have booted and the member has finished synchronizing its disk stores.

Always use -sync=false when starting multiple members on the same machine, especially when executing sqlf commands from a shell script or batch file, so that the script file does not hang while waiting for a particular SQLFire member to start. You can use the sqlf locator wait and/or sqlf server wait later in the script to verify that each server has finished synchronizing and has reached the "running" state. 

Example Below:

Start Locator

sqlf locator start -peer-discovery-address=localhost -peer-discovery-port=41111 -conserve-sockets=false -client-bind-address=localhost -client-port=1527 -dir=locator -sync=false

Start Servers and run a script once system is up.

sqlf server start -server-groups=MYGROUP -client-bind-address=localhost -conserve-sockets=false -client-port=1528 -critical-heap-percentage=85 -eviction-heap-percentage=75 -locators=localhost[41111] -bind-address=localhost -dir=server1 -sync=false 

sqlf server start -server-groups=MYGROUP -client-bind-address=localhost -conserve-sockets=false -client-port=1529 -critical-heap-percentage=85 -eviction-heap-percentage=75 -locators=localhost[41111] -bind-address=localhost -dir=server2 -sync=false

sqlf locator wait -dir=locator
sqlf server wait -dir=server1
sqlf server wait -dir=server2

# Ready to run some SQL now system is up and running
sqlf <<!
connect client 'localhost:1527';
run './sql/add-list.sql';
!

Wednesday 4 September 2013

Pivotal SQLFire export/importing CSV files

Pivotal SQLFire offers two built-in procedures that allow table data to be exported to a CSV file and imported back from one, as shown below. The procedures are as follows:
  • SYSCS_UTIL.EXPORT_TABLE
  • SYSCS_UTIL.IMPORT_TABLE
Example

In this example we are using a table as follows
  
sqlf> select * from apples_dept;
DEPTNO     |DNAME         |LOC          
----------------------------------------
70         |SUPPORT       |SYDNEY       
60         |DEV           |PERTH        
50         |MARKETING     |ADELAIDE     
40         |OPERATIONS    |BRISBANE     
30         |SALES         |CHICAGO      
20         |RESEARCH      |DALLAS       
10         |ACCOUNTING    |NEW YORK     

7 rows selected

1. Export the table as shown below.
  
sqlf> CALL SYSCS_UTIL.EXPORT_TABLE('APP', 'APPLES_DEPT', '/Users/papicella/sqlfire/vFabric_SQLFire_111_b42624/pasdemos/sqlfireweb/sql/apples_dept.csv', null, null, null);
Statement executed.

2. View export data as shown below.

[Wed Sep 04 20:10:18 papicella@:~/sqlfire/vFabric_SQLFire_111_b42624/pasdemos/sqlfireweb/sql ] $ cat apples_dept.csv 
70,"SUPPORT","SYDNEY"
60,"DEV","PERTH"
50,"MARKETING","ADELAIDE"
40,"OPERATIONS","BRISBANE"
30,"SALES","CHICAGO"
20,"RESEARCH","DALLAS"
10,"ACCOUNTING","NEW YORK"

3. Truncate table
  
sqlf> truncate table apples_dept;
0 rows inserted/updated/deleted
4. Import data back into the table
  
sqlf> CALL SYSCS_UTIL.IMPORT_TABLE('APP', 'APPLES_DEPT', '/Users/papicella/sqlfire/vFabric_SQLFire_111_b42624/pasdemos/sqlfireweb/sql/apples_dept.csv', ',', '"', null, 0);
Statement executed.
sqlf> select * from apples_dept;
DEPTNO     |DNAME         |LOC          
----------------------------------------
10         |ACCOUNTING    |NEW YORK     
20         |RESEARCH      |DALLAS       
30         |SALES         |CHICAGO      
40         |OPERATIONS    |BRISBANE     
50         |MARKETING     |ADELAIDE     
60         |DEV           |PERTH        
70         |SUPPORT       |SYDNEY       

7 rows selected  
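
The same procedures can also be called over a thin-client JDBC connection instead of sqlf. A minimal sketch (the connection URL and output path are placeholders):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class ExportDeptTable
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("com.vmware.sqlfire.jdbc.ClientDriver");
        Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527/");

        // same arguments as the sqlf example: schema, table, output file,
        // column delimiter, character delimiter and codeset (nulls = defaults)
        CallableStatement cs =
            conn.prepareCall("CALL SYSCS_UTIL.EXPORT_TABLE(?, ?, ?, ?, ?, ?)");
        cs.setString(1, "APP");
        cs.setString(2, "APPLES_DEPT");
        cs.setString(3, "/tmp/apples_dept.csv");
        cs.setString(4, null);
        cs.setString(5, null);
        cs.setString(6, null);
        cs.execute();

        cs.close();
        conn.close();
    }
}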

More Information

http://pubs.vmware.com/vfabric53/index.jsp?topic=/com.vmware.vfabric.sqlfire.1.1/reference/system_procedures/derby/rrefimportproc.html

http://pubs.vmware.com/vfabric53/index.jsp?topic=/com.vmware.vfabric.sqlfire.1.1/reference/system_procedures/derby/rrefexportproc.html