Below are some screenshots of what Hyperic 4.6.5 shows when monitoring an Oracle 11g RDBMS running on a VM with a Red Hat 5.7 guest OS. The database being monitored is the repository database used by Hyperic itself. The screenshots below show the main dashboard with the overall status of the Oracle 11g resource, as well as a view of a DB writer process for the running instance.
The plugin to monitor the Oracle instance is one of the 80+ plugins available with Hyperic out of the box. Once the HQ agent discovers the Oracle instance, all that remains is to provide the config for a DB user which has access to all data dictionary tables. The easiest way to do that is to ensure the user you connect with has the following grant.
grant select any dictionary to {user};
Given I own the VM and the Oracle instance, I will simply use the system user to query the data dictionary, as shown below. Once set up, you're good to go.
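In a shared environment you would normally avoid the system account. A minimal sketch of a dedicated least-privilege monitoring user (the user name and password here are assumptions, not from the original setup) might look like this:

```sql
-- hypothetical dedicated user for Hyperic monitoring
CREATE USER hqmonitor IDENTIFIED BY hqmonitor;

-- allow the agent to log in
GRANT CREATE SESSION TO hqmonitor;

-- read access to all data dictionary views, as required by the plugin
GRANT SELECT ANY DICTIONARY TO hqmonitor;
```

You would then supply this user in the Hyperic resource config instead of system.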
For more information on vFabric Hyperic, see the link below.
http://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.hyperic.4.6/vFabric_Hyperic_4.6.html
Monday, 26 March 2012
Friday, 16 March 2012
Using Derby-style table functions to load data into SQLFire
Loading data into SQLFire can be done in various ways, including using Spring Batch with CSV files, Apache DdlUtils, or direct JDBC connections pulling data into SQLFire. The approach below is yet another way. In this example we load the Oracle HR schema table "departments" into a SQLFire distributed system.
1. Create a Java class with code that simply queries the table data and returns it as a JDBC ResultSet.
package vmware.au.se.sqlfire.derby;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import oracle.jdbc.OracleDriver;

public class DepartmentTable
{
    public static String SQL = "select * from departments";

    // Invoked by the SQLFire/Derby table function; the returned ResultSet
    // is consumed by the engine as rows are read.
    public static ResultSet read() throws SQLException
    {
        Connection conn = getConnection();
        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery(SQL);
        return rset;
    }

    public static Connection getConnection() throws SQLException
    {
        String username = "hr";
        String password = "hr";
        String thinConn = "jdbc:oracle:thin:@172.16.101.70:1521/linux11gr2";
        DriverManager.registerDriver(new OracleDriver());
        Connection conn = DriverManager.getConnection(thinConn, username, password);
        conn.setAutoCommit(false);
        return conn;
    }
}
2. Add the JAR file containing the class above, as well as the Oracle JDBC JAR file, to the system CLASSPATH as shown below.
export CUR_DIR=`pwd`
export CLASSPATH=$CUR_DIR/lib/ojdbc6.jar:$CUR_DIR/lib/derby-funct.jar
Note: This ensures that when we start our SQLFire nodes they will have these classes available on the classpath.
3. Start the SQLFire servers as shown below.
> sqlf server start -server-groups=MYGROUP -locators=localhost[41111] -client-bind-address=localhost -client-port=1528 -dir=server1 -classpath=$CLASSPATH &
> sqlf server start -server-groups=MYGROUP -locators=localhost[41111] -client-bind-address=localhost -client-port=1529 -dir=server2 -classpath=$CLASSPATH &
4. Log into the distributed system using the CLI and run the following SQL to create the table in SQLFire that will store the same dataset as the Oracle "departments" table.
create diskstore STORE1;

call sys.set_eviction_heap_percentage_sg (85, 'MYGROUP');

drop table departments;

create table departments
(
  department_id int NOT NULL CONSTRAINT department_id_PK PRIMARY KEY,
  department_name varchar(40),
  manager_id int,
  location_id int
)
partition by column (department_id)
SERVER GROUPS (MYGROUP)
persistent 'STORE1'
REDUNDANCY 1;
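Since SQLFire is Derby-based, you can confirm the table was created before loading it. This is a sketch that assumes SQLFire exposes the standard Derby SYS.SYSTABLES catalog:

```sql
-- assumes the Derby SYS.SYSTABLES system catalog is available
select tablename, tabletype
from sys.systables
where tablename = 'DEPARTMENTS';
```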
5. Log into the distributed system using the CLI and run the SQL below to create a function.
CREATE FUNCTION externalDepartments ()
RETURNS TABLE
(
  DEPARTMENT_ID INT,
  DEPARTMENT_NAME VARCHAR(40),
  MANAGER_ID INT,
  LOCATION_ID INT
)
LANGUAGE JAVA
PARAMETER STYLE DERBY_JDBC_RESULT_SET
READS SQL DATA
EXTERNAL NAME 'vmware.au.se.sqlfire.derby.DepartmentTable.read';
6. Log into the distributed system using the CLI and run the SQL below to insert data into the "departments" table in SQLFire using the function we created in step 5.
insert into departments select s.* FROM TABLE (externalDepartments()) s;
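The table function behaves like a read-only table expression, so it is not limited to a straight copy. As a purely illustrative variant, you could filter the Oracle rows during the load:

```sql
-- illustrative only: load just the departments at location 1700
insert into departments
  select s.*
  from table (externalDepartments()) s
  where s.location_id = 1700;
```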
7. Verify we now have our departments table in SQLFire with data.
[Fri Mar 16 08:55:40 papicella@:~/sqlfire/vFabric_SQLFire_101/pasdemos/oraclehr ] $ sqlf
sqlf version 10.4
sqlf> connect client 'localhost:1527';
sqlf> select * from departments;
DEPARTMENT&|DEPARTMENT_NAME               |MANAGER_ID |LOCATION_ID
----------------------------------------------------------------------------
230        |IT Helpdesk                   |NULL       |1700
120        |Treasury                      |NULL       |1700
10         |Administration                |200        |1700
240        |Government Sales              |NULL       |1700
130        |Corporate Tax                 |NULL       |1700
20         |Marketing                     |201        |1800
250        |Retail Sales                  |NULL       |1700
140        |Control And Credit            |NULL       |1700
30         |Purchasing                    |114        |1700
260        |Recruiting                    |NULL       |1700
150        |Shareholder Services          |NULL       |1700
40         |Human Resources               |203        |2400
270        |Payroll                       |NULL       |1700
160        |Benefits                      |NULL       |1700
50         |Shipping                      |121        |1500
170        |Manufacturing                 |NULL       |1700
60         |IT                            |103        |1400
180        |Construction                  |NULL       |1700
70         |Public Relations              |204        |2700
190        |Contracting                   |NULL       |1700
80         |Sales                         |145        |2500
200        |Operations                    |NULL       |1700
90         |Executive                     |100        |1700
210        |IT Support                    |NULL       |1700
100        |Finance                       |108        |1700
220        |NOC                           |NULL       |1700
110        |Accounting                    |205        |1700

27 rows selected
More info on derby-style functions can be found here.
http://db.apache.org/derby/docs/10.4/devguide/cdevspecialtabfuncs.html
Monday, 12 March 2012
vFabric Hyperic application monitoring
vFabric Hyperic, a stand-alone component of vFabric Application Performance Manager, helps web operations teams monitor the application infrastructure for custom web applications across physical machines, a virtual infrastructure environment, or the cloud. By providing immediate notification of application performance degradation or unavailability, Hyperic enables system administrators to ensure the availability and reliability of critical business applications. With out-of-the-box monitoring of application metrics, app servers, web servers, databases, messaging servers, authentication systems, guest operating systems, virtual machines (VMs), vSphere ESX hosts, and more, you'll have single-pane visibility into your entire application stack regardless of where it is deployed.
Below are some screenshots showing the main dashboard page of Hyperic along with its integration into vSphere.