Wednesday, 30 April 2014

How much log space is required during an online reorg?

The log space required for an online table REORG is typically larger than what is required for an offline table REORG. The amount of space required is determined by the number of rows being reorganized, the number of indexes, the size of the index keys, and how poorly organized the table is at the outset. It is a good idea to establish a typical benchmark for log space utilization for your tables.

During an online table reorganization, each row is typically moved twice. With a single index, each move requires updating the index key to reflect the new location, and once all accesses to the old location are complete, the key is updated again to remove the reference to the old RID. When the row is moved back, these index updates are performed again. All of this activity is logged to make online table reorganization fully recoverable, so there is a minimum of two data log records per row (each including the row data) and four index log records per row (each including the key data). The clustering index in particular is prone to filling up index pages, causing index splits and merges, which must also be logged.
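As a rough, hedged starting point (an estimate only, not an exact formula), you can pull the row count and average row size from the catalog and multiply by the minimum number of log records described above. MYSCHEMA and MYTABLE below are placeholders for your own table:
db2 "SELECT CARD, AVGROWSIZE FROM SYSCAT.TABLES WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTABLE'"
A rough lower bound for the data portion is then 2 x CARD x AVGROWSIZE bytes, plus roughly 4 x CARD x (key size) bytes per index, before accounting for log record headers and any index splits and merges.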

Because online table reorganization frequently issues internal commits, it usually does not hold many logs active. The exception is the truncate phase, where the REORG must acquire an S (share) lock on the table. If it cannot acquire the lock, it waits while holding its log space, and other transactions can fill up the logs quickly.

To check the log space used, use one of the commands below:
db2pd -d <db_name> -logs
or 
db2 "select int(total_log_used/1024/1024) as Log_Used_Meg, \
int(total_log_available/1024/1024) as Log_Space_Free_Meg, \
int((float(total_log_used) /float(total_log_used+total_log_available))*100) \
as Percent_Used,int(tot_log_used_top/1024/1024) as Max_Log_Used_Meg, \
int(sec_log_used_top/1024/1024) as Max_Secondary_Used_Meg, \
int(sec_logs_allocated) as Secondaries from sysibmadm.snapdb"
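To correlate the log usage with the progress of the reorganization itself, you can also check the reorg status with db2pd (a quick sketch; the exact output depends on your version):
db2pd -d <db_name> -reorgs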

Monday, 28 April 2014

How many bytes is a character column storing?

CHARACTER_LENGTH(<COLUMN>,OCTETS)
Use the SELECT below to find how many bytes a character column is storing in your database:
db2 "SELECT CHARACTER_LENGTH(NAME,OCTETS) FROM TABLEX"
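For example, to find the longest value (in bytes) stored in the same column, you can wrap it in MAX (same table and column as above):
db2 "SELECT MAX(CHARACTER_LENGTH(NAME,OCTETS)) FROM TABLEX"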

How is TIMERON calculated and what is it?

The timeron value is meant to convey the relative cost of the access plan chosen for the query.

  • A timeron is a cost estimate calculated by DB2 based on an evaluation of the resources that will be used. IBM uses a proprietary algorithm for calculating timeron values that estimates the total cost as a weighted sum of the I/O cost and processor cost.

  • A weighting factor is applied to both the I/O cost estimate and the CPU cost estimate to apply more (or less) emphasis to I/O versus CPU. So, if I/O were given a weighting factor of 1 and CPU a weighting factor of 0.5, CPU cost would be weighted at half of the I/O cost. In fact, these are the default weighting factors used by DB2.

  • To determine the appropriate emphasis, DB2 examines the CPU model being used. Based upon this value, specific weighting factors are applied. Of course, the DB2 optimizer will utilize the statistics, indexes, filter factors, and other information at its disposal to estimate I/O and CPU cost (before applying the weighting factors).

Source: http://www-1.ibm.com/support/docview.wss?uid=swg21207055
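If you want to see the timeron estimates for your own statements, one hedged way is to use the explain tools shipped with DB2 (check the exact options for your version; <db_name> and the statement are placeholders):
db2expln -database <db_name> -statement "<your SQL statement>" -terminal
The Estimated Cost reported in the output is expressed in timerons.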

Sunday, 27 April 2014

How to change the default date format in DB2 databases

Follow the steps below to change the date format in DB2 databases:


1 - On the command line, change your current directory to sqllib\bnd

For example: 

On Windows:
c:\program files\IBM\sqllib\bnd
On UNIX:

cd /home/db2inst1/sqllib/bnd


2 - Connect to the database from the operating system shell as a user with SYSADM authority:
db2 connect to DBNAME
db2 bind @db2ubind.lst datetime ISO blocking all grant public
(In your case, substitute your database name and desired date format for DBNAME and ISO, respectively.)

The date formats available in DB2 are:

DEF - Use a date and time format associated with the territory code.
EUR - Use the IBM standard for Europe date and time format.
ISO - Use the date and time format of the International Standards Organization.
JIS - Use the date and time format of the Japanese Industrial Standard.
LOC - Use the date and time format in local form associated with the territory code of the database.
USA - Use the IBM standard for U.S. date and time format.
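After rebinding, you can verify the new default format with a quick test (assuming you are still connected to the database):
db2 "VALUES CURRENT DATE"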

Saturday, 26 April 2014

High Availability Options for DB2/LUW



1 March 2007
This article lists and briefly describes the options for increasing the availability of your DB2 on Linux, UNIX and Windows (DB2/LUW) systems through the use of redundant systems and clustering.  Please contact me if you want to discuss any of them more in depth.  This sort of technology is very appealing to many of us because we like to brag to our friends about the cool new technology that we are using.  While there are many legitimate uses for this technology, I will take a moment at the beginning of this article and rain on your parade.  However boring it may be, following good practices in planning, building, maintaining and monitoring your systems yields the best results and is very cost effective.  Since these are not the focus of this article I will not get into them here, but I would be happy to discuss them with you at length.  I would also like to note that I consider these good practices to be a prerequisite to using any of the more advanced features for high availability, as they add complexity and can DECREASE availability if you have not laid a good foundation.  However, once you have implemented good practices the following items can add that incremental availability that many systems need.
In this article I will describe the 4 primary high availability schemes that we have for DB2/LUW.  For simplicity I will assume two server clusters except where noted, but most of the scenarios allow for more servers. The four options that I will discuss are:
1.  Traditional Fail Over
2.  DB2 HADR feature
3.  Replication
4. GridSCALE

1.  Traditional Fail Over
This is the traditional fail over scenario where you have a shared disk system that can mount file systems on either of the servers in the cluster.  Your instance and database file systems must be able to be mounted on either one of the servers.  Since the instance files can only be on one server at a time, it can only run on one server at a time.  This requires clustering software such as Tivoli System Automation (TSA, which is free with DB2 on Linux) that can automatically detect and move the file systems and start the instance after a failure.  You can have more than one instance in the cluster.  Under normal circumstances you can either have all instances on one server with the other server as a passive backup, or you can have some instances on each server with the surviving server taking over all instances in the event of a problem.  In the latter scenario with two servers, you still need about 50% unused capacity in the cluster to allow full performance processing in the event that a server fails.  To decrease the unused capacity you can have clusters with more than two servers and designate one server as a passive backup for the rest of the cluster.  For example, you could create a 4-server cluster with three servers running at full capacity and one idle server.  In this case you only need about 25% unused capacity.  You just need to keep in mind that while the probability of two servers failing at the same time is low, it is still a possibility.  In addition to TSA, other clustering software options exist including Veritas Cluster Server, HACMP and Microsoft Cluster Server.
Advantages
·      Low cost
·      Proven technology -  This failover practice has been in use for years
·      With minimal configuration, many of the clustering software offerings can detect database failures and perform the fail over.
Disadvantages
·      Database is a single point of failure.  Having the data on RAID helps, but not for logical corruption such as when someone deletes a file.
·      Fail over time increases with the number of logical disk volumes.
·      Must use external shared disk hardware
·      Servers must be in relatively close physical proximity.
2.  DB2 HADR feature
With the DB2 High Availability and Disaster Recovery feature you have two separate copies of the database -- one on each server.  There is one active copy and one passive copy that is kept up to date automatically by DB2.  There is an active DB2 instance on each server, with one containing the primary copy of the database and the other containing a standby copy.  The standby copy is perpetually in "roll forward pending" mode, and as changes are committed on the production copy of the database they are automatically sent to the secondary copy.  As with traditional failover, you can have a mutual take over situation if you have multiple databases in the cluster or you can have one server designated solely as a standby.  The HADR feature automatically keeps the copies of the database synchronized, but you need clustering software like TSA if you wish to automate the process of detecting failure and making the standby the primary database.
Advantages
·      Eliminates all single points of failure
·      Extremely fast fail over -  you just tell the secondary copy to take over primary processing
·      You can use either internal disk or external disk storage
·      Allows for Disaster Recovery
·      Allows for “rolling” fix pack upgrades – Since the instances can be at different maintenance levels for short periods of time, you can perform maintenance upgrades with an outage that lasts only as long as it takes to tell the secondary to become the primary.  This does not apply to version upgrades like going from DB2 v8.2 to DB2 v9.1.
·      Secondary copy can be at your disaster recovery site.
Disadvantages
·        Consumes more disk space because you have two copies of the database.
3.  Replication
This option allows you to have two active databases.  For HA you typically only want to have updates occur on one database at a time for the least complex maintenance.  This is often called a master/slave replication.  The really good thing about replication is the ability to do reporting, ad hoc queries and backups from the secondary server without impacting your primary at all.  There are two types of replication -- SQL Replication and Q-replication.  Replication automatically keeps the copies of the database synchronized, but you need clustering software like TSA if you wish to automate the process of detecting failure and making the standby the primary database.  You can also use our replication products to replicate data to or from certain other non-IBM databases.  You can read more about the traditional SQL Replication and the newer Q-Replication at these links:
Advantages
·          Eliminates all single points of failure
·          Extremely fast fail over -  you just tell the secondary copy to take over primary processing
·          You can use either internal disk or external disk storage
·          Allows for Disaster Recovery
·          Allows offloading of reporting tasks.
·          Secondary copy can be at your disaster recovery site.
Disadvantages
·          Consumes more disk space because you have two copies of the database.
·          More complex configuration and maintenance
4.  XKOTO GridSCALE
Look for an announcement regarding this technology in the near future!
xkoto is an IBM partner that has a database load balancing technology called GRIDSCALE.  In essence, it allows you to have three or more copies of your database that are kept in sync, and the read load is divided among the databases.  This option is designed primarily for increasing the capacity of databases by allowing you to scale out with additional commodity servers as your load grows, but since there are multiple copies of the database, you also get high availability with no down time.  When one server fails, the rest just keep processing.  The servers can be in the same location or widely separated.
In this option the GridSCALE software intercepts all incoming SQL and farms it out to the individual servers as needed.  Each server has its own copy of the database.  For updates/inserts/deletes it sends them to all servers in the cluster.  Your application gets the appropriate SQL return as soon as the first server completes the data change, and GridSCALE then ensures that all servers perform the update, or takes a broken server out of the cluster if it has failed.  Queries are assigned to the server with the smallest load.  It also keeps track of the status of updates and will not execute a query on a server that has not processed an update that would affect it.  For additional details please see:
Advantages
·          Eliminates all single points of failure
·          Zero fail over time.
·          You can use either internal disk or external disk storage
·          Allows for Disaster Recovery
·          Allows increased scalability.
·          A copy or copies can be at your disaster recovery site.
Disadvantages
·          Consumes more disk space because you have multiple copies of the database.
·          More complex configuration and maintenance

HADR and TSA

Implement DB2 high availability disaster recovery in a Tivoli System Automation cluster domain

The step-by-step implementation process

In today's world, where businesses are serving customers from around the world 24 hours a day, 7 days a week, customers expect their computing systems to be 100% reliable. DB2 for Linux, UNIX, and Windows has always been in the forefront of databases in providing such industrial strength reliability. In DB2 UDB V8.2, DB2 introduced two new features that further provide customers with options for implementing high availability: high availability disaster recovery (HADR) and automatic client rerouting capabilities. By duplicating the workload of the database to a separate site, these features protect users from production downtime in the event of a local hardware failure or a catastrophic site failure. These features are shipped as part of the DB2 UDB Enterprise Server Edition or the DB2 Enterprise 9 standard package.
HADR as a technology has been available in Informix Dynamic Server (IDS) since the mid-1990s. With the acquisition of Informix, it made its way into DB2 in version 8.2. The easiest way to understand HADR is as a pair of servers that are kept in sync with each other at the database level. The primary server of the pair interacts with the end user's application and receives transactions, while the standby server of the pair keeps itself in sync with the primary by applying the transactions directly from the primary server's log buffer to itself. If the primary server fails, the standby can take over the workload very quickly (in most cases under 30 seconds). It also supports rolling upgrades of the database or OS software, allowing you to apply fixes without significantly impacting your production system.
Tivoli System Automation (TSA) for Multiplatforms is designed to provide high availability for critical business applications and middleware through policy-based self-healing that is easily tailored to your individual application environment. It includes plug-and-play automation policy modules for many IBM® and non-IBM middleware and applications such as DB2, WebSphere®, Apache, and mySAP Business Suite. With TSA for Multiplatforms, you can establish one operation and automation team responsible for z/OS®, Linux, and AIX® applications, to help greatly simplify problem determination and resolution.

Figure 1. DB2 HADR in a TSA cluster domain topology 

This is the actual software configuration used to set up the environment for this article:
  • Operating system: Red Hat Linux Enterprise Server 2.4.21-27 GNU/Linux 
  • DB2: DB2 UDB Enterprise Server Edition (ESE) Version 8.1.0.96 at Fixpak 10 
  • TSA: TSA 1.2.0 FP0005 Linux
Below is the actual hardware configuration used to set up the environment for this article.
Two IBM eServer pSeries® 690 server machines in the cluster domain, each with the following configuration:
  • Processors: Intel Xeon MP 4 CPU 2.70 GHz 
  • Memory: 8 GB 
  • Network adapters: Two Intel PRO/1000 Ethernet Adapters

One IBM eServer pSeries 690 server machine at the disaster recovery site with the following configuration:
  • Processors: Intel Xeon CPU 3.0 GHz 
  • Memory: 8 GB 
  • Network adapters: Two Intel PRO/1000 Ethernet Adapters
External shared storage
There are four IBM FastT600 Fiber Channel Disks at the cluster side and four IBM DS4300 Fiber Channel Disks at the disaster recovery site.
The following section documents a three-node topology, as depicted in Figure 1. In this example, there is an Active-Passive TSA cluster domain consisting of two nodes (Node1 and Node2) that share common storage containing the actual DB2 database files and software. The third node (Node3) in this topology is the disaster recovery site, located remotely and hosting the standby database for the primary database mentioned earlier. The TSA cluster domain and the standby server are linked together through leased lines. The primary and standby database name for the HADR setup is jsbmain.
NODE1: One of the machines of the Active-Passive TSA cluster domain setup. In the current setup, this node is the active one and owns the resources of the cluster. 
NODE2: The second machine of the TSA cluster domain setup. In the current setup, this node is the passive node and acts like a standby node for the cluster. 
NODE3: This machine is the HADR standby server for DB2 failover and does not fall under the TSA cluster domain setup. 
Detailed below are the steps to successfully configure DB2 HADR on a TSA cluster domain. This setup assumes that the TSA cluster domain is already set up properly. For more information on how to set up a basic TSA cluster domain and the related commands, please refer to Appendix A.

1. Add the appropriate IP address to the hostname mappings in the /etc/hosts file of each node. The hosts file on each node should look like this: 
10.1.1.5 NODE1
10.1.1.6 NODE2
10.1.1.2 NODE3
2. Execute the ping hostname or IP Address command on each of the Nodes to make sure that all three nodes (for example, Node1, Node2, and Node3) are able to communicate with each other through TCP/IP protocol. 
3. Make sure that the /etc/services file has identical entries for the ports on which the HADR service listens, on all the nodes of the cluster (Node1 and Node2) as well as the standby machine (Node3). 
Sample output from the /etc/services files should look like this on all three machines:
DB2_HADR_15 55001/tcp
DB2_HADR_16 55005/tcp
In this case DB2_HADR_15 is the name of the HADR service running on the primary node of the cluster while DB2_HADR_16 is the name of the HADR service running on the standby server.
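A simple sanity check (a suggestion, not part of the original procedure) is to compare the HADR entries on every node and confirm they are identical:
grep DB2_HADR /etc/services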
Note: Many of the TSA commands used in the setup require RSH to be set up on all three nodes. RSH allows a user from one node to run commands on another remote node. For more information on setting up RSH on Red Hat Linux, please refer to the Resources section of this article.
Configure RSH to allow the root user to issue remote commands on each node (NODE1, NODE2 AND NODE3) by adding the following lines to the file /root/.rhosts.
Node1 root
Node2 root
Node3 root
Login as root user and issue the following commands on each of the three nodes: 
# rsh Node1 ls
# rsh Node2 ls
# rsh Node3 ls
You should see the directory listing of /root on NODE1, NODE2, and NODE3.

Please refer to Appendix A for a basic two-node TSA cluster domain setup. Also, get more information on related TSA commands in Appendix A.

Note: In this setup the database is stored on external shared storage, /jsbdata, which is a FastT600 fiber channel disk array. The instances on the two machines of the cluster are different (but have the same name, db2inst1), while the database is the same. The default TSA scripts that ship with DB2 do not support the primary and the standby servers (at the TSA level) having the same name, so these scripts need to be modified to support this configuration. 
The following catalog command was used to register the database information at the two instances: 
db2 CATALOG DATABASE jsbmain AS jsbmain ON /JSBDATA
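To confirm that the catalog entry was created on each instance, you can list the database directory (a standard command, shown here as a quick check):
db2 LIST DATABASE DIRECTORY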
Issue the following commands from the command line processor (CLP):
On the Primary database server (Node1):
db2 CONNECT RESET
db2 UPDATE DB CFG FOR jsbmain USING INDEXREC RESTART LOGINDEXBUILD ON LOGARCHMETH1 "DISK: /jsbmain/ARCHFILES" LOGPRIMARY 100 LOGSECOND 50 LOGFILSIZ 5000
db2 BACKUP DATABASE jsbmain TO "/jsbmain/JSBBAK" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 4 WITHOUT PROMPTING
The directory where the backup of the primary server is stored (/jsbmain/jsbbak) should be accessible from the standby server (Node3), or the backup should be copied to a local drive on the standby server so that the restore process can complete.
Note: Doing a local restore by copying the backup file to a local drive on the standby server is recommended since a remote restore takes more time because the restore buffers have to be shipped through the network.
On the standby server (Node3):
db2 RESTORE DATABASE jsbmain FROM "/jsbmain/JSBBAK" REPLACE HISTORY FILE WITHOUT PROMPTING
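Before shipping the image to Node3, you can optionally confirm that the backup completed successfully on Node1 (a hedged extra check, not required by the procedure):
db2 LIST HISTORY BACKUP ALL FOR jsbmain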

On the primary server (Node1), execute the following command from the db2 CLP to enable the automatic client reroute feature of HADR:

db2 UPDATE ALTERNATE SERVER FOR DATABASE jsbmain USING HOSTNAME 10.1.1.2 PORT 45000
Where 10.1.1.2 is the IP address of the standby server (NODE3) and 45000 is the port number where the db2inst3 instance of the standby server is listening.
On the standby server (NODE3) execute the following command from the db2 prompt to enable the automatic client reroute feature of HADR.
db2 UPDATE ALTERNATE SERVER FOR DATABASE jsbmain USING HOSTNAME 10.1.1.1 PORT 50000
IMPORTANT: When specifying the hostname for the alternate server for the standby server always make sure that you specify the virtual IP address of the TSA cluster domain (in this case the virtual IP address is 10.1.1.1). 
50000 is the port number where the db2inst1 instance is listening. Make sure that db2inst1 on Node2 of the TSA cluster domain also listens on the same port number as db2inst1 on Node1. Otherwise, in the scenario of an HADR failover, db2inst3 on server Node3 tries to communicate with port 50000 of db2inst1 (which will not be active in case of a disaster). All the clients should connect at least once to the primary server to pick up the alternate server information in case of a disaster.
To learn more about the automatic client reroute feature of HADR, please refer to the Resources section of this article.


Execute the following commands on the database of the active node of the cluster (Node1 in this case) to make this database the primary database for the HADR setup:
db2 UPDATE DB CFG FOR jsbmain USING HADR_LOCAL_HOST 10.1.1.1
db2 UPDATE DB CFG FOR jsbmain USING HADR_LOCAL_SVC DB2_HADR_15
db2 UPDATE DB CFG FOR jsbmain USING HADR_REMOTE_HOST 10.1.1.2
db2 UPDATE DB CFG FOR jsbmain USING HADR_REMOTE_SVC DB2_HADR_16
db2 UPDATE DB CFG FOR jsbmain USING HADR_REMOTE_INST DB2INST3
db2 UPDATE DB CFG FOR jsbmain USING HADR_SYNCMODE NEARSYNC
db2 UPDATE DB CFG FOR jsbmain USING HADR_TIMEOUT 120

Note: Special care should be taken to ensure that you always specify the virtual IP address of the TSA cluster domain in the HADR_LOCAL_HOST for the primary server in the HADR setup to enable HADR to function normally in this environment.
Execute the following commands on the standby server (Node3) to make this database the standby database for the HADR setup:
db2 UPDATE DB CFG FOR jsbmain USING HADR_LOCAL_HOST 10.1.1.2
db2 UPDATE DB CFG FOR jsbmain USING HADR_LOCAL_SVC DB2_HADR_16
db2 UPDATE DB CFG FOR jsbmain USING HADR_REMOTE_HOST 10.1.1.1
db2 UPDATE DB CFG FOR jsbmain USING HADR_REMOTE_SVC DB2_HADR_15
db2 UPDATE DB CFG FOR jsbmain USING HADR_REMOTE_INST DB2INST1
db2 UPDATE DB CFG FOR jsbmain USING HADR_SYNCMODE NEARSYNC
db2 UPDATE DB CFG FOR jsbmain USING HADR_TIMEOUT 120
Ensure that both servers on the TSA domain and the standby server have the TCPIP protocol enabled for DB2 communication. Execute the db2set DB2COMM=TCPIP command on both the servers of the TSA domain, as well as on the standby server.
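At this point you can double-check the HADR-related settings on both databases before starting HADR (piping through grep is just a convenience here):
db2 GET DB CFG FOR jsbmain | grep -i hadr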

As standby instance owner (db2inst3), start HADR on the standby node (Node3) as follows:
db2 DEACTIVATE DATABASE jsbmain
db2 START HADR ON DATABASE jsbmain AS STANDBY
As primary instance owner (db2inst1), start HADR on the primary node (Node1) as follows:
db2 DEACTIVATE DATABASE jsbmain
db2 START HADR ON DATABASE jsbmain AS PRIMARY
Note: While starting HADR, always start the HADR services on the standby first and then the primary. Similarly, when stopping HADR, stop the services first on the primary and then the standby.

Now, since you are done with the entire HADR setup, verify whether it is really working.
Execute the following command at the primary server (Node1):
db2 GET SNAPSHOT FOR DB ON jsbmain
The output should be similar to the one shown below:
HADR Status
Role = Primary
State = Peer
Synchronization mode = Nearsync
Connection status = Connected, 11/24/2006 03:43:39.044650
Heartbeats missed = 0
Local host = 10.1.1.1
Local service = DB2_HADR_15
Remote host = jsbdr
Remote service = DB2_HADR_16
Remote instance = db2inst3
timeout (seconds) = 120
Primary log position (file, page, LSN) = S0000139.LOG, 0, 000000003C8E0000
Standby log position (file, page, LSN) = S0000139.LOG, 0, 000000003C8E0000
Log gap running average (bytes) = 0
Execute the following command at the standby server (Node3):
db2 GET SNAPSHOT FOR DB ON jsbmain
The output in the standby server should be similar to one shown below:
HADR Status
Role = Standby
State = Peer
Synchronization mode = Nearsync
Connection status = Connected, 11/24/2006 03:41:59.782744
Heartbeats missed = 0
Local host = jsbdr
Local service = DB2_HADR_16
Remote host = 10.1.1.1
Remote service = DB2_HADR_15
Remote instance = db2inst1
timeout (seconds) = 120
Primary log position (file, page, LSN) = S0000139.LOG, 0, 000000003C8E0000
Standby log position (file, page, LSN) = S0000139.LOG, 0, 000000003C8E0000
Log gap running average (bytes) = 0
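As an alternative to the snapshot output above, db2pd can also report the HADR state of a database (a quick sketch; the exact fields vary by version):
db2pd -d jsbmain -hadr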
To get more information on the various states of HADR pair and the actual working of HADR, please refer to the Resources section of this article.

The last step of the setup procedure is to test the failover capability of HADR. Follow these steps:
a. Manually shut down the primary server using the db2_kill command.
b. Execute the takeover command at the standby server:
db2 TAKEOVER HADR ON DATABASE jsbmain
c. If the normal takeover does not work, specify the BY FORCE option to force the takeover on the standby server (db2 TAKEOVER HADR ON DATABASE jsbmain BY FORCE).
d. Taking a snapshot as described earlier will now show that the standby server is performing the role of the primary server. It may take some time for the status to be reflected because of the network latency involved in applying the log buffers; during that time the standby server shows the status as remote catchup pending.
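e. (Optional, a hedged sketch of failing back, provided the old primary's log position allows reintegration.) Once the original primary (Node1) is repaired, you can reintegrate it as a standby and then move the primary role back to it. On Node1, as db2inst1:
db2 START HADR ON DATABASE jsbmain AS STANDBY
Once the pair reaches Peer state, still on Node1:
db2 TAKEOVER HADR ON DATABASE jsbmain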
Now you are all set to leverage the power of DB2 HADR on TSA Cluster!!



The following commands were used to set up a two-node TSA cluster domain:
preprpnode: This command prepares the security settings for the node to be included in a cluster. When issued, public keys are exchanged among the nodes, and the RMC access control list (ACL) is modified to enable access to cluster resources by all the nodes of the cluster. 
mkrpdomain: This command creates a new cluster definition. It is used to specify the name of the cluster, and the list of nodes to be added to the cluster. 
lsrpdomain: This command lists information about the cluster to which the node where the command runs belongs. 
startrpdomain / stoprpdomain: These commands are used to bring the cluster online and offline, respectively. 
addrpnode: Once a cluster has been defined and is operational, this command is used to add new nodes to the cluster. 
startrpnode / stoprpnode: These commands are used to bring individual nodes online and offline to the cluster. They are often used when performing maintenance on a particular system. The node is stopped, repairs or maintenance are performed, and then the node is restarted, at which time it rejoins the cluster. 
lsrpnode: This command is used to view the list of nodes defined to a cluster, as well as the operating state (OpState) of each node. Note that this command is useful only on nodes that are Online in the cluster; otherwise it will not display the list of nodes. 
rmrpdomain: This command removes a defined cluster. 
rmrpnode: This command removes one or more nodes from a cluster definition.
For detailed descriptions of these commands, refer to these manuals, all of which you can find on the IBM TSA CD:
IBM Reliable Scalable Cluster Technology for Linux, Administration Guide, SA22-7892
IBM Reliable Scalable Cluster Technology for Linux, Technical Reference, SA22-7893
IBM Reliable Scalable Cluster Technology for AIX 5L: Administration Guide, SA22-7889
IBM Reliable Scalable Cluster Technology for AIX 5L: Technical Reference, SA22-7890
Please refer to the Resources section at the bottom of the article for TSA references.

The following scenarios show how you can create a cluster, add nodes to the cluster, and how you can check the status of the IBM TSA daemon (IBM.RecoveryRM).

To create this cluster, you need to:
1. Log in as root on each node in the cluster.
2. Set the environment variable CT_MANAGEMENT_SCOPE=2 on each node:
export CT_MANAGEMENT_SCOPE=2
3. Issue the preprpnode command on all nodes to allow communication between the cluster nodes.
preprpnode node01 node02
4. You can now create a cluster with the name "SA_Domain" running on Node1 and Node2. The following command can be issued from any node:
mkrpdomain SA_Domain node01 node02
Note: When creating RSCT peer domains (clusters) using the mkrpdomain command, the characters used for the peer domain name are limited to the following ASCII characters: A-Z, a-z, 0-9, (Period), and _ (underscore).
5. To look up the status of SA_Domain, issue the lsrpdomain command: lsrpdomain
Output:
Name       OpState  RSCTActiveVersion  MixedVersions  TSPort  GSPort
SA_Domain  Offline  2.3.3.0            No             12347   12348
The cluster is defined but offline.
6. Issue the startrpdomain command to bring the cluster online:
startrpdomain SA_Domain
When you run the lsrpdomain command again, you see that the cluster is still in the process of starting up; the OpState is Pending Online.

Output:
Name       OpState         RSCTActiveVersion  MixedVersions  TSPort  GSPort
SA_Domain  Pending Online  2.3.3.0            No             12347   12348

Notes: 
1. You may get an error message similar to this one: 
"2632-044 the domain cannot be created due to the following errors that were detected while harvesting information from the target nodes: 
node1: 2632-068 this node has the same internal identifier as node2 and cannot be included in the domain definition."
This error most often occurs if you have cloned Linux images. Something went wrong with the cluster, and the entire configuration should be reset. You can resolve such problems by running the /usr/sbin/rsct/install/bin/recfgct command on the node which is named in the error message in order to reset the node ID. 
Continue with the preprpnode command. 
2. You may also get an error message like this: 
"2632-044 The domain cannot be created due to the following errors that were detected while harvesting information from the target nodes: 
node1: 2610-418 Permission is denied to access the resources or resource class specified in this command."
To resolve this issue, check your hostname resolution. Make sure that all entries for each node of the cluster in your local /etc/hosts files on all nodes and the nameserver entries are identical.
After creating a two-node cluster, you can add a third node to SA_Domain in this way:
1. Issue the lsrpdomain command as a root user to see if your cluster is online:
Output:
Name       OpState  RSCTActiveVersion  MixedVersions  TSPort  GSPort
SA_Domain  Online   2.3.3.0            No             12347   12348
2. Issue the lsrpnode command to see which nodes are online:
Name OpState RSCT Version
node02 Online 2.3.3.0
node03 Offline 2.3.3.0
node01 Online 2.3.3.0
3. Issue the following preprpnode commands as a root user to allow communication between the existing nodes and the new node.
Log on to Node3 as a root user and enter:
preprpnode node01 node02
Log on to Node2 as a root user and enter:
preprpnode node03
Log on to Node1 as a root user and enter:
preprpnode node03
Make sure that you execute the preprpnode command on each node. It is strongly recommended.
4. In order to add Node3 to the cluster definition, issue the addrpnode command as a root user on Node1 or Node2, which are already online on the cluster:
addrpnode node03
Issue the lsrpnode command as a root user to see the status of all nodes:
Name OpState RSCT Version
node02 Online 2.3.3.0
node03 Offline 2.3.3.0
node01 Online 2.3.3.0
5. As a root user, start Node3 from an online node:
startrpnode node03
After a short time Node3 should be online, too.

Format snapshot for dynamic SQL in DB2

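The script below parses the plain-text output of the dynamic SQL snapshot into a '~'-delimited file. To produce the input file, a typical way (the statement monitor switch should be on so the counters are populated) is:
db2 UPDATE MONITOR SWITCHES USING STATEMENT ON
db2 GET SNAPSHOT FOR DYNAMIC SQL ON <db_name> > ECCM_DYNSQL.snap
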
#!/usr/bin/ksh
#
#########################
#######          ########
#####  MAIN  PROC  ######
#######          ########
#########################
#

##
#  Check input
##

usage="\n Usage: $0 <DynamicSQL Snapshot File> "
example="\n Example:  $0 ECCM_DYNSQL.snap  \n"

if [[ $# -lt 1 ]]
then

        echo "${usage}"
        echo "${example}"
        exit
fi

##
#  Initialize Command line Variables
##

FILE=${1}

echo "Processing File ${FILE}"

awk 'BEGIN {    FS="="
                numexec = "Number of executions"
                numcomp = "Number of compilations"
                wrstprep = "Worst prep time"
                bestprep = "Best prep time"
                irowsd = "Internal rows deleted"
                irowsi = "Internal rows inserted"
                rowsr = "Rows read"
                arowsr = "Average Rows read"
                irowsu = "Internal rows updated"
                rowsw = "Rows Written"
                arowsw = "Average Rows Written"
                sorts = "Statement Sorts"
                asorts = "Average Statement Sorts"
                sorto = "Statement Sort Overflows"
                asorto = "Average Sort Overflows"
                sortt = "Statement Sort Time"
                asortt = "Average Sort Time"
                bpdlr = "BP Data Logical Reads"
                abpdlr = "Average BP Data Logical Reads"
                bpdpr = "BP Data Physical Reads"
                abpdpr = "Average BP Data Physical Reads"
                bptdlr = "BP Temp Data Logical Reads"
                abptdlr = "Average BP Temp Data Logical Reads"
                bptdpr = "BP Temp Data Physical Reads"
                abptdpr = "Average BP Temp Data Physical Reads"
                bpilr = "BP Index Logical Reads"
                abpilr = "Average BP Index Logical Reads"
                bpipr = "BP Index Physical Reads"
                abpipr = "Average BP Index Physical Reads"
                bptilr = "BP Temp Index Logical Reads"
                abptilr = "Average BP Temp Index Logical Reads"
                bptipr = "BP Temp Index Physical Reads"
                abptipr = "Average BP Temp Index Physical Reads"
                xtime = "Total exec time"
                atime = "Average exec time"
                ucpu = "Total user cpu"
                scpu = "Total system cpu"
                text = "Text"
                printf( "%s~", numexec )
                printf( "%s~", numcomp )
                printf( "%s~", wrstprep )
                printf( "%s~", bestprep )
                printf( "%s~", irowsd )
                printf( "%s~", irowsi )
                printf( "%s~", rowsr )
                printf( "%s~", arowsr )
                printf( "%s~", irowsu )
                printf( "%s~", rowsw )
                printf( "%s~", arowsw )
                printf( "%s~", sorts )
                printf( "%s~", asorts )
                printf( "%s~", sorto )
                printf( "%s~", asorto )
                printf( "%s~", sortt )
                printf( "%s~", asortt )
                printf( "%s~", bpdlr )
                printf( "%s~", abpdlr )
                printf( "%s~", bpdpr )
                printf( "%s~", abpdpr )
                printf( "%s~", bptdlr )
                printf( "%s~", abptdlr )
                printf( "%s~", bptdpr )
                printf( "%s~", abptdpr )
                printf( "%s~", bpilr )
                printf( "%s~", abpilr )
                printf( "%s~", bpipr )
                printf( "%s~", abpipr )
                printf( "%s~", bptilr )
                printf( "%s~", abptilr )
                printf( "%s~", bptipr )
                printf( "%s~", abptipr )
                printf( "%s~", xtime )
                printf( "%s~", atime )
                printf( "%s~", ucpu )
                printf( "%s~", scpu )
                printf( "%s~", text )
                printf( "\n" )
                numexec = ""
                numcomp = ""
                wrstprep = ""
                bestprep = ""
                irowsd = ""
                irowsi = ""
                rowsr = ""
                arowsr = ""
                irowsu = ""
                rowsw = ""
                arowsw = ""
                sorts = ""
                asorts = ""
                sorto = ""
                asorto = ""
                sortt = ""
                asortt = ""
                bpdlr = ""
                abpdlr = ""
                bpdpr = ""
                abpdpr = ""
                bptdlr = ""
                abptdlr = ""
                bptdpr = ""
                abptdpr = ""
                bpilr = ""
                abpilr = ""
                bpipr = ""
                abpipr = ""
                bptilr = ""
                abptilr = ""
                bptipr = ""
                abptipr = ""
                xtime = ""
                atime = ""
                ucpu = ""
                scpu = ""
                text = ""
        } \
{
        if($0 ~ /Number of executions/)
               {
                numexec = $2
                if(numexec == 0)
                   div = 1
                 else
                   div = numexec
               }
        if($0 ~ /Number of compilations/)
                numcomp = $2
        if($0 ~ /Worst preparation time/)
                wrstprep = $2
        if($0 ~ /Best preparation time/)
                bestprep = $2
        if($0 ~ /Internal rows deleted/)
                irowsd = $2
        if($0 ~ /Internal rows inserted/)
                irowsi = $2
        if($0 ~ /Rows read/)
               {
                rowsr = $2
                arowsr = rowsr/div
               }
        if($0 ~ /Internal rows updated/)
                irowsu = $2
        if($0 ~ /Rows written/)
               {
                rowsw = $2
                arowsw = rowsw/div
               }
        if($0 ~ /Statement sorts/)
               {
                sorts = $2
                asorts = sorts/div
               }
        if($0 ~ /Statement sort overflows/)
               {
                sorto = $2
asorto = sorto/div
               }
        if($0 ~ /Total sort time/)
               {
                sortt = $2
asortt = sortt/div
               }
        if($0 ~ /Buffer pool data logical reads/)
               {
                bpdlr = $2
                abpdlr = bpdlr/div
               }
        if($0 ~ /Buffer pool data physical reads/)
               {
                bpdpr = $2
                abpdpr = bpdpr/div
               }
        if($0 ~ /Buffer pool temporary data logical reads/)
               {
                bptdlr = $2
abptdlr = bptdlr/div
               }
        if($0 ~ /Buffer pool temporary data physical reads/)
               {
                bptdpr = $2
                abptdpr = bptdpr/div
               }
        if($0 ~ /Buffer pool index logical reads/)
               {
                bpilr = $2
                abpilr = bpilr/div
               }
        if($0 ~ /Buffer pool index physical reads/)
               {
                bpipr = $2
                abpipr = bpipr/div
               }
        if($0 ~ /Buffer pool temporary index logical reads/)
               {
                bptilr = $2
                abptilr = bptilr/div
               }
        if($0 ~ /Buffer pool temporary index physical reads/)
               {
                bptipr = $2
                abptipr = bptipr/div
               }
        if($0 ~ /Total execution time/)
               {
                xtime = $2
                atime = xtime/div
               }
        if($0 ~ /Total user cpu time/)
                ucpu = $2
        if($0 ~ /Total system cpu time/)
                scpu = $2
        if($0 ~ /Statement text/)
          {
                text = substr( $0, index($0, "=") +2 )
                printf( "%s~", numexec )
                printf( "%s~", numcomp )
                printf( "%s~", wrstprep )
                printf( "%s~", bestprep )
                printf( "%s~", irowsd )
                printf( "%s~", irowsi )
                printf( "%s~", rowsr )
                printf( "%s~", arowsr )
                printf( "%s~", irowsu )
                printf( "%s~", rowsw )
                printf( "%s~", arowsw )
                printf( "%s~", sorts )
                printf( "%s~", asorts )
                printf( "%s~", sorto )
                printf( "%s~", asorto )
                printf( "%s~", sortt )
                printf( "%s~", asortt )
                printf( "%s~", bpdlr )
                printf( "%s~", abpdlr )
                printf( "%s~", bpdpr )
                printf( "%s~", abpdpr )
                printf( "%s~", bptdlr )
                printf( "%s~", abptdlr )
                printf( "%s~", bptdpr )
                printf( "%s~", abptdpr )
                printf( "%s~", bpilr )
                printf( "%s~", abpilr )
                printf( "%s~", bpipr )
                printf( "%s~", abpipr )
                printf( "%s~", bptilr )
                printf( "%s~", abptilr )
                printf( "%s~", bptipr )
                printf( "%s~", abptipr )
                printf( "%s~", xtime )
                printf( "%s~", atime )
                printf( "%s~", ucpu )
                printf( "%s~", scpu )
                printf( "%s~", text )
                printf( "\n" )
                numexec = ""
                numcomp = ""
                wrstprep = ""
                bestprep = ""
                irowsd = ""
                irowsi = ""
                rowsr = ""
                arowsr = ""
                irowsu = ""
                rowsw = ""
                arowsw = ""
                sorts = ""
                asorts = ""
                sorto = ""
                asorto = ""
                sortt = ""
                asortt = ""
                bpdlr = ""
                abpdlr = ""
                bpdpr = ""
                abpdpr = ""
                bptdlr = ""
                abptdlr = ""
                bptdpr = ""
                abptdpr = ""
                bpilr = ""
                abpilr = ""
                bpipr = ""
                abpipr = ""
                bptilr = ""
                abptilr = ""
                bptipr = ""
                abptipr = ""
                xtime = ""
                atime = ""
                ucpu = ""
                scpu = ""
                text = ""
          }
}'  ${FILE} > ${FILE}_DYNSQL_analysis.txt

echo "Look for ${FILE}_DYNSQL_analysis.txt"

chmod 755 ${FILE}_DYNSQL_analysis.txt

exit