Tuesday, 22 July 2014

Shell script to rebuild an entire database - part 2.

This script should be placed in the same path as the script provided in "Shell script to rebuild an entire database - part 1".
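
As a hedged illustration of how the pieces fit together (the database name and page size below are examples only; the part 1 script writes the real values), rebuild.conf holds one key:value pair per line and this script receives the path for the new database as its single argument:

# rebuild.conf as generated by the part 1 script (values are illustrative):
DB:DAN_PROD
PAGESIZE:8192

# Invocation on the target server, passing the path where the database is created:
./rebuld_db_onTarget.ksh /db2/db1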

File contents

#!/bin/ksh
#
# ---------------------------------------------------------------------
# This script uses the rebuild.conf file to get the following parameters:
# - database name;
# - page size used to create the database;
#
# - Modifications:
#  - Danilo C. Pereira - creation.
#
# ---------------------------------------------------------------------

. $HOME/.profile


### --- Variables

# Path to create the new database.
tDATAPATH=$1

# Database name.
tDBNAME=$(cat rebuild.conf| grep -i DB | cut -f2 -d":")

# Used to create the database, setting a default value for the bufferpools.
tPAGESIZE=$(cat rebuild.conf| grep -i PAGESIZE | cut -f2 -d":")

# Used to check if any row was rejected in the loads.
tLoadCheck=

# Used to check the number of args.
tNUMARGS=$#

# Used to check how many tables need the set integrity command.
tNumtable=0

# Checking the number of arguments.
if [[ $tNUMARGS -ne 1 ]]; then
        echo "Usage: ./rebuld_db_onTarget.ksh <path to create the new database>"
 echo "example: ./rebuld_db_onTarget.ksh /db2/db1"
        exit -2
fi

# Creating the database:
db2 "create database $tDBNAME ON $tDATAPATH USING CODESET UTF-8 TERRITORY BR COLLATE USING UCA500R1_LEN_S1 PAGESIZE $tPAGESIZE"

# Running the db2look:
db2 -tvf newDb_db2look.sql | tee newDb_db2look.out

# Connecting to the database:
db2 connect to $tDBNAME

# Running the alter table statements to increase the size of the varchar/char fields.
db2 -tvf newDb_tChartoIncrease.sql | tee newDb_tChartoIncrease.out

# Running the loads:
db2 -tvf newDb_load.sql | tee newDb_load.out
db2 -tvf newDb_loadWithIdentity.sql | tee newDb_loadWithIdentity.out

# creating the set integrity commands.
db2 -x "select 'set integrity for ',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,40),'immediate checked;' from syscat.tables where status = 'C' order by tabschema, tabname" >> tSetIntegrity.sql

tNumtable=$(cat tSetIntegrity.sql | wc -l)
cat tSetIntegrity.sql | tr -s " " | cut -f4 -d" " | tr '.' ' ' >> tSetInt1.out
awk ' $5=numtable {  if ( NR < $5 ) { print $1 ".\"" $2 "\"," } if (NR == $5) { print $1 ".\"" $2 "\""  } } '  numtable=$tNumtable tSetInt1.out | tr '\n' ' ' >> tSetInt2.out
awk ' { print "set integrity for", $0 ,"immediate checked;" } ' tSetInt2.out > $tDBNAME.setIntegrity.sql
db2 -tvf $tDBNAME.setIntegrity.sql | tee $tDBNAME.setIntegrity.out

db2 terminate

# Checking if any rows have been rejected during the load.
tLoadCheck=$(cat newDb_load*.out | grep -i rejected | tr -s " " | cut -f6 -d" " | grep -v 0 | wc -l)
if [[ $tLoadCheck -ne 0 ]]; then
        echo -e "\nThere are rows rejected. \nNumber of failed loads: $tLoadCheck"
fi

# Checking if the load has failed.
tloadCheck=$(cat newDb_load*.out | grep -i SQL | grep -v SQL3107W)
if [[ -n $tloadCheck ]]; then
 echo -e "\nThe load has failed with the following error(s):"
 echo -e "\n$tloadCheck"
fi

# Checking if the alter statement to increase varchar/char has failed.

tNUMALTER=$(cat newDb_tChartoIncrease.out | grep -i alter | wc -l)
tNUMSQL=$(cat newDb_tChartoIncrease.out | grep -i "The SQL command completed successfully." | wc -l)

if [[ $tNUMALTER != $tNUMSQL ]]; then
 echo -e "\nPlease check the newDb_tChartoIncrease.out file. An alter statement failed."
fi

# Checking if the db2look sql has failed.

tERRORDB2LOOK=$(cat newDb_db2look.out | cut -f1 -d" " |  sort -u  | grep -i DB2)
if [[ -n $(echo $tERRORDB2LOOK | grep -v DB20000I) ]]; then
 
 tERRORDB2LOOK=$(echo $tERRORDB2LOOK | tr " " "\n" | grep -v DB20000I)
 echo -e "\nThe following errors were found in the db2look sql: $tERRORDB2LOOK"
fi

# Deleting the temporary files.
rm -f tSetIntegrity.sql
rm -f tSetInt1.out
rm -f tSetInt2.out

exit 0

Shell script to rebuild an entire database - part 1.

This should be run on the source database server. It creates a script called <db name>.rebuld_onTarget.ksh with the necessary commands to rebuild the database on the target server.

File contents

#!/bin/ksh
#
#---------------------------------------------------------------------------------------------
# Create a tar file with the db2look and export output needed to rebuild a database.
# Create a file called: <db name>.rebuild.ksh with the necessary commands to rebuild
# in the target server.
#
# Modification:
# Danilo Caetano Pereira:  Creation - June, 01 - 2009
#
#---------------------------------------------------------------------------------------------
. $HOME/sqllib/db2profile

##### --- Variables

# db name
tDBNAME=$1

# number of args.
tNUMARGS=$#

# the output path.
outPath=/db2/BD02/migração/$tDBNAME"_rebuild"

# current path.
tCurrentPath=$(pwd)

# Used to handle errors.
tERROR=0

# Function to handle errors.
getError() {
 echo -e "\n$tERROR"
 exit -2
}

if [[ $tNUMARGS -ne 1 ]]; then
        echo "Usage: ./rebuilddb.ksh <parameter 1>"
        echo "<parameter 1> - <db name>"
        exit -2
fi

touch $tCurrentPath/$tDBNAME.rebuild.log

echo -e "\nConnecting to the database.."
db2 connect to $tDBNAME >> $tCurrentPath/$tDBNAME.rebuild.log

if [[ $? -ne 0 ]]; then
 tERROR="Database is not connectable, Please check!"
 getError
fi

# Check if the output path exists; if not, create it.
if [[ ! -d $outPath ]]; then
        mkdir $outPath
        mkdir $outPath/lobs
        chmod 740 $outPath
fi

##### --- Running the utilities to create the db2look, export and load commands:

# running the db2look
echo -e "\nRunning the db2look..\n"
##db2look -d $tDBNAME -a -e -m -l -x -f -xs -o $outPath/$tDBNAME.db2look.sql
db2look -d $tDBNAME -a -e -l -x -f -xs -o $outPath/newDb_db2look.sql

# Creating the export command.
db2 -x "select 'export to ' || rtrim(tabschema)||'.'||rtrim(tabname) ||  '.ixf of ixf lobs to lobs MODIFIED BY lobsinfile MESSAGES ' || rtrim(tabname) || '.msg select * from ' || rtrim(tabschema)||'.'||rtrim(tabname) ||';'from syscat.tables where type = 'T' and tabschema not like 'SYS%' order by tabschema, tabname" >> $outPath/newDb_export.sql

# creating the load commands.
db2 -x "select 'load from ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) || '.ixf of ixf lobs from lobs SAVECOUNT 5000 MESSAGES l' || rtrim(a.tabname) || '.msg insert into ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) ||';'from syscat.tables a where  not exists (select * from syscat.columns b where b.generated='A' and a.tabname=b.tabname and  a.tabschema=b.tabschema)  and a.tabschema not like 'SYS%' and a.type='T'" >> $outPath/newDb_load.sql

# creating the load commands for tables with identity columns generated always.
db2 -x "select 'load from ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) || '.ixf of ixf lobs from lobs modified by identityoverride SAVECOUNT 5000 MESSAGES l' || rtrim(a.tabname) || '.msg insert into ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) ||';'from syscat.tables a where  exists (select * from syscat.columns b where b.generated='A' and a.tabname=b.tabname and  a.tabschema=b.tabschema)  and a.tabschema not like 'SYS%' and a.type='T'" >> $outPath/newDb_loadWithIdentity.sql

##### --- Search for all tables of char and varchar types to increase them.
# Create the alter table commands increasing the varchar and char fields' length by 30%.

db2 -x "select 'alter table ' || rtrim(tabschema)||'.'||rtrim(tabname) || ' alter column ' ||rtrim(colname) ||' set data type ' || rtrim(typename) || '(' || trim(both '0' from char(int(length * 1.3))) || ');' from syscat.columns where typename like '%CHAR' and tabname exists (select tabname from syscat.tables where type='T' and tabschema not like 'SYS%' and tabschema not like 'DB2INST%' )" | tr -s " " >> $outPath/newDb_tChartoIncrease.sql


echo -e "\nRunning the export.."
cd $outPath
db2 -tvf newDb_export.sql | tee newDb_export.out >> $tCurrentPath/$tDBNAME.rebuild.log
cd $tCurrentPath

# Getting the page size used to create the default bufferpool.
tPAGESIZE=$(db2 -x "select pagesize from syscat.bufferpools where BPNAME='IBMDEFAULTBP'")

# Create the config file:
echo -e "DB:$tDBNAME" > $outPath/rebuild.conf
echo -e "PAGESIZE:$tPAGESIZE" >> $outPath/rebuild.conf

cp rebuild_db.ksh $outPath/rebuld_db_onTarget.ksh
chmod 740 $outPath/rebuld_db_onTarget.ksh

echo -e "\nWarning.."
# Count the number of exports done.
tExportCheck=$(cat $outPath/newDb_export.out | grep export | wc -l)
echo -e "\n-- The number of exports was: $tExportCheck"

# Count the number of SQL3107W messages.
tExportCheck=$(cat $outPath/newDb_export.out | grep SQL3107W | wc -l)
echo -e "-- The number of SQL3107W messages were: $tExportCheck"

# List all SQL messages other than SQL3107W
tExportCheck=$(cat $outPath/newDb_export.out | grep SQL | grep -v SQL3107W)
if [[ -n $tExportCheck ]]; then
    echo "-- The following SQL were found during the export:"
    echo "$tExportCheck" | tr '.' '\n'
fi

# Checking if there is any table with a generated column that is not identity always/default.
db2 "select rtrim(tabschema), rtrim(tabname), rtrim(identity) from syscat.columns where generated='A'" | grep -i A | tr -s " " >> tINDENTITY.tmp

db2 terminate > /dev/null

while read line
do
    tIDENTITY=$(echo $line | cut -f3 -d" ")
    if [[ $tIDENTITY = "N" ]]; then
 tTabName=$(echo $line | cut -f2 -d" ")
 tSchema=$(echo $line | cut -f1 -d" ")
 echo -e "     * $tSchema.$tTabName" >> tINDENTITYasN.tmp
    fi

done < tINDENTITY.tmp

if [[ -s tINDENTITYasN.tmp ]]; then
   echo -e "-- There is one or more columns created as generated and it's not Identity Always/Default. Please rerun the export for tables:"

   cat tINDENTITYasN.tmp
fi

# Checking for temporary tablespaces to warn about a problem with the "AUTORESIZE NO" parameter.
tTemporaryTbspace=$(cat $outPath/newDb_db2look.sql | grep -i "AUTORESIZE NO") 

# Check for create bufferpool statements that use the "NOT EXTENDED STORAGE" option, which will cause the sql to fail.
tCreateBuffer=$(cat $outPath/newDb_db2look.sql | grep -i "NOT EXTENDED STORAGE")

if [[ -n $tTemporaryTbspace ]] || [[ -n $tCreateBuffer ]]; then
   echo -e "\n-- Please check the newDb_db2look.sql file."
fi

if [[ -n $tTemporaryTbspace ]]; then
    echo -e "     * The 'create user temporary tablespace' statement has the 'AUTORESIZE NO' option that will cause the sql fail."
fi

if [[ -n $tCreateBuffer ]]; then
    echo -e "     * The following sqls will fail due the 'NOT EXTENDED STORAGE' option:"
    echo -e "       $tCreateBuffer" | tr ";" "\n"
fi

echo -e "\nCreating the tar file.."
tar -cvf $tDBNAME"_rebuild".tar  $outPath > /dev/null
if [[ $? -ne 0 ]]; then
        echo -e "\nThe tar command failed, The $outPath directory will not be deleted."
else
        #gzip $outPath.tar
        rm -rf $outPath
fi

echo -e "\nResult.."
echo -e "\n-- $tDBNAME"_rebuild".tar was created with: db2look, export outputs\n"

# Deleting the temporary file/directory.
rm -f tSetIntegrity.sql
rm -f tINDENTITY.tmp
rm -f tINDENTITYasN.tmp

exit 0

Shell script to rebuild an entire database.

The script creates a tar file with the export output and the necessary load commands.

Example:

Our environment:
db name: DAN_PROD

Run the rebuilddb.ksh script on the source server:

./rebuilddb.ksh dan_prod


A tar file will be created:

The file name has the following syntax: <db name>_rebuild.tar.gz

In this case the file name will be: dan_prod_rebuild.tar.gz



Move this file to the target server, uncompress it, and run the following script:

script name syntax: <db name>.rebuld_onTarget.ksh

In this case the script name will be: dan_prod.rebuld_onTarget.ksh


Run it as follows:


./dan_prod.rebuld_onTarget.ksh dan_prod




Note: When migrating a database to a case-insensitive collation (for example, from v8 or v9.1 to v9.5), accented letters will take double the length. This script increases the length of all char and varchar columns by 30%.
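
As an illustration of the generated statements (the table and column names here are hypothetical), the part 1 script would emit something like this for a VARCHAR(100) column:

# Illustrative output of newDb_tChartoIncrease.sql for a hypothetical column:
db2 "alter table PAYROLL.EMP alter column NAME set data type VARCHAR(130)"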

Monday, 21 July 2014

How to read db2diag.log file

When an error occurs, the db2diag.log file is updated with information about the error.
The following example shows the header information for a sample db2diag.log file entry.

1997-03-16-11.53.18.001160 (1)   Instance:payroll (2)  Node:000 (3)
PID:44829(db2agent (SAMPLE)) (4)    Appid:*LOCAL.payroll.970317140834 (5)
lock_manager (6)         sqlplrq (7)   Probe:111 (8)  Database:SAMPLE (9)
DIA9999E (10) An internal return code occurred. Report the following:
"0xFFFFE10E". (11)
Legend:
(1)
A timestamp for the message.
(2)
The name of the instance generating the message.
(3)
For DB2 Enterprise - Extended Edition systems with a db2nodes.cfg file, the node generating the message. (If the db2nodes.cfg file is not used, the value is "000".)
(4)
Identification of the process generating the message. In this example, the message came from the process identified as 44829. The name of this process is db2agent and it is connected to the database named SAMPLE. Note: If the application is operating in a DUOW environment, the ID shown is the DUOW correlation token.
(5)
Identification of the application for which the process is working. In this example, the process generating the message is working on behalf of an application with the ID *LOCAL.payroll.970317140834. To identify more about a particular application ID, either:
  • Use the db2 list applications command to view a list of application IDs. From this list, you can determine information about the client experiencing the error, such as its node name and its TCP/IP address.
  • Use the db2 get snapshot for application command to view a list of application IDs.
(6)
The DB2 component that is writing the message. For messages written by user applications using the db2AdminMsgWrite API, the component will read "User Application".
(7)
The name of the function that is providing the message. This function operates within the DB2 subcomponent that is writing the message. For messages written by user applications using the db2AdminMsgWrite API, the function will read "User Function". To find out more about the type of activity performed by a function, look at the fourth letter of its name. In this example, the letter "p" in the function "sqlplrq" indicates a data protection problem. (Logs could be damaged, for example.)
The following list shows some of the letters used in the fourth position of the function name, and the type of activity they identify:
b
Buffer pools
c
Communication between clients and servers
d
Data management
e
Engine processes
o
Operating system calls (such as opening and closing files)
p
Data protection (such as locking and logging)
r
Relational database services
s
Sorting
x
Indexing
(8)
Identification of the internal error that was reported.
(9)
The database on which the error occurred.
(10)
Diagnostic message indicating that an internal error occurred.
(11)
Hexadecimal representation of an internal return code. 

Notes:

  • Check the end of the file for the most recent data, because new information is always appended to the bottom of the file.
  • Entries always have a timestamp. If you know when an error occurred, look for the first entry in the file marked with this time.
  • The db2diag.log file grows continuously. When it gets too large, back it up and then erase the file. A new db2diag.log file is generated automatically the next time it is required by the system.
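
Because entries are appended to the bottom, a quick hedged way to inspect the latest activity from the shell is to tail the file and grep around the timestamp of interest (the path below assumes the default diagnostic directory under the instance owner's home; adjust it to your DIAGPATH):

# Show the most recent entries appended to db2diag.log (default location assumed).
tail -100 ~/sqllib/db2dump/db2diag.log

# Narrow down to entries written around the time of the error in the example above.
grep "1997-03-16-11.53" ~/sqllib/db2dump/db2diag.log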

Sunday, 20 July 2014

How to rank rows on DB2

use the functions rank(), dense_rank(), rownumber()

If you have a TABLEX with the following content:


COD
ab
dd
ac
dd
ae
fa

To rank these data, perform the command below:
db2 "SELECT RANK() OVER (ORDER BY COD) AS RANK,COD FROM TABLEX"

Results:

RANK      COD
---------    ----------
1              ab
2              ac
3              ae
4              dd
4              dd
6              fa

To dense rank, perform the command below:
db2 "SELECT DENSE_RANK() OVER (ORDER BY COD) AS RANK,COD FROM TABLEX"

RANK      COD
---------    ----------
1              ab
2              ac
3              ae
4              dd
4              dd
5              fa

To put row number in your data output, perform the command below:

db2 "SELECT ROWNUMBER() OVER (ORDER BY COD) AS RANK,COD FROM TABLEX"

RANK      COD
---------    ----------
1              ab
2              ac
3              ae
4              dd
5              dd
6              fa

Saturday, 19 July 2014

How to put a database in rollforward pending state

The db2rfpen tool is a utility to switch on the database rollforward pending state.

  In a non-HADR environment, this tool should be used under the  advisement of DB2 service.                                            
  In an HADR environment, this tool can be used to reset the database role to STANDARD.                                                     

  SYNTAX: db2rfpen  on < database_alias | -path log_file_header_path |   -file log_file_header >

Example:
Database: sample
db2rfpen on sample

Friday, 18 July 2014

What are the DB2 Isolation Levels ?

A brief description of all four isolation levels used by DB2

1. UR - Uncommitted Read


– For read only queries, no record locking
– Will see uncommitted changes by other transactions
– Good for accessing read only tables
– Statements in UR which modify data are upgraded internally to CS

2. CS - Cursor Stability


– Default isolation level
– Locks and unlocks each row, 1 at a time (never has 2 locks at once)
– Guaranteed to only return data which was committed at the time of the read

3. RS - Read Stability


– Will keep all qualifying rows locked until the transaction is completed
– Does release locks on rows that do not satisfy query predicates
– Use for result set stability or when future actions on returned rows may be taken

4. RR - Repeatable Read


Locks the table within a unit of work. An application can retrieve and operate on rows in the table as many times as needed. However, the entire table is locked, not just the rows that are retrieved. Until the unit of work completes, no other application can update, delete, or insert a row that would affect the table.

Thursday, 17 July 2014

How to perform queries using isolation levels


The isolation level can be specified in several different ways.


The following heuristics are used in determining which isolation level will be used in compiling an SQL statement:


Static SQL:


  •     If an isolation clause is specified in the statement, then the value of that clause is used.
  •     If no isolation clause is specified in the statement, then the isolation level used is the one specified for the package at the time when the package was bound to the database.

Dynamic SQL:


  •     If an isolation clause is specified in the statement, then the value of that clause is used.
  •     If no isolation clause is specified in the statement, and a SET CURRENT ISOLATION statement has been issued within the current session, then the value of the CURRENT ISOLATION special register is used.
  •     If no isolation clause is specified in the statement, and no SET CURRENT ISOLATION statement has been issued within the current session, then the isolation level used is the one specified for the package at the time when the package was bound to the database.

Procedure: To specify the isolation level:


Use the WITH clause. The statement-level isolation level overrides the isolation level specified for the package in which the statement appears.

You can specify an isolation level for the following SQL statements:

  •     SELECT
  •     SELECT INTO
  •     Searched DELETE
  •     INSERT
  •     Searched UPDATE
  •     DECLARE CURSOR

The following conditions apply to isolation levels specified for statements:

  •     The WITH clause cannot be used on subqueries
  •     The WITH UR option applies only to read-only operations. In other cases, the statement is automatically changed from UR to CS.

Repeatable Read = RR

Read Stability = RS

Cursor Stability = CS

Uncommitted Read = UR



Example:

db2 select col1, col2 from tablex with ur
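
For dynamic SQL, the CURRENT ISOLATION special register mentioned above can also be set for the whole session; a minimal sketch (RESET returns to the isolation level the package was bound with):

db2 "SET CURRENT ISOLATION = RS"
# The next statement now runs with Read Stability:
db2 "select col1, col2 from tablex"
# Back to the package isolation level:
db2 "SET CURRENT ISOLATION = RESET"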

Wednesday, 16 July 2014

How to move data between tables with LOAD FROM CURSOR

Using load from cursor for data movement

See the example below for how to use db2 load from cursor for data movement.


Table EMP.TABLE_1 has 4 columns:

    COD INT
    NAME  CHAR(10)
    PROJECT INT
    HIRED DATE


Table EMP.TABLE_2 has 4 columns:
  
    COD INT
    NAME  CHAR(10)
    HIRED DATE
    PROJECT INT
   

Executing the following commands will load all the data from EMP.TABLE_1 into EMP.TABLE_2:

db2 declare cursor_x cursor for select cod, name, hired, project from emp.table_1
db2 load from cursor_x of cursor insert into emp.table_2 nonrecoverable
After the load, some tables can be in check pending state, so apply the command below to generate a script that sets all pending tables back to normal state:
db2 -x "select 'set integrity for ',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,40),'immediate checked;' from \
syscat.tables where status = 'C' order by tabschema, tabname"

If EMP.TABLE_1 resides in a database different from the database EMP.TABLE_2 is in, the DATABASE, USER, and USING options of the DECLARE CURSOR command can be used to perform the load. For example, if EMP.TABLE_1 resides in database DB1, and the user ID and password for DB1 are user1 and pwd1 respectively, executing the following commands will load all the data from EMP.TABLE_1 into EMP.TABLE_2:

db2 declare cursor_x cursor database DB1 user user1 using pwd1 for select cod, name, hired, project from emp.table_1
db2 load from cursor_x of cursor insert into emp.table_2 nonrecoverable
db2 -x "select 'set integrity for ',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,40),'immediate checked;' from \
syscat.tables where status = 'C' order by tabschema, tabname"
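
Following the pattern used elsewhere on this blog, the generated SET INTEGRITY statements can be redirected to a file and run in one pass (the file name is arbitrary):

db2 -x "select 'set integrity for ',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,40),'immediate checked;' from \
syscat.tables where status = 'C' order by tabschema, tabname" > set_integrity.sql

db2 -tvf set_integrity.sql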

Tuesday, 15 July 2014

How to monitor static SQL Statements on DB2 9.7

MON_GET_PKG_CACHE_STMT
New relational interface to monitor dynamic and static SQL statements in package cache
           
DB2 Version 9.7 provides a new relational interface, MON_GET_PKG_CACHE_STMT, to monitor dynamic and static SQL statements in the database package cache. This new relational interface reports information for both static and dynamic SQL statements, unlike the dynamic SQL snapshot, which only reports information for dynamic statements.

For each dynamic and static SQL statement, the new relational interface returns a rich set of metrics, aggregated across executions of the statement. The metrics can help you to quickly determine the reasons for poor performance of an SQL statement, to compare the behavior and performance of one SQL statement to another, and to easily identify the most expensive SQL statements along any number of dimensions (for example, the SQL statements consuming the most CPU resources, and statements with the longest lock wait times).

EXAMPLES:
List all the dynamic SQL statements from the database package cache ordered by the average CPU time.
db2 SELECT MEMBER,
      SECTION_TYPE , 
      TOTAL_CPU_TIME/NUM_EXEC_WITH_METRICS as  
      AVG_CPU_TIME,EXECUTABLE_ID
      FROM TABLE(MON_GET_PKG_CACHE_STMT ( 'D', NULL, NULL, -2)) as T 
        WHERE T.NUM_EXEC_WITH_METRICS <> 0 ORDER BY AVG_CPU_TIME
The following is an example of output from this query.
MEMBER SECTION_TYPE AVG_CPU_TIME         EXECUTABLE_ID                                                      
------ ------------ -------------------- -------------------------------------------------------------------
     0 D                             754 x'01000000000000007A0000000000000000000000020020081126171554951791'
     0 D                            2964 x'0100000000000000790000000000000000000000020020081126171533551120'
     0 D                            5664 x'01000000000000007C0000000000000000000000020020081126171720728997'
     0 D                            5723 x'01000000000000007B0000000000000000000000020020081126171657272914'
     0 D                            9762 x'01000000000000007D0000000000000000000000020020081126172409987719'

5 record(s) selected.
Note: It takes a longer time period to build the compilation environment and to transfer statement text (which can be as large as 2 MB) between members. To improve performance when retrieving a list of all the statements from the package cache, do not select the STMT_TEXT and the COMP_ENV_DESC columns.
With the above output, we can use the executable_id to find out the details about the most expensive statement (in terms of the average CPU time):
db2 SELECT STMT_TEXT FROM TABLE(MON_GET_PKG_CACHE_STMT
      (null, x'01000000000000007D0000000000000000000000020020081126172409987719', null, -2))

STMT_TEXT
-------------------------------------------------------------------------------------------
SELECT * FROM EMPLOYEE
As another example, assume a user named Alex has a connection associated to workload A which has the COLLECT ACTIVITY METRICS set. Another user, Brent, is associated to workload B that has the COLLECT ACTIVITY METRICS set to NONE. In addition, the database mon_act_metrics configuration parameter is set to NONE. When Brent executes the query:
SELECT count(*) FROM syscat.tables
all metrics are returned as 0 and the value of num_exec_with_metrics is also 0. Then Alex executes the same statement afterwards, but the metrics are collected this time for the execution of the statement and num_exec_with_metrics increments. So, after Brent and Alex execute that statement, the result of this query:
SELECT num_executions, num_exec_with_metrics, SUBSTR(stmt_text,1,50) AS stmt_text 
    FROM TABLE (MON_GET_PKG_CACHE_STMT('d', null, null, -1)) AS tf 
    WHERE stmt_text LIKE 'SELECT count%'
shows that the SELECT statement ran twice and one of the execution times had the activity metrics collected.
NUM_EXECUTIONS NUM_EXEC_WITH_METRICS STMT_TEXT           
-------------- --------------------- --------------------
             2                     1 SELECT count(*) FROM syscat.tables

  1 record(s) selected.

Usage notes

The MON_GET_PKG_CACHE_STMT table function returns a point-in-time view of both static and dynamic SQL statements in the database package cache. This allows you to examine the aggregated metrics for a particular SQL statement, allowing you to quickly determine the reasons for poor query performance. The metrics returned are aggregates of the metrics gathered during each execution of the statement.

Monday, 14 July 2014

How to list the DB2 products and features installed on your system

Find where DB2 products are installed on your system and list the DB2 product level with the db2ls command.
Perform the following commands to check the DB2 products and features installed on your system:

Examples:


To query what DB2 database features are installed to a particular path, issue:
db2ls -q -b /opt/ibm/db2/V9.5
Output:
Install Path : /opt/ibm/db2/V9.5

Feature Response File ID           Level     Fix Pack   Feature Description
----------------------------------------------------------------------------------------------------
BASE_CLIENT                        9.5.0.3   3          Base client support
JAVA_SUPPORT                       9.5.0.3   3          Java support
SQL_PROCEDURES                     9.5.0.3   3          SQL procedures
BASE_DB2_ENGINE                    9.5.0.3   3          Base server support
JDK                                9.5.0.3   3          IBM Software Development Kit (SDK) for Java(TM)
CONNECT_SUPPORT                    9.5.0.3   3          Connect support
COMMUNICATION_SUPPORT_TCPIP        9.5.0.3   3          Communication support - TCP/IP
REPL_CLIENT                        9.5.0.3   3          Replication tools
CONTROL_CENTER                     9.5.0.3   3          Control Center
DB2_DATA_SOURCE_SUPPORT            9.5.0.3   3          DB2 data source support
LDAP_EXPLOITATION                  9.5.0.3   3          DB2 LDAP support
INSTANCE_SETUP_SUPPORT             9.5.0.3   3          DB2 Instance Setup wizard
SPATIAL_EXTENDER_CLIENT_SUPPORT    9.5.0.3   3          Spatial Extender client
XML_EXTENDER                       9.5.0.3   3          XML Extender
APPLICATION_DEVELOPMENT_TOOLS      9.5.0.3   3          Base application development tools
FIRST_STEPS                        9.5.0.3   3          First Steps
DB2_SAMPLE_DATABASE                9.5.0.3   3          Sample database source
TEXT_SEARCH                        9.5.0.3   3          DB2 Text Search
INFORMIX_DATA_SOURCE_SUPPORT       9.5.0.3   3          Informix data source support


To check whether a specific DB2 database feature is installed or not, issue:
db2ls -q -b /opt/ibm/db2/V9.5 -f <feature>
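
For example, to check for one of the features listed in the output above (TEXT_SEARCH is just an illustration), you could issue:

# Returns the feature line only if DB2 Text Search is installed in that copy:
db2ls -q -b /opt/ibm/db2/V9.5 -f TEXT_SEARCH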

Sunday, 13 July 2014

How to list IP's from connected applications on your database

Statement to check the applications IP numbers
To do this, perform the following command:

db2 get snapshot for applications on <db_name> | grep -E "handle|Inbound"

It will show you all applications (local and remote IP addresses) connected to your database.

Example:
=> db2 get snapshot for applications on DB_SAMPLE | grep -E "handle|Inbound"

Application handle                         = 344
Inbound communication address              = 123.234.100.210 44274
Application handle                         = 286
Inbound communication address              = 123.234.100.210 44272
Application handle                         = 295
Inbound communication address              = 123.234.100.210 34848
Application handle                         = 375
Inbound communication address              = 123.234.100.210 34846
Application handle                         = 212
Inbound communication address              = *LOCAL.app01
Application handle                         = 8
Inbound communication address              = *LOCAL.app01
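
If you only care about a specific client, a hedged variation (assuming a grep that supports the -B option) is to filter for that IP address and keep the handle line printed just before it; the address below is the one from the example output:

db2 get snapshot for applications on DB_SAMPLE | grep -E "handle|Inbound" | grep -B1 "123.234.100.210"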

Saturday, 12 July 2014

How to rebuild an entire database

When you cannot recover your database for any reason, you need to rebuild it.

To rebuild an entire database, you need to perform the following steps:



Extract DDL of a database using db2look

  • DB2 V8
db2look -d <dbname> -a -e -m -l -x -f -o <output>
  • DB2 V9
db2look -d <dbname> -a -e -m -l -x -f -xs -o <output>

Exporting all tables of a database

  • DB2 V8
db2 -x "select 'export to ' || rtrim(a.tabschema)||'.'|| rtrim(tabname) || '.ixf of ixf MODIFIED BY lobsinfile modify by identityignore MESSAGES ' || rtrim(tabname) \
 || '.msg select * from ' || rtrim(tabschema)||'.'||rtrim(tabname) ||';'from syscat.tables a where type = 'T' \ 
and tabschema not like 'SYS%' order by tabschema, tabname"
  • DB2 v9
db2 -x "select 'export to ' || rtrim(a.tabschema)||'.'|| rtrim(tabname) || '.ixf of ixf lobs to <pathlobs> MODIFIED BY lobsinfile MESSAGES ' || rtrim(tabname) \
 || '.msg select * from ' || rtrim(tabschema)||'.'||rtrim(tabname) ||';'from syscat.tables a where type = 'T' \ 
and tabschema not like 'SYS%' order by tabschema, tabname"



Loading into all tables

  • DB2 v8

    For tables without identity always columns
db2 -x "select 'load from ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) || '.ixf of ixf lobs from <pathlobs> \
SAVECOUNT 5000 MESSAGES l' || rtrim(a.tabname) || '.msg insert into ' || \
rtrim(a.tabschema)||'.'||rtrim(a.tabname) ||' nonrecoverable;' from syscat.tables a where not exists \
(select * from syscat.columns b where b.generated='A' and a.tabname=b.tabname \
and  a.tabschema=b.tabschema)  and a.tabschema not like 'SYS%' and a.type='T'"

For tables with identity always columns
db2 -x "select 'load from ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) || '.ixf of ixf lobs from <pathlobs> \
modified by identityoverride SAVECOUNT 5000 MESSAGES l' \
|| rtrim(a.tabname) || '.msg insert into ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) ||' nonrecoverable;'\
from syscat.tables a where  exists (select * from syscat.columns b where b.generated='A' \
and a.tabname=b.tabname and  a.tabschema=b.tabschema)  and a.tabschema not like 'SYS%' and a.type='T'"
  • DB2 v9
For tables without identity always columns
db2 -x "select 'load from ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) || '.ixf of ixf lobs from <pathlobs> \
SAVECOUNT 5000 MESSAGES l' || rtrim(a.tabname) || '.msg insert into ' || \
rtrim(a.tabschema)||'.'||rtrim(a.tabname) ||' nonrecoverable;' from syscat.tables a where not exists \
(select * from syscat.columns b where b.generated='A' and a.tabname=b.tabname \
and  a.tabschema=b.tabschema)  and a.tabschema not like 'SYS%' and a.type='T'"

For tables with identity always columns
db2 -x "select 'load from ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) || '.ixf of ixf lobs from <pathlobs> \
modified by identityoverride SAVECOUNT 5000 MESSAGES l' \
|| rtrim(a.tabname) || '.msg insert into ' || rtrim(a.tabschema)||'.'||rtrim(a.tabname) ||' nonrecoverable;'\
from syscat.tables a where  exists (select * from syscat.columns b where b.generated='A' \
and a.tabname=b.tabname and  a.tabschema=b.tabschema)  and a.tabschema not like 'SYS%' and a.type='T'"

Fix check pending

db2 -x "select 'set integrity for ',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,40),'immediate checked;' from \
syscat.tables where status = 'C' order by tabschema, tabname"

Friday, 11 July 2014

How to reduce Tablespace High Water Mark

Only available on DB2 9.7 and later.
You must have an automatic storage table space that was created with DB2 Version 9.7 or later. Reclaimable storage is not available in table spaces created with earlier versions of the DB2 product. You can see which table spaces in a database support reclaimable storage using the MON_GET_TABLESPACE table function.

Lowering the high water mark and reducing the container size of a DMS table space is a two-step operation. First, lower the high water mark through extent movement with the following statement:
   ALTER TABLESPACE TS LOWER HIGH WATER MARK
Next, reduce the size of the table space:
  ALTER TABLESPACE TS REDUCE

Use the MON_GET_TABLESPACE function and check the reclaimable_space_enabled column to see whether you can reclaim space in your table space.
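
A minimal sketch of that check, assuming the MON_GET_TABLESPACE(tbsp_name, member) signature, where NULL and -2 mean all table spaces on all members:

db2 "SELECT varchar(tbsp_name,20) AS tbsp_name, reclaimable_space_enabled FROM TABLE(MON_GET_TABLESPACE(NULL,-2)) AS t"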

Thursday, 10 July 2014

How to reset snapshot monitors on DB2

To reset the snapshot monitors on DB2 perform the commands below:


db2 attach to <instance_name>

db2 reset monitor for database <db_name>

db2 reset monitor all 

Wednesday, 9 July 2014

How to restore a database compressed by DB2 in other system

COMPRLIB option
If you are restoring a database from V8 to V9, for example, and the backup is compressed, you have to restore the database with the COMPRLIB option; see the example below:

Database: SAMPLE

The database resides on DB2 V8, and I want to restore it on another system with DB2 V9. First, you have to perform an offline backup:
db2 backup db sample to /data/db2 compress

On system with DB2 V9, perform the restore with comprlib option:

db2 restore db sample from /datav9/db2 taken at 20090210030302 
to /home/dinst2db into sample COMPRLIB libdb2compr.a WITHOUT ROLLING FORWARD


Tuesday, 8 July 2014

How to know what tables are in REORG PENDING

from SYSIBMADM.ADMINTABINFO where REORG_PENDING = 'Y'



Perform the command below to find out which tables are in REORG PENDING:


db2 " select TABSCHEMA, TABNAME from SYSIBMADM.ADMINTABINFO where REORG_PENDING = 'Y'"

If you want to generate a script to reorg all tables in reorg pending, execute the statement below:


db2 -x "select 'reorg table', substr(rtrim(TABSCHEMA)||'.'||rtrim(TABNAME),1,20), ';' \
from SYSIBMADM.ADMINTABINFO where REORG_PENDING = 'Y'" 
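
As with the other generated scripts on this blog, you can redirect the output to a file and run it with -tvf (the file name is arbitrary):

db2 -x "select 'reorg table', substr(rtrim(TABSCHEMA)||'.'||rtrim(TABNAME),1,20), ';' \
from SYSIBMADM.ADMINTABINFO where REORG_PENDING = 'Y'" > reorg_pending.out

db2 -tvf reorg_pending.out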

Monday, 7 July 2014

Insert and Delete Errors - SQL0668N Operation not allowed for reason code 7

What to do when you cannot insert or delete data in some tables

A - Errors

db2inst1@db2:~/scripts> db2 "delete from tab1 where col1=1"
DB21034E  The command was processed as an SQL statement because it was not a
valid Command Line Processor command.  During SQL processing it returned:
SQL0668N  Operation not allowed for reason code "7" on table
"DB2INST1.tab1".  SQLSTATE=57016

B - Investigation

db2 " ? SQL0668N "

....

 7 The table is in the reorg pending state. This can occur after
an ALTER TABLE statement containing a REORG-recommended
operation.

....



C - Resolution

      Reorganize the table using the REORG TABLE command (note that INPLACE REORG TABLE is not allowed for a table that is in the reorg pending state).
db2 "reorg table DB2INST1.tab1"

D - POSSIBLE CAUSES

  • When columns are dropped

Sunday, 6 July 2014

When to REORG a table on DB2

Do you know when you have to reorg your table ?

Consider the following factors, which might indicate that you should reorganize a table:


  • A high volume of insert, update, and delete activity on tables accessed by queries. If many rows are inserted, there may not be enough free space to keep them in the clustered sequence. If many rows are deleted, the table will still have the space allocated and a REORG can free the unnecessary space.
  • Significant changes in the performance of queries that use an index with a high cluster ratio. Some applications access groups of rows using the clustered index and may not perform well if the table becomes unclustered.
  • Executing RUNSTATS to refresh statistical information does not improve performance.
  • In some cases, a RUNSTATS Utility can collect the current statistics and resolve performance problems. If a table has become unclustered or if it contains a large amount of free space, the REORG Utility may be needed to improve access efficiency.
  • The REORGCHK command indicates a need to reorganize your table.
If Row compression is being implemented for a table, the REORG Utility can be used to build the compression dictionary and compress the data rows.
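
For the row compression case, a classic (offline) reorg with the RESETDICTIONARY option builds (or rebuilds) the compression dictionary; a minimal sketch with an illustrative table name:

db2 "reorg table db2inst1.project resetdictionary"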

Saturday, 5 July 2014

When to use CLEANUP ONLY or CLEANUP ONLY PAGES on REORG Utility

Do you know when to use the option CLEANUP ONLY or CLEANUP ONLY PAGES on REORG Utility ?

The results of the REORGCHK calculations show what type of table or index reorganization might be needed. The goal is to resolve any out of bounds indicators using the least system resources.

If any of the table formulas are out of bounds (that is, F1, F2 and F3), then a full table and index reorganization would be required. This process consumes the most resources but will resolve all possible out of bounds conditions. The reorganized table should be stored in a more efficient manner.
If the results of the calculation for F4 exceed the bounds set by the formula for a Cluster Index then the table and indexes should be reorganized. It is not uncommon for several non-clustered indexes to be flagged for the F4 calculation. These can be ignored.

If the results of the calculations for F1, F2, F3 and F4 do not exceed the bounds set by the formula and the results of the calculations for F5 or F6 do exceed the bounds set, then index reorganization is recommended. The reorganized indexes should be stored in a more efficient manner.
If only the results of the calculation for F7 exceeds the bounds set, but the results of F1, F2, F3, F4, F5 and F6 are within the set bounds, then it is recommended that a cleanup of the indexes be done using the CLEANUP ONLY option of reorg indexes.
If the only calculation result to exceed the set bounds is that of F8, it is recommended that a cleanup of the pseudo empty pages of the indexes be done using the CLEANUP ONLY PAGES option of reorg indexes.

Example:

db2 reorgchk current statistics on table db2inst1.project

Table statistics:

F1: 100 * OVERFLOW / CARD < 5
F2: 100 * (Effective Space Utilization of Data Pages) > 70
F3: 100 * (Required Pages / Total Pages) > 80

SCHEMA.NAME                     CARD     OV     NP     FP ACTBLK    TSIZE  F1  F2  F3 REORG
----------------------------------------------------------------------------------------
Table: DB2INST1.PROJECT
                                  20      0      1      1      -     1340   0   - 100 ---
----------------------------------------------------------------------------------------

Index statistics:

F4: CLUSTERRATIO or normalized CLUSTERFACTOR > 80
F5: 100 * (Space used on leaf pages / Space available on non-empty leaf pages) > MIN(50, (100 - PCTFREE))
F6: (100 - PCTFREE) * (Amount of space available in an index with one less level / Amount of space required for all keys) < 100
F7: 100 * (Number of pseudo-deleted RIDs / Total number of RIDs) < 20
F8: 100 * (Number of pseudo-empty leaf pages / Total number of leaf pages) < 20

SCHEMA.NAME                 INDCARD  LEAF ELEAF LVLS  NDEL    KEYS LEAF_RECSIZE NLEAF_RECSIZE LEAF_PAGE_OVERHEAD NLEAF_PAGE_OVERHEAD  F4  F5  F6  F7  F8 REORG 
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Table: DB2INST1.PROJECT
Index: DB2INST1.PK_PROJECT
                                 20     1     0    1     0      20            6             6               1174                1174 100   -   -   0   0 -----
Index: DB2INST1.XPROJ2
                                 20     1     0    1     0      17            6             6               1174                1174 100   -   -   0   0 ---*-
--------------------------------------------------------------------------------------------------------------------------------------------------------------

CLUSTERRATIO or normalized CLUSTERFACTOR (F4) will indicate REORG is necessary
for indexes that are not in the same sequence as the base table. When multiple
indexes are defined on a table, one or more indexes may be flagged as needing
REORG.  Specify the most important index for REORG sequencing.

Tables defined using the ORGANIZE BY clause and the corresponding dimension
indexes have a '*' suffix to their names. The cardinality of a dimension index
is equal to the Active blocks statistic of the table.
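
When only F7 or F8 are flagged, the corresponding index cleanup reorgs look like the sketch below (table name taken from the example above; ALLOW WRITE ACCESS is optional):

# F7 out of bounds: clean up pseudo-deleted keys and pseudo-empty pages.
db2 "reorg indexes all for table db2inst1.project allow write access cleanup only"

# F8 out of bounds: clean up pseudo-empty leaf pages only.
db2 "reorg indexes all for table db2inst1.project allow write access cleanup only pages"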

Friday, 4 July 2014

What are the 4 phases of Offline Reorg

sort - build - replace - recreate all indexes


There are four phases in a classic or offline table reorganization:

   1. SORT

      If an index is specified with the REORG TABLE command, or if a clustering index is defined on the table, the rows of the table are first sorted according to that index. If the INDEXSCAN option is specified, an index scan is used to sort the table, otherwise, a table scan sort is used. This phase only applies to a clustering REORG. Space reclaiming reorganizations begin at the build phase.

   2. BUILD

      In this phase, a reorganized copy of the entire table is built, either in the table space in which the table being reorganized resides, or in a temporary table space specified with the REORG command.

   3. REPLACE

      In this phase, the original table object is replaced by either copying back from the temporary table space, or by pointing to the newly built object within the table space of the table being reorganized.

   4. RECREATE ALL INDEXES

      All indexes defined on the table are recreated.

You have the following commands to monitor the reorg on DB2 V9:



db2pd -d <dbname> -reorg
or
db2 "select * from SYSIBMADM.SNAPTAB_REORG"
or
SELECT SUBSTR(TABNAME, 1, 15) AS TAB_NAME, SUBSTR(TABSCHEMA, 1, 15) 
   AS TAB_SCHEMA, REORG_PHASE, SUBSTR(REORG_TYPE, 1, 20) AS REORG_TYPE, 
   REORG_STATUS, REORG_COMPLETION, DBPARTITIONNUM 
   FROM SYSIBMADM.SNAPTAB_REORG ORDER BY DBPARTITIONNUM

Where SYSIBMADM.SNAPTAB_REORG is an administrative view that contains the following columns:

Column                         Type      Type
name                           schema    name               Length   Scale Nulls
------------------------------ --------- ------------------ -------- ----- ------
SNAPSHOT_TIMESTAMP             SYSIBM    TIMESTAMP                10     0 Yes  
TABNAME                        SYSIBM    VARCHAR                 128     0 Yes  
TABSCHEMA                      SYSIBM    VARCHAR                 128     0 Yes  
PAGE_REORGS                    SYSIBM    BIGINT                    8     0 Yes  
REORG_PHASE                    SYSIBM    VARCHAR                  16     0 Yes  
REORG_MAX_PHASE                SYSIBM    INTEGER                   4     0 Yes  
REORG_CURRENT_COUNTER          SYSIBM    BIGINT                    8     0 Yes  
REORG_MAX_COUNTER              SYSIBM    BIGINT                    8     0 Yes  
REORG_TYPE                     SYSIBM    VARCHAR                 128     0 Yes  
REORG_STATUS                   SYSIBM    VARCHAR                  10     0 Yes  
REORG_COMPLETION               SYSIBM    VARCHAR                  10     0 Yes  
REORG_START                    SYSIBM    TIMESTAMP                10     0 Yes  
REORG_END                      SYSIBM    TIMESTAMP                10     0 Yes  
REORG_PHASE_START              SYSIBM    TIMESTAMP                10     0 Yes  
REORG_INDEX_ID                 SYSIBM    BIGINT                    8     0 Yes  
REORG_TBSPC_ID                 SYSIBM    BIGINT                    8     0 Yes  
DBPARTITIONNUM                 SYSIBM    SMALLINT                  2     0 Yes

Thursday, 3 July 2014

How to run a reorg and runstats online

How to allow read and write access on a table during reorg and runstats

1 - REORG  ONLINE  TO ALLOW READ ON TABLE DURING THE REORG


db2 reorg table <table_name> inplace allow read access


2 - REORG  ONLINE TO ALLOW READ AND WRITE ON TABLE DURING THE REORG


db2 reorg table <table_name> inplace allow write access

3 - RUNSTATS ONLINE TO ALLOW WRITE ON TABLE DURING THE RUNSTATS


db2 runstats on table <table_name> and indexes all shrlevel change

Wednesday, 2 July 2014

How to run REORG, REORGCHK and RUNSTATS on all tables in a database

Queries to execute REORG, REORGCHK and RUNSTATS on all tables in a database

The following commands write the generated statements to a .out file; then execute that file by issuing the db2 command with the -tvf option.


  • Reorg for all tables
db2 -x "select 'reorg table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),';'from syscat.tables where type = 'T' " > reorg.out

db2 -tvf  reorg.out
  •  Reorgchk for all tables
db2 -x "select 'reorgchk update statistics on table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),';' from \
syscat.tables where type = 'T' " > reorgchk.out

db2 -tvf reorgchk.out
  • Runstats for all tables
db2 -x "select 'runstats on table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),' and indexes all;'from \
syscat.tables where type = 'T' " > runstats.out

db2 -tvf runstats.out

Executing reorg, reorgchk and runstats for all tables from one specific tablespace.


  • Reorg for all tables from one specific tablespace
db2 -x "select 'reorg table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),';' \
from syscat.tables where type = 'T' and tbspace='<tablespace_name>'" > reorg.out 

db2 -tvf  reorg.out
  • Reorgchk for all tables from one specific tablespace
db2 -x "select 'reorgchk update statistics on table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),';' from \
syscat.tables where type = 'T' and tbspace='<tablespace_name>'" > reorgchk.out

db2 -tvf reorgchk.out
  • Runstats for all tables from one specific tablespace
db2 -x "select 'runstats on table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),' and indexes all;'from \
syscat.tables where type = 'T' and tbspace='<tablespace_name>'" > runstats.out

db2 -tvf runstats.out

Tuesday, 1 July 2014

How to know when a "CREATE TABLE" statement was last executed

from syscat.tables using the column CREATE_TIME
Perform the select below to find out when a table was created

db2 "select substr(tabname,1,20) as TABLE, create_time from syscat.tables
 where tabname='<table_name>'"