Saturday, May 31, 2014

find user last login

Often, IT security will want to know which DB users have been inactive for the last six months.  The query below will find them.

Note: You need Query History DB enabled to do this.

select username from _v_user
minus
select distinct sessionusername
from (
select sessionusername, to_char(connecttime,'MMYYYY') as conn_month, count(*) as sessions
from "$hist_session_prolog_1"
group by sessionusername, to_char(connecttime,'MMYYYY')
having to_char(connecttime,'MMYYYY') in ('122013','012014','022014','032014','042014','052014')) a;
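
If you prefer a rolling window to hard-coded month values, the same check can be written against a relative cutoff date.  This is only a sketch: it assumes the same "$hist_session_prolog_1" history table and treats anything in the last 180 days as "active".

select username from _v_user
minus
select distinct sessionusername
from "$hist_session_prolog_1"
where connecttime >= current_date - 180;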

Stored Procedures vs UDX


Calling a stored procedure


The standard way to call a stored procedure is by using the CALL statement.
call myprocedure(arguments);

You can also invoke a stored procedure using a SELECT statement, but you can't include a FROM clause. So this works:
select myprocedure('foo', 'bar', 123);

This will throw an error:
select myprocedure(x,y,z) from foo;

It is also important to note that a stored procedure cannot be called as part of an expression, even from within NZPLSQL code. So the code below is also illegal and will result in an error.
select myprocedure(arguments) + 100;
-- or in NZPLSQL code
result := myprocedure(arguments) + 100;
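
For reference, a procedure like the hypothetical myprocedure used in these examples could be created as shown below.  This is only a minimal sketch; the signature and body are placeholders.

CREATE OR REPLACE PROCEDURE myprocedure(VARCHAR(100), VARCHAR(100), INTEGER)
RETURNS INTEGER
LANGUAGE NZPLSQL
AS
BEGIN_PROC
DECLARE
    arg1 ALIAS FOR $1;
    arg2 ALIAS FOR $2;
    arg3 ALIAS FOR $3;
BEGIN
    -- the real work (typically one or more SQL statements) would go here
    RAISE NOTICE 'myprocedure called with % / % / %', arg1, arg2, arg3;
    RETURN arg3;
END;
END_PROC;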

Stored Procedure Performance

It is important to understand that a stored procedure runs as a single process on the NPS host; it does not run in parallel.  Any SQL statement issued by a stored procedure behaves just as SQL issued from any other source does: the SQL runs in parallel on the NPS SPUs.  The primary use case for stored procedures is to control the execution of a multi-step process where the work of each step is performed by a SQL statement.  A stored procedure can dynamically generate a SQL statement based on its inputs or on the values in a table, but it should never be used to process large amounts of data returned from a SQL statement itself.  Any logic that needs to be applied to a large number of individual rows should be done in a SQL statement or a UDX, never with IF statements in the procedure itself.
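
As a sketch of that pattern, a procedure might assemble one set-based statement and hand it to the database rather than looping over rows itself.  The object names below are made up; the point is that EXECUTE IMMEDIATE submits an ordinary SQL statement, which then runs in parallel on the SPUs.

CREATE OR REPLACE PROCEDURE archive_old_rows(VARCHAR(128))
RETURNS INTEGER
LANGUAGE NZPLSQL
AS
BEGIN_PROC
DECLARE
    p_table ALIAS FOR $1;
    v_sql   VARCHAR(2000);
BEGIN
    -- build a single SQL statement from the input, then let the SPUs do the work
    v_sql := 'INSERT INTO ' || p_table || '_ARCHIVE SELECT * FROM ' || p_table ||
             ' WHERE load_date < current_date - 90';
    EXECUTE IMMEDIATE v_sql;
    RETURN 0;
END;
END_PROC;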

Procedures vs. Functions

Procedures in Netezza are different from functions.  A procedure can only be used as shown above (not with a FROM clause), but a procedure can run SQL.  A function can be used with a FROM clause and can process many rows of data instead of only one, but a function cannot issue SQL.  In Netezza we usually call them UDFs (user-defined functions), UDAs (user-defined aggregates), UDTFs (user-defined table functions), or UDAPs (user-defined analytic processes); the term UDX is often used to mean any type of user-defined program.  A UDX can only process the columns of data that are passed into it; it has no ability to issue SQL to fetch additional lookup information.  All of its logic must be based solely on the arguments passed into the function.

Why SQL isn't allowed in a UDX

When SQL is used inside of a user-defined function, it limits the ability of the database to run the function in parallel.  It is common in non-parallel databases to allow functions to issue SQL statements, but in a massively parallel database that would limit scalability.  In an MPP database a SQL statement is always executed across many nodes in parallel.  Taking a TwinFin-12 as an example, when a SQL statement runs it actually executes as 92 parallel processes across 12 or 14 S-Blades.  If each of those parallel processes were to run a SELECT statement, each such statement would also run in parallel (creating 92 parallel processes per statement), for a total of 92*92 = 8,464 parallel processes.  This clearly wouldn't work, as a single SQL statement would spawn more processes than the database is capable of running.  Databases that allow SQL inside a function must therefore prevent those functions from running in parallel.  For that reason Netezza provides two distinct types of user-defined programs: stored procedures run as a single process on the host and are allowed to issue SQL; user-defined functions run in parallel but cannot issue SQL.


Thursday, May 29, 2014

Netezza Basic Commands

Some basic Netezza commands I use regularly:

[nz@CELDNTZ1 ~]$ nzrev
Release 6.0.8 (P-11) [Build 32232]
[nz@CELDNTZ1 ~]$ nzstats

Field Name           Value
-------------------- ------------------------------
Name                 CELDNTZ1
Up Time              2941439 secs
Up Time Text         34 days, 1 hr, 3 mins, 59 secs
Date                 19-May-14, 17:50:38 SGT
State                8
State Text           Online
Model                IBM Netezza 100-1
Num SFIs             0
Num SPAs             1
Num SPUs             1
Num Data Slices      8
Num Hardware Issues  0
Num Dataslice Issues 0

[nz@CELDNTZ1 ~]$ nzhw -issues
No items found
[nz@CELDNTZ1 ~]$ nzds -issues
No items found
[nz@CELDNTZ1 ~]$ nzstate
System state is 'Online'.
[nz@CELDNTZ1 ~]$

Checking sessions (all/active)

nzsession -activeTxn
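
To list every session (not just those with open transactions), plain nzsession works, and a runaway session can be killed with the abort subcommand.  The session id below is a placeholder, and flag details can vary by release, so check nzsession -h first.

nzsession
nzsession abort -id 12345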

Netezza logs can be found in these directories:

/nz/kit/log/postgres
/nz/kit/log/sysmgr
/nz/kit/log/startupsvr
/nz/kit/log/dbos

Sunday, May 25, 2014

Find unwanted tables in Production

As a DBA, you will find that your database stores not only valid production tables but also others with names containing TMP, BKP, BACKUP, dates in DDMMYYYY format, and so on.  These tables take up space, and they slow down backups in particular.  The script below will identify them:

SELECT the_table.database,
        CASE WHEN the_table.objclass = 4905  THEN 'Table    '
             WHEN the_table.objclass = 4910  THEN 'MView    '
             WHEN the_table.objclass = 4940  THEN 'TempTable'
             WHEN the_table.objclass = 4953  THEN 'Sec Table'
             WHEN the_table.objclass = 4951  THEN 'Sec Table'
             WHEN the_table.objclass = 4959  THEN 'Sec Table'
             WHEN the_table.objclass = 4961  THEN 'Table    '
             WHEN the_table.objclass = 4963  THEN 'Table    '
                                             ELSE 'What?    '
        END AS "Object",
        MAX
        (   CASE WHEN the_table.objclass = 4910                THEN   SUBSTR(the_table.objname, 3)::nvarchar(128)
                 WHEN the_table.objclass in (4951,4959,4963)   THEN NULL
                 WHEN the_table.objclass =  4940 then the_table.objname::nvarchar(128)
                        || nvl( '/' || _vt_tbldict.tblname,'')
                 ELSE   the_table.objname::nvarchar(128)
            END
        )
        ||
        MAX( case when the_table.objclass in (4959,4963) then ' (Versioned)' else '' end)
        AS "Name"
  ,       TO_CHAR ( nvl(SUM (  used_bytes  ),0)                     , '999,999,999,999,999'  ) AS "Bytes"
  ,          TO_CHAR ( nvl(SUM (  used_bytes  ),0) / (          1024 ) ,     '999,999,999,999'  ) AS "KB"
  ,          TO_CHAR ( nvl(SUM (  used_bytes  ),0) / (       1048576 ) ,         '999,999,999'  ) AS "MB"
  ,          TO_CHAR ( nvl(SUM (  used_bytes  ),0) / (    1073741824 ) ,             '999,999.0') AS "GB"
  ,          TO_CHAR ( nvl(SUM (  used_bytes  ),0) / ( 1099511627776 ) ,                 '999.0') AS "TB"
FROM  _V_OBJ_RELATION_XDB the_table
      left outer join _V_SYS_OBJECT_DSLICE_INFO on
      (
               the_table.objid = _V_SYS_OBJECT_DSLICE_INFO.tblid   and
               _V_SYS_OBJECT_DSLICE_INFO.tblid > 200000
      )
    left outer join _VT_TBLDICT on the_table.objid = _VT_TBLDICT.tblid
WHERE              the_table.objclass in (4905,4910,4940,4959,4951,4953,4961,4963)
               and the_table.objid > 200000
and
(
the_table.objname like '%0%'
or the_table.objname like '%1%'
or the_table.objname like '%2%'
or the_table.objname like '%3%'
or the_table.objname like '%4%'
or the_table.objname like '%5%'
or the_table.objname like '%6%'
or the_table.objname like '%7%'
or the_table.objname like '%8%'
or the_table.objname like '%9%')
and (the_table.objname not like 'NZCC_%'
and the_table.objname not like '_V_%'
and the_table.objname not like 'UAC_%'
and the_table.objname not like 'NZORCH_%'
and the_table.objname not like '%#%'
and the_table.objname not like '%$%')
GROUP BY the_table.DATABASE,
        "Object",
        the_table.visibleid;
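
If you just want a quick first pass, a simple name-pattern sweep of the current database against _v_table also works.  The LIKE patterns below are only examples; adjust them to your shop's naming conventions.  Note that _v_table only shows the database you are connected to, unlike the cross-database view used above.

select tablename, owner, createdate
from _v_table
where tablename like '%TMP%'
   or tablename like '%BKP%'
   or tablename like '%BACKUP%'
   or tablename like '%OLD%';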

Friday, May 23, 2014

Query to find database size

Need a query to find the size of each database on your Netezza appliance?  Run the query below:

(
    SELECT                             'Database ' AS "Object",
                                       subselect.database::nvarchar(128) AS "Name"
  ,       TO_CHAR ( nvl(SUM ( used ),0)                     , '999,999,999,999,999'  ) AS "Bytes"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / (          1024 ) ,     '999,999,999,999'  ) AS "KB"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / (       1048576 ) ,         '999,999,999'  ) AS "MB"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / (    1073741824 ) ,             '999,999.0') AS "GB"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / ( 1099511627776 ) ,                 '999.0') AS "TB"
    FROM
        (
                SELECT                  database
                      ,0::bigint AS used
                          FROM        _v_database
                          WHERE       database not in ('master_db', 'MASTER_DB')
              UNION ALL
                SELECT                       the_table.database
                             ,SUM ( used_bytes ) as used
                FROM          _V_OBJ_RELATION_XDB the_table
                              left outer join _V_SYS_OBJECT_DSLICE_INFO on
                              (    the_table.objid = _V_SYS_OBJECT_DSLICE_INFO.tblid   and
                                   _V_SYS_OBJECT_DSLICE_INFO.tblid > 200000
                              )
                WHERE             the_table.objclass in (4905,4910,4940,4959,4951,4953,4961,4963)
                              and the_table.objid > 200000
                GROUP BY       the_table.visibleid
                              ,the_table.database
        ) subselect
                        GROUP BY  "Name"
)
UNION ALL
(
        SELECT
        'Appliance' AS "Object",
        'CELDNTZ1                        '::nvarchar(128) AS "Name"
  ,       TO_CHAR ( nvl(SUM ( used ),0)                     , '999,999,999,999,999'  ) AS "Bytes"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / (          1024 ) ,     '999,999,999,999'  ) AS "KB"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / (       1048576 ) ,         '999,999,999'  ) AS "MB"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / (    1073741824 ) ,             '999,999.0') AS "GB"
  ,          TO_CHAR ( nvl(SUM ( used ),0) / ( 1099511627776 ) ,                 '999.0') AS "TB"
FROM
        (
                SELECT

                                      SUM ( used_bytes ) as used
                FROM          _V_OBJ_RELATION_XDB the_table
                              left outer join _V_SYS_OBJECT_DSLICE_INFO on
                              (    the_table.objid = _V_SYS_OBJECT_DSLICE_INFO.tblid   and
                                   _V_SYS_OBJECT_DSLICE_INFO.tblid > 200000
                              )
                WHERE             the_table.objclass in (4905,4910,4940,4959,4951,4953,4961,4963)
                              and the_table.objid > 200000
                GROUP BY      the_table.visibleid
        ) subselect
)
         order by 1 ASC, 2 ASC



Sample Output:

Object    Name                            Bytes             KB         MB       GB   TB
Appliance CELDNTZ1            6,014,002,790,400  5,873,049,600  5,735,400  5,601.0  5.5
Database  CELEBI_STG_PROD_DB  1,535,959,891,968  1,499,960,832  1,464,806  1,430.5  1.4
Database  AIU_HTTP_DB           840,035,467,264    820,347,136    801,120    782.3   .8
Database  CELENR_HLD_PROD_DB    739,612,426,240    722,277,760    705,349    688.8   .7
Database  CELEBI_NBA_DDS_DB     471,286,415,360    460,240,640    449,454    438.9   .4
Database  SYSTEM                395,920,801,792    386,641,408    377,580    368.7   .4
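
If the IBM Netezza support scripts are installed (the same bundle that provides nz_migrate, often found under /nz/support), the nz_db_size script reports a similar per-database and appliance-wide summary without having to maintain this SQL yourself:

nz_db_size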

Wednesday, May 21, 2014

query to find table in Netezza

If you need to search for a table (or tables) across your databases, try the query below and change the table names accordingly.  This is pretty much the Netezza equivalent of Oracle's DBA_TABLES/DBA_OBJECTS.

select database, objname, owner
from _v_obj_relation_xdb
where objname IN ('ARRANGEMENT_ACTIVITY','SUBSCRIBER_ARRANGEMENT')
and objtype='TABLE'
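
A wildcard search works the same way if you only know part of the table name (the pattern below is just an example):

select database, objname, owner
from _v_obj_relation_xdb
where objname like 'ARRANGEMENT%'
and objtype='TABLE';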

Tuesday, May 20, 2014

nz_migrate undocumented

#########################
# Undocumented Features #
#########################
#
# Normally, I would expect you to want to migrate ALL of a table (all rows).  But maybe
# you want to migrate just some of the data and/or break the migration up into smaller
# pieces.  Therefore, I allow you to specify optional SQL -- on a PER TABLE BASIS -- that
# can be used to control/limit the data extraction.
#
# You do this via environment variables.
# You can set them up for as many, or as few, tables as you wish.
# You set them on a per-table basis (one environment variable per table).
#
# If your table name was CUSTOMER_ORDERS, then you would set up this optional
# environment variable in the following manner, prior to launching nz_migrate.
#
#     export OPTIONAL_SQL_CUSTOMER_ORDERS="order_quantity > 1000"
#
# The environment variable must start with OPTIONAL_SQL_
# The tablename part is case sensitive.  It must match the source system exactly (e.g., upper or lower)
# You must set the variable to a valid SQL statement appropriate for this table.
# BASICALLY, this script is going to issue the first part of the SQL statement, e.g.
#           select * from customer_orders where
# and then append to it whatever you've specified
#           order_quantity > 1000
#
# No error checking will be performed.  Make sure you use valid SQL if you use this option.
# If this feature is used, it will be recorded in the nz_migrate output.  e.g.,
#
#      .....OPTIONAL SQL RESTRICTION                    order_quantity > 1000
#
# Any such restriction will also be passed to the nz_cksum portion of the script,
# so that it verifies+validates just the portion of the data that was moved.
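#
# A hypothetical end-to-end example (the host, database, and table names below
# are placeholders; the export must be done in the same shell that launches
# nz_migrate):
#
#     export OPTIONAL_SQL_CUSTOMER_ORDERS="order_quantity > 1000"
#     nz_migrate -sdb proddb -tdb testdb -thost devbox -t CUSTOMER_ORDERS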


# If your migration is going to take a long time ... if you want to migrate the
# data as of a particular point-in-time ... if the source tables might be changing,
# and you don't want those changes to affect the migration ... then you can add
# this switch
#                -stableTXid nnnn
# where nnnn is the stable transaction ID/number.  For more info (or to find out
# the current stableTXid) see:  nz_get_stableTXid -?


# By default, when larger tables are migrated, 4 threads are used.
#
# But what if ...
# say 3 of the threads finished, but one got hung up/failed for "some reason".
# Here is a way to redo just that one thread ... so you only need to (re)migrate
# that portion of the table.  (The less data you have to move over, the better
# off you'll be).
#
# a)  Terminate the nz_migrate job (and any jobs it may have left running on
#     either of the two boxes)
# b)  Did the target table have any rows inserted into it for this thread?
#     They'll get rolled back automatically.  But at some point in time you'll
#     want to nzreclaim that space.  A block level reclaim should suffice.
# c)  Let's say that "thread 2 of 4" failed.  So you want to redo just that thread.
#     Set the following environment variable
#            export MIGRATE_JUST_THIS_THREAD=2
# d)  Then (re)run nz_migrate against that database table ... using basically the
#     same options as before.  But now the script will just start up thread 2 (of 4)
#     rather than threads 1 thru 4.  It is all controlled by this environment
#     variable.
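#
# A hypothetical rerun of just thread 2 (the host, database, and table names are
# placeholders; use basically the same options as the original run):
#
#     export MIGRATE_JUST_THIS_THREAD=2
#     nz_migrate -sdb proddb -tdb testdb -thost devbox -t CUSTOMER_ORDERS -threads 4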

nz_migrate errors

##########
# ERRORS #
##########

# When data starts flowing between the two machines (".....data flowing....."), this script
# basically steps back and gets out of the way.  It does a "wait" until the unloads+nzloads
# are finished.  Usually, they are successful.  I'm not expecting any errors.  But if
# something untoward happens, here are some sample outputs for your reference.
#
# ----------------------------------------------------------------------
#
# If the nzload job dies (because of an 'nzsession abort') you would see
# messages such as this.  (The 'Broken pipe' is a red-herring -- the unload
# died because the load died (and closed the pipe prematurely)).
#
# ERROR:  The unload to an external table (thread # 1 ) reported the following problem @ 2008-05-27 14:50:38
# ERROR:  /tmp/nz_migrate_20080527_2411/nz_migrate_20080527_2411_1.pipe : Broken pipe
#
# ERROR:  The nzload operation ( thread # 1 ) reported the following problem @ 2008-05-27 14:50:38
# Error: ERROR:  Transaction rolled back by client
# Error: ERROR:Communication link failure
# Error: Load Failed, records not inserted.
#
# ----------------------------------------------------------------------
#
# If the nzload job dies (because of a 'kill') you would see
# messages such as this.  Again, the 'Broken pipe' is a red-herring.
#
# ERROR:  The unload to an external table (thread # 1 ) reported the following problem @ 2008-05-27 14:55:53
# ERROR:  /tmp/nz_migrate_20080527_8057/nz_migrate_20080527_8057_1.pipe : Broken pipe
#
# ERROR:  The nzload operation ( thread # 1 ) reported the following problem @ 2008-05-27 14:55:53
# Error: ERROR:  Transaction rolled back by client
# Error: load is aborted due to user cancellation.
#
# ----------------------------------------------------------------------
#
# If the unload (create external table) dies (because of an 'nzsession abort')
# you would see messages such as this.
#
# ERROR:  The unload to an external table (thread # 1 ) reported the following problem @ 2008-05-27 14:53:58
# pqFlush() -- connection not open
#
# ----------------------------------------------------------------------
#
# I paused my system (which was being used as both the source + target)
# Which resulted in these types of messages.
#
# ERROR:  The unload to an external table (thread # 1 ) reported the following problem @ 2008-05-27 14:57:47
# ERROR:  Transaction rolled back due to restart or failover
#
# ERROR:  The nzload operation ( thread # 1 ) reported the following problem @ 2008-05-27 14:57:47
# Error: ERROR:  Transaction rolled back due to restart or failover
# Error: Load Failed, records not inserted.
#
# ----------------------------------------------------------------------
#
# If someone messes with the pipe (as in this case) it could throw errors for
# both the unload and the load.
#
# ERROR:  The unload to an external table (thread # 1 ) reported the following problem @ 2008-05-27 16:36:47
# ERROR:  /tmp/nz_migrate_20080527_11217/nz_migrate_20080527_11217_1.pipe : Broken pipe
#
# ERROR:  The nzload operation ( thread # 1 ) reported the following problem @ 2008-05-27 16:36:47
# Error: ERROR:  External Table : count of bad input rows reached maxerrors limit, see nzlog file
# Error: Load Failed, records not inserted.
#
# ----------------------------------------------------------------------
#
# This was the error of interest that was being thrown.
# It is actually generated by nzload (and is simply reported back up the stack by nz_migrate)
#
# ERROR: The nzload operation ( thread # 3 ) reported the following problem @ 2009-09-04 03:29:21
# Error: Communication link failure
# Error: ERROR:Communication link failure
# Error: Load Failed, records not inserted.
#
# The following error was also reported ... but is simply a byproduct of the above error.
#
# ERROR: The unload to an external table (thread # 3 ) reported the following problem @ 2009-09-04 03:29:21
# ERROR: /tmp/nz_migrate_20090904_26809/nz_migrate_20090904_26809_3.pipe : Broken pipe
#
# This turned out to be a network problem ... which affected the data transfer (e.g., nzload) between the
# two boxes (nzload runs on the SOURCE host ... loading data across the network ... into the TARGET host)
#
# An ftp test worked ... but ran very slowly.
#
# Things to test and look at
#      Get FTP working
#      Get FTP working fast
#      Make sure there are no network errors being reported ( /sbin/ifconfig )
#
# This problem turned out to be an incorrect configuration of network cards on one of
# the database hosts.  The NICs were configured as Round Robin instead of master-backup.
# Due to Round-robin the network interface kept switching between different switches
# impacting the performance of moving data across the network.
#
# ----------------------------------------------------------------------
#

# If you see ANYTHING that is flagged as an error in the output, you
# should follow up on it.





#########
# Notes #
#########

# nz_migrate is, for the most part, universal in nature.  It can be used on any of
# these software versions.
#
#      3.0, 3.1, 4.0, 4.5, 4.6, 5.0, yada yada
#
# If you are migrating FROM 3.0, you will need to use the "-tbin <dirname>" option.

# nz_migrate is, for the most part, self contained.  No other scripts/executables
# should be needed ... except if you include one of these options
#
#      -format binary
#      -cksum slow
#      -cksum fast

# I have tested ASCII and BINARY migrations between the following revs ...
# everything seems to check out ok.  And, because nzload is based on
# external tables (as of 3.1) there seems to be no issue with cross
# version compatibility -- and thus no need to have a client toolkit
# installed on the source host.  In general.
#
#      3.0-->3.1     You WILL need to have the client toolkit
#      3.0-->4.0     installed (i.e., "-tbin <dirname>" ) if your
#      3.0-->4.5     source system is rev 3.0.
#
#      3.1-->4.0
#      3.1-->4.5
#
#      4.0-->3.1
#      4.0-->4.5
#
#      4.5-->3.1
#      4.5-->4.0

# I've tried to make this script case insensitive.  But that gets tricky at
# times.  Pre 3.1 systems were all lower case.  Post 3.1 systems can be
# either upper case or lower case.  To say nothing of the fact that you
# can explicitly QUOTE a database/tablename -- to preserve its case
# sensitivity -- in any version.  Plus, the very nature of this script
# causes it to often be used to migrate data from one machine (running
# one version of the software) to another machine (running a different
# version of the software).
#
# At this time ... the script should be mostly case insensitive ... assuming
# it is most important to work in that environment (where one system might
# be uppercase and another system might be lowercase).
#
# I first try to access the table (on both the source and the target machines)
# using an unquoted tablename (which means case insensitive).  If that doesn't
# find a match, then I try to access the table using a quoted tablename (which
# means it would have to be an exact match).  Once I get a match, I then always
# quote it to make sure the box does what I want it to.  This should allow me
# to migrate data from an UPPERCASE system to a lowercase system, and vice versa.

# Running 2 (or more) threads at a time can result in a measurable increase
# in the overall throughput -- by making better use of the multiple processors
# on the SMP host, as well as the house network.  While one stream might be
# pended on I/O or a system call, the other stream(s) can forge ahead.
#
# The biggest performance gain is obtained with the 2nd thread, lesser so with
# the 3rd thread, and then only fractionally better thereafter.
#
# This script is set up to use 1 thread for small tables (less than 1M rows),
# since there is an overhead associated with starting up the threads.
# For larger tables, 4 threads will be used -- which seemed a reasonable
# number (and a reasonable compromise, all things considered).
# You can go higher (I've seen continued small improvements with 10 threads).
#
# Use 1 thread if you're interested in simplicity -- one external table
# unload job and 1 nzload job.
#
# Machine-->Machine migration speed will vary based on MANY factors (your
# data, the data skew, the class of NPS box your source database is on, the
# class of NPS box your target database is on, the size of the table, the
# number of columns in the table, the datatypes of each column, etc...)
#
# You should expect numbers in the 100 GB/hour range at the low end --
# and in the 300+ GB/hour range (near physics speeds!) at the high end.
# (The latter numbers were measured going from<-->to two 8250-HA class
# machines.)





#########
# SPEED #
#########

# Recent performance tests
#
#     8250 --> 8150, 52.9 GB, 46 Tables, 78.8 M Rows
#              took ~ 0:08:00
#              which works out to be about ~400 GB/hour
#              -- as measured by the size of the BINARY DATA ON DISK
#              -- for the right table ... I've seen this number spike up to ~450 GB/hour

# Even more recent performance tests
#
#  10400 --> 10400 using 4.0, and migrating ~10GB of ascii data
#
#       -format ascii -threads 1     3m40s
#       -format ascii -threads 2     2m37s
#       -format ascii -threads 4     2m16s
#
#       -format binary -threads 1    1m05s
#       -format binary -threads 2    0m47s
#       -format binary -threads 4    0m46s
#

# The tests below ... what do they tell me?
#      Binary format is (usually, but not always) fastest.
#      Multiple threads are important.
#      If you are going to a different sized machine, ascii is
#           roughly as fast as binary because the decompression
#           takes place on the host instead of the SPUs.  And it
#           makes the use of multiple threads even more important,
#           so more processors (on the SMP host) can be doing the
#           decompression in parallel.
#
# 10400 --> 10100     -format ascii     -format binary
#      -threads 1           336              356
#      -threads 4           205              184
#      -threads 8           200              196
#
#
# 10100 --> 10100     -format ascii     -format binary
#      -threads 1           341              143
#      -threads 4           223              106
#      -threads 8           205              102
#


# Generic nzload timing note ...
# Ran load tests, averaging 470GB/Hour from the StoragePad ... with the load files
# split into 4 pieces, and running multiple nzloads simultaneously.

nz_migrate command options

Usage:    nz_migrate  -sdb <dbname> -tdb <dbname> -thost <name/IP>  [optional args]

Purpose:  Migrate (i.e., copy) database table(s) from one NPS server to another.

          It can also be used to make a copy of a database/table on the same server.

          Source     The server/database containing the data which is to be copied.

          Target     The server/database to receive the data.  The database and
                     table(s) must already exist on the target.

                     Optionally, this script can automatically create the entire
                     target database, or just the target table(s),  via the options
                          -CreateTargetDatabase  YES
                          -CreateTargetTable     YES

          This script can be invoked from the source host or from the target host.
          Or, it can be invoked from a linux client -- though that would not provide
          the best performance as the data has to flow from the source to/thru the
          client and then back out to the target.
               SOURCE ==> client ==> TARGET

          Regarding Materialized Views:  One would never migrate the data in a materialized
          view directly (or load it or back it up for that matter).  You only deal with the
          base table itself ... and the materialized view is automatically maintained by the
          NPS software.  However, there is overhead in having the system have to update the
          MView (along with the base table).  You might want to consider suspending the MView
          on the target machine prior to doing the migration ... and then refreshing it upon
          completion.
               ALTER VIEW view MATERIALIZE { REFRESH | SUSPEND }

          Regarding Sequences:  The only 'data' associated with a sequence is the definition
          (the DDL) of the sequence itself, which you can obtain via the script 'nz_ddl_sequence'.

          This script can be run as any user.  No special privileges need to be granted
          beyond the basics (such as the ability to read from the source tables and
          create external tables, and the ability to write to the target tables).

          To migrate tables requires permissions such as these:

          On the SOURCE system
               GRANT LIST         ON <DATABASE|dbname>  TO <user/group>;
               --The user must have access to the database that contains the table(s)
               GRANT SELECT       ON <TABLE|tablename>  TO <user/group>;
               --The user must have access to the tables themselves, and their data
               GRANT CREATE EXTERNAL TABLE              TO <user/group>;
               --The user must be able to create external tables, into which the
               --data will be unloaded

          On the TARGET system
               GRANT LIST         ON <DATABASE|dbname>  TO <user/group>;
               --The user must have access to the database that contains the table(s)
               GRANT SELECT, INSERT , TRUNCATE      ON <TABLE|tablename>  TO <user/group>;
               --The user must be able to insert (i.e., nzload) data into the target
               --tables.  The TRUNCATE privilege is only needed if you want to use the
               --the command line option:  -TruncateTargetTable YES

Inputs:   REQUIRED Arguments
          ==================

          -sdb <dbname>            # Source database name
          -tdb <dbname>            # Target database name

          -shost <name/IP>         # Source host
          -thost <name/IP>         # Target host
             
               # The "-shost <name/IP>", if not specified, will default to "localhost"
               # (i.e., the current system).  If you are not invoking this script directly
               # on the source host, then this value must be specified.
               #
               # If -shost and -thost are both set to the string "localhost", then
               # this script will recognize that you simply want to copy the data
               # from one database into another (on the same machine).  In which case
               # it will perform a simpler/faster cross-database INSERT (rather than
               # doing a 'migration' which has more overhead)

          A minimalist command line might be as simple as something like this:
               nz_migrate  -sdb proddb  -tdb testdb -thost devbox

          OPTIONAL Arguments
          ==================

          -suser <user>             # Source user       [SUSER]
          -tuser <user>             # Target user       [TUSER]

          -spassword <password>     # Source password   [SPASSWORD]
          -tpassword <password>     # Target password   [TPASSWORD]

          -sport <#>                # Source port       [SPORT]
          -tport <#>                # Target port       [TPORT]

          -sschema <schema>         # Source schema     [SSCHEMA]
          -tschema <schema>         # Target schema     [TSCHEMA]

               # If any of these arguments are NOT specified, the default NZ_USER,
               # NZ_PASSWORD, NZ_DBMS_PORT and NZ_SCHEMA environment variables will
               # be used in their place when connecting to the source and target
               # machines.
               #
               # Rather than passing these arguments on the command line, you could
               # instead specify them via the environment variables listed above.
               # For the passwords this would be more secure -- as the passwords
               # wouldn't appear if someone issued a 'ps' command.  Alternatively,
               # if you have set up the password cache (via the 'nzpassword' command)
               # the passwords can be obtained directly from that.
               #
               # Generally, the NZ_DBMS_PORT is never specified (so the default value
               # of 5480 is used).  This simply allows you to override the default if
               # you need to.
 
          -t|-table|-tables  <tablename> [...]

               # The table(s) within the database to be migrated.  If none are specified,
               # then all tables in the source database will be migrated.  This switch
               # can be specified multiple times.
               #
               # Note:  If a table contains a column of type INTERVAL (which is rather
               #        atypical) it can only be migrated by using the "-format binary"
               #        option.

          -exclude <tablename> [...]

               # If you don't specify a list of tables to be migrated, then all of the
               # tables in the source database will be migrated.  This option allows
               # you to identify specific tables that are to be EXCLUDE'd from that
               # migration.

          -tableFile <filename>

               # You can specify a file which lists the tablenames to be migrated (the
               # names can be separated by newlines or tabs).  This switch can be
               # used in conjunction with the -table option.  This switch can be
               # specified multiple times.

          -tableSQL <SQLQueryString>

               # You can use this option to dynamically generate the list of tablenames
               # to be migrated, via a user-specified SQLQueryString that runs against
               # the source database.  This allows you to include/exclude many tables
               # all at once.
               #
               # Sample Usage:
               # -tableSQL "select tablename from _v_table where objtype = 'TABLE' and tablename not like 'TEST%'"
               #
               # This option can be specified only once.  This option must be used by
               # itself (and not in conjunction with the -table or -tableFile options).

          -sTable <tablename>      # source Table
          -tTable <tablename>      # target Table

               # This option allows you to migrate a source table into a target table
               # with a different name.  When using this option you may only process
               # one table at a time via this script.  You must specify both the source
               # tablename and the target tablename.  The -CreateTargetTable and
               # -TruncateTargetTable options may be used.  You may not use the
               # "-t <tablename>" option at the same time.

          -format <ascii|binary>

               # The data transfer format to be used.  The default is ascii.
               #
               # ascii   Universal in nature, but results in a larger volume of data
               #         needing to be transferred -- which means the migration will
               #         typically take longer.  It is especially well suited for
               #         doing migrations
               #            o   between different sized machines (i.e., TF12 --> TF48 )
               #            o   when the DDL differs slightly between the two tables
               #                (i.e., different data types, distribution keys, etc.)
               #
               # binary  The database's compressed external table format.  It can result
               #         in higher throughput, and thus better performance, especially
               #         when doing migrations between the same sized machines (machines
               #         which have the same number of dataslices).
               #
               #         Normally, data is compressed on the source SPUs and then later
               #         decompressed on the target SPUs.  But if the # of dataslices
               #         does not match, then the decompression will actually occur on
               #         the target host rather than on the target SPUs (thus, losing
               #         out on the performance advantages of parallelism).  This can be
               #         offset by using multiple "-threads <n>" ... in which case the
               #         decompression will now run in multiple processes on the target
               #         host (thus making use of multiple SMP processors/cores).
               #
               #         Note: In 4.5 (and earlier revs) nzodbcsql was used to load the
               #               data into the target machine.  Which means that the
               #               password cache could not be used for the target host.
               #
               #               In 4.6 (and later revs) nzload is used to load the data
               #               into the target machine.  Which means that the password
               #               cache can now be used.

          -threads <n>

               # Each table will be processed by using '<n>' threads (parallel unload/load
               # streams) in order to make optimal use of the SMP host and the network
               # bandwidth.  By default
               #      1  thread will be used for small tables ( < 1M rows )
               #      4  threads will be used for larger tables
               # Or, you can override the number of threads to be used with this switch.

          -cksum <yes|no|fast|slow>  [column ...]

               # Should a cksum be run against both tables to confirm that the source and
               # target tables (appear to) contain the same data?  The default is "Yes".
               # Choices are
               #      Yes | Count   --   Perform a simple "select COUNT(*) from <table>"
               #      No            --   Perform NO cksum
               #      Fast          --   Perform a FAST cksum (only 1 column is used)
               #      Slow          --   Perform a SLOW cksum (all columns are used)
               #
               # The source table's cksum is performed at the start of the migration ...
               # so that it reflects the data that is being migrated (in case someone
               # inserts/updates/deletes rows from the table sometime after the migration
               # process began).  It is run as a separate/background job so as to not
               # delay the start of the data flow.
               #
               # The target table's cksum is performed at the end of the migration ...
               # after the data is in it.
               #
               # If column(s) are specified, then only those column(s) will participate
               # when doing a Slow or Fast checksum.  The Count checksum doesn't involve
               # any columns to begin with, since it only counts the number of rows.
               #
               # See also:  nz_cksum

          -TruncateTargetTable <no|yes>

               # Before loading any data into the target table, TRUNCATE it to ensure it
               # is empty.  If you use this switch be careful not to accidentally mix up
               # your source and target hosts.  Or your original/source data will be LOST
               # FOREVER!  The default value is NO.

          -CreateTargetTable <no|yes>

               # If the table does not already exist in the target database, do you want
               # this script to attempt to automatically create it for you?  The default
               # value is NO.  If YES, the table will be created, and its definition will
               # include any primary key and unique column constraints that may have been
               # defined.  The script will also attempt to add any relevant foreign key
               # constraints to the table.  However, as those are dependent upon other
               # tables, that operation may or may not be successful.  If that operation
               # fails it will not be treated as an error by this script.  Rather, the
               # following notice will be included in the output.
               #
               # ....NOTICE:  One (or more) of the foreign key constraints was not added to the table
               #
               # The script 'nz_table_constraints' will be helpful if you wish to add
               # those table constraints back in later on.

          -CreateTargetDatabase <no|yes>

               # If the target DATABASE itself does not exist, do you want this script
               # to attempt to automatically create it (and ALL of its tables and other
               # objects) for you?  The default value is NO.  If YES, then this script
               # will (basically) run
               #
               #      nz_ddl  SOURCE_DATABASE_NAME  -rename TARGET_DATABASE_NAME
               #
               # on the source system (to obtain the original DDL), and then feed that
               # into nzsql running against the target system (to create the objects).
               #
               # Many non-fatal errors might occur (for example, a CREATE VIEW statement
               # might reference objects in another database ... that doesn't exist on the
               # target system ... and thus, that DDL statement would fail).  For purposes
               # of SUCCESS/FAILURE, the only thing that this script cares about is whether
               # the target database itself was successfully created.
               #
               # As the output from these operations can be quite voluminous, the details
               # will be logged to a disk file in case you wish to reference them.
               #
               # To replicate a database (all of its DDL and DATA) on your current system,
               # you could do something like this:
               #      nz_migrate -shost localhost -sdb OLD_DB  -createtargetdatabase True \
               #                 -thost localhost -tdb NEW_DB
               #
               # See 'nz_ddl -help' for additional information.

               # Regarding SCHEMA's:
               # You can migrate from one database.schema into another database.schema.
               # If the target database exists, but the target schema does not ... this
               #     script will NOT create the target schema.
               # If the target database does not exist, this script will create the
               #     target database and use its default schema (which you can rename
               #     after the fact).  In this situation you would NOT specify the
               #     "-tschema <schema>" option, as the target schema is going to be
               #     the default schema (of the newly created target database).

          -CreateTargetUDX <no|yes>

               # This option only applies when the following two conditions are met
               #      You specified  '-CreateTargetDatabase yes'
               #      The target database does not exist (and thus will be created)
               #
               # If both conditions are true, then this option can be used to specify
               # that the script should also recreate any user defined FUNCTION, AGGREGATE
               # or LIBRARY that existed in the source database.
               #
               # The default for this option is NO.
               #
               # If you specify YES then please note the following:
               #   o  On the source machine, you must be running as the linux user "nz"
               #      so that the object files (under /nz/data ) can be accessed
               #   o  The object files are scp'ed to the target machine, so this script
               #      (e.g., scp) will most likely prompt you for the necessary password
               #   o  The source and target machines must be of a compatible architecture
               #      (e.g., Mustang-->Mustang  or  IBM PDA N1001-->IBM PDA N2001)
               #
               # As an alternative for copying/creating UDX's, you might want to consider
               # the following scripts:
               #      nz_ddl_function / nz_ddl_aggregate / nz_ddl_library

          -to <email@recipient>

               # Email address to be sent a copy of the output from this script

          -errors_to <email@recipient>

               # Email address to be sent a copy of the output from this script ...
               # but only if an error is encountered.

          -mail_program <script_or_executable>

               # When sending email, this script invokes
               #       /nz/kit/sbin/sendMail     if you are the linux user "nz"
               # else  /bin/mail                 for all other linux users
               #
               # This option allows you to tie in a different mailer program (or shell
               # script) without needing to make any changes to the nz_migrate script
               # itself.

          -genStats <none|full|express|basic>

               # After each individual table migration completes, this script can
               # automatically issue a GENERATE STATISTICS command against the target
               # table.  It is invoked as a background job so as to not block the rest
               # of the migration process.
               #
               # Options are
               #      None        --  Do nothing (the default)
               #      Full        --  Generate /* full   */ Statistics
               #      Express     --  Generate    EXPRESS   Statistics
               #      Basic       --  Generate /* basic  */ Statistics  (as of 4.6+)
               #
               # For more information about statistics, see 'nz_genstats -help'

          -SourceObjectType <table|any>

               # nz_migrate is used to copy data from a "source table" into a "target table".
               # But why can't the source be something other than a table?  For example,
               # a view or an external table/datafile? This is rather atypical, but allowed.
               #
               # To do this you must
               #  -- specify the source objectname(s), via the "-table" or "-sTable" option
               #  -- include the "-SourceObjectType any" option
               #
               # Otherwise, the default checks that the source "table" is truly a table.

          -SourceWhereClause <clause>

               # Normally, nz_migrate is used to migrate over the entire contents of a
               # table -- all visible rows.  This option allows "you" to tack on a WHERE
               # clause to the data that gets selected ... allowing "you" to migrate a
               # subset of the table.  All the power (and responsibility) is put into your
               # hands.  Do wrap the clause in double quotes so it will be passed into the
               # script correctly.  Examples:
               #    
               #    -SourceWhereClause  "customer_key = 2"
               #    -SourceWhereClause  "customer_key in (1,3,5)"
               #    -SourceWhereClause  "region_name = 'AMERICA' or region_key = 0"
               #    -SourceWhereClause  "order_date between '1998-01-01' and '1998-12-31'"
               #    -SourceWhereClause  "customer_num in (select cust_id from sales..delinquent_accounts)"
               #
               # Because this clause gets applied to all tables being migrated, you would
               # probably only want to migrate a single table at a time (when using this
               # clause) ... since the clause will typically contain column names that are
               # specific to that table.
               #
               # This clause gets applied to the source table.  Its use will be logged in
               # the output of this script (as well as in the pg.log file on the source
               # machine).  After the migration, when the tables are compared (via the
               # cksum step) this clause will be applied to both the source and target
               # tables ... so that only that data that was moved will be cksum'ed.
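                #
                # A hypothetical end-to-end invocation using this option (the
                # database, host, and table names below are placeholders):
                #
                #    nz_migrate -sdb proddb -tdb testdb -thost devbox -t ORDERS \
                #               -SourceWhereClause "order_date between '1998-01-01' and '1998-12-31'" \
                #               -TruncateTargetTable yes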

          -TargetOrderByClause <clause>

               # This option allows the data to be sorted -- after it is moved to the
               # target machine, and before it is actually inserted into the target
               # table -- in order to obtain optimal performance on future queries that
               # would benefit from zonemap restrictions.  It is up to you to decide what
               # column(s) the data should be sorted on, and to pass those column name(s)
               # into this script.  Do wrap the clause in double quotes so it will be
               # passed into the script correctly.  Examples:
               #
               #      -TargetOrderByClause  "purchase_date"
               #      -TargetOrderByClause  "purchase_date asc"
               #      -TargetOrderByClause  "purchase_date asc, store_id asc"
               #
               # Because this clause gets applied to all tables being migrated, you would
               # probably only want to migrate a single table at a time (when using this
               # clause) ... since the clause will typically contain column names that are
               # specific to that table.
               #
               # This clause gets applied to the target table.  Its use will be logged in
               # the output of this script (as well as in the pg.log file on the target
               # machine).
               #
               # How it works (plan wise):  The data will be sent to the target machine --
               # down to the appropriate SPU/dataslice -- and written out to the swap
               # partition.  After all of the data has been moved over, then each SPU/
               # dataslice will sort its dataset and insert that ordered data into the
               # target table.  A typical *.pln file would look something like this:
               #
               #   500[00]: dbs ScanNode table 1000000470
               #   500[02]: dbs ParserNode table "PROD_DB.ADMIN.#aet_3228_1290_903" delim=|
               #   500[05]: dbs DownloadTableNode distribute into link 1000000471, numDistKeys=1 keys=[ 0 ]
               #
               #     1[00]: spu ScanNode table 1000000471
               #     1[03]: spu SortNode (type=0), 1 cols
               #     1[06]: spu InsertNode into table "PROD_DB.ADMIN.CUSTOMER_TBL" 421665
               #
               # The migration WILL take longer ... because we have tacked on an extra step
               # (this sort node) that occurs after the data has been copied over.  However,
               # this does save you the trouble of having to (a) first migrate the data over
               # and (b) then do the sort operation yourself.  Thus, in the long run, it
               # might save you some time, effort, and disk space.
               #
               # It is also recommended that you include these command line options:
               #      -threads 1  -format ascii
               #
               # If you want the data "well sorted", it needs to be sorted as a single dataset.
               # Using multiple threads would cause each thread to sort its data separately.
               # The end result would be that the data was somewhat sorted, but not "well
               # sorted" (so why bother sorting it at all?)
               #
               # If the number of dataslices match (between the source and target machines)
               # then the sortedness of the target data should be comparable to that of the
               # source data (whatever it may be).  If the number of dataslices does not
               # match, you can use "-format binary" ... but (for the reasons just mentioned)
               # you should not use multiple threads.  A binary migration (using just one
               # thread) between dissimilar machines will take a long time -- longer than
               # an ascii migration.  So the "-format ascii" is recommended.

          -TargetDistributeOnClause <clause>

               # IF ... the target table does not exist
               # IF ... you include the "-CreateTargetTable YES" option, thereby requesting this
               #        script to automatically issue a CREATE TABLE statement on your behalf
               # THEN ... this option will allow you to control/override the distribution key.
               #        By default, it will match that of the source table.  But you can
               #        specify whatever you want it to be.  Do wrap the clause in double
               #        quotes so it will be passed into the script correctly.  Examples:
               #
               #      -TargetDistributeOnClause  "Random"
               #      -TargetDistributeOnClause  "(customer_id)"
               #      -TargetDistributeOnClause  "(customer_id, account_id)"
               #
               # Because this clause gets applied to all tables being created, you would
               # probably only want to migrate a single table at a time (when using this
               # clause) ... since the clause will typically contain column names that are
               # specific to a table.  The exception might be if you wanted all of the
               # newly created tables to be distributed on random.
               #
               # This clause gets applied to the target table.  Its use will be logged in
               # the output of this script (as well as in the pg.log file on the target
               # machine).
               #
               # When using this option, you must use "-format ascii" (because a binary
               # migration requires that the source and target table structures match
               # exactly ... to include the distribution key).
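                #
                # A hypothetical invocation combining this option with automatic
                # table creation (all object names below are placeholders):
                #
                #    nz_migrate -sdb proddb -tdb testdb -thost devbox -t ORDERS \
                #               -CreateTargetTable yes -format ascii \
                #               -TargetDistributeOnClause "Random"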

          -viaSynonym

               # If you pass the script a synonym name (rather than a table name) ... and
               # If you include this switch ... then
               #
               # The script will find the table that the synonym references (on the source)
               # The table will be migrated to the target
               # Upon successful completion,
               #      a synonym by the same name will be (re)created (on the target)
               #      referencing the table that was just migrated to it.
               #
               # For example, the synonym TODAYS_DATA might reference the table MARCH_12.
               # Once the table MARCH_12 has been successfully migrated to the target host,
                # the synonym (TODAYS_DATA) will be updated to point to that table.

          -cloneDDL

               # This script may issue CREATE TABLE statements    (-CreateTargetTable YES)
               # This script may issue CREATE SYNONYM statements  (-viaSynonym)
               #
               # If the -cloneDDL switch is included, then this script will generate all
               # of the DDL associated with the object -- to attempt to faithfully clone
               # it.  This would include the following statements:
               #        
               #      COMMENT ON ...
               #      GRANT ...
               #      ALTER ... OWNER TO ...
               #
               # Note: This is always the case for the '-CreateTargetDatabase' option.

          -status [<n>]

               # Provides periodic status updates while the data is flowing:
               #
               # .....data flowing.....
               # .....status Total:    5,077,991,424    Average:  507,799,142    elapsed seconds: 10
               # .....status Total:   10,688,921,600    Average:  510,084,561    elapsed seconds: 21
               # .....status Total:   16,720,592,896    Average:  603,167,129    elapsed seconds: 31
               # .....status Total:   23,155,965,952    Average:  643,537,305    elapsed seconds: 41
               # .....status Total:   29,871,308,800    Average:  671,534,284    elapsed seconds: 51
               # .....status Total:   36,969,644,032    Average:  709,833,523    elapsed seconds: 61
               #
               #      The first column is the total number of bytes transferred
               #      The second column is the average number of bytes transferred per second
               #           since the last status update
               #
               # And it provides additional summary statistics for each table, and for the
               # migration as a whole:
               #
               # .....# of bytes xfer'ed                                         2,062,729,601,280
               # .....xfer rate (bytes per second)                               917,584,342
               #
               # By default, the status will be updated every 60 seconds.  You can specify
               # an optional value from 1..3600 seconds.
               #
               # Implementation details:  The dataset, as it is passed between the two
               # hosts, is piped thru "dd" in order to count the number of bytes.  This
               # is not the size of the table on disk -- rather, it is the size of the
               # ascii or binary dataset that gets migrated between the two hosts.  Because
               # it involves an additional process ("dd") getting injected into the data
               # stream there is some -- albeit minimal -- overhead when using this switch.
               # With older versions of linux, the reported "# of bytes xfer'ed" might be
               # slightly less than the actual value because it is based on the number of
               # whole blocks processed by dd.
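                #
                # For example, to request a status update every 30 seconds (a sketch;
                # the connection and table switches are omitted here):
                #
                #      nz_migrate ...  -status 30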

          -restart <n>

               # Sometimes a table migration may fail because of a momentary network
               # blip between the two hosts (such as a "Communication link failure")
               # or a momentary hardware or software issue.  This script will
               # automatically restart the migration of a table under the following
               # conditions:
               #
               #      You specified the option "-TruncateTargetTable yes"
               #           --or--
               #      You specified the option "-SourceWhereClause <clause>"
               #
                # During the previous attempt to migrate the table, one (or more) of
                # the threads may have successfully completed ... which would result
                # in some (but not all) of the data existing in the target table,
                # leaving it in an inconsistent state.  Before a restart of
               # the migration is attempted, this script will attempt to either
               # TRUNCATE the target table (which should be instantaneous) or
               # issue a DELETE statement against the target table based on the
               # "-SourceWhereClause <clause>" (which could take a long time to
               # run).
               #
                # By default, the # of restart attempts is set to 1 -- though a restart
                # is only attempted under the conditions specified above.  You can set
                # the number of restart attempts to a value from 0 (no restart) to 10.
               #
               # A restart does not guarantee success.  It just automates the attempt.
                # A restart will only take place if the problem occurred during the
                # migration of the data itself (not during the early setup or the
                # final cksum phase).  If the subsequent migration of the table is
               # successful then the script will treat it as a success -- with
               # appropriate log messages being included in the output to identify
               # what the script is doing.
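                #
                # For example (a sketch; the connection and table switches shown are
                # illustrative), to allow up to two automatic restart attempts of a
                # full-table migration:
                #
                #      nz_migrate ...  -t MARCH_12  -TruncateTargetTable yes  -restart 2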

          -timeout [<n>]

               # Sometimes a table migration hangs up ... where, for example, all but one
               # of the threads finishes.  Perhaps the source sent a packet of data but
               # the target never received it.  Neither one received an error.  But neither
               # one will continue ... as they are each waiting for the other to do something.
               #
               # This option tries to monitor for such situations, and will automatically
               # terminate the hung thread(s) when it notices that 0 bytes of data have
               # been transferred between the two systems over the specified length of time
               # (the -timeout value).  Specifying this option automatically enables the
               # '-status' option as well.
               #
               # By default, this option is disabled.  Include this switch on the command
               # line to enable it.  The default timeout value is set to 600 seconds (10
               # minutes).  You can specify an optional value from 1..7200 seconds (2 hours).
               #
               # If the timeout is reached, the migration of the current table will
               # automatically be terminated.  If desired, the migration of that table
               # can automatically be restarted as well.  See the '-restart' option
               # described above for more details.
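                #
                # For example (a sketch; the connection and table switches shown are
                # illustrative), to terminate the migration of a table when no data has
                # flowed for five minutes, and then retry that table once:
                #
                #      nz_migrate ...  -t MARCH_12  -TruncateTargetTable yes  -timeout 300  -restart 1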

          -sbin <source bin/executables directory>
          -tbin <target bin/executables directory>

               # You should be able to freely migrate between NPS machines running
               # software versions 3.1, 4.0, 4.5, 4.6, 5.0 and 6.0.
               #
               # If you are migrating from a 3.0 system, there may be software
               # incompatibilities between the source and target machines.
               #
               # To accommodate this difference, do the following:
               #      1) Get a copy of the client toolkit ("cli.package.tar.z") that
               #         matches the version of software running on your TARGET machine.
               #      2) Install it somewhere on your SOURCE machine.  But not under the
               #         directory "/nz" as you don't want to mix up the server + client
               #         software.
               #      3) When you invoke nz_migrate, include the "-tbin" argument so
               #         that the script can find and use the proper toolkit when
               #         communicating with the TARGET machine.
               #
               # For example,
               #      -tbin /export/home/nz/client_toolkits/3.1/bin

Cancel:   When the nz_migrate script runs, it launches many subordinate processes.  If you
          need to kill the migration, the easiest method is to press ^C.

          Otherwise, you can issue the following kill statement
               kill -- -NNN

          where NNN is the process ID of the nz_migrate script.  Since many processes
          are associated with the migration, the relevant PID # is listed in the
          nz_migrate output (and is also a part of the directory/log file names).
          For example:

               Data Format   : ascii (using nzsql and nzload)
               Log Directory : /tmp/nz_migrate.20100101_120000.37554
               Log File      : /tmp/nz_migrate.20100101_120000.37554/nz_migrate.output
               Top Level PID : 37554
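
          Using the sample output above, a kill of that particular migration (and of
          all its subordinate processes) would look like this:

                kill -- -37554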

          The migration is NOT run as one large transaction (where everything gets
          migrated or nothing gets migrated).  Rather, each table is migrated separately.
          Beyond that, if multiple threads are used ... then each thread is a separate/
          independent transaction.  So what does this mean for you ... if you kill a
          migration?

          o   Any tables that have already been migrated are done.

          o   For the current table

              If any threads have already finished ... then that data has been COMMIT'ed
              and does exist in the target table.

              For any threads that are still running ... that data will be rolled back/
              discarded (and the discarded rows may be using up space in the target table).

          So ... your target table will be in an indeterminate state.  Before restarting
          the migration for that table, you may want to TRUNCATE it or use the
          "-TruncateTargetTable TRUE" option when invoking nz_migrate.

Errors:   Error: Load session of table '<tablename>' has 1 error(s)

               You are probably doing a "-format ascii" migration between the
               two systems.  The target table does NOT have to be identical
               to the source table.  For example, the column names can be
               different, or an INTEGER column on the source machine can be
               loaded into a VARCHAR column on the target machine.  But if
               you get this error, then some data incompatibility seems to
               have cropped up (e.g., trying to load a 'date' into a column
               that is defined as 'time').  Check the DDL for the two tables.
               Check the *.nzlog file for additional clues.

          [ISQL]ERROR: Could not SQLExecute

               You are doing a "-format binary" migration between the two systems,
               and the definition of the target table (# of columns, data types,
               null/not null constraints, distribution keys) does not match that
               of the source table.  They must match.

          Error: packet header version '9' does not match expected '11'
                 - client and server versions may be different

               It appears that the migration is between two different NPS hosts that
               are running two different versions of the NPS software -- and some of
               the NZ*** tools (most likely, nzsql or nzload) are not compatible
               between the two machines.  See notes above regarding the "-tbin" option.

          ERROR:  loader does not currently support INTERVAL data

               The INTERVAL data type is somewhat atypical, e.g.
                    '1 year 2 months 3 weeks 4 days 5 hours 6 minutes 7 seconds'
                nzload has never supported this data type.  It is a derived value,
                typically the result of arithmetic (e.g., subtraction) between two TIME
                and/or TIMESTAMP columns, which you then stored in a new table.  Try
                migrating this specific table
               using the "-format binary" option instead.

          ERROR:  nz_cksum's differ!

               After each table is migrated, an (optional) cksum is calculated against
               both the source + target tables, and then compared.  For some reason
                the checksums don't match.  The tables should be analyzed + compared
               further to try to determine why this occurred.
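
                As a starting point, you could re-run the checksum manually against each
                side and compare the results (a sketch -- this assumes nz_cksum accepts
                the database and table names as arguments; check its usage text):

                     nz_cksum  PRODDB  MARCH_12      # run against the source
                     nz_cksum  TESTDB  MARCH_12      # run against the target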

          ERROR:  Reload column count mismatch.
                       #  i.e., the total number of columns does not match
          ERROR:  Reload column type mismatch.
                       #  i.e., the column data types do not match
                       #        or the null/not null constraints do not match
          ERROR:  Reload distribution algorithm mismatch.
                       #  i.e., the DISTRIBUTE ON clause does not match

               These errors occur when you are doing a "-format binary" migration
               between the two systems, and the definition of the target table does
               not match that of the source table.  They must match.

               Note: The "nz_ddl_diff" script can be used to compare two table's DDL.

          Error: Unexpected protocol character/message

               The issue is that nzload uses a hard-coded 10 second timeout during
               connection establishment, and the NPS host is not responding in time.
               The NPS host does a REVERSE lookup of the ip address of the client to
               find its hostname.  If the client machine is not in the same sub-net
               as the NPS host, then the name-lookup may take too long.

               Either configure the NPS host to do the name-lookup properly (the domain
                name server might need to be corrected: /etc/resolv.conf), or change the
               /etc/hosts file to include an entry for the client name and ip address.

               To confirm that this is the problem, on the TARGET NPS host try doing this
                    time  host    <ip-of-client-not-working/Source Host>
               or
                    time  dig -x  <ip-of-client-not-working/Source Host>

               If this returns the client name and other information immediately, then
               the problem is something else.  If it takes longer than 10 seconds to
                return the information, then that delay is what is causing nzload's
                timeout to expire.
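
                For example, a hypothetical /etc/hosts entry added on the NPS host (the
                IP address and hostnames here are just placeholders):

                     echo "192.0.2.25   sourcehost.example.com   sourcehost"  >>  /etc/hosts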

Outputs:  Status/log/timing information for each table that is migrated will be sent
          to standard out, as well as details about any problems that might need to
          be addressed.

          Exit status:  0 = success, non-0 = ERRORs were encountered

          Sample output (for a migration that included many of the optional switches)

nz_migrate table 'MARCH_12' from nps-007:PRODDB --> nps-014:TESTDB

.....processing table 1 of 1
.....referenced via synonym 'TODAYS_DATA'
.....using target table 'MY_TEST_RUN'
.....creating the target table
.....Target DISTRIBUTE ON Clause                                RANDOM
.....truncating the target table
.....migration process                              started at  2011-03-12 11:37:48
.....estimated # of records                                     16,384
.....Source WHERE Clause                                        AND (cust_id < 90000)
.....Target ORDER BY Clause                                     part_num, ship_date
.....nzload starting             ( thread 1 of 1 )
.....unloading data              ( thread 1 of 1 )
.....data flowing.....
.....unload results              ( thread 1 of 1 )              INSERT 0 16384
.....unload finished             ( thread 1 of 1 )              elapsed seconds: 1
.....nzload finished             ( thread 1 of 1 )              elapsed seconds: 2
.....nzload successful           ( thread 1 of 1 )
.....data flow finished
.....migration process                              ended at    2011-03-12 11:37:50
.....actual # of records unloaded                               16,384
.....
.....migration completed                                        TOTAL seconds: 2
.....
.....cksum process                                  started at  2011-03-12 11:37:50
.....cksum process                                  ended at    2011-03-12 11:37:51
.....confirmed cksum: 450386793.9812719738880 16384 MARCH_12
.....
.....cksum completed                                            TOTAL seconds: 1
.....
.....reference synonym was successfully created in the target database