Tuesday, May 20, 2014

nz_migrate command options

Usage:    nz_migrate  -sdb <dbname> -tdb <dbname> -thost <name/IP>  [optional args]

Purpose:  Migrate (i.e., copy) database table(s) from one NPS server to another.

          It can also be used to make a copy of a database/table on the same server.

          Source     The server/database containing the data which is to be copied.

          Target     The server/database to receive the data.  The database and
                     table(s) must already exist on the target.

                     Optionally, this script can automatically create the entire
                     target database, or just the target table(s),  via the options
                          -CreateTargetDatabase  YES
                          -CreateTargetTable     YES

          This script can be invoked from the source host or from the target host.
          Or, it can be invoked from a linux client -- though that would not provide
          the best performance as the data has to flow from the source to/thru the
          client and then back out to the target.
               SOURCE ==> client ==> TARGET

          Regarding Materialized Views:  One would never migrate the data in a materialized
          view directly (or load it or back it up for that matter).  You only deal with the
          base table itself ... and the materialized view is automatically maintained by the
          NPS software.  However, there is overhead in having the system update the
          MView (along with the base table).  You might want to consider suspending the MView
          on the target machine prior to doing the migration ... and then refreshing it upon
          completion.
               ALTER VIEW view MATERIALIZE { REFRESH | SUSPEND }
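
          For example, on the target system (CUST_MVIEW is an illustrative view
          name):
               ALTER VIEW CUST_MVIEW MATERIALIZE SUSPEND;   -- before the migration
               ALTER VIEW CUST_MVIEW MATERIALIZE REFRESH;   -- after the migration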

          Regarding Sequences:  The only 'data' associated with a sequence is the definition
          (the DDL) of the sequence itself, which you can obtain via the script 'nz_ddl_sequence'.
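
          For example, a sketch of carrying the sequence definitions over to the
          target (the hostnames and database names are illustrative):
               nz_ddl_sequence proddb | nzsql -host devbox -d testdb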

          This script can be run as any user.  No special privileges need to be granted
          beyond the basics (such as the ability to read from the source tables and
          create external tables, and the ability to write to the target tables).

          Migrating tables requires permissions such as these:

          On the SOURCE system
               GRANT LIST         ON <DATABASE|dbname>  TO <user/group>;
               --The user must have access to the database that contains the table(s)
               GRANT SELECT       ON <TABLE|tablename>  TO <user/group>;
               --The user must have access to the tables themselves, and their data
               GRANT CREATE EXTERNAL TABLE              TO <user/group>;
               --The user must be able to create external tables, into which the
               --data will be unloaded

          On the TARGET system
               GRANT LIST         ON <DATABASE|dbname>  TO <user/group>;
               --The user must have access to the database that contains the table(s)
                GRANT SELECT, INSERT, TRUNCATE       ON <TABLE|tablename>  TO <user/group>;
                --The user must be able to insert (i.e., nzload) data into the target
                --tables.  The TRUNCATE privilege is only needed if you want to use
                --the command line option:  -TruncateTargetTable YES
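
           A worked example, using a hypothetical user MIGRATOR and a hypothetical
           table SALES_FACT:
                -- on the SOURCE system
                GRANT LIST ON PRODDB TO MIGRATOR;
                GRANT SELECT ON SALES_FACT TO MIGRATOR;
                GRANT CREATE EXTERNAL TABLE TO MIGRATOR;
                -- on the TARGET system
                GRANT LIST ON TESTDB TO MIGRATOR;
                GRANT SELECT, INSERT, TRUNCATE ON SALES_FACT TO MIGRATOR;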

Inputs:   REQUIRED Arguments
          ==================

          -sdb <dbname>            # Source database name
          -tdb <dbname>            # Target database name

          -shost <name/IP>         # Source host
          -thost <name/IP>         # Target host
             
               # The "-shost <name/IP>", if not specified, will default to "localhost"
               # (i.e., the current system).  If you are not invoking this script directly
               # on the source host, then this value must be specified.
               #
               # If -shost and -thost are both set to the string "localhost", then
               # this script will recognize that you simply want to copy the data
               # from one database into another (on the same machine).  In which case
               # it will perform a simpler/faster cross-database INSERT (rather than
               # doing a 'migration' which has more overhead)

          A minimalist command line might be as simple as this:
               nz_migrate  -sdb proddb  -tdb testdb -thost devbox

          OPTIONAL Arguments
          ==================

          -suser <user>             # Source user       [SUSER]
          -tuser <user>             # Target user       [TUSER]

          -spassword <password>     # Source password   [SPASSWORD]
          -tpassword <password>     # Target password   [TPASSWORD]

          -sport <#>                # Source port       [SPORT]
          -tport <#>                # Target port       [TPORT]

          -sschema <schema>         # Source schema     [SSCHEMA]
          -tschema <schema>         # Target schema     [TSCHEMA]

               # If any of these arguments are NOT specified, the default NZ_USER,
               # NZ_PASSWORD, NZ_DBMS_PORT and NZ_SCHEMA environment variables will
               # be used in their place when connecting to the source and target
               # machines.
               #
               # Rather than passing these arguments on the command line, you could
               # instead specify them via the environment variables listed above.
               # For the passwords this would be more secure -- as the passwords
               # wouldn't appear if someone issued a 'ps' command.  Alternatively,
               # if you have set up the password cache (via the 'nzpassword' command)
               # the passwords can be obtained directly from that.
               #
               # Generally, the NZ_DBMS_PORT is never specified (so the default value
               # of 5480 is used).  This simply allows you to override the default if
               # you need to.
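
                # For example, passing the source credentials via the environment
                # rather than the command line (the values shown are illustrative):
                #      export SUSER=admin
                #      export SPASSWORD=password
                #      nz_migrate -sdb proddb -tdb testdb -thost devbox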
 
          -t|-table|-tables  <tablename> [...]

               # The table(s) within the database to be migrated.  If none are specified,
               # then all tables in the source database will be migrated.  This switch
               # can be specified multiple times.
               #
               # Note:  If a table contains a column of type INTERVAL (which is rather
               #        atypical) it can only be migrated by using the "-format binary"
               #        option.

          -exclude <tablename> [...]

               # If you don't specify a list of tables to be migrated, then all of the
               # tables in the source database will be migrated.  This option allows
               # you to identify specific tables that are to be EXCLUDE'd from that
               # migration.

          -tableFile <filename>

               # You can specify a file which lists the tablenames to be migrated (the
               # names can be separated by newlines or tabs).  This switch can be
               # used in conjunction with the -table option.  This switch can be
               # specified multiple times.
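
                # For example (tables.txt is an illustrative filename):
                #      $ cat tables.txt
                #      CUSTOMER
                #      ORDERS
                #      LINEITEM
                #      $ nz_migrate -sdb proddb -tdb testdb -thost devbox -tableFile tables.txt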

          -tableSQL <SQLQueryString>

               # You can use this option to dynamically generate the list of tablenames
               # to be migrated, via a user-specified SQLQueryString that runs against
               # the source database.  This allows you to include/exclude many tables
               # all at once.
               #
               # Sample Usage:
               # -tableSQL "select tablename from _v_table where objtype = 'TABLE' and tablename not like 'TEST%'"
               #
               # This option can be specified only once.  This option must be used by
               # itself (and not in conjunction with the -table or -tableFile options).

          -sTable <tablename>      # source Table
          -tTable <tablename>      # target Table

               # This option allows you to migrate a source table into a target table
               # with a different name.  When using this option you may only process
               # one table at a time via this script.  You must specify both the source
               # tablename and the target tablename.  The -CreateTargetTable and
               # -TruncateTargetTable options may be used.  You may not use the
               # "-t <tablename>" option at the same time.

          -format <ascii|binary>

               # The data transfer format to be used.  The default is ascii.
               #
               # ascii   Universal in nature, but results in a larger volume of data
               #         needing to be transferred -- which means the migration will
               #         typically take longer.  It is especially well suited for
               #         doing migrations
               #            o   between different sized machines (i.e., TF12 --> TF48 )
               #            o   when the DDL differs slightly between the two tables
               #                (i.e., different data types, distribution keys, etc.)
               #
               # binary  The database's compressed external table format.  It can result
               #         in higher throughput, and thus better performance, especially
               #         when doing migrations between the same sized machines (machines
               #         which have the same number of dataslices).
               #
               #         Normally, data is compressed on the source SPUs and then later
               #         decompressed on the target SPUs.  But if the # of dataslices
               #         does not match, then the decompression will actually occur on
               #         the target host rather than on the target SPUs (thus, losing
               #         out on the performance advantages of parallelism).  This can be
               #         offset by using multiple "-threads <n>" ... in which case the
               #         decompression will now run in multiple processes on the target
               #         host (thus making use of multiple SMP processors/cores).
               #
               #         Note: In 4.5 (and earlier revs) nzodbcsql was used to load the
               #               data into the target machine.  Which means that the
               #               password cache could not be used for the target host.
               #
               #               In 4.6 (and later revs) nzload is used to load the data
               #               into the target machine.  Which means that the password
               #               cache can now be used.

          -threads <n>

               # Each table will be processed by using '<n>' threads (parallel unload/load
               # streams) in order to make optimal use of the SMP host and the network
               # bandwidth.  By default
               #      1  thread will be used for small tables ( < 1M rows )
               #      4  threads will be used for larger tables
               # Or, you can override the number of threads to be used with this switch.
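
                # For example, to force eight parallel streams for one large table
                # (the names are illustrative):
                #      nz_migrate -sdb proddb -tdb testdb -thost devbox -t BIG_FACT -threads 8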

          -cksum <yes|no|fast|slow>  [column ...]

               # Should a cksum be run against both tables to confirm that the source and
               # target tables (appear to) contain the same data?  The default is "Yes".
               # Choices are
               #      Yes | Count   --   Perform a simple "select COUNT(*) from <table>"
               #      No            --   Perform NO cksum
               #      Fast          --   Perform a FAST cksum (only 1 column is used)
               #      Slow          --   Perform a SLOW cksum (all columns are used)
               #
               # The source table's cksum is performed at the start of the migration ...
               # so that it reflects the data that is being migrated (in case someone
               # inserts/updates/deletes rows from the table sometime after the migration
               # process began).  It is run as a separate/background job so as to not
               # delay the start of the data flow.
               #
               # The target table's cksum is performed at the end of the migration ...
               # after the data is in it.
               #
               # If column(s) are specified, then only those column(s) will participate
               # when doing a Slow or Fast checksum.  The Count checksum doesn't involve
               # any columns to begin with, since it only counts the number of rows.
               #
               # See also:  nz_cksum
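
                # For example, a fast cksum computed over a single nominated column
                # (the table and column names are illustrative):
                #      nz_migrate -sdb proddb -tdb testdb -thost devbox -t CUSTOMER \
                #                 -cksum fast customer_key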

          -TruncateTargetTable <no|yes>

               # Before loading any data into the target table, TRUNCATE it to ensure it
                # is empty.  If you use this switch, be careful not to accidentally mix up
                # your source and target hosts, or your original/source data will be LOST
                # FOREVER!  The default value is NO.

          -CreateTargetTable <no|yes>

               # If the table does not already exist in the target database, do you want
               # this script to attempt to automatically create it for you?  The default
               # value is NO.  If YES, the table will be created, and its definition will
               # include any primary key and unique column constraints that may have been
               # defined.  The script will also attempt to add any relevant foreign key
               # constraints to the table.  However, as those are dependent upon other
               # tables, that operation may or may not be successful.  If that operation
               # fails it will not be treated as an error by this script.  Rather, the
               # following notice will be included in the output.
               #
               # ....NOTICE:  One (or more) of the foreign key constraints was not added to the table
               #
               # The script 'nz_table_constraints' will be helpful if you wish to add
               # those table constraints back in later on.

          -CreateTargetDatabase <no|yes>

               # If the target DATABASE itself does not exist, do you want this script
               # to attempt to automatically create it (and ALL of its tables and other
               # objects) for you?  The default value is NO.  If YES, then this script
               # will (basically) run
               #
               #      nz_ddl  SOURCE_DATABASE_NAME  -rename TARGET_DATABASE_NAME
               #
               # on the source system (to obtain the original DDL), and then feed that
               # into nzsql running against the target system (to create the objects).
               #
               # Many non-fatal errors might occur (for example, a CREATE VIEW statement
               # might reference objects in another database ... that doesn't exist on the
               # target system ... and thus, that DDL statement would fail).  For purposes
               # of SUCCESS/FAILURE, the only thing that this script cares about is whether
               # the target database itself was successfully created.
               #
               # As the output from these operations can be quite voluminous, the details
               # will be logged to a disk file in case you wish to reference them.
               #
               # To replicate a database (all of its DDL and DATA) on your current system,
               # you could do something like this:
                #      nz_migrate -shost localhost -sdb OLD_DB  -CreateTargetDatabase YES \
               #                 -thost localhost -tdb NEW_DB
               #
               # See 'nz_ddl -help' for additional information.

               # Regarding SCHEMA's:
               # You can migrate from one database.schema into another database.schema.
               # If the target database exists, but the target schema does not ... this
               #     script will NOT create the target schema.
               # If the target database does not exist, this script will create the
               #     target database and use its default schema (which you can rename
               #     after the fact).  In this situation you would NOT specify the
               #     "-tschema <schema>" option, as the target schema is going to be
               #     the default schema (of the newly created target database).

          -CreateTargetUDX <no|yes>

               # This option only applies when the following two conditions are met
               #      You specified  '-CreateTargetDatabase yes'
               #      The target database does not exist (and thus will be created)
               #
               # If both conditions are true, then this option can be used to specify
               # that the script should also recreate any user defined FUNCTION, AGGREGATE
               # or LIBRARY that existed in the source database.
               #
               # The default for this option is NO.
               #
               # If you specify YES then please note the following:
               #   o  On the source machine, you must be running as the linux user "nz"
                #      so that the object files (under /nz/data) can be accessed
               #   o  The object files are scp'ed to the target machine, so this script
               #      (e.g., scp) will most likely prompt you for the necessary password
               #   o  The source and target machines must be of a compatible architecture
               #      (e.g., Mustang-->Mustang  or  IBM PDA N1001-->IBM PDA N2001)
               #
               # As an alternative for copying/creating UDX's, you might want to consider
               # the following scripts:
               #      nz_ddl_function / nz_ddl_aggregate / nz_ddl_library
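
                # For example, a sketch of recreating just the functions by hand
                # (the database and host names are illustrative):
                #      nz_ddl_function proddb | nzsql -host devbox -d testdb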

          -to <email@recipient>

               # Email address to be sent a copy of the output from this script

          -errors_to <email@recipient>

               # Email address to be sent a copy of the output from this script ...
               # but only if an error is encountered.

          -mail_program <script_or_executable>

               # When sending email, this script invokes
               #       /nz/kit/sbin/sendMail     if you are the linux user "nz"
               # else  /bin/mail                 for all other linux users
               #
               # This option allows you to tie in a different mailer program (or shell
               # script) without needing to make any changes to the nz_migrate script
               # itself.

          -genStats <none|full|express|basic>

               # After each individual table migration completes, this script can
               # automatically issue a GENERATE STATISTICS command against the target
               # table.  It is invoked as a background job so as to not block the rest
               # of the migration process.
               #
               # Options are
               #      None        --  Do nothing (the default)
               #      Full        --  Generate /* full   */ Statistics
               #      Express     --  Generate    EXPRESS   Statistics
               #      Basic       --  Generate /* basic  */ Statistics  (as of 4.6+)
               #
               # For more information about statistics, see 'nz_genstats -help'
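
                # The equivalent manual statement, run against the target (the
                # table name is illustrative):
                #      nzsql -host devbox -d testdb -c "GENERATE EXPRESS STATISTICS ON customer_tbl"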

          -SourceObjectType <table|any>

               # nz_migrate is used to copy data from a "source table" into a "target table".
                # But the source can also be something other than a table -- for example,
                # a view or an external table/datafile.  This is rather atypical, but allowed.
               #
               # To do this you must
               #  -- specify the source objectname(s), via the "-table" or "-sTable" option
               #  -- include the "-SourceObjectType any" option
               #
               # Otherwise, the default checks that the source "table" is truly a table.

          -SourceWhereClause <clause>

               # Normally, nz_migrate is used to migrate over the entire contents of a
                # table -- all visible rows.  This option allows you to tack on a WHERE
                # clause to the data that gets selected ... allowing you to migrate a
                # subset of the table.  All the power (and responsibility) is put into your
               # hands.  Do wrap the clause in double quotes so it will be passed into the
               # script correctly.  Examples:
               #    
               #    -SourceWhereClause  "customer_key = 2"
               #    -SourceWhereClause  "customer_key in (1,3,5)"
               #    -SourceWhereClause  "region_name = 'AMERICA' or region_key = 0"
               #    -SourceWhereClause  "order_date between '1998-01-01' and '1998-12-31'"
               #    -SourceWhereClause  "customer_num in (select cust_id from sales..delinquent_accounts)"
               #
               # Because this clause gets applied to all tables being migrated, you would
               # probably only want to migrate a single table at a time (when using this
               # clause) ... since the clause will typically contain column names that are
               # specific to that table.
               #
               # This clause gets applied to the source table.  Its use will be logged in
               # the output of this script (as well as in the pg.log file on the source
               # machine).  After the migration, when the tables are compared (via the
               # cksum step) this clause will be applied to both the source and target
               # tables ... so that only that data that was moved will be cksum'ed.

          -TargetOrderByClause <clause>

               # This option allows the data to be sorted -- after it is moved to the
               # target machine, and before it is actually inserted into the target
               # table -- in order to obtain optimal performance on future queries that
               # would benefit from zonemap restrictions.  It is up to you to decide what
               # column(s) the data should be sorted on, and to pass those column name(s)
               # into this script.  Do wrap the clause in double quotes so it will be
               # passed into the script correctly.  Examples:
               #
               #      -TargetOrderByClause  "purchase_date"
               #      -TargetOrderByClause  "purchase_date asc"
               #      -TargetOrderByClause  "purchase_date asc, store_id asc"
               #
               # Because this clause gets applied to all tables being migrated, you would
               # probably only want to migrate a single table at a time (when using this
               # clause) ... since the clause will typically contain column names that are
               # specific to that table.
               #
               # This clause gets applied to the target table.  Its use will be logged in
               # the output of this script (as well as in the pg.log file on the target
               # machine).
               #
               # How it works (plan wise):  The data will be sent to the target machine --
               # down to the appropriate SPU/dataslice -- and written out to the swap
               # partition.  After all of the data has been moved over, then each SPU/
               # dataslice will sort its dataset and insert that ordered data into the
               # target table.  A typical *.pln file would look something like this:
               #
               #   500[00]: dbs ScanNode table 1000000470
               #   500[02]: dbs ParserNode table "PROD_DB.ADMIN.#aet_3228_1290_903" delim=|
               #   500[05]: dbs DownloadTableNode distribute into link 1000000471, numDistKeys=1 keys=[ 0 ]
               #
               #     1[00]: spu ScanNode table 1000000471
               #     1[03]: spu SortNode (type=0), 1 cols
               #     1[06]: spu InsertNode into table "PROD_DB.ADMIN.CUSTOMER_TBL" 421665
               #
               # The migration WILL take longer ... because we have tacked on an extra step
               # (this sort node) that occurs after the data has been copied over.  However,
               # this does save you the trouble of having to (a) first migrate the data over
               # and (b) then do the sort operation yourself.  Thus, in the long run, it
               # might save you some time, effort, and disk space.
               #
               # It is also recommended that you include these command line options:
               #      -threads 1  -format ascii
               #
               # If you want the data "well sorted", it needs to be sorted as a single dataset.
               # Using multiple threads would cause each thread to sort its data separately.
               # The end result would be that the data was somewhat sorted, but not "well
               # sorted" (so why bother sorting it at all?)
               #
               # If the number of dataslices match (between the source and target machines)
               # then the sortedness of the target data should be comparable to that of the
               # source data (whatever it may be).  If the number of dataslices does not
               # match, you can use "-format binary" ... but (for the reasons just mentioned)
               # you should not use multiple threads.  A binary migration (using just one
               # thread) between dissimilar machines will take a long time -- longer than
               # an ascii migration.  So the "-format ascii" is recommended.
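
                # Putting that advice together (an illustrative invocation):
                #      nz_migrate -sdb proddb -tdb testdb -thost devbox -t SALES_FACT \
                #                 -threads 1 -format ascii \
                #                 -TargetOrderByClause "sale_date asc"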

          -TargetDistributeOnClause <clause>

               # IF ... the target table does not exist
               # IF ... you include the "-CreateTargetTable YES" option, thereby requesting this
               #        script to automatically issue a CREATE TABLE statement on your behalf
               # THEN ... this option will allow you to control/override the distribution key.
               #        By default, it will match that of the source table.  But you can
               #        specify whatever you want it to be.  Do wrap the clause in double
               #        quotes so it will be passed into the script correctly.  Examples:
               #
               #      -TargetDistributeOnClause  "Random"
               #      -TargetDistributeOnClause  "(customer_id)"
               #      -TargetDistributeOnClause  "(customer_id, account_id)"
               #
               # Because this clause gets applied to all tables being created, you would
               # probably only want to migrate a single table at a time (when using this
               # clause) ... since the clause will typically contain column names that are
               # specific to a table.  The exception might be if you wanted all of the
               # newly created tables to be distributed on random.
               #
               # This clause gets applied to the target table.  Its use will be logged in
               # the output of this script (as well as in the pg.log file on the target
               # machine).
               #
               # When using this option, you must use "-format ascii" (because a binary
               # migration requires that the source and target table structures match
               # exactly ... to include the distribution key).

          -viaSynonym

               # If you pass the script a synonym name (rather than a table name) ... and
               # If you include this switch ... then
               #
               # The script will find the table that the synonym references (on the source)
               # The table will be migrated to the target
               # Upon successful completion,
               #      a synonym by the same name will be (re)created (on the target)
               #      referencing the table that was just migrated to it.
               #
               # For example, the synonym TODAYS_DATA might reference the table MARCH_12.
               # Once the table MARCH_12 has been successfully migrated to the target host,
                # the synonym (TODAYS_DATA) will be updated to point to that table.
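
                # An illustrative invocation (using the synonym from the example above):
                #      nz_migrate -sdb proddb -tdb testdb -thost devbox \
                #                 -t TODAYS_DATA -viaSynonym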

          -cloneDDL

               # This script may issue CREATE TABLE statements    (-CreateTargetTable YES)
               # This script may issue CREATE SYNONYM statements  (-viaSynonym)
               #
               # If the -cloneDDL switch is included, then this script will generate all
               # of the DDL associated with the object -- to attempt to faithfully clone
               # it.  This would include the following statements:
               #        
               #      COMMENT ON ...
               #      GRANT ...
               #      ALTER ... OWNER TO ...
               #
               # Note: This is always the case for the '-CreateTargetDatabase' option.

          -status [<n>]

               # Provides periodic status updates while the data is flowing:
               #
               # .....data flowing.....
               # .....status Total:    5,077,991,424    Average:  507,799,142    elapsed seconds: 10
               # .....status Total:   10,688,921,600    Average:  510,084,561    elapsed seconds: 21
               # .....status Total:   16,720,592,896    Average:  603,167,129    elapsed seconds: 31
               # .....status Total:   23,155,965,952    Average:  643,537,305    elapsed seconds: 41
               # .....status Total:   29,871,308,800    Average:  671,534,284    elapsed seconds: 51
               # .....status Total:   36,969,644,032    Average:  709,833,523    elapsed seconds: 61
               #
               #      The first column is the total number of bytes transferred
               #      The second column is the average number of bytes transferred per second
               #           since the last status update
               #
               # And it provides additional summary statistics for each table, and for the
               # migration as a whole:
               #
               # .....# of bytes xfer'ed                                         2,062,729,601,280
               # .....xfer rate (bytes per second)                               917,584,342
               #
               # By default, the status will be updated every 60 seconds.  You can specify
               # an optional value from 1..3600 seconds.
               #
               # Implementation details:  The dataset, as it is passed between the two
               # hosts, is piped thru "dd" in order to count the number of bytes.  This
               # is not the size of the table on disk -- rather, it is the size of the
               # ascii or binary dataset that gets migrated between the two hosts.  Because
               # it involves an additional process ("dd") getting injected into the data
               # stream there is some -- albeit minimal -- overhead when using this switch.
               # With older versions of linux, the reported "# of bytes xfer'ed" might be
               # slightly less than the actual value because it is based on the number of
               # whole blocks processed by dd.

          -restart <n>

               # Sometimes a table migration may fail because of a momentary network
               # blip between the two hosts (such as a "Communication link failure")
               # or a momentary hardware or software issue.  This script will
               # automatically restart the migration of a table under the following
               # conditions:
               #
               #      You specified the option "-TruncateTargetTable yes"
               #           --or--
               #      You specified the option "-SourceWhereClause <clause>"
               #
               # During the previous attempt to migrate the table, one (or more) of
               # the threads may have successfully completed ... which would result
               # in some (but not all) data existing in the target table.  Which would
               # leave the target table in an inconsistent state.  Before a restart of
               # the migration is attempted, this script will attempt to either
               # TRUNCATE the target table (which should be instantaneous) or
               # issue a DELETE statement against the target table based on the
               # "-SourceWhereClause <clause>" (which could take a long time to
               # run).
               #
               # By default, the # of restart attempts is set to 1 -- though it only
               # kicks in based on the above specified conditions.  You can set
               # the number of restart attempts to a value from 0 (no restart) to 10.
               #
               # A restart does not guarantee success.  It just automates the attempt.
               # A restart will only take place if the problem occurred during the
               # migration of the data (and not during the early setup nor during the
               # final cksum phases).  If the subsequent migration of the table is
               # successful then the script will treat it as a success -- with
               # appropriate log messages being included in the output to identify
               # what the script is doing.

          -timeout [<n>]

               # Sometimes a table migration hangs up ... where, for example, all but one
               # of the threads finishes.  Perhaps the source sent a packet of data but
               # the target never received it.  Neither one received an error.  But neither
               # one will continue ... as they are each waiting for the other to do something.
               #
               # This option tries to monitor for such situations, and will automatically
               # terminate the hung thread(s) when it notices that 0 bytes of data have
               # been transferred between the two systems over the specified length of time
               # (the -timeout value).  Specifying this option automatically enables the
               # '-status' option as well.
               #
               # By default, this option is disabled.  Include this switch on the command
               # line to enable it.  The default timeout value is set to 600 seconds (10
               # minutes).  You can specify an optional value from 1..7200 seconds (2 hours).
               #
               # If the timeout is reached, the migration of the current table will
               # automatically be terminated.  If desired, the migration of that table
               # can automatically be restarted as well.  See the '-restart' option
               # described above for more details.
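
                # For example, to terminate (and retry once) any table whose data
                # flow stalls for five minutes (the values shown are illustrative):
                #      nz_migrate -sdb proddb -tdb testdb -thost devbox \
                #                 -timeout 300 -restart 1 -TruncateTargetTable YES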

          -sbin <source bin/executables directory>
          -tbin <target bin/executables directory>

               # You should be able to freely migrate between NPS machines running
               # software versions 3.1, 4.0, 4.5, 4.6, 5.0 and 6.0.
               #
               # If you are migrating from a 3.0 system, there may be software
               # incompatibilities between the source and target machines.
               #
               # To accommodate this difference, do the following:
               #      1) Get a copy of the client toolkit ("cli.package.tar.z") that
               #         matches the version of software running on your TARGET machine.
               #      2) Install it somewhere on your SOURCE machine.  But not under the
               #         directory "/nz" as you don't want to mix up the server + client
               #         software.
               #      3) When you invoke nz_migrate, include the "-tbin" argument so
               #         that the script can find and use the proper toolkit when
               #         communicating with the TARGET machine.
               #
               # For example,
               #      -tbin /export/home/nz/client_toolkits/3.1/bin

Cancel:   When the nz_migrate script runs it launches many subordinate processes.  If you
          need to kill the migration the easiest method would be to press ^C.

          Otherwise, you can issue the following kill statement
               kill -- -NNN

          where NNN is the process ID of the nz_migrate script.  Since many processes
          are associated with the migration, the relevant PID # is listed in the
          nz_migrate output (and is also a part of the directory/log file names).
          For example:

               Data Format   : ascii (using nzsql and nzload)
               Log Directory : /tmp/nz_migrate.20100101_120000.37554
               Log File      : /tmp/nz_migrate.20100101_120000.37554/nz_migrate.output
               Top Level PID : 37554
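
          So, for the example above, the entire migration (and all of its
          subordinate processes) could be killed via:
               kill -- -37554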

          The migration is NOT run as one large transaction (where everything gets
          migrated or nothing gets migrated).  Rather, each table is migrated separately.
          Beyond that, if multiple threads are used ... then each thread is a separate/
          independent transaction.  So what does this mean for you ... if you kill a
          migration?

          o   Any tables that have already been migrated are done.

          o   For the current table

              If any threads have already finished ... then that data has been COMMIT'ed
              and does exist in the target table.

              For any threads that are still running ... that data will be rolled back/
              discarded (and the discarded rows may be using up space in the target table).

          So ... your target table will be in an indeterminate state.  Before restarting
          the migration for that table, you may want to TRUNCATE it or use the
          "-TruncateTargetTable YES" option when invoking nz_migrate.

Errors:   Error: Load session of table '<tablename>' has 1 error(s)

               You are probably doing a "-format ascii" migration between the
               two systems.  The target table does NOT have to be identical
               to the source table.  For example, the column names can be
               different, or an INTEGER column on the source machine can be
               loaded into a VARCHAR column on the target machine.  But if
               you get this error, then some data incompatibility seems to
               have cropped up (e.g., trying to load a 'date' into a column
               that is defined as 'time').  Check the DDL for the two tables.
               Check the *.nzlog file for additional clues.

          [ISQL]ERROR: Could not SQLExecute

               You are doing a "-format binary" migration between the two systems,
               and the definition of the target table (# of columns, data types,
               null/not null constraints, distribution keys) does not match that
               of the source table.  They must match.

          Error: packet header version '9' does not match expected '11'
                 - client and server versions may be different

               It appears that the migration is between two different NPS hosts that
               are running two different versions of the NPS software -- and some of
               the NZ*** tools (most likely, nzsql or nzload) are not compatible
               between the two machines.  See notes above regarding the "-tbin" option.

          ERROR:  loader does not currently support INTERVAL data

               The INTERVAL data type is somewhat atypical, e.g.
                    '1 year 2 months 3 weeks 4 days 5 hours 6 minutes 7 seconds'
                nzload has never supported this data type.  It is a derived column, the
                result of a comparison between two TIME and/or TIMESTAMP columns --
                which you then stored in a new table.  Try migrating this specific table
                using the "-format binary" option instead.

          ERROR:  nz_cksum's differ!

               After each table is migrated, an (optional) cksum is calculated against
               both the source + target tables, and then compared.  For some reason
                the checksums don't match.  The tables should be analyzed + compared
               further to try to determine why this occurred.

          ERROR:  Reload column count mismatch.
                       #  i.e., the total number of columns does not match
          ERROR:  Reload column type mismatch.
                       #  i.e., the column data types do not match
                       #        or the null/not null constraints do not match
          ERROR:  Reload distribution algorithm mismatch.
                       #  i.e., the DISTRIBUTE ON clause does not match

               These errors occur when you are doing a "-format binary" migration
               between the two systems, and the definition of the target table does
               not match that of the source table.  They must match.

               Note: The "nz_ddl_diff" script can be used to compare two table's DDL.

          Error: Unexpected protocol character/message

               The issue is that nzload uses a hard-coded 10 second timeout during
               connection establishment, and the NPS host is not responding in time.
               The NPS host does a REVERSE lookup of the ip address of the client to
               find its hostname.  If the client machine is not in the same sub-net
               as the NPS host, then the name-lookup may take too long.

               Either configure the NPS host to do the name-lookup properly (the domain
                name server might need to be corrected: /etc/resolv.conf), or change the
               /etc/hosts file to include an entry for the client name and ip address.

               To confirm that this is the problem, on the TARGET NPS host try doing this
                    time  host    <ip-of-client-not-working/Source Host>
               or
                    time  dig -x  <ip-of-client-not-working/Source Host>

               If this returns the client name and other information immediately, then
               the problem is something else.  If it takes longer than 10 seconds to
               return the information, then that is causing the timeout of nzload to
               expire.

Outputs:  Status/log/timing information for each table that is migrated will be sent
          to standard out, as well as details about any problems that might need to
          be addressed.

          Exit status:  0 = success, non-0 = ERROR's were encountered

          Sample output (for a migration that included many of the optional options)

nz_migrate table 'MARCH_12' from nps-007:PRODDB --> nps-014:TESTDB

.....processing table 1 of 1
.....referenced via synonym 'TODAYS_DATA'
.....using target table 'MY_TEST_RUN'
.....creating the target table
.....Target DISTRIBUTE ON Clause                                RANDOM
.....truncating the target table
.....migration process                              started at  2011-03-12 11:37:48
.....estimated # of records                                     16,384
.....Source WHERE Clause                                        AND (cust_id < 90000)
.....Target ORDER BY Clause                                     part_num, ship_date
.....nzload starting             ( thread 1 of 1 )
.....unloading data              ( thread 1 of 1 )
.....data flowing.....
.....unload results              ( thread 1 of 1 )              INSERT 0 16384
.....unload finished             ( thread 1 of 1 )              elapsed seconds: 1
.....nzload finished             ( thread 1 of 1 )              elapsed seconds: 2
.....nzload successful           ( thread 1 of 1 )
.....data flow finished
.....migration process                              ended at    2011-03-12 11:37:50
.....actual # of records unloaded                               16,384
.....
.....migration completed                                        TOTAL seconds: 2
.....
.....cksum process                                  started at  2011-03-12 11:37:50
.....cksum process                                  ended at    2011-03-12 11:37:51
.....confirmed cksum: 450386793.9812719738880 16384 MARCH_12
.....
.....cksum completed                                            TOTAL seconds: 1
.....
.....reference synonym was successfully created in the target database

