Transportable Methods

Overview

Transportable technology can be described, broadly, as a special mode of Data Pump. It offers a much faster data transfer rate than conventional Data Pump jobs, especially for larger databases. This is achieved by splitting the database content into two groups, the application data and the metadata (in Oracle terms, the data dictionary), and using the most efficient way to transfer each. Application data is transferred by reusing a “consistent” copy of the source database’s datafiles for the tablespaces containing that data; more advanced transportable variants relax this consistency requirement, as discussed later. For application data, the transfer bypasses database instance processing entirely, relying instead on operating system level file copy utilities to move large data volumes efficiently. Metadata, by contrast, is transferred through a specialized Data Pump mode that generates a comparatively small dump file containing object definitions and other elements such as pointers to the data segments in the migrated datafiles. This process is generally much faster than loading all data in Data Pump format. Transportable works at the database level and at the tablespace level. The latter can also be used to move individual tables or partitions to other databases, a capability particularly useful for data archiving. For more details about transporting individual tables and partitions, refer to this link from the Database Administrator’s Guide 19c.
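
As a rough illustration of the classic tablespace-level flow, the metadata export/import and the datafile copy could look like the sketch below. The tablespace, directory, and path names are placeholders, not taken from the text above.

SQL> ALTER TABLESPACE users READ ONLY;      -- on the source: make the datafile copy consistent

$ expdp system DIRECTORY=mig_dir DUMPFILE=tts_meta.dmp \
    TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=YES    # small metadata-only dump

$ cp /u01/oradata/SRC/users01.dbf /u01/oradata/TGT/          # OS-level datafile copy

$ impdp system DIRECTORY=mig_dir DUMPFILE=tts_meta.dmp \
    TRANSPORT_DATAFILES='/u01/oradata/TGT/users01.dbf'       # plug the datafile into the target

SQL> ALTER TABLESPACE users READ WRITE;     -- on the target after import (and on the source, if it stays in use)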

Limitations & Considerations

The following limitations apply to all transportable variants:

(1) Character Set Compatibility   

The source and destination databases must use compatible character sets, fulfilling any of the following conditions:

  • Both databases use identical character sets
  • The source character set is a strict binary subset of the destination character set

A character set C1 qualifies as a binary subset of character set C2 when both conditions hold:

  1. Every character defined in C1 exists in C2’s character repertoire
  2. Each character’s byte representation (codepoint) is identical in both C1 and C2

For example, US7ASCII is a binary subset of UTF8 because all ASCII characters exist in UTF8 with identical codepoint values. Conversely, WE8ISO8859P1 is not a binary subset of US7ASCII because it contains characters (such as é, ñ, ü) that do not exist in ASCII.

The following link presents a list of Oracle database character sets and their binary subset/superset relationships.
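
To compare the database and national character sets of the source and destination, the following query can be run on both sides:

SQL> SELECT parameter, value
       FROM nls_database_parameters
      WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');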

(2) National Character Set Compatibility

Source and destination databases must satisfy one of two national character set requirements:

  • Both databases use identical national character sets (typically AL16UTF16 or UTF8)
  • The source database AND the transported tablespaces contain no columns using national character datatypes (NCHAR, NVARCHAR2, NCLOB)
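
A quick way to check whether any application schema uses national character datatypes is a dictionary query along these lines (adjust the owner filter to your own schemas):

SQL> SELECT owner, table_name, column_name, data_type
       FROM dba_tab_columns
      WHERE data_type IN ('NCHAR', 'NVARCHAR2', 'NCLOB')
        AND owner NOT IN ('SYS', 'SYSTEM');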

(3) Matching Timezone offset

Source and target databases need to have the same time zone definitions; otherwise, tablespaces containing tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) columns will be skipped.

The time zone of the database can be determined with the following query:

SQL> SELECT DBTIMEZONE FROM DUAL;
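
Tables that would be affected by a time zone mismatch can be identified by listing TSLTZ columns, for example:

SQL> SELECT owner, table_name, column_name
       FROM dba_tab_columns
      WHERE data_type LIKE '%WITH LOCAL TIME ZONE';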

(4) The Automatic Workload Repository (AWR)

AWR is not supported by full transportable export/import. Use the awrextr.sql and awrload.sql scripts to move AWR data between Oracle databases.
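
Both scripts ship under $ORACLE_HOME/rdbms/admin and run interactively, prompting for details such as the DBID, snapshot range, and directory object:

SQL> @?/rdbms/admin/awrextr.sql        -- on the source: extract AWR snapshots into a dump file
SQL> @?/rdbms/admin/awrload.sql        -- on the target: load the extracted AWR data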

(5) Global Statistics Preferences

Global statistics preferences are not exported. They must be redefined explicitly in the target database after import using DBMS_STATS.SET_GLOBAL_PREFS.

To get the current value of a global statistics preference, use DBMS_STATS.GET_PREFS. For example, the following query returns the global setting for the STALE_PERCENT preference:

SQL> SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT') FROM DUAL;
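
On the target, the captured value can then be applied explicitly, for example assuming a STALE_PERCENT value of 5:

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('STALE_PERCENT', '5');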

(6) XMLTYPE Data type

If the source database contains the XMLType data type, the XDB database component should be installed in the destination PDB for successful migration of dependent XML objects. XMLType data to be transported should not reside in the XDB schema in the source database.

The following query returns a list of tablespaces that contain XMLTypes:

SQL> SELECT DISTINCT p.tablespace_name
       FROM dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
      WHERE t.table_name = x.table_name
        AND t.tablespace_name = p.tablespace_name
        AND x.owner = u.username;
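
To verify that the XDB component is present and valid in the destination PDB, the component registry can be checked:

SQL> SELECT comp_id, version, status FROM dba_registry WHERE comp_id = 'XDB';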

Variants

Considering a full database migration scenario, three variants are relevant for this discussion. First, Full Transportable Export/Import (FTEX), which is a full database export (FULL=Y) running in transportable mode. Second, Cross-Platform Transportable Tablespaces (XTTS), a tablespace-level transport option; XTTS is widely used for large database migrations across platforms with different endianness. The third option, XTTS with incrementally updated backups, is a more elegant version of XTTS that uses RMAN incremental backups and provides even shorter downtime than the classic transportable methods. These methods are discussed in more detail in their own articles.
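
As a rough sketch of how the first two variants differ at the command line (credentials, directory, dumpfile, and tablespace names below are placeholders), FTEX is driven entirely by export parameters, while XTTS names the tablespaces to transport explicitly:

$ expdp system FULL=Y TRANSPORTABLE=ALWAYS DIRECTORY=mig_dir DUMPFILE=ftex_meta.dmp

$ expdp system TRANSPORT_TABLESPACES=ts_app1,ts_app2 DIRECTORY=mig_dir DUMPFILE=xtts_meta.dmp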

Migration best practices

Target Oracle home Preparation

  1. Apply the latest RU and the Data Pump bundle patch; the latter offers many performance improvements during loading.
  2. Transfer DB configuration files (tnsnames.ora, sqlnet.ora, wallet, …). Pay very close attention to the content of tnsnames.ora: update TNS entries that reference the old production database with the network address of the new one (see the example after this list). This saves long investigation time chasing invalid objects after import. Moreover, misconfigured TNS entries can result in objects performing SELECT and DML against the old system through database links.
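
A hypothetical tnsnames.ora entry after the update could look like the following; the alias, host, and service names are examples only:

PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-db-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = proddb.example.com))
  )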

Target Database Preparation

  1. Create an empty database with the same options and components as installed in the source database. This is important; verify using the V$OPTION and DBA_REGISTRY views (see the query sketch after this list).
  2. Configure target database parameters. This is a sensitive task that requires a fine balance between preserving current parameters and adapting the new database to the new hardware configuration. Some parameters are simply derived from the processor-related parameter (CPU_COUNT), such as PROCESSES, SESSIONS, and the PARALLEL_* parameters. Check the current values in the source system and, where non-default values are used, verify why they were adopted. Moreover, clean up unsupported or decommissioned parameters (if any).
  3. Create undo and temp tablespaces for the container and the PDB; for RAC instances, a more descriptive name than the default UNDOTBS01 makes life easier in the future.
  4. Create the necessary services; a service dedicated to migration is helpful, especially during early testing phases.
  5. Create a directory object on both databases to store export dump and log files. Use the same NFS mount point (if possible) to save time copying dump files. Ensure enough space is available. Refrain from relying on the default directory object; a creation sketch follows this list.
  6. Create services with the exact names and configurations as on the source database; ensure the correct domain name is appended (if any), the default PDB is set, and the desired MAA configuration (FAN, TAF, AC/TAC, etc.) is in place.
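
A sketch of the checks and setup referenced in items 1 and 5 above; run the dictionary queries on both source and target and compare the output. The directory name and NFS path are examples only:

SQL> SELECT comp_id, status FROM dba_registry ORDER BY comp_id;
SQL> SELECT parameter, value FROM v$option WHERE value = 'TRUE';

SQL> CREATE DIRECTORY mig_dir AS '/nfs/migration';
SQL> GRANT READ, WRITE ON DIRECTORY mig_dir TO system;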

Limiting access to Source Database after cut-off

Restrict access to the source database after migration validation is complete to prevent accidental application user connections or objects referencing the old source system via “misconfigured” database links. The most secure approach is to keep the source database in a closed or mounted state after validation completes. Alternatively, stop all services, place the database in restricted mode, and revoke the DBA role and the RESTRICTED SESSION privilege from non-administrative users.
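
A minimal sketch of the alternative approach, assuming the application services are stopped first; database, service, and user names are placeholders, and srvctl syntax varies slightly between versions:

$ srvctl stop service -db srcdb -service app_svc

SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> REVOKE RESTRICTED SESSION FROM app_user;
SQL> REVOKE DBA FROM app_user;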