Introduction
Cross-Platform Transportable Tablespaces (XTTS) is another member of the transportable family, extending standard transportable tablespace functionality to support migrations between platforms with different endian formats. This method transports data using copies of the source datafiles, while metadata is imported through Data Pump, as discussed in the overview section of the transportable migration method article here. Full database migration across different platforms with the same endian format can be performed using Full Transportable Export/Import (FTEX), which is simpler to implement for a full database transport than XTTS. However, FTEX does not support migration across platforms with different endian formats. There are two variants of XTTS discussed in this article. The first will be called the 'conventional' method and is generally what is meant when the term XTTS is used. The second is XTTS with incrementally updated backups, a more elegant variant that offers substantially lower downtime. Implementation steps for both variants are discussed in this article.
The following query returns the platform name and endian format of the database:
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
Limitations and considerations
All transportable technology restrictions apply to XTTS, especially the self-containment requirement. Consider the following points during the evaluation and preparation phases.
(1) General transportable limitations
The following general transportable tablespaces constraints, which are discussed in more detail here, also govern XTTS operations:
- Character Set Compatibility
- National Character Set Compatibility
- Matching Timezone offset
- The Automatic Workload Repository (AWR)
- Statistics Preferences
- XMLType
(2) Tablespaces included in the transport set are self-contained
Every object in the transportable set to be migrated must have no dependency on objects outside of the tablespace set. Examples of violations include container tables of materialized views, table partitions, or any other segments located in tablespaces not included in the migrated tablespace set.
(3) Timezone file version
Transportable tablespaces cannot migrate tables containing TIMESTAMP WITH TIME ZONE (TSTZ) columns when the destination database uses an older timezone file version than the source. The transport operation excludes these tables, and they need to be migrated separately later using a conventional Data Pump export. This behavior is discussed in more detail under the TIMESTAMP WITH TIME ZONE Restrictions section of the Oracle Database Utilities reference here. Best practice is to update the destination database's timezone file to a version equal to or higher than the source version before initiating the transport. Consult My Oracle Support note [Doc ID 412160.1] for the latest timezone file definitions.
The currently applied (not just installed) timezone file version can be queried as follows:
SQL> select * from v$timezone_file;
(4) Transportable set
Only user-defined tablespaces can be transported using transportable tablespaces; administrative tablespaces, such as SYSTEM and SYSAUX, can't be included in the transport set.
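As a quick sketch, the user-defined permanent tablespaces that are candidates for the transport set can be listed as follows (UNDO and temporary tablespaces are excluded via the CONTENTS column):
SQL> SELECT tablespace_name FROM dba_tablespaces WHERE contents = 'PERMANENT' AND tablespace_name NOT IN ('SYSTEM','SYSAUX');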
(5) TDE tablespace encryption
If a tablespace is encrypted using TDE, it can only be transported to a platform that uses the same byte ordering (little/big-endian). Therefore, during an XTTS migration, encrypted tablespaces must be decrypted, transported, and later re-encrypted at the destination database. These operations can be performed online (starting with 12.2), as sketched below.
Note: Transportable tablespace sets include TDE policies only. Other security policies (such as redaction policies, masking policies, and so on) are not included; they must be recreated after the tablespace is imported into the new database.
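A minimal sketch of the online conversion, assuming 12.2 or later, an open keystore, and the example tablespace TBS1 (AES256 is shown as an illustrative algorithm):
SQL> -- on the source, before transport:
SQL> ALTER TABLESPACE TBS1 ENCRYPTION ONLINE DECRYPT;
SQL> -- on the destination, after the transportable import:
SQL> ALTER TABLESPACE TBS1 ENCRYPTION ONLINE USING 'AES256' ENCRYPT;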
(6) Transported tablespaces set
The importing user's default tablespace in the destination database can't be included in the transportable tablespace set. The SYSTEM user, whose default tablespace is the SYSTEM tablespace, is commonly used for the import at the destination database; in such a scenario, this condition is fulfilled.
A tablespace that has the same name as an existing tablespace in the target database can't be included in the transport set. This is a common situation, where both databases have a USERS tablespace. In such a case, one of the two tablespaces needs to be renamed.
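A minimal sketch of the rename (USERS_SRC is an illustrative new name; the rename can be performed on either database before the transport):
SQL> ALTER TABLESPACE USERS RENAME TO USERS_SRC;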
(7) User-defined objects partially stored in administrative tablespaces
XTTS does not support transporting objects partially stored in administrative tablespaces, for example, a table with a subset of its partitions stored in the SYSTEM or SYSAUX tablespace. Such objects should either be modified so that all child segments are stored in user-defined tablespaces, or be migrated separately using a conventional Data Pump export.
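A hedged example query to detect such candidates before migration (the schema exclusion list is illustrative; extend it to cover all Oracle-maintained schemas in your environment):
SQL> SELECT owner, segment_name, segment_type, tablespace_name FROM dba_segments WHERE tablespace_name IN ('SYSTEM','SYSAUX') AND owner NOT IN ('SYS','SYSTEM','DBSNMP','OUTLN','XDB');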
(8) Index Organized Tables (IOTs)
Index Organized Tables (IOTs) can become corrupt when using Transportable Tablespaces (TTS) from Solaris, Linux, or AIX to HP-UX. Currently (at the time of publishing), there is no patch for this issue; the IOTs need to be recreated after the TTS.
(9) Data Type Restrictions
RMAN's CONVERT command does not perform endian conversion for data stored in the following data types:
- RAW
- LONG RAW
- BLOB
- ANYTYPE / ANYDATA / ANYDATASET
- User-defined types or Oracle abstract types (such as the ORD Image media type) that contain attributes of any of the above data types
Migration steps for conventional XTTS
The following example syntax assumes the transportable set consists of two tablespaces (TBS1, TBS2) and that the SYSTEM user runs the Data Pump jobs. The tablespaces are transported from the source database DBSRC to the target database DBTRG, which use the ASM disk groups +DATA01 and +DATA02, respectively, to store datafiles.
Step (1): Preparation
1.1 Destination server Preparation
The Oracle Home of the target database should be patched with the latest Release Update (RU). Moreover, database options, features, and configuration need to match the source database. More details are discussed in the general transportable article here.
1.2 Source server Preparation
- Object count
Record schema object counts to establish a baseline for post-migration validation. Document objects requiring special handling after migration, such as index-organized tables (IOTs), text indexes, XMLType structures, and external tables (example queries follow this list).
- Physical segment clean-up
Analyze the database for physical clean-up opportunities that facilitate a smoother migration: remove orphaned segments and purge recycle bin objects.
- Invalid objects
Ideally, there should be no invalid objects in a production environment. Regardless of whether the migrated database is production or not, many environments have invalid objects. The list of these objects needs to be documented and resolution alternatives discussed with the objects' owner(s). If such objects will be migrated in their invalid state, the list needs to be excluded from post-migration validation. This saves precious post-migration validation time from being spent investigating the state of these objects.
- Database parameters
Review underscore parameters that may be obsolete or no longer necessary in the target database release. Some underscore parameters were recommended in the past as workarounds for bugs that are already fixed at the target database's patch level.
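The following illustrative queries support these preparation tasks (adjust schema filters to your environment):
SQL> -- baseline object counts per schema and type
SQL> SELECT owner, object_type, COUNT(*) FROM dba_objects GROUP BY owner, object_type ORDER BY owner, object_type;
SQL> -- list invalid objects to document before migration
SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';
SQL> -- purge dropped segments from all recycle bins
SQL> PURGE DBA_RECYCLEBIN;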
1.3 Self-containment verification of the migrated tablespace set
To avoid self-containment errors raised by Data Pump at the start of the job, the tablespace set's self-containment should be verified beforehand. In full database migrations, where all user-defined tablespaces are included in the transport set, this check may appear redundant. However, it remains critical because it detects dependencies between transported objects and objects or child objects that "incorrectly" reside in administrative tablespaces (SYSTEM or SYSAUX).
To determine whether a set of tablespaces is self-contained, use the DBMS_TTS.TRANSPORT_SET_CHECK procedure with the tablespace set as input. Execute permission on this procedure is included in the EXECUTE_CATALOG_ROLE role (granted to SYS by default). For example, the following command verifies whether tablespaces TBS1 and TBS2 are self-contained:
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TBS1,TBS2',TRUE);
The result is saved in dictionary tables and can be accessed through the following view:
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
Step (2): Metadata export
A full structural metadata export captures the DDL of all objects (no data) by running a Data Pump export in metadata_only mode. This export is essential for a full database migration through XTTS because objects not associated with data segments, such as views, triggers, roles, and grants, reside in administrative tablespaces, and SYSTEM and SYSAUX are excluded from the transportable tablespace set. Therefore, this structural export ensures that objects which do not reside in the transport set are also migrated. This is one of the differences between FTEX and XTTS: the former takes care of these dictionary-resident objects.
The following is an example of such a job, configured with parallel degree 4, a suitable value for a 4-core system. A higher value won't guarantee a linear performance improvement, due to the complex dependencies among the exported objects and how groups of objects are distributed across workers. The parameter metrics=y adds information to the Data Pump log about the export time per object type group, which can be helpful when investigating export time. This parameter is discussed in more detail in Mike's blog here.
#cat par_meta_exp.dat
full=y
content=metadata_only
metrics=y
parallel=4
dumpfile=metadata_full_%U.dmp
directory=dpump_dir
logfile=metadata_full_exp.log
#expdp system@dbsrc parfile='par_meta_exp.dat'
Step (3): Transportable dump file export
1. Switch all tablespaces in the transport set to read-only.
SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;
2. Generate transportable dump file
In this example, Data Pump exports only the data dictionary structural information (metadata) for the tablespaces. No actual data is unloaded, so this operation completes relatively quickly even for large tablespace sets, compared to a conventional Data Pump export job.
# expdp system@dbsrc DUMPFILE=xtts_exp.dmp DIRECTORY=dpump_dir \
TRANSPORT_TABLESPACES=TBS1,TBS2 LOGFILE=xtts_exp.log
The TRANSPORT_FULL_CHECK parameter, when set to YES, enables a more restrictive (two-way) self-containment check for objects within the transportable set. The default value is NO, meaning only a one-way check is performed: only dependencies of objects inside the transportable set on parent objects residing outside the set are verified. Consequently, verification succeeds for a table included in the set whose indexes reside outside the set, since a table is not dependent on its indexes.
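For example, the stricter two-way check can be requested by adding the parameter to the export command shown above:
# expdp system@dbsrc DUMPFILE=xtts_exp.dmp DIRECTORY=dpump_dir \
TRANSPORT_TABLESPACES=TBS1,TBS2 TRANSPORT_FULL_CHECK=YES LOGFILE=xtts_exp.log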
3. Check the export log file carefully for skipped objects and other export errors. Transfer the valid export dump file to the destination system, where needed.
Step (4): Datafile conversion
Datafile copies from a source database with a different endian format can't be directly plugged into the destination database; they need to be converted to the target platform's endian format. This can be achieved using two methods: the RMAN CONVERT command or the DBMS_FILE_TRANSFER procedures (GET_FILE / PUT_FILE). Both methods support files stored in a file system or in ASM; RMAN, though, requires a temporary staging directory, which might be challenging for very big databases. Both methods accept a datafile as input, convert its data blocks, and write the output datafile to the defined destination with the target platform's endian format.
RMAN pros and cons:
- Can offer better overall performance, since its PARALLELISM setting enables multiple disk channels
- Can be simpler to script, when performed at the source system, since it can operate in tablespace mode, rather than specifying each single datafile
- Requires temporary storage equal to the total size of all datafiles being converted
DBMS_FILE_TRANSFER pros and cons:
- Data blocks are converted in memory and transferred through a database link directly to the destination system
- A single write process, with no temporary destination requirement
- Each datafile must be specified by name, which is less convenient for big databases with hundreds of datafiles
Scenario (1): Datafile conversion using RMAN on the source database
In this example, RMAN is used for file conversion from Solaris SPARC 64-bit to Linux 64-bit. Conversion takes place on the source system. Both systems use ASM as the underlying storage solution. The process starts by writing the converted files to an intermediary directory, '/nfs', which is mounted on both systems. The converted files are afterwards copied from the intermediary directory into the target system's ASM by another RMAN session initiated from the target system. Simplified steps are as follows:
(1) Connect to source DB RMAN, Convert datafiles, specifying tablespace
RMAN> CONVERT TABLESPACE TBS1,TBS2 TO PLATFORM 'Linux x86 64-bit' FORMAT '/nfs/%U' PARALLELISM 4;
(2) Copy the converted datafiles from ‘/nfs’ shared NFS directory, which is mounted on the target system, to the target Database ASM diskgroup +DATA02
RMAN> CONVERT DATAFILE '/nfs/dbsrc-nnnnnn-TBS1_xxxxx.dbf','/nfs/dbsrc-nnnnnn-TBS2_xxxxx.dbf' FORMAT '+DATA02' PARALLELISM 4;
Note: the second CONVERT does not perform any conversion; rather, it copies data blocks from the NFS file system into the target database's ASM disk group.
(3) Run transportable import as in Step (5) below
Scenario (2): Datafile conversion through DBMS_FILE_TRANSFER on the source database using the PUT_FILE procedure
In this example, DBMS_FILE_TRANSFER is used for datafile conversion from the database named DBSRC, running on Solaris SPARC 64-bit, to the target database named DBTRG, running on Linux 64-bit. Conversion takes place on the source system using the PUT_FILE procedure. Both systems use ASM as the underlying storage solution. The process is initiated from the source system by the PUT_FILE procedure; data blocks are transmitted through the defined database link, converted in the source instance's memory, and written to the defined ASM location at the target database.
(1) Create directory object in ASM diskgroup ‘+DATA02’ of target Database, for an already existing ASM directory
SQL> CREATE OR REPLACE DIRECTORY target_dir AS '+DATA02/DBTRG/DATAFILE';
SQL> GRANT WRITE ON DIRECTORY target_dir TO SYSTEM;
(2) Create directory object in ASM diskgroup ‘+DATA01’ of source Database
SQL> CREATE OR REPLACE DIRECTORY source_dir AS '+DATA01/DBSRC/DATAFILE';
SQL> GRANT READ,WRITE ON DIRECTORY source_dir TO SYSTEM;
(3) Create database link named DBS2T on the source database which points to the target database, using the TNS name DBTRGTNS. This entry is configured in TNSNAMES.ORA of the source DB to reference the target Database DBTRG
SQL> CREATE DATABASE LINK DBS2T CONNECT TO SYSTEM IDENTIFIED BY password USING 'DBTRGTNS';
(4) Connect to the source database and initiate the file transfer, which implicitly performs datafile conversion to the target platform's endian format
SQL> CONNECT SYSTEM@DBSRC
SQL> BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE('SOURCE_DIR','df01.dbf','TARGET_DIR','df01.dat','DBS2T');
END;
/
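For completeness, the transfer can instead be pulled from the destination side with the GET_FILE procedure. This is a sketch assuming a database link DBT2S created on the target and pointing back to the source (the same link name used for the profile import below), plus the directory objects created in steps (1) and (2):
SQL> CONNECT SYSTEM@DBTRG
SQL> BEGIN
  DBMS_FILE_TRANSFER.GET_FILE('SOURCE_DIR','df01.dbf','DBT2S','TARGET_DIR','df01.dat');
END;
/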
Step (5): Import the transportable tablespace set
Before initiating the transportable import, the database schemas owning the objects included in the transportable datafiles must exist at the destination. Creation of these schemas is part of the full metadata import, whose dump file was exported in Step (2) above. Many production systems, however, use custom profiles to enforce environment-specific security policies. In that case, importing the source database users into a fresh PDB will fail, due to the absence of those custom database profiles, and a separate database profile import job is needed beforehand. The transportable import will therefore be performed as the following sequence of three Data Pump import jobs.
Note: Carefully review import log files for errors after each operation. Undetected errors can result in missing objects that trigger cascading invalidations across dependent structures.
(1) Import Profiles (through DB_link)
If no non-default profiles have been created, this step can be skipped. It usually takes minutes, if not less. The following example job imports the profile definitions from the source database through a database link.
#cat par_profile_imp.dat
network_link=dbt2s
directory=dpump_dir
include=profile
full=Y
content=metadata_only
logfile=profiles_imp.log
#impdp system@dbtrg parfile='par_profile_imp.dat'
(2) Import Full metadata
Run a full database metadata_only import using the dump file generated in Step (2). This can also be performed over a database link, in which case Step (2) can be skipped altogether. It's recommended to experiment with both approaches to identify which offers the more efficient alternative. The example below builds upon the dump file exported earlier in Step (2).
# cat par_meta_imp.dat
full=y
content=metadata_only
metrics=y
parallel=4
dumpfile=metadata_full_%U.dmp
directory=dpump_dir
logfile=metadata_full_imp.log
# impdp system@dbtrg parfile='par_meta_imp.dat'
(3) Transportable dump Import
The transportable import job loads the metadata describing the datafiles belonging to the transport set, which were exported and converted in Steps (3) and (4). The transportable import contains two categories of metadata:
(a) Datafiles definition and parent tablespaces
(b) Segment-to-datafile mappings showing where database segments (tables, partitions, indexes, … ) and their extents reside within the transported datafiles
The following example parameter file can be used for this job. The TRANSPORT_DATAFILES parameter should list every datafile in the transportable set using its destination system path. It's recommended to crosscheck the datafile list against the transportable export log generated in Step (3) above and edit the parameter file with the correct datafile paths on the target system.
# cat par_xtts_imp.dat
dumpfile=xtts_exp.dmp
directory=dpump_dir
transport_datafiles='+DATA02/DBTRG/DATAFILE/df01.dbf','+DATA02/DBTRG/DATAFILE/df02.dbf'
logfile=xtts_imp.log
# impdp system@dbtrg parfile='par_xtts_imp.dat'
Step (6): Database Validation
The post-import validation task verifies that the newly migrated database matches the source database with respect to objects, their status, and the dictionary structure. For an efficient comparison in this time-critical step, run the comparison queries on the target system against the source system over a database link. Starting from the lower levels of database objects, one can proceed with verifying the following components on both systems:
(1) Physical Database structure:
Tablespace definitions, associated datafiles, status and count
(2) Schemas:
User count, status, profiles, default tablespace and temporary tablespace assignments, quotas, etc.
(3) Database privileges:
System privileges, role privileges, role memberships, and object privileges
(4) Schema objects:
Object count and size. Reporting object sizes is helpful to verify that physical segments are intact
(5) Invalid objects:
Comparison of invalid objects on both source and target DB
(6) Other auxiliary schema objects:
External tables, directories, scheduler jobs, user-defined types (UDTs), constraints, DB links
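As a minimal sketch of such a cross-database comparison (assuming the target-to-source database link dbt2s used for the profile import), the following query run on the target returns object-count discrepancies; an empty result means the counts match:
SQL> SELECT owner, object_type, COUNT(*) FROM dba_objects@dbt2s GROUP BY owner, object_type MINUS SELECT owner, object_type, COUNT(*) FROM dba_objects GROUP BY owner, object_type;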
Step (7): Post-Validation & Go-Live
Once the object migration has been validated, the next step is to prepare the database for application-level validation and, later, go-live.
(1) Set all transported tablespaces to read-write mode
(2) Take level 0 full database backup as soon as possible
(3) Importing statistics: importing statistics through DBMS_STATS from an intermediary statistics table is significantly faster than the implicit statistics import in Data Pump (see the sketch after this list)
(4) Call for application-level verification and readiness to go-live
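A minimal DBMS_STATS sketch of the statistics transfer (MIG_STATS is an illustrative staging table name; move the table to the target by any convenient means, such as Data Pump):
SQL> -- on the source:
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('SYSTEM','MIG_STATS');
SQL> EXEC DBMS_STATS.EXPORT_DATABASE_STATS(stattab => 'MIG_STATS', statown => 'SYSTEM');
SQL> -- on the target, after moving SYSTEM.MIG_STATS across:
SQL> EXEC DBMS_STATS.IMPORT_DATABASE_STATS(stattab => 'MIG_STATS', statown => 'SYSTEM');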
Migration steps for XTTS with incrementally updated backups
Migration using transportable tablespaces requires less downtime than a conventional Data Pump approach, but still demands extended outages for moderate to large databases. On the other hand, a near-zero-downtime replication solution is a challenging investment in cost, time, and resources. The solution to this dilemma is the XTTS with incrementally updated backups method, which combines transportable technology with RMAN incremental backups to minimize downtime. Incremental RMAN recovery of datafiles at the target system reduces the outage window to only the time needed for the final level 1 backup and its recovery, which is necessary for making the datafiles at the target system consistent.
Rather than placing tablespaces in read-only mode during datafile copying and conversion as in the conventional XTTS process, this approach uses tablespace backups to transfer files to the destination system, which generates inconsistent copies of the datafiles. After restoring (and converting) the datafiles at the destination, a level 1 incremental backup is taken from production to roll the restored datafiles forward. This recovery is repeated multiple times, allowing datafiles to be restored and recovered at the destination without impacting production database availability. In other words, production keeps serving the application during backup, restore, and recovery of the level 0 and level 1 backups. At this stage, the production datafiles always maintain a higher SCN than the restored and recovered datafiles at the destination database. Closing this gap requires one final level 1 backup, which defines the start of this method's downtime window. The cut-over begins by setting the transportable tablespace set to read-only and capturing a final level 1 backup, followed by generating the transportable export dump file. Then the last level 1 backup is recovered at the target database, followed by importing the transportable export dump file. The process concludes with the post-import validation steps.
The process can be split into three phases:
Phase (1): Restore and roll forward datafile copies to the destination system
>>> Source DB remains online <<<
- Create a level 0 backup of the transportable tablespace set at the source production system, or use an existing level 0 backup
- Restore the level 0 backup at the destination using foreign datafile restore mode, which permits restoring datafiles not registered in the destination database's control file
- Create a level 1 incremental backup of the tablespace set from the source production system, noting the SCN of the parent backup
- Recover the restored datafiles at the destination using the level 1 backup
- Repeat the previous two steps until the SCN gap between the systems is narrow enough to enable a relatively fast final level 1 recovery (see the RMAN sketch after this list)
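A minimal RMAN sketch of one roll-forward cycle, using the 12c cross-platform clauses with the example tablespaces and the shared '/nfs' staging area (file numbers and backup piece names are placeholders; the MOS tool referenced below automates all of this):
RMAN> # on the source (tablespaces remain online):
RMAN> BACKUP FOR TRANSPORT ALLOW INCONSISTENT INCREMENTAL LEVEL 0 FORMAT '/nfs/l0_%U.bkp' TABLESPACE TBS1,TBS2;
RMAN> # on the destination (endian conversion happens during the restore):
RMAN> RESTORE FOREIGN DATAFILE 5 FORMAT '+DATA02/DBTRG/DATAFILE/df01.dbf' FROM BACKUPSET '/nfs/l0_piece.bkp';
RMAN> # on the source: level 1 incremental of the same tablespaces
RMAN> BACKUP FOR TRANSPORT ALLOW INCONSISTENT INCREMENTAL LEVEL 1 FORMAT '/nfs/l1_%U.bkp' TABLESPACE TBS1,TBS2;
RMAN> # on the destination: roll the restored copy forward
RMAN> RECOVER FOREIGN DATAFILECOPY '+DATA02/DBTRG/DATAFILE/df01.dbf' FROM BACKUPSET '/nfs/l1_piece.bkp';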
Phase (2): Final roll-forward and transportable import
>>> Downtime Window <<<
- Set the transportable tablespace set at the source DB to read-only mode (downtime starts)
- Create the final level 1 incremental backup
- Generate the transportable export for the migrated tablespace set
- Apply the final level 1 backup to recover the transported datafiles
- Execute the transportable import after configuring correct datafile paths in the import parameter file
Phase (3): Migration Validation
The exact approach is discussed in Steps (6) and (7) of the first variant of XTTS above.
The migration process using the second variant might appear technically more complex than conventional Cross-Platform Transportable Tablespaces (XTTS). However, the reduced downtime makes it the preferred alternative when migration windows are constrained for large databases. The good news is that Oracle Support provides a tool that automates the whole process, offering far simpler implementation steps.
The process using the automated workflow is as follows:
- Configure the tool's parameter files
- Execute the script in level 0 mode
- Execute the script in level 1 mode
- Repeat the level 1 execution until the SCN gap is small enough
- Execute the script in level 1F (final) mode
- Validate
The latest version of this tool, which is M5 (at the time of writing), is available through the following MOS note:
M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)