I got the "ORA-01466: unable to read data - table definition has changed" error in an Oracle database during impdp.
ORA-01466: unable to read data – table definition has changed
Details of the error are as follows.
ORA-31693: Table data object "MSDBA"."TEST_TABLE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-01466: Unable to read data -- Table definition has changed

Cause: This is a time-based read consistency error for a database object, such as a table or index. Either of the following may have happened:
- The query was parsed and executed with a snapshot older than the time the object was changed.
- The creation time-stamp of the object is greater than the current system time. This happens, for example, when the system time is set to a time earlier than the creation time of the object.

Action: If the cause is an old snapshot, then commit or rollback the transaction and resume work. If the cause is a creation time-stamp in the future, ensure the system time is set correctly. If the object creation time-stamp is still greater than the system time, then export the object's data, drop the object, recreate the object so it has a new creation time-stamp, import the object's data, and resume work.
The ORA-01466 error is related to the table's or index's LAST_DDL_TIME being later than the time of the impdp. In other words, someone may have dropped or truncated the table while you were performing this action.
Alternatively, if you use the FLASHBACK_SCN option, the SCN value supplied to the FLASHBACK_SCN parameter may not correspond to the LAST_DDL_TIME of the specific table in DBA_OBJECTS.
A table's DDL may have changed because of a truncate job, so LAST_DDL_TIME can end up ahead of the point in time represented by the FLASHBACK_SCN value supplied in the parameter.
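To see why a truncate job has this effect: TRUNCATE is a DDL statement, so it advances LAST_DDL_TIME in the data dictionary. A minimal sketch you can run in a disposable test schema (the table name TEST_TRUNC_DEMO is hypothetical):

SQL> create table test_trunc_demo (id number);
-- LAST_DDL_TIME right after creation
SQL> select object_name, to_char(last_ddl_time,'YYYY-MM-DD HH24:MI:SS') as last_ddl_time
       from user_objects
      where object_name = 'TEST_TRUNC_DEMO';
-- TRUNCATE is DDL, so LAST_DDL_TIME moves forward
SQL> truncate table test_trunc_demo;
SQL> select object_name, to_char(last_ddl_time,'YYYY-MM-DD HH24:MI:SS') as last_ddl_time
       from user_objects
      where object_name = 'TEST_TRUNC_DEMO';
SQL> drop table test_trunc_demo purge;

If such a statement runs after the SCN used by Data Pump, the read-consistent copy of the table can no longer be produced and ORA-01466 is raised.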
In this example, a developer or business analyst had created a job that ran a TRUNCATE command while the Data Pump job was running.
To solve this error, check whether there is any DDL job that is causing LAST_DDL_TIME to be updated to a newer value. If so, change the execution time of that job.
As a workaround, you can remove the FLASHBACK_SCN parameter until the DDL job is fixed.
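If the DDL is coming from a scheduled database job, one place to look is the scheduler metadata. A minimal sketch, assuming the truncate runs as a DBMS_SCHEDULER job and using <SCHEMA_NAME> / <JOB_NAME> as placeholders:

SQL> select owner, job_name, enabled, state, repeat_interval, next_run_date
       from dba_scheduler_jobs
      where owner = '<SCHEMA_NAME>';
-- Recent executions of a suspect job
SQL> select job_name, actual_start_date, status
       from dba_scheduler_job_run_details
      where owner = '<SCHEMA_NAME>'
        and job_name = '<JOB_NAME>'
      order by actual_start_date desc;

If the job overlaps the export window, its schedule can be shifted with DBMS_SCHEDULER.SET_ATTRIBUTE (for example the start_date or repeat_interval attribute), or the job can be disabled with DBMS_SCHEDULER.DISABLE for the duration of the export.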
EXPDP syntax:

expdp userid=<LOGIN> \
  dumpfile=<DUMP_NAME>_%U.dmp \
  logfile=<LOG_NAME>.log \
  filesize=28000m \
  flashback_scn=$SCN \
  parallel=8 \
  cluster=N \
  full=y
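The $SCN value above has to be captured just before the export starts. A minimal sketch of one way to obtain it (either query works; the second one requires EXECUTE on DBMS_FLASHBACK):

SQL> select current_scn from v$database;
SQL> select dbms_flashback.get_system_change_number from dual;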
This error can also be generated when the FLASHBACK_TIME parameter is used instead of FLASHBACK_SCN.
To determine whether you are facing this issue, execute the following steps:
- Check DBA_OBJECTS for LAST_DDL_TIME:
SQL> select object_name,last_ddl_time from dba_objects where owner='<SCHEMA_NAME>' and object_name='<TABLE_NAME>';
- Collect the timestamp of the SCN used by the export (replace SCN_specified_with_export with that SCN value):
SQL> select SCN_TO_TIMESTAMP(SCN_specified_with_export) from dual;
- If the LAST_DDL_TIME is newer than the SCN timestamp, you are facing this issue.
- Remove or reschedule the job which is running any DDL command on the failing object.
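The two checks above can also be combined into a single query (a sketch; <SCHEMA_NAME>, <TABLE_NAME> and SCN_specified_with_export are placeholders, and note that SCN_TO_TIMESTAMP only works for SCNs still inside the database's SCN-to-time mapping window):

SQL> select o.object_name,
            o.last_ddl_time,
            scn_to_timestamp(SCN_specified_with_export) as export_scn_time,
            case
              when o.last_ddl_time > scn_to_timestamp(SCN_specified_with_export)
              then 'DDL RAN AFTER EXPORT SCN - ORA-01466 EXPECTED'
              else 'OK'
            end as check_result
       from dba_objects o
      where o.owner = '<SCHEMA_NAME>'
        and o.object_name = '<TABLE_NAME>';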
Do you want to learn Oracle Data Pump (Export/Import)? Then click this link and read the articles.
Oracle Data Pump – Export Import ( Expdp Impdp ) Tutorial with Examples-4