I got the "ORA-31623: a job is not attached to this session via the specified handle" error in an Oracle database.
Details of the error are as follows.
ORA-31623: a job is not attached to this session via the specified handle
Cause: An attempt was made to reference a job using a handle that is invalid or no longer valid for the current session.
Action: Select a handle corresponding to a valid active job, or start a new job.
$ expdp system/<PASSWORD> DIRECTORY=<directory_name> DUMPFILE=<dmp_name>.dmp LOGFILE=<log_name>.log FULL=y

Export: Release 11.2.0.1.0 - Production on Thu Jun 19 13:14:32 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3263
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4488
ORA-06512: at line 1

-- or:
UDI-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1137
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4583
ORA-06512: at line 1

-- or:
UDI-00008: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1137
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4583
ORA-06512: at line 1
...
The ORA-31623 error means that an attempt was made to reference a Data Pump job using a handle that is invalid or no longer valid for the current session.
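To see what such a handle is, here is a minimal PL/SQL sketch (job name, dump file name and directory are hypothetical): DBMS_DATAPUMP.OPEN returns a numeric handle that is only valid in the session that created or attached to the job, and calls made with a stale or foreign handle raise ORA-31623.

```sql
DECLARE
  h NUMBER;
BEGIN
  -- OPEN returns a handle that is bound to this session only
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'FULL',
                          job_name  => 'DEMO_EXP');          -- hypothetical job name
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'demo.dmp',            -- hypothetical file name
                         directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
  -- After DETACH (or from another session), calls with the old handle
  -- raise ORA-31623; use DBMS_DATAPUMP.ATTACH to obtain a fresh handle.
END;
/
```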
In most cases this error is solved by setting STREAMS_POOL_SIZE (for example to 64M), as follows.

SQL> show parameter streams_pool_size

NAME                 TYPE         VALUE
-------------------  -----------  ------
streams_pool_size    big integer  0

SQL> alter system set streams_pool_size=64M scope=both sid='*';

System altered.
Work through the steps below one by one to diagnose and fix this issue:
Step 1. First check the value for the STREAMS_POOL_SIZE in the database:
connect / as sysdba
show parameter streams_pool

select * from v$sgainfo;
...
Streams Pool Size                         0 Yes
If the STREAMS_POOL_SIZE is too small, a Data Pump job will fail. This can also happen when Automatic Shared Memory Management (ASMM) or Automatic Memory Management (AMM) is in use and there is not sufficient memory to increase the STREAMS_POOL_SIZE.
Manual settings for the STREAMS_POOL_SIZE of 64M, 128M or even 256M have proven to be successful.
Also increase SGA_TARGET (for ASMM) or MEMORY_TARGET (for AMM) so that more free memory is available during automatic tuning of the SGA components.
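A sketch of raising the targets (the sizes below are illustrative only; pick values appropriate for your server):

```sql
-- ASMM: raise SGA_TARGET (dynamic up to SGA_MAX_SIZE)
alter system set sga_target=4G scope=both sid='*';

-- AMM: raise MEMORY_TARGET (dynamic up to MEMORY_MAX_TARGET;
-- raising MEMORY_MAX_TARGET itself needs scope=spfile and a restart)
alter system set memory_target=6G scope=both sid='*';
```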
To avoid this Data Pump error, you will need to configure the database with some Streams Pool memory.
Manually set the STREAMS_POOL_SIZE (using ALTER SYSTEM or by changing the value in the PFILE/SPFILE), restart the database and retry the Data Pump export.
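For example, via the SPFILE (the 128M value is only an example):

```sql
alter system set streams_pool_size=128M scope=spfile sid='*';
shutdown immediate
startup
show parameter streams_pool_size
```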
Step 2. Check for any possible invalid Data Pump queue objects:
connect / as sysdba
show parameter aq

col owner for a10
col object_name for a30

analyze table kupc$datapump_quetab validate structure cascade;
analyze table kupc$datapump_quetab_1 validate structure cascade;

select object_id, owner, object_name, status
  from dba_objects
 where object_name like 'KUPC$DATAPUMP_QUETAB%';

set lines 100
col status for a9
col object_type for a20
col owner.object for a50

select status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT"
  from dba_objects
 where object_name like '%DATAPUMP_QUETAB%'
 order by 3,4;
If there are any invalid queue objects, then a Data Pump job will fail. This usually also results in the following error in the alert.log file:
ORA-00600: internal error code, arguments: [kwqbgqc: bad state], [1], [1], [], [], [], [], []

For details and full resolution, see:
Note 754401.1 – Errors ORA-31623 And ORA-600 [kwqbgqc: bad state] During DataPump Export Or Import
Step 3. Check for any invalid registry components (CATALOG, CATPROC and JAVAVM) and invalid SYS-owned objects:
connect / as sysdba
set lines 90
col version for a12
col comp_id for a8
col schema like version
col comp_name format a35
col status for a12

select comp_id, schema, status, version, comp_name
  from dba_registry
 order by 1;

set lines 120
col status for a9
col object_type for a20
col owner.object for a50

select status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT"
  from dba_objects
 where status != 'VALID'
   and owner = 'SYS'
   and object_name not like 'BIN$%'
 order by 4,2;
If the registry components CATALOG, CATPROC and/or JAVAVM, and/or objects like SYS.KUPW$WORKER or SYS.KUPP$PROC are invalid, then a Data Pump job will likely fail.
To resolve this problem, reload Data Pump in the database:
connect / as sysdba

-- Start spooling to file:
spool catproc.out
set lines 120 numwidth 12 pages 10000 long 2000000000
alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
show user
select sysdate from dual;

shutdown immediate
-- for 9.2, use: startup migrate
startup upgrade

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/utlrp.sql
spool off

spool registry.out
-- Registry status:
set lines 90
col version for a12
col comp_id for a8
col schema like version
col comp_name format a35
col status for a12

select comp_id, schema, status, version, comp_name
  from dba_registry
 order by 1;

-- Invalid objects:
set lines 120
col status for a9
col object_type for a20
col owner.object for a50

select status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT"
  from dba_objects
 where status != 'VALID'
   and owner = 'SYS'
   and object_name not like 'BIN$%'
 order by 4,2;

shutdown immediate
startup
spool off
For details and references, see:
Note 430221.1 – How To Reload Datapump Utility EXPDP/IMPDP
Note 863312.1 – Best Practices for running catalog, catproc and utlrp script
Note 308388.1 – Error ORA-31623 When Submitting A DataPump Export Job
In case JAVAVM component is invalid, validate it using the steps from:
Note 1112983.1 – How to Reload the JVM in 11.2.0.x
Note 276554.1 – How to Reload the JVM in 10.1.0.X and 10.2.0.X
Note 1612279.1 – How to Reload the JVM in 12.1.0.x
and/or create a Java SR if more help is needed.
Step 4. Check if parameter _FIX_CONTROL is set for Bug 6167716:
connect / as sysdba
show parameter _fix_control
If this hidden parameter is set, then a Data Pump job will fail.
For details and full resolution, see:
Note 1150733.1 – DataPump Export (EXPDP) Fails With Errors ORA-31623 ORA-6512 If Parameter _FIX_CONTROL=’6167716:OFF’ Has Been Set
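A sketch of checking and clearing the fix control (restart required; as with any underscore parameter, confirm the change with Oracle Support first):

```sql
-- Is the fix control for bug 6167716 switched off?
select bugno, value from v$system_fix_control where bugno = 6167716;

-- Remove the setting from the SPFILE and restart
alter system reset "_fix_control" scope=spfile sid='*';
shutdown immediate
startup
```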
Step 5. If the Data Pump job is started through a package, check whether the package was created with invoker's rights (AUTHID clause):
connect / as sysdba
set lines 120 numwidth 12 pages 10000 long 2000000000
col ddl for a100

select dbms_metadata.get_ddl('PACKAGE','<PACKAGE_NAME>','<SCHEMA_NAME>') "DDL" from dual;
If the package was created with invoker's rights, then a Data Pump job will fail when started through this package.
For details and full resolution, see:
Note 1579091.1 – DataPump Job Fails With Error ORA-31623 A Job Is Not Attached To This Session Via The Specified Handle
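As an illustration (the package name is hypothetical), a package declared with AUTHID CURRENT_USER (invoker's rights) that drives DBMS_DATAPUMP can raise ORA-31623; recreating it with definer's rights is the usual fix:

```sql
-- Problematic: invoker's rights
CREATE OR REPLACE PACKAGE exp_pkg AUTHID CURRENT_USER AS
  PROCEDURE run_export;
END exp_pkg;
/

-- Works: definer's rights (also the default when AUTHID is omitted)
CREATE OR REPLACE PACKAGE exp_pkg AUTHID DEFINER AS
  PROCEDURE run_export;
END exp_pkg;
/
```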
Step 6. If the Data Pump job is started in DBConsole / OEM and the job is selected to be re-run (or you want to edit the job), then the Data Pump job will fail and the following errors will be reported:
ERROR: No data pump job named "jobname" exists in the database
ORA-31623: a job is not attached to this session via the specified handle
Execute Failed: ORA-31623: a job is not attached to this session via the specified handle
ORA-6512: at "SYS.DBMS_DATAPUMP", line 2315
ORA-6512: at "SYS.DBMS_DATAPUMP", line 3157
ORA-6512: at line 27
(DBD ERROR: OCIStmtExecute)

-- or --
Edit is not supported for this job type, only general information
For details and full resolution, see:
Note 788301.1 – Error ORA-31623 On DataPump Export Via DBScheduler After First Run Was Successful
Note 461307.1 – How To Export Database Using DBConsole/OEM In 10G
Step 7. If the LOGTIME parameter is being used: a Data Pump export/import with the LOGTIME parameter crashes if the environment variable NLS_DATE_FORMAT is set.
For details and full resolution, see:
Note 1936319.1 – Data Pump Export Or Import Throws ORA-31623 When Using LOGTIME Parameter
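A workaround sketch is to clear NLS_DATE_FORMAT in the calling shell before running expdp (the expdp command line below is a placeholder):

```shell
# Clear NLS_DATE_FORMAT for this shell session only
unset NLS_DATE_FORMAT
echo "NLS_DATE_FORMAT is now: ${NLS_DATE_FORMAT:-unset}"

# expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=full.dmp LOGTIME=ALL FULL=y
```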
Step 8. When running a remote Data Pump job against an Oracle 12.1.0.2 database, the export can fail with ORA-31623, and the database alert.log reports an ORA-600 [ksfdcmtcre4], [KGNFS SERVER REBOOT] error.
The incident trace file shows the following information:
Dump continued from file: <ADR_base>/diag/<product_type>/<product_id>/<instance_id>/trace/<SID>_dm00_<INCIDENT_NUMBER>.trc
[TOC00001]
ORA-00600: internal error code, arguments: [ksfdcmtcre4], [KGNFS SERVER REBOOT], [], [], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.KUPF$FILE_INT", line 79
ORA-06512: at "SYS.KUPF$FILE", line 2151
ORA-06512: at "SYS.KUPF$FILE", line 1473
...
Stack Trace: ... kkgereml kuppChkErr kupprdp opirip opidrv sou2o ...
Direct NFS (dNFS) is enabled by default when installing Oracle. When an alternative storage vendor is used, a conflict can cause the ORA-600 [ksfdcmtcre4], [KGNFS SERVER REBOOT] error, preventing tasks from completing successfully; in this case, the failing task was the EXPDP job.
To prevent the error, disable dNFS if it is not actually being used.
For details, see Note 954425.1 – Direct NFS: FAQ.
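If dNFS really is unused, it can be disabled by relinking the Oracle binaries with the dnfs_off make target (shut down all databases and listeners using this ORACLE_HOME first):

```shell
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off
```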