Manage schema data

Hands On Lab – Index

Manage schema data
[1] DataPump
[2] SqlLoader

[1] DataPump
-> New Features in 12.1.0.2

Export parameters

1.1 tables

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2

SQL> create table tbl_dp_tab_export as select * from dba_objects;

Table created.

SQL> create directory DATA_PUMP_PDIR AS '/u03/app/oracle/admin/pdb2/dpdump';

Directory created.
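#Optionally, verify the directory object and its path before running the export (a quick check, not part of the original run):

SQL> select directory_name,directory_path from dba_directories where directory_name='DATA_PUMP_PDIR';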

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR
 dumpfile=exp_table.dmp logfile=log_table.log tables=tbl_dp_tab_export

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 10:55:41 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_table.dmp logfile=log_table.log tables=tbl_dp_tab_export
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SYSTEM"."TBL_DP_TAB_EXPORT" 10.45 MB 91732 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_table.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Sun Sep 15 10:56:20 2019 elapsed 0 00:00:31

#Create a partitioned table and export only a single partition.

SQL> create table tbl_dp_tab_export1
2 partition by range(object_id)
3 ( partition one values less than (100),
4 partition two values less than (maxvalue))
5 as
6 select * from dba_objects;

Table created.

SQL> select count(*) from tbl_dp_tab_export1 partition (one);

COUNT(*)
----------
98

SQL> select count(*) from tbl_dp_tab_export1 partition (two);

COUNT(*)
----------
91637

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_tab_par2.dmp logfile=log_tab_part2.log tables=tbl_dp_tab_export1:one,tbl_dp_tab_export

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:02:10 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_tab_par2.dmp logfile=log_tab_part2.log tables=tbl_dp_tab_export1:one,tbl_dp_tab_export
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13.06 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SYSTEM"."TBL_DP_TAB_EXPORT" 10.45 MB 91732 rows
. . exported "SYSTEM"."TBL_DP_TAB_EXPORT1":"ONE" 21.42 KB 98 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_tab_par2.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Sun Sep 15 11:02:33 2019 elapsed 0 00:00:20

1.2 schemas

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2

SQL> create user usr_exp1 identified by usr_exp1;

User created.

SQL> r
1* create user usr_exp2 identified by usr_exp2

User created.

SQL> grant connect,resource to usr_exp1,usr_exp2;

Grant succeeded.

SQL> alter user usr_exp1 default tablespace users quota unlimited on users;

User altered.

SQL> r
1* alter user usr_exp2 default tablespace users quota unlimited on users

User altered.

SQL> create table usr_exp1.tbl_one as select * from dba_objects;

Table created.

SQL> c/one/two
1* create table usr_exp1.tbl_two as select * from dba_objects
SQL> r
1* create table usr_exp1.tbl_two as select * from dba_objects

Table created.

SQL> c/exp1/exp2
1* create table usr_exp2.tbl_two as select * from dba_objects
SQL> r
1* create table usr_exp2.tbl_two as select * from dba_objects

Table created.

SQL> c/two/one
1* create table usr_exp2.tbl_one as select * from dba_objects
SQL> r
1* create table usr_exp2.tbl_one as select * from dba_objects

Table created.

SQL> quit

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=schema_exp.dmp logfile=schema_log.log schemas=usr_exp1,usr_exp2

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:12:31 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=schema_exp.dmp logfile=schema_log.log schemas=usr_exp1,usr_exp2
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 52 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_EXP1"."TBL_ONE" 10.45 MB 91736 rows
. . exported "USR_EXP1"."TBL_TWO" 10.45 MB 91737 rows
. . exported "USR_EXP2"."TBL_ONE" 10.45 MB 91739 rows
. . exported "USR_EXP2"."TBL_TWO" 10.45 MB 91738 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/schema_exp.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:13:09 2019 elapsed 0 00:00:37

1.3 full
#Export entire database [NO].
$expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_full.dmp logfile=log_full.log full=y

#Import everything from source [YES].
$impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_full.dmp logfile=log_full.log full=y
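#While a long-running full export or import is in flight, the job can be monitored from another session, for example:

SQL> select owner_name,job_name,operation,job_mode,state from dba_datapump_jobs;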

1.4 Filtering export of DB objects
(INCLUDE/EXCLUDE/parfile Parameter)

Include specific object types.
For example, INCLUDE=TABLE_DATA.

Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".

Use the following views to get the list of objects that may be passed 
as a value for both parameters.

col object_path format a30
col comments format a60
set linesize 150
set pagesize 32676
select * from database_export_objects where named='Y';
select * from schema_export_objects where named='Y';
select * from table_export_objects where named='Y';

SQL> grant connect,resource,unlimited tablespace to usr_deptest1 
identified by oracle;

Grant succeeded.

SQL> create table usr_dptest1.tbl_one as select * from dba_objects where rownum < 100;
create table usr_dptest1.tbl_one as select * from dba_objects where rownum < 100
*
ERROR at line 1:
ORA-01918: user 'USR_DPTEST1' does not exist

SQL> create table usr_deptest1.tbl_one as select * from dba_objects where rownum < 100;

Table created.

SQL> c/one/two
1* create table usr_deptest1.tbl_two as select * from dba_objects where rownum < 100
SQL> r
1* create table usr_deptest1.tbl_two as select * from dba_objects where rownum < 100

Table created.

SQL> c/two/three
1* create table usr_deptest1.tbl_three as select * from dba_objects where rownum < 100
SQL> r
1* create table usr_deptest1.tbl_three as select * from dba_objects where rownum < 100

Table created.

SQL> create procedure usr_deptest1.my_proc
2 as
3 begin
4 NULL;
5 END;
6 /

Procedure created.

SQL> create view usr_deptest1.my_view as select * from usr_deptest1.tbl_one;

View created.

SQL> quit

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_include_test1.dmp schemas=usr_deptest1 include=table

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:26:10 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_include_test1.dmp schemas=usr_deptest1 include=table
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR_DEPTEST1"."TBL_ONE" 21.50 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.50 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.50 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_include_test1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:26:20 2019 elapsed 0 00:00:09

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_include_test2.dmp schemas=usr_deptest1 include=view

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:26:35 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_include_test2.dmp schemas=usr_deptest1 include=view
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_include_test2.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:26:43 2019 elapsed 0 00:00:07

#parfile

$ cat test_parfile.dat
schemas=usr_deptest1
directory=DATA_PUMP_PDIR
dumpfile=exp_parfile_test.dmp
logfile=log_parfile_test.log
include=procedure
include=table:"in ('TBL_ONE','TBL_TWO')"
include=view:"LIKE 'MY%'"

$ expdp system/oracle_4U@racnode2/pdb2 parfile=test_parfile.dat

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:39:06 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 parfile=test_parfile.dat
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR_DEPTEST1"."TBL_ONE" 21.50 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.50 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_parfile_test.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:39:20 2019 elapsed 0 00:00:12

1.5 Perform compressed export

COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.

COMPRESSION_ALGORITHM
Specify the compression algorithm that should be used.
Valid keyword values are: [BASIC], LOW, MEDIUM and HIGH.

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 dumpfile=exp_compress.dmp logfile=log_compress.log 
compression=all compression_algorithm=HIGH

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:44:06 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR schemas=usr_deptest1 dumpfile=exp_compress.dmp logfile=log_compress.log compression=all compression_algorithm=HIGH
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 6.148 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 6.148 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 6.148 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_compress.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:44:27 2019 elapsed 0 00:00:19

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 dumpfile=exp_noncompress.dmp 
logfile=log_noncompress.log compression=NONE

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:45:08 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR schemas=usr_deptest1 dumpfile=exp_noncompress.dmp logfile=log_noncompress.log compression=NONE
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 21.50 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.50 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.50 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_noncompress.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:45:29 2019 elapsed 0 00:00:20

$ ls -ltr *compress*
-rw-r----- 1 oracle oinstall 102400 Sep 15 11:44 exp_compress.dmp
-rw-r--r-- 1 oracle oinstall 1884 Sep 15 11:44 log_compress.log
-rw-r----- 1 oracle oinstall 446464 Sep 15 11:45 exp_noncompress.dmp
-rw-r--r-- 1 oracle oinstall 1867 Sep 15 11:45 log_noncompress.log

1.6 Specify type of content to export
CONTENT Parameter

Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.

Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.

CONTENT = DATA_ONLY | METADATA_ONLY | ALL(default option)

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2

SQL> grant connect,resource,unlimited tablespace to usr_content_test identified by oracle;

Grant succeeded.

SQL> create table usr_content_test.tbl_one as select * from dba_objects;

Table created.

SQL>
SQL> select count(*) from usr_content_test.tbl_one;

COUNT(*)
----------
91745

-bash-4.1$
$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=usr_content_test.dmp content=metadata_only schemas=usr_content_test

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:51:45 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=usr_content_test.dmp content=metadata_only schemas=usr_content_test
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/usr_content_test.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:52:02 2019 elapsed 0 00:00:15

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2

SQL> drop table usr_content_test.tbl_one;

Table dropped.

-bash-4.1$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=usr_content_test.dmp

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 11:52:43 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=usr_content_test.dmp
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"USR_CONTENT_TEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39151: Table "USR_CONTENT_TEST"."TBL_ONE" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2 error(s) at Sun Sep 15 11:52:46 2019 elapsed 0 00:00:02

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2
SQL> select count(*) from usr_content_test.tbl_one;

COUNT(*)
----------
0

SQL> quit
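#The table comes back empty because only metadata was in the dump. A rows-only dump would be taken with CONTENT=DATA_ONLY, roughly like this (sketch, not run here; the dump file name is illustrative):

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=usr_content_data.dmp content=data_only schemas=usr_content_test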

1.6 Perform encrypted export

(ENCRYPTION (ALL | DATA_ONLY | METADATA_ONLY | ENCRYPTED_COLUMNS_ONLY | NONE (default)),
ENCRYPTION_ALGORITHM (AES128, AES192, AES256),
ENCRYPTION_MODE (PASSWORD (default), TRANSPARENT, DUAL),
ENCRYPTION_PASSWORD)

ENCRYPTION_PWD_PROMPT
Specifies whether to prompt for the encryption password [NO].
Terminal echo will be suppressed while standard input is read.

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_encrypt_test1.dmp logfile=log_encrypt_test1.log 
schemas=usr_deptest1 encryption=all encryption_algorithm=AES256 
encryption_mode=password encryption_password=oracle

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:58:06 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_encrypt_test1.dmp logfile=log_encrypt_test1.log schemas=usr_deptest1 encryption=all encryption_algorithm=AES256 encryption_mode=password encryption_password=********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 21.51 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.51 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.51 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_encrypt_test1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 11:58:32 2019 elapsed 0 00:00:25

SQL> drop user usr_deptest1 cascade;

User dropped.

SQL> grant connect,resource,unlimited tablespace to usr_deptest1 identified by oracle;

Grant succeeded.

$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_encrypt_test1.dmp schemas=usr_deptest1 encryption_password=oracle

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 12:03:16 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_02": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_encrypt_test1.dmp schemas=usr_deptest1 encryption_password=********
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"USR_DEPTEST1" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "USR_DEPTEST1"."TBL_ONE" 21.51 KB 99 rows
. . imported "USR_DEPTEST1"."TBL_THREE" 21.51 KB 99 rows
. . imported "USR_DEPTEST1"."TBL_TWO" 21.51 KB 99 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Sun Sep 15 12:03:32 2019 elapsed 0 00:00:15

#Test with ENCRYPTION_PWD_PROMPT

-> In previous versions, data pump encryption required the ENCRYPTION_PASSWORD parameter to be entered on the command line, making password snooping relatively easy.
In Oracle 12c, the ENCRYPTION_PWD_PROMPT parameter enables encryption without requiring the password to be entered as a command line parameter. Instead, the user
is prompted for the password at runtime, with their response not echoed to the screen.
ENCRYPTION_PWD_PROMPT=[YES | NO]

SQL> drop user usr_deptest1 cascade;

User dropped.

SQL> grant connect,resource,unlimited tablespace to usr_deptest1 identified by oracle;

Grant succeeded.

SQL> quit

-bash-4.1$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR
 dumpfile=exp_encrypt_test1.dmp schemas=usr_deptest1 encryption_pwd_prompt=Y

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 12:06:08 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Encryption Password:
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_02": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_encrypt_test1.dmp schemas=usr_deptest1 encryption_pwd_prompt=Y
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"USR_DEPTEST1" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "USR_DEPTEST1"."TBL_ONE" 21.51 KB 99 rows
. . imported "USR_DEPTEST1"."TBL_THREE" 21.51 KB 99 rows
. . imported "USR_DEPTEST1"."TBL_TWO" 21.51 KB 99 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Sun Sep 15 12:06:29 2019 elapsed 0 00:00:20

#Test with ENCRYPTION_MODE=DUAL

-bash-4.1$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_encrypt_test2.dmp logfile=log_encrypt_test2.log schemas=usr_deptest1 encryption=all encryption_algorithm=AES192 encryption_mode=dual encryption_password=oracle
Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:00:03 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39002: invalid operation
ORA-39188: unable to encrypt dump file set
ORA-28365: wallet is not open

-bash-4.1$ sql
SQL> select status,wrl_parameter from v$encryption_wallet;

STATUS WRL_PARAMETER
--------------------------------------------------------------------------------
CLOSED /u01/app/oracle/product/12.1.0.2/db_1/wallets/cdb2/

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle_4U CONTAINER=ALL;

keystore altered.

SQL> select status,wrl_parameter from v$encryption_wallet;

STATUS WRL_PARAMETER
--------------------------------------------------------------------------------
OPEN /u01/app/oracle/product/12.1.0.2/db_1/wallets/cdb2/

SQL> ALTER SESSION SET CONTAINER=PDB2;

Session altered.

SQL> select status,wrl_parameter from v$encryption_wallet;

STATUS WRL_PARAMETER
--------------------------------------------------------------------------------
OPEN /u01/app/oracle/product/12.1.0.2/db_1/wallets/cdb2/

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_encrypt_test2.dmp logfile=log_encrypt_test2.log 
schemas=usr_deptest1 encryption=all encryption_algorithm=AES192 
encryption_mode=dual encryption_pwd_prompt=yes

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:15:06 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Encryption Password:
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_encrypt_test2.dmp logfile=log_encrypt_test2.log schemas=usr_deptest1 encryption=all encryption_algorithm=AES192 encryption_mode=dual encryption_pwd_prompt=yes
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 21.44 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.44 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.44 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_encrypt_test2.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:15:30 2019 elapsed 0 00:00:23

SQL> drop user usr_deptest1 cascade;

User dropped.

SQL> grant connect,resource,unlimited tablespace to usr_deptest1 identified by oracle;

Grant succeeded.

$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_encrypt_test2.dmp schemas=usr_deptest1 encryption_pwd_prompt=yes
Import: Release 12.1.0.2.0 - Production on Sun Sep 15 12:17:08 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Encryption Password:
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_02": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_encrypt_test2.dmp schemas=usr_deptest1 encryption_pwd_prompt=yes
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"USR_DEPTEST1" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "USR_DEPTEST1"."TBL_ONE" 21.44 KB 99 rows
. . imported "USR_DEPTEST1"."TBL_THREE" 21.44 KB 99 rows
. . imported "USR_DEPTEST1"."TBL_TWO" 21.44 KB 99 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Sun Sep 15 12:17:30 2019 elapsed 0 00:00:20

1.7 Estimate the disk space of the export process
ESTIMATE (BLOCKS (default) | STATISTICS) and ESTIMATE_ONLY=YES parameters

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_estimate_test1.dmp logfile=log_estimate_test1.log 
estimate=blocks schemas=usr_deptest1

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:22:05 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_estimate_test1.dmp logfile=log_estimate_test1.log estimate=blocks schemas=usr_deptest1
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "USR_DEPTEST1"."TBL_ONE" 64 KB
. estimated "USR_DEPTEST1"."TBL_THREE" 64 KB
. estimated "USR_DEPTEST1"."TBL_TWO" 64 KB
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.43 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_estimate_test1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:22:27 2019 elapsed 0 00:00:21

$expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_estimate_test2.dmp logfile=log_estimate_test2.log 
estimate=statistics schemas=usr_deptest1

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:22:52 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_estimate_test2.dmp logfile=log_estimate_test2.log estimate=statistics schemas=usr_deptest1
Estimate in progress using STATISTICS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "USR_DEPTEST1"."TBL_ONE" 19.84 KB
. estimated "USR_DEPTEST1"."TBL_THREE" 19.84 KB
. estimated "USR_DEPTEST1"."TBL_TWO" 19.84 KB
Total estimation using STATISTICS method: 59.53 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.43 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_estimate_test2.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:23:13 2019 elapsed 0 00:00:20

#Test with ESTIMATE_ONLY
ESTIMATE_ONLY=YES skips creating a dump file and only estimates how much space the export would need.

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
estimate=blocks schemas=usr_deptest1 estimate_only=yes

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:30:58 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR estimate=blocks schemas=usr_deptest1 estimate_only=yes
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "USR_DEPTEST1"."TBL_ONE" 64 KB
. estimated "USR_DEPTEST1"."TBL_THREE" 64 KB
. estimated "USR_DEPTEST1"."TBL_TWO" 64 KB
Total estimation using BLOCKS method: 192 KB
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:31:03 2019 elapsed 0 00:00:04

$ sqlplus system/oracle_4U@racnode2/pdb2
SQL> select segment_name obj_name,(blocks*8192)/1024 size_kb from dba_extents where owner='USR_DEPTEST1';

OBJ_NAME SIZE_KB
----------------------
TBL_THREE 64
TBL_TWO 64
TBL_ONE 64

-bash-4.1$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
estimate=statistics schemas=usr_deptest1 estimate_only=yes
Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:31:17 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR estimate=statistics schemas=usr_deptest1 estimate_only=yes
Estimate in progress using STATISTICS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "USR_DEPTEST1"."TBL_ONE" 19.84 KB
. estimated "USR_DEPTEST1"."TBL_THREE" 19.84 KB
. estimated "USR_DEPTEST1"."TBL_TWO" 19.84 KB
Total estimation using STATISTICS method: 59.53 KB
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:31:21 2019 elapsed 0 00:00:03

1.8 FILESIZE Parameter
Specify the size of each dump file in units of bytes.

ORA-31618: FILESIZE is outside valid range of 40KB to 16TB
31618, 00000, "FILESIZE is outside valid range of %sKB to %sTB"
// *Cause: The value specified for the FILESIZE parameter was either too small
// to hold the header information for the export file, plus any data,
// or exceeded the allowed maximum size for a dump file.
// *Action: Retry the operation using a value for FILESIZE that falls within
// the valid range.
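#For example, asking for a file size below the 40KB minimum shown above raises this error (sketch only; the dump file name is illustrative):

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_filesize_bad.dmp filesize=10K schemas=usr_deptest1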

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_filesize_test%U.dmp logfile=log_filesize_test.log 
filesize=40K schemas=usr_deptest1

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:39:07 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_filesize_test%U.dmp logfile=log_filesize_test.log filesize=40K schemas=usr_deptest1
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."TBL_ONE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.43 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test01.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test02.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test03.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test04.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test05.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test06.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test07.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test08.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test09.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test10.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test11.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_filesize_test12.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:39:30 2019 elapsed 0 00:00:21

1.9 FLASHBACK_TIME and FLASHBACK_SCN Parameters

By default, expdp exports are only consistent on a per-table basis. If you want
all tables in the export to be consistent to the same point in time, you need to
use the FLASHBACK_SCN or FLASHBACK_TIME parameter.

FLASHBACK_TIME
Time used to find the closest corresponding SCN value.

expdp ..... flashback_time=systimestamp

# In parameter file.
flashback_time="to_timestamp('09-05-2011 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"

# Escaped on command line.
expdp ..... flashback_time=\"to_timestamp\(\'09-05-2011 09:00:00\', \'DD-MM-YYYY HH24:MI:SS\'\)\"

FLASHBACK_SCN
SCN used to reset session snapshot.
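#Besides V$DATABASE.CURRENT_SCN (used in the demo below), a suitable SCN can also be obtained with, for example:

SQL> select dbms_flashback.get_system_change_number from dual;
SQL> select timestamp_to_scn(systimestamp) from dual;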

Demo

SQL> grant connect,resource to usr_flashback_dp identified by oracle;

Grant succeeded.

SQL> alter user usr_flashback_dp default tablespace users quota unlimited on users;

User altered.

SQL> create table usr_flashback_dp.tbl_test tablespace users as select * from dba_objects;

Table created.

SQL> select owner,table_name,tablespace_name from dba_tables where owner='USR_FLASHBACK_DP';

OWNER TABLE_NAME TABLESPACE_NAME
-----------------------------------------------------------------
USR_FLASHBACK_DP TBL_TEST USERS

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
4161987

SQL> delete from usr_flashback_dp.tbl_test;

91755 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from usr_flashback_dp.tbl_test;

no rows selected

SQL> quit

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_flashback_test01.dmp logfile=log_flashback_test01.log 
schemas=usr_flashback_dp flashback_scn=4161987

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:48:29 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_flashback_test01.dmp logfile=log_flashback_test01.log schemas=usr_flashback_dp flashback_scn=4161987
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_FLASHBACK_DP"."TBL_TEST" 10.45 MB 91755 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_flashback_test01.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:48:51 2019 elapsed 0 00:00:20

-bash-4.1$ sql
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> alter session set container=pdb2;

Session altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> quit
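#To confirm the dump really holds the pre-delete rows, it could be imported back over the now-empty table, e.g. (sketch, not run here):

$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_flashback_test01.dmp tables=usr_flashback_dp.tbl_test table_exists_action=replace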

#Test with CONSISTENT=Y

The exp utility used the CONSISTENT=Y parameter to indicate the export 
should be consistent to a point in time.
In 11.2, the introduction of legacy mode means that you can use the 
CONSISTENT=Y parameter with the expdp utility if you wish.

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_flashback_test01.dmp logfile=log_flashback_test01.log 
schemas=usr_flashback_dp CONSISTENT=y

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 12:54:57 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "consistent=TRUE" Location: Command Line, Replaced with: "flashback_time=TO_TIMESTAMP('2019-09-15 12:54:57', 'YYYY-MM-DD HH24:MI:SS')"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_flashback_test01.dmp logfile=log_flashback_test01.log schemas=usr_flashback_dp flashback_time=TO_TIMESTAMP('2019-09-15 12:54:57', 'YYYY-MM-DD HH24:MI:SS') reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_FLASHBACK_DP"."TBL_TEST" 12.39 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_flashback_test01.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 12:55:15 2019 elapsed 0 00:00:17

1.10 PARALLEL Parameter
Change the number of active workers for current job.
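#The worker count can also be changed while a job is running by attaching to it in interactive mode (sketch; the job name must match the running job):

$ expdp system/oracle_4U@racnode2/pdb2 attach=SYS_EXPORT_SCHEMA_01
Export> parallel=4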

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_parallel_test%U.dmp logfile=log_parallel_test.log 
schemas=usr_deptest1 parallel=2

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 13:01:33 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_parallel_test%U.dmp logfile=log_parallel_test.log schemas=usr_deptest1 parallel=2
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
. . exported "USR_DEPTEST1"."TBL_ONE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 21.43 KB 99 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 21.43 KB 99 rows
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_parallel_test01.dmp
/u03/app/oracle/admin/pdb2/dpdump/exp_parallel_test02.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 13:01:55 2019 elapsed 0 00:00:20

$ ll exp_parallel_test*
-rw-r----- 1 oracle oinstall 380928 Sep 15 13:01 exp_parallel_test01.dmp
-rw-r----- 1 oracle oinstall 65536 Sep 15 13:01 exp_parallel_test02.dmp

1.11 QUERY Parameter
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".
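#On the command line the predicate needs shell quoting, which is why a parfile is used in the demo below. One way (a sketch, using the table created next and an illustrative dump file name) is to wrap the whole parameter in single quotes:

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_query_cmd.dmp 'query=usr_deptest1.NEW_TABLE:"WHERE object_id < 100"'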

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2

SQL> create table usr_deptest1.new_table tablespace users as select * from dba_objects;

Table created.

SQL> r
1* create table usr_deptest1.new_table2 tablespace users as select * from dba_objects

Table created.

SQL> select count(*) from usr_deptest1.new_table;

COUNT(*)
----------
91756

SQL> select count(*) from usr_deptest1.new_table2;

COUNT(*)
----------
91757

SQL> select count(*) from usr_deptest1.NEW_TABLE WHERE object_id<100;

COUNT(*)
----------
98

SQL> select count(*) from usr_deptest1.NEW_TABLE2 WHERE object_id<400 
and object_type='TABLE';

COUNT(*)
----------
170

$ cat test2_parfile.dat
directory=data_pump_pdir
dumpfile=exp_query_03.dmp
logfile=log_query_03.log
schemas=usr_deptest1
include=table:"in ('NEW_TABLE','NEW_TABLE2')"
query=usr_deptest1.NEW_TABLE:"WHERE object_id<100"
query=usr_deptest1.NEW_TABLE2:"WHERE object_id<400 and object_type='TABLE'"

$ expdp system/oracle_4U@racnode2/pdb2 parfile=test2_parfile.dat

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 13:53:16 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 parfile=test2_parfile.dat
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 26 MB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR_DEPTEST1"."NEW_TABLE" 21.41 KB 98 rows
. . exported "USR_DEPTEST1"."NEW_TABLE2" 29.5 KB 170 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_query_03.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 13:53:27 2019 elapsed 0 00:00:10

1.12 REMAP_DATA Parameter
Apply a specific function to a column value of each row that is being exported.

Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

create or replace package usr_deptest1.pkg_remap
as
function make_upper(p_val varchar2)
return varchar2;
end;
/

create or replace package body usr_deptest1.pkg_remap
as
function make_upper(p_val varchar2)
return varchar2
is
begin
return upper(p_val);
end;
end;
/

create table usr_deptest1.all_lowers
as
select lower(object_name) obj_name,object_id,object_type from dba_objects 
where rownum <= 5; 

SET LINESIZE 150
COL obj_name FORMAT a15
SELECT * FROM usr_deptest1.all_lowers;

OBJ_NAME OBJECT_ID OBJECT_TYPE
--------------- ---------- -----------------------
icol$ 20 TABLE
tab$ 4 TABLE
i_user2 47 INDEX
i_ccol2 58 INDEX
i_obj1 36 INDEX

$ cat test3_parfile.dat
directory=data_pump_pdir
dumpfile=exp_remap_data_test1.dmp
logfile=log_remap_data_test1.log
schemas=usr_deptest1
include=table:"in('ALL_LOWERS')"
remap_data=usr_deptest1.all_lowers.obj_name:usr_deptest1.pkg_remap.make_upper

$ expdp system/oracle_4U@racnode2/pdb2 parfile=test3_parfile.dat

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 14:18:48 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 parfile=test3_parfile.dat
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR_DEPTEST1"."ALL_LOWERS" 6.023 KB 5 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_remap_data_test1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 14:19:09 2019 elapsed 0 00:00:18

SQL> drop table usr_deptest1.all_lowers;

Table dropped.

$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir dumpfile=exp_remap_data_test1.dmp

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 14:21:15 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@racnode2/pdb2 directory=data_pump_pdir dumpfile=exp_remap_data_test1.dmp
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "USR_DEPTEST1"."ALL_LOWERS" 6.023 KB 5 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Sun Sep 15 14:21:19 2019 elapsed 0 00:00:02

-bash-4.1$ sqlplus system/oracle_4U@racnode2/pdb2

SET LINESIZE 150
COL obj_name FORMAT a15
SELECT * FROM usr_deptest1.all_lowers;

OBJ_NAME OBJECT_ID OBJECT_TYPE
--------------- ---------- -----------------------
ICOL$ 20 TABLE
TAB$ 4 TABLE
I_USER2 47 INDEX
I_CCOL2 58 INDEX
I_OBJ1 36 INDEX

1.13 REUSE_DUMPFILES Parameter

Specify whether an existing dump file should be overwritten if the same command is run again.
Overwrite destination dump file if it exists [NO].

$ expdp system/oracle_4U@racnode2/pdb2 parfile=test3_parfile.dat

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 14:26:16 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/u03/app/oracle/admin/pdb2/dpdump/exp_remap_data_test1.dmp"
ORA-27038: created file already exists
Additional information: 1

$ expdp system/oracle_4U@racnode2/pdb2 parfile=test3_parfile.dat reuse_dumpfiles=yes

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 14:26:34 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 parfile=test3_parfile.dat reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR_DEPTEST1"."ALL_LOWERS" 6.015 KB 5 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_remap_data_test1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 14:26:52 2019 elapsed 0 00:00:17

1.14 SAMPLE Parameter
Percentage of data to be exported.

-> Sometimes the QUERY parameter is not a practical way to limit the number of rows being exported.
In particular, when you export schemas with many different tables it is hard to specify a query for each table.
-> Using the SAMPLE parameter you can export a specific percentage of the data.

create table usr_deptest1.tbl_sample_test tablespace users
as
select * from dba_objects where rownum <= 1000;

 Table created. 

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 sample=10 

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 14:34:38 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR schemas=usr_deptest1 sample=10
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.643 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "USR_DEPTEST1"."NEW_TABLE" 1.053 MB 9140 rows
. . exported "USR_DEPTEST1"."NEW_TABLE2" 1.056 MB 9165 rows
. . exported "USR_DEPTEST1"."TBL_SAMPLE_TEST" 22.68 KB 104 rows
. . exported "USR_DEPTEST1"."ALL_LOWERS" 5.937 KB 1 rows
. . exported "USR_DEPTEST1"."TBL_ONE" 13.33 KB 11 rows
. . exported "USR_DEPTEST1"."TBL_THREE" 12.96 KB 7 rows
. . exported "USR_DEPTEST1"."TBL_TWO" 12.60 KB 3 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is: /u03/app/oracle/admin/pdb2/dpdump/expdat.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 14:35:27 2019 elapsed 0 00:00:45

Note: The usage of the CONTENT, INCLUDE, EXCLUDE, PARALLEL, QUERY, REMAP_DATA, SCHEMAS, and TABLES
parameters is the same in both EXPDP and IMPDP.

Import parameters

1.15 REMAP_SCHEMA Parameter
Objects from one schema are loaded into another schema.


$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
dumpfile=expdat.dmp remap_schema=usr_deptest1:hitesh 

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 14:47:48 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@racnode2/pdb2 directory=data_pump_pdir dumpfile=expdat.dmp remap_schema=usr_deptest1:hitesh
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HITESH"."NEW_TABLE" 1.053 MB 9140 rows
. . imported "HITESH"."NEW_TABLE2" 1.056 MB 9165 rows
. . imported "HITESH"."TBL_SAMPLE_TEST" 22.68 KB 104 rows
. . imported "HITESH"."ALL_LOWERS" 5.937 KB 1 rows
. . imported "HITESH"."TBL_ONE" 13.33 KB 11 rows
. . imported "HITESH"."TBL_THREE" 12.96 KB 7 rows
. . imported "HITESH"."TBL_TWO" 12.60 KB 3 rows
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Sun Sep 15 14:48:23 2019 elapsed 0 00:00:34

sqlplus HITESH/oracle@racnode2/pdb2 

SQL> select tname from tab;

TNAME
--------------------------------------
ALL_LOWERS
MY_VIEW
NEW_TABLE
NEW_TABLE2
TBL_ONE
TBL_SAMPLE_TEST
TBL_THREE
TBL_TWO

8 rows selected.

1.16 REMAP_TABLE Parameter
Table names are remapped to another table.
For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.

$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=expdat.dmp remap_table=NEW_TABLE:NEW_TABLE_RENAMED

1.17 REMAP_TABLESPACE Parameter
Tablespace objects are remapped to another tablespace.

$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
dumpfile=expdat.dmp remap_schema=usr_deptest1:usr_remap_test2 remap_tablespace=users:tbs_remap_test

1.18 SKIP_UNUSABLE_INDEXES Parameter
Skip indexes that were set to the Index Unusable state.

The purpose of this parameter is to control how Import handles a table or table partition
that has an index in the UNUSABLE state; it only applies when the table already exists in
the database with such an index. With the default value of 'Y', the unusable index is simply
skipped, which avoids spending time maintaining an index that is unusable anyway during the
import. The behaviour changes when the value is set to N, as the example below shows: the
data for a table whose index is unusable is then not loaded.

grant connect,resource to test_unused_idx identified by oracle;

alter user test_unused_idx default tablespace users quota unlimited on users;

create table test_unused_idx.tbl_one tablespace users as select * from dba_objects;
create table test_unused_idx.tbl_two tablespace users as select * from dba_objects;

create index test_unused_idx.idx1 on test_unused_idx.tbl_one(object_id) tablespace users;
create index test_unused_idx.idx2 on test_unused_idx.tbl_two(object_id) tablespace users;

alter index test_unused_idx.idx2 unusable;

SQL> select index_name,table_name,STATUS from dba_indexes where owner='TEST_UNUSED_IDX';

INDEX_NAME TABLE_NAME STATUS
---------- ---------- --------
IDX1 TBL_ONE VALID
IDX2 TBL_TWO UNUSABLE

$ expdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
dumpfile=exp_unused_idx01.dmp schemas=test_unused_idx

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 15:03:15 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@racnode2/pdb2 directory=data_pump_pdir dumpfile=exp_unused_idx01.dmp schemas=test_unused_idx
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 26 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "TEST_UNUSED_IDX"."TBL_ONE" 10.45 MB 91774 rows
. . exported "TEST_UNUSED_IDX"."TBL_TWO" 10.45 MB 91775 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_unused_idx01.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 15 15:03:59 2019 elapsed 0 00:00:42

$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
dumpfile=exp_unused_idx01.dmp skip_unusable_indexes=n schemas=test_unused_idx 
table_exists_action=append

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 15:08:11 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_02": system/********@racnode2/pdb2 directory=data_pump_pdir dumpfile=exp_unused_idx01.dmp skip_unusable_indexes=n schemas=test_unused_idx table_exists_action=append
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST_UNUSED_IDX" already exists
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Table "TEST_UNUSED_IDX"."TBL_ONE" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Table "TEST_UNUSED_IDX"."TBL_TWO" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST_UNUSED_IDX"."TBL_ONE" 10.45 MB 91774 rows
ORA-31693: Table data object "TEST_UNUSED_IDX"."TBL_TWO" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26028: index TEST_UNUSED_IDX.IDX2 initially in unusable state
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_SCHEMA_02" completed with 2 error(s) at Sun Sep 15 15:08:27 2019 elapsed 0 00:00:13

1.19 SQLFILE Parameter
Write all the SQL DDL to a specified file.

#Export DDL of tablespaces

$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
dumpfile=exp_full.dmp include=tablespace full=y 
sqlfile=data_pump_pdir:tablespace.sql content=metadata_only

#Export DDL of View
$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
dumpfile=exp_full.dmp include=view full=y 
sqlfile=data_pump_pdir:view.sql content=metadata_only

#Export procedure and package DDL of the usr_deptest1 schema
$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
schemas=usr_deptest1 dumpfile=expdat.dmp include=procedure,package 
sqlfile=data_pump_pdir:procedure.sql content=metadata_only

#Export Indexes DDL of test_unused_idx schema
$ impdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
schemas=test_unused_idx dumpfile=exp_unused_idx01.dmp include=index 
sqlfile=data_pump_pdir:index.sql content=metadata_only

1.20 TABLE_EXISTS_ACTION Parameter
Action to take if imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.

expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 sample=10 dumpfile=exp_sample_test01.dmp

impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 dumpfile=exp_sample_test01.dmp table_exists_action=skip
impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 dumpfile=exp_sample_test01.dmp table_exists_action=truncate
impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 dumpfile=exp_sample_test01.dmp table_exists_action=append
impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
schemas=usr_deptest1 dumpfile=exp_sample_test01.dmp table_exists_action=replace

1.21 DATAPUMP Interactive-command mode

CONTINUE_CLIENT: exits interactive mode and returns to logging mode.
KILL_JOB: terminates the current job and detaches the client session.
STOP_JOB: stops the job; it can be restarted later (attach to it again with the ATTACH=job_name parameter of expdp/impdp).
START_JOB: restarts a stopped job.
STATUS: displays the status of the current job.
PARALLEL: changes the degree of parallelism of the running job.

$ expdp system/oracle_4U@racnode2/pdb2 directory=data_pump_pdir 
full=y dumpfile=full_15092019.dmp logfile=log_full_15092019.log

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 15:36:57 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/********@racnode2/pdb2 directory=data_pump_pdir full=y dumpfile=full_15092019.dmp logfile=log_full_15092019.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 367.9 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/RADM_FPTM
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/INC_TYPE
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
^C
Export> help
------------------------------------------------------------------------------

The following commands are valid while in interactive mode.
Note: abbreviations are allowed.

ADD_FILE
Add dumpfile to dumpfile set.

CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.

EXIT_CLIENT
Quit client session and leave job running.

FILESIZE
Default filesize (bytes) for subsequent ADD_FILE commands.

HELP
Summarize interactive commands.

KILL_JOB
Detach and delete job.

PARALLEL
Change the number of active workers for current job.

REUSE_DUMPFILES
Overwrite destination dump file if it exists [NO].

START_JOB
Start or resume current job.
Valid keyword values are: SKIP_CURRENT.

STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.

STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keyword values are: IMMEDIATE.

Export> status

Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u03/app/oracle/admin/pdb2/dpdump/full_15092019.dmp
bytes written: 4,096

Worker 1 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW00
State: EXECUTING
Object Type: DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
Completed Objects: 2
Total Objects: 2
Worker Parallelism: 1

Export> parallel 2
LRM-00108: invalid positional parameter value '2'

Export> parallel=2

Export> status

Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 2
Job Error Count: 0
Dump File: /u03/app/oracle/admin/pdb2/dpdump/full_15092019.dmp
bytes written: 4,096

Worker 1 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW00
State: EXECUTING
Object Type: DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
Completed Objects: 1
Total Objects: 1
Worker Parallelism: 1

Export> stop_job
Are you sure you wish to stop this job ([yes]/no): yes

-bash-4.1$ expdp system/oracle_4U@racnode2/pdb2 ATTACH=SYS_EXPORT_FULL_01

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 15:39:54 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Job: SYS_EXPORT_FULL_01
Owner: SYSTEM
Operation: EXPORT
Creator Privs: TRUE
GUID: 9295A5C54AEE3BDBE0536638A8C099B6
Start Time: Sunday, 15 September, 2019 15:39:57
Mode: FULL
Instance: cdb2
Max Parallelism: 2
Timezone: -07:00
Timezone version: 18
Endianness: LITTLE
NLS character set: AL32UTF8
NLS NCHAR character set: AL16UTF16
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND system/********@racnode2/pdb2 directory=data_pump_pdir full=y dumpfile=full_15092019.dmp logfile=log_full_15092019.log
State: IDLING
Bytes Processed: 0
Current Parallelism: 2
Job Error Count: 0
Dump File: /u03/app/oracle/admin/pdb2/dpdump/full_15092019.dmp
bytes written: 4,096

Worker 1 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW00
State: UNDEFINED

Export> START_JOB

Export> STATUS

Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 423,480
Percent Done: 1
Current Parallelism: 2
Job Error Count: 0
Dump File: /u03/app/oracle/admin/pdb2/dpdump/full_15092019.dmp
bytes written: 634,880

Worker 1 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW00
State: EXECUTING

Worker 2 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW01
State: EXECUTING
Object Schema: SYSTEM
Object Name: SCHEDULER_PROGRAM_ARGS
Object Type: DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Completed Objects: 67
Total Objects: 10
Worker Parallelism: 1

Export> CONTINUE_CLIENT
Job SYS_EXPORT_FULL_01 has been reopened at Sun Sep 15 15:39:57 2019
Restarting "SYSTEM"."SYS_EXPORT_FULL_01": system/********@racnode2/pdb2 directory=data_pump_pdir full=y dumpfile=full_15092019.dmp logfile=log_full_15092019.log
. . exported "SYS"."KU$_USER_MAPPING_VIEW" 6.359 KB 54 rows
. . exported "ORDDATA"."ORDDCM_DOCS" 252.9 KB 9 rows
. . exported "WMSYS"."WM$CONSTRAINTS_TABLE$" 0 KB 0 rows
. . exported "SYS"."AUD$" 26.62 KB 35 rows
. . exported "WMSYS"."WM$LOCKROWS_INFO$" 0 KB 0 rows
. . exported "WMSYS"."WM$UDTRIG_INFO$" 0 KB 0 rows
. . exported "LBACSYS"."OLS$AUDIT_ACTIONS" 5.757 KB 8 rows
. . exported "LBACSYS"."OLS$DIP_EVENTS" 5.539 KB 2 rows
. . exported "LBACSYS"."OLS$INSTALLATIONS" 6.960 KB 2 rows
. . exported "LBACSYS"."OLS$PROPS" 6.234 KB 5 rows
. . exported "SYS"."DAM_CONFIG_PARAM$" 6.539 KB 14 rows
. . exported "SYS"."TSDP_PARAMETER$" 5.953 KB 1 rows
. . exported "SYS"."TSDP_POLICY$" 5.921 KB 1 rows
. . exported "SYS"."TSDP_SUBPOL$" 6.328 KB 1 rows
. . exported "SYSTEM"."REDO_DB" 25.59 KB 1 rows
. . exported "WMSYS"."WM$BATCH_COMPRESSIBLE_TABLES$" 0 KB 0 rows
. . exported "WMSYS"."WM$CONS_COLUMNS$" 0 KB 0 rows
. . exported "WMSYS"."WM$ENV_VARS$" 6.015 KB 3 rows
. . exported "WMSYS"."WM$EVENTS_INFO$" 5.812 KB 12 rows
. . exported "WMSYS"."WM$HINT_TABLE$" 9.453 KB 75 rows
. . exported "WMSYS"."WM$MODIFIED_TABLES$" 0 KB 0 rows
. . exported "WMSYS"."WM$MP_GRAPH_WORKSPACES_TABLE$" 0 KB 0 rows
^C
Export> EXIT_CLIENT

-bash-4.1$
-bash-4.1$ expdp system/oracle_4U@racnode2/pdb2 ATTACH=SYS_EXPORT_FULL_01

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 15:42:44 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Job: SYS_EXPORT_FULL_01
Owner: SYSTEM
Operation: EXPORT
Creator Privs: TRUE
GUID: 9295A5C54AEE3BDBE0536638A8C099B6
Start Time: Sunday, 15 September, 2019 15:39:57
Mode: FULL
Instance: cdb2
Max Parallelism: 2
Timezone: -07:00
Timezone version: 18
Endianness: LITTLE
NLS character set: AL32UTF8
NLS NCHAR character set: AL16UTF16
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND system/********@racnode2/pdb2 directory=data_pump_pdir full=y dumpfile=full_15092019.dmp logfile=log_full_15092019.log
State: EXECUTING
Bytes Processed: 203,863,160
Percent Done: 99
Current Parallelism: 2
Job Error Count: 0
Dump File: /u03/app/oracle/admin/pdb2/dpdump/full_15092019.dmp
bytes written: 204,263,424

Worker 1 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW00
State: EXECUTING
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/FGA_POLICY
Completed Objects: 1
Total Objects: 1
Worker Parallelism: 1

Worker 2 Status:
Instance ID: 1
Instance name: cdb2
Host name: racnode2
Process Name: DW01
State: WORK WAITING

Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

1.22 Monitoring

SET LINESIZE 150
COL EVENT FORMAT a60
SELECT W.SID,W.EVENT,W.SECONDS_IN_WAIT
FROM V$SESSION S,DBA_DATAPUMP_SESSIONS d,V$SESSION_WAIT w
WHERE S.saddr = d.saddr and S.sid = w.sid;

SID EVENT SECONDS_IN_WAIT
---------- ------------------------------------------------------------ ---------------
278 db file sequential read 7
26 latch: row cache objects 0
237 wait for unread message on broadcast channel 4
27 SQL*Net message from client 17

COL owner_name format a10
COL job_name format a20
COL operation format a10
COL state format a20
select owner_name,
job_name,
operation,
state,
degree,
attached_sessions
from dba_datapump_jobs;

OWNER_NAME JOB_NAME OPERATION STATE DEGREE ATTACHED_SESSIONS
---------- -------------------- ---------- -------------------- ---------- -----------------
SYSTEM SYS_EXPORT_FULL_01 EXPORT EXECUTING 2 1

1.23 NETWORK_LINK Parameter
Name of remote database link to the source system.

Using the NETWORK_LINK option you can import a schema from a source database directly
into a target database. One advantage of this option is that you do not need separate
export and import steps, because the export and import are performed in a single
operation from the source to the destination. Also, no file system space is needed to
hold large dump files, since the data is imported into the target directly over the
database link.

The option is also useful on the export side: you can take a dump of a source database
schema from another database and write the dump files to the target location, as shown
in the first example below.

-bash-4.1$ sqlplus system/oracle_4U@pdb2

grant connect,resource to usr_net_link identified by oracle;

alter user usr_net_link default tablespace users quota unlimited on users;

create table usr_net_link.tbl_one tablespace users as select * from dba_objects;

select count(*) from usr_net_link.tbl_one;

COUNT(*)
----------
91833

-bash-4.1$ sqlplus system/oracle_4U@pdb1

SQL> create public database link cdb2.example.com connect to system identified 
by oracle_4U using 'pdb2';

Database link created.

SQL> select count(*) from tab@cdb2.example.com;

COUNT(*)
----------
202

create directory data_pump_pdir as '/u03/app/oracle/admin/pdb1/dpump';

$ expdp system/oracle_4U@pdb1 directory=data_pump_pdir 
dumpfile=exp_net_link_pdb3.dmp network_link=cdb2.example.com tables=usr_net_link.tbl_one

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 21:50:10 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/********@pdb1 directory=data_pump_pdir dumpfile=exp_net_link_pdb3.dmp network_link=cdb2.example.com tables=usr_net_link.tbl_one
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "USR_NET_LINK"."TBL_ONE" 10.46 MB 91833 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/u03/app/oracle/admin/pdb1/dpump/exp_net_link_pdb3.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Sun Sep 15 21:50:46 2019 elapsed 0 00:00:33

$ impdp system/oracle_4U@pdb1 directory=data_pump_pdir network_link=cdb2.example.com \
> tables=usr_net_link.tbl_one remap_table=tbl_one:tbl_two table_exists_action=replace

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 21:51:56 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TABLE_01": system/********@pdb1 directory=data_pump_pdir network_link=cdb2.example.com tables=usr_net_link.tbl_one remap_table=tbl_one:tbl_two table_exists_action=replace
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"USR_NET_LINK"."TBL_TWO" failed to create with error:
ORA-01918: user 'USR_NET_LINK' does not exist
Failing sql is:
CREATE TABLE "USR_NET_LINK"."TBL_TWO" ("OWNER" VARCHAR2(128 BYTE), "OBJECT_NAME" VARCHAR2(128 BYTE), "SUBOBJECT_NAME" VARCHAR2(128 BYTE), "OBJECT_ID" NUMBER, "DATA_OBJECT_ID" NUMBER, "OBJECT_TYPE" VARCHAR2(23 BYTE), "CREATED" DATE, "LAST_DDL_TIME" DATE, "TIMESTAMP" VARCHAR2(19 BYTE), "STATUS" VARCHAR2(7 BYTE), "TEMPORARY" VARCHAR2(1 BYTE), "GENERATED" VARCHAR2(1 BY
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at Sun Sep 15 21:52:07 2019 elapsed 0 00:00:10

$ sqlplus system/oracle_4U@pdb1

grant connect,resource to usr_net_link identified by oracle;

alter user usr_net_link default tablespace users quota unlimited on users;

$ impdp system/oracle_4U@pdb1 directory=data_pump_pdir network_link=cdb2.example.com \
> tables=usr_net_link.tbl_one remap_table=tbl_one:tbl_two table_exists_action=replace

Import: Release 12.1.0.2.0 - Production on Sun Sep 15 21:56:11 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TABLE_01": system/********@pdb1 directory=data_pump_pdir network_link=cdb2.example.com tables=usr_net_link.tbl_one remap_table=tbl_one:tbl_two table_exists_action=replace
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "USR_NET_LINK"."TBL_TWO" 91833 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_TABLE_01" successfully completed at Sun Sep 15 21:56:59 2019 elapsed 0 00:00:46

sqlplus usr_net_link/oracle@pdb1

select count(*) from tbl_two;

COUNT(*)
----------
91833

sqlplus usr_net_link/oracle@pdb2

select count(*) from tbl_one;

COUNT(*)
----------
91833

1.24 TRANSFORM

#For Impdp (12c New Features)

Metadata transform to apply to applicable objects.
Valid keywords are: DISABLE_ARCHIVE_LOGGING, INMEMORY, INMEMORY_CLAUSE,
LOB_STORAGE, OID, PCTSPACE, SEGMENT_ATTRIBUTES, STORAGE, and
TABLE_COMPRESSION_CLAUSE.

-> DISABLE_ARCHIVE_LOGGING

The TRANSFORM parameter of impdp has been extended to include a 
DISABLE_ARCHIVE_LOGGING option.
The default setting of "N" has no affect on logging behaviour. Using a 
value "Y" reduces the logging associated with tables and indexes during 
the import by setting their logging attribute to NOLOGGING
before the data is imported and resetting it to LOGGING once the operation 
is complete.

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX

An example of its use is shown below.

$ impdp system/Password1@pdb1 directory=test_dir dumpfile=emp.dmp logfile=impdp_emp.log \
remap_schema=scott:test transform=disable_archive_logging:y

Note: The DISABLE_ARCHIVE_LOGGING option has no effect if the database is 
running in FORCE LOGGING mode.

-> Change Table Compression at Import

The TABLE_COMPRESSION_CLAUSE clause of the TRANSFORM parameter allows the 
table compression characteristics of the tables
in an import to be altered on the fly.

TRANSFORM=TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]

The allowable values for the TABLE_COMPRESSION_CLAUSE include the following.

NONE : The table compression clause is omitted, so the table takes on the 
compression characteristics of the tablespace.
NOCOMPRESS : Disables table compression.
COMPRESS : Enables basic table compression.
ROW STORE COMPRESS BASIC : Same as COMPRESS.
ROW STORE COMPRESS ADVANCED : Enables advanced compression, also known as 
OLTP compression.
COLUMN STORE COMPRESS FOR QUERY : Hybrid Columnar Compression (HCC) available 
in Exadata and ZFS storage appliances.
COLUMN STORE COMPRESS FOR ARCHIVE : Hybrid Columnar Compression (HCC) available 
in Exadata and ZFS storage appliances.

Note: Compression clauses that contain whitespace must be enclosed by 
single or double quotes.

Example:

impdp system/Password1@pdb1 directory=test_dir dumpfile=emp.dmp logfile=impdp_emp.log \
remap_schema=scott:test transform=table_compression_clause:compress

-> Change Table LOB Storage at Import

The LOB_STORAGE clause of the TRANSFORM parameter allows the LOB storage 
characteristics of table columns in a
non-transportable import to be altered on the fly.

TRANSFORM=LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]

The allowable values for the LOB_STORAGE clause include the following.

SECUREFILE : The LOBS are stored as SecureFiles.
BASICFILE : The LOBS are stored as BasicFiles.
DEFAULT : The LOB storage is determined by the database default.
NO_CHANGE : The LOB storage matches that of the source object.

An example of its use is shown below.

Example:
TRANSFORM=LOB_STORAGE:SECUREFILE

$impdp system/Password1@pdb1 directory=test_dir dumpfile=lob_table.dmp logfile=impdp_lob_table.log \
transform=lob_storage:securefile

-> Change Table INMEMORY Storage at Import

The INMEMORY and INMEMORY_CLAUSE options of the TRANSFORM parameter control whether and
how the In-Memory (IM column store) attributes are applied to the imported objects.

TRANSFORM=INMEMORY:[Y|N]
TRANSFORM=INMEMORY_CLAUSE:"valid INMEMORY clause string"

Examples:
TRANSFORM=INMEMORY_CLAUSE:\"NO INMEMORY\"
TRANSFORM=INMEMORY_CLAUSE:\"INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY LOW\"
TRANSFORM=INMEMORY_CLAUSE:\"INMEMORY MEMCOMPRESS FOR DML\"

1.25 VIEWS_AS_TABLES

The VIEWS_AS_TABLES parameter allows Data Pump to export the specified 
views as if they were tables.
The table structure matches the view columns, with the data being the rows 
returned by the query supporting the views.

#For Expdp
Identifies one or more views to be exported as tables.
For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR 
dumpfile=exp_view_as_table.dmp VIEWS_AS_TABLES=USR_DEPTEST1.MY_VIEW

Export: Release 12.1.0.2.0 - Production on Sun Sep 15 11:30:22 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/********@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_view_as_table.dmp VIEWS_AS_TABLES=USR_DEPTEST1.MY_VIEW
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported "USR_DEPTEST1"."MY_VIEW" 21.44 KB 99 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/u03/app/oracle/admin/pdb2/dpdump/exp_view_as_table.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Sun Sep 15 11:30:31 2019 elapsed 0 00:00:08

#For Impdp
Identifies one or more views to be imported as tables.
For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.
Note that in network import mode, a table name may be appended
to the view name.
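
A minimal sketch of a network-mode import of the view exported above, reusing the
cdb2.example.com database link from the NETWORK_LINK section; the remap to the
usr_net_link schema is purely illustrative:

$ impdp system/oracle_4U@pdb1 directory=data_pump_pdir network_link=cdb2.example.com 
views_as_tables=usr_deptest1.my_view remap_schema=usr_deptest1:usr_net_link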

1.26 Data Transportable Option

# For Expdp
TRANSPORTABLE
Specify whether transportable method can be used.
Valid keyword values are: ALWAYS and [NEVER].

TRANSPORT_FULL_CHECK
Verify storage segments of all tables [NO].

TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be unloaded.

# For Impdp

TRANSPORTABLE
Options for choosing transportable data movement.
Valid keywords are: ALWAYS and [NEVER].
Only valid in NETWORK_LINK mode import operations.

TRANSPORT_DATAFILES
List of data files to be imported by transportable mode.

TRANSPORT_FULL_CHECK
Verify storage segments of all tables [NO].
Only valid in NETWORK_LINK mode import operations.

TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
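
As a minimal sketch (not executed in this lab), a transportable tablespace export and import
could look as follows; the tablespace name tbs_tts_demo, the dump file name, and the data file
path are all illustrative, the tablespace set must be self-contained, and the dump file and the
tablespace data file must be made available to the target database before the import:

SQL> alter tablespace tbs_tts_demo read only;

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_tts_demo.dmp 
logfile=log_tts_demo.log transport_tablespaces=tbs_tts_demo transport_full_check=yes

#Copy the dump file and the tablespace data file to the target system, then run on the target:

$ impdp system/oracle_4U@pdb1 directory=data_pump_pdir dumpfile=exp_tts_demo.dmp 
transport_datafiles='/u03/oradata/pdb1/tbs_tts_demo01.dbf'

SQL> alter tablespace tbs_tts_demo read write;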

1.27 VERSION
Version of objects to export.
Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.

Version of objects to import.
Valid keywords are: [COMPATIBLE], LATEST, or any valid database version.

In 12c, the TRANSPORTABLE option can be combined with the FULL option to transport a whole database (a full transportable export):

expdp system/Password1 full=Y transportable=always version=12 directory=TEMP_DIR \
dumpfile=orcl.dmp logfile=expdporcl.log

1.28 ACCESS_METHOD

Instructs Export to use a particular method to unload data.
Valid keyword values are: [AUTOMATIC], DIRECT_PATH and EXTERNAL_TABLE.

Instructs Import to use a particular method to load data.
Valid keyword values are: [AUTOMATIC], CONVENTIONAL, DIRECT_PATH
and EXTERNAL_TABLE.
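
A minimal sketch that forces a direct path unload of one of the tables created earlier;
the dump file name is illustrative:

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_direct_path.dmp 
tables=usr_deptest1.tbl_sample_test access_method=direct_path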

1.29 PARTITION_OPTIONS
Specify how partitions should be transformed.
Valid keywords are: DEPARTITION, MERGE and [NONE].
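
A minimal sketch that reuses the dump of the partitioned table created earlier
(exp_tab_par2.dmp, which contains partition ONE); with MERGE the imported partitions are
combined into a single non-partitioned table, and the schema remap to HITESH is purely
illustrative to avoid clashing with the existing table:

$ impdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_tab_par2.dmp 
tables=system.tbl_dp_tab_export1 partition_options=merge remap_schema=system:hitesh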

1.30 LOGTIME
Specifies that messages displayed during export operations be timestamped.
Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.

Specifies that messages displayed during import operations be timestamped.
Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.

The LOGTIME parameter determines if timestamps should be included in the 
output messages from the expdp and impdp utilities.

NONE : The default value, which indicates that no timestamps should be 
       included in the output, making the output look similar to that of 
       previous versions.
STATUS : Timestamps are included in output to the console, but not in the 
         associated log file.
LOGFILE : Timestamps are included in output to the log file, but not in 
          the associated console messages.
ALL : Timestamps are included in output to the log file and console.

expdp scott/tiger@pdb1 tables=emp directory=test_dir 
dumpfile=emp.dmp logfile=expdp_emp.log logtime=all

ATTACH
Attach to an existing job.
For example, ATTACH=job_name.

JOB_NAME
Name of export/import job to create.

NOLOGFILE
Do not write log file [NO].

CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC [YES].

SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle RAC resources.

METRICS
Report additional job information to the export/import log file [NO].
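
A minimal sketch combining several of these parameters; the job name and the dump file name
are illustrative:

$ expdp system/oracle_4U@racnode2/pdb2 directory=DATA_PUMP_PDIR dumpfile=exp_named_job.dmp 
schemas=usr_deptest1 job_name=EXP_DEPTEST1 cluster=n metrics=y

#Re-attach to the named job later from another session:
$ expdp system/oracle_4U@racnode2/pdb2 attach=EXP_DEPTEST1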

#####Reference#####

===========
Data Export
===========

-bash-4.1$ expdp help=y

Export: Release 12.1.0.2.0 - Production on Sat Sep 14 19:34:10 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

------------------------------------------------------------------------------

The available keywords and their descriptions follow. Default values are listed within square brackets.

ABORT_STEP
Stop the job after it is initialized or at the indicated object.
Valid values are -1 or N where N is zero or greater.
N corresponds to the object's process order number in the master table.

ACCESS_METHOD
Instructs Export to use a particular method to unload data.
Valid keyword values are: [AUTOMATIC], DIRECT_PATH and EXTERNAL_TABLE.

ATTACH
Attach to an existing job.
For example, ATTACH=job_name.

CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC [YES].

COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.

COMPRESSION_ALGORITHM
Specify the compression algorithm that should be used.
Valid keyword values are: [BASIC], LOW, MEDIUM and HIGH.

CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.

DATA_OPTIONS
Data layer option flags.
Valid keyword values are: XML_CLOBS.

DIRECTORY
Directory object to be used for dump and log files.

DUMPFILE
Specify list of destination dump file names [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ENCRYPTION
Encrypt part or all of a dump file.
Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.

ENCRYPTION_ALGORITHM
Specify how encryption should be done.
Valid keyword values are: [AES128], AES192 and AES256.

ENCRYPTION_MODE
Method of generating encryption key.
Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].

ENCRYPTION_PASSWORD
Password key for creating encrypted data within a dump file.

ENCRYPTION_PWD_PROMPT
Specifies whether to prompt for the encryption password [NO].
Terminal echo will be suppressed while standard input is read.

ESTIMATE
Calculate job estimates.
Valid keyword values are: [BLOCKS] and STATISTICS.

ESTIMATE_ONLY
Calculate job estimates without performing the export [NO].

EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".

FILESIZE
Specify the size of each dump file in units of bytes.

FLASHBACK_SCN
SCN used to reset session snapshot.

FLASHBACK_TIME
Time used to find the closest corresponding SCN value.

FULL
Export entire database [NO].

HELP
Display Help messages [NO].

INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.

JOB_NAME
Name of export job to create.

KEEP_MASTER
Retain the master table after an export job that completes successfully [NO].

LOGFILE
Specify log file name [export.log].

LOGTIME
Specifies that messages displayed during export operations be timestamped.
Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.

METRICS
Report additional job information to the export log file [NO].

NETWORK_LINK
Name of remote database link to the source system.

NOLOGFILE
Do not write log file [NO].

PARALLEL
Change the number of active workers for current job.

PARFILE
Specify parameter file name.

QUERY
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".

REMAP_DATA
Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REUSE_DUMPFILES
Overwrite destination dump file if it exists [NO].

SAMPLE
Percentage of data to be exported.

SCHEMAS
List of schemas to export [login schema].

SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle RAC resources.

SOURCE_EDITION
Edition to be used for extracting metadata.

STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.

TABLES
Identifies a list of tables to export.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.

TABLESPACES
Identifies a list of tablespaces to export.

TRANSPORTABLE
Specify whether transportable method can be used.
Valid keyword values are: ALWAYS and [NEVER].

TRANSPORT_FULL_CHECK
Verify storage segments of all tables [NO].

TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be unloaded.

VERSION
Version of objects to export.
Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.

VIEWS_AS_TABLES
Identifies one or more views to be exported as tables.
For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.

------------------------------------------------------------------------------

The following commands are valid while in interactive mode.
Note: abbreviations are allowed.

ADD_FILE
Add dumpfile to dumpfile set.

CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.

EXIT_CLIENT
Quit client session and leave job running.

FILESIZE
Default filesize (bytes) for subsequent ADD_FILE commands.

HELP
Summarize interactive commands.

KILL_JOB
Detach and delete job.

PARALLEL
Change the number of active workers for current job.

REUSE_DUMPFILES
Overwrite destination dump file if it exists [NO].

START_JOB
Start or resume current job.
Valid keyword values are: SKIP_CURRENT.

STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.

STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keyword values are: IMMEDIATE

===========
Data Import
===========
-bash-4.1$ impdp help=y

Import: Release 12.1.0.2.0 - Production on Sat Sep 14 19:39:20 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

------------------------------------------------------------------------------

The available keywords and their descriptions follow. Default values are listed within square brackets.

ABORT_STEP
Stop the job after it is initialized or at the indicated object.
Valid values are -1 or N where N is zero or greater.
N corresponds to the object's process order number in the master table.

ACCESS_METHOD
Instructs Import to use a particular method to load data.
Valid keyword values are: [AUTOMATIC], CONVENTIONAL, DIRECT_PATH
and EXTERNAL_TABLE.

ATTACH
Attach to an existing job.
For example, ATTACH=job_name.

CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC [YES].

CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.

DATA_OPTIONS
Data layer option flags.
Valid keywords are: DISABLE_APPEND_HINT and SKIP_CONSTRAINT_ERRORS.

DIRECTORY
Directory object to be used for dump, log and SQL files.

DUMPFILE
List of dump files to import from [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ENCRYPTION_PASSWORD
Password key for accessing encrypted data within a dump file.
Not valid for network import jobs.

ENCRYPTION_PWD_PROMPT
Specifies whether to prompt for the encryption password [NO].
Terminal echo will be suppressed while standard input is read.

ESTIMATE
Calculate network job estimates.
Valid keywords are: [BLOCKS] and STATISTICS.

EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".

FLASHBACK_SCN
SCN used to reset session snapshot.

FLASHBACK_TIME
Time used to find the closest corresponding SCN value.

FULL
Import everything from source [YES].

HELP
Display help messages [NO].

INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.

JOB_NAME
Name of import job to create.

KEEP_MASTER
Retain the master table after an import job that completes successfully [NO].

LOGFILE
Log file name [import.log].

LOGTIME
Specifies that messages displayed during import operations be timestamped.
Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.

MASTER_ONLY
Import just the master table and then stop the job [NO].

METRICS
Report additional job information to the import log file [NO].

NETWORK_LINK
Name of remote database link to the source system.

NOLOGFILE
Do not write log file [NO].

PARALLEL
Change the number of active workers for current job.

PARFILE
Specify parameter file.

PARTITION_OPTIONS
Specify how partitions should be transformed.
Valid keywords are: DEPARTITION, MERGE and [NONE].

QUERY
Predicate clause used to import a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".

REMAP_DATA
Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REMAP_DATAFILE
Redefine data file references in all DDL statements.

REMAP_SCHEMA
Objects from one schema are loaded into another schema.

REMAP_TABLE
Table names are remapped to another table.
For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.

REMAP_TABLESPACE
Tablespace objects are remapped to another tablespace.

REUSE_DATAFILES
Tablespace will be initialized if it already exists [NO].

SCHEMAS
List of schemas to import.

SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle RAC resources.

SKIP_UNUSABLE_INDEXES
Skip indexes that were set to the Index Unusable state.

SOURCE_EDITION
Edition to be used for extracting metadata.

SQLFILE
Write all the SQL DDL to a specified file.

STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.

STREAMS_CONFIGURATION
Enable the loading of Streams metadata [YES].

TABLE_EXISTS_ACTION
Action to take if imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.

TABLES
Identifies a list of tables to import.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.

TABLESPACES
Identifies a list of tablespaces to import.

TARGET_EDITION
Edition to be used for loading metadata.

TRANSFORM
Metadata transform to apply to applicable objects.
Valid keywords are: DISABLE_ARCHIVE_LOGGING, INMEMORY, INMEMORY_CLAUSE,
LOB_STORAGE, OID, PCTSPACE, SEGMENT_ATTRIBUTES, STORAGE, and
TABLE_COMPRESSION_CLAUSE.

TRANSPORTABLE
Options for choosing transportable data movement.
Valid keywords are: ALWAYS and [NEVER].
Only valid in NETWORK_LINK mode import operations.

TRANSPORT_DATAFILES
List of data files to be imported by transportable mode.

TRANSPORT_FULL_CHECK
Verify storage segments of all tables [NO].
Only valid in NETWORK_LINK mode import operations.

TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.

VERSION
Version of objects to import.
Valid keywords are: [COMPATIBLE], LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.

VIEWS_AS_TABLES
Identifies one or more views to be imported as tables.
For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.
Note that in network import mode, a table name may be appended
to the view name.

------------------------------------------------------------------------------

The following commands are valid while in interactive mode.
Note: abbreviations are allowed.

CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.

EXIT_CLIENT
Quit client session and leave job running.

HELP
Summarize interactive commands.

KILL_JOB
Detach and delete job.

PARALLEL
Change the number of active workers for current job.

START_JOB
Start or resume current job.
Valid keywords are: SKIP_CURRENT.

STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.

STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keywords are: IMMEDIATE.

[2] SqlLoader
-> New Features in 12.1.0.2

$ sqlldr

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 09:40:23 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Usage: SQLLDR keyword=value [,keyword=value,...]

Valid Keywords:
userid -- ORACLE username/password
control -- control file name
log -- log file name
bad -- bad file name
data -- data file name
discard -- discard file name
discardmax -- number of discards to allow (Default all)
skip -- number of logical records to skip (Default 0)
load -- number of logical records to load (Default all)
errors -- number of errors to allow (Default 50)
rows -- number of rows in conventional path bind array or between direct path data saves
(Default: Conventional path 64, Direct path all)
bindsize -- size of conventional path bind array in bytes (Default 256000)
silent -- suppress messages during run (header,feedback,errors,discards,partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- file to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
readsize -- size of read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
columnarrayrows -- number of rows for direct path column array (Default 5000)
streamsize -- size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
date_cache -- size (in entries) of date conversion cache (Default 1000)
no_index_errors -- abort load on any index errors (Default FALSE)
partition_memory -- direct path partition memory limit to start spilling (kb) (Default 0)
table -- Table for express mode load
date_format -- Date format for express mode load
timestamp_format -- Timestamp format for express mode load
terminated_by -- terminated by character for express mode load
enclosed_by -- enclosed by character for express mode load
optionally_enclosed_by -- optionally enclosed by character for express mode load
characterset -- characterset for express mode load
degree_of_parallelism -- degree of parallelism for express mode load and external table load
trim -- trim type for express mode load and external table load
csv -- csv format data files for express mode load
nullif -- table level nullif clause for express mode load
field_names -- field names setting for first record of data files for express mode load
dnfs_enable -- option for enabling or disabling Direct NFS (dNFS) for input data files (Default FALSE)
dnfs_readbuffers -- the number of Direct NFS (dNFS) read buffers (Default 4)

PLEASE NOTE: Command-line parameters may be specified either by
position or by keywords. An example of the former case is 'sqlldr
scott/tiger foo'; an example of the latter is 'sqlldr control=foo
userid=scott/tiger'. One may specify parameters by position before
but not after parameters specified by keywords. For example,
'sqlldr scott/tiger control=foo logfile=log' is allowed, but
'sqlldr scott/tiger control=foo log' is not, even though the
position of the parameter 'log' is correct.


The following SQL*Loader features are new in this release:

[1] A new SQL*Loader express mode provides a streamlined way to quickly and 
easily load tables that have simple column data types and data files that 
contain only delimited character data.
See SQL*Loader Express .

To activate SQL*Loader express mode, you can simply specify your username 
and a table name. SQL*Loader prompts you for a password, for example:

sqlldr username TABLE=employees

If you activate SQL*Loader express mode by specifying only the TABLE parameter,
then SQL*Loader uses default settings for a number of other parameters. You can
override most of the default values by specifying additional parameters on the command line.
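
For example, a sketch that stays in express mode but overrides a few of the defaults using the
express-mode keywords from the sqlldr help above (the scott account and the emp table are
illustrative; by default express mode looks for a data file named after the table, here emp.dat):

$ sqlldr scott TABLE=emp terminated_by=',' optionally_enclosed_by='"' date_format='DD-MON-YYYY'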

SQL*Loader express mode generates a log file that includes a SQL*Loader 
control file. The log file also contains SQL scripts for creating the external 
table and performing the load using a SQL INSERT AS SELECT statement. Neither 
the control file nor the SQL scripts are used by SQL*Loader express mode. 
They are made available to you in case you want to use them as a starting 
point to perform operations using regular SQL*Loader or standalone external 
tables; the control file is for use with SQL*Loader, whereas the
SQL scripts are for use with standalone external table operations.

[2] The maximum size of the VARCHAR2, NVARCHAR2, and RAW data types has 
been increased to 32 KB when the COMPATIBLE initialization parameter is 
set to 12.0 or later and the MAX_STRING_SIZE initialization parameter is 
set to EXTENDED. SQL*Loader supports this new maximum size.
See "SQL*Loader Data Types" for information about SQL*Loader data types.

[3] You can now enable auditing of SQL*Loader operations that use direct 
path mode. See "Auditing SQL*Loader Operations That Use Direct Path Mode".
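
A minimal sketch of such a policy, assuming unified auditing is in use (the
policy name is invented for illustration):

SQL> create audit policy audit_dpload_pol actions component=direct_load load;
SQL> audit policy audit_dpload_pol by hr;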

[4] The SQL*Loader control file has several new clauses available that
simplify its use. Additionally, some existing parameters have been modified
and new parameters have been added to streamline load execution.

4.1 You can specify wildcard characters on the INFILE clause. 
See "Specifying Data Files".
An asterisk (*) represents multiple characters and a question mark 
(?) represents a single character.
For example:
INFILE 'emp*.dat'
INFILE 'm?emp.dat'

If your data is in the control file itself, then use an asterisk (*) instead
of the file name. If you have data both in the control file and in data files,
then you must specify the asterisk first in order for the data to be read.

4.2 You can direct SQL*Loader to access the data files as comma-separated-values
(CSV) format files.

FIELDS CSV [WITH EMBEDDED|WITHOUT EMBEDDED] [FIELDS TERMINATED BY ','] 
[OPTIONALLY ENCLOSED BY '"']

When the CSV clause is used, only delimitable data types are allowed as 
control file fields.
Delimitable data types include CHAR, datetime, interval, and numeric EXTERNAL.

The TERMINATED BY and ENCLOSED BY clauses cannot be used at the field level 
when the CSV clause is specified.

When the CSV clause is specified, normal SQL*Loader blank trimming is done 
by default. You can specify PRESERVE BLANKS to avoid trimming of spaces. 
Or, you can use the SQL functions LTRIM and RTRIM in the field
specification to remove left and/or right spaces.

When the CSV clause is specified, the INFILE * clause is not allowed.
This means that no data can be included in the SQL*Loader control file itself.

The following sample SQL*Loader control file uses the FIELDS CSV clause 
with the default delimiters:

LOAD DATA
INFILE "mydata.dat"
TRUNCATE
INTO TABLE mytable
FIELDS CSV WITH EMBEDDED
TRAILING NULLCOLS
(
c0 char,
c1 char,
c2 char
)

4.3 At the table level, you can specify a datetime format to apply to all 
datetime fields.

You can specify certain datetime formats at the table level in a 
SQL*Loader control file.
The syntax for each is as follows:

DATE FORMAT mask
TIMESTAMP FORMAT mask
TIMESTAMP WITH TIME ZONE FORMAT mask
TIMESTAMP WITH LOCAL TIME ZONE FORMAT mask

The following is an example of using the DATE FORMAT clause in a 
SQL*Loader control file. The DATE FORMAT clause is overridden by DATE at 
the field level for the hiredate and entrydate fields:

LOAD DATA
INFILE myfile.dat
APPEND
INTO TABLE EMP
FIELDS TERMINATED BY ","
DATE FORMAT "DD-Month-YYYY"
(empno,
ename,
job,
mgr,
hiredate DATE,
sal,
comm,
deptno,
entrydate DATE)

4.4 At the table level, you can specify NULLIF to apply to all character 
fields. See "Specifying a NULLIF Clause At the Table Level".

NULLIF {=|!=}{"char_string"|x'hex_string'|BLANKS}

SQL*Loader checks the specified value against the value of the field in 
the record. If there is a match using the equal or not equal specification,
then the field is set to NULL for that row. Any field that has a length of 
0 after blank trimming is also set to NULL.

If you do not want the default NULLIF or any other NULLIF clause applied 
to a field, you can specify NO NULLIF at the field level.
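
A minimal sketch modeled on the documentation example (not part of the lab
transcript): NAME loads as NULL because it matches the table-level NULLIF
value, while TEAM keeps the literal text because of NO NULLIF:

load data
infile *
truncate
into table tbl_sqlldr_test01
fields terminated by ';'
NULLIF = "NONE"
(id, name, surname, team char(10) NO NULLIF, salary, username, year)
begindata
6;NONE;watson;NONE;3000;jwtson;1970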

4.5 You can specify that SQL*Loader should determine the field order from 
the order of the field names in the first record of the data file. 
See "Specifying Field Order".

FIELD NAMES {FIRST FILE|FIRST FILE IGNORE|ALL FILES|ALL FILES IGNORE|NONE}
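
As an illustration (hypothetical data file, not part of the lab), with
FIELD NAMES FIRST FILE the first record supplies the field order, so the
data no longer has to arrive in the order the fields are listed in the
control file:

$ cat emp_hdr.dat
id;surname;name;team;salary;username;year
6;watson;john;mg_team;3000;jwtson;1970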

4.6 The SQL*Loader command line has new and modified parameters that help 
to streamline load execution:

The new TRIM command-line parameter allows you to override the TRIM=LDRTRIM 
default when you are using the external tables option. See "TRIM".

The new DEGREE_OF_PARALLELISM command-line parameter allows you to specify a
degree of parallelism to use for the load when the external tables option is used.

When specifying the bad, discard, and log files on the SQL*Loader command line,
you now have the option of specifying only a directory name.
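
A hypothetical command line combining these options (the parameters appear in
the help listing earlier; the directory given for BAD is just an example):

$ sqlldr HR/HR@pdb2 control=control01.ctl external_table=execute
 trim=LRTRIM degree_of_parallelism=4 bad=/home/oracle/sqlldr/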

#Test with traditional SQL*Loader

SQL> create table tbl_sqlldr_test01
2 (id number,name varchar2(10),surname varchar2(10),team varchar2(10),
salary number,username varchar2(10),year number);

Table created.

-bash-4.1$ cat empxt1.dat
1.john.watson.mg_team.3000.jwtson.1970
1;ohn;watson;mg_team;3000;jwtson;1970
1;hn;watson;mg_team;3000;jwtson;1970
1;n;watson;mg_team;3000;jwtson;1970
1;joh;watson;mg_team;3000;jwtson;1970

-bash-4.1$ cat control01.ctl
load data
infile '/home/oracle/sqlldr/empxt1.dat'
into table tbl_sqlldr_test01
fields terminated by ';' optionally enclosed by '"'
(id,name,surname,team,salary,username,year)

-bash-4.1$ sqlldr HR/HR@pdb2 control=control01.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 10:44:47 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 6

Table TBL_SQLLDR_TEST01:
4 Rows successfully loaded.

Check the log file:
control01.log
for more information about the load.

$ sqlplus HR/HR@pdb2

SQL> select * from tbl_sqlldr_test01;

ID NAME SURNAME TEAM SALARY USERNAME YEAR
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 ohn watson mg_team 3000 jwtson 1970
1 hn watson mg_team 3000 jwtson 1970
1 n watson mg_team 3000 jwtson 1970
1 joh watson mg_team 3000 jwtson 1970

#Test with SQL*Loader Express Mode

-> Records must be comma-separated (the express mode default).
-> The data file name must match the table name, with a .dat extension.

SQL> truncate table tbl_sqlldr_test01;

-bash-4.1$ cat tbl_sqlldr_test01.dat
1.john.watson.mg_team.3000.jwtson.1970
1,ohn,watson,mg_team,3000,jwtson,1970
1,hn,watson,mg_team,3000,jwtson,1970
1,n,watson,mg_team,3000,jwtson,1970
1,joh,watson,mg_team,3000,jwtson,1970

$ sqlldr HR/HR@pdb2 table=tbl_sqlldr_test01

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:00:53 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Express Mode Load, Table: TBL_SQLLDR_TEST01
Path used: External Table, DEGREE_OF_PARALLELISM=AUTO

Table TBL_SQLLDR_TEST01:
4 Rows successfully loaded.

Check the log files:
tbl_sqlldr_test01.log
tbl_sqlldr_test01_%p.log_xt
for more information about the load.

-bash-4.1$ sqlplus HR/HR@pdb2

SQL> select * from tbl_sqlldr_test01;

ID NAME SURNAME TEAM SALARY USERNAME YEAR
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 ohn watson mg_team 3000 jwtson 1970
1 hn watson mg_team 3000 jwtson 1970
1 n watson mg_team 3000 jwtson 1970
1 joh watson mg_team 3000 jwtson 1970

$ cat tbl_sqlldr_test01_6990.bad
1.john.watson.mg_team.3000.jwtson.1970

$ cat tbl_sqlldr_test01.log
SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:00:53 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Express Mode Load, Table: TBL_SQLLDR_TEST01
Data File: tbl_sqlldr_test01.dat
Bad File: tbl_sqlldr_test01_%p.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: External Table

Table TBL_SQLLDR_TEST01, loaded from every logical record.
Insert option in effect for this table: APPEND

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID FIRST * , CHARACTER
NAME NEXT * , CHARACTER
SURNAME NEXT * , CHARACTER
TEAM NEXT * , CHARACTER
SALARY NEXT * , CHARACTER
USERNAME NEXT * , CHARACTER
YEAR NEXT * , CHARACTER

Generated control file for possible reuse:
OPTIONS(EXTERNAL_TABLE=EXECUTE, TRIM=LRTRIM)
LOAD DATA
INFILE 'tbl_sqlldr_test01'
APPEND
INTO TABLE TBL_SQLLDR_TEST01
FIELDS TERMINATED BY ","
(
ID,
NAME,
SURNAME,
TEAM,
SALARY,
USERNAME,
YEAR
)
End of generated control file for possible reuse.

enable parallel DML: ALTER SESSION ENABLE PARALLEL DML

creating external table "SYS_SQLLDR_X_EXT_TBL_SQLLDR_TE"

CREATE TABLE "SYS_SQLLDR_X_EXT_TBL_SQLLDR_TE"
(
"ID" NUMBER,
"NAME" VARCHAR2(10),
"SURNAME" VARCHAR2(10),
"TEAM" VARCHAR2(10),
"SALARY" NUMBER,
"USERNAME" VARCHAR2(10),
"YEAR" NUMBER
)
ORGANIZATION external
(
TYPE oracle_loader
DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE CHARACTERSET AL32UTF8
BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'tbl_sqlldr_test01_%p.bad'
LOGFILE 'tbl_sqlldr_test01_%p.log_xt'
READSIZE 1048576
FIELDS TERMINATED BY "," LRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(
"ID" CHAR(255),
"NAME" CHAR(255),
"SURNAME" CHAR(255),
"TEAM" CHAR(255),
"SALARY" CHAR(255),
"USERNAME" CHAR(255),
"YEAR" CHAR(255)
)
)
location
(
'tbl_sqlldr_test01.dat'
)
)REJECT LIMIT UNLIMITED

executing INSERT statement to load database table TBL_SQLLDR_TEST01

INSERT /*+ append parallel(auto) */ INTO TBL_SQLLDR_TEST01
(
ID,
NAME,
SURNAME,
TEAM,
SALARY,
USERNAME,
YEAR
)
SELECT
"ID",
"NAME",
"SURNAME",
"TEAM",
"SALARY",
"USERNAME",
"YEAR"
FROM "SYS_SQLLDR_X_EXT_TBL_SQLLDR_TE"

dropping external table "SYS_SQLLDR_X_EXT_TBL_SQLLDR_TE"

Table TBL_SQLLDR_TEST01:
4 Rows successfully loaded.

Run began on Mon Sep 16 11:00:53 2019
Run ended on Mon Sep 16 11:00:53 2019

Elapsed time was: 00:00:00.60
CPU time was: 00:00:00.03

------------------------------------------
#Load data without providing a flat file
------------------------------------------
$ cat control02.ctl
load data
infile *
into table tbl_sqlldr_test02
fields terminated by ';' optionally enclosed by '"'
(id,name,surname,team,salary,username,year)
begindata
1.john.watson.mg_team.3000.jwtson.1970
2;ohn;watson;mg_team;3000;jwtson;1970
3;hn;watson;mg_team;3000;jwtson;1970
4;n;watson;mg_team;3000;jwtson;1970
5;joh;watson;mg_team;3000;jwtson;1970

$ sqlldr HR/HR@pdb2 control=control02.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:08:33 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 5

Table TBL_SQLLDR_TEST02:
4 Rows successfully loaded.

Check the log file:
control02.log
for more information about the load.

$ cat control02.bad
1.john.watson.mg_team.3000.jwtson.1970

SQL> select count(*) from tbl_sqlldr_test02;

COUNT(*)
----------
4

$ cat control02.log

----------------------------------
Load data into a non-empty table
----------------------------------
$ sqlldr HR/HR@pdb2 control=control02.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:11:53 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
SQL*Loader-601: For INSERT option, table must be empty. Error on table TBL_SQLLDR_TEST02

cp control02.ctl control03.ctl

$ cat control03.ctl
load data
infile *
append
into table tbl_sqlldr_test02
fields terminated by ';' optionally enclosed by '"'
(id,name,surname,team,salary,username,year)
begindata
1.john.watson.mg_team.3000.jwtson.1970
2;ohn;watson;mg_team;3000;jwtson;1970
3;hn;watson;mg_team;3000;jwtson;1970
4;n;watson;mg_team;3000;jwtson;1970
5;joh;watson;mg_team;3000;jwtson;1970

$ sqlldr HR/HR@pdb2 control=control03.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:12:48 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 5

Table TBL_SQLLDR_TEST02:
4 Rows successfully loaded.

Check the log file:
control03.log
for more information about the load.

$ cat control03.bad
1.john.watson.mg_team.3000.jwtson.1970

$ sqlplus HR/HR@pdb2

SQL> select count(*) from tbl_sqlldr_test02;

COUNT(*)
----------
8

--------------------------------
Truncate table before the load
--------------------------------
$ cat control04.ctl
load data
infile *
truncate
into table tbl_sqlldr_test02
fields terminated by ';' optionally enclosed by '"'
(id,name,surname,team,salary,username,year)
begindata
1.john.watson.mg_team.3000.jwtson.1970
2;ohn;watson;mg_team;3000;jwtson;1970
3;hn;watson;mg_team;3000;jwtson;1970
4;n;watson;mg_team;3000;jwtson;1970
5;joh;watson;mg_team;3000;jwtson;1970

$ sqlldr HR/HR@pdb2 control=control04.ctl

$ sqlplus HR/HR@pdb2

SQL> select count(*) from tbl_sqlldr_test02;

COUNT(*)
----------
4
--------------------------------
Load data in fixed-width format
--------------------------------
$ vi empxt2.dat
1 john watson mg_team 3000 jwtson 1970
1 ohn watson mg_team 3000 jwtson 1970
1 hn watson mg_team 3000 jwtson 1970
1 n watson mg_team 3000 jwtson 1970
1 joh watson mg_team 3000 jwtson 1970

sqlldr HR/HR@pdb2 control=control05.ctl
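
The contents of control05.ctl are not shown in the transcript; a plausible
sketch, reusing the fixed positions that control07.ctl applies to this same
file later on, would be:

load data
infile '/home/oracle/sqlldr/empxt2.dat'
truncate
into table tbl_sqlldr_test01
(id position(1:1),name position(3:6),surname position(8:13),team position(15:21),
salary position(23:26),username position(28:33),year position(35:38))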

-----------------------------
Load data from multiple files
-----------------------------
$ cp empxt1.dat empxt3.dat
$ vi control06.ctl
$ cat control06.ctl
load data
infile '/home/oracle/sqlldr/empxt1.dat'
infile '/home/oracle/sqlldr/empxt3.dat'
truncate
into table tbl_sqlldr_test01
fields terminated by ';' optionally enclosed by '"'
(id,name,surname,team,salary,username,year)

$ sqlldr HR/HR@pdb2 control=control06.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:33:31 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 5
Commit point reached - logical record count 10

Table TBL_SQLLDR_TEST01:
8 Rows successfully loaded.

Check the log file:
control06.log
for more information about the load.

-bash-4.1$ sqlplus HR/HR@pdb2
SQL> select count(*) from tbl_sqlldr_test01;

COUNT(*)
----------
8

----------------------------
Load data to multiple tables
----------------------------
$ cat control07.ctl
load data
infile '/home/oracle/sqlldr/empxt2.dat'
truncate
into table tbl_sqlldr_test01
(id position(1:1),name position(3:6),surname position(8:13),team position(15:21),salary position(23:26),username position(28:33),year position(35:38))
into table tbl_sqlldr_test02
(id position(1:1),name position(3:6),surname position(8:13),team position(15:21),salary position(23:26),username position(28:33),year position(35:38))
-bash-4.1$

$ sqlldr HR/HR@pdb2 control=control07.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:36:22 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 5

Table TBL_SQLLDR_TEST01:
5 Rows successfully loaded.

Table TBL_SQLLDR_TEST02:
5 Rows successfully loaded.

Check the log file:
control07.log
for more information about the load.

$ sqlplus HR/HR@pdb2

SQL> select count(*) from tbl_sqlldr_test01;

COUNT(*)
----------
5

SQL> r
1* select count(*) from tbl_sqlldr_test02

COUNT(*)
----------
5

---------------------------------
Using WHEN clause with SQL*Loader
---------------------------------
$ cat empxt8.dat
1 john watson mg_team 3000 jwtson 1970
1 ohn watson mg_team 3000 jwtson 1970
1 hn watson mgteam 3000 jwtson 1970
1 n watson mgteam 3000 jwtson 1970
1 joh watson mg_team 3000 jwtson 1970

-bash-4.1$ vi control08.ctl
-bash-4.1$ cat control08.ctl
load data
infile '/home/oracle/sqlldr/empxt8.dat'
truncate
into table tbl_sqlldr_test01
WHEN team = 'mg_team'
(id position(1:1),name position(3:6),surname position(8:13),team position(15:21),salary position(23:26),username position(28:33),year position(35:38))

$ sqlldr HR/HR@pdb2 control=control08.ctl

$ sqlplus HR/HR@pdb2

SQL> select count(*) from tbl_sqlldr_test01;

COUNT(*)
----------
3

SQL> select * from tbl_Sqlldr_test01;

ID NAME SURNAME TEAM SALARY USERNAME YEAR
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 john watson mg_team 3000 jwtson 1970
1 ohn watson mg_team 3000 jwtson 1970
1 joh watson mg_team 3000 jwtson 1970

$ vi empxt8.dat
-bash-4.1$ cat empxt8.dat
1 john watson mg_team 3000 jwtson 1970
2 ohn watson mg_team 3000 jwtson 1970
3 hn watson mgteam 3000 jwtson 1970
4 n watson mgteam 3000 jwtson 1970
5 joh watson mg_team 3000 jwtson 1970
-bash-4.1$
-bash-4.1$ vi control08.ctl
-bash-4.1$ cat control08.ctl
load data
infile '/home/oracle/sqlldr/empxt8.dat'
truncate
into table tbl_sqlldr_test01
WHEN ID='2'
(id position(1:1),name position(3:6),surname position(8:13),team position(15:21),salary position(23:26),username position(28:33),year position(35:38))

$ sqlldr HR/HR@pdb2 control=control08.ctl

$ sqlplus HR/HR@pdb2

SQL> select * from tbl_sqlldr_test01;

ID NAME SURNAME TEAM SALARY USERNAME YEAR
---------- ---------- ---------- ---------- ---------- ---------- ----------
2 ohn watson mg_team 3000 jwtson 1970

-----------------------
Methods of data loading
-----------------------
Two methods:
1. Conventional path (default): rows are loaded with ordinary INSERT statements
through the database buffer cache.
2. Direct path (direct=true parameter): formatted data blocks are written
directly to the data files, bypassing the database buffer cache.
This method does not enforce check constraints and does not fire INSERT triggers.

$ sqlldr HR/HR@pdb2 control=control08.ctl log=control08.log bad=control08.bad 
discard=control08.dsc direct=true
SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 16 11:47:07 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Direct

Load completed - logical record count 5.
Table TBL_SQLLDR_TEST01:
1 Row successfully loaded.

Check the log file:
control08.log
for more information about the load.
-bash-4.1$
-bash-4.1$ cat control08.dsc
1 john watson mg_team 3000 jwtson 1970
3 hn watson mgteam 3000 jwtson 1970
4 n watson mgteam 3000 jwtson 1970
5 joh watson mg_team 3000 jwtson 1970
-bash-4.1$
-bash-4.1$ cat control08.bad
cat: control08.bad: No such file or directory

----------------
Loading LOB data
----------------

SQL> create table tbl_loader_lob(id number,product varchar2(10),text CLOB);

Table created.

$ cat control09.ctl
load data
infile *
truncate
into table tbl_loader_lob
FIELDS TERMINATED BY ','
(id char(1),product char(10),clob_fname FILLER char(500),text lobfile(clob_fname)
TERMINATED BY EOF)
begindata
1,Exadata,/home/oracle/sqlldr/exadata.dat
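
The load command itself is not shown in the transcript; presumably it is run
the same way as the earlier examples:

$ sqlldr HR/HR@pdb2 control=control09.ctl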

sqlplus HR/HR@pdb2
SQL> SET LINESIZE 150
SQL> COL TEXT FORMAT A50
SQL> SELECT * FROM TBL_LOADER_LOB;
ID PRODUCT TEXT
---------- ---------- --------------------------------------------------
1 Exadata This is advanced database machine in the earth
This is advanced database machine

======================
Path to Documentation:
======================
Database Utilities
Part I Oracle Data Pump
Part II SQL*Loader

Database Utilities
11 Loading Objects, LOBs, and Collections
Examples of Loading LOB Data from LOBFILEs

Thank you for visiting this blog 🙂