Nobody tagged me, but here are 5 things about Data Pump that I'd like to have ready in my brain when someone asks me.
1. What It Is
Data Pump is a utility that moves data and metadata.
To the user it looks like the original export/import (exp/imp), but they are separate utilities, and their dump files are not compatible with each other.
Data Pump runs on the server. This helps performance, since data doesn't have to move back and forth across the network between client and server.
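For flavor, here's a minimal sketch of what an export and import look like from the command line. The connect strings, schema name, and directory name are all placeholders, not anything from a real system:

```shell
# Export the (hypothetical) HR schema to a dump file on the server.
# DPUMP_DIR is an Oracle directory object, not an OS path (more on that below).
expdp system@orclpdb schemas=HR directory=DPUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

# Import it into another database; the dump file must already sit
# in that server-side directory.
impdp system@otherpdb schemas=HR directory=DPUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log
```

Both commands will prompt for the password; the dump file is written and read on the server, never streamed through the client.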
2. Dump Files
Original export (exp) dump files are essentially a bunch of DDL and INSERT statements.
Data Pump dump files are in a binary format, very similar to the format Oracle uses for data stored in datafiles inside tablespaces.
You may export/import tables, schemas, or the whole database.
You may export/import just metadata, or metadata and data.
You may run a Data Pump export in estimate-only mode to determine how much space the dump file will require.
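The estimate-only run looks like this (connect string and schema are placeholders). Note that with ESTIMATE_ONLY=YES you don't supply a DUMPFILE, since nothing gets written:

```shell
# Ask Data Pump how big the dump would be, without writing one.
# ESTIMATE=BLOCKS (the default) sizes from block counts;
# ESTIMATE=STATISTICS uses optimizer statistics instead.
expdp system@orclpdb schemas=HR estimate_only=YES estimate=BLOCKS logfile=hr_est.log
```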
3. Directories
You need to create a directory object on the server to receive the dump files.
The schema that will be running Data Pump needs read/write privileges on that directory.
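Setting that up is a couple of statements in SQL*Plus. The OS path and the grantee here are placeholders:

```shell
# Run as a DBA; the directory object maps a name to a server-side OS path.
sqlplus -s / as sysdba <<'SQL'
CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
-- The schema driving expdp/impdp needs both privileges.
GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;
SQL
```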
4. Jobs
You can disconnect from and reconnect to a Data Pump job without stopping it.
You can attach to a running Data Pump job just to check its status.
A Data Pump job can be restarted if it fails.
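Reattaching is done with the ATTACH parameter. The job name below follows the default naming pattern but is a placeholder:

```shell
# Reattach to a running (or stopped) job by name.
expdp system@orclpdb attach=SYS_EXPORT_SCHEMA_01

# At the interactive Export> prompt you can then type, e.g.:
#   STATUS     -- show progress
#   STOP_JOB   -- pause the job, keeping the master table
#   START_JOB  -- resume a stopped or failed job
#   KILL_JOB   -- abort the job and drop the master table
```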
Back to directories for a second: you can check that a directory object exists in the dba_directories view.
You can check the privileges a user has on the directory in the dba_tab_privs view; the directory name appears in the table_name column.
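Both checks in one sitting (the directory name is the placeholder from earlier):

```shell
sqlplus -s / as sysdba <<'SQL'
-- Does the directory object exist, and where does it point?
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'DPUMP_DIR';
-- Who can read/write it? The directory name shows up in TABLE_NAME.
SELECT grantee, privilege
  FROM dba_tab_privs
 WHERE table_name = 'DPUMP_DIR';
SQL
```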
5. Architecture
The major components of Data Pump are:
Master table – holds the job info while the job is running.
Processes – including:
Master process – controls execution of the job.
Client process – the expdp and impdp utilities themselves.
Shadow process – creates the master table and the AQ queues.
Worker processes – do the actual loading and unloading.
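One handy way to see this at work: while a job runs, its master table exists as a regular table (named after the job) in the job owner's schema, and the job itself shows up in the dba_datapump_jobs view:

```shell
sqlplus -s / as sysdba <<'SQL'
-- List Data Pump jobs and their current state
-- (EXECUTING, STOP PENDING, NOT RUNNING, etc.).
SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;
SQL
```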
There are 8 (or maybe 42) more things that I know about and would love to be able to talk about without stumbling and sounding like an idiot! More posts to follow.