Data Pump is a new feature in Oracle 10g for doing export/import, also referred to as unloading/loading data. It does have a couple of drawbacks compared to the normal export/import utilities:

1) Data Pump export/import jobs need more startup time than the old export and import utilities, so for small jobs it may be better to use the normal export/import utilities.
2) Data Pump exports data only to disk, not to tape.

Now you will have the question: why use Data Pump export/import at all?

1) Oracle claims that Data Pump lets you decrease total export time, and that import is 15 to 30 minutes faster than the normal import method.

2) You can estimate job times beforehand, i.e., how much time the export or import will take.

3) You can load one database directly from a remote instance, without an intermediate dump file.

4) You can select exactly which objects get exported or imported (fine-grained object selection).

5) All in all, it is a superset of the old export/import utility.
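As a quick sketch of points 2) to 4) above (the directory object dpdir and the database link remdb are hypothetical names, assumed to have been created already):

```shell
# 2) Estimate the export size before running the real job (no dump file is written)
expdp system/passwd FULL=y ESTIMATE_ONLY=y

# 3) Load the local database directly from a remote instance (no dump file at all)
impdp system/passwd SCHEMAS=scott NETWORK_LINK=remdb

# 4) Fine-grained object selection: export only SCOTT's EMP and DEPT tables
expdp scott/tiger DIRECTORY=dpdir DUMPFILE=scott_tabs.dmp TABLES=emp,dept
```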

The command to invoke Data Pump export is:
$ expdp username/passwd (various parameters)
$ expdp help=y   (lists all available parameters)

The Data Pump export utility unloads the data into operating system files known as dump files. It writes these files in a proprietary format, which only the Data Pump import utility can understand while loading the data from the dump files into the same or another database.
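A minimal export sketch follows. Data Pump writes dump files only to server-side directory objects, so one must exist and the user needs privileges on it; dpdir and its path are hypothetical names:

```shell
# One-time setup (as a DBA):
#   SQL> CREATE DIRECTORY dpdir AS '/u01/app/oracle/dpdump';
#   SQL> GRANT READ, WRITE ON DIRECTORY dpdir TO scott;

# Schema-mode export of SCOTT into a dump file, plus a log file of the operation
expdp scott/tiger DIRECTORY=dpdir DUMPFILE=scott.dmp LOGFILE=scott.log SCHEMAS=scott
```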

==> You can take Data Pump export dump files from one operating system and import them into a database running on a different type of platform (e.g., from Linux to Solaris).

So when you export using Data Pump, you cannot import the dump using the normal old import utility (imp).

Mechanics of the DataPump:

To perform a complete Data Pump job, the database uses several processes that are new background processes in 10g: the master process, worker processes, and shadow processes.

Master process (MCP): the Master Control Process, recognized as DMnn (at the OS level, <instance>_DMnn_<pid>). It:

1) Creates jobs and controls them
2) Creates and manages the worker processes
3) Monitors the jobs and logs the progress
4) Maintains the job state and restart information in the master table
5) Manages the necessary files, including the dump file set

So here comes the question: what is the master table?

The MCP uses a special table called the master table (MT) to log the location of the various database objects in the export dump file. The MCP maintains information such as job state and restart information in this master table.

Job state?? Restart??
Yes: you can temporarily stop your running export or import job and then, after making some alterations, restart it. We will see in detail how to do this.
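A sketch of the stop/restart cycle, assuming the hypothetical dpdir directory object exists and using priyaexp as the job name:

```shell
# Start an export with an explicit job name so we can re-attach later
expdp system/passwd FULL=y DIRECTORY=dpdir DUMPFILE=full%U.dmp JOB_NAME=priyaexp

# Press Ctrl-C to reach the interactive console, then stop the job:
#   Export> STOP_JOB=IMMEDIATE

# Later, attach to the same job from a new session and restart it:
expdp system/passwd ATTACH=priyaexp
#   Export> START_JOB
#   Export> STATUS
```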

Where does this master table reside? Oracle creates the master table in the schema of the user who is running the Data Pump job, at the beginning of every export job.

So the master table will have information such as the parameters of the export job, the status of all worker processes, and the location of the objects inside the dump file set.
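Because the master table belongs to a real schema, you can watch running jobs from SQL*Plus; the DBA_DATAPUMP_JOBS view lists each job, and the job name is also the master-table name. A sketch:

```shell
sqlplus -s system/passwd <<'EOF'
-- Every row here corresponds to a Data Pump job and its master table
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
EOF
```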

Here the dump file is the same idea as the old export/import dump file, and we can also have a log file that records the complete operation. One more file used by Data Pump is the SQLFILE.
SQLFILE???? Yup: Data Pump import has a special parameter called SQLFILE, which writes all the DDL statements from the import job to a file instead of executing them.
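A sketch of SQLFILE in action, reusing the hypothetical dpdir directory object and a scott.dmp dump file from an earlier export:

```shell
# Writes the DDL contained in the dump to scott_ddl.sql; no data is imported
impdp system/passwd DIRECTORY=dpdir DUMPFILE=scott.dmp SQLFILE=scott_ddl.sql
```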

Here the job name and the master table name will both be the same.
So what could the job name be? You can use the JOB_NAME parameter to provide your own job name (JOB_NAME=priyaexp). This parameter is optional; if you don't specify it, Data Pump generates a unique system name of the form SYS_<operation>_<mode>_NN.

Suppose the username is SYSTEM and you are doing a full database export: the job (and master table) will be named SYS_EXPORT_FULL_01, owned by SYSTEM.
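For example, letting Data Pump pick the default name (dpdir is again a hypothetical directory object):

```shell
expdp system/passwd FULL=y DIRECTORY=dpdir DUMPFILE=full.dmp
# The log header then shows something like:
#   Starting "SYSTEM"."SYS_EXPORT_FULL_01": ...
```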

More on Data Pump coming soon. If you like this, don't forget to leave your comments!