Learn How To Use Data Pump (Syntax Structure)

  Data Pump Export is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.
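
  As a minimal sketch of that round trip, the two commands below export a schema into a dump file set and then load the same set back with Data Pump Import; the directory object dpump_dir1, the file names, and the hr login are placeholders for illustration, not values from this article. When no mode parameter is specified, expdp defaults to exporting the schema of the connecting user, and both utilities prompt for the password when it is not given on the command line.

  expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_exp.log
  impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_imp.log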

  Data Pump Export Modes (example commands for each mode are shown after this list):

  • Full Export Mode (FULL={y | n})

  Exports the entire database. This mode requires that you have the EXP_FULL_DATABASE role.

  • Schema Mode (SCHEMAS=schema_name [, ...])

  If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.

  • Table Mode (TABLES=[schema_name.]table_name[:partition_name] [, ...])

  All specified tables must reside in a single schema.

  • Tablespace Mode (TABLESPACES=tablespace_name [, ...])

  If a table is unloaded, its dependent objects are also unloaded. Both object metadata and data are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported.

  • Transportable Tablespace Mode (TRANSPORT_TABLESPACES=tablespace_name [, ...])

  In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is unloaded. This allows the tablespace datafiles to be copied to another Oracle database and incorporated using transportable tablespace import. This mode requires that you have the EXP_FULL_DATABASE role.

  Note:

  You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.
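
  The following commands are one illustrative way to invoke each export mode; the login names (system, hr), the directory object dpump_dir1, and the dump file names are assumptions for these examples and should be replaced with values that exist in your environment.

  Full mode:
  expdp system FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full.dmp LOGFILE=full_exp.log

  Schema mode:
  expdp system SCHEMAS=hr,oe DIRECTORY=dpump_dir1 DUMPFILE=schemas.dmp

  Table mode (a single partition can also be named with the :partition_name syntax):
  expdp hr TABLES=hr.employees,hr.departments DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp

  Tablespace mode:
  expdp system TABLESPACES=users DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp

  Transportable tablespace mode (the listed tablespaces must be read-only while the metadata is exported):
  expdp system TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=y DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp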

 

Syntax Diagrams for Data Pump Export:

  (The diagram images are not reproduced here. The Export diagrams are named ExpInit, ExpStart, ExpModes, ExpOpts, ExpFileOpts, and ExpDynOpts.)

Syntax Diagrams for Data Pump Import:

  (The diagram images are not reproduced here. The Import diagrams are named ImpInit, ImpStart, ImpModes, ImpOpts, ImpOpts_Cont, ImpSourceFileOpts, ImpNetworkOpts, and ImpDynOpts.)
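
  To relate a command line to the diagram names above, consider the sketch below; the grouping is an assumption based on the usual layout of the Oracle Database Utilities documentation, and the schema, directory object, and file names are placeholders.

  expdp hr SCHEMAS=hr CONTENT=ALL PARALLEL=2 DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp LOGFILE=hr_exp.log

  Roughly, "expdp hr" corresponds to ExpInit/ExpStart (invoking the client and connecting), SCHEMAS=hr to ExpModes, CONTENT and PARALLEL to ExpOpts, and DIRECTORY, DUMPFILE, and LOGFILE to ExpFileOpts. ExpDynOpts and ImpDynOpts describe the interactive-command mode (reached by pressing Ctrl+C at a running client), which accepts commands such as STATUS, STOP_JOB, START_JOB, and KILL_JOB.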

  If you are familiar with the original Export (exp) and Import (imp) utilities, it is important to understand that many of the concepts behind them do not apply to Data Pump Export (expdp) and Data Pump Import (impdp). In particular:

  • Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.
  • Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations (a directory object example appears after this list).
  • The Data Pump Export and Import modes operate symmetrically, whereas original export and import did not always exhibit this behavior. For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This produces the same results as an export with SCHEMAS=HR, followed by an import with FULL=Y.
  • Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets and the information in the log files is more variable.
  • Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.
  • Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
  • At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.
  • There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
  • Sequential media, such as tapes and pipes, are not supported.
  • The Data Pump method for moving data between different database versions is different than the method used by original Export/Import. With original Export, you had to run an older version of Export (exp) to produce a dump file that was compatible with an older database version. With Data Pump, you can use the current Export (expdp) version and simply use the VERSION parameter to specify the target database version. See Moving Data Between Different Database Versions.
  • When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.
  • Data Pump Export and Import consume more undo tablespace than original Export and Import. This is due to additional metadata queries during export and some relatively long-running master table queries during import. As a result, for databases with large amounts of metadata, you may receive an ORA-01555: snapshot too old error. To avoid this, consider adding additional undo tablespace or increasing the value of the UNDO_RETENTION initialization parameter for the database.
  • If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. By contrast, the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.
  • Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversion is not supported by Data Pump is when using transportable tablespaces.
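
  As a sketch of three of the points above (directory objects, the VERSION parameter, and loading into existing tables with TABLE_EXISTS_ACTION), the statements and commands below show how they might be used together; the directory name dp_dir, its path, the hr schema, and the 11.2 target release are assumptions for illustration only.

  Create a directory object and grant access to it (run in the database as a suitably privileged user):

  CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
  GRANT READ, WRITE ON DIRECTORY dp_dir TO hr;

  Export from a newer database so that the dump file set can be imported into an 11.2 database:

  expdp hr SCHEMAS=hr VERSION=11.2 DIRECTORY=dp_dir DUMPFILE=hr_112.dmp LOGFILE=hr_112_exp.log

  Import into existing tables by appending rows; as noted above, if any row violates an active constraint, the load is discontinued and no data is loaded:

  impdp hr SCHEMAS=hr TABLE_EXISTS_ACTION=APPEND DIRECTORY=dp_dir DUMPFILE=hr_112.dmp LOGFILE=hr_112_imp.log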

For more information, see the following article; it lists more details about the expdp/impdp parameters.

http://hi.baidu.com/goylsf/item/14228c342dda9b352e0f81b0

Original article: https://www.cnblogs.com/assassinann/p/expdp_impdp.html