Tips for importing a large DB dump

First, the query I use to resize/shrink the database's datafiles and free disk space before the import:

-- Generate "alter database datafile ... resize" commands that shrink each
-- datafile down to its high-water mark (HWM), the highest allocated block.
SELECT a.file#, a.name, a.bytes/1024/1024 CurrentMB,
       ceil(HWM * a.block_size/1024/1024) ResizeTo,
       (a.bytes - HWM * a.block_size)/1024/1024 ReleaseMB,
       'alter database datafile ''' || a.name || ''' resize ' ||
       ceil(HWM * a.block_size/1024/1024) || 'M;' ResizeCMD
FROM v$datafile a,
     (SELECT file_id, MAX(block_id + blocks - 1) HWM  -- highest used block per file
      FROM dba_extents
      WHERE file_id IN
            (SELECT b.file# FROM v$tablespace a, v$datafile b
             WHERE a.ts# = b.ts# AND a.name LIKE '%%')  -- put a tablespace name between the wildcards to limit the scope
      GROUP BY file_id) b
WHERE a.file# = b.file_id(+)
AND (a.bytes - HWM * a.block_size) > 0  -- only datafiles with space to release
ORDER BY 5;
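
The ResizeCMD column holds ready-to-run statements; check the ReleaseMB column, then execute them in SQL*Plus. A generated command looks like the following (the file name and size here are made up for illustration):

 alter database datafile '/u01/oradata/orcl/users01.dbf' resize 512M;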

When importing the dump with the imp command, set the following parameters:

 INDEXES=N CONSTRAINTS=N COMMIT=Y

INDEXES=y/n specifies whether to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique-constraint indexes are re-created by Import regardless of the setting of this parameter.

CONSTRAINTS=y/n specifies whether to import table constraints. The default is to import them; set the parameter to n if you do not want constraints imported.

Specifying COMMIT=y makes Import commit after each array insert rather than once per table, which prevents rollback segments from growing inordinately large and improves the performance of large imports. COMMIT=y is also advisable if the table has a uniqueness constraint: if the import is restarted, any rows that have already been imported are rejected with a recoverable error.
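
Putting the parameters together, a complete command line might look like the sketch below; the connection string, dump file name, and BUFFER value are illustrative, not from the original post:

 imp system/manager@ORCL FILE=big_dump.dmp FULL=Y BUFFER=10485760 \
   INDEXES=N CONSTRAINTS=N COMMIT=Y LOG=import.log

BUFFER sizes the array-insert buffer in bytes; with COMMIT=y, Import commits after each buffer rather than once per table, so a reasonably large buffer keeps the commit overhead down.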

Reference: http://docs.oracle.com/cd/B10501_01/server.920/a96652/ch02.htm

Original post: https://www.cnblogs.com/feishunji/p/2663245.html