module16-backup_recover.htm; Updated July 8, 2013; Reference: Oracle Database Backup and Recovery User's Guide 11g Release 2 (11.2) E10642-05





        Learn basic concepts related to backup and recovery operations.

        Perform normal backups (operating system) including cold backup (offline) and hot backup (online).

        Create exports and read imports for logical backups.

        Understand how archive logging of redo logs is incorporated into recovery operations.




Backup and recovery is based on a threefold recovery methodology consisting of: (1) exports and imports, (2) normal backups, and (3) the use of archive logging of redo logs. 


Types of Database Failure


The common types of failure are:

        Statement failure.  A logical failure in the handling of a statement in an Oracle program - for example, a SQL statement with invalid syntax or one that violates a constraint.

        Process failure.  This is a user process failure such as abnormal disconnection or process termination (someone clicks the close button on a telnet session when they did not intend to disconnect).

        Instance failure.  Some problem prevents the Oracle instance from functioning - for example, SGA memory cannot be allocated or a background process fails.

        User or application error.  User accidentally deletes data that was not to be deleted; or the application program causes a similar error.

        Media failure.  A physical problem such as a disk head crash that causes the loss of data on a disk drive.



What is Backup and Recovery?

A backup is a copy of data that provides a means for recovering a database after a disaster.

There are two types of backups:  physical and logical.

        A physical backup is the creation of copies of critical physical database files.

        A logical backup is the use of the Oracle Data Pump Export or Oracle Export utility to extract specific data and store that data in an export binary file.


A physical backup can be made by using either the Recovery Manager utility program or operating system utilities such as the UNIX cp (copy) command to copy files to a backup location.


Restore -- reconstruct an Oracle database by copying backup files to the original file locations of an Oracle database.


Recover -- update a restored datafile by applying redo records from redo logs to bring a database back to the point in time where a failure occurred.


When a database is recovered, it is first restored from a physical backup, then redo logs are used to roll forward to the point of failure.  This is illustrated in the figure shown here.



The use of Oracle's Recovery Manager (RMAN) utility also enables you to recover restored datafiles using incremental backups, which are backups of a datafile that contain only blocks that changed after the last backup.


Crash recovery/instance recovery are performed by Oracle automatically after an instance fails and is restarted.


Instance Recovery


Instance recovery is an automatic procedure that includes two operations:

        Rolling the datafiles forward to a more current time by applying online redo records.

        Rolling back all changes made in uncommitted transactions to their original state.


Media Recovery


Media recovery (replacement of a failed hard drive, for example) requires the DBA to use recovery commands.

        The SQL*Plus commands RECOVER or ALTER DATABASE RECOVER are used to apply archived redo logs to datafiles being recovered.

        Use this approach to recover a lost data file:

o   Copy the lost file from the previous physical backup using operating system commands such as the UNIX cp command.

o   Mount the database (STARTUP MOUNT) and issue the ALTER DATABASE RECOVER command.

o   Following this, alter the database to the open stage: ALTER DATABASE OPEN.

        RMAN (recovery manager) can be used to apply archived redo logs or incremental backups to datafiles being recovered.
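The non-RMAN approach described above can be sketched as follows. The backup location and datafile name here are assumptions for illustration, not the actual course files:

$ cp /u02/backup/USER350data01.dbf /u02/student/dbockstd/oradata/USER350data01.dbf
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RECOVER DATAFILE '/u02/student/dbockstd/oradata/USER350data01.dbf';
SQL> ALTER DATABASE OPEN;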


System Change Number


The SCN (system change number) is an ever-increasing internal timestamp.  Oracle uses this to identify a committed version of the database.

        Each new committed transaction requires Oracle to record a new SCN.

        The SCN can be used to perform an incomplete recovery to a specific point in time.

        The SCN is displayed in the alert log file.

        Each control file, datafile header, and redo log record stores an SCN.

        The redo log files have a log sequence number, a low SCN, and a high SCN.

        The low SCN records the lowest SCN in the log file and the high SCN records the highest SCN in the log file.
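You can observe these SCN values from SQL*Plus. For example (both queries assume DBA privileges; output varies by database):

SQL> SELECT current_scn FROM v$database;
SQL> SELECT sequence#, first_change#, next_change# FROM v$archived_log;

Here first_change# and next_change# report the low and high SCN boundaries of each archived log.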


Archive Logs


Redo logs store all transactions that alter the database - all committed updates, inserts, and deletes of tables, structures, or data.

        When data changes are made to Oracle tables, indexes, and other objects, Oracle records both the original and new values of the objects to the redo log buffer in memory.  This is a redo record.

        Oracle records both committed and uncommitted changes in redo log buffers.

        The redo log buffer records are written to the online redo log file (see earlier notes for details on this activity). 

        Recall there are at least two online redo log file groups used in a circular fashion.


When archiving is disabled, only data in the current online redo logs can be recovered. When the system cycles through all redo logs, old ones are reused, destroying the record of earlier database modifications.


When archiving is enabled, redo logs are written out to storage before reuse, allowing recovery to a specific point in time since the last full cold backup.


Under Oracle, redo logs are specified in groups; each group is archived together.


Redo logs cannot be used to recover a database brought back from a full export.


Simple Backup and Recovery Strategy


There are just a few basic principles you need to follow for an effective backup and recovery strategy.  These are:


1.  Maintain multiple copies of the online redo logs (run multiplexed copies on different disks).


2.  Archive the redo logs to multiple locations or make frequent backups of your archived redo logs.


3.  Maintain multiple, concurrent copies of your control file using Oracle multiplexing in conjunction with operating system mirroring.


4.  Backup datafiles (these files include all tablespaces), control files, and archived redo logs frequently (but not the online redo log files).  Optionally, backup the init.ora and config.ora files.  Store them in a safe place.

Control Files


The control file is a binary file that contains the following:

        The operating system level filename of every file that constitutes the database.

        The database name.

        The database creation timestamp.

        Names of the online and archived redo log files.

        A checkpoint record identifying the point in the active redo log file up to which all database changes have been saved to the datafiles.

        Information on backups if the Recovery Manager utility was used.


The control file is read whenever an Oracle database is mounted - this enables the system to identify the datafiles and online redo log files to be opened for database operation.

        If, during this mount process, the system identifies that the database has physically changed (new datafile or new redo log file), then Oracle modifies the database's control file to reflect the change.

        The control file checkpoint record stores the highest SCN (system change number -- see above) of all changes to data blocks that have been written to disk by the DBWR process.

        If there is a discrepancy between the SCN in the datafile header and the SCN stored in the control file, Oracle will require media recovery.

Undo Segments


Undo segments store information about a data block before it is changed.

        These old data values represent data changed by uncommitted transactions (even though DBWn may already have written the changed blocks to the datafiles).

        Oracle uses undo segment information during database recovery to undo uncommitted changes that are applied from the redo log files to the datafiles. 

        Thus the redo log file records are applied to the datafiles during recovery, then the undo segments are used to undo uncommitted changes.

Online Redo Log Files


Each Oracle database has two or more online redo log files (usually in sets that are multiplexed).


Each redo log file is assigned a unique log sequence number (you can see this number written to the alert log file when a redo log switch occurs).


All database changes are written to the current redo log file.


Example:  A user updates a customer account balance from $100.00 to $350.00.

        DBWR will eventually store the changed value of $350.00 to the datafile block where the CUSTOMER table in the DATA tablespace is located.

        Oracle also stores the old value of $100.00 to the undo segment.

        The redo log record for this transaction includes the following:

o   The change to the CUSTOMER table data block.

o   The change to the transaction table of the undo segment.

o   The change to the undo segment data block.

        When the update is committed, Oracle generates another redo record and assigns the change an SCN.

Archive Logging


Archive logging should be used for all production Oracle databases.


        Archive logging may consume disk resources in a highly active environment. When recovery is required, the system will ask for the archive logs it needs and perform recovery from them.

        Archive logs allow point-in-time recovery.

        Use the init.ora parameter file to specify archive logging, the destination, frequency, and size of archive logs.
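A minimal init.ora sketch for archive logging might look like the following. The destination path is an assumption; the format mask uses the standard thread (%t), sequence (%s), and resetlogs (%r) substitution variables:

log_archive_dest_1 = 'LOCATION=/u02/student/dbockstd/archive'
log_archive_format = 'arch_%t_%s_%r.arc'

The database must also be placed in ARCHIVELOG mode (ALTER DATABASE ARCHIVELOG while the database is mounted) before archiving takes effect.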

Using Online Redo Log Files


A typical type of failure is a power outage.


        In this case, Oracle is prevented from writing data from the database buffer cache to the datafiles.

        Recall, however, that LGWR did write redo log records of committed changes to the redo log files.

        The old version of datafiles can be combined with changes in the online and archived redo log files to reconstruct data that was lost during the power outage.


Logical Backups (Exports)


A logical backup involves reading a set of database records and writing them to a file.


        The Data Pump Export utility is used for this type of backup. This is commonly termed an export.

        The Data Pump Import utility is used to recover data generated by the export. This is commonly termed an import.


The Data Pump Export and Data Pump Import utilities are meant to replace the Export and Import utilities provided with earlier versions of Oracle. However, the Export and Import utilities are still available. We will discuss both of these.





Data Pump Export Utility


        This utility queries the database including the data dictionary and writes output to a binary file called an export dump file (object metadata is stored within it as XML).

        Export capabilities include:

o   Full database.

o   Specific users.

o   Specific tablespaces.

o   Specific tables.

o   Ability to specify whether to export grants, indexes, and constraints associated with tables.

        Export dump file contains commands needed to recreate all selected objects and data completely.

        Data Pump export dump files are NOT compatible with files created by earlier versions of the Export utility (9i and earlier).


Data Pump Import Utility


        Reads an export dump file and executes any commands found there.

        Import capabilities include:

o   Can import data into same database or a different database.

o   Can import data into the same or a different schema.

o   Can import selected data.


Using the Data Pump Export and Import


Data Pump runs as a server process. This provides the following performance advantages:

        Client processes used to start a job can disconnect and later reattach to the job.

        Performance is enhanced because data no longer has to be processed by a client program (the old export/import utility programs).

        Data Pump extractions can be parallelized.


Data Pump requires the DBA to create directories for the datafiles and log files it creates.

        Requires the CREATE ANY DIRECTORY privilege, and the external directory must already exist.

        Use the CREATE DIRECTORY command to create a directory pointer within Oracle to the external directory to be used.

        Write/read privileges are required for this directory.


Data Pump Export Options


The utility named expdp serves as the interface to Data Pump.

        This utility has various command-line input parameters to specify characteristics of an export job when one is created.

        This table shows the parameters for the expdp utility.


Keyword Description (Default)


ATTACH Attach to existing job, e.g. ATTACH [=job name].

CONTENT Specifies data to unload where the valid keywords are: (ALL), DATA_ONLY, and METADATA_ONLY.


DIRECTORY Directory object to be used for dumpfiles and logfiles.

DUMPFILE List of destination dump files (expdat.dmp),

e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ESTIMATE Calculate job estimates where the valid keywords are: (BLOCKS) and STATISTICS.


ESTIMATE_ONLY Calculate job estimates without performing the export.

EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.

FILESIZE Specify the size of each dumpfile in units of bytes.

FLASHBACK_SCN SCN used to set session snapshot back to.

FLASHBACK_TIME Time used to get the SCN closest to the specified time.

FULL Export entire database (N).

HELP Display Help messages (N).

INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.

JOB_NAME Name of export job to create.

LOGFILE Log file name (export.log).

NETWORK_LINK Name of remote database link to the source system.

NOLOGFILE Do not write logfile (N).

PARALLEL Change the number of active workers for current job.

PARFILE Specify parameter file.

QUERY Predicate clause used to export a subset of a table.

SCHEMAS List of schemas to export (login schema).

STATUS Frequency (secs) job status is to be monitored where

the default (0) will show new status when available.

TABLES Identifies a list of tables to export - one schema only.

TABLESPACES Identifies a list of tablespaces to export.

TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).

TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.

VERSION Version of objects to export where valid keywords are:

(COMPATIBLE), LATEST, or any valid database version.


        Oracle generates a system-generated name for the export job unless you specify a name with the JOB_NAME parameter.

        If you specify a name, ensure it does not conflict with a table or view name in your schema, because Oracle creates a master table for the export job with the same name as the Data Pump job; a unique job name avoids such conflicts.

        When a job is running, you can execute these commands via Data Pump's interface in interactive mode.


Command Description


ADD_FILE Add dumpfile to dumpfile set.


CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.

EXIT_CLIENT Quit client session and leave job running.

HELP Summarize interactive commands.

KILL_JOB Detach and delete job.

PARALLEL Change the number of active workers for current job.

PARALLEL=<number of workers>.

START_JOB Start/resume current job.

STATUS Frequency (secs) job status is to be monitored where

the default (0) will show new status when available.


STOP_JOB Orderly shutdown of job execution and exits the client.

STOP_JOB=IMMEDIATE performs an immediate shutdown of the

Data Pump job.


        Export parameters can be stored to a plain text file and referenced with the PARFILE parameter of the expdp command.

        Dump files will NOT overwrite previously existing dump files in the same directory.


Example #1: Begin the example by changing to the $HOME directory (on disk drive /u02) for the student account dbockstd and creating a directory named dtpump.



dbockstd/@oracle2=>mkdir dtpump


Set the permissions to 775 to allow members of the DBA group read/write permissions for the dtpump directory.



dbockstd/@oracle2=>chmod 775 dtpump

$ ls -al

drwxrwxr-x 2 dbockstd dba 4096 Jul 19 21:41 dtpump


Next connect as the user SYS as SYSDBA to SQLPlus. Execute the CREATE DIRECTORY command to create an internal Oracle directory that corresponds to the operating system directory.


SQL> CREATE DIRECTORY dtpump AS '/u02/student/dbockstd/oradata/dtpump';

Directory created.


While connected as the user SYS as SYSDBA, grant READ and WRITE permission on the directory to the users who will connect to create exports and read imports, along with the permissions needed to perform exports and imports. My experience with the 10g expdp utility is that with LINUX/UNIX, if you don't specifically grant permission to the database username that corresponds with your student username, then the utility generates errors. For this reason, I granted permission to dbockstd on the directory where exports will be written.
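The two grants might look like the following sketch; the exact statements are not shown here, and granting the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles is one possible way to supply the export/import privileges:

SQL> GRANT READ, WRITE ON DIRECTORY dtpump TO dbockstd;
SQL> GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dbockstd;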




Grant succeeded.



Grant succeeded.


Logoff SQLPlus. Now create a parameter file named dp1.par by using the Windows Notepad text editor and transfer the parameter file to the $HOME directory for your account (in this case that is dbockstd).


The contents of the parameter file are shown here.

        Note these parameters assign a job name to the export job and specify the directory name and name of the dumpfile.

        Specifying the dumpfile name is important because the default name is expdat.dmp and the Data Pump export utility will NOT overwrite existing files - change the name each time you run an export.

        The content of this export is metadata only.

        No logfile is created (I don't recommend creating a log file for your student exercises as you won't be using it).
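A dp1.par matching this description might read as follows. The job name matches the export output below; the dumpfile name is an assumption:

JOB_NAME=expdump2b
DIRECTORY=dtpump
DUMPFILE=dp1.dmp
CONTENT=METADATA_ONLY
NOLOGFILE=Y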








The first time I ran the export, it failed because there was not enough disk space allocated to the DATA01 tablespace -- I had to alter it to add space.



DATAFILE '/u02/student/dbockstd/oradata/USER350data01.dbf'



Database altered.
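The command to add space would have taken roughly this form; the datafile name appears above, while the size value here is an assumption for illustration:

SQL> ALTER DATABASE DATAFILE '/u02/student/dbockstd/oradata/USER350data01.dbf' RESIZE 100M;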


While connected to the operating system, the expdp command is executed to start the utility and to specify the name of the parameter file as dp1.par. I have used the DBA name for my database (dbock) to do the export because dbock was given the privileges earlier.


The resulting output is shown below.


dbockstd/> expdp dbock/<mypassword> PARFILE=dp1.par


Export: Release - Production on Wed Jul 10 01:22:18 2013


Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Starting "DBOCK"."EXPDUMP2B": dbock/******** PARFILE=dp1.par

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE






Processing object type SCHEMA_EXPORT/CLUSTER/INDEX

Processing object type SCHEMA_EXPORT/TABLE/TABLE


Processing object type SCHEMA_EXPORT/TABLE/COMMENT




Processing object type SCHEMA_EXPORT/VIEW/VIEW





Master table "DBOCK"."EXPDUMP2B" successfully loaded/unloaded


Dump file set for DBOCK.EXPDUMP2B is:


Job "DBOCK"."EXPDUMP2B" successfully completed at 01:22:46


Example #2. This example does a tablespace export of the DATA01 tablespace.


The parameters in the data01.par file are:
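A data01.par consistent with the job output below might read as follows; the dumpfile name is an assumption:

JOB_NAME=data01ts
DIRECTORY=dtpump
DUMPFILE=data01.dmp
TABLESPACES=data01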








The resulting output:


dbockstd/@oracle2=>expdp dbock/password PARFILE=data01.par


Export: Release - Production on Wed Jul 10 01:29:48 2013


Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Starting "DBOCK"."DATA01TS": dbock/******** PARFILE=data01.par

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 1.078 MB

Processing object type TABLE_EXPORT/TABLE/TABLE


Processing object type TABLE_EXPORT/TABLE/COMMENT








. . exported "DBOCK"."EXPDUMP2" 281.5 KB 1256 rows

. . exported "DBOCK"."STATES" 5.890 KB 2 rows

. . exported "AL"."TEST" 5.125 KB 6 rows

. . exported "DBOCK"."COURSE" 5.921 KB 2 rows

. . exported "DBOCK"."ENROLL" 6.718 KB 1 rows

. . exported "DBOCK"."FACULTY" 6.726 KB 3 rows

. . exported "DBOCK"."INVOICE" 6.757 KB 2 rows

. . exported "DBOCK"."INVOICE_DETAILS" 6.453 KB 4 rows

. . exported "DBOCK"."PRODUCT" 6.968 KB 5 rows

. . exported "DBOCK"."SECTION" 6.867 KB 4 rows

. . exported "DBOCK"."STUDENT" 6.328 KB 2 rows

. . exported "DBOCK"."TESTORDERDETAILS" 6.468 KB 3 rows

. . exported "DBOCK"."TESTORDERS" 5.945 KB 2 rows

. . exported "DBOCK"."VENDOR" 8.117 KB 3 rows

Master table "DBOCK"."DATA01TS" successfully loaded/unloaded


Dump file set for DBOCK.DATA01TS is:


Job "DBOCK"."DATA01TS" successfully completed at 01:30:25




You can exclude or include sets of tables via the EXCLUDE and INCLUDE options.

        Exclude objects by type and by name.

        An object that is excluded also has all dependent objects excluded.

        You cannot EXCLUDE if you specify CONTENT=DATA_ONLY.

        Example format of the EXCLUDE parameter option.

        The object_type can be any Oracle object type including a grant, index, or table.


EXCLUDE=object_type[:name_clause] [, . . . ]


For example, to exclude the DBOCK schema from a full export, the format is shown here. The limiting condition ='DBOCK' is specified within a set of double quotes.
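Following the EXCLUDE format above, the clause would be:

EXCLUDE=SCHEMA:"='DBOCK'"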




To exclude all tables that begin with the letters "VEN" the EXCLUDE clause is shown here. The limiting condition is LIKE 'VEN%' and is again specified within a set of double quotes.
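The clause for this case would be:

EXCLUDE=TABLE:"LIKE 'VEN%'"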




This example excludes all INDEX objects.
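Since no name clause is needed when excluding an entire object type, the clause is simply:

EXCLUDE=INDEX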




A listing of objects you can filter can be produced by querying the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS data dictionary views. You cannot exclude constraints needed for a table to be successfully created, such as primary key constraints for index-organized tables.


You can specify to export specific objects with the INCLUDE clause.


This example of two INCLUDE specifications in the export parameter file will export two tables and all procedures.
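The two parameter file entries might read as follows; the table names are illustrative assumptions, and any two tables could be listed:

INCLUDE=TABLE:"IN ('INVOICE', 'PRODUCT')"
INCLUDE=PROCEDURE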





Unless otherwise specified, all rows for exported objects will be included in the export. The QUERY option can be used to limit the rows exported. In this example, rows are included from the INVOICE table if the ORDERAMOUNT column has a value that exceeds $200.00.


QUERY=INVOICE:' "WHERE OrderAmount > 200" '


Import Options


The utility named impdp serves as the interface to Data Pump Import.

        Like expdp, the impdp utility also has various command-line input parameters to specify characteristics of an import job when one is created.

        Parameters can also be stored to a parameter file.

        This table shows the parameters for the impdp utility.


Keyword Description (Default)


ATTACH Attach to existing job, e.g. ATTACH [=job name].

CONTENT Specifies data to load where the valid keywords are: (ALL), DATA_ONLY, and METADATA_ONLY.


DIRECTORY Directory object to be used for dump, log, and sql files.

DUMPFILE List of dumpfiles to import from (expdat.dmp),

e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ESTIMATE Calculate job estimates where the valid keywords are: (BLOCKS) and STATISTICS.


EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.

FLASHBACK_SCN SCN used to set session snapshot back to.

FLASHBACK_TIME Time used to get the SCN closest to the specified time.

FULL Import everything from source (Y).

HELP Display help messages (N).

INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.

JOB_NAME Name of import job to create.

LOGFILE Log file name (import.log).

NETWORK_LINK Name of remote database link to the source system.

NOLOGFILE Do not write logfile.

PARALLEL Change the number of active workers for current job.

PARFILE Specify parameter file.

QUERY Predicate clause used to import a subset of a table.

REMAP_DATAFILE Redefine datafile references in all DDL statements.

REMAP_SCHEMA Objects from one schema are loaded into another schema.

REMAP_TABLESPACE Tablespace objects are remapped to another tablespace.

REUSE_DATAFILES Tablespace will be initialized if it already exists (N).

SCHEMAS List of schemas to import.

SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.

SQLFILE Write all the SQL DDL to a specified file.

STATUS Frequency (secs) job status is to be monitored where

the default (0) will show new status when available.

STREAMS_CONFIGURATION Enable the loading of Streams metadata.

TABLE_EXISTS_ACTION Action to take if imported object already exists.

Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.

TABLES Identifies a list of tables to import.

TABLESPACES Identifies a list of tablespaces to import.

TRANSFORM Metadata transform to apply (Y/N) to specific objects.

Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.


TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.

TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).

TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.

Only valid in NETWORK_LINK mode import operations.

VERSION Version of objects to export where valid keywords are:

(COMPATIBLE), LATEST, or any valid database version.

Only valid for NETWORK_LINK and SQLFILE.


Changing Schemas During Import


You can use the REMAP_SCHEMA option in the parameter file to be read during import.

        For example, you may want to import the tables belonging to dbock to a new user named rsmith.

        You can also use the REMAP_TABLESPACE option to change tablespace assignments for objects at the same time, but ensure the user has a quota on the tablespace to be used.


The steps are as follows:

        Create the new user schema if it does not already exist.
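Creating the schema might be sketched as follows; the password, tablespace, and privileges shown are assumptions:

SQL> CREATE USER rsmith IDENTIFIED BY some_password DEFAULT TABLESPACE data01 QUOTA UNLIMITED ON data01;
SQL> GRANT CREATE SESSION, CREATE TABLE TO rsmith;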






        Create the parameter file (here named dp2.par) with the appropriate parameters. Note the need to specify the export file to be read during import.
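A dp2.par for this remap might read as follows; the dumpfile name is an assumption and must name a previously created export file:

DIRECTORY=dtpump
DUMPFILE=dp1.dmp
REMAP_SCHEMA=dbock:rsmith
NOLOGFILE=Y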









        Start the import.


impdp dbock/password PARFILE=dp2.par


        If the NOLOGFILE=Y parameter option is removed from the parameter file, then a log file will be created that shows the progress of the import. The file will be named import.log by default.


Generating SQL


Instead of importing data and objects, you can generate SQL for the objects (not the data) and store it to a file.

        The file is specified with the SQLFILE option.

        This example will create a file with SQL named sqlcode.txt.
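A parameter file for generating the SQL might be sketched as follows; the directory and dumpfile names are assumptions:

DIRECTORY=dtpump
DUMPFILE=dp1.dmp
SQLFILE=sqlcode.txt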







The original Export and Import utilities support incremental commit capability offered by two parameters, COMMIT and BUFFER.


Additionally, the files created by Export cannot be read by Data Pump Import. Likewise, files created by Data Pump Export cannot be read by Import.


Like the Data Pump Export and Data Pump Import, the process of using the Export and Import utilities will extract data from or insert data into an Oracle database.

        Exports can be used to recover single data structures to the date/time the export was taken (a point-in-time recovery method). Note that any transactions that take place after the database is exported are not recoverable from an export.

        Exports come in three types: full, cumulative, and incremental.

o   A Full Export provides a full logical copy of the database and its structures.

o   A Cumulative Export provides a complete copy of altered structures since the last full or last cumulative export.

o   An Incremental Export provides a complete copy of altered structures since the last incremental, cumulative, or full export.


Note that in differentiating a Cumulative from an Incremental, the Cumulative will export all altered structures since the last full or cumulative export regardless of whether or not the structure was exported during an Incremental export.


Many sites use a combination of these three although many sites just use Full and Incremental exports.

Limitations on Exports and Imports


        The database must be running to perform either an export or import; this limitation also applies to the Data Pump utilities.

        Export dump files should not be edited and are only used (read) by the Import program.

        Unlike the Data Pump utility, the Import program only imports full tables - conditional loads are not allowed.

EXPORT Utility


The Export utility has three levels of functionality:  Full mode (also called Complete), User mode, and Table mode.


In Full mode, the full database is exported.

        The entire data dictionary is read and exported.

        The Export utility creates DDL that can be used to re-create the full database from an export dump file.

        This includes all tablespaces, all users, and all objects, data, and privileges.

        In order to use the Full mode, the FULL parameter in the EXPORT command line must be set to Y (yes).


In User mode, you can export objects belonging to a specific user.

        All grants and indexes created by a user are also exported.

        Grants and indexes created by someone other than the user are NOT exported, so if the DBA creates an index on a user table, that index will not be exported--if the table were to be imported, the index would have to be recreated.


In Table mode, you can export a specific table along with its structure, index(es), and grants. 

        You can also export a full set of tables owned by a user.

        It would be very nice to be able to use a Tablespace Export to defragment a tablespace or to create a copy of the tablespace elsewhere. Unfortunately, such a command does not exist -- there is no TABLESPACE= parameter, and the best approach is to use the Data Pump Export. You can, however, accomplish a tablespace export with the Export utility by exporting all of the users for a tablespace (through a series of User exports) to produce the desired result, but this approach would be almost infeasible for a large number of users.

The format of the EXPORT utility command is:

exp KEYWORD=value --or-- KEYWORD=(list of values)

The command (binary file) is found in the $ORACLE_HOME/bin subdirectory on a LINUX/UNIX system.

A table of keywords and their descriptions is provided below. Based on this listing, interpret the example export commands given below:

exp userid=dbockstd/password grants=N
    tables=(Orders, Customers, Orderline, Products)

exp userid=SYS/PASSWORD full=Y inctype=complete

exp userid=SYS/PASSWORD full=y, inctype=incremental





Keyword Description (Default)


USERID Username/password, e.g. USER100/mypassword.

BUFFER Size of the data buffer.

FILE Output file name, e.g. EXPDAT.DMP.

COMPRESS Import into one extent (Y is default).

GRANTS Export grants (Y is default).

INDEXES Export indexes (Y is default).

ROWS Export rows (Y is default).

CONSTRAINTS Export table constraints (Y is default).

CONSISTENT Cross-table consistency - new in Oracle 7.

LOG Log file of screen output.

STATISTICS Analyze objects - new in Oracle 7.

FULL Must be Y to do a FULL export (N is default).

OWNER List of owner usernames.

TABLES List of table names.

RECORDLENGTH Length of IO record.

INCTYPE Set to Complete, Cumulative, or Incremental to specify the type of export.

RECORD Track incremental export (Y is default).

PARFILE Parameter filename.


NOTE: Exports, whether through the Data Pump Export or the Export utility should be automated and scheduled to run automatically. An export methodology should be worked out such that the DBA is reasonably certain a deleted file can be recovered.

Export Features

        CONSISTENT - this option provides cross-table consistency for foreign key relationships. This option cannot be specified for a cumulative or incremental export.

        LOG - this command causes Oracle to write a log file of screen output. This makes it easy to record export operations as they process for later error-checking.

        ANALYZE - this forces Oracle to analyze objects when they are imported from this export.


IMPORT Utility

The Import utility reads an export dump file and runs the commands that are stored in the file.  This is the file that was created when you ran the Export utility.

You can selectively bring back objects that have been exported from an export dump file by specifying which objects are to be imported.


The format of the IMPORT utility command is:

imp KEYWORD=value --or-- KEYWORD=(list of values)


        A table of additional keywords for importing a database is provided below.

        Note that keywords that are identical for the EXPORT of a database are not duplicated in this table.

        Some of the command parameters conflict with one another; if you attempt to use conflicting parameters, your import may fail.

        The database issues a COMMIT after each table is completely imported.  This means that you will need some very large undo segments if you are importing large tables - a 300 MB table would require an undo segment at least that large.  You can alter this approach by using the COMMIT=Y parameter along with a BUFFER size value; Oracle then commits after each buffer of rows is inserted, e.g.

imp userid=USER100/MyPassword file='expdat.dmp'
    buffer=64000 commit=Y


You can move objects from one user's account to another user's account by using the FROMUSER and TOUSER parameters.  In this example, objects belonging to USER100 are exported, then imported into USER101's schema.

exp userid=dbock/Password file='USER100.dat' owner=USER100
    grants=N indexes=Y compress=Y rows=Y

imp userid=dbock/password file='USER100.dat'
    fromuser=USER100 touser=USER101
    rows=y indexes=y


Based on the table of keywords, interpret the example import command given below:

imp userid=SCOTT/TIGER
    ignore=Y tables=(expenses, advances) full=N




KEYWORD     DESCRIPTION
-------     -----------
SHOW        just list file contents (N is default)
IGNORE      ignore create errors (N is default)
GRANTS      import grants (Y is default)
INDEXES     import indexes (Y is default)
ROWS        import rows (Y is default)
FULL        import entire file (N is default)
FROMUSER    list of owner usernames
TOUSER      list of usernames
COMMIT      commit array insert (N is default)
INDEXFILE   write table/index information to specified file
DESTROY     overwrite tablespace data (N is default)
CHARSET     character set of export file (language)


IMPORT Features

        DESTROY - this option causes the import to overwrite existing tablespace data.

        INDEXFILE - this option causes index creation commands to be written to the specified file.

        CHARSET - allows specification of export file character sets from one of the National Language System (NLS) character sets.

Database Recovery with IMPORT

Recovery using IMPORT must be done in steps.

        First, take the most recent export and re-create the data dictionary and other database internal tables/views by using IMPORT with FULL=Y and INCTYPE=system.  Import has the limitation that in FULL mode it cannot load SYS-owned database objects into the data dictionary, but it can load SYSTEM-owned database objects.

        Next, run IMPORT against the most recent complete database export with FULL=Y and INCTYPE=restore parameters.

        Next, import all cumulative exports taken since the most recent complete export in chronological order starting with the oldest cumulative export first and proceeding to the newest one.

        Finally, apply all incremental exports taken since the last cumulative export in chronological order with the oldest one first.

IMPORT and EXPORT Schedules

        At a minimum, exports should follow this schedule:

        Daily: Incremental export during off-peak time.

        Weekly: Full export during off-peak time.

        Once a full export is taken, the DBA can remove previous full and incremental backups. This should be automated using batch operating system shell scripts.
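The cleanup step can be sketched with a find command that removes dump files older than the retention window. The 14-day window is an assumption, and the example runs in a scratch directory so it can be tried safely; in production BACKUP_DIR would be the export area, e.g. /u03/student/dbockstd/backup/exports.

```shell
# Sketch: once a new full export is verified, prune dump files older than 14 days.
# Demonstrated in a scratch directory created just for this run.
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/full_new.dmp"                    # current full export: must survive
touch "$BACKUP_DIR/full_old.dmp"
touch -t 202001010000 "$BACKUP_DIR/full_old.dmp"    # backdated: eligible for removal
find "$BACKUP_DIR" -name '*.dmp' -mtime +14 -exec rm {} \;
ls "$BACKUP_DIR"
```

Restricting the pattern to *.dmp keeps the script from ever touching datafiles or logs that happen to share the directory.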



Physical Backups (Offline)


A physical backup involves copying the files that comprise the database.


The whole database backup (in Offline mode) is also termed a cold backup. This type of backup produces a consistent backup.

        The whole database backup taken when the database is shut down is consistent because all files have the same SCN.

        The database can be restored from this type of backup without performing recovery; however, this recovers only to the point of the last backup -- not to the point of the last committed transaction.


A cold backup uses operating system commands (such as the UNIX and LINUX cp command) to back up files while the database is shut down normally (not due to an instance failure).

        This means the shutdown was either: shutdown normal, shutdown immediate, or shutdown transactional.

        If you must execute shutdown abort, then you should restart the database and shutdown normally before taking an offline backup.

        Files to backup include:

o   Required: All datafiles.

o   Required: All control files.

o   Required: All online redo log files.

o   Optional, but recommended: The init.ora file and server parameter file, and the password file.

        Backups performed using operating system commands while the database is running are NOT valid unless an online backup is being performed.

        Offline backups performed after a database aborts will be inconsistent and may require considerable effort to use for recovery, if they work at all.

        If the instance has crashed, you cannot do a cold backup.


The whole database backup approach can be used with either ARCHIVELOG or NOARCHIVELOG mode.

        If you run in ARCHIVELOG mode, you can take additional recovery steps outlined in these notes to complete a backup to a point-of-last-committed-transaction.

        The Oracle database should be shut down and a full cold backup taken. If this is not possible, develop a hot backup procedure.

        When a full cold backup is taken, archive logs and exports from the time period prior to the backup can be copied to tape and removed from the system.


Obtain a List of Files to Backup

Use SQL*PLUS and query V$DATAFILE to list all datafiles in your database.

SELECT name FROM v$datafile;

NAME
--------------------------------------------------
/u01/student/dbockstd/oradata/USER350sysaux01.dbf
/u01/student/dbockstd/oradata/USER350system01.dbf
/u01/student/dbockstd/oradata/USER350undo02.dbf
/u02/student/dbockstd/oradata/USER350comp_data.dbf
/u02/student/dbockstd/oradata/USER350data01.dbf
/u02/student/dbockstd/oradata/USER350users01.dbf
/u03/student/dbockstd/oradata/USER350index01.dbf

7 rows selected.


Alternatively, a DBA may want a list of datafiles and their associated tablespaces.  You can join the V$TABLESPACE and V$DATAFILE views for this listing.


COLUMN "Datafile" FORMAT A50;
COLUMN "Tablespace" FORMAT A10;

SELECT t.name "Tablespace", f.name "Datafile"
FROM v$tablespace t, v$datafile f
WHERE t.ts# = f.ts#;


Tablespace Datafile

---------- --------------------------------------------------

COMP_DATA /u02/student/dbockstd/oradata/USER350comp_data.dbf

DATA01 /u02/student/dbockstd/oradata/USER350data01.dbf

INDEX01 /u03/student/dbockstd/oradata/USER350index01.dbf

SYSAUX /u01/student/dbockstd/oradata/USER350sysaux01.dbf

SYSTEM /u01/student/dbockstd/oradata/USER350system01.dbf

UNDO02 /u01/student/dbockstd/oradata/USER350undo02.dbf

USERS /u02/student/dbockstd/oradata/USER350users01.dbf


7 rows selected.


Use SQL*PLUS and query the V$PARAMETER view to obtain a list of control files.

SELECT value FROM v$parameter
WHERE name = 'control_files';




/u01/student/dbockstd/oradata/USER350control01.ctl, /u02/student/dbockstd/oradata/USER350control02.ctl, /u03/student/dbockstd/oradata/USER350control03.ctl



Directory Structure


A consistent directory structure for datafiles will simplify the backup process.

        Datafiles must be restored from a backup to their original locations in order to restart the database directly; otherwise you must start the database in mount mode and specify where the datafiles have been relocated.

        Example: Datafiles located on three disk drives with a consistent directory structure:

/u01/student/dbockstd/oradata
/u02/student/dbockstd/oradata
/u03/student/dbockstd/oradata

        The UNIX tar command shown here will back up all files in the oradata directories belonging to dbockstd to a tape drive named /dev/rmt/0hc; the pattern /u0[1-3] matches the drives named /u01 through /u03. The c flag creates a new tar saveset, v lists files as they are processed, and f names the output device.


> tar cvf /dev/rmt/0hc /u0[1-3]/student/dbockstd/oradata
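To verify or restore such a backup, tar's t (list) and x (extract) modes are used. The sketch below substitutes a scratch archive file for the tape device so the commands can be tried anywhere; the directory and file names are assumptions.

```shell
# Sketch: verifying and restoring a tar backup. A scratch archive file stands
# in for the tape device /dev/rmt/0hc from the example above.
WORK=$(mktemp -d)
mkdir -p "$WORK/oradata"
echo demo > "$WORK/oradata/data01.dbf"
tar cf "$WORK/backup.tar" -C "$WORK" oradata   # c = create archive, f = archive name
tar tf "$WORK/backup.tar"                      # t = list the saveset contents
rm -r "$WORK/oradata"                          # simulate losing the files
tar xf "$WORK/backup.tar" -C "$WORK"           # x = extract (restore)
```

Listing the saveset with tar tf immediately after the backup is a cheap check that the archive is readable before the disk copies are removed.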







Physical Backups (Online)


Online backups are also physical backups, but the database MUST BE running in ARCHIVELOG mode.

        These are also called hot backups (also termed inconsistent backups) because the database is in use; you do not have to shut it down, and this is an important advantage.

        This type of backup can give a read-consistent copy of the database, but will not backup active transactions.

        These are best performed during times of least database activity because online backups use operating system commands to back up physical files, and this can affect system performance.

        Online backup involves setting each tablespace into a backup state, backing up the datafiles, and then restoring each tablespace to a normal state.

        Recovery involves using archived redo logs and rolling forward to a point in time.

        The following files can be backed up with the database open:

o   All datafiles.

o   All archived redo log files.

o   One control file (via the ALTER DATABASE command).

        Online backups:

o   Provide full point-in-time recovery.

o   Allow the database to stay open during file system backup.

o   Keep the System Global Area (SGA) of the instance from having to be reset during database backups.


When you tell Oracle to back up an individual datafile or tablespace, Oracle will stop recording checkpoint records in the headers of the online datafiles to be backed up.

        Use the ALTER TABLESPACE BEGIN BACKUP statement to tell Oracle to put a tablespace in hot backup mode.

        If the tablespace is read-only, you can simply backup the online datafiles.

        After completing a hot backup, Oracle advances the file headers to the current database checkpoint after you execute the ALTER TABLESPACE END BACKUP command. 

        When a tablespace is backed up, it is put into an "online backup" mode and the DBWR process writes all blocks in the buffer cache that belong to any file that is part of the tablespace back to disk.

        You must restore the tablespace to normal status once it is backed up or a redo log mismatch will occur and archiving/rollback cannot be successfully accomplished.
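The begin/copy/end cycle can be sketched for a single tablespace as follows; the tablespace name data01 and the copy destination are assumptions.

```sql
ALTER TABLESPACE data01 BEGIN BACKUP;

-- copy the tablespace's datafiles with an operating system utility, e.g.
-- (from the host shell):
--   cp /u01/student/dbockstd/oradata/dbockstdDATA01.dbf /u03/student/dbockstd/backup/u01/

ALTER TABLESPACE data01 END BACKUP;

-- force a log switch so the redo generated during the backup is archived
ALTER SYSTEM ARCHIVE LOG CURRENT;
```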


Example:  In a database with 5 tablespaces, a different tablespace plus the control file can be backed up every night; by the end of a work week you would have an entire database backup.


The online and archived redo log files are used to make the backup consistent during recovery.


In order to guarantee that you have the redo log files needed to recover an inconsistent backup, issue the following SQL statement to force Oracle to switch the current log file and to archive it along with all other unarchived log files.

ALTER SYSTEM ARCHIVE LOG CURRENT;

If you have log groups, the following SQL statement will archive a specified log group (replace the word integer with the log group number).

ALTER SYSTEM ARCHIVE LOG GROUP integer;


A hot backup is complex and should be automated with an SQL script.  The steps are given below. Any automatic backup script should first be tested on a dummy database.


Starting ARCHIVELOG Mode


Ensure that the database is in ARCHIVELOG mode. This series of commands connects as SYS in the SYSDBA role, starts up the dbockstd database in mount mode, alters the database to start ARCHIVELOG, and then opens the database.

CONNECT sys/password AS SYSDBA
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;


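You can then confirm the mode with a quick query from SQL*PLUS (a sketch):

```sql
SELECT log_mode FROM v$database;
-- LOG_MODE should now show ARCHIVELOG
```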

Performing Online Database Backups


Steps in an online database backup are:


1.    Obtain a list of Datafiles to Backup (see the commands given earlier in these notes).

2.    Start the hot backup for a tablespace.

ALTER TABLESPACE index01 BEGIN BACKUP;

3.    Back up the datafiles belonging to the tablespace using operating system commands.


$ cp /u01/student/dbockstd/oradata/dbockstdINDEX01.dbf /u03/student/dbockstd/backup/u01/dbockstdINDEX01.dbf



4.     Indicate the end of the hot backup with the ALTER TABLESPACE command.

ALTER TABLESPACE index01 END BACKUP;


Datafile backups, which are not as common as tablespace backups, are valid in ARCHIVELOG databases.


The only time a datafile backup is valid for a database in NOARCHIVELOG mode is if every datafile in a tablespace is backed up. You cannot restore the database unless all datafiles are backed up. The datafiles must be read-only or offline-normal.

Backing Up Multiple Online Tablespaces


This sequence of SQL*PLUS and UNIX operating system commands demonstrates backing up more than one online tablespace at a time.

ALTER TABLESPACE data01 BEGIN BACKUP;
ALTER TABLESPACE index01 BEGIN BACKUP;

$ cp /u01/student/dbockstd/oradata/dbockstdDATA01.dbf /u03/student/dbockstd/backup/u01/dbockstdDATA01.dbf

$ cp /u01/student/dbockstd/oradata/dbockstdINDEX01.dbf /u03/student/dbockstd/backup/u01/dbockstdINDEX01.dbf

ALTER TABLESPACE data01 END BACKUP;
ALTER TABLESPACE index01 END BACKUP;



Datafile Backup Status

A DBA can check the backup status of a datafile by querying the V$BACKUP view.

SELECT file#, status FROM v$backup;

FILE#   STATUS
-----   ---------
    1   ACTIVE
    2   NOT ACTIVE
    3   ACTIVE


        The term NOT ACTIVE means the datafile is not actively being backed up whereas ACTIVE means the file is being backed up.

        This view is also useful when a database crashes because it shows the backup status of the files at the time of crash.

        This view is NOT useful when the control file in use is a restored backup or a new control file created after the media failure occurred since it will not contain the correct information.

        If you have restored a backup of a file, the V$BACKUP view reflects the backup status of the older version of the file and thus it can contain misleading information.
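To see the names of the files currently in backup mode rather than just their numbers, V$BACKUP can be joined to V$DATAFILE; a sketch:

```sql
SELECT b.file#, d.name, b.status
FROM v$backup b, v$datafile d
WHERE b.file# = d.file#
AND b.status = 'ACTIVE';
```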

Backup Archived Logs


After completing an inconsistent backup, back up all archived redo logs that have been produced since the backup began; otherwise, you cannot use the backup for recovery.


You can delete the original archived logs from the disk.



Back up the control file whenever the structure of the database is altered while running in ARCHIVELOG mode. 


Examples of structural modifications include the creation of a new tablespace or the movement of a datafile to a new disk drive.


You can back up a control file to a physical file or to a trace file.


Backup a Control File to a Physical File


Use SQL*PLUS to generate a binary backup of the control file.

ALTER DATABASE BACKUP CONTROLFILE TO '/u03/student/dbockstd/backup/dbockstdctrl1.bak' REUSE;


The REUSE clause will overwrite any current backup that exists.




Backup a Control File to a Trace File


The TRACE option is used to manage and recover a control file -- it prompts Oracle to write SQL statements to a database trace file rather than generating a physical binary backup file.

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

        The trace file statements can be used to start the database, recreate the control file, recover, and open the database.

        You can copy the trace file statements to a script file and edit the script to develop a database recovery script if necessary, or to change parameters such as MAXDATAFILES.

        The trace file will be written to the location specified by the USER_DUMP_DEST parameter in the init.ora file.



Complete Media Recovery


Concepts in Media Recovery


        This discussion is based on operating system recovery (not Recovery Manager -- RMAN).

        Complete media recovery gives the DBA the option to recover the whole database at one time or to recover individual tablespaces or datafiles one at a time.

        Whichever method you choose (operating system or RMAN), you can recover a database, tablespace, or datafile.

        In order to determine which datafiles need recovery, query the fixed view V$RECOVER_FILE, which is available when the database is in MOUNT mode.
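A sketch of that check, with the database in MOUNT mode:

```sql
-- Lists the datafiles needing media recovery and why
SELECT file#, online_status, error
FROM v$recover_file;
```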

Closed (Offline) Database Recovery


Media recovery is performed in stages.



        Shut down the database.  If the database is open, shut it down with the SHUTDOWN ABORT command.

        Correct the media damage if possible - otherwise consider moving the damaged datafile(s) to existing media if unused disk space is sufficient.


Note:  If the hardware problem was temporary and the database is undamaged (disk or controller power failure), start the database and resume normal operations.



        Restore the necessary files.  This requires the DBA to determine which datafiles need to be recovered - remember to query the V$RECOVER_FILE view.

        Permanently damaged files - identify the most recent backups of the damaged files.

        Restore only the damaged datafiles - do not restore any undamaged datafiles or any online redo log files.  Use an operating system utility (such as the UNIX cp command) to copy the restored files to their default or new location.

        If you do not have a backup of a specific datafile, you may be able to create an empty replacement file that can be recovered.

        If you can fix the hardware problem (example, replace disk /u02 and format and name the new disk /u02), then restore the datafiles to their original default location.

        If you cannot fix the hardware problem immediately, select an alternate location for the restored datafiles.  This will require specifying the new location by using the datafile renaming/relocation procedure specified in the Oracle Administrator's Guide.

        Recover the datafiles.

o   Connect to Oracle as the DBA with administrator privileges and start a new instance and mount, but do not open the database, e.g.  STARTUP MOUNT.

o   Obtain all datafile names by querying the V$DATAFILE view, example:

SELECT name FROM v$datafile;


        Ensure datafiles are online.

o   You may wish to create a script to bring all datafiles online at once or you may decide to alter the database to bring an individual file online.

o   Bring the datafiles online by using the following command, which Oracle ignores if a datafile is already online, example:

ALTER DATABASE DATAFILE '/u01/student/dbockstd/oradata/dbockstdINDEX01.dbf' ONLINE;



        Recover the database with the appropriate command.

RECOVER DATABASE                                 # recovers whole database
RECOVER TABLESPACE data                          # recovers specific tablespace
RECOVER DATAFILE '/u10/student/USER310data.dbf'  # recovers specific datafile



        If you do not automate recovery, then during execution of the RECOVER command Oracle will ask whether a specific redo log file is to be applied; in fact, it will prompt you for each file individually, and you simply answer yes or no as appropriate.

        Alternatively, you can automate recovery by turning on autorecovery (SET AUTORECOVERY ON), and Oracle will apply the redo log files needed for recovery automatically -- this is the preferred method.



        Oracle will notify you when media recovery is finished.  Oracle will apply all needed online redo log files and terminate recovery.



        Use the following command to open the database.

ALTER DATABASE OPEN;


        Suppose that you perform a full backup of all database files by copying them to an offline location on Monday at 1:00 a.m.

        Throughout the rest of the day the database is modified by insertions, deletions, and updates.

        The redo log files switch several times and the database is running in ARCHIVELOG mode.

        At 3:00 p.m., a disk drive containing one tablespace fails.

        Recovery is accomplished by first replacing the disk drive (or using an existing disk drive that has sufficient storage capacity) and then restoring the complete database (not just the files for the tablespace that failed) from the last full backup.

        The archived redo logs are next used to recover the database. Oracle uses them automatically when you start the database to the MOUNT stage and issue the ALTER DATABASE RECOVER command. Following this, alter the database to the open stage: ALTER DATABASE OPEN.

        Following this, the database is restarted and Oracle automatically uses the online redo log files to recover to the point of failure as part of instance recovery.
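The scenario above reduces to this command sequence (a sketch, not a complete recovery script):

```sql
-- After replacing the failed drive and restoring all database files
-- from the Monday 1:00 a.m. full backup:
STARTUP MOUNT
ALTER DATABASE RECOVER;   -- applies the archived redo logs
ALTER DATABASE OPEN;      -- online redo logs finish recovery to the point of failure
```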


Alternative Media Recovery

If the database file lost is a Temp or Undo tablespace file, you can restore the individual file that failed from the last full backup as described earlier.


Open (Online) Database Recovery


Sometimes a DBA must recover from a media failure while the database remains open.

        This procedure does not apply to the datafiles that constitute the SYSTEM tablespace - damage to this tablespace causes Oracle to shutdown the database.

        In this situation, undamaged datafiles are left online and available for use.

        If DBWR fails to write to a datafile, then Oracle will take the damaged datafiles offline, but not the tablespaces that contain them.

        The stages to open database recovery are discussed below.



If the database is not open, startup the database with the STARTUP command.



        Take all tablespaces with damaged datafiles offline, example:

ALTER TABLESPACE data01 OFFLINE IMMEDIATE;

        Correct the hardware problem or restore the damaged files to an alternative storage device.


        Restore the most recent backup of files that are permanently damaged by the media failure.

        Do not restore undamaged datafiles, online redo log files, or control files.

        If you have no backup of a specific datafile, use the following command to create an empty replacement file for recovery.

ALTER DATABASE CREATE DATAFILE '/u01/student/dbockstd/oradata/dbockstdDATA01.dbf';

        If you restored a datafile to a new location, use the procedure in the Oracle Administrator's Guide to indicate the new location of the file.


        Connect as a DBA with administrator privileges or as SYS.

        Use the RECOVER TABLESPACE command to start offline tablespace recovery for all damaged datafiles in a tablespace (where the tablespace has more than one datafile, this single command will recover all datafiles for the tablespace).

RECOVER TABLESPACE data01 # begins recovery of all datafiles in the data01 tablespace.


        At this point Oracle will begin to roll forward by applying all necessary redo log files (archived and online) to reconstruct the restored datafiles.  You will probably wish to automate this by turning autorecovery on.

SET AUTORECOVERY ON

        Oracle will continue by applying online redo log files automatically.

        After recovery is complete, bring the offline tablespaces online, example:

ALTER TABLESPACE data01 ONLINE;

        A detailed procedure for incomplete media recovery is also available and is described in the Oracle Backup and Recovery Guide.  We do not detail these procedures here.


        The incomplete media recovery procedure might be used, for example, when some of the archived redo log files needed for complete recovery are unavailable (perhaps someone deleted them by mistake?).