Oracle® Data Guard Concepts and Administration
10g Release 2 (10.2)

Part Number B14239-04

5 Redo Transport Services

This chapter describes how to configure redo transport services to transmit redo from a production database to one or more destinations. It contains the following topics:

5.1 Introduction to Redo Transport Services

Redo transport services control the automated transfer of redo data from the production database to one or more destinations. Redo transport services also manage the process of resolving any gaps in the archived redo log files due to a network failure.

Redo transport services can transmit redo data to local and remote destinations. Remote destinations can include any of the following types: physical and logical standby databases, archived redo log repositories, Oracle Change Data Capture staging databases, and Oracle Streams downstream capture databases.

Figure 5-1 shows a simple Data Guard configuration with redo transport services archiving redo data to a local destination on the primary database while also transmitting it to archived redo log files or standby redo log files on a remote standby database destination.

Figure 5-1 Transmitting Redo Data


5.2 Where to Send Redo Data

This section contains the following topics:

5.2.1 Destination Types

There are several types of destinations supported by redo transport services:

  • Oracle Data Guard standby databases

    Standby database destinations can be either physical standby databases or logical standby databases. Section 1.1.2 discusses standby databases.

  • Archived redo log repository

    This type of destination allows off-site archiving of redo data. An archive log repository is created by using a physical standby control file, starting the instance, and mounting the database. This database contains no datafiles and cannot be used for switchover or failover. This alternative is useful as a way of holding archived redo log files for a short period of time, perhaps a day, after which the log files can then be deleted. This avoids most of the storage and processing expense of another fully configured standby database.

    Oracle recommends using an archived redo log repository for temporary storage of archived redo log files. This can be accomplished by configuring the repository destination for archiver-based transport (using the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter) in a Data Guard configuration running in maximum performance mode; a sketch of such a destination follows this list. For a no data loss environment, you should use a fully configured standby database using the LGWR, SYNC, and AFFIRM transport settings in a Data Guard configuration running in either maximum protection mode or maximum availability mode.

  • Oracle Streams real-time downstream capture database

    This destination type allows Oracle Streams to configure a capture process on a remote downstream database. The Streams downstream capture process uses redo transport services to transfer redo data to the downstream database where a Streams capture process captures changes in the standby redo log files and archived redo log files on the remote destination. See Oracle Streams Concepts and Administration for more information.

  • Oracle Change Data Capture staging database

    This destination type supports a Change Data Capture Asynchronous AutoLog configuration remotely at a staging database. Redo data is copied from the source database to the staging database using redo transport services. The Change Data Capture configuration captures changes from the redo data. See Oracle Database Data Warehousing Guide for more information.
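
Returning to the archived redo log repository destination type described earlier in this list, the following parameter settings are a minimal sketch of a repository destination configured for archiver-based transport in a maximum performance configuration; the Oracle Net service name repo and the local archiving path are hypothetical:

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_2='SERVICE=repo ARCH OPTIONAL'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE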

For discussion purposes, this guide refers to the production database as a primary database and to archival destinations as standby databases (as defined in Section 1.1). If you are using Oracle Change Data Capture, substitute the terms source and staging database for primary and standby database, respectively. If you are using Oracle Streams, substitute the terms source and downstream capture database for primary and standby database, respectively.

5.2.2 Configuring Destinations with the LOG_ARCHIVE_DEST_n Parameter

The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten (where n = 1, 2, 3, ... 10) destinations, each of which must specify either the LOCATION or the SERVICE attribute to specify where to archive the redo data.

The LOCATION and SERVICE attributes describe either a local disk location or an Oracle Net service name that represents a standby destination to which redo transport services will transmit redo data. Specifying remote destinations with the SERVICE attribute allows Data Guard to maintain a transactionally consistent remote copy of the primary database for disaster recovery.

For every LOG_ARCHIVE_DEST_n initialization parameter that you define, specify a corresponding LOG_ARCHIVE_DEST_STATE_n parameter. The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter specifies whether the corresponding destination is currently on (enabled) or off (disabled). Table 5-1 describes the LOG_ARCHIVE_DEST_STATE_n parameter attributes.

Table 5-1 LOG_ARCHIVE_DEST_STATE_n Initialization Parameter Attributes

Attribute   Description
---------   ------------------------------------------------------------------------------------------------------------
ENABLE      Redo transport services can transmit redo data to this destination. This is the default.
DEFER       Redo transport services will not transmit redo data to this destination. This is a valid but unused destination.
ALTERNATE   This destination is not enabled, but it will become enabled if communication to its associated destination fails.
RESET       Functions the same as DEFER, but clears any error messages for the destination if it had previously failed.


Example 5-1 provides an example of one destination with the LOCATION attribute.

Example 5-1 Specifying a Local Archiving Destination

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE

Figure 5-2 shows what this simple configuration, consisting of a single local destination, would look like. The log writer process writes redo data to the online redo log file. As each online redo log file is filled, a log switch occurs and an ARCn process archives the filled online redo log file to an archived redo log file. The filled online redo log file is now available for reuse.

Figure 5-2 Primary Database Archiving When There Is No Standby Database


It is important to note that the configuration shown in Figure 5-2 does not include a standby database and thus does not provide disaster-recovery protection. To make this simple configuration into a Data Guard configuration that provides disaster recovery, add a standby database at a remote destination by specifying the SERVICE attribute.

Example 5-2 shows the initialization parameters that enable redo transport services to archive the online redo log on the local destination chicago and transmit redo data to a remote standby database with the Oracle Net service name boston. The example takes the default values for all of the other LOG_ARCHIVE_DEST_n attributes:

Example 5-2 Specifying a Remote Archiving Destination

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=boston'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

These initialization parameters set up a Data Guard configuration that uses archiver (ARCn) processes to archive to both the local and remote destinations. This configuration provides the maximum performance level of data protection.

Although you can create a Data Guard configuration by specifying only the LOCATION or the SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter, you can optionally specify more attributes to further define each destination's behavior. Chapter 14 provides reference information for all of the LOG_ARCHIVE_DEST_n parameter attributes.

5.2.2.1 Changing Destination Attributes

You can dynamically update most of the attribute values of the LOG_ARCHIVE_DEST_n and the LOG_ARCHIVE_DEST_STATE_n parameters using the ALTER SYSTEM SET statement.

The modifications take effect after the next log switch on the primary database. For example, to defer redo transport services from transmitting redo data to the remote standby database named boston, issue the following statements on the primary database:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston 
  2> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

5.2.2.2 Viewing Attributes with V$ARCHIVE_DEST

Query the V$ARCHIVE_DEST view to see current settings of the LOG_ARCHIVE_DEST_n initialization parameter.
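
For example, the following query is a minimal sketch that shows the status and destination string for each destination that has been defined (the column list can be varied as needed):

SQL> SELECT DEST_ID, STATUS, DESTINATION
  2> FROM V$ARCHIVE_DEST
  3> WHERE STATUS <> 'INACTIVE';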

5.2.3 Setting Up Flash Recovery Areas

The Oracle database enables you to configure a disk area called a flash recovery area that is a directory or Automatic Storage Management (ASM) disk group that serves as the default storage area for files related to recovery.

To configure a flash recovery area, use the DB_RECOVERY_FILE_DEST initialization parameter. LOG_ARCHIVE_DEST_10 is implicitly set to USE_DB_RECOVERY_FILE_DEST (meaning that archived redo log files will be sent to the flash recovery area) if you create a recovery area and do not set any other local archiving destinations. (See Oracle Database Backup and Recovery Basics to configure the flash recovery area and Oracle Database Administrator's Guide for more information about Automatic Storage Management and Oracle Managed Files.)
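
The following statements are a minimal sketch of creating a flash recovery area; the size and directory shown are hypothetical, and DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST:

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/flash_recovery_area';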

Note:

The filenames for archived redo log files stored in a flash recovery area are generated automatically by Oracle Managed Files (OMF); the filenames are not based on the format specified by the LOG_ARCHIVE_FORMAT initialization parameter.

This section contains the following topics:

See Oracle Database Backup and Recovery Basics to configure flash recovery areas and Section 10.3.4 for information about setting up a deletion policy for archived redo log files in flash recovery areas.

5.2.3.1 Using the LOG_ARCHIVE_DEST_10 Destination

If a flash recovery area has been configured and no local destinations are defined, Data Guard implicitly uses the LOG_ARCHIVE_DEST_10 destination as the flash recovery area.

When the LOG_ARCHIVE_DEST_10 destination is used, Data Guard automatically uses the default values for all of the LOG_ARCHIVE_DEST_10 parameter attributes. To override the defaults, you can dynamically set the values for most of the attributes by explicitly specifying the LOG_ARCHIVE_DEST_10 parameter. For example, the following ALTER SYSTEM SET statement specifies several attributes on the LOG_ARCHIVE_DEST_10 initialization parameter:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST LGWR MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

When setting LOG_ARCHIVE_DEST_n attributes, the TEMPLATE attribute of a LOG_ARCHIVE_DEST_n parameter will override all other specifications for the flash recovery area. If the TEMPLATE attribute is specified for a remote destination and that destination archives redo data to a flash recovery area, the archived redo log file will use the directory and file name specified by the TEMPLATE attribute.

5.2.3.2 Using Other LOG_ARCHIVE_DEST_n Destinations

You can explicitly set up one or more other LOG_ARCHIVE_DEST_n destinations to point to a flash recovery area. For example, you can optionally:

  • Configure destinations other than LOG_ARCHIVE_DEST_10

    For example, an existing Data Guard configuration may have already used the LOG_ARCHIVE_DEST_10 destination for another purpose, or you may want to release the LOG_ARCHIVE_DEST_10 destination for other uses.

    To configure another archival destination to point to the flash recovery area, you must specify the LOCATION=USE_DB_RECOVERY_FILE_DEST attribute to define the new destination. For example:

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
    
    

    The implicit setting (for LOG_ARCHIVE_DEST_10 to use the flash recovery area) will be cleared.

  • Configure destinations in addition to LOG_ARCHIVE_DEST_10 destination for use after a role transition

    For example, you can configure one destination to be valid for standby redo log archival when the database operates in the standby role and another destination to be valid for online redo log archival when the database operates in the primary role.

    To configure a LOG_ARCHIVE_DEST_n destination in addition to LOG_ARCHIVE_DEST_10, you must explicitly specify both destinations:

    LOG_ARCHIVE_DEST_9='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'
    LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
    

5.2.3.3 Using the STANDBY_ARCHIVE_DEST Destination

On a physical standby database, you can define the STANDBY_ARCHIVE_DEST parameter to point to the flash recovery area. For example:

STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'

Note:

Flash recovery area destinations pointed to by the STANDBY_ARCHIVE_DEST parameter on logical standby databases (SQL Apply) are ignored.

5.2.3.4 Sharing a Flash Recovery Area Between Primary and Standby Databases

You can share a flash recovery area between databases provided each database that shares the flash recovery area has a unique database name, specified with the DB_UNIQUE_NAME initialization parameter.

The following examples show how to specify initialization parameters on the primary and standby databases that will share a flash recovery area in the /arch/oradata location. Although the DB_UNIQUE_NAME parameter is not specified in Example 5-3, it defaults to PAYROLL, which is the name specified for the DB_NAME initialization parameter.

Example 5-3 Primary Database Initialization Parameters for a Shared Recovery Area

DB_NAME=PAYROLL
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST='/arch/oradata'
DB_RECOVERY_FILE_DEST_SIZE=20G

Example 5-4 Standby Database Initialization Parameters for a Shared Recovery Area

DB_NAME=PAYROLL
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST='/arch/oradata'
DB_RECOVERY_FILE_DEST_SIZE=5G

See Oracle Database Backup and Recovery Advanced User's Guide for more information about sharing a flash recovery area among multiple databases.

5.3 How to Send Redo Data

On the primary database, Oracle Data Guard uses archiver processes (ARCn) or the log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations. Although you cannot use both the archiver and log writer processes to send redo data to the same destination, you can choose to use the log writer process for some destinations while archiver processes send redo data to other destinations.

This section contains the following topics:

Data Guard also uses the fetch archive log (FAL) client and server to send archived redo log files to standby destinations following a network outage, for automatic gap resolution, and resynchronization. The FAL process and gap resolution are discussed in Section 5.8.

5.3.1 Using Archiver Processes (ARCn) to Archive Redo Data

By default, redo transport services use ARCn processes to archive the online redo log files on the primary database. ARCn archival processing supports only the maximum performance level of data protection in Data Guard configurations. You must use the LGWR process to transmit redo data to standby locations that operate in other data protection modes. (See Section 5.6 for more information about the Data Guard data protection modes.)

The following sections discuss these topics:

5.3.1.1 Initialization Parameters That Control ARCn Archival Behavior

The following sections describe how to use the LOG_ARCHIVE_DEST_n and the LOG_ARCHIVE_MAX_PROCESSES initialization parameters.

Enabling ARCn Processes to Archive to Local or Remote Destinations

You specify attributes on the LOG_ARCHIVE_DEST_n initialization parameter to control the automated transfer of redo data from the primary database to other destinations. Because ARCn archiver processing is the default archival behavior, specifying the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter is optional. However, you must specify either the LOCATION attribute to archive to a local destination or the SERVICE attribute for remote archival (as described in Section 5.2.2).

Specifying the Number of ARCn Processes to be Invoked

The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the maximum number of ARCn processes. By default, 4 archiver processes are invoked when the primary database instance starts, and Oracle Database dynamically adjusts the number of processes to balance the archiver workload. Thus, the actual number of archiver processes may vary at any time.

If you anticipate a heavy workload for archiving, you can increase the maximum number of archiver processes to as many as 30 by setting the initialization parameter LOG_ARCHIVE_MAX_PROCESSES. This initialization parameter is dynamic and can be altered by the ALTER SYSTEM command to increase or decrease the maximum number of archiver processes. For example:

ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 20;

5.3.1.2 ARCn Archival Processing

Figure 5-3 shows an example of archival processing in a Data Guard configuration, with a primary database located in Chicago and one physical standby database located in Boston.

Archiving happens when a log switch occurs on the primary database:

  • On the primary database, after the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).

  • On the remote destination, the remote file server (RFS) process will, in turn, write the redo data to an archived redo log file or, if standby redo log files are configured, to a standby redo log file. Log apply services use Redo Apply (the managed recovery process, MRP) or SQL Apply (the logical standby process, LSP) to apply the redo to the standby database.

Because the online redo log files are archived locally first, the LGWR process reuses the online redo log files much earlier than would be possible if the ARCn processes archived to the standby database concurrently with the local destination.

As shown in Figure 5-3, you need to have at least 2 ARCn processes to separate local archival from remote archival. This can be done by setting the LOG_ARCHIVE_MAX_PROCESSES initialization parameter (the default setting is 4, but the maximum value is 30).

Figure 5-3 Archiving to Local Destinations Before Archiving to Remote Destinations


Because the default ARCn archival processing disassociates local archiving from remote archiving, sites with policies to delete archived redo log files on the primary database immediately after backing them up must make sure that the standby destinations have received the corresponding redo data before those files are deleted on the primary database. You can query the V$ARCHIVED_LOG view to verify that the redo data was received on standby destinations.
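
For example, the following query is a minimal sketch of such a check; it assumes the remote standby destination is LOG_ARCHIVE_DEST_2 (DEST_ID 2):

SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, COMPLETION_TIME
  2> FROM V$ARCHIVED_LOG
  3> WHERE DEST_ID = 2;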

5.3.2 Using the Log Writer Process (LGWR) to Archive Redo Data

You can optionally enable redo transport services to use the LGWR process to transmit redo data to remote destinations.

Using the LGWR process differs from ARCn processing (described in Section 5.3.1) because, instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination is either synchronous or asynchronous, depending on whether the SYNC or the ASYNC attribute is set on the LOG_ARCHIVE_DEST_n parameter. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations. (See Section 5.6 for information about the Data Guard data protection modes.)

This section contains the following topics:

5.3.2.1 LOG_ARCHIVE_DEST_n Attributes for LGWR Archival Processing

The following sections describe the LGWR, SYNC, and ASYNC attributes.

Enabling Redo Transport Services to Use the LGWR Process

You must specify the LGWR and SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter to enable redo transport services to use the LGWR process to transmit redo data to remote archival destinations.

Specifying Synchronous or Asynchronous Network Transmission

The LGWR process synchronously writes to the local online redo log files at the same time it transmits redo data to the remote destination:

  • The SYNC attribute performs all network I/O synchronously, in conjunction with each write operation to the online redo log file, and waits for the network I/O to complete. Section 5.3.2.2 shows an example of synchronous network transmission in a Data Guard configuration. This is the default network transmission setting.

  • The ASYNC attribute performs all network I/O asynchronously and control is returned to the executing application or user immediately, without waiting for the network I/O to complete. Section 5.3.2.3 shows an example of asynchronous network transmission in a Data Guard configuration.

    Note:

    If you configure a destination to use the LGWR process, but for some reason the LGWR process becomes unable to archive to the destination, then redo transport will revert to using the ARCn process to complete archival operations.

5.3.2.2 LGWR SYNC Archival Processing

Example 5-5 shows the primary role LOG_ARCHIVE_DEST_n parameters that configure the LGWR process for synchronous network transmission.

Example 5-5 Initialization Parameters for LGWR Synchronous Archival

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC NET_TIMEOUT=30'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE

Specifying the SYNC attribute on the LOG_ARCHIVE_DEST_n parameter is optional, because this is the default for LGWR archival processing. The NET_TIMEOUT attribute is recommended, because it controls the amount of time that the LGWR process waits for status from the network server process before terminating the network connection. If there is no reply within NET_TIMEOUT seconds, then the LGWR process returns an error message.

Figure 5-4 shows a Data Guard configuration that uses the LGWR process to synchronously transmit redo data to the standby system at the same time it is writing redo data to the online redo log file on the primary database:

  • On the primary database, the LGWR process submits the redo data to one or more network server (LNSn) processes, which then initiate the network I/O in parallel to multiple remote destinations. Transactions are not committed on the primary database until the redo data necessary to recover the transaction is received by all LGWR SYNC destinations.

  • On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.

A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, Redo Apply (MRP process) or SQL Apply (LSP process) applies the redo data to the standby database. If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.

Figure 5-4 LGWR SYNC Archival to a Remote Destination with Standby Redo Log Files


5.3.2.3 LGWR ASYNC Archival Processing

Example 5-6 shows the primary role LOG_ARCHIVE_DEST_n parameters that configure the LGWR process for asynchronous network transmission.

Example 5-6 Initialization Parameters for LGWR Asynchronous Archiving

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE

Figure 5-5 shows the LNSn process collecting redo data from the online redo log files and transmitting it over Oracle Net to the RFS process on the standby database.

When the LGWR and ASYNC attributes are specified, the log writer process writes to the local online redo log file, while the network server (LNSn) processes (one for each destination) asynchronously transmit the redo to remote destinations. The LGWR process continues processing the next request without waiting for the LNS network I/O to complete.

If redo transport services transmit redo data to multiple remote destinations, the LNSn processes (one for each destination) initiate the network I/O to all of the destinations in parallel.

When an online redo log file fills up, a log switch occurs and an archiver process archives the log file locally, as usual.

Note:

Beginning with Oracle Database 10g Release 10.2, it is unnecessary to specify the NET_TIMEOUT attribute on LOG_ARCHIVE_DEST_n destinations configured with both the LGWR and ASYNC attributes, because in release 10.2 the log writer process never waits for the LNSn process.

Figure 5-5 LGWR ASYNC Archival with Network Server (LNSn) Processes


Note:

Do not use the LOG_ARCHIVE_DEST or the LOG_ARCHIVE_DUPLEX_DEST initialization parameters to specify a flash recovery area destination.

5.3.3 Providing for Secure Redo Data Transmission

Data Guard provides a secure environment and prevents the possible tampering of redo data as it is being transferred to the standby database.

Redo transport services use authenticated network sessions to transfer redo data. These sessions are authenticated using the SYS user password contained in the password file. All databases in the Data Guard configuration must use a password file, and the SYS password contained in this password file must be identical on all systems. This authentication can be performed even if Oracle Advanced Security is not installed, and provides some level of security when shipping redo.

Note:

To further protect redo (for example, to encrypt redo or compute an integrity checksum value for redo traffic over the network to disallow redo tampering on the network), Oracle recommends that you install and use Oracle Advanced Security. See the Oracle Database Advanced Security Administrator's Guide.

Perform the following steps on the primary database and each standby database:

  1. Create a password file (using the orapwd utility) on the primary and all standby databases. For example:

    ORAPWD FILE=orapw PASSWORD=mypassword ENTRIES=10
    
    

    This example creates a password file with 10 entries, where the password for SYS is mypassword. For redo data transmission to succeed, ensure you set the password for the SYS user account identically for every primary and standby database.

  2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE or SHARED to enable Oracle to check for a password file and to specify how many databases can use the password file. For example:

    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    
    

    See the Oracle Database Reference for more information about this parameter.

5.4 When Redo Data Should Be Sent

This section contains the following topics:

5.4.1 Specifying Role-Based Destinations with the VALID_FOR Attribute

The VALID_FOR attribute enables you to configure destination attributes for both the primary and standby database roles in one server parameter file (SPFILE), so that your Data Guard configuration operates properly after a role transition. This simplifies switchovers and failovers by removing the need to enable and disable the role-specific parameter files after a role transition.

When you specify the VALID_FOR attribute of the LOG_ARCHIVE_DEST_n parameter, it identifies when redo transport services can transmit redo data to destinations based on the following factors:

  • Whether the database is currently running in the primary or the standby role

  • Whether archival of the online redo log file, standby redo log file, or both is required depending on the current role of the database

To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role). The redo_log_type keyword identifies the destination as valid for archiving one of the following: ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES. The database_role keyword identifies the role that the database must be in for the destination to be valid: PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES.

If you do not specify the VALID_FOR attribute for a destination, by default, archiving the online redo log and standby redo log is enabled to the destination, regardless of the database role. This default behavior is equivalent to setting the (ALL_LOGFILES,ALL_ROLES) keyword pair on the VALID_FOR attribute. For example:

LOG_ARCHIVE_DEST_1='LOCATION=/ARCH1/CHICAGO/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'

Although the (ALL_LOGFILES,ALL_ROLES) keyword pair is the default, it is not recommended for every destination. For example, logical standby databases, unlike physical standby databases, are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). In most cases, the online redo log files generated by the logical standby database are located in the same directory as the standby redo log files that are receiving redo from the primary database.

Therefore, it is recommended that you define a VALID_FOR attribute for each destination so that your Data Guard configuration operates properly, including after a role transition. See the scenarios in Section 12.1 for examples of VALID_FOR attribute settings for various Data Guard configurations.

If you choose not to use the VALID_FOR attribute to configure destinations, you must maintain two database server parameter files (SPFILEs) for each database: one for when the database is in the primary role and the other for the standby role. See Chapter 12 for more configuration examples.
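
For example, the following settings are a minimal sketch (the destination numbers and directory paths are illustrative) of one destination that archives the online redo log only while the database is in the primary role and another that archives the standby redo log only while the database is in the standby role:

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
LOG_ARCHIVE_DEST_2='LOCATION=/arch2/chicago/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'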

5.4.2 Specify Unique Names for Primary and Standby Databases

The DB_UNIQUE_NAME attribute enables you to specify unique database names when you configure destinations. This makes it possible to dynamically add a standby database to a Data Guard configuration that contains a Real Application Clusters primary database, when that primary database is operating in either the maximum protection or the maximum availability level of protection. The DB_UNIQUE_NAME initialization parameter is required if the LOG_ARCHIVE_CONFIG parameter has been defined.

Note:

If the standby database on a remote destination has not been identified using the DB_UNIQUE_NAME initialization parameter, the standby database must be accessible before the primary instance is started.

Together, the DB_UNIQUE_NAME attribute of the LOG_ARCHIVE_DEST_n parameter and the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG parameter specify the unique name of each database of the Data Guard configuration. The names you supply must match what was defined for each database with the DB_UNIQUE_NAME initialization parameter.

For example, the following initialization parameters show the DB_UNIQUE_NAME and LOG_ARCHIVE_CONFIG definitions for the primary database (chicago) in the Data Guard configuration described in Chapter 3:

DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
LOG_ARCHIVE_DEST_2=
 'SERVICE=boston LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 
  DB_UNIQUE_NAME=boston'

The DB_UNIQUE_NAME attribute is required for remote destinations specified with the SERVICE attribute. In the example, the LOG_ARCHIVE_DEST_2 parameter specifies the DB_UNIQUE_NAME=boston for the remote destination; redo transport services validate this information at the remote destination. If the names do not match, the connection to that destination is refused.

The LOG_ARCHIVE_CONFIG parameter also has SEND, NOSEND, RECEIVE, and NORECEIVE attributes:

  • SEND enables a database to send redo data to remote destinations

  • RECEIVE enables the standby database to receive redo from another database

To disable these settings, use the NOSEND and NORECEIVE keywords.

For example, to ensure the primary database never accidentally receives any archived redo data, set the LOG_ARCHIVE_CONFIG initialization parameter to NORECEIVE on the primary database, as follows:

LOG_ARCHIVE_CONFIG='NORECEIVE,DG_CONFIG=(chicago,boston)'

However, keep in mind that specifying either the NOSEND or the NORECEIVE attributes may limit the database instance's capabilities after a role transition. For example, if a standby database with the NOSEND attribute set is transitioned to the primary role, it would not be able to transmit redo data to other standby databases until you reset the parameter value to SEND. Similarly, a database that has the NORECEIVE attribute specified cannot receive redo from the primary database.

By default, the LOG_ARCHIVE_CONFIG parameter allows the primary database to send redo data to the standby database and allows the standby database to receive redo from the primary database for archiving. This is equivalent to setting both SEND and RECEIVE attributes on the LOG_ARCHIVE_CONFIG parameter.

Note:

The LOG_ARCHIVE_CONFIG initialization parameter replaces the REMOTE_ARCHIVE_ENABLE initialization parameter, which is deprecated. Do not specify both parameters in the same SPFILE or text initialization parameter file.

5.5 What to Do If Errors Occur

To handle archiving failures, you can use the REOPEN, MAX_FAILURES, and ALTERNATE attributes of the LOG_ARCHIVE_DEST_n parameter to specify what actions are to be taken when archival processing to a destination fails. These actions include:

5.5.1 Retrying the Archival Operation

Use the REOPEN attribute to determine if and when the ARCn process or the LGWR process attempts to transmit redo data again to a failed destination following an error.

Use the REOPEN=seconds attribute to specify the minimum number of seconds that must elapse following an error before the archiving process will try again to access a failed destination. The default value is 300 seconds. The value set for the REOPEN attribute applies to all errors, not just connection failures. You can turn off the option by specifying REOPEN=0, which prevents the destination from being retried after a failure occurs.

If transmission to the alternate destination fails and the REOPEN attribute is set to zero (0), redo transport services will attempt to send redo data to the alternate destination the next time redo data is archived.

5.5.2 Using an Alternate Destination

The ALTERNATE attribute defines an alternate archiving destination that can be used when the original archiving destination fails. If no alternate destination is specified, the destination does not automatically change to another destination upon failure.

Figure 5-6 shows a scenario where redo data is archived to a local disk device. If the original destination device becomes full or unavailable, the archival operation is automatically redirected to the alternate destination device.

Figure 5-6 Archival Operation to an Alternate Destination Device


The REOPEN attribute takes precedence over the ALTERNATE attribute. The alternate destination is used only if one of the following conditions is true:

  • A value of zero (0) is specified for the REOPEN attribute.

  • A nonzero value is specified for the REOPEN attribute and the nonzero MAX_FAILURE count has been exceeded.

The ALTERNATE attribute takes precedence over the MANDATORY attribute. This means that a destination fails over to a valid alternate destination even if the current destination is mandatory.
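
The following parameter settings are a minimal sketch of the scenario shown in Figure 5-6; the disk locations are hypothetical. The second destination is placed in the ALTERNATE state and is used only if archiving to the first destination fails:

LOG_ARCHIVE_DEST_1='LOCATION=/disk1/arch MANDATORY ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_2='LOCATION=/disk2/arch'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE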

See Also:

The ALTERNATE attribute

5.5.3 Controlling the Number of Retry Attempts

Use the MAX_FAILURE attribute to specify the maximum number of consecutive times that redo transport services attempt to transmit redo data to a failed destination. To limit the number of consecutive attempts that will be made to reestablish communication with a failed destination, use the REOPEN attribute in conjunction with the MAX_FAILURE attribute. Once the specified number of consecutive attempts is exceeded, the destination is treated as if the REOPEN attribute was set to zero.

The REOPEN attribute is required when you use the MAX_FAILURE attribute. Example 5-7 shows how to set a retry time of 60 seconds and limit retries to 3 attempts.

Example 5-7 Setting a Retry Time and Limit

LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=60 MAX_FAILURE=3' 

5.6 Setting Up a Data Protection Mode

Data Guard provides three modes of data protection: maximum protection, maximum availability, and maximum performance. The level of data protection you choose controls what happens if the primary database loses its connection to the standby database. This section contains the following topics:

5.6.1 Choosing a Data Protection Mode

To determine the appropriate data protection mode to use, review the following descriptions of the data protection modes to help assess your business requirements for data availability against user demands for response time and performance. Also, see Section 5.6.2 for information about setting up the data protection mode.

5.6.1.1 Maximum Protection Mode

This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and to the standby redo log on at least one standby database before the transaction commits. To ensure data loss cannot occur, the primary database shuts down if a fault prevents it from writing its redo stream to at least one remote standby redo log. For multiple-instance RAC databases, Data Guard shuts down the primary database if it is unable to write the redo records to at least one properly configured database instance. The maximum protection mode requires that at least one standby instance has a standby redo log and the LGWR, SYNC, and AFFIRM attributes be used on the LOG_ARCHIVE_DEST_n parameter for this destination.

5.6.1.2 Maximum Availability Mode

This protection mode provides the highest level of data protection that is possible without compromising the availability of the primary database. Like maximum protection mode, a transaction will not commit until the redo needed to recover that transaction is written to the local online redo log and to at least one remote standby redo log. Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all gaps in redo log files are resolved. When all gaps are resolved, the primary database automatically resumes operating in maximum availability mode.

This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Like maximum protection mode, the maximum availability mode requires that you:

  • Configure standby redo log files on at least one standby database.

  • Set the SYNC, LGWR, and AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for at least one standby database.

5.6.1.3 Maximum Performance Mode

This protection mode (the default) provides the highest level of data protection that is possible without affecting the performance of the primary database. This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local online redo log. The primary database's redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the commitment of the transactions that create the redo data.

When network links with sufficient bandwidth are used, this mode provides a level of data protection that approaches that of maximum availability mode with minimal impact on primary database performance.

The maximum performance mode enables you to either set the LGWR and ASYNC attributes, or set the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter for the standby database destination. If the primary database fails, you can reduce the amount of data that is not received on the standby destination by setting the LGWR and ASYNC attributes.

5.6.2 Setting the Data Protection Mode of a Data Guard Configuration

To set up redo transport services and specify a level of data protection for the Data Guard configuration, perform the following steps.


Step 1 Configure the LOG_ARCHIVE_DEST_n parameters on the primary database.

On the primary database, configure the LOG_ARCHIVE_DEST_n parameter attributes appropriately. Each of the Data Guard data protection modes requires that at least one standby database in the configuration meet the minimum set of requirements listed in Table 5-2.

Table 5-2 Minimum Requirements for Data Protection Modes


Requirement                 Maximum Protection   Maximum Availability   Maximum Performance
--------------------------  -------------------  ---------------------  ----------------------------------------------------------------
Redo archival process       LGWR                 LGWR                   LGWR or ARCH
Network transmission mode   SYNC                 SYNC                   SYNC or ASYNC when using LGWR process. SYNC if using ARCH process
Disk write option           AFFIRM               AFFIRM                 AFFIRM or NOAFFIRM
Standby redo log required?  Yes                  Yes                    No, but it is recommended


Note:

Oracle recommends that a Data Guard configuration that is running in maximum protection mode contains at least two standby databases that meet the requirements listed in Table 5-2. That way, the primary database can continue processing if one of the standby databases cannot receive redo data from the primary database.

The following example shows how to configure the maximum availability mode:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston
  2> OPTIONAL LGWR SYNC AFFIRM 
  3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  4> DB_UNIQUE_NAME=boston';

If they are not already specified in the SPFILE, you should also specify unique names with the DB_UNIQUE_NAME initialization parameter and list all databases on the LOG_ARCHIVE_CONFIG parameter with the DG_CONFIG attribute. For example:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';

This will enable the dynamic addition of a standby database to a Data Guard configuration that has a Real Application Clusters primary database running in either maximum protection or maximum availability mode.


Step 2 If you are upgrading the protection mode, perform this step.

Perform this step only if you are upgrading the protection mode (for example, from maximum performance to maximum availability mode). Otherwise, go to Step 3.

Assume this example is upgrading the Data Guard configuration from the maximum performance mode to the maximum availability mode. Shut down the primary database and restart it in mounted mode:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

For a Real Application Clusters database, shut down all of the primary instances but start and mount only one primary instance.

Step 3 Set the data protection mode.

To specify a data protection mode, issue the SQL ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {PROTECTION | AVAILABILITY | PERFORMANCE} statement on the primary database. For example, the following statement specifies the maximum availability mode:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

Step 4 Open the primary database.

If you performed Step 2 to upgrade the protection mode, open the database:

SQL> ALTER DATABASE OPEN;

If you are downgrading the protection mode, the database will already be open.

Step 5 Configure the LOG_ARCHIVE_DEST_n parameters on standby databases.

On the standby databases, configure the LOG_ARCHIVE_DEST_n parameter attributes so the configuration can continue to operate in the new protection mode after a switchover.

For example:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago
  2> OPTIONAL LGWR SYNC AFFIRM
  3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  4> DB_UNIQUE_NAME=chicago';

Step 6 Confirm the configuration is operating in the new protection mode.

Query the V$DATABASE view to confirm the Data Guard configuration is operating in the new protection mode. For example:

SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;

PROTECTION_MODE                   PROTECTION_LEVEL
---------------------             ---------------------
MAXIMUM AVAILABILITY              MAXIMUM AVAILABILITY

See Chapter 15 and Oracle Database SQL Reference for information about SQL statements.

5.7 Managing Log Files

This section contains the following topics:

5.7.1 Specifying Alternate Directory Locations for Archived Redo Log Files

Typically, when redo data is received from the primary database, the redo data is written to archived redo log files that are stored in the directory you specify with the LOCATION attribute of the LOG_ARCHIVE_DEST_n parameter. Alternatively, you can specify the STANDBY_ARCHIVE_DEST initialization parameter on the standby database to indicate an alternate directory where the archived redo log files are to be stored when received from the primary database.

If both parameters are specified, the STANDBY_ARCHIVE_DEST initialization parameter overrides the directory location specified with the LOG_ARCHIVE_DEST_n parameter.

The location where archived redo log files are stored on the standby database is determined according to the following list of rules. When the database instance is started, the archived redo log files are evaluated in the list order:

  1. If the STANDBY_ARCHIVE_DEST initialization parameter is specified on the standby database, that location is used.

  2. If the LOG_ARCHIVE_DEST_n parameter contains the VALID_FOR=(STANDBY_LOGFILE,*) attribute, then the location specified for this destination is used.

  3. If the COMPATIBLE parameter is set to 10.0 or greater and none of the LOG_ARCHIVE_DEST_n parameters contain the VALID_FOR=(STANDBY_LOGFILE,*) attribute, then an arbitrary LOG_ARCHIVE_DEST_n parameter that is valid for the destination is used.

  4. If none of the initialization parameters have been specified, then archived redo log files are stored in the default location for the STANDBY_ARCHIVE_DEST initialization parameter.

    To see the implicit default value of the STANDBY_ARCHIVE_DEST initialization parameter, query the V$ARCHIVE_DEST view:

    SQL> SELECT DEST_NAME, DESTINATION FROM V$ARCHIVE_DEST 
      2> WHERE DEST_NAME='STANDBY_ARCHIVE_DEST';
    
    DEST_NAME
    ------------------------------------------------------------------------------------------------------------------------------------
    DESTINATION
    ------------------------------------------------------------------------------------------------------------------------------------
    STANDBY_ARCHIVE_DEST
    /oracle/dbs/arch
    
    

Redo transport services use the value specified with the STANDBY_ARCHIVE_DEST initialization parameter in conjunction with the LOG_ARCHIVE_FORMAT parameter to generate the filenames for the archived redo log files on the standby site. For example:

STANDBY_ARCHIVE_DEST='/arc_dest/arls' 
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc

In the example, %s corresponds to the sequence number, and %r corresponds to the resetlogs ID. Together, these ensure unique names are constructed for the archived redo log files across multiple incarnations of the database. The %t, which is required for Real Application Clusters configurations, corresponds to the thread number.
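
For example, with the settings shown above, the archived redo log file for thread 1, log sequence number 101, and resetlogs ID 685236528 (hypothetical values) would be generated as:

/arc_dest/arls/log1_101_685236528.arc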

For a physical standby database, redo transport services store the fully qualified filenames in the standby database control file, and Redo Apply uses this information to perform recovery on the standby database.

Note:

If you have specified the TEMPLATE attribute of the LOG_ARCHIVE_DEST_n parameter, it will override the filename generated with the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT parameters. See Chapter 14 for information about the TEMPLATE attribute.

To display the list of archived redo log files that are on the standby system, query the V$ARCHIVED_LOG view on the standby database:

SQL> SELECT NAME FROM V$ARCHIVED_LOG;
NAME                                                                            
--------------------------------------------------------------------------------
/arc_dest/log_1_771.arc                                       
/arc_dest/log_1_772.arc                                       
/arc_dest/log_1_773.arc                                       
/arc_dest/log_1_774.arc                                       
/arc_dest/log_1_775.arc

5.7.2 Reusing Online Redo Log Files

You can specify a policy for reusing the online redo log file by setting the OPTIONAL or MANDATORY attribute of the LOG_ARCHIVE_DEST_n parameter. By default, remote destinations are set to OPTIONAL. The archival operation of an optional destination can fail, and the online redo log file can be reused even though transmitting the redo data and writing the log contents was not successful. If the archival operation of a mandatory destination fails, online redo log files cannot be overwritten until the failed archive is completed to the mandatory destination.

By default, one local destination is mandatory even if you designate all destinations to be optional.

Example 5-8 shows how to set a mandatory local archiving destination and enable that destination. When specifying the MANDATORY attribute, also consider specifying the REOPEN and MAX_FAILURE attributes as described in Section 5.5 to handle failure conditions.

Example 5-8 Setting a Mandatory Archiving Destination

LOG_ARCHIVE_DEST_3 = 'LOCATION=/arc_dest MANDATORY'

5.7.3 Managing Standby Redo Log Files

This section contains the following topics:

5.7.3.1 Determining If a Standby Redo Log File Group Configuration Is Adequate

The easiest way to verify the standby redo log has an appropriate number of log file groups is to examine the RFS process trace file and database alert log. If either log contains messages that indicate the RFS process frequently has to wait for a group because archiving did not complete, then add more log file groups to the standby redo log. The additional standby redo log file groups give the archival operation time to complete before the standby redo log file is reused by the RFS process.
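
For example, the following statement is a minimal sketch of adding a standby redo log file group; the group number, filename, and size are hypothetical, and the size should match that of the online redo log files:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  2> ('/disk1/oracle/dbs/slog4.rdo') SIZE 50M;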

Caution:

Whenever you add an online redo log file group to the primary database, you must add a corresponding standby redo log file group to the standby database. If the number of standby redo log file groups is inadequate, the primary database will shut down if it is operating in maximum protection mode or switch to maximum performance mode if it is operating in maximum availability mode.

5.7.3.2 Adding Standby Redo Log Members to an Existing Group

In some cases, it might not be necessary to create a complete group of standby redo log files. A group could already exist, but may not be complete because one or more members were dropped (for example, because of disk failure). In this case, you can add new members to an existing group.

To add new members to a standby redo log file group, use the ALTER DATABASE statement with the ADD STANDBY LOGFILE MEMBER clause. The following statement adds a new member to the standby redo log file group number 2:

SQL> ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/disk1/oracle/dbs/log2b.rdo' 
  2> TO GROUP 2;

Use fully qualified filenames of new members to indicate where the file should be created. Otherwise, files will be created in either the default or current directory of the database, depending on your operating system.

5.7.4 Planning for Growth and Reuse of the Control Files

This section describes:

5.7.4.1 Sizing the Disk Volumes that Contain the Control Files

As archived redo log files are generated and RMAN backups are made, Oracle adds new records to the reusable section of the control file. If no records are available for reuse (because all records are still within the number of days specified by CONTROL_FILE_RECORD_KEEP_TIME), then the control file is expanded and new records are added to the control file.

The maximum control file size is 20000 database blocks. If DB_BLOCK_SIZE equals 8192, then the maximum control file size is 156 MB. If the control files are stored in pre-created volumes, then the volumes that contain the primary and standby control files should be sized to accommodate a control file of maximum size. If the control file volume is too small and cannot be extended, then existing records in the control file will be overwritten before their intended reuse. This behavior is indicated by the following message in the alert log:

krcpwnc: following controlfile record written over:
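
The 156 MB maximum mentioned above follows directly from the block count: 20000 blocks at 8192 bytes per block is 163,840,000 bytes, or roughly 156 MB.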

5.7.4.2 Specifying the Reuse of Records in the Control File

The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter specifies the minimum number of days that must pass before a reusable record in the control file can be reused. Setting this parameter appropriately prevents redo transport services from overwriting a reusable record in the control file and ensures redo information remains available on the standby database:

  • Set CONTROL_FILE_RECORD_KEEP_TIME to a value that allows all on-disk backup information to be retained in the control file. CONTROL_FILE_RECORD_KEEP_TIME specifies the number of days that records are kept within the control file before becoming a candidate for reuse.

  • Set CONTROL_FILE_RECORD_KEEP_TIME to a value slightly longer than the oldest backup file that you intend to keep on disk, as determined by the size of the backup area.

    For example, if the backup area is sized to maintain two full backups that are taken every 7 days, as well as daily incremental backups and archived redo log files, then set CONTROL_FILE_RECORD_KEEP_TIME to a value of 21 or 30. Records older than this will be reused. However, the backup metadata will still be available in the RMAN recovery catalog.

Make sure you specify a large enough value if an apply delay is also set for the standby database (described in Section 6.2.2). The range of values for this parameter is 0 to 365 days. The default value is 7 days.
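
For example, to keep control file records for 30 days (an illustrative value consistent with the backup retention discussed above), you could issue:

SQL> ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=30;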

See Oracle Database Reference for more details about the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter and Oracle Database Backup and Recovery Advanced User's Guide.

5.7.5 Sharing a Log File Destination Among Multiple Standby Databases

Use the DEPENDENCY attribute of the LOG_ARCHIVE_DEST_n initialization parameter to define one archival destination to receive redo data on behalf of several destinations, rather than transmitting redo data to each individual destination.

Figure 5-7 shows a Data Guard configuration in which the primary database transports redo data to one archiving destination that shares its archived redo log files with both a logical standby database and a physical standby database. These destinations are dependent on the successful completion of archival operations to the parent destination.

Figure 5-7 Data Guard Configuration with Dependent Destinations


Specifying a destination dependency can be useful in the following situations:

  • When you configure a physical standby database and a logical standby database on the same system.

  • When you configure the standby database and the primary database on the same system. In this case, the archived redo log files are implicitly accessible to the standby database.

  • When clustered file systems are used to provide remote standby databases with access to the primary database archived redo log files.

  • When operating system-specific network file systems are used, providing remote standby databases with access to the primary database archived redo log files.

For example, assume there are two standby databases stdby1 and stdby2 that reside on the same piece of hardware. There is no need to use network bandwidth and disk space to send the same redo data to both destinations. The databases can share the same archived redo log files if you use the DEPENDENCY attribute to designate one of the destinations as being a dependent destination. That is, the primary database sends redo to be archived on the destination that is not defined as the dependent destination. If the redo data successfully arrives at that destination, the primary database considers it archived to both destinations. For example:

LOG_ARCHIVE_DEST_1='LOCATION=DISK1 MANDATORY'
LOG_ARCHIVE_DEST_2='SERVICE=stdby1 OPTIONAL'
LOG_ARCHIVE_DEST_3='SERVICE=stdby2 OPTIONAL DEPENDENCY=LOG_ARCHIVE_DEST_2'

With these parameter definitions, the primary database transmits redo data to stdby1 but not to stdby2. The stdby2 database instead recovers redo from the archived redo log files that are shipped to stdby1.

See Also:

The DEPENDENCY attribute

5.8 Managing Archive Gaps

An archive gap can occur on the standby system when it has not received one or more archived redo log files generated by the primary database. The missing archived redo log files are the gap. If there is a gap, it is automatically detected and resolved by Data Guard, which copies the missing sequence of log files to the standby destination. For example, an archive gap can occur when the network becomes unavailable and automatic archiving from the primary database to the standby database temporarily stops. When the network is available again, automatic transmission of the redo data from the primary database to the standby database resumes.

Data Guard requires no manual intervention by the DBA to detect and resolve such gaps. The following sections describe gap detection and resolution.

5.8.1 When Is an Archive Gap Discovered?

An archive gap can occur whenever the primary database archives a log locally, but the log is not received at the standby site. Every minute, the primary database polls its standby databases to see if there are gaps in the sequence of archived redo log files.

5.8.2 How Is a Gap Resolved?

Gap recovery is handled through the polling mechanism. For physical and logical standby databases, Oracle Change Data Capture, and Oracle Streams, Data Guard performs gap detection and resolution by automatically retrieving missing archived redo log files from the primary database. No extra configuration settings are required to poll the standby databases, to detect any gaps, or to resolve the gaps.

The important consideration here is that automatic gap recovery is contingent on the availability of the primary database. If the primary database is not available and you have a configuration with multiple physical standby databases, you can set up additional initialization parameters so that Redo Apply can resolve archive gaps from another standby database, as described in Section 5.8.3.

See Section 12.11 for a scenario that shows how to resolve a gap manually.

Note:

Prior to Oracle Database 10g Release 1, the FAL client and server were used to resolve gaps from the primary database.

5.8.3 Using the Fetch Archive Log (FAL) to Resolve Archive Gaps

The fetch archive log (FAL) client and server resolve gaps detected in the range of archived redo log files generated at the primary database and received at the physical standby database.

  • The FAL client requests the transfer of archived redo log files automatically.

  • The FAL server services the FAL requests coming from the FAL client.

The FAL mechanism handles the following types of archive gaps and problems:

  • When creating a physical or logical standby database, the FAL mechanism can automatically retrieve any archived redo log files generated during a hot backup of the primary database.

  • When there are problems with archived redo log files that have already been received on the standby database, the FAL mechanism can automatically retrieve archived redo log files to resolve any of the following situations:

    • When the archived redo log file is deleted from disk before it is applied to the standby database.

    • When the archived redo log file cannot be applied because of a disk corruption.

    • When the archived redo log file is accidentally replaced by another file (for example, a text file) that is not an archived redo log file before the redo data has been applied to the standby database.

  • When you have multiple physical standby databases, the FAL mechanism can automatically retrieve missing archived redo log files from another physical standby database.

The FAL client and server are configured using the FAL_CLIENT and FAL_SERVER initialization parameters that are set on the standby database. Define the FAL_CLIENT and FAL_SERVER initialization parameters only for physical standby databases in the initialization parameter file as shown in the following table:

FAL_SERVER

    Specifies the network service name that the standby database should use to connect to the FAL server. It can consist of multiple values in a list.

    Syntax:   FAL_SERVER=net_service_name
    Example:  FAL_SERVER=standby2_db,standby3_db

FAL_CLIENT

    Specifies the network service name that the FAL server should use to connect to the standby database.

    Syntax:   FAL_CLIENT=net_service_name
    Example:  FAL_CLIENT=standby1_db
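For example, the following is a minimal sketch of setting both parameters dynamically on the physical standby database (the Oracle Net service names primary_db and standby1_db are assumptions; they must resolve from the standby system):

SQL> ALTER SYSTEM SET FAL_SERVER='primary_db';
SQL> ALTER SYSTEM SET FAL_CLIENT='standby1_db';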


5.8.4 Manually Determining and Resolving Archive Gaps

In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, manual gap recovery is required if you are using logical standby databases and the primary database is not available.

The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.

On a physical standby database:

To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:

SQL> SELECT * FROM V$ARCHIVE_GAP;
    THREAD#  LOW_SEQUENCE#  HIGH_SEQUENCE#
-----------  -------------  --------------
          1              7              10

The output from the previous example indicates your physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1. After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND
  2> SEQUENCE# BETWEEN 7 AND 10;
NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc

Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';

After you register these log files on the physical standby database, you can restart Redo Apply.
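For example, the following statement is one way to restart Redo Apply as a background process (assuming managed recovery was stopped while you resolved the gap):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;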

Note:

The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

On a logical standby database:

To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap after the file with sequence number 6 (sequence numbers 7, 8, and 9 are missing):

SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#,SEQUENCE#;

   THREAD#  SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
         1          6 /disk1/oracle/dbs/log-1292880008_6.arc
         1         10 /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database.

For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_7.arc';

After you register these log files on the logical standby database, you can restart SQL Apply.
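For example, the following statement is one way to restart SQL Apply (add the IMMEDIATE keyword if you want to resume real-time apply):

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;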

Note:

The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

5.9 Verification

This section contains the following topics:

5.9.1 Monitoring Log File Archival Information

This section describes using views to monitor redo log archival activity for the primary database. See Oracle Data Guard Broker and Oracle Enterprise Manager online help for more information about the graphical user interface that automates many of the tasks involved in monitoring a Data Guard environment.


Step 1 Determine the status of redo log files.

Enter the following query on the primary database to determine the status of all online redo log files:

SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;

Step 2 Determine the most recent archived redo log file.

Enter the following query on the primary database to determine the most recently archived sequence number for each thread:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

Step 3 Determine the most recent archived redo log file at each destination.

Enter the following query on the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
  2> FROM V$ARCHIVE_DEST_STATUS
  3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION         STATUS  ARCHIVED_THREAD#  ARCHIVED_SEQ#
------------------  ------  ----------------  -------------
/private1/prmy/lad   VALID                 1            947
standby1             VALID                 1            947

The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.

Step 4 Find out if archived redo log files have been received.

You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
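For example, the following is a minimal sketch of such a query (showing only a few of the view's columns):

SQL> SELECT DEST_ID, STATUS, DESTINATION FROM V$ARCHIVE_DEST;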

Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
  2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
  3> LOCAL WHERE
  4> LOCAL.SEQUENCE# NOT IN
  5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
  6> THREAD# = LOCAL.THREAD#);

  THREAD#  SEQUENCE#
---------  ---------
  1        12
  1        13
  1        14

See Appendix A for details about monitoring the archiving status of the primary database.

Step 5 Trace the progression of transmitted redo on the standby site.

To see the progression of the transmission of redo data to the standby destination, set the LOG_ARCHIVE_TRACE parameter in the primary and standby initialization parameter files. See Appendix G for complete details and examples.
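For example, the following is a minimal sketch (the trace level of 8 is an assumption; choose the level that corresponds to the activity you want to trace, as described in Appendix G). In the initialization parameter file:

LOG_ARCHIVE_TRACE=8

Or dynamically, on a running instance:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=8;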

5.9.2 Monitoring the Performance of Redo Transport Services

This section describes the wait events that monitor the performance of the redo transport services that were specified on the primary database with the ARCH, LGWR, SYNC, and ASYNC attributes on the LOG_ARCHIVE_DEST_n initialization parameter.
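These wait events are exposed through the V$SYSTEM_EVENT view. For example, the following is a minimal sketch of a query against that view (the LIKE patterns are assumptions intended to match the wait event names listed in the tables that follow):

SQL> SELECT EVENT, TOTAL_WAITS, TIME_WAITED, AVERAGE_WAIT
  2> FROM V$SYSTEM_EVENT
  3> WHERE EVENT LIKE 'ARCH wait%' OR EVENT LIKE 'LNS wait%'
  4> OR EVENT = 'LGWR wait on LNS';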

The following sections describe the wait events and associated timing information that are displayed by the V$SYSTEM_EVENT view:

5.9.2.1 ARCn Process Wait Events

For ARCn archival processing, Table 5-3 shows several of the wait events that monitor the time it takes to spawn or delete RFS connections and to send the redo data to the standby database when using ARCH for the transport mode. See Section 5.3.1 for information about ARCn archival processing.

Table 5-3 Wait Events for Destinations Configured with the ARCH Attribute

Wait Event             Monitors the Amount of Time Spent By . . .
---------------------  ------------------------------------------------------------
ARCH wait on ATTACH    All ARCn processes to spawn an RFS connection.
ARCH wait on SENDREQ   All ARCn processes to write the received redo data to disk as
                       well as open and close the remote archived redo log files.
ARCH wait on DETACH    All ARCn processes to delete an RFS connection.


5.9.2.2 LGWR SYNC Wait Events

For LGWR SYNC archival processing, Table 5-4 shows several of the wait events that monitor the time it takes for the LGWR process on the primary database to:

  • Complete writing to the online redo log files on the primary database

  • Transmit the redo data to the remote standby destination

  • Wait for the redo data to be written to the standby redo log files

  • Receive acknowledgment from the remote standby destination

See Section 5.3.2 for information about LGWR SYNC archival processing.

Table 5-4 Wait Events for Destinations Configured with the LGWR SYNC Attributes

Wait Event            Monitors the Amount of Time Spent By . . .
--------------------  --------------------------------------------------------------
LGWR wait on LNS      The LGWR process waiting to receive messages from the LNSn
                      process.
LNS wait on ATTACH    All network servers to spawn an RFS connection.
LNS wait on SENDREQ   All network servers to write the received redo data to disk as
                      well as open and close the remote archived redo log files.
LNS wait on DETACH    All network servers to delete an RFS connection.


5.9.2.3 LGWR ASYNC Wait Events

For LGWR ASYNC archival processing, Table 5-5 shows several of the wait events that monitor the time it takes to write the redo data to the online redo log files on the primary database. See Section 5.3.2 for information about LGWR ASYNC archival processing.

Table 5-5 Wait Events for Destinations Configured with the LGWR ASYNC Attributes

Wait Event                         Monitors the Amount of Time Spent By . . .
---------------------------------  -------------------------------------------------
LNS wait on DETACH                 All network servers to delete an RFS connection.
LNS wait on ATTACH                 All network servers to spawn an RFS connection.
LNS wait on SENDREQ                All network servers to write the received redo
                                   data to disk as well as open and close the remote
                                   archived redo log files.
True ASYNC Control FileTXN Wait    The LNSn process to get hold of the control file
                                   transaction during its lifetime.
True ASYNC Wait for ARCH log       The LNSn process waiting to see the archived redo
                                   log (if the LNSn process is archiving a current
                                   log file and the log is switched out).
Waiting for ASYNC dest activation  The LNSn process waiting for an inactive
                                   destination to become active.
True ASYNC log-end-of-file wait    The LNSn process waiting for the next bit of redo
                                   after it has reached the logical end of file.




Footnote Legend

Footnote 1: The managed recovery process (MRP) applies archived redo log files to the physical standby database, and automatically determines the optimal number of parallel recovery processes at the time it starts. The number of parallel recovery slaves spawned is based on the number of CPUs available on the standby server.
Footnote 2: The logical standby process (LSP) uses parallel execution (Pnnn) processes to apply archived redo log files to the logical standby database, using SQL interfaces.