Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Solaris Operating System

Part Number B14205-07

3 Configuring Oracle Clusterware and Oracle Database Storage

This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:

3.1 Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files

This section describes supported options for storing Oracle Clusterware files, Oracle Database files, and data files. It includes the following sections:

3.1.1 Overview of Storage Options

Use the information in this overview to help you select your storage option.

3.1.1.1 Overview of Oracle Clusterware Storage Options

There are two ways of storing Oracle Clusterware files:

  • A supported shared file system: Supported file systems include the following:

    • Cluster File System: A supported cluster file system. At the time of this release, there is no supported cluster file system. Refer to the Certify page available on the OracleMetaLink Web site (http://metalink.oracle.com) for a list of certified cluster file systems.

    • Network File System (NFS): A file-level protocol that enables access and sharing of files

  • Raw partitions: Raw partitions are disk partitions that are not mounted and written to using the operating system, but instead are accessed directly by the application.

3.1.1.2 Overview of Oracle Database and Recovery File Options

There are three ways of storing Oracle Database and recovery files:

  • Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files.

  • A supported shared file system: Supported file systems include the following:

    • OSCP-Certified NAS Network File System (NFS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.

  • Raw partitions (database files only): A raw partition is required for each database file.

See Also:

For information about certified compatible storage options, refer to the Oracle Storage Compatibility Program (OSCP) Web site, which is at the following URL:

http://www.oracle.com/technology/deploy/availability/htdocs/oscp.html

3.1.1.3 General Storage Considerations

For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.

For voting disk file placement, ensure that each voting disk does not share a hardware device, disk, or other single point of failure with any other voting disk. A strict majority of the configured voting disks (more than half) must be available and responsive at all times for Oracle Clusterware to operate.
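The majority rule can be illustrated with a little shell arithmetic (a sketch for illustration only, not an Oracle-provided utility):

```shell
# With n configured voting disks, Oracle Clusterware requires a strict
# majority (more than half) to be online, so the configuration tolerates
# (n - 1) / 2 voting disk failures.
votes_required() { echo $(( $1 / 2 + 1 )); }
failures_tolerated() { echo $(( ($1 - 1) / 2 )); }

votes_required 3       # prints 2
failures_tolerated 3   # prints 1
votes_required 5       # prints 3
```

This is why an even number of voting disks adds cost without adding failure tolerance: five disks tolerate two failures, and so would six.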

For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use ASM, or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.

The following table shows the storage options supported for storing Oracle Clusterware files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).

Note:

For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site:
http://metalink.oracle.com
                              File Types Supported
Storage Option                OCR and Voting Disk  Oracle Software  Database  Recovery
----------------------------  -------------------  ---------------  --------  --------
Automatic Storage Management  No                   No               Yes       Yes
Local storage                 No                   Yes              No        No
NFS file system               Yes                  Yes              Yes       Yes
(Note: Requires a certified
NAS device)
Shared raw partitions         Yes                  No               Yes       No

Use the following guidelines when choosing the storage options that you want to use for each file type:

  • You can choose any combination of the supported storage options for each file type, provided that you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.

  • For Standard Edition RAC installations, ASM is the only supported storage option for database or recovery files.

  • You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.

  • If you intend to use ASM with RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have the release 2 (10.2) version of Oracle Clusterware installed.

    • Any existing ASM instance on any node in the cluster is shut down.

  • If you intend to upgrade an existing RAC database, or a RAC database with ASM instances, then you must ensure that your system meets the following conditions:

    • Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the RAC database or RAC database with ASM instance is located.

    • The RAC database or RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only two nodes of the cluster and remove the third instance during the upgrade.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

3.1.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, you must perform the following tasks in the following order:

  1. Check for available shared storage with CVU.

     Refer to Checking for Available Shared Storage with CVU.

  2. Configure shared storage for Oracle Clusterware files.

  3. Configure storage for Oracle Database files and recovery files.

3.1.2 Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes in the cluster, use the following command:

/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list

If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:

/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list -s storageID_list

In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/c0t0d0s2 and /dev/c0t0d0s3, and your mountpoint is /dev/dvdrom/, then enter the following command:

/dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dev/c0t0d0s2,/dev/c0t0d0s3

If you do not specify specific storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
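If you script this check, the command line can be assembled from the mount point, node list, and optional storage list. The following is a minimal sketch; the helper name is illustrative and not part of CVU:

```shell
# Build the runcluvfy.sh shared-storage accessibility command from its parts.
# Arguments map to the variables above: mountpoint node_list [storageID_list]
build_ssa_cmd() {
  local cmd="$1/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n $2"
  if [ -n "$3" ]; then
    cmd="$cmd -s $3"
  fi
  echo "$cmd"
}

build_ssa_cmd /dev/dvdrom node1,node2 /dev/c0t0d0s2,/dev/c0t0d0s3
```

Omitting the third argument reproduces the first form of the command, which searches all available storage devices.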

3.2 Configuring Storage for Oracle Clusterware Files on a Supported Shared File System

Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:

3.2.1 Requirements for Using a File System for Oracle Clusterware Files

To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:

  • To use an NFS file system, it must be on a certified NAS device.

    Note:

    If you are using a shared file system on a NAS device to store a shared Oracle home directory for Oracle Clusterware or RAC, then you must use the same NAS device for Oracle Clusterware file storage.
  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device that implements file redundancy)

    • At least two file systems are mounted, and use the features of Oracle Database 10g Release 2 (10.2) to provide redundancy for the OCR.

    In addition, if you put the OCR and voting disk files on a shared file system, then that shared file system must be a shared QFS file system, and not a globally mounted UFS or VxFS file system.

  • If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The oracle user must have write permissions to create the files in the path that you specify.

Note:

If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

Use Table 3-1 to determine the partition size for shared file systems.

Table 3-1 Shared File System Volume Size Requirements

File Types Stored                                Number of Volumes  Volume Size
-----------------------------------------------  -----------------  ---------------------------------------
Oracle Clusterware files (OCR and voting disks)  1                  At least 256 MB for each volume
with external redundancy

Oracle Clusterware files (OCR and voting disks)  1                  At least 256 MB for each volume
with redundancy provided by Oracle software

Redundant Oracle Clusterware files with          1                  At least 256 MB of free space for each
redundancy provided by Oracle software                              OCR location, if the OCR is configured
(mirrored OCR and two additional voting disks)                      on a file system, or at least 256 MB
                                                                    available for each OCR location, if the
                                                                    OCR is configured on raw or block
                                                                    devices; and at least 256 MB for each
                                                                    voting disk location, with a minimum
                                                                    of three voting disks

Oracle Database files                            1                  At least 1.2 GB for each volume

Recovery files                                   1                  At least 2 GB for each volume
(Note: Recovery files must be on a different
volume than database files)


In Table 3-1, the total required volume size is cumulative. For example, to store all files on the shared file system, you should have at least 3.4 GB of storage available over a minimum of two volumes.

3.2.2 Checking NFS Buffer Size Parameters

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.

For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/vfstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata - /home/oracle/netapp nfs - yes rw,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3

If you use NFS mounts, then Oracle recommends that you use the option forcedirectio to force direct I/O for better performance. However, if you add forcedirectio to the mount options, then the same mount point cannot be used for Oracle software binaries, executables, shared libraries, and objects. You can use the forcedirectio option only for Oracle data files, the OCR, and voting disks. For these mount points, enter the following line:

nfs_server:/vol/DATA/oradata - /home/oracle/netapp nfs - yes rw,hard,nointr,rsize=32768,wsize=32768,tcp,noac,forcedirectio,vers=3
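As a quick sanity check before mounting, you can scan a vfstab-format file for NFS entries that are missing the recommended buffer sizes. This is a sketch, not an Oracle tool; it assumes the standard vfstab field order, in which field 4 is the file system type:

```shell
# Exit nonzero if any nfs entry in the given vfstab-format file lacks the
# recommended rsize=32768 and wsize=32768 mount options.
check_nfs_bufs() {
  awk '$4 == "nfs" && !(/rsize=32768/ && /wsize=32768/) { bad = 1 } END { exit bad }' "$1"
}
```

Run it on each node as, for example, `check_nfs_bufs /etc/vfstab`.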

3.2.3 Creating Required Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For NFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a file system separate from the Oracle base directory.

For Storage Area Network (SAN) storage configured without Sun Cluster, Oracle recommends the following:

Before you install the operating system, install the HBA cards in the same slots on all nodes. This ensures that shared devices are mapped to the same controllers on every node.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -k command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use:

    File Type                 File System Requirements
    ------------------------  ------------------------------------------------------------
    Oracle Clusterware files  Choose a file system with at least 512 MB of free disk space
                              (one OCR and one voting disk, with external redundancy)
    Database files            Choose either:
                              • A single file system with at least 1.2 GB of free disk
                                space
                              • Two or more file systems with at least 1.2 GB of free
                                disk space in total
    Recovery files            Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing the installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.

    If the user performing the installation does not have write access, then you must create these directories manually. Use commands similar to the following to create the recommended subdirectories in each of the mount point directories and to set the appropriate owner, group, and permissions on them:

    • Oracle Clusterware file directory:

      # mkdir /mount_point/oracrs
      # chown oracle:oinstall /mount_point/oracrs
      # chmod 775 /mount_point/oracrs
      
      
    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
      
    • Recovery file directory (flash recovery area):

      # mkdir /mount_point/flash_recovery_area
      # chown oracle:oinstall /mount_point/flash_recovery_area
      # chmod 775 /mount_point/flash_recovery_area
      
      

Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.
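The free-space check in steps 2 and 3 can also be scripted. The following sketch uses POSIX df output, in which column 4 of the data line is the available space in kilobytes; the helper name is illustrative:

```shell
# Succeed if the file system holding the given path has at least min_mb
# megabytes of free space. Usage: has_free_mb <path> <min_mb>
has_free_mb() {
  local avail_kb
  avail_kb=$(df -kP "$1" | awk 'NR == 2 { print $4 }')
  [ "$avail_kb" -ge $(( $2 * 1024 )) ]
}
```

For example, `has_free_mb /mount_point 2048` tests whether a candidate mount point can hold the recovery files.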

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed CFS or NFS configuration.
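The three directory setups above follow the same pattern, so they can be collapsed into one loop. This is a sketch; the mount point, owner, and group are supplied as arguments and are placeholders for your environment:

```shell
# Create the Oracle Clusterware, database, and flash recovery directories
# under one mount point, then apply the owner, group, and 775 permissions
# shown above. Usage: setup_storage_dirs <mount_point> <owner> <group>
setup_storage_dirs() {
  local dir
  for dir in oracrs oradata flash_recovery_area; do
    mkdir -p "$1/$dir"
    chown "$2:$3" "$1/$dir"
    chmod 775 "$1/$dir"
  done
}
```

Run it as root on the mounted file system, for example `setup_storage_dirs /u02 oracle oinstall` (where /u02 is a placeholder mount point).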

3.3 Configuring Storage for Oracle Clusterware Files on Raw Devices

The following subsection describes how to configure Oracle Clusterware files on raw partitions.

3.3.1 Identifying Required Raw Partitions for Clusterware Files

Table 3-2 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.

Table 3-2 Raw Partitions Required for Oracle Clusterware Files

Number                         Size for Each Partition (MB)  Purpose
-----------------------------  ----------------------------  -------------------------------
2                              256                           Oracle Cluster Registry
(or 1, if you have external
redundancy support for this
file)

    Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Cluster Registry (OCR).

    You should create two partitions: one for the OCR, and one for a mirrored OCR.

    If you are upgrading from Oracle9i release 2, then you can continue to use the raw device that you used for the SRVM configuration repository instead of creating this new raw device.

3                              256                           Oracle Clusterware voting disks
(or 1, if you have external
redundancy support for this
file)

    Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Clusterware voting disks.

    You should create three partitions: one for the voting disk, and two for additional voting disks.


Note:

If you put Oracle Clusterware files on a Cluster File System (CFS), then you should ensure that the CFS volumes are at least 500 MB in size.

3.4 Choosing a Storage Option for Oracle Database Files

Database files consist of the files that make up the database, and the recovery area files.

There are three options for storing database files:

  • NFS: If, during configuration of Oracle Clusterware, you selected NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed the required pre-installation steps. You can proceed to Chapter 4, "Installing Oracle Clusterware".

  • Automatic Storage Management: If you want to place your database files on ASM, then proceed to Configuring Disks for Automatic Storage Management.

  • Raw devices: If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Database File Storage on Raw Devices".

Note:

Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM.

3.5 Configuring Disks for Automatic Storage Management

This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks:

Note:

Although this section refers to disks, for Automatic Storage Management installations you can also use zero-padded files on a certified NAS storage device in an Automatic Storage Management disk group. Refer to Oracle Database Installation Guide for Solaris Operating System for information about creating and configuring NAS-based files for use in an Automatic Storage Management disk group.

3.5.1 Identifying Storage Requirements for Automatic Storage Management

To identify the storage requirements for using Automatic Storage Management, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.

    Note:

    You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other.

    If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.

    If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option) then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or use different disk groups for each file type.

      The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

    • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.

  2. Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.

    The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.

    • Normal redundancy

      In a normal redundancy disk group, Automatic Storage Management uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For most installations, Oracle recommends that you select normal redundancy disk groups.

    • High redundancy

      In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

  3. Determine the total amount of disk space that you require for the database files and recovery files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:

    Redundancy Level  Minimum Number of Disks  Database Files  Recovery Files  Both File Types
    ----------------  -----------------------  --------------  --------------  ---------------
    External          1                        1.15 GB         2.3 GB          3.45 GB
    Normal            2                        2.3 GB          4.6 GB          6.9 GB
    High              3                        3.45 GB         6.9 GB          10.35 GB

    For RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)

    For example, for a four-node RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:

    15 + (2 * 3) + (126 * 4) = 525

    If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

    The following section describes how to identify existing disk groups and determine the free disk space that they contain.

  4. Optionally, identify failure groups for the Automatic Storage Management disk group devices.

    Note:

    You need to complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
  5. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.

    • Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend this. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices.
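The sizing rules in steps 2 and 3 reduce to simple arithmetic. The following sketch (values in MB, helper names illustrative) captures both the effective-space rule for each redundancy level and the RAC metadata formula:

```shell
# Effective disk space of an ASM disk group, given its redundancy level and
# the total raw space of its devices: external keeps the full sum, normal
# (two-way) mirroring halves it, high (three-way) mirroring keeps one third.
# Usage: effective_mb <external|normal|high> <total_mb>
effective_mb() {
  case $1 in
    external) echo "$2" ;;
    normal)   echo $(( $2 / 2 )) ;;
    high)     echo $(( $2 / 3 )) ;;
  esac
}

# Additional ASM metadata space for RAC, from the formula in step 3.
# Usage: asm_metadata_mb <num_disks> <num_asm_instances>
asm_metadata_mb() {
  echo $(( 15 + 2 * $1 + 126 * $2 ))   # 15 + (2 * disks) + (126 * instances)
}

asm_metadata_mb 3 4   # prints 525, matching the four-node example above
```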

3.5.2 Configuring Database File Storage on ASM and Raw Devices

To configure disks for Automatic Storage Management (ASM) using raw devices, complete the following tasks:

  • To use ASM with raw partitions, you must create sufficient partitions for your data files, and then bind the partitions to raw devices.

  • Make a list of the raw device names you create for the data files, and have the list available during database installation.

Use the following procedure to configure disks:

  1. If necessary, install the disks that you intend to use for the disk group and restart the system.

  2. To create or identify the disk slices (partitions) that you want to include in the Automatic Storage Management disk group:

    1. To list the disks attached to the system, enter the following command:

      # /usr/sbin/format
      
      

      The output from this command is similar to the following:

      AVAILABLE DISK SELECTIONS:
             0. c0t0d0 <ST34321A cyl 8892 alt 2 hd 15 sec 63>
                /pci@1f,0/pci@1,1/ide@3/dad@0,0
             1. c1t5d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
                /pci@1f,0/pci@1/scsi@1/sd@5,0
      
      

      This command displays information about each disk attached to the system, including the device name (cxtydz).

    2. Enter the number corresponding to the disk that you want to use.

    3. Use the fdisk command to create a Solaris partition on the disk if one does not already exist.

      Solaris fdisk partitions must start at cylinder 1, not cylinder 0. If you create an fdisk partition, then you must label the disk before continuing.

    4. Enter the partition command, followed by the print command to display the partition table for the disk that you want to use.

    5. If necessary, create a single whole-disk slice, starting at cylinder 1.

      Note:

      To prevent Automatic Storage Management from overwriting the partition table, you cannot use slices that start at cylinder 0 (for example, slice 2).
    6. Make a note of the number of the slice that you want to use.

    7. If you modified a partition table or created a new one, then enter the label command to write the partition table and label to the disk.

    8. Enter q to return to the format menu.

    9. If you have finished creating slices, then enter q to quit from the format utility. Otherwise, enter the disk command to select a new disk and repeat steps 2 to 7 to create or identify the slices on that disk.

  3. If you plan to use existing slices, then enter the following command to verify that they are not mounted as file systems:

    # df -h
    
    

    This command displays information about the slices on disk devices that are mounted as file systems. The device name for a slice includes the disk device name followed by the slice number. For example: cxtydzsn, where sn is the slice number.

  4. Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk slice that you want to add to a disk group:

    # chown oracle:dba /dev/rdsk/cxtydzs6
    # chmod 660 /dev/rdsk/cxtydzs6
    
    

    In this example, the device name specifies slice 6.

    Note:

    If you are using a multi-pathing disk driver with Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.
  5. If you also want to use raw devices for storage, then refer to "Configuring Database File Storage on Raw Devices". Otherwise, when you have completed creating and configuring ASM with raw partitions, proceed to Chapter 4, "Installing Oracle Clusterware".
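Step 4 must be repeated on every node for every candidate slice, so a small loop helps. This is a sketch; the owner, group, and device paths are placeholders for your environment:

```shell
# Apply the ownership and 660 permissions from step 4 to each listed device.
# Usage: prepare_asm_slices <owner> <group> <device>...
prepare_asm_slices() {
  local owner=$1 group=$2 dev
  shift 2
  for dev in "$@"; do
    chown "$owner:$group" "$dev"
    chmod 660 "$dev"
  done
}
```

For example, as root: prepare_asm_slices oracle dba /dev/rdsk/c1t5d0s6 /dev/rdsk/c1t6d0s6 (hypothetical device names).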

3.5.3 Using an Existing Automatic Storage Management Disk Group

If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or to use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine if an existing Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine if an Automatic Storage Management instance is configured on the system:

    # more /var/opt/oracle/oratab
    
    

    If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

    +ASM2:oracle_home_path
    
    

    In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.

  3. Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege and start the instance if necessary:

    $ $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
    SQL> STARTUP
    
    
  4. Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
    
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
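The free-space check in steps 4 and 5 can be scripted. The following sketch is not part of the documented procedure: it defines a filter over the NAME, TYPE, and FREE_MB columns of the query output, and the disk group names, redundancy types, and the 4096 MB requirement in the sample are hypothetical values chosen for illustration. In practice, you would pipe the actual output of the V$ASM_DISKGROUP query into the function instead of the sample printf.

```shell
# Filter "NAME TYPE FREE_MB" lines for disk groups with enough free space.
# Pipe the output of the V$ASM_DISKGROUP query (step 4) into this function;
# the printf below only simulates that output with hypothetical values.
check_free() {
  awk -v req="$1" '$3 + 0 >= req { print $1 " (" $2 "): " $3 " MB free" }'
}

# Sample query output with hypothetical disk groups and sizes:
printf 'DATA EXTERN 8192\nRECO NORMAL 1024\n' | check_free 4096
# prints: DATA (EXTERN): 8192 MB free
```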

3.6 Configuring Database File Storage on Sun Cluster DID Devices

Note:

At the time of this release, Oracle does not support the use of shared disk ID (DID) devices with Solaris x86 platforms.

You can use scdidadm, the Sun Cluster command-line utility for device identifier (DID) configuration and administration, to obtain information about shared disk ID (DID) devices available for use with ASM.

To determine which devices are shared storage devices, and which devices share the same controllers, enter the following command:

# scdidadm -L 

The output of this command shows all the paths, including those on remote hosts, of the devices in the DID configuration file. The output is similar to the following example:

1        plynx1:/dev/rdsk/c0t0d0        /dev/did/rdsk/d1
2        plynx1:/dev/rdsk/c1t0d0        /dev/did/rdsk/d2
3        plynx1:/dev/rdsk/c1t1d0        /dev/did/rdsk/d3
4        plynx3:/dev/rdsk/c3t216000C0FF084E77d1 /dev/did/rdsk/d4
4        plynx1:/dev/rdsk/c3t216000C0FF084E77d1 /dev/did/rdsk/d4
4        plynx2:/dev/rdsk/c3t216000C0FF084E77d1 /dev/did/rdsk/d4
4        plynx4:/dev/rdsk/c3t216000C0FF084E77d1 /dev/did/rdsk/d4
5        plynx3:/dev/rdsk/c3t216000C0FF084E77d0 /dev/did/rdsk/d5
5        plynx1:/dev/rdsk/c3t216000C0FF084E77d0 /dev/did/rdsk/d5
5        plynx2:/dev/rdsk/c3t216000C0FF084E77d0 /dev/did/rdsk/d5
5        plynx4:/dev/rdsk/c3t216000C0FF084E77d0 /dev/did/rdsk/d5
6        plynx3:/dev/rdsk/c3t216000C0FF284E44d0 /dev/did/rdsk/d6
6        plynx1:/dev/rdsk/c3t216000C0FF284E44d0 /dev/did/rdsk/d6
6        plynx2:/dev/rdsk/c3t216000C0FF284E44d0 /dev/did/rdsk/d6
6        plynx4:/dev/rdsk/c3t216000C0FF284E44d0 /dev/did/rdsk/d6
7        plynx3:/dev/rdsk/c4t226000C0FF384E44d1 /dev/did/rdsk/d7
7        plynx1:/dev/rdsk/c4t226000C0FF384E44d1 /dev/did/rdsk/d7
7        plynx2:/dev/rdsk/c4t226000C0FF384E44d1 /dev/did/rdsk/d7
7        plynx4:/dev/rdsk/c4t226000C0FF384E44d1 /dev/did/rdsk/d7
8        plynx3:/dev/rdsk/c4t226000C0FF384E44d0 /dev/did/rdsk/d8
8        plynx1:/dev/rdsk/c4t226000C0FF384E44d0 /dev/did/rdsk/d8
8        plynx2:/dev/rdsk/c4t226000C0FF384E44d0 /dev/did/rdsk/d8
8        plynx4:/dev/rdsk/c4t226000C0FF384E44d0 /dev/did/rdsk/d8
9        plynx3:/dev/rdsk/c4t226000C0FF184E77d0 /dev/did/rdsk/d9
9        plynx1:/dev/rdsk/c4t226000C0FF184E77d0 /dev/did/rdsk/d9
9        plynx2:/dev/rdsk/c4t226000C0FF184E77d0 /dev/did/rdsk/d9
9        plynx4:/dev/rdsk/c4t226000C0FF184E77d0 /dev/did/rdsk/d9
16       plynx2:/dev/rdsk/c0t0d0        /dev/did/rdsk/d16
17       plynx2:/dev/rdsk/c1t0d0        /dev/did/rdsk/d17
18       plynx2:/dev/rdsk/c1t1d0        /dev/did/rdsk/d18
19       plynx3:/dev/rdsk/c0t0d0        /dev/did/rdsk/d19
20       plynx3:/dev/rdsk/c1t0d0        /dev/did/rdsk/d20
21       plynx3:/dev/rdsk/c1t1d0        /dev/did/rdsk/d21
22       plynx4:/dev/rdsk/c0t0d0        /dev/did/rdsk/d22
23       plynx4:/dev/rdsk/c1t0d0        /dev/did/rdsk/d23
24       plynx4:/dev/rdsk/c1t1d0        /dev/did/rdsk/d24

In the preceding cluster output example, the shared devices have the same Solaris device path name on all nodes. For example, d9 is /dev/rdsk/c4t226000C0FF184E77d0 on every node. For other cluster configurations, this may not be true: on some nodes, the Solaris device path name can appear as /dev/rdsk/c3t226000C0FF184E77d0, or another variant. Although the controller number might differ between nodes, the device is always /dev/did/rdsk/d9 on all nodes. This is the Sun Cluster uniform device name space feature.

Note:

For detailed information about using the scdidadm command, refer to the Sun Cluster Reference Manual for Solaris OS.
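The per-node paths for a single DID device can be extracted from this output. In the sketch below, the sample lines and the choice of d9 come from the example output above; in practice, replace the printf with the scdidadm -L command itself.

```shell
# Print every node-local path for one DID device (d9 in this example).
did_paths() {
  awk -v did="/dev/did/rdsk/$1" '$3 == did { print $2 }'
}

# Two sample lines from the scdidadm -L output above; in practice, run:
#   scdidadm -L | did_paths d9
printf '%s\n' \
  '9        plynx3:/dev/rdsk/c4t226000C0FF184E77d0 /dev/did/rdsk/d9' \
  '16       plynx2:/dev/rdsk/c0t0d0        /dev/did/rdsk/d16' |
did_paths d9
# prints: plynx3:/dev/rdsk/c4t226000C0FF184E77d0
```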

3.7 Configuring Database File Storage on Raw Devices

The following sections describe how to configure raw partitions for database files:

3.7.1 Configuring Raw Partitions for Oracle Database File Storage

A partition is a slice defined on a disk or on a disk array volume. On SPARC systems, it can also be a soft partition created using Solaris Volume Manager.

Note:

At the time of this release, Oracle does not support the use of shared disk ID (DID) devices with Solaris x86 platforms.

Table 3-3 Raw Partitions Required for Database Files

Number  Size (MB)  Purpose and Sample Name
------  ---------  -----------------------------------------------------
1       500        SYSTEM tablespace: system
1       500        SYSAUX tablespace: sysaux
1       500        UNDOTBS1 tablespace: undotbs1
1       250        TEMP tablespace: temp
1       160        EXAMPLE tablespace: example
1       120        USERS tablespace: users
2       120        Two online redo log files (where m is the log number,
                   1 or 2): redo1_m
2       110        First and second control files: control{1|2}
1       5          Server parameter file (SPFILE): spfile
1       5          Password file: pwdfile


To configure raw partitions for database files:

  1. Choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters; for example, orcl.

  2. If necessary, install or configure the disks that you intend to use and restart the system.

  3. If you want to use Solaris Volume Manager soft partitions, then refer to the Solaris Volume Manager documentation for information about how to create them.

    The previous table shows the number and size of the partitions that you require.

  4. If you want to use disk slices, then follow these steps to create or identify the required disk slices:

    1. To list the disks attached to the system, enter the following command:

      # /usr/sbin/format
      
      

      The output from this command is similar to the following:

      AVAILABLE DISK SELECTIONS:
             0. c0t0d0 <ST34321A cyl 8892 alt 2 hd 15 sec 63>
                /pci@1f,0/pci@1,1/ide@3/dad@0,0
             1. c1t5d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
                /pci@1f,0/pci@1/scsi@1/sd@5,0
      
      

      This command displays information about each disk attached to the system, including the device name (cxtydz).

    2. Enter the number corresponding to the disk that you want to use.

      Note:

      Ensure that the disk you choose is not being used for another purpose. For example, ensure that it is not being used as a component for a logical volume manager volume.
    3. Use the fdisk command to create a Solaris partition on the disk if one does not already exist.

      Solaris fdisk partitions must start at cylinder 1, not cylinder 0. If you create an fdisk partition, then you must label the disk before continuing.

    4. Enter the partition command, followed by the print command to display the partition table for the disk that you want to use.

    5. Identify or create slices for each of the partitions that you require.

      The previous table shows the number and size of the partitions that you require for database files.

      Note:

      To prevent the database files from overwriting the partition table, do not use slices that start at cylinder 0 (for example, slice 2).
    6. Make a note of the number of the slices that you want to use.

    7. If you modified a partition table or created a new one, then enter the label command to write the partition table and label to the disk.

    8. Enter q to return to the format menu.

    9. After you have finished creating slices, enter q to quit from the format utility.

  5. If you plan to use existing partitions, then enter the following command to verify that they are not mounted as file systems:

    # df -h
    
    

    This command displays information about the devices that are mounted as file systems. The device name for a slice includes the disk device name followed by the slice number. For example: cxtydzsn, where sn is the slice number. The device name for a Solaris Volume Manager partition is similar to /dev/md/dsk/dnnn, where dnnn is the soft partition name.

  6. Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for each partition:

    Note:

    If you are using a multi-pathing disk driver, then ensure that you set the permissions only on the correct logical device name for the partition.
    • Solaris Volume Manager soft partitions:

      # chown oracle:dba /dev/md/rdsk/d100
      # chmod 660 /dev/md/rdsk/d100
      
      

      Repeat this step for each node on the cluster.

    • Disk slices:

      # chown oracle:dba /dev/rdsk/cxtydzsn
      # chmod 660 /dev/rdsk/cxtydzsn
      
      

      Repeat this step for each node on the cluster.

  7. Complete the procedure "Creating the Database Configuration Assistant Raw Device Mapping File".
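As a complement to step 5, the check for a mounted partition can be expressed as a small filter over df output. The sketch below is an illustration only: the device name and the sample df line are placeholders, and in practice you would pipe the real df -h output into the function instead of the printf.

```shell
# Report whether a given device appears in df output as a mounted
# file system. Pipe real `df -h` output in; the printf below only
# simulates one sample line with placeholder values.
is_mounted() {
  awk -v dev="$1" '$1 == dev { found=1 }
                   END { print (found ? "MOUNTED" : "not mounted") }'
}

printf '/dev/dsk/c0t0d0s0   10G  4G  6G  40%%  /\n' |
is_mounted /dev/dsk/c1t1d0s4
# prints: not mounted
```

A slice that prints "not mounted" here is a candidate for raw database storage, subject to the other checks in this procedure.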

3.7.2 Configuring Raw Logical Volumes for Database File Storage

On Solaris x86 (64-bit), Oracle Real Application Clusters is supported with Solaris Volume Manager and Sun Cluster.

On Solaris x86, Oracle Real Application Clusters is not supported with Solaris Volume Manager, and not supported on Sun Cluster for x86.

On Solaris SPARC (64-bit) systems, Oracle Real Application Clusters is supported with Oracle Clusterware. However, Solaris Volume Manager is not certified.

This section contains the following:

3.7.2.1 Configuring VERITAS CVM for SPARC (64-BIT)

This section describes how to configure raw logical volumes using VERITAS Cluster Volume Manager (CVM) with Sun Cluster 3.1 on SPARC systems.

Note:

At the time of this release, VERITAS Cluster Volume Manager is not supported for x86 and x86 (64-bit) systems.

Creating a Shared Disk Group

To create a shared disk group:

  1. If necessary, install the shared disks that you intend to use for the disk group and restart the system.

  2. To ensure that the disks are available, enter the following command:

    # /usr/sbin/format
    
    

    The output from this command is similar to the following:

    AVAILABLE DISK SELECTIONS:
           0. c0t0d0 <ST34321A cyl 8892 alt 2 hd 15 sec 63>
              /pci@1f,0/pci@1,1/ide@3/dad@0,0
           1. c1t5d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
              /pci@1f,0/pci@1/scsi@1/sd@5,0
    
    

    This command displays information about each disk attached to the system, including the device name (cxtydz).

  3. From the list, identify the device names for the disk devices that you want to add to a disk group, then use Ctrl+D to exit from the format utility.

  4. Enter the following command on every node to verify that the devices you identified are not mounted as file systems:

    # df -k
    
    

    This command displays information about the partitions (slices) on disk devices that are mounted as file systems. The device name for a slice includes the disk device name followed by the slice number, for example cxtydzsn, where sn is the slice number. Slice 2 (s2) represents the entire disk. The disk devices that you choose must not be shown as mounted partitions.

  5. Enter the following commands to verify that the devices you identified are not already part of a disk group:

    Note:

    The following command displays information about VERITAS Volume Manager (VxVM) disks. If you use a different LVM, then refer to the appropriate documentation for information about determining which disk devices it is managing.
    # /usr/sbin/vxdiskconfig
    # /usr/sbin/vxdisk list
    
    

    The vxdisk list command identifies the disk devices that are already configured in a disk group. The word online in the STATUS column also identifies disks that have been initialized and placed under VxVM control. The word error in the STATUS column identifies disks that are not initialized.

    The disk devices that you choose must not be in an existing disk group.

  6. If the disk devices that you want to use are not initialized, then enter a command similar to the following to initialize each disk:

    # /usr/sbin/vxdiskadd cxtydz
    
    
  7. To create a shared disk group, enter a command similar to the following, specifying all of the disks that you want to add to the group:

    # /usr/sbin/vxdg -s init diskgroup diskname=devicename ...
    
    

    In this example:

    • -s indicates that you want to create a shared disk group

    • diskgroup is the name of the disk group that you want to create, for example, oradg

    • diskname is an administrative name that you assign to a disk, for example orad01

    • devicename is the device name, for example, c1t0d0
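Putting those parameters together, a command for a two-disk shared disk group might look like the following. The group name, disk names, and device names are hypothetical, and the sketch only prints the assembled command so that it can be reviewed before being run as root on the node that is the CVM master:

```shell
# Assemble a vxdg command for a shared disk group (hypothetical names:
# oradg for the group, orad01/orad02 for the disks, c1t0d0/c1t1d0 for
# the devices). Review the printed command, then run it as root.
diskgroup=oradg
cmd="/usr/sbin/vxdg -s init $diskgroup orad01=c1t0d0 orad02=c1t1d0"
echo "$cmd"
# prints: /usr/sbin/vxdg -s init oradg orad01=c1t0d0 orad02=c1t1d0
```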

Creating Raw Logical Volumes in the New Disk Group

To create the required raw logical volumes in the new disk group:

  1. Choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters, for example, orcl.

  2. To create the logical volume for the Oracle Cluster Registry, enter a command similar to the following:

    # /usr/sbin/vxassist -g diskgroup make ora_ocr_raw_100m 100m user=root \
     group=oinstall mode=640
    
    

    In this example, diskgroup is the name of the disk group you created previously, for example, oradg.

  3. To create the required logical volumes, enter commands similar to the following:

    # /usr/sbin/vxassist -g diskgroup make volume size user=oracle \
     group=dba mode=660
    
    

    In this example:

    • diskgroup is the name of the disk group that you created previously, for example oradg

    • volume is the name of the logical volume that you want to create

      Oracle recommends that you use the sample names shown in the previous table for the logical volumes. Substitute the dbname variable in the sample logical volume name with the name you chose for the database in step 1.

    • size is the size of the logical volume, for example, 500m represents 500 MB

    • user=oracle group=dba mode=660 specifies the owner, group, and permissions on the volume

      Specify the Oracle software owner user and the OSDBA group for the user and group values (typically oracle and dba).

    The following example shows a sample command used to create a 5800 MB logical volume in the oradg disk group for the SYSAUX tablespace of a database named test:

    # /usr/sbin/vxassist -g oradg make test_sysaux_5800m 5800m \
    user=oracle group=dba mode=660
    
    

Deporting the Disk Group and Importing It on the Other Cluster Nodes

To deport the disk group and import it on the other nodes in the cluster:

  1. Deport the disk group:

    # /usr/sbin/vxdg deport diskgroup
    
    
  2. Log into each cluster node and complete the following steps:

    1. Enter the following command to cause VxVM to examine the disk configuration:

      # /usr/sbin/vxdctl enable
      
      
    2. Import the shared disk group:

      # /usr/sbin/vxdg -s import diskgroup
      
      
    3. Start all logical volumes:

      # /usr/sbin/vxvol startall
      
      

3.7.2.2 Configuring Solaris Volume Manager

This section describes how to configure raw logical volumes using Solaris Volume Manager. It assumes that you are creating raw logical volumes on a system without prior state database replicas.

Note:

Oracle recommends that you refer to Solaris Volume Manager documentation for detailed information and configuration issues concerning Solaris Volume Manager.

Creating State Database Replicas

As root, on each node of the cluster, use the metadb command to create a Solaris Volume Manager state database replica. The command syntax is as follows:

metadb -c n -af disk_component

  • -c n specifies the number of replicas to add to the specified slice.

  • -a specifies to add a state database replica.

  • -f specifies to force the operation, even if no replicas exist.

  • disk_component specifies the name of the disk component that will hold the replica.

Ensure that your dedicated partition areas are at least 256 MB in size.

In the following command example, the local disk component name is cXtYdZs7:

# metadb -c 3 -af /dev/dsk/cXtYdZs7 

Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database

Each multi-owner disk set is associated with a list of nodes. These nodes share ownership of the disk set. As root, use the following procedure to create and configure a multi-owner disk set, named rawdg:

  1. Ensure that all disk devices are directly attached to all nodes.

  2. Using the following command syntax, create the multi-owner disk set:

    # metaset -s rawdg -M -a -h host1 host2 
    
    

    In the preceding syntax example, host1 and host2 are names of the nodes that you want to have shared access to the disk set.

  3. Using the following command syntax, add the global device to the disk set. In the syntax example, the global device is /dev/did/rdsk/d0:

    # metaset -s rawdg -a /dev/did/rdsk/d0
    
    
  4. Using the following command syntax, create a soft partition volume for the disk set:

    metainit -s diskset_name volume_name -p component size
    
    

    For example, to create a 1 GB soft partition volume named d0 in the set1 disk set on the device /dev/did/rdsk/d7s0, enter the following command:

    # metainit -s set1 d0 -p /dev/did/rdsk/d7s0 1g
    
    
  5. Verify that each node is correctly added to the multi-owner disk set using the following command:

    # scconf -pvv | grep rawdg
    
    
  6. Verify that the multi-owner disk set is online using the following command:

    # scstat -D 
    
    
  7. Change the ownership of the disk storage device to the Oracle software owner user and the OSDBA group (typically, oracle and dba). For example, with the disk storage device /dev/md/rawdg/rdsk/d0, enter the following command:

    # chown oracle:dba /dev/md/rawdg/rdsk/d0
    
    

    Repeat this step for each node on the cluster.

  8. Grant the oracle user read and write access. For example, with the disk storage device /dev/md/rawdg/rdsk/d0, enter the following command:

    # chmod u+rw /dev/md/rawdg/rdsk/d0 
    
    

    Repeat this step for each node on the cluster.

3.7.3 Creating the Database Configuration Assistant Raw Device Mapping File

Note:

You must complete this procedure only if you are using raw devices for database files. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.

To allow Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:

  1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

    • Bourne, Bash, or Korn shell:

      $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
      
      
    • C shell:

      % setenv ORACLE_BASE /u01/app/oracle
      
      
  2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

    # mkdir -p $ORACLE_BASE/oradata/dbname
    # chown -R oracle:oinstall $ORACLE_BASE/oradata
    # chmod -R 775 $ORACLE_BASE/oradata
    
    

    In this example, dbname is the name of the database that you chose previously.

  3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

  4. Using any text editor, create a dbname_raw.conf file similar to the following:

    Note:

    The following example shows a sample mapping file for a two-instance RAC cluster.
    system=/dev/vx/rdsk/diskgroup/dbname_system_raw_500m
    sysaux=/dev/vx/rdsk/diskgroup/dbname_sysaux_raw_5800m
    example=/dev/vx/rdsk/diskgroup/dbname_example_raw_160m
    users=/dev/vx/rdsk/diskgroup/dbname_users_raw_120m
    temp=/dev/vx/rdsk/diskgroup/dbname_temp_raw_250m
    undotbs1=/dev/vx/rdsk/diskgroup/dbname_undotbs1_raw_500m
    undotbs2=/dev/vx/rdsk/diskgroup/dbname_undotbs2_raw_500m
    redo1_1=/dev/vx/rdsk/diskgroup/dbname_redo1_1_raw_120m
    redo1_2=/dev/vx/rdsk/diskgroup/dbname_redo1_2_raw_120m
    redo2_1=/dev/vx/rdsk/diskgroup/dbname_redo2_1_raw_120m
    redo2_2=/dev/vx/rdsk/diskgroup/dbname_redo2_2_raw_120m
    control1=/dev/vx/rdsk/diskgroup/dbname_control1_raw_110m
    control2=/dev/vx/rdsk/diskgroup/dbname_control2_raw_110m
    spfile=/dev/vx/rdsk/diskgroup/dbname_spfile_raw_5m
    pwdfile=/dev/vx/rdsk/diskgroup/dbname_pwdfile_raw_5m
    
    

    Use the following guidelines when creating or editing this file:

    • Each line in the file must have the following format:

      database_object_identifier=raw_device_path
      
      
    • For a single-instance database, the file must specify one automatic undo tablespace data file (undotbs1), and at least two redo log files (redo1_1, redo1_2).

    • For a RAC database, the file must specify one automatic undo tablespace data file (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

    • Specify at least two control files (control1, control2).

    • To use manual instead of automatic undo management, specify a single rollback segment tablespace data file (rbs) instead of the automatic undo management tablespace data files.

  5. Save the file, and note the file name that you specified.

  6. If you are using raw devices for database storage, then set the DBCA_RAW_CONFIG environment variable to specify the full path to the raw device mapping file:

    Bourne, Bash, or Korn shell:

    $ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
    $ export DBCA_RAW_CONFIG
    
    

    C shell:

    % setenv DBCA_RAW_CONFIG $ORACLE_BASE/oradata/dbname/dbname_raw.conf
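Before running Database Configuration Assistant, it can be worth confirming that every device named in the mapping file actually exists as a character special file. The following sketch is a hypothetical helper, not part of the documented procedure; pass it the path to the dbname_raw.conf file you created above.

```shell
# List mapping-file entries whose raw device is missing or is not a
# character special file. Each line of the file has the form
# database_object_identifier=raw_device_path.
check_mapping() {
  while IFS='=' read -r obj dev; do
    [ -c "$dev" ] || echo "missing: $obj -> $dev"
  done < "$1"
}
```

Running check_mapping against your mapping file should print nothing when all of the raw devices are in place; any line it prints names an entry whose device needs to be created or corrected before you start Database Configuration Assistant.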