3 Configuring Oracle Clusterware and Oracle Database Storage

This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:

3.1 Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files

This section describes supported options for storing Oracle Clusterware files, Oracle Database files, and data files.

3.1.1 Overview of Oracle Clusterware Storage Options

There are two ways of storing Oracle Clusterware files:

  • A supported shared file system: Supported file systems include the following:

    • Oracle Cluster File System (OCFS): A cluster file system Oracle provides for the Linux community

    • Oracle Cluster File System 2 (OCFS2): A cluster file system Oracle provides for the Linux community, which allows shared Oracle homes

      Note:

      For certification status of Oracle Cluster File System versions, refer to the Certify page on OracleMetaLink:
      http://metalink.oracle.com
      
      

      On IBM zSeries based Linux, block devices are supported, but OCFS is not supported.

    • General Parallel File System (GPFS) on POWER: A cluster file system provided by IBM. GPFS is only supported on POWER. You can use shared Oracle homes with GPFS. Requirements for GPFS size are the same as for OCFS2. Storage options available with GPFS are the same as those with OCFS2.

    • Network File System (NFS): A file-level protocol that enables access and sharing of files

      Note:

      NFS is not supported on POWER or on IBM zSeries based Linux.
  • Raw partitions: Raw partitions are disk partitions that are not mounted and written to using the Linux file system, but instead are accessed directly by the application.

3.1.2 Overview of Oracle Database and Recovery File Options

There are three ways of storing Oracle Database and recovery files:

  • Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files.

  • A supported shared file system: Supported file systems include the following:

    • Oracle Cluster File System 1 and 2 (OCFS and OCFS2): Note that if you intend to use OCFS or OCFS2 for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware. If you intend to store Oracle Clusterware files on OCFS, then you must ensure that OCFS volume sizes are at least 500 MB each.

      Note:

      For OCFS2 certification status, refer to the Certify page on OracleMetaLink:
      http://metalink.oracle.com
      
      
    • General Parallel File System (GPFS) with POWER: GPFS is supported only with POWER Linux.

    • OSCP-Certified NAS Network File System (NFS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.

  • Raw partitions (database files only): A raw partition is required for each database file.

See Also:

For information about certified compatible storage options, refer to the Oracle Storage Compatibility Program (OSCP) Web site, which is at the following URL:

http://www.oracle.com/technology/deploy/availability/htdocs/oscp.html

3.1.3 General Storage Considerations

For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.

For voting disk file placement, ensure that each voting disk is configured so that it does not share a hardware device, disk, or other single point of failure with any other voting disk. An absolute majority of the configured voting disks (more than half) must be available and responsive at all times for Oracle Clusterware to operate.

For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use OCFS, ASM, or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.

The following table shows the storage options supported for storing Oracle Clusterware files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).

Note:

For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site:
http://metalink.oracle.com

For information about Oracle Cluster File System version 2 (OCFS2), refer to the following Web site:

http://oss.oracle.com/projects/ocfs2/

For OCFS2 certification status, refer to the Certify page on OracleMetaLink.

Table 3-1 Supported Storage Options for Oracle Clusterware, Database, and Recovery Files

Storage Option                                      OCR and Voting Disks   Oracle Software   Database   Recovery

Automatic Storage Management                        No                     No                Yes        Yes
OCFS                                                Yes                    No                Yes        Yes
OCFS2                                               Yes                    Yes               Yes        Yes
GPFS (for Linux on POWER)                           Yes                    Yes               Yes        Yes
Local storage                                       No                     Yes               No         No
NFS file system (requires a certified NAS device)   Yes                    Yes               Yes        Yes
Shared raw partitions                               Yes                    No                Yes        No
Block devices (IBM zSeries based systems only)      Yes                    Yes               Yes        No


Use the following guidelines when choosing the storage options that you want to use for each file type:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.

  • For Standard Edition RAC installations, ASM is the only supported storage option for database or recovery files.

  • You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.

  • If you intend to use ASM with RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have the release 2 (10.2) version of Oracle Clusterware installed.

    • Any existing ASM instance on any node in the cluster is shut down.

  • If you intend to upgrade an existing RAC database, or a RAC database with ASM instances, then you must ensure that your system meets the following conditions:

    • Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the RAC database or RAC database with ASM instance is located.

    • The RAC database or RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only 2 nodes of the cluster, removing the third instance in the upgrade.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

3.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, you must perform the following tasks in the order listed:

  1. Check for available shared storage with CVU

     Refer to Checking for Available Shared Storage with CVU.

  2. Configure shared storage for Oracle Clusterware files

  3. Configure storage for Oracle Database files and recovery files

3.2 Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes on the cluster on a supported shared file system, use the following command:

/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list

If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:

/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list -s storageID_list

In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/sdb and /dev/sdc, and your mountpoint is /dev/dvdrom/, then enter the following command:

/dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dev/sdb,/dev/sdc

If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
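For example, to check all storage devices connected to node1 and node2, using the same mount point as the previous example, you could omit the -s option (a sketch based on the syntax shown above):

/dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node1,node2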

Note:

On IBM zSeries based Linux, CVU checks for shared raw partitions, but does not check for shared logical volumes.

3.3 Configuring Storage for Oracle Clusterware Files on a Supported Shared File System

Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:

3.3.1 Requirements for Using a File System for Oracle Clusterware Files

To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:

  • To use a cluster file system, it must be a supported cluster file system, as listed in the section "Deciding to Use a Cluster File System for Data Files".

  • To use an NFS file system, it must be on a certified NAS device.

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy).

    • At least two file systems are mounted, and you use the features of Oracle Database 10g Release 2 (10.2) to provide redundancy for the OCR.

  • If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The oracle user must have write permissions to create the files in the path that you specify.

Note:

If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

Use Table 3-2 to determine the partition size for shared file systems.

Table 3-2 Shared File System Volume Size Requirements

Oracle Clusterware files (OCR and voting disks) with external redundancy
    Number of volumes: 1
    Volume size: At least 256 MB for each volume

Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software
    Number of volumes: 1
    Volume size: At least 256 MB for each volume

Redundant Oracle Clusterware files with redundancy provided by Oracle software (mirrored OCR and two additional voting disks)
    Number of volumes: 1
    Volume size: At least 256 MB of free space for each OCR location if the OCR is configured on a file system (OCFS, OCFS2, NFS), or at least 256 MB available for each OCR location if the OCR is configured on raw devices or block devices; and at least 256 MB for each voting disk location, with a minimum of three disks

Oracle Database files
    Number of volumes: 1
    Volume size: At least 1.2 GB for each volume

Recovery files (Note: Recovery files must be on a different volume than database files)
    Number of volumes: 1
    Volume size: At least 2 GB for each volume


In Table 3-2, the total required volume size is cumulative. For example, to store all files on the shared file system, you should have at least 3.4 GB of storage available over a minimum of two volumes.

3.3.2 Deciding to Use a Cluster File System for Data Files

For Linux x86 (32-bit), x86 (64-bit) and Linux Itanium platforms, Oracle provides Oracle Cluster File System (OCFS). OCFS is designed for use with Linux kernel 2.4. Oracle Cluster File System 2 (OCFS2) is designed for Linux kernel 2.6. You can have a shared Oracle home on OCFS2.

If you are installing on IBM POWER, and you want to use a cluster file system, then you must use the IBM General Parallel File System (GPFS). You can have a shared Oracle home on a GPFS cluster file system.

If you have an existing Oracle installation, then use the following command to determine if OCFS or OCFS2 is installed:

# rpm -qa | grep ocfs

To ensure that OCFS is loaded, enter the following command:

/etc/init.d/ocfs status
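As an additional quick check, you can verify that the cluster file system kernel module is loaded (a sketch; it assumes the module name contains the string "ocfs", which is the case for both OCFS and OCFS2):

# lsmod | grep ocfs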

If you want to install the Oracle Database files on an OCFS or OCFS2 file system, and the packages are not installed, then download them from the following Web site. Follow the instructions listed with the kit to install the packages and configure the file system:

OCFS:

http://oss.oracle.com/projects/ocfs/

OCFS2:

http://oss.oracle.com/projects/ocfs2/

Note:

For OCFS2 certification status, refer to the Certify page on OracleMetaLink:
http://metalink.oracle.com

3.3.3 Checking NFS Buffer Size Parameters

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.

For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/fstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata /home/oracle/netapp nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0

Note:

Refer to your storage vendor documentation for additional information about mount options.
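After you update /etc/fstab and remount the file system, you can confirm that the new buffer sizes are in effect. The following is a minimal sketch, assuming the /home/oracle/netapp mount point used in the example above:

# umount /home/oracle/netapp
# mount /home/oracle/netapp
# mount | grep /home/oracle/netapp

The output of the last command should include the rsize and wsize values that you specified in the /etc/fstab entry.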

3.3.4 Creating Required Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For both NFS and OCFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -h command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use:

    • Oracle Clusterware files: Choose a file system with at least 1.4 GB of free disk space.

    • Database files: Choose either a single file system with at least 1.2 GB of free disk space, or two or more file systems with at least 1.2 GB of free disk space in total.

    • Recovery files: Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Oracle Clusterware file directory:

      # mkdir /mount_point/oracrs
      # chown oracle:oinstall /mount_point/oracrs
      # chmod 775 /mount_point/oracrs
      
      
    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
      
    • Recovery file directory (flash recovery area):

      # mkdir /mount_point/flash_recovery_area
      # chown oracle:oinstall /mount_point/flash_recovery_area
      # chmod 775 /mount_point/flash_recovery_area
      
      

Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed OCFS or NFS configuration.
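To confirm the ownership and permissions on each node, you can run a check similar to the following (a sketch; mount_point is a placeholder for your actual mount point, and only the directories that you created apply):

# ls -ld /mount_point/oracrs /mount_point/oradata /mount_point/flash_recovery_area

Each directory should be listed with owner oracle, group oinstall, and permissions drwxrwxr-x.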

3.4 Configuring Storage for Oracle Clusterware Files on Raw Devices

The following subsections describe how to configure Oracle Clusterware files on raw partitions.

3.4.1 Clusterware File Restrictions for Logical Volume Manager on Linux

The procedures contained in this section describe how to create raw partitions for Oracle Clusterware files.

On x86 and Itanium systems, although Red Hat Enterprise Linux 3 and SUSE Linux Enterprise Server provide a Logical Volume Manager (LVM), this LVM is not cluster-aware. For this reason, Oracle does not support the use of logical volumes with RAC on x86 and Itanium systems for either Oracle Clusterware or database files.

On IBM zSeries based systems, Oracle supports raw logical volumes.

3.4.2 Identifying Required Raw Partitions for Clusterware Files

Table 3-3 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.

Table 3-3 Raw Partitions Required for Oracle Clusterware Files on Linux

Number: 2 (or 1, if you have external redundancy support for this file)
Size for each partition: 256 MB
Purpose: Oracle Cluster Registry

    Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Cluster Registry (OCR).

    You should create two partitions: one for the OCR, and one for a mirrored OCR.

    If you are upgrading from Oracle9i release 2, then you can continue to use the raw device that you used for the SRVM configuration repository instead of creating this new raw device.

Number: 3 (or 1, if you have external redundancy support for this file)
Size for each partition: 256 MB
Purpose: Oracle Clusterware voting disks

    Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Clusterware voting disk.

    You should create three partitions: one for the voting disk, and two for additional voting disks.


Note:

If you place voting disk and OCR files on Oracle Cluster File System (OCFS or OCFS2), then ensure that the volumes are at least 500 MB in size. OCFS requires partitions of at least 500 MB.

3.4.3 Creating the Required Raw Partitions on IDE, SCSI, or RAID Devices

If you intend to use IDE, SCSI, or RAID devices for the raw devices, then follow these steps:

  1. If necessary, install or configure the shared disk devices that you intend to use for the raw partitions and restart the system.

    Note:

    Because the number of partitions that you can create on a single device is limited, you might need to create the required raw partitions on more than one device.
  2. To identify the device name for the disks that you want to use, enter the following command:

    # /sbin/fdisk -l
    
    

    Depending on the type of disk, the device name can vary:

    • IDE disk: /dev/hdxn

      In this example, x is a letter that identifies the IDE disk and n is the partition number. For example, /dev/hda is the first disk on the first IDE bus.

    • SCSI disk: /dev/sdxn

      In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

    • RAID disk: /dev/rd/cxdypz or /dev/ida/cxdypz

      Depending on the RAID controller, RAID devices can have different device names. In the examples shown, x is a number that identifies the controller, y is a number that identifies the disk, and z is a number that identifies the partition. For example, /dev/ida/c0d1 is the second logical drive on the first controller.

    You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine whether the device contains unused cylinders.

  3. To create raw partitions on a device, enter a command similar to the following:

    # /sbin/fdisk devicename
    
    

    When creating partitions:

    • Use the p command to list the partition table of the device.

    • Use the n command to create a partition.

    • After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

    • Refer to the fdisk man page for more information about creating partitions.
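Partition creation with fdisk is normally interactive. If you prefer to script the creation of a single whole-disk partition, a sketch similar to the following can be used; it assumes that /dev/sdb is an unused shared disk, and it may need adjustment for your fdisk version:

# /sbin/fdisk /dev/sdb <<EOF
n
p
1


w
EOF

In this sketch, n creates a new partition, p selects a primary partition, 1 is the partition number, the two empty lines accept the default first and last cylinders, and w writes the modified partition table to the device.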

3.4.4 Creating Oracle Clusterware Raw Logical Volumes on IBM zSeries Based Linux

On zSeries Linux only, you can use raw logical volume manager volumes for Oracle Clusterware and Oracle Database file storage. You can create the required raw logical volumes in a volume group either on direct access storage devices (DASDs), or on SCSI devices. This section describes how to create raw logical volumes for Oracle Clusterware.

Note:

On x86 and Itanium systems, although Red Hat Enterprise Linux 3 and SUSE Linux Enterprise Server provide a Logical Volume Manager (LVM), this LVM is not cluster-aware. For this reason, Oracle does not support the use of logical volumes with RAC on x86 and Itanium systems for either Oracle Clusterware or database files.

To use raw devices, refer to "Configuring Storage for Oracle Clusterware Files on Raw Devices".

If you intend to use ECKD-type direct access storage devices (DASDs) as raw partitions for the Oracle Clusterware files (the Oracle Cluster Registry and the CRS voting disk), then you must format the DASDs with a 4 KB block size.

Note:

You do not have to format FBA-type DASDs in Linux. The device name for the single whole-disk partition for FBA-type DASDs is /dev/dasdxxxx1.

To configure raw logical volumes for Oracle Clusterware and Oracle Database files, follow these steps:

  1. If necessary, install or configure the shared DASDs that you intend to use for the disk group and restart the system.

  2. Enter the following command to identify the DASDs configured on the system:

    # more /proc/dasd/devices
    
    

    The output from this command contains lines similar to the following:

    0302(ECKD) at ( 94: 48) is dasdm : active at blocksize: 4096, 540000 blocks, 2109 MB
    
    

    These lines display the following information for each DASD:

    • The device number (0302)

    • The device type (ECKD or FBA)

    • The Linux device major and minor numbers (94: 48)

    • The Linux device file name (dasdm)

      In general, DASDs have device names in the form dasdxxxx, where xxxx is between one and four letters that identify the device.

    • The block size and size of the device

  3. From the display, identify the devices that you want to use. Make sure that you configure the required number of partitions for Oracle Clusterware files, as described in Table 3-3.

    If the devices displayed are FBA-type DASDs, then you do not have to configure them. You can proceed to bind them for Oracle Clusterware files as described in the section "Binding Partitions to Raw Devices for Oracle Clusterware Files".

    If you want to use ECKD-type DASDs, then enter a command similar to the following to format the DASD, if it is not already formatted:

    # /sbin/dasdfmt -b 4096 -y -d cdl -v -f /dev/dasdxxxx
    
    

    In the preceding code example:

    • -b 4096: sets the block size to 4 KB

    • -y: indicates do not prompt for confirmation

    • -d cdl: indicates to use the compatible disk layout (default)

    • -v: displays verbose message output.

    Caution:

    Formatting a DASD destroys all existing data on the device. Make sure that:
    • You specify the correct DASD device name

    • You confirm that the DASD does not contain existing data that you want to preserve

    Note:

    For the DASDs that you intend to use to store Oracle Clusterware files (the Oracle Cluster Registry and the CRS voting disks), you must use a 4 KB block size.

    Also note that the dasdfmt command changes the volume serial number of ECKD devices. After running the dasdfmt and fdasd commands, you should use either the VM utilities or fdasd to relabel the volume serial number to the expected name.

    If you require only a single partition, then use the -d ldl option to format the DASD using the Linux disk layout. If you use this disk layout, then the partition device name for the DASD is /dev/dasdxxxx1.

    If you format the DASD with the compatible disk layout, then enter a command similar to the following to create a single whole-disk partition on the device:

    # /sbin/fdasd -a /dev/dasdxxxx
    
    

    The device name for the single whole-disk partition for the DASDs is /dev/dasdxxxx1.

  4. If you intend to create raw logical volumes on SCSI devices, then proceed to step 5.

    If you intend to create raw logical volumes on DASDs, and you formatted the DASD with the compatible disk layout, then determine how you want to create partitions.

    To create up to three partitions on the device (for example, if you want to create partitions for Oracle Clusterware files), enter a command similar to the following:

    # /sbin/fdasd /dev/dasdxxxx
    
    

    Use the following guidelines when creating partitions:

    • Use the p command to list the partition table of the device.

    • Use the n command to create a new partition.

    • After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

    • See the fdasd man page for more information about creating partitions.

    The partitions on a DASD have device names similar to the following, where n is the partition number, between 1 and 3:

    /dev/dasdxxxxn
    
    

    When you have completed creating partitions, you are then ready to mark devices as physical volumes. Proceed to Step 6.

  5. If you intend to use SCSI devices in the volume group, then follow these steps:

    1. If necessary, install or configure the shared disk devices that you intend to use for the volume group and restart the system.

    2. To identify the device name for the disks that you want to use, enter the following command:

      # /sbin/fdisk -l
      
      

      SCSI devices have device names similar to the following:

      /dev/sdxn
      
      

      In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

    3. If necessary, use fdisk to create partitions on the devices that you want to use.

    4. Use the t command in fdisk to change the system ID for the partitions that you want to use to 0x8e.

  6. Enter a command similar to the following to mark each device that you want to use in the volume group as a physical volume:

    # pvcreate /dev/dasda1 /dev/dasdb1
    
    
  7. To create a volume group named oracle_vg using the devices that you marked, enter a command similar to the following:

    # vgcreate oracle_vg /dev/dasda1 /dev/dasdb1
    
    
  8. To create the required logical volumes in the volume group that you created, enter commands similar to the following:

    # lvcreate -L size -n lv_name vg_name
    
    

    In this example:

    • size is the size of the logical volume, for example 500M

    • lv_name is the name of the logical volume, for example orcl_system_raw_256m

    • vg_name is the name of the volume group, for example oracle_vg

    For example, to create a 256 MB logical volume named rac_vote1_raw_256m for an Oracle Clusterware voting disk in the oracle_vg volume group, enter the following command (a consolidated sketch covering steps 6 through 8 follows this procedure):

    # lvcreate -L 256M -n rac_vote1_raw_256m oracle_vg
    
    

    Note:

    These commands create a device name similar to the following for each logical volume:
    /dev/vg_name/lv_name
    
  9. On the other cluster nodes, enter the following commands to configure the volume group and logical volumes on those nodes:

    # vgscan
    # vgchange -a y
    
    

    Note:

    The examples in the following sections show SCSI device names. You must use the appropriate DASD device names when completing these procedures.
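Bringing steps 6 through 8 together, the following sketch creates the Oracle Clusterware volumes listed in Table 3-3 for a configuration without external redundancy. The DASD partition and logical volume names are examples only; substitute the names used in your environment:

# pvcreate /dev/dasda1 /dev/dasdb1
# vgcreate oracle_vg /dev/dasda1 /dev/dasdb1
# lvcreate -L 256M -n rac_ocr1_raw_256m oracle_vg
# lvcreate -L 256M -n rac_ocr2_raw_256m oracle_vg
# lvcreate -L 256M -n rac_vote1_raw_256m oracle_vg
# lvcreate -L 256M -n rac_vote2_raw_256m oracle_vg
# lvcreate -L 256M -n rac_vote3_raw_256m oracle_vg

After you run vgscan and vgchange -a y on the remaining nodes, as described in step 9, the logical volumes are available under device names such as /dev/oracle_vg/rac_ocr1_raw_256m.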

3.4.5 Binding Partitions to Raw Devices for Oracle Clusterware Files

After you have created the required partitions, you must bind the partitions to raw devices on every node. However, you must first determine what raw devices are already bound to other devices. The procedure that you must follow to complete this task varies, depending on the Linux distribution that you are using:

Note:

If the nodes are configured differently, then the disk device names might be different on some nodes. In the following procedure, be sure to specify the correct disk device names on each node.

After you configure raw partitions, you can choose to configure ASM to use the raw partitions and manage database file storage.

Red Hat

    1. To determine what raw devices are already bound to other devices, enter the following command on every node:

      # /usr/bin/raw -qa
      
      

      Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.

      For each device that you want to use, identify a raw device name that is unused on all nodes.

    2. Open the /etc/sysconfig/rawdevices file in any text editor and add a line similar to the following for each partition that you created:

      /dev/raw/raw1 /dev/sdb1
      
      

      Specify an unused raw device for each partition (a consolidated example follows these steps).

    3. For the raw device that you created for the Oracle Cluster Registry (OCR), enter commands similar to the following to set the owner, group, and permissions on the device file:

      # chown root:oinstall /dev/raw/rawn
      # chmod 640 /dev/raw/rawn
      
      

      Making the oinstall group the group owner of the OCR device file permits the OCR to be read by multiple Oracle homes, including those with different OSDBA groups.

    4. To bind the partitions to the raw devices, enter the following command:

      # /sbin/service rawdevices restart
      
      

      The system automatically binds the devices listed in the rawdevices file when it restarts.

    5. Repeat step 2 through step 4 on each node in the cluster.
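As an illustration of steps 2 and 3, an /etc/sysconfig/rawdevices file for an Oracle Clusterware configuration without external redundancy might contain entries similar to the following. The partition names are hypothetical and must match the partitions that you actually created:

/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdb3
/dev/raw/raw4 /dev/sdc1
/dev/raw/raw5 /dev/sdc2

In this sketch, /dev/raw/raw1 and /dev/raw/raw2 would hold the OCR and its mirror (and therefore receive the root:oinstall ownership and 640 permissions from step 3), and /dev/raw/raw3 through /dev/raw/raw5 would hold the three voting disks.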

SUSE

    1. To determine what raw devices are already bound to other devices, enter the following command on every node:

      # /usr/sbin/raw -qa
      
      

      Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.

      For each device that you want to use, identify a raw device name that is unused on all nodes.

    2. Open the /etc/raw file in any text editor and add a line similar to the following to associate each partition with an unused raw device:

      raw1:sdb1
      
      
    3. For the raw device that you created for the Oracle Cluster Registry, enter commands similar to the following to set the owner, group, and permissions on the device file:

      # chown root:oinstall /dev/raw/rawn
      # chmod 640 /dev/raw/rawn
      
      
    4. To bind the partitions to the raw devices, enter the following command:

      # /etc/init.d/raw start
      
      
    5. To ensure that the raw devices are bound when the system restarts, enter the following command:

      # /sbin/chkconfig raw on
      
      
    6. Repeat step 2 through step 5 on the other nodes in the cluster.

3.4.6 Completing Supported Shared Storage Configuration

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed supported shared storage configuration.

3.5 Choosing a Storage Option for Oracle Database Files

Database files consist of the files that make up the database, and the recovery area files. You can store database files on a supported shared file system (OCFS, OCFS2, or NFS), on Automatic Storage Management (ASM), or on raw devices.

During configuration of Oracle Clusterware, if you selected OCFS or NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required pre-installation steps. You can proceed to Chapter 4, "Installing Oracle Clusterware".

If you want to place your database files on ASM, then proceed to Configuring Disks for Automatic Storage Management.

If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Database File Storage on Raw Devices".

Note:

Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM. For OCFS2 certification status, refer to the Certify page on OracleMetaLink.

3.6 Configuring Disks for Automatic Storage Management

This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks:

Note:

For Automatic Storage Management installations:
  • Although this section refers to disks, you can also use zero-padded files on a certified NAS storage device in an Automatic Storage Management disk group. Refer to Oracle Database Installation Guide for Linux x86 for information about creating and configuring NAS-based files for use in an Automatic Storage Management disk group.

  • You can run ASM using ASMLIB, or run ASM using raw partitions.

3.6.1 Identifying Storage Requirements for Automatic Storage Management

To identify the storage requirements for using Automatic Storage Management, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.

    Note:

    You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other.

    If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.

    If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option) then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or use different disk groups for each file type.

      The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

    • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.

  2. Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.

    The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.

    • Normal redundancy

      In a normal redundancy disk group, Automatic Storage Management uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For most installations, Oracle recommends that you select normal redundancy disk groups.

    • High redundancy

      In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

  3. Determine the total amount of disk space that you require for the database files and recovery files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:

    Redundancy Level    Minimum Number of Disks    Database Files    Recovery Files    Both File Types
    External            1                          1.15 GB           2.3 GB            3.45 GB
    Normal              2                          2.3 GB            4.6 GB            6.9 GB
    High                3                          3.45 GB           6.9 GB            10.35 GB

    For RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)

    For example, for a four-node RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:

    15 + (2 * 3) + (126 * 4) = 525

    If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

    The following section describes how to identify existing disk groups and determine the free disk space that they contain.

  4. Optionally, identify failure groups for the Automatic Storage Management disk group devices.

    Note:

    You need to complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
  5. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.

    • Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. They are not supported with RAC.

    See Also:

    The "Configuring Disks for Automatic Storage Management" section for information about completing this task

3.6.2 Using an Existing Automatic Storage Management Disk Group

If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or to use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine if an existing Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine if an Automatic Storage Management instance is configured on the system:

    $ more /etc/oratab
    
    

    If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

    +ASM2:oracle_home_path
    
    

    In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.

  3. Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege and start the instance if necessary:

    $ $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
    SQL> STARTUP
    
    
  4. Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
    
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
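In addition to checking free space in existing disk groups, you can list disks that the Automatic Storage Management instance has discovered but that do not yet belong to any disk group, which can help you identify candidate devices (a sketch; run it while connected to the Automatic Storage Management instance as in step 3):

SQL> SELECT PATH, HEADER_STATUS FROM V$ASM_DISK WHERE GROUP_NUMBER = 0;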

3.6.3 Configuring Disks for Automatic Storage Management with ASMLIB

The Automatic Storage Management library driver (ASMLIB) simplifies the configuration and management of the disk devices by eliminating the need to rebind raw devices used with ASM each time the system is restarted.

A disk that is configured for use with Automatic Storage Management is known as a candidate disk.

If you intend to use Automatic Storage Management for database storage for Linux, then Oracle recommends that you install the ASMLIB driver and associated utilities, and use them to configure candidate disks.

Note:

If you do not use the Automatic Storage Management library driver, then you must bind each disk device that you want to use to a raw device, as described in Configuring Database File Storage on ASM and Raw Devices.

To use the Automatic Storage Management library driver (ASMLIB) to configure Automatic Storage Management devices, complete the following tasks.

Installing and Configuring the Automatic Storage Management Library Driver Software

To install and configure the ASMLIB driver software, follow these steps:

  1. Enter the following command to determine the kernel version and architecture of the system:

    # uname -rm
    
    
  2. If necessary, download the required ASMLIB packages from the OTN Web site:

    http://www.oracle.com/technology/tech/linux/asmlib/index.html
    
    

    Note:

    ASMLIB driver packages for some kernel versions are available in the Oracle Clusterware directory on the 10g Release 2 (10.2) DVD-ROM, in the crs/RPMS/asmlib directory. However, Oracle recommends that you check the OTN Web site for the most up-to-date packages.

    You must install the oracleasm-support package version 2.0.1 or later to use ASMLIB on Red Hat Enterprise Linux 4.0 Advanced Server or SUSE Linux Enterprise Server 9.

    You must install the following packages, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:

    oracleasm-support-version.arch.rpm
    oracleasm-kernel-version.arch.rpm
    oracleasmlib-version.arch.rpm
    
    
  3. Switch user to the root user:

    $ su -
    
    
  4. Enter a command similar to the following to install the packages:

    # rpm -Uvh oracleasm-support-version.arch.rpm \
               oracleasm-kernel-version.arch.rpm \
               oracleasmlib-version.arch.rpm
    
    

    For example, if you are using the Red Hat Enterprise Linux AS 4 enterprise kernel on an AMD64 system, then enter a command similar to the following:

    # rpm -Uvh oracleasm-support-2.0.1.i386.rpm \
               oracleasmlib-2.0.1.x86_64.rpm \
               oracleasm-2.6.9-11.EL-2.0.1.x86_64.rpm
    
    
  5. Enter the following command to run the oracleasm initialization script with the configure option:

    # /etc/init.d/oracleasm configure
    
    
  6. Enter the following information in response to the prompts that the script displays:

    • Default user to own the driver interface: Specify the Oracle software owner user (typically, oracle).

    • Default group to own the driver interface: Specify the OSDBA group (typically, dba).

    • Start Oracle Automatic Storage Management Library driver on boot (y/n): Enter y to start the Oracle Automatic Storage Management library driver when the system starts.

    The script completes the following tasks:

    • Creates the /etc/sysconfig/oracleasm configuration file

    • Creates the /dev/oracleasm mount point

    • Loads the oracleasm kernel module

    • Mounts the ASMLIB driver file system

      Note:

      The ASMLIB driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.
  7. Repeat this procedure on all nodes in the cluster where you want to install Oracle Real Application Clusters.

Configuring the Disk Devices to Use the Automatic Storage Management Library Driver on x86 and Itanium Systems

To configure the disk devices that you want to use in an Automatic Storage Management disk group, follow these steps:

  1. If you intend to use IDE, SCSI, or RAID devices in the Automatic Storage Management disk group, then follow these steps:

    1. If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.

    2. To identify the device name for the disks that you want to use, enter the following command:

      # /sbin/fdisk -l
      
      

      Depending on the type of disk, the device name can vary:

      • IDE disk: /dev/hdxn

        In this example, x is a letter that identifies the IDE disk and n is the partition number. For example, /dev/hda is the first disk on the first IDE bus.

      • SCSI disk: /dev/sdxn

        In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

      • RAID disk: /dev/rd/cxdypz or /dev/ida/cxdypz

        Depending on the RAID controller, RAID devices can have different device names. In the examples shown, x is a number that identifies the controller, y is a number that identifies the disk, and z is a number that identifies the partition. For example, /dev/ida/c0d1 is the second logical drive on the first controller.

      To include devices in a disk group, you can specify either whole-drive device names or partition device names.

      Note:

      Oracle recommends that you create a single whole-disk partition on each disk that you want to use.
    3. Use either fdisk or parted to create a single whole-disk partition on the disk devices that you want to use.

  2. Enter a command similar to the following to mark a disk as an Automatic Storage Management disk:

    # /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
    
    

    In this example, DISK1 is a name that you want to assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with Automatic Storage Management, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other nodes in the cluster, enter the following command as root on each node:

    # /etc/init.d/oracleasm scandisks
    
    

    This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks.
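To confirm that the marked disks are visible, you can list them on each node after running the scandisks command (a sketch; DISK1 is the example name used in step 2):

# /etc/init.d/oracleasm listdisks
DISK1

The listdisks output shows the names of all Automatic Storage Management library driver disks that the node can access.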

Configuring the Disk Devices to Use the Automatic Storage Management Library Driver on IBM zSeries Based Systems

  1. If you formatted the DASD with the compatible disk layout, then enter a command similar to the following to create a single whole-disk partition on the device:

    # /sbin/fdasd -a /dev/dasdxxxx
    
    
  2. Enter a command similar to the following to mark a disk as an ASM disk:

    # /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
    
    

    In this example, DISK1 is a name that you want to assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with ASM, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other cluster nodes, enter the following command as root on each node:

    # /etc/init.d/oracleasm scandisks
    
    

    This command identifies shared disks attached to the node that are marked as ASM disks.

    Note:

    To create a database during the installation using the ASM library driver, you must choose an installation method that runs DBCA in interactive mode. For example, you can run DBCA in an interactive mode by choosing the Custom installation type, or the Advanced database configuration option. You must also change the default disk discovery string to ORCL:*.

Administering the Automatic Storage Management Library Driver and Disks

To administer the Automatic Storage Management library driver and disks, use the oracleasm initialization script with different options, as described in Table 3-4.

Table 3-4 ORACLEASM Script Options

configure

    Use the configure option to reconfigure the Automatic Storage Management library driver, if necessary:

    # /etc/init.d/oracleasm configure

enable, disable

    Use the disable and enable options to change the actions of the Automatic Storage Management library driver when the system starts. The enable option causes the Automatic Storage Management library driver to load when the system starts:

    # /etc/init.d/oracleasm enable

start, stop, restart

    Use the start, stop, and restart options to load or unload the Automatic Storage Management library driver without restarting the system:

    # /etc/init.d/oracleasm restart

createdisk

    Use the createdisk option to mark a disk device for use with the Automatic Storage Management library driver and give it a name:

    # /etc/init.d/oracleasm createdisk DISKNAME devicename

deletedisk

    Use the deletedisk option to unmark a named disk device:

    # /etc/init.d/oracleasm deletedisk DISKNAME

    Caution: Do not use this command to unmark disks that are being used by an Automatic Storage Management disk group. You must delete the disk from the Automatic Storage Management disk group before you unmark it.

querydisk

    Use the querydisk option to determine if a disk device or disk name is being used by the Automatic Storage Management library driver:

    # /etc/init.d/oracleasm querydisk {DISKNAME | devicename}

listdisks

    Use the listdisks option to list the disk names of marked Automatic Storage Management library driver disks:

    # /etc/init.d/oracleasm listdisks

scandisks

    Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as Automatic Storage Management library driver disks on another node:

    # /etc/init.d/oracleasm scandisks

When you have completed creating and configuring Automatic Storage Management, with ASMLIB, proceed to Chapter 4, "Installing Oracle Clusterware".

3.6.4 Configuring Database File Storage on ASM and Raw Devices

Note:

For improved performance and easier administration, Oracle recommends that you use the Automatic Storage Management library driver (ASMLIB) instead of raw devices to configure Automatic Storage Management disks.

To configure disks for Automatic Storage Management (ASM) using raw devices, complete the following tasks:

  1. To use ASM with raw partitions, you must create sufficient partitions for your data files, and then bind the partitions to raw devices. To do this, follow the instructions provided for Oracle Clusterware in the section "Configuring Storage for Oracle Clusterware Files on Raw Devices".

  2. Make a list of the raw device names you create for the data files, and have the list available during database installation.
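
For reference, a hedged sketch of how two ASM candidate partitions might be bound on Red Hat; the partition names /dev/sdd1 and /dev/sde1 are hypothetical, and the raw device numbers follow the ASM scenario used later in "Configuring Raw Devices on Red Hat Enterprise Linux 4.0". The entries would be added to /etc/sysconfig/rawdevices, and the raw devices would then be owned by oracle:dba with mode 660, as described in the binding procedure:

    /dev/raw/raw6 /dev/sdd1
    /dev/raw/raw7 /dev/sde1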

When you have completed creating and configuring ASM with raw partitions, proceed to Chapter 4, "Installing Oracle Clusterware".

3.7 Configuring Database File Storage on Raw Devices

The following sections describe how to configure raw partitions for database files.

3.7.1 Database File Restrictions for Logical Volume Manager on Linux

The procedures contained in this section describe how to create raw partitions for Oracle Database files.

On x86 and Itanium systems, although Red Hat Enterprise Linux 3 and SUSE Linux Enterprise Server provide a Logical Volume Manager (LVM), this LVM is not cluster-aware. For this reason, Oracle does not support the use of logical volumes with RAC on x86 and Itanium systems for either Oracle Clusterware or database files.

On IBM zSeries based systems, Oracle supports raw logical volumes.

3.7.2 Identifying Required Raw Partitions for Database Files

Table 3-5 lists the number and size of the raw partitions that you must configure for database files.

Table 3-5 Raw Partitions or Logical Volumes Required for Database Files on Linux

Number                    Partition Size (MB)                  Purpose
1                         500                                  SYSTEM tablespace
1                         300 + (Number of instances * 250)    SYSAUX tablespace
Number of instances       500                                  UNDOTBSn tablespace (one tablespace for each instance)
1                         250                                  TEMP tablespace
1                         160                                  EXAMPLE tablespace
1                         120                                  USERS tablespace
2 * number of instances   120                                  Two online redo log files for each instance
2                         110                                  First and second control files
1                         5                                    Server parameter file (SPFILE)
1                         5                                    Password file


Note:

If you prefer to use manual undo management instead of automatic undo management, then create a single rollback segment tablespace (RBS) raw device that is at least 500 MB in size instead of the UNDOTBSn raw devices.
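
For example, assuming a two-instance database that uses automatic undo management, the raw partitions in Table 3-5 add up to approximately 3.5 GB:

    500 (SYSTEM) + 800 (SYSAUX: 300 + 2 * 250) + 1000 (2 UNDOTBS * 500)
    + 250 (TEMP) + 160 (EXAMPLE) + 120 (USERS) + 480 (4 redo logs * 120)
    + 220 (2 control files * 110) + 5 (SPFILE) + 5 (password file) = 3540 MB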

3.7.3 Configuring Database Raw Logical Volumes on IBM zSeries Based Linux

On IBM zSeries based Linux, you can use raw Logical Volume Manager (LVM) volumes for Oracle Clusterware and database file storage. You can create the required raw logical volumes in a volume group on either direct access storage devices (DASDs) or on SCSI devices. To configure the required raw logical volumes, follow these steps:

Note:

On x86 and Itanium systems, although Red Hat Enterprise Linux 3 and SUSE Linux Enterprise Server provide a Logical Volume Manager (LVM), this LVM is not cluster-aware. For this reason, Oracle does not support the use of logical volumes with RAC on x86 and Itanium systems for either Oracle Clusterware or database files.
  1. If necessary, install or configure the shared DASDs that you intend to use for the disk group and restart the system.

  2. Enter the following command to identify the DASDs configured on the system:

    # more /proc/dasd/devices
    
    

    The output from this command contains lines similar to the following:

    0302(ECKD) at ( 94: 48) is dasdm : active at blocksize: 4096, 540000 blocks, 2109 MB
    
    

    These lines display the following information for each DASD:

    • The device number (0302)

    • The device type (ECKD or FBA)

    • The Linux device major and minor numbers (94: 48)

    • The Linux device file name (dasdm)

      In general, DASDs have device names in the form dasdxxxx, where xxxx is between one and four letters that identify the device.

    • The block size and size of the device

  3. From the display, identify the devices that you want to use.

    If the devices displayed are FBA-type DASDs, then you do not have to configure them. You can proceed to bind them for Oracle Database files as described in the section "Binding Partitions to Raw Devices for Database Files" .

    If you want to use ECKD-type DASDs, then enter a command similar to the following to format the DASD, if it is not already formatted:

    # /sbin/dasdfmt -b 4096 -f /dev/dasdxxxx
    
    

    Caution:

    Formatting a DASD destroys all existing data on the device. Make sure that:
    • You specify the correct DASD device name

    • The DASD does not contain existing data that you want to preserve

    This command formats the DASD with a block size of 4 KB and the compatible disk layout (default), which enables you to create up to three partitions on the DASD.

    Alternatively, you can use the -d ldl option to format the DASD using the Linux disk layout if you require only a single partition (for example, if you want to create a partition for ASM file management). If you use this disk layout, then the partition device name for the DASD is /dev/dasdxxxx1.

  4. If you intend to create raw logical volumes on SCSI devices, then proceed to step 5.

    If you intend to create raw logical volumes on DASDs, and you formatted the DASD with the compatible disk layout, then determine how you want to create partitions.

    To create a single whole-disk partition on the device (for example, if you want to create a partition on an entire raw logical volume for database files), enter a command similar to the following:

    # /sbin/fdasd -a /dev/dasdxxxx
    
    

    This command creates one partition across the entire disk. You are then ready to mark devices as physical volumes. Proceed to Step 6.

    To create up to three partitions on the device (for example, if you want to create partitions for individual tablespaces), enter a command similar to the following:

    # /sbin/fdasd /dev/dasdxxxx
    
    

    Use the following guidelines when creating partitions:

    • Use the p command to list the partition table of the device.

    • Use the n command to create a new partition.

    • After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

    • See the fdasd man page for more information about creating partitions.

    The partitions on a DASD have device names similar to the following, where n is the partition number, between 1 and 3:

    /dev/dasdxxxxn
    
    

    When you have completed creating partitions, you are then ready to mark devices as physical volumes. Proceed to Step 6.

  5. If you intend to use SCSI devices in the volume group, then follow these steps:

    1. If necessary, install or configure the shared disk devices that you intend to use for the volume group and restart the system.

    2. To identify the device name for the disks that you want to use, enter the following command:

      # /sbin/fdisk -l
      
      

      SCSI devices have device names similar to the following:

      /dev/sdxn
      
      

      In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

    3. If necessary, use fdisk to create partitions on the devices that you want to use.

    4. Use the t command in fdisk to change the system ID for the partitions that you want to use to 0x8e.

  6. Enter a command similar to the following to mark each device that you want to use in the volume group as a physical volume:

    # pvcreate /dev/sda1 /dev/sdb1
    
    
  7. To create a volume group named oracle_vg using the devices that you marked, enter a command similar to the following:

    # vgcreate oracle_vg /dev/dasda1 /dev/dasdb1
    
    
  8. To create the required logical volumes in the volume group that you created, enter commands similar to the following:

    # lvcreate -L size -n lv_name vg_name
    
    

    In this example:

    • size is the size of the logical volume, for example 500M

    • lv_name is the name of the logical volume, for example orcl_system_raw_500m

    • vg_name is the name of the volume group, for example oracle_vg

    For example, to create a 500 MB logical volume for the SYSTEM tablespace for a database named rac in the oracle_vg volume group, enter the following command:

    # lvcreate -L 500M -n rac_system_raw_500m oracle_vg
    
    

    Note:

    These commands create a device name similar to the following for each logical volume:
    /dev/vg_name/lv_name
    
  9. On the other cluster nodes, enter the following commands to configure the volume group and logical volumes on those nodes:

    # vgscan
    # vgchange -a y
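
    As an optional check on each node, you can confirm that the logical volumes are active and that the expected device files exist; this hedged example assumes the volume group is named oracle_vg, as in the earlier steps:

    # lvscan
    # ls /dev/oracle_vg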
    
    

3.7.4 Creating Required Raw Partitions for Database Files on IDE, SCSI, or RAID Devices

If you intend to use IDE, SCSI, or RAID devices for the database raw devices, then follow these steps:

  1. If necessary, install or configure the shared disk devices that you intend to use for the raw partitions and restart the system.

    Note:

    Because the number of partitions that you can create on a single device is limited, you might need to create the required raw partitions on more than one device.
  2. To identify the device name for the disks that you want to use, enter the following command:

    # /sbin/fdisk -l
    
    

    Depending on the type of disk, the device name can vary:

    • IDE disk: Device names have the format /dev/hdxn, where x is a letter that identifies the IDE disk and n is the partition number. For example, /dev/hda is the first disk on the first IDE bus.

    • SCSI disk: Device names have the format /dev/sdxn, where x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

    • RAID disk: Device names have the format /dev/rd/cxdypz or /dev/ida/cxdypz. Depending on the RAID controller, RAID devices can have different device names. In the examples shown, x is a number that identifies the controller, y is a number that identifies the disk, and z is a number that identifies the partition. For example, /dev/ida/c0d1 is the second logical drive on the first controller.

    You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine if the device contains unused cylinders.

  3. To create raw partitions on a device, enter a command similar to the following:

    # /sbin/fdisk devicename
    
    

    Use the following guidelines when creating partitions:

    • Use the p command to list the partition table of the device.

    • Use the n command to create a partition.

    • After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

    • Refer to the fdisk man page for more information about creating partitions.
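
After writing the partition table, you can optionally confirm that the new partitions are visible before binding them to raw devices. In this hedged example, /dev/sdb is a hypothetical device name:

    # /sbin/fdisk -l /dev/sdb
    # cat /proc/partitions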

3.7.5 Binding Partitions to Raw Devices for Database Files

After you have created the required partitions for database files, you must bind the partitions to raw devices on every node. However, you must first determine what raw devices are already bound to other devices. The procedure that you must follow to complete this task varies, depending on the Linux distribution that you are using:

Note:

If the nodes are configured differently, then the disk device names might be different on some nodes. In the following procedure, be sure to specify the correct disk device names on each node.
  • Red Hat:

    1. To determine what raw devices are already bound to other devices, enter the following command on every node:

      # /usr/bin/raw -qa
      
      

      Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.

      For each device that you want to use, identify a raw device name that is unused on all nodes.

    2. Open the /etc/sysconfig/rawdevices file in any text editor and add a line similar to the following for each partition that you created:

      /dev/raw/raw1 /dev/sdb1
      
      

      Specify an unused raw device for each partition.

    3. For each raw device that you specified in the rawdevices file, enter commands similar to the following to set the owner, group, and permissions on the device file:

      # chown oracle:dba /dev/raw/rawn
      # chmod 660 /dev/raw/rawn
      
      
    4. To bind the partitions to the raw devices, enter the following command:

      # /sbin/service rawdevices restart
      
      

      The system automatically binds the devices listed in the rawdevices file when it restarts.

    5. Repeat step 2 through step 4 on the other nodes in the cluster.

  • SUSE:

    1. To determine what raw devices are already bound to other devices, enter the following command on every node:

      # /usr/sbin/raw -qa
      
      

      Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.

      For each device that you want to use, identify a raw device name that is unused on all nodes.

    2. Open the /etc/raw file in any text editor and add a line similar to the following to associate each partition with an unused raw device:

      raw1:sdb1
      
      
    3. For each raw device that you specified in the /etc/raw file, enter commands similar to the following to set the owner, group, and permissions on the device file:

      # chown oracle:dba /dev/raw/rawn
      # chmod 660 /dev/raw/rawn
      
      
    4. To bind the partitions to the raw devices, enter the following command:

      # /etc/init.d/raw start
      
      
    5. To ensure that the raw devices are bound when the system restarts, enter the following command:

      # /sbin/chkconfig raw on
      
      
    6. Repeat step 2 through step 5 on the other nodes in the cluster.
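
For reference, a hedged example of how the first few database file bindings might look; the partition device names are hypothetical, and the raw device numbers match the sample mapping file in the next section. On Red Hat, the /etc/sysconfig/rawdevices entries would be similar to the following:

    /dev/raw/raw1 /dev/sdb1
    /dev/raw/raw2 /dev/sdb2
    /dev/raw/raw3 /dev/sdb3

The equivalent entries in the /etc/raw file on SUSE would be similar to the following:

    raw1:sdb1
    raw2:sdb2
    raw3:sdb3

Add one entry for each partition listed in Table 3-5.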

3.7.6 Creating the Database Configuration Assistant Raw Device Mapping File

Note:

You must complete this procedure only if you are using raw devices for database files. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.

To allow Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:

  1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

    • Bourne, Bash, or Korn shell:

      $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
      
      
    • C shell:

      % setenv ORACLE_BASE /u01/app/oracle
      
      
  2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

    # mkdir -p $ORACLE_BASE/oradata/dbname
    # chown -R oracle:oinstall $ORACLE_BASE/oradata
    # chmod -R 775 $ORACLE_BASE/oradata
    
    

    In this example, dbname is the name of the database that you chose previously.

  3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

  4. Edit the dbname_raw.conf file in any text editor to create a file similar to the following:

    Note:

    The following example shows a sample mapping file for a two-instance RAC cluster.
    system=/dev/raw/raw1
    sysaux=/dev/raw/raw2
    example=/dev/raw/raw3
    users=/dev/raw/raw4
    temp=/dev/raw/raw5
    undotbs1=/dev/raw/raw6
    undotbs2=/dev/raw/raw7
    redo1_1=/dev/raw/raw8
    redo1_2=/dev/raw/raw9
    redo2_1=/dev/raw/raw10
    redo2_2=/dev/raw/raw11
    control1=/dev/raw/raw12
    control2=/dev/raw/raw13
    spfile=/dev/raw/raw14
    pwdfile=/dev/raw/raw15
    
    

    Use the following guidelines when creating or editing this file:

    • Each line in the file must have the following format:

      database_object_identifier=raw_device_path
      
      
    • For a single-instance database, the file must specify one automatic undo tablespace data file (undotbs1), and at least two redo log files (redo1_1, redo1_2).

    • For a RAC database, the file must specify one automatic undo tablespace data file (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

    • Specify at least two control files (control1, control2).

    • To use manual instead of automatic undo management, specify a single rollback segment tablespace data file (rbs) instead of the automatic undo management tablespace data files.

  5. Save the file, and note the file name that you specified.

  6. If you are using raw devices for database storage, then set the DBCA_RAW_CONFIG environment variable to specify the full path to the raw device mapping file:

    Bourne, Bash, or Korn shell:

    $ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
    $ export DBCA_RAW_CONFIG
    
    

    C shell:

    % setenv DBCA_RAW_CONFIG $ORACLE_BASE/oradata/dbname/dbname_raw.conf
    
    

3.8 Configuring Raw Devices on Red Hat Enterprise Linux 4.0

If you intend to use raw devices for Oracle Clusterware or Oracle Database files, then you need to configure the raw devices. Starting with Linux distributions based on the 2.6 kernel, raw devices are not supported by default in the kernel. However, Red Hat Enterprise Linux 4.0 continues to provide raw device support. To confirm that raw devices are enabled, enter the following command:

# chkconfig --list

Scan the output for raw devices. If you do not find raw devices, then use the following command to enable the raw device service:

# chkconfig --level 345 rawdevices on
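
To confirm that the rawdevices service is now enabled for the run levels you specified, you can optionally check just that service (the run levels reported depend on your configuration):

# chkconfig --list rawdevices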

After you confirm that the raw devices service is running, you should change the default ownership of raw devices. When you restart a Red Hat Enterprise Linux 4.0 system, ownership and permissions on raw devices revert by default to root. If you are using raw devices with this operating system for your Oracle files (for example, for ASM storage or Oracle Clusterware files), then you need to override this default behavior.

This section uses a scenario with two ASM disk files (/dev/raw/raw6 and /dev/raw/raw7), two Oracle Cluster Registry files (/dev/raw/raw1 and /dev/raw/raw2), and three Oracle Clusterware voting disks (/dev/raw/raw3, /dev/raw/raw4, and /dev/raw/raw5).

To ensure correct ownership of these devices when the operating system is restarted, create a new file in the /etc/udev/permissions.d directory, called oracle.permissions, and enter the raw device permissions information.

Note that Oracle Clusterware software can be owned either by the same user that owns the Oracle database software (typically oracle), or can be owned by a separate Oracle Clusterware user. If you create a separate Oracle Clusterware user, then that user must own the voting disks.

This example shows the permissions to set if you use a separate Oracle Clusterware user, named crs, and the Oracle user is named oracle. The ASM disks should be owned by oracle, and the voting disks owned by crs. The Oracle Cluster Registry (OCR) is always owned by root. With the scenario for this section, the following is an example of the contents of /etc/udev/permissions.d/oracle.permissions:

# ASM
raw/raw[67]:oracle:dba:0660
# OCR
raw/raw[12]:root:oinstall:0640
# Voting Disks
raw/raw[3-5]:crs:oinstall:0640

Note that the device path entries can use shell-style glob patterns, so entries such as raw/raw[3-4] or raw/raw* are permitted. Refer to your operating system documentation for details about character range syntax.

After creating the oracle.permissions file, the permissions of the rawdevices files are set automatically the next time the system is restarted. To set permissions to take effect immediately, without restarting the system, use the chown and chmod commands:

chown oracle:dba /dev/raw/raw[67]
chmod 660 /dev/raw/raw[67]
chown root:oinstall /dev/raw/raw[12]
chmod 640 /dev/raw/raw[12]
chown crs:oinstall /dev/raw/raw[3-5]
chmod 640 /dev/raw/raw[3-5]
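
To verify the ownership and permissions for this scenario, an optional check:

ls -l /dev/raw/raw[1-7]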

3.9 Configuring Raw Devices on SUSE Linux

To set and maintain proper device ownership and permissions for raw devices after restarts, you must install the udev RPM. To do this:

  1. Go to the Novell Web site download.novell.com and download the udev RPM.

  2. Install the udev RPM by entering the following command:

    rpm -Fhv mkinitrd.rpm udev.rpm
    
    
  3. Set the default ownership of raw devices, as described in "Configuring Raw Devices on Red Hat Enterprise Linux 4.0".
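
After installing the RPM in step 2, you can optionally confirm that the updated packages are present (the versions reported depend on the RPMs that you downloaded):

    rpm -q udev mkinitrd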

3.10 Upgrading 10.1.0.3 Databases on RAW to 10.2.0.2 on Block Devices

Block devices are supported with the Oracle Database 10.2.0.2 release. To upgrade a 10.1 database on raw devices to a 10.2.0.2 database on block devices, use the following procedure:

  1. Perform Oracle Clusterware and Oracle Real Application Clusters (RAC) upgrade steps (including database upgrade), as described in Oracle Database Upgrade Guide, 10g Release 2 (10.2), part number B14238-01.

  2. Using the following procedure, stop all processes:

    1. Shut down all processes in the Oracle home that can access a database, such as Oracle Enterprise Manager Database Control or iSQL*Plus.

    2. Shut down all RAC instances on all nodes. To shut down all RAC instances for a database, enter the following command, where db_name is the name of the database:

      $ oracle_home/bin/srvctl stop database -d db_name
      
      
    3. Shut down all ASM instances on all nodes. To shut down an ASM instance, enter the following command, where node is the name of the node where the ASM instance is running:

      $ oracle_home/bin/srvctl stop asm -n node
      
      
    4. Stop all node applications on all nodes. To stop node applications running on a node, enter the following command, where node is the name of the node where the applications are running:

      $ oracle_home/bin/srvctl stop nodeapps -n node
      
      
    5. Log in as the root user, and shut down the Oracle Clusterware process by entering the following command on all nodes:

      # crs_home/bin/crsctl stop crs
      
      
  3. Unbind the raw devices that were used for the OCR, voting disks, and database files. To unbind these devices, log in as the root user, and enter the command for your Linux distribution, where raw_device_name is the name of the raw device to unbind.

    Red Hat Linux:

    # /usr/bin/raw raw_device_name 0 0


    SUSE Linux:

    # /usr/sbin/raw raw_device_name 0 0


  4. As root, use the following command to rename each raw device that you unbound in step 3, where raw_device_name is the name of the raw device:

    # mv raw_device_name raw_device_name.10.1
    
    
  5. As root, use the following command to link each raw device file that previously existed to its corresponding block device, where block_device_name is the name of the block device and raw_device_name is the name of the raw device:

    # ln -s block_device_name raw_device_name
    
    
  6. Using the following procedure, start up all processes:

    1. Start up the Oracle Clusterware process by entering the following command on all nodes as the root user, where crs_home is the path or symbolic link to the CRS home directory:

      # crs_home/bin/crsctl start crs
      
      

      Note: This command starts Oracle Clusterware and all its resources.

    2. Start up all other processes in the Oracle home that you want to use, such as Oracle Enterprise Manager Database Control or iSQL*Plus.
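
      If any of these resources do not come back online automatically, the following is a hedged sketch of the corresponding manual start commands, which mirror the stop commands in step 2 (db_name and node are placeholders, as before):

      $ oracle_home/bin/srvctl start nodeapps -n node
      $ oracle_home/bin/srvctl start asm -n node
      $ oracle_home/bin/srvctl start database -d db_name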