Oracle 11g R2 RAC On Oracle Enterprise Linux 5
Real Application Clusters (RAC) is the Oracle trade name for its database server cluster product. It provides load balancing, scalability, elasticity and high availability by keeping the Oracle database available and running across a set of server nodes that access a common database on shared storage.
Oracle introduced and popularized the technology with the release of 9i in 2001. Clustering did exist before 9i under a different name, Oracle Parallel Server, but Parallel Server was nowhere near as fast or as advanced as RAC in its architecture, application and deployment.
This document shows, step by step, how to install and set up a 2-node 11g R2 RAC cluster. The setup uses an IP-based iSCSI Openfiler SAN as the shared storage subsystem. It does not have IPMI or Grid Naming Service (GNS) configured; the SCAN is resolved through DNS.
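As a quick check that the SCAN resolves correctly, query DNS from one of the RAC nodes (a sketch; the SCAN name and addresses here are made-up examples, and Oracle recommends that the SCAN resolve to three addresses):
# Hypothetical SCAN name; expect three addresses returned in round-robin order
nslookup lab-scan.example.com
#   Name:    lab-scan.example.com
#   Address: 192.168.1.151
#   Name:    lab-scan.example.com
#   Address: 192.168.1.152
#   Name:    lab-scan.example.com
#   Address: 192.168.1.153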
Hardware Used in setting up 2-node 11g R2 RAC using iSCSI SAN (Openfiler):
- Total Machines: 4 (2 for RAC nodes + 1 for SAN + 1 for DNS)
- Network Switches: 3 (for Public, Private and Shared Storage)
- Extra Network Adaptors: 5 (4 for RAC nodes (2 for each node) and one for Storage Server)
- Network Cables: 9 (6 for the RAC nodes (3 for each node), 2 for the Shared Storage server and 1 for the DNS server)
Software Used for the 2-node RAC Setup using iSCSI SAN (Openfiler):
- SAN Storage Solution: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686)
- Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE)
- Clusterware: Oracle 11g R2 Grid Infrastructure (11.2.0.3)
- Oracle RAC: Oracle RDBMS 11g R2 (11.2.0.3)
2-Node RAC Setup
Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE):
Server: All the RAC Nodes + DNS server
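A quick sanity check that every machine runs the expected release and kernel (the outputs shown are what this setup expects):
# Run on every RAC node and the DNS server
cat /etc/enterprise-release
uname -r          # expected: 2.6.18-194.el5PAE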
Grid Infrastructure Software (Clusterware + ASM 11.2.0.3):
Server: All the RAC Nodes
ORACLE_BASE: /u/oracle/
ORACLE_HOME: /u/oracle/server/grid203
Owner: grid (Primary Group: oinstall, Secondary Group: dba)
Permissions: 755
OCR/Voting Disk Storage Type: ASM
Oracle Inventory Location: /u01/app/oraInventory
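The owners and groups above must exist, with identical IDs, on every node before installation. A minimal sketch of creating them as root (the UID/GID numbers are arbitrary assumptions; any values work as long as they match across all nodes):
# Create the groups and the two software owners on each RAC node
groupadd -g 1000 oinstall
groupadd -g 1001 dba
useradd -u 1100 -g oinstall -G dba grid      # Grid Infrastructure owner
useradd -u 1101 -g oinstall -G dba oracle    # Database software owner (used below)
passwd grid
passwd oracle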
Oracle Database Software (RAC 11.2.0.3):
Server: All the RAC Nodes
ORACLE_BASE: /u/oracle/
ORACLE_HOME: /u/oracle/server/database203
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 755
Oracle Inventory Location: /u01/app/oraInventory
Database Name: labdb
Listener: LAB_LISTENER (TCP:1525)
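In 11g R2 the listener runs from the Grid Infrastructure home and is managed with srvctl. A sketch of registering the non-default listener above, run as the grid owner once Grid Infrastructure is installed:
# Register LAB_LISTENER on port 1525 and bring it up
srvctl add listener -l LAB_LISTENER -p "TCP:1525"
srvctl start listener -l LAB_LISTENER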
Openfiler 2.3:
Server: single dedicated server acting as SAN
OS: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686).
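Once the Openfiler volumes are exported as iSCSI targets, each RAC node discovers and logs in to them with iscsiadm (a sketch; 192.168.2.195 is an assumed storage-network address for the Openfiler server):
# Run on each RAC node: discover the targets, then log in to all of them
iscsiadm -m discovery -t sendtargets -p 192.168.2.195
iscsiadm -m node -L all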
Server Hardware Requirements:
- Each node in the cluster must meet the requirements below (quick verification commands are sketched after this list).
- At least 1024 x 768 display resolution, so that OUI displays correctly.
- 1 GB of space in the /tmp directory
- 5.5 GB of space for the Oracle Grid Infrastructure home.
- At least 2.5 GB of RAM and an equivalent amount of swap space (for a 32-bit installation, as in this setup).
- All the RAC nodes must share the same instruction set architecture. For a test setup it is even possible to build RAC on a mix of 32-bit Intel and 32-bit AMD servers with different memory sizes and CPU speeds.
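Simple checks for the memory, swap and /tmp requirements above, run on each node:
# RAM and swap, in kB (expect at least ~2.5 GB each)
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
# Free space in /tmp (expect at least 1 GB)
df -h /tmp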
Minimum Required RPMs for OEL 5.5 (both RAC nodes; a one-shot verification command follows the list):
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.i386
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11
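All of these can be verified in one pass with rpm; any package reported as "not installed" must be added from the OEL media before proceeding:
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
    glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp \
    libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel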
Set the kernel parameters below to the recommended values in /etc/sysctl.conf on all RAC nodes:
# Kernel sysctl configuration file for Oracle Enterprise Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the default maximum size of a message queue, in bytes
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 8192
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
# Controls the total amount of shared memory allowed, in pages
kernel.shmall = 1073741824
# For 11g, recommended value for file-max is 6815744
fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max = 327679', comment other entries for this parameter and re-run sysctl -p
# fs.file-max = 327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max = 262144', comment other entries for this parameter and re-run sysctl -p
# net.core.wmem_max = 262144
fs.aio-max-nr = 3145728
# For 11g, recommended value for ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range = 1024 65000', comment other entries for this parameter and re-run sysctl -p
# net.ipv4.ip_local_port_range = 1024 65000
# Reserve 50 MB of free memory to help avoid low-memory conditions
vm.min_free_kbytes = 51200
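After saving /etc/sysctl.conf, apply the values on each node so they take effect without a reboot:
# Load the new kernel parameters immediately
sysctl -p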
Grid Infrastructure Installation Steps