Recently, I set up a two-node RAC environment for testing using Solaris 10 and NFS. This environment consisted of two RAC nodes running Solaris 10 and a third Solaris 10 server which served as my NFS filer.
I thought it might prove useful to write a post on how this is achieved, as I found it to be a relatively quick way to set up a cheap test RAC environment. Obviously, this setup is not supported by Oracle and should only be used for development and testing purposes.
This post will only detail the steps which are specific to this setup, meaning I won't talk about a number of steps which need to be performed, such as setting up user equivalence and creating the database. I will mention when these steps should be performed, but I point you to Jeffrey Hunter's article on building a 10gR2 RAC on Linux with iSCSI for more information on steps like these.
Overview of the Environment
Here is a diagram of the architecture used, which is based on Jeff Hunter's diagram from the previously mentioned article:
You can see that I am using an external hard drive attached to the NFS filer for storage. This external hard drive will hold all my database and Clusterware files.
Again, the hardware used is exactly the same as the hardware used in Jeff Hunter's article. Notice, however, that I do not have a public interface configured for my NFS filer. This is mainly because I did not have any spare network interfaces lying around for me to use!
Getting Started
To get started, we will install Solaris 10 for the x86 architecture on all three machines. The ISO images for Solaris 10 x86 can be downloaded from Sun's website. You will need a Sun Online account to access the downloads, but registration is free and painless.
I won't be covering the Solaris 10 installation process here, but for more information, I refer you to Sun's official basic installation guide.
When installing Solaris 10, make sure that you configure both network interfaces. Ensure that you do not use DHCP for either network interface and specify all the necessary details for your environment.
After installation, you should update the /etc/inet/hosts file on all hosts. For my environment as shown in the diagram above, my hosts file looked like the following:
#
# Internet host table
#
127.0.0.1 localhost
# Public Network - (pcn0)
172.16.16.27 solaris1
172.16.16.28 solaris2
# Private Interconnect - (pcn1)
192.168.2.111 solaris1-priv
192.168.2.112 solaris2-priv
# Public Virtual IP (VIP) addresses for - (pcn0)
172.16.16.31 solaris1-vip
172.16.16.32 solaris2-vip
# NFS Filer - (pcn1)
192.168.2.195 solaris-filer
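With the hosts file in place on all machines, it's worth quickly confirming that name resolution works and that each interface responds before moving on. A minimal check from one of the nodes (the host names are the ones from my hosts file above) might look like this:
# getent hosts solaris2-priv
192.168.2.112   solaris2-priv
# ping solaris2
solaris2 is alive
# ping solaris-filer
solaris-filer is alive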
The network settings on the RAC nodes will need to be adjusted as they can affect cluster interconnect transmissions. The UDP parameters which need to be modified on Solaris are udp_recv_hiwat and udp_xmit_hiwat. The default value for these parameters on Solaris 10 is 57344 bytes. Oracle recommends that these parameters are set to at least 65536 bytes. To see what these parameters are currently set to, perform the following:
# ndd /dev/udp udp_xmit_hiwat
57344
# ndd /dev/udp udp_recv_hiwat
57344
To set the values of these parameters to 65536 bytes in current memory, perform the following:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536
Now we obviously want these parameters to be set to these values when the system boots. The official Oracle documentation is incorrect when it states that these parameters are set on boot when they are placed in the /etc/system file. Values placed in /etc/system will have no effect on Solaris 10; Bug 5237047 has more information on this. So what we will do is create a startup script called udp_rac in /etc/init.d. This script will have the following contents:
#!/sbin/sh
#
# Set the UDP high-water marks required by Oracle RAC at boot time.
#
case "$1" in
'start')
        ndd -set /dev/udp udp_xmit_hiwat 65536
        ndd -set /dev/udp udp_recv_hiwat 65536
        ;;
'state')
        ndd /dev/udp udp_xmit_hiwat
        ndd /dev/udp udp_recv_hiwat
        ;;
*)
        echo "Usage: $0 { start | state }"
        exit 1
        ;;
esac
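Don't forget to make the script executable. Once that's done, the state argument gives a quick way to confirm the current values at any time (output shown after start has been run):
# chmod 744 /etc/init.d/udp_rac
# /etc/init.d/udp_rac state
65536
65536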
Now, we need to create a link to this script in the /etc/rc3.d directory:
# ln -s /etc/init.d/udp_rac /etc/rc3.d/S86udp_rac
Configuring the NFS Filer
Now that we have Solaris installed on all our machines, it's time to start configuring our NFS filer. As I mentioned before, I will be using an external hard drive for storing all my database files and Clusterware files. If you're not using an external hard drive, you can ignore the next paragraph.
In my previous post, I talked about creating a UFS file system on an external hard drive in Solaris 10. I am going to be following that post exactly. So if you perform what I mention in that post, you will have a UFS file system ready for mounting.
Now, I have a UFS file system created on the /dev/dsk/c2t0d0s0 device. I will create a directory for mounting this file system and then mount it:
# mkdir -p /export/rac
# mount -F ufs /dev/dsk/c2t0d0s0 /export/rac
Now that we have created the base directory, let's create directories inside it which will contain the various files for our RAC environment:
# cd /export/rac
# mkdir crs_files
# mkdir oradata
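Just as a quick sanity check before sharing these directories out, confirm that the file system is mounted and the directories exist:
# df -k /export/rac
# ls /export/rac
crs_files  oradata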
The /export/rac/crs_files directory will contain the OCR and the voting disk files used by Oracle Clusterware. The /export/rac/oradata directory will contain all the Oracle data files, control files, redo logs and archive logs for the cluster database.
Obviously, this setup is not ideal since everything is on the same device, but for setting up this environment, I didn't care. All I wanted to do was get a quick RAC environment up and running and show how easily it can be done with NFS. More care should be taken in the previous step but I'm lazy...
Now we need to make these directories accessible to the Oracle RAC nodes. I will be accomplishing this using NFS. We first need to edit the /etc/dfs/dfstab file to specify which directories we want to share and what options we want to use when sharing them. The dfstab file I configured looked like so:
# Place share(1M) commands here for automatic execution
# on entering init state 3.
#
# Issue the command 'svcadm enable network/nfs/server' to
# run the NFS daemon processes and the share commands, after adding
# the very first entry to this file.
#
# share [-F fstype] [ -o options] [-d ""] [resource]
# .e.g,
# share -F nfs -o rw=engineering -d "home dirs" /export/home2
share -F nfs -o rw,anon=175 /export/rac/crs_files
share -F nfs -o rw,anon=175 /export/rac/oradata
The anon option in the dfstab file, as shown above, is the user ID of the oracle user on the cluster nodes. This user ID should be the same on all nodes in the cluster.
After editing the dfstab file, the NFS daemon process needs to be restarted. You can do this on Solaris 10 like so:
# svcadm restart nfs/server
To check if the directories are exported correctly, the following can be performed from the NFS filer:
# share
- /export/rac/crs_files rw,anon=175 ""
- /export/rac/oradata rw,anon=175 ""
#
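Because anon=175 maps incoming requests to UID 175, the oracle user on each RAC node must actually have that UID, otherwise file permissions on the shared storage won't line up. Assuming the oracle user exists (creation is mentioned further down), a quick check on each node might look like the following (the group ID shown is just illustrative):
# id oracle
uid=175(oracle) gid=300(oinstall)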
The specified directories should now be accessible from the Oracle RAC nodes. To verify that these directories are accessible from the RAC nodes, run the following from both nodes (solaris1 and solaris2 in my case):
# dfshares solaris-filer
RESOURCE SERVER ACCESS TRANSPORT
solaris-filer:/export/rac/crs_files solaris-filer - -
solaris-filer:/export/rac/oradata solaris-filer - -
#
The output should be the same on both nodes.
Mount the NFS Exports on the Oracle RAC Nodes
Now we need to mount the NFS exports on the two nodes in the cluster. First, we must create the directories where we will be mounting the exports. In my case, I did this:
# mkdir /u02
# mkdir /u03
I am not using /u01 as I'm using that directory for installing the software. I will not be configuring a shared Oracle home in this article as I wanted to keep things as simple as possible, but that might serve as a good future blog post.
For mounting the NFS exports, there are specific mount options which must be used with NFS in an Oracle RAC environment. The mount commands which I used to manually mount these exports are as follows:
# mount -F nfs -o rw,hard,nointr,rsize=32768,wsize=32768,noac,proto=tcp,forcedirectio,vers=3 \
solaris-filer:/export/rac/crs_files /u02
# mount -F nfs -o rw,hard,nointr,rsize=32768,wsize=32768,noac,proto=tcp,forcedirectio,vers=3 \
solaris-filer:/export/rac/oradata /u03
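To confirm the exports mounted with the options we asked for, the mount table on each node can be inspected; nfsstat -m prints every NFS mount along with the flags it was mounted with, so you can verify that options such as proto=tcp, noac and forcedirectio actually took effect:
# nfsstat -m
# mount -p | grep nfs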
Obviously, we want these exports to be mounted at boot. This is accomplished by adding the necessary lines to the /etc/vfstab file. The extra lines which I added to the /etc/vfstab file on both nodes were (each entry is a single line in the actual file):
solaris-filer:/export/rac/crs_files - /u02 nfs - yes rw,hard,bg,nointr,rsize=32768,wsize=32768,noac,proto=tcp,forcedirectio,vers=3
solaris-filer:/export/rac/oradata - /u03 nfs - yes rw,hard,bg,nointr,rsize=32768,wsize=32768,noac,proto=tcp,forcedirectio,vers=3
Configure the Solaris Servers for Oracle
Now that we have shared storage set up, it's time to configure the Solaris servers on which we will be installing Oracle. One little thing which must be performed on Solaris is to create symbolic links for the SSH binaries. The Oracle Universal Installer and the configuration assistants (such as NETCA) look for the SSH binaries in the wrong location on Solaris. Even if the SSH binaries are included in your path when you start these programs, they will still look for the binaries in the wrong location. On Solaris, the SSH binaries are located in the /usr/bin directory by default, so the OUI will throw an error stating that it cannot find the ssh or scp binaries. My workaround was to simply create symbolic links in the /usr/local/bin directory for these binaries:
# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp
You should also create the oracle user and directories now before configuring kernel parameters.
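On Solaris 10, creating the user and base directories might look something like the following (a minimal sketch only; the group names and home directory are the conventional choices, and the UID of 175 is chosen to match the anon=175 option set in the dfstab earlier). Run this on both nodes:
# groupadd oinstall
# groupadd dba
# useradd -u 175 -g oinstall -G dba -m -d /export/home/oracle -s /usr/bin/bash oracle
# passwd oracle
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle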
For configuring and setting kernel parameters on Solaris 10 for Oracle, I point you to this excellent installation guide for Oracle on Solaris 10 by Howard Rogers. It contains all the necessary information you need for configuring your Solaris 10 system for Oracle. Just remember to perform all steps mentioned in his article on both nodes in the cluster.
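One Solaris 10 specific point worth highlighting from that guide: shared memory settings belong in resource controls rather than /etc/system, much like the UDP parameters earlier. For example, to give the oracle user's default project a larger shared memory ceiling (the 4gb value here is purely illustrative; use the values from the guide):
# projadd -U oracle -K "project.max-shm-memory=(priv,4gb,deny)" user.oracle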
What's Left to Do
From here on in, it's quite easy to follow Jeff Hunter's article. Obviously, you won't be using ASM. The only difference between what to do now and what he has documented is the file locations, so you can follow along from section 14 and you should be able to get a 10gR2 RAC environment up and running. Obviously, there are some sections, such as setting up OCFS2 and ASMLib, that can be left out since we are installing on Solaris and not Linux.
1 comment:
One thing you may want to check: you are setting udp_xmit_hiwat and udp_recv_hiwat, but you are mounting the file systems using TCP (proto=tcp). I believe you should add tcp_xmit_hiwat and tcp_recv_hiwat to the rc script.