
Installing Oracle 10g RAC on Solaris 9


Notes:
1. The official Oracle installation documents (docs 2 and 3 below) contain quite a few errors.
   The latest Release Notes (August 2006) correct some of them, but many remain uncorrected.
2. Read the three scripts in this article carefully before using them. The author is not
   responsible for any system damage caused by improper use.
3. The installation steps in this article follow the official Oracle documentation as closely
   as possible, with one exception: "Configuring UDP parameters" had to be done another way,
   because the method described in the official documentation does not work.


Part1  Do the Pre-installation Tasks

This is a guide for installing Oracle 10g RAC with Oracle Clusterware on Solaris 9 SPARC 64-bit
Enterprise Edition.
It follows the instructions in these documents:

1. Oracle Database Release Notes
   10g Release 2 (10.2) for Solaris Operating System (SPARC 64-Bit)  Part Number B15689-03
2. Oracle Clusterware and Oracle Real Application Clusters Installation Guide
   10g Release 2 (10.2) for Solaris Operating System  Part Number B14205-06
3. Oracle Database Installation Guide
   10g Release 2 (10.2) for Solaris Operating System (SPARC 64-Bit)  Part Number B15690-02

The Hardware:
Two Ultra-2 Enterprise servers. CPU: 2 x 300 MHz; memory: 1024 MB; storage: Sun A1010 disk array.
A Sun fiber channel host adapter (X1057A) is installed on each node to connect to the A1010
with a fiber cable. A Sun multi-pack is connected to one of the Ultra-2 servers.
A 10BT network card is installed on each node for the private network.
A crossover network cable connects the two 10BT network cards.
(Oracle 10g does not support a crossover network cable, according to doc 2.)
An SBUS frame buffer (X359A) is installed on both nodes.
Because the hardware barely meets the requirements, this setup is good for testing only.

The OS:
Install Solaris9 4/04(64-Bit) on both nodes, and install the latest patch cluster.
Configure TCP Wrappers and NTP on the two nodes. Solaris 9 includes these two packages.
Install openssh-4.3p2-sol9-sparc-local on the two nodes.
Secure and harden the systems by following some procedures in the doc
SANS Solaris Security Step by Step Version 2.0:
http://www.depts.ttu.edu/tosm/se ... actices/solaris.pdf

The Media for Installation:
Oracle Database 10g Release2 10.2.0.1.0 for Solaris SPARC 64-bit Enterprise Edition
If possible, download 10.2.0.2 or later, so you will have fewer patches to install afterwards.
I had to copy the two DVDs to local hard disks, since the Ultra-2 does not have a DVD drive.
You might want to configure a NFS server on node1 and an auto client on node2,
so you can run the Cluster Verification Utility on node2 as well.
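A sketch of that NFS setup (the share path /ora10.dvd2 matches the media path used elsewhere in this article; the read-only option and the share description are assumptions):

```sh
# On node1: append this line to /etc/dfs/dfstab, then run "shareall".
share -F nfs -o ro -d "Oracle 10g media" /ora10.dvd2

# On node2: mount the share (or set up an autofs map instead).
mkdir -p /ora10.dvd2
mount -F nfs rac1:/ora10.dvd2 /ora10.dvd2
```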

The steps of the installation/configuration:
1. Do the pre-installation tasks
2. Install Oracle Clusterware
3. Test/verify the clusterware
4. Install the Oracle Database software only, with RAC.
5. Configure ASM and make sure it is running on both nodes.
6. Create a RAC database with DBCA.

Disk & shared storage configuration:
----------------------------------------------------------------------------------------------------------
package        required   actual   type   mount point          partition
----------------------------------------------------------------------------------------------------------
DB software    4GB        4GB      UFS    /u01/app/oracle      c0t1d0s0 (not shared)
DB datafiles   1.2GB      12GB     ASM                         c3t0d2s7, c3t1d2s7, c3t3d2s7
OCR            100MB      128MB    raw                         c3t0d4s6, c3t2d4s6, c3t4d4s6
Voting disk    20MB       32MB     raw                         c3t0d4s7, c3t2d4s7, c3t4d4s7
swap           400MB      1GB                                  c0t0d0s1
----------------------------------------------------------------------------------------------------------
The clusterware will be installed in its own home directory on node1.
The redundancy level of the OCR and voting disk will be "normal".
Because this is for testing only, no space is allocated for flash recovery or log archiving.

Pre-Installation Tasks

A.
Network configuration
This part is done manually. Here are the files on node1 as an example:

/etc/hosts
127.0.0.1       localhost      
# node1
192.168.1.64   rac1  rac1.abc.com    loghost
10.10.10.1      rac1-priv rac1-priv.abc.com
192.168.1.69   rac1-vip  rac1-vip.abc.com
# node2
192.168.1.77   rac2 rac2.abc.com
10.10.10.2      rac2-priv rac2-priv.abc.com
192.168.1.76   rac2-vip  rac2-vip.abc.com

/etc/inet/netmasks
192.168.1.0     255.255.255.0
10.10.10.0      255.255.255.0

/etc/hostname.hme0
rac1

/etc/hostname.le0
10.10.10.1

/etc/defaultrouter
192.168.1.1

/etc/hosts.allow
ALL:    192.168.1. 127.0.0.1 10.


Run these commands to bring up the network interface le0 and test it.

# chown root:root /etc/hostname.le0
# ifconfig le0 plumb
# ifconfig le0 10.10.10.1 netmask 255.255.255.0 up
# ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.64 netmask ffffff00 broadcast 192.168.1.255
        ether 8:0:20:82:47:d4
le0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.10.10.1 netmask ffffff00 broadcast 10.10.10.255
        ether 8:0:20:82:47:d4
#

To check the network setup, run the CVU now, or run it after all tasks are done.
The command is:
/ora10.dvd2/clusterware/cluvfy/runcluvfy.sh comp nodecon -n rac1,rac2 -verbose
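Before running the CVU, it can also help to confirm that every public, private, and VIP hostname resolves on each node. A minimal sketch (check_hosts is a hypothetical helper, not part of the Oracle kit), demonstrated here against a sample hosts file rather than the live /etc/hosts:

```shell
# Sketch: report any RAC hostname missing from a hosts file.
check_hosts() {
    hosts_file=$1; shift
    missing=0
    for name in "$@"; do
        if grep -w "$name" "$hosts_file" >/dev/null 2>&1; then
            echo "$name: ok"
        else
            echo "$name: MISSING"
            missing=1
        fi
    done
    return $missing
}

# Demo against a sample file; on a real node, point it at /etc/hosts
# and list all six names from the table above.
cat > /tmp/hosts.sample <<'EOF'
192.168.1.64   rac1 rac1.abc.com
10.10.10.1     rac1-priv
192.168.1.69   rac1-vip
EOF
check_hosts /tmp/hosts.sample rac1 rac1-priv rac1-vip && echo "all names present"
```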

B.
Other pre-installation tasks except Configuring SSH
A shell script is created for doing most of the tasks: pre.install.ora10.conf.sh
The script does the pre-installation tasks of the clusterware and most of the
pre-installation tasks of the Oracle database. It is meant to be run only on a fresh
installation of a Solaris 9 system. Run this script on both nodes as root.

pre.install.ora10.conf.sh
------------------------------------------------------------
#!/bin/sh
# Pre-installation conf. on Solaris9 for installing Oracle 10g R2 with RAC.
# Written by susbin@chinaunix.net  072406

sc_name=pre.install.ora10.conf.sh

ORACLE_BASE=/u01/app/oracle;  export ORACLE_BASE
CRS_BASE=/u01/crs/oracle/product/10;  export CRS_BASE
ORACLE_HOME=${CRS_BASE}/crs;  export ORACLE_HOME
PATH=$PATH:/usr/ccs/bin:/usr/local/bin;  export PATH

echo ===============================================================
echo $sc_name started at `date`.
echo " "

echo " "
echo "=============================================="
echo "Creating Required Operating System Groups and Users"
echo " "

echo "Creating groups: dba, osdba, and oinstall."
groupadd -g 201 dba
groupadd -g 202 oinstall
groupadd -g 203 osdba
echo "Check them with the command: grep 20 /etc/group"
grep 20 /etc/group
echo " "

echo "Check if \"nobody\" exists on the system with: id nobody"
echo ""
id -a nobody
echo " "

echo "Creating the directory \"ORACLE_BASE\", which is set to $ORACLE_BASE"
mkdir -p $ORACLE_BASE
echo "Check it with the command: ls -l /u01/app "
echo ""
ls -l /u01/app
echo " "

echo "Creating a user account \"oracle\" and setting its password:"
useradd -u 1005 -g 202 -G 201,203 -d $ORACLE_BASE -m -s /bin/ksh oracle
echo "Check the line in /etc/passwd with: grep oracle /etc/passwd"
grep oracle /etc/passwd
echo "Set the password of account oracle:"
passwd -r files oracle
chown -R oracle:oinstall ${ORACLE_BASE}
chmod -R 775 $ORACLE_BASE
echo " "

echo "Check if the oracle account has required groups with: id -a oracle "
echo " "
id -a oracle
echo " "

echo " "
echo "=============================================="
echo "Configuring Kernel Parameters"
echo " "

echo "Save a copy of /etc/system and append nine lines to it."
echo "Need to reboot the system so the new parameters can take effect."
cp -p /etc/system /etc/system.orig
chmod 644 /etc/system

/bin/cat <<EOF >> /etc/system
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
EOF

echo " "
echo "Check /etc/system with the command: tail -9 /etc/system"
tail -9 /etc/system
echo " "

echo " "
echo "=============================================="
echo "Identifying Required Software Directories"
echo "ORACLE_BASE is set to $ORACLE_BASE, and the size of it should be 3GB or bigger."
echo "Check it with the command: $ df -h $ORACLE_BASE"
echo " "
mount /dev/dsk/c0t1d0s0 $ORACLE_BASE
df -h $ORACLE_BASE
echo " "

echo " "
echo "=============================================="
echo "Configuring the Oracle User's Environment"

/bin/cat <<'EOF' > ${ORACLE_BASE}/.profile   # 'EOF' quoted so $PATH etc. are written literally
if [ -t 0 ]; then
   stty intr ^C
fi
umask 022
ORACLE_BASE=/u01/app/oracle;  export ORACLE_BASE
# for crs
CRS_BASE=/u01/crs/oracle/product/10;  export CRS_BASE
ORACLE_HOME=${CRS_BASE}/crs;  export ORACLE_HOME
PATH=$PATH:/usr/local/bin:.:/bin:/usr/sbin:/usr/ucb;  export PATH
# end of crs
# for oraDB
#ORACLE_SID=rac1; export ORACLE_SID
# end of oraDB
EDITOR=vi;  export EDITOR
EXINIT='set nu showmode';  export EXINIT
EOF

chown -R oracle:oinstall ${ORACLE_BASE}
echo " "

echo " "
echo "=============================================="
echo "Configuring oracle clusterware home directory, which is set to"
echo " ${CRS_BASE}/crs "
mkdir -p ${CRS_BASE}/crs
chown -R root:oinstall /u01/crs
chmod -R 775 /u01/crs/oracle
echo " "
ls -l $CRS_BASE

echo ""
echo "=============================================="
echo "Configuring UDP parameters by creating an S70ndd script under"
echo "/etc/rc2.d that sets the two ndd values to 65536."
echo " "

/bin/cat <<EOF > /etc/rc2.d/S70ndd
#!/sbin/sh
PATH=/usr/sbin;export PATH
ndd -set /dev/udp udp_recv_hiwat 65536
ndd -set /dev/udp udp_xmit_hiwat 65536
exit 0
EOF

chown root:sys /etc/rc2.d/S70ndd
chmod 755 /etc/rc2.d/S70ndd
echo "Check the S70ndd with \"ls -l S70ndd\" and \"cat S70ndd\" "
echo " "
ls -l /etc/rc2.d/S70ndd
echo " "
cat /etc/rc2.d/S70ndd

echo " "
echo "=============================================="
echo "Verify that the /etc/hosts file is used for name resolution"
echo "with the command: grep hosts: /etc/nsswitch.conf | grep files "
echo " "
grep hosts: /etc/nsswitch.conf | grep files

echo ""
echo "=============================================="
echo "Verify that the host name has been set with: hostname"
echo " "
hostname

echo ""
echo "=============================================="
echo "Verify that the domain name has NOT been set with: domainname"
echo " "
domainname

echo ""
echo "=============================================="
echo "Verify that the hosts file contains the fully qualified host name"
echo "with the command: grep `eval hostname` /etc/hosts "
echo " "
grep `eval hostname` /etc/hosts

echo " "
echo "The pre-installation configuration tasks are done on this node."
echo "Reboot the system so the new parameters can take effect."
echo " "
echo $sc_name ended at `date`.
echo ===============================================================

------------------------------------------------------------


Check the current values of the kernel parameters after rebooting the system.
Log in as user oracle, and then run these commands on all nodes:
/usr/sbin/sysdef | grep SEM
/usr/sbin/sysdef | grep SHM
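The /etc/system side of that check can be scripted as well. A minimal sketch (check_system_params is a hypothetical helper), demonstrated against a sample file rather than the real /etc/system; the sample deliberately omits one parameter to show both branches:

```shell
# Sketch: report which of the required "set" lines are present in a system file.
check_system_params() {
    sys_file=$1
    for line in \
        "set semsys:seminfo_semmni=100" \
        "set semsys:seminfo_semmns=1024" \
        "set shmsys:shminfo_shmmax=4294967295"
    do
        if grep "^$line" "$sys_file" >/dev/null 2>&1; then
            echo "present: $line"
        else
            echo "ABSENT:  $line"
        fi
    done
}

# Demo on a sample file; on a real node, run it against /etc/system.
cat > /tmp/system.sample <<'EOF'
set semsys:seminfo_semmni=100
set shmsys:shminfo_shmmax=4294967295
EOF
check_system_params /tmp/system.sample
```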

Check the env variables of oracle account for installing the Clusterware:
$ env | grep ORACLE
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/crs/oracle/product/10/crs

$

C.
Configuring SSH on All Cluster Nodes
The ssh shipped with Solaris 9 is Sun_SSH_1.1, which has a bug. Here is a discussion about it:
http://forum.sun.com/jive/thread.jspa?threadID=94357
The instructions for configuring SSH in doc 2 are based on OpenSSH 3.x. Doc 2 also
points out that Oracle NetCA and DBCA require scp and ssh to be located in
/usr/local/bin. For these reasons, I chose to install openssh-4.3p2-sol9-sparc-local
on the two nodes. You also need to set "StrictModes" to "no" in
/usr/local/etc/sshd_config, or ssh will prompt for a password even after all ssh
configuration tasks have been done.
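The StrictModes change can be made with a one-line sed edit. A sketch, demonstrated on a sample copy rather than the live /usr/local/etc/sshd_config (restart sshd after editing the real file):

```shell
# Sketch: flip any existing StrictModes line (commented or not) to "StrictModes no".
CONF=/tmp/sshd_config.sample

# Sample stand-in for /usr/local/etc/sshd_config.
cat > "$CONF" <<'EOF'
Port 22
#StrictModes yes
EOF

sed 's/^#*StrictModes.*/StrictModes no/' "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"
grep StrictModes "$CONF"
```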

Two scripts are created for configuring ssh. Here are the instructions on how to run them:
1. Put ssh.conf1.ksh under the home directory of user oracle on all nodes.
2. Run ssh.conf1.ksh on node1.
3. Edit ssh.conf1.ksh on node2 as described in its comments, and then run it.
4. Run ssh.conf2.ksh on node1.
5. Run this command on all nodes: chmod 600 .ssh/authorized_keys
6. Test the configuration on all nodes. The command is: ssh node1 [node2] date
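What the two scripts accomplish, reduced to its core: every node's RSA and DSA public keys end up concatenated into one authorized_keys file, mode 600, on every node. A sketch with dummy key material (the /tmp path and file names are illustrative only):

```shell
# Sketch: merge per-node public keys into a single authorized_keys file.
KEYDIR=/tmp/ssh.demo
mkdir -p "$KEYDIR"

# Dummy stand-ins for the id_rsa.pub/id_dsa.pub files ssh-keygen creates on each node.
echo "ssh-rsa AAAA...node1 oracle@rac1" > "$KEYDIR/id_rsa.pub.rac1"
echo "ssh-dss AAAA...node1 oracle@rac1" > "$KEYDIR/id_dsa.pub.rac1"
echo "ssh-rsa AAAA...node2 oracle@rac2" > "$KEYDIR/id_rsa.pub.rac2"
echo "ssh-dss AAAA...node2 oracle@rac2" > "$KEYDIR/id_dsa.pub.rac2"

# Concatenate all four keys and lock the file down, as step 5 above requires.
cat "$KEYDIR"/id_*.pub.* > "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"
wc -l < "$KEYDIR/authorized_keys"    # two keys per node
```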

ssh.conf1.ksh
------------------------------------------------------------
#!/bin/ksh
# Run this script as user oracle on node1, and then on node2.
# Make sure the package ssh is installed under /usr/local.
# Written by susbin@chinaunix.net       071906

# Put the hostname of the two nodes below
node1=rac1
node2=rac2

sc_name=ssh.conf1.ksh
home_dir=/u01/app/oracle
key_dir=${home_dir}/.ssh
ssh_base=/usr/local/bin

echo ================================================================
echo $sc_name started at `date`.
echo " "
echo "You need to run this script on $node1 and $node2."
echo "Make changes on this script before you run it on $node2."
echo " "

/bin/rm -rf $key_dir
/bin/mkdir $key_dir
/bin/chmod 700 $key_dir
${ssh_base}/ssh-keygen -t rsa
echo " "
${ssh_base}/ssh-keygen -t dsa
/bin/touch ${key_dir}/authorized_keys
echo " "
echo "Now save the keys into the file authorized_keys."
echo " "
## comment out the lines when you run it on node2.
${ssh_base}/ssh $node1 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
${ssh_base}/ssh $node1 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
## end of the lines

## uncomment the lines below when you run it on node2.
#${ssh_base}/ssh $node2 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/ssh $node2 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/ssh $node1 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/ssh $node1 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/scp ${key_dir}/authorized_keys ${node1}:${key_dir}
## end of the lines
echo " "
echo "It is done."
echo " "
echo $sc_name ended at `date`.
echo ==============================================================


ssh.conf2.ksh
------------------------------------------------------------
#!/bin/ksh
# Run this script after you have run ssh.conf1.ksh on both nodes.
# Run this script as user oracle on node1 only.
# Written by susbin@chinaunix.net           071906

# Put the hostname of the two nodes below
node1=rac1
node2=rac2

sc_name=ssh.conf2.ksh
home_dir=/u01/app/oracle
key_dir=${home_dir}/.ssh
ssh_base=/usr/local/bin

echo ===========================================================
echo $sc_name started at `date`.
echo " "
echo "You only need to run this script on $node1."
echo " "
${ssh_base}/ssh $node2 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
${ssh_base}/ssh $node2 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
${ssh_base}/scp ${key_dir}/authorized_keys ${node2}:${key_dir}
echo " "
echo "You need to run the command \"/bin/chmod 600 ${key_dir}/authorized_keys\" "
echo "on all nodes and then test the ssh configuration with the command"
echo " \"ssh node1 [node2] date\" "
echo " "
echo $sc_name ended at `date`.
echo ============================================================
echo " "
exec ${ssh_base}/ssh-agent $SHELL
${ssh_base}/ssh-add

## The command "exec ${ssh_base}/ssh-agent $SHELL" will spawn a subshell,
## and the rest of your login session will run within this subshell.
## end of ssh.conf2.ksh

---------------------------------------------------------------

D.
Configuring clusterware and database storage (ASM installation)

After installing the host adapter (X1057A) on both nodes, run the command "format" on them
to make sure the shared disks have the same controller number on both nodes.

Format the disks on node1. For the disks used by ASM, create a single whole-disk slice
starting at cylinder 1, or ASM will NOT recognize these disks as ASM candidates.

# format
...
selecting c3t0d2
[disk formatted]
format>
...
Free Hog partition[6]? 7
Enter size of partition '0' [0b, 0c, 0.00mb, 0.00gb]: 1c
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]: 0

partition> p
Current partition table (sun4g):
Total disk cylinders available: 3880 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 -    0        1.05MB    (1/0/0)       2160
  1 unassigned    wu       0               0         (0/0/0)          0
  2     backup    wu       0 - 3879        4.00GB    (3880/0/0) 8380800
  3 unassigned    wu       0               0         (0/0/0)          0
  4 unassigned    wu       0               0         (0/0/0)          0
  5 unassigned    wu       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wu       1 - 3879        4.00GB    (3879/0/0) 8378640

partition>

Okay to make this the current partition table[yes]? yes
...

#

Copy the partition table from c3t0d2 to other disks.

# for disks in c3t1d2s0 c3t3d2s0
> do
> prtvtoc /dev/rdsk/c3t0d2s0 | fmthard -s - /dev/rdsk/$disks
> done
fmthard:  New volume table of contents now in place.
fmthard:  New volume table of contents now in place.
#

Format the disks for the OCR and voting disks. It is a good idea to put them on slices 3-7;
slice 0 is not a good candidate.

# format
...
selecting c3t0d4
[disk formatted]
...
partition> p
Current partition table (sun2g):
Total disk cylinders available: 2733 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 2515        1.82GB    (2516/0/0) 3824320
  1 unassigned    wu       0               0         (0/0/0)          0
  2     backup    wu       0 - 2732        1.98GB    (2733/0/0) 4154160
  3 unassigned    wu       0               0         (0/0/0)          0
  4 unassigned    wu       0               0         (0/0/0)          0
  5 unassigned    wu       0               0         (0/0/0)          0
  6 unassigned    wm    2516 - 2688      128.40MB    (173/0/0)   262960
  7 unassigned    wu    2689 - 2732       32.66MB    (44/0/0)     66880

partition> q
...

#
# prtvtoc /dev/rdsk/c3t0d4s0 | fmthard -s - /dev/rdsk/c3t2d4s2
fmthard:  New volume table of contents now in place.
# prtvtoc /dev/rdsk/c3t0d4s0 | fmthard -s - /dev/rdsk/c3t4d4s2
fmthard:  New volume table of contents now in place.
#

On all nodes, set the owner, group and permissions on the raw devices, which are the
slices for ASM, OCR and voting disks.

# cd /
# for rawdevs in c3t0d2s7 c3t1d2s7 c3t3d2s7 \
> c3t0d4s6 c3t2d4s6 c3t4d4s6 c3t0d4s7 c3t2d4s7 c3t4d4s7
> do
> echo $rawdevs; chown oracle:dba /dev/rdsk/$rawdevs; chmod 660 /dev/rdsk/$rawdevs
> ls -l `ls -l /dev/rdsk/$rawdevs | awk -F" " '{ print $11 }'`
> done
c3t0d2s7
crw-rw----   1 oracle   dba      118, 63 Jul 25 10:54 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@1,2:h,raw
c3t1d2s7
crw-rw----   1 oracle   dba      118,127 Jul 21 12:55 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@3,0:h,raw
c3t3d2s7
crw-rw----   1 oracle   dba      118,143 Jul 25 10:55 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@3,2:h,raw
c3t0d4s6
crw-rw----   1 oracle   dba      118, 38 Aug  9 12:07 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@0,4:g,raw
c3t2d4s6
crw-rw----   1 oracle   dba      118,118 Aug 16 10:21 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@2,4:g,raw
c3t4d4s6
crw-rw----   1 oracle   dba      118,198 Jul 25 10:55 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@4,4:g,raw
c3t0d4s7
crw-rw----   1 oracle   dba      118, 39 Aug 16 10:19 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@0,4:h,raw
c3t2d4s7
crw-rw----   1 oracle   dba      118,119 Aug 16 10:19 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@2,4:h,raw
c3t4d4s7
crw-rw----   1 oracle   dba      118,199 Aug 16 10:19 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@4,4:h,raw
#

On node1, run CVU to check if all shared disks are available across all nodes.

$ cd /ora10.dvd2/clusterware/cluvfy
$ ./runcluvfy.sh comp ssa -n rac1,rac2 -s \
> /dev/rdsk/c3t0d2s7,/dev/rdsk/c3t1d2s7,/dev/rdsk/c3t3d2s7,\
> /dev/rdsk/c3t0d4s6,/dev/rdsk/c3t2d4s6,/dev/rdsk/c3t4d4s6,\
> /dev/rdsk/c3t0d4s7,/dev/rdsk/c3t2d4s7,/dev/rdsk/c3t4d4s7

Verifying shared storage accessibility

Checking shared storage accessibility...

"/dev/rdsk/c3t0d2s7" is shared.
"/dev/rdsk/c3t1d2s7" is shared.
"/dev/rdsk/c3t3d2s7" is shared.
"/dev/rdsk/c3t0d4s6" is shared.
"/dev/rdsk/c3t2d4s6" is shared.
"/dev/rdsk/c3t4d4s6" is shared.
"/dev/rdsk/c3t0d4s7" is shared.
"/dev/rdsk/c3t2d4s7" is shared.
"/dev/rdsk/c3t4d4s7" is shared.

Shared storage check was successful on nodes "rac2,rac1".

Verification of shared storage accessibility was successful.

$

Pre-installation tasks are done. The next step is to install Oracle clusterware.

標(biāo)簽:邵陽 人事邀約 十堰 甘孜 撫順 泰安 漳州 企業(yè)管理

巨人網(wǎng)絡(luò)通訊聲明:本文標(biāo)題《Solaris9系統(tǒng)上安裝Oracle10g RAC》,本文關(guān)鍵詞  Solaris9,系統(tǒng),上,安裝,Oracle10g,;如發(fā)現(xiàn)本文內(nèi)容存在版權(quán)問題,煩請(qǐng)?zhí)峁┫嚓P(guān)信息告之我們,我們將及時(shí)溝通與處理。本站內(nèi)容系統(tǒng)采集于網(wǎng)絡(luò),涉及言論、版權(quán)與本站無關(guān)。
  • 相關(guān)文章
  • 下面列出與本文章《Solaris9系統(tǒng)上安裝Oracle10g RAC》相關(guān)的同類信息!
  • 本頁收集關(guān)于Solaris9系統(tǒng)上安裝Oracle10g RAC的相關(guān)信息資訊供網(wǎng)民參考!
  • 推薦文章
    婷婷综合国产,91蜜桃婷婷狠狠久久综合9色 ,九九九九九精品,国产综合av
    麻豆91在线观看| 久久久久久久久久看片| 国产成人综合视频| 久久国产精品99精品国产| 日韩精品乱码av一区二区| 日韩影院免费视频| 麻豆国产精品视频| 久久99热这里只有精品| 狠狠色综合播放一区二区| 国产精品亚洲第一| av一区二区不卡| 91国产免费看| 日韩三级中文字幕| 久久久国际精品| 亚洲人成亚洲人成在线观看图片| 一区二区三区日本| 美日韩黄色大片| 成人午夜伦理影院| 欧美日韩中字一区| 精品盗摄一区二区三区| 亚洲欧洲日本在线| 亚洲午夜电影网| 久久se精品一区精品二区| 国内不卡的二区三区中文字幕 | 欧美在线观看18| 91麻豆精品91久久久久久清纯| 欧美大片一区二区| 自拍偷拍国产精品| 午夜亚洲福利老司机| 国产乱人伦精品一区二区在线观看| 成人一级片网址| 欧美日韩不卡一区| 国产精品水嫩水嫩| 视频一区二区三区在线| 国产成人免费网站| 91精品久久久久久久99蜜桃| 久久综合九色综合97婷婷| 亚洲另类中文字| 国产精品一二二区| 7777精品伊人久久久大香线蕉经典版下载 | 美女性感视频久久| 日本高清成人免费播放| 久久久一区二区| 午夜欧美视频在线观看| 97久久精品人人做人人爽| 日韩精品一区二区在线| 一二三区精品福利视频| 国产99久久精品| 日韩欧美在线网站| 亚洲高清三级视频| 色欧美88888久久久久久影院| 精品久久一二三区| 日韩国产欧美三级| 在线精品亚洲一区二区不卡| 中文字幕欧美国产| 国产.精品.日韩.另类.中文.在线.播放| 日本黄色一区二区| 亚洲视频中文字幕| 本田岬高潮一区二区三区| 国产农村妇女精品| 国产九九视频一区二区三区| 日韩欧美一二区| 麻豆国产一区二区| 欧美成人r级一区二区三区| 五月开心婷婷久久| 制服丝袜亚洲色图| 美女一区二区在线观看| 日韩午夜在线影院| 激情都市一区二区| 久久久亚洲高清| 国产精品资源在线观看| 久久久久国产精品麻豆| 国产精品一区二区在线观看网站| 精品久久久久久综合日本欧美| 免费成人在线网站| 欧美不卡一二三| 国产美女一区二区三区| 久久在线观看免费| 成人激情小说网站| 亚洲欧美一区二区三区久本道91| 国产福利一区二区| 亚洲私人影院在线观看| 欧美视频三区在线播放| 亚洲地区一二三色| 欧美本精品男人aⅴ天堂| 国产成人av自拍| 亚洲精品成a人| 正在播放亚洲一区| 国产91综合网| 亚洲精品写真福利| 91精品在线观看入口| 国产乱国产乱300精品| 国产精品情趣视频| 欧美视频自拍偷拍| 国产麻豆精品一区二区| 一色屋精品亚洲香蕉网站| 欧美日韩一区高清| 国产一区二区三区久久久| 国产精品久久久久一区二区三区| 91亚洲资源网| 麻豆91在线观看| 亚洲色图.com| 欧美videos中文字幕| 99久久国产综合精品女不卡| 石原莉奈在线亚洲三区| 欧美精品一区二区久久久 | 亚洲国产成人av| 精品国产欧美一区二区| 色综合天天狠狠| 国产麻豆精品在线| 日韩二区三区四区| 国产婷婷一区二区| 91精品国产福利| 波多野结衣在线一区| 日本中文字幕一区二区视频| 国产精品久久国产精麻豆99网站| 欧美精品v日韩精品v韩国精品v| 国产jizzjizz一区二区| 日韩av电影一区| 亚洲综合一区二区| 国产精品日韩精品欧美在线| 日韩小视频在线观看专区| 色欧美片视频在线观看| 成人久久视频在线观看| 精品亚洲aⅴ乱码一区二区三区| 亚洲专区一二三| 亚洲天堂久久久久久久| 久久久久久一二三区| 日韩三级视频在线看| 欧美麻豆精品久久久久久| 91久久线看在观草草青青| 大白屁股一区二区视频| 国精产品一区一区三区mba桃花| 天天操天天干天天综合网| 国产精品国产馆在线真实露脸 | 日韩欧美在线影院| 欧美日韩精品欧美日韩精品一综合| 成人av网在线| 99久久久久久| www.性欧美| 91天堂素人约啪| 97久久超碰国产精品| 97精品电影院| 91日韩在线专区| 91福利在线观看| 欧美日免费三级在线| 1024亚洲合集| 欧美伊人久久大香线蕉综合69| 国产不卡视频一区二区三区| 久久成人免费电影| 韩国v欧美v日本v亚洲v| 狠狠色丁香久久婷婷综| 精品亚洲国内自在自线福利| 激情欧美一区二区| 国产精品一区专区| 国产精品18久久久久久久久久久久| 久久99精品视频| 韩国精品主播一区二区在线观看 | 99久久精品99国产精品| 9i看片成人免费高清| 91在线观看视频| 欧美伊人久久久久久久久影院| 欧美日韩亚洲不卡| 日韩美一区二区三区| 国产三级一区二区三区| 日韩美女啊v在线免费观看| 亚洲乱码日产精品bd| 日韩国产精品大片| 韩国在线一区二区| 
色综合久久天天| 欧美肥妇free| 欧美国产成人精品| 中文字幕制服丝袜成人av| 亚洲在线成人精品| 精品亚洲porn| 欧洲亚洲精品在线| 精品国产一区二区三区av性色| 欧美国产综合色视频| 亚洲一线二线三线久久久| 狠狠色综合日日| 色激情天天射综合网| 精品日韩欧美一区二区| 中文字幕中文字幕在线一区| 日韩国产欧美在线视频| 国产精品系列在线播放| 欧美午夜精品久久久久久孕妇| 日韩你懂的在线播放| 亚洲图片你懂的| 久久国产精品第一页| 91麻豆产精品久久久久久| 精品电影一区二区| 亚洲国产va精品久久久不卡综合| 国产精品一区二区黑丝| 欧美日本国产视频| 日韩伦理免费电影| 国产乱码精品1区2区3区| 51精品视频一区二区三区| 中文字幕日本乱码精品影院| 精品在线观看免费| 欧美一级片在线看|