

Installing Oracle 10g RAC on Solaris 9


Notes:
1. The Oracle official installation documents (doc 2 and doc 3 below) contain a number of errors.
   The latest Release Notes (August 2006) correct some of them, but quite a few remain uncorrected.
2. Read the three scripts in this article carefully before using them. The author is not
   responsible for any system damage caused by improper use.
3. The installation steps follow the Oracle official installation documents as closely as possible,
   with one exception: "Configuring UDP parameters" had to be done a different way, because the
   method described in the Oracle documentation does not work.


Part1  Do the Pre-installation Tasks

This is a guide for installing Oracle 10g RAC with Oracle clusterware on Solaris 9 SPARC 64-bit
Enterprise Edition.
It follows the instructions in these documents:

1. Oracle Database Release Notes
   10g Release 2 (10.2) for Solaris Operating System (SPARC 64-Bit)  Part Number B15689-03
2. Oracle Clusterware and Oracle Real Application Clusters Installation Guide
   10g Release 2 (10.2) for Solaris Operating System  Part Number B14205-06
3. Oracle Database Installation Guide
   10g Release 2 (10.2) for Solaris Operating System (SPARC 64-Bit)  Part Number B15690-02

The Hardware:
Two Ultra-2 Enterprise servers, each with two 300MHz CPUs and 1024MB of memory; storage: a Sun A1010 disk array.
A Sun fiber channel host adapter(X1057A) is installed on each node to connect to the A1010
with a fiber cable. A Sun multi-pack is connected to one of the Ultra-2 servers.
A 10BT network card is installed on each node for the private network.
A crossover network cable connects the two 10BT network cards.
(Oracle 10g does not support a crossover network cable, according to doc 2.)
An SBus frame buffer (X359A) is installed in both nodes.
Because the hardware barely meets the requirements, this setup is only good for testing.

The OS:
Install Solaris9 4/04(64-Bit) on both nodes, and install the latest patch cluster.
Configure TCP Wrappers and NTP on the two nodes; Solaris 9 includes both packages (a sample NTP configuration is sketched below).
Install openssh-4.3p2-sol9-sparc-local on the two nodes.
Secure and harden the systems by following some procedures in the doc
SANS Solaris Security Step by Step Version 2.0:
http://www.depts.ttu.edu/tosm/se ... actices/solaris.pdf
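
The NTP setup itself is not shown here. A minimal sketch of what /etc/inet/ntp.conf might look
like on the two nodes follows; the time source ntp.abc.com is only a placeholder, and node2
simply follows node1, so adjust both to your environment.

/etc/inet/ntp.conf on node1 (assumed example)
server ntp.abc.com              # external time source (placeholder)
driftfile /var/ntp/ntp.drift

/etc/inet/ntp.conf on node2 (assumed example)
server rac1                     # follow node1 over the public network
driftfile /var/ntp/ntp.drift

Start the daemon on both nodes with: /etc/init.d/xntpd start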

The Media for Installation:
Oracle Database 10g Release2 10.2.0.1.0 for Solaris SPARC 64-bit Enterprise Edition
If possible, download 10.2.0.2 or later, so you will have fewer patches to install afterwards.
I had to copy the two DVDs to local hard disks, since the Ultra-2 does not have a DVD drive.
You might want to configure an NFS server on node1 and an auto client on node2,
so you can run the Cluster Verification Utility on node2 as well (a sketch follows below).
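
A minimal sketch of that setup, assuming the DVDs were copied to /ora10.dvd2 on node1 (the path
used by the CVU commands later in this document); a plain NFS mount on node2 is shown here
instead of the automounter, for brevity.

On node1, share the staging directory read-only:
# echo "share -F nfs -o ro /ora10.dvd2" >> /etc/dfs/dfstab
# /etc/init.d/nfs.server start
# share

On node2, mount it under the same path:
# mkdir /ora10.dvd2
# mount -F nfs rac1:/ora10.dvd2 /ora10.dvd2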

The steps of the installation/configuration:
1. Do the pre-installation tasks
2. Install Oracle clusterware
3. Test/verify the clusterware
4. Install the Oracle Database software (software only, no database yet) with RAC.
5. Configure ASM and make sure it is running on both nodes.
6. Create a RAC database with DBCA.

Disk & shared storage configuration:
----------------------------------------------------------------------------------------------------------
component      required   real    type   mount point         partition(s)
----------------------------------------------------------------------------------------------------------
DB software    4GB        4GB     UFS    /u01/app/oracle     c0t1d0s0 (not shared)
DB datafiles   1.2GB      12GB    ASM    -                   c3t0d2s7, c3t1d2s7, c3t3d2s7
OCR            100MB      128MB   raw    -                   c3t0d4s6, c3t2d4s6, c3t4d4s6
Voting disk    20MB       32MB    raw    -                   c3t0d4s7, c3t2d4s7, c3t4d4s7
swap           400MB      1GB     -      -                   c0t0d0s1
----------------------------------------------------------------------------------------------------------
The clusterware will be installed in its own home directory on node1.
The redundancy level of the OCR and voting disk will be "normal".
Because this is for testing only, no space is allocated for the flash recovery area or log archiving.

Pre-Installation Tasks

A.
Network configuration
This part is done manually. Here are the files on node1 as an example (the node2 equivalents are sketched after them):

/etc/hosts
127.0.0.1       localhost      
# node1
192.168.1.64   rac1  rac1.abc.com    loghost
10.10.10.1      rac1-priv rac1-priv.abc.com
192.168.1.69   rac1-vip  rac1-vip.abc.com
# node2
192.168.1.77   rac2 rac2.abc.com
10.10.10.2      rac2-priv rac2-priv.abc.com
192.168.1.76   rac2-vip  rac2-vip.abc.com

/etc/inet/netmasks
192.168.0.0     255.255.255.0
10.10.10.0      255.255.255.0

/etc/hostname.hme0
rac1

/etc/hostname.le0
10.10.10.1

/etc/defaultrouter
192.168.1.1

/etc/hosts.allow
ALL:    192.168.1. 127.0.0.1 10.
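
The corresponding files on node2 are not listed here; assuming a mirror of the node1
configuration, they would look roughly like this (using node2's addresses from /etc/hosts above):

/etc/hostname.hme0
rac2

/etc/hostname.le0
10.10.10.2

/etc/inet/netmasks, /etc/defaultrouter and /etc/hosts.allow are the same as on node1, and
/etc/hosts lists both nodes exactly as shown above.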


Run these commands to bring up the network interface le0 and test it.

# chown root:root /etc/hostname.le0
# ifconfig le0 plumb
# ifconfig le0 10.10.10.1 netmask 255.255.255.0 up
# ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.64 netmask ffffff00 broadcast 192.168.1.255
        ether 8:0:20:82:47:d4
le0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.10.10.1 netmask ffffff00 broadcast 10.10.10.255
        ether 8:0:20:82:47:d4
#

To check the network setup, run the CVU now or run it after all the tasks are done.
The command is:
/ora10.dvd2/clusterware/cluvfy/runcluvfy.sh comp nodecon -n rac1,rac2 -verbose

B.
Other pre-installation tasks except Configuring SSH
A shell script was created to do most of the tasks: pre.install.ora10.conf.sh
The script does the pre-installation tasks for the clusterware and most of the
pre-installation tasks for the Oracle database. It should only be run on a fresh
installation of Solaris 9. Run this script on both nodes as root.

pre.install.ora10.conf.sh
------------------------------------------------------------
#!/bin/sh
# Pre-installation conf. on Solaris9 for installing Oracle 10g R2 with RAC.
# Written by susbin@chinaunix.net  072406

sc_name=pre.install.ora10.conf.sh

ORACLE_BASE=/u01/app/oracle;  export ORACLE_BASE
CRS_BASE=/u01/crs/oracle/product/10;  export CRS_BASE
ORACLE_HOME=${CRS_BASE}/crs;  export ORACLE_HOME
PATH=$PATH:/usr/ccs/bin:/usr/local/bin;  export PATH

echo ===============================================================
echo $sc_name started at `date`.
echo " "

echo " "
echo "=============================================="
echo "Creating Required Operating System Groups and Users"
echo " "

echo "Creating groups: dba, osdba, and oinstall."
groupadd -g 201 dba
groupadd -g 202 oinstall
groupadd -g 203 osdba
echo "Check them with the command: grep 20 /etc/group"
grep 20 /etc/group
echo " "

echo "Check if \&;nobody\&; exists on the system with: id nobody"
echo ""
id -a nobody
echo " "

echo "Creating the directory \&;ORACLE_BASE\&;, which is set to $ORACLE_BASE"
mkdir -p $ORACLE_BASE
echo "Check it with the command: ls -l /u01/app "
echo ""
ls -l /u01/app
echo " "

echo "Creating a user account \&;oracle\&; and set the password of it:"
useradd -u 1005 -g 202 -G 201,203 -d $ORACLE_BASE -m -s /bin/ksh oracle
echo "Check the line in /etc/passwd with: grep oracle /etc/passwd"
grep oracle /etc/passwd
echo "Set the password of account oracle:"
passwd -r files oracle
chown -R oracle:oinstall ${ORACLE_BASE}
chmod -R 775 $ORACLE_BASE
echo " "

echo "Check if the oracle account has required groups with: id -a oracle "
echo " "
id -a oracle
echo " "

echo " "
echo "=============================================="
echo "Configuring Kernel Parameters"
echo " "

echo "Save a copy of /etc/system and append eleven lines to it."
echo "Need to reboot the system so the new parameters can take effect."
cp -p /etc/system /etc/system.orig
chmod 644 /etc/system

/bin/cat <<EOF >> /etc/system
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
EOF

echo " "
echo "Check /etc/system with the command: tail -11 /etc/system"
tail -11 /etc/system
echo " "

echo " "
echo "=============================================="
echo "Identifying Required Software Directories"
echo "ORACLE_BASE is set to $ORACLE_BASE, and the size of it should be 3GB or bigger."
echo "Check it with the command: $ df -h $ORACLE_BASE"
echo " "
mount /dev/dsk/c0t1d0s0 $ORACLE_BASE
df -h $ORACLE_BASE
echo " "

echo " "
echo "=============================================="
echo "Configuring the Oracle User's Environment"

# The delimiter is quoted so that $PATH and the other variables are written to .profile unexpanded.
/bin/cat <<'EOF' > ${ORACLE_BASE}/.profile
if [ -t 0 ]; then
   stty intr ^C
fi
umask 022
ORACLE_BASE=/u01/app/oracle;  export ORACLE_BASE
# for crs
CRS_BASE=/u01/crs/oracle/product/10;  export CRS_BASE
ORACLE_HOME=${CRS_BASE}/crs;  export ORACLE_HOME
PATH=$PATH:/usr/local/bin:.:/bin:/usr/sbin:/usr/ucb;  export PATH
# end of crs
# for oraDB
#ORACLE_SID=rac1; export ORACLE_SID
# end of oraDB
EDITOR=vi;  export EDITOR
EXINIT='set nu showmode';  export EXINIT
EOF

chown -R oracle:oinstall ${ORACLE_BASE}
echo " "

echo " "
echo "=============================================="
echo "Configuring oracle clusterware home directory, which is set to"
echo " ${CRS_BASE}/crs "
mkdir -p ${CRS_BASE}/crs                                             
chown -R root:oinstall /u01/crs
chmod -R 775 /u01/crs/oracle
echo " "
ls -l $CRS_BASE

echo ""
echo "=============================================="
echo "Configuring UDP parameters by creating a S70ndd and put it under"
echo "/etc/rc2.d to set the two values of ndd to 65536."
echo " "

/bin/cat <<EOF > /etc/rc2.d/S70ndd
#!/sbin/sh
PATH=/usr/sbin;export PATH
ndd -set /dev/udp udp_recv_hiwat 65536
ndd -set /dev/udp udp_xmit_hiwat 65536
exit 0
EOF

chown root:sys /etc/rc2.d/S70ndd
chmod 755 /etc/rc2.d/S70ndd
echo "Check the S70ndd with \&;ls -l S70ndd\&; and \&;cat S70ndd\&; "
echo " "
ls -l /etc/rc2.d/S70ndd
echo " "
cat /etc/rc2.d/S70ndd

echo " "
echo "=============================================="
echo "Verify that the /etc/hosts file is used for name resolution"
echo "with the command: grep hosts: /etc/nsswitch.conf | grep files "
echo " "
grep hosts: /etc/nsswitch.conf | grep files

echo ""
echo "=============================================="
echo "Verify that the host name has been set with: hostname"
echo " "
hostname

echo ""
echo "=============================================="
echo "Verify that the domain name has NOT been set with: domainname"
echo " "
domainname

echo ""
echo "=============================================="
echo "Verify that the hosts file contains the fully qualified host name"
echo "with the command: grep `eval hostname` /etc/hosts "
echo " "
grep `eval hostname` /etc/hosts

echo " "
echo "The pre-installation configuring tasks is done on this node."
echo "Reboot the system so the new parameters can take effect."
echo " "
echo $sc_name ended at `date`.
echo ===============================================================

------------------------------------------------------------


Check the current values of the kernel parameters after rebooting the system.
Log in as user oracle, and then run these commands on all nodes:
/usr/sbin/sysdef | grep SEM
/usr/sbin/sysdef | grep SHM

Check the env variables of oracle account for installing the Clusterware:
$ env | grep ORACLE
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/crs/oracle/product/10/crs

$

C.
Configuring SSH on All Cluster Nodes
The ssh shipped with Solaris 9 is Sun_SSH_1.1, which has a bug. Here is a discussion about it:
http://forum.sun.com/jive/thread.jspa?threadID=94357
The instructions for configuring SSH in doc 2 are based on OpenSSH 3.x. Doc 2 also
points out that Oracle NetCA and DBCA require scp and ssh to be located in the path
/usr/local/bin. For these reasons, I chose to install openssh-4.3p2-sol9-sparc-local
on the two nodes. You also need to set the value of "StrictModes" to "no" in
/usr/local/etc/sshd_config, or ssh will prompt for a password even after all the ssh
configuration tasks have been done.
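
A quick way to apply and verify that change (a sketch; it assumes the OpenSSH sshd from the
package above is already running and reads /usr/local/etc/sshd_config):

# grep StrictModes /usr/local/etc/sshd_config
StrictModes no
# pkill -HUP sshd      # make the running sshd re-read its configuration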

Two scripts were created for configuring ssh. Here are the instructions for running them:
1. Put ssh.conf1.ksh under the home directory of user oracle on all nodes.
2. Run ssh.conf1.ksh on node1.
3. Edit ssh.conf1.ksh on node2 (comment/uncomment the marked lines) and then run it.
4. Run ssh.conf2.ksh on node1.
5. Run this command on all nodes: chmod 600 .ssh/authorized_keys
6. Test the configuration on all nodes with: ssh node1 [node2] date (see the example below)
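
For step 6, run the test as user oracle on each node; each command should print the remote
node's date without prompting for a password or passphrase (if it prompts, re-check StrictModes
and the permissions on authorized_keys):

$ /usr/local/bin/ssh rac1 date
$ /usr/local/bin/ssh rac2 date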

ssh.conf1.ksh
------------------------------------------------------------
#!/bin/ksh
# Run this script as user oracle on node1, and then on node2.
# Make sure the package ssh is installed under /usr/local.
# Written by susbin@chinaunix.net       071906

# Put the hostname of the two nodes below
node1=rac1
node2=rac2

sc_name=ssh.conf1.ksh
home_dir=/u01/app/oracle
key_dir=${home_dir}/.ssh
ssh_base=/usr/local/bin

echo ================================================================
echo $sc_name started at `date`.
echo " "
echo "You need to run this script on $node1 and $node2."
echo "Make changes on this script before you run it on $node2."
echo " "

/bin/rm -r $key_dir
/bin/mkdir $key_dir
/bin/chmod 700 $key_dir
${ssh_base}/ssh-keygen -t rsa
echo " "
${ssh_base}/ssh-keygen -t dsa
/bin/touch ${key_dir}/authorized_keys
echo " "
echo "Now save the keys into the file authorized_keys."
echo " "
## comment out the lines when you run it on node2.
${ssh_base}/ssh $node1 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
${ssh_base}/ssh $node1 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
## end of the lines

## uncomment the lines below when you run it on node2.
#${ssh_base}/ssh $node2 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/ssh $node2 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/ssh $node1 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/ssh $node1 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
#${ssh_base}/scp ${key_dir}/authorized_keys ${node1}:${key_dir}
## end of the lines
echo " "
echo "It is done."
echo " "
echo $sc_name ended at `date`.
echo ==============================================================


ssh.conf2.ksh
------------------------------------------------------------
#!/bin/ksh
# Run this script after you have run ssh.conf1.ksh on both nodes.
# Run this script as user oracle on node1 only.
# Written by susbin@chinaunix.net           071906

# Put the hostname of the two nodes below
node1=rac1
node2=rac2

sc_name=ssh.conf2.ksh
home_dir=/u01/app/oracle
key_dir=${home_dir}/.ssh
ssh_base=/usr/local/bin

echo ===========================================================
echo $sc_name started at `date`.
echo " "
echo "You only need to run this script on $node1."
echo " "
${ssh_base}/ssh $node2 cat ${key_dir}/id_rsa.pub >> ${key_dir}/authorized_keys
${ssh_base}/ssh $node2 cat ${key_dir}/id_dsa.pub >> ${key_dir}/authorized_keys
${ssh_base}/scp ${key_dir}/authorized_keys ${node2}:${key_dir}
echo " "
echo "You need to run command \&;/bin/chmod 600 ${key_dir}/authorized_keys\&; "
echo "on all nodes and then test the ssh configuration with command "
echo " \&;ssh node1 [node2] date \&; "
echo " "
echo $sc_name ended at `date`.
echo ============================================================
echo " "
exec ${ssh_base}/ssh-agent $SHELL
${ssh_base}/ssh-add

## The command "exec ${ssh_base}/ssh-agent $SHELL" will spawn a sub-shell.
## and the rest of your login session will runs within this subshell.
## end of ssh.conf2.ksh

---------------------------------------------------------------

D.
Configuring clusterware and database storage (ASM installation)

After installing the host adapter (X1057A) on both nodes, run the "format" command on each of them
to make sure the shared disks have the same controller number on both nodes (see the quick check below).
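
A quick way to compare the controller numbers on the two nodes (a sketch; the grep pattern assumes
the shared disks show up on controller c3, as they do in the rest of this document):

# format < /dev/null | grep c3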

Format the disks on node1. For the disks used by ASM, create a single whole-disk slice
starting at cylinder 1, or ASM will NOT recognize these disks as ASM candidates.

# format
...
selecting c3t0d2
[disk formatted]
format>
...
Free Hog partition[6]? 7
Enter size of partition '0' [0b, 0c, 0.00mb, 0.00gb]: 1c
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]: 0
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]: 0

partition> p
Current partition table (sun4g):
Total disk cylinders available: 3880 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 -    0        1.05MB    (1/0/0)       2160
  1 unassigned    wu       0               0         (0/0/0)          0
  2     backup    wu       0 - 3879        4.00GB    (3880/0/0) 8380800
  3 unassigned    wu       0               0         (0/0/0)          0
  4 unassigned    wu       0               0         (0/0/0)          0
  5 unassigned    wu       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wu       1 - 3879        4.00GB    (3879/0/0) 8378640

partition>

Okay to make this the current partition table[yes]? yes
...

#

Copy the partition table from c3t0d2 to other disks.

# for disks in c3t1d2s0 c3t3d2s0
> do
> prtvtoc /dev/rdsk/c3t0d2s0 | fmthard -s - /dev/rdsk/$disks
> done
fmthard:  New volume table of contents now in place.
fmthard:  New volume table of contents now in place.
#

Format the disks for the OCR and voting disks. It is a good idea to put them on slices 3-7;
slice 0 is not a good candidate.

# format
...
selecting c3t0d4
[disk formatted]
...
partition> p
Current partition table (sun2g):
Total disk cylinders available: 2733 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 2515        1.82GB    (2516/0/0) 3824320
  1 unassigned    wu       0               0         (0/0/0)          0
  2     backup    wu       0 - 2732        1.98GB    (2733/0/0) 4154160
  3 unassigned    wu       0               0         (0/0/0)          0
  4 unassigned    wu       0               0         (0/0/0)          0
  5 unassigned    wu       0               0         (0/0/0)          0
  6 unassigned    wm    2516 - 2688      128.40MB    (173/0/0)   262960
  7 unassigned    wu    2689 - 2732       32.66MB    (44/0/0)     66880

partition> q
...

#
# prtvtoc /dev/rdsk/c3t0d4s0 | fmthard -s - /dev/rdsk/c3t2d4s2
fmthard:  New volume table of contents now in place.
# prtvtoc /dev/rdsk/c3t0d4s0 | fmthard -s - /dev/rdsk/c3t4d4s2
fmthard:  New volume table of contents now in place.
#

On all nodes, set the owner, group and permissions on the raw devices, which are the
slices for ASM, OCR and voting disks.

# cd /
# for rawdevs in c3t0d2s7 c3t1d2s7 c3t3d2s7 \
> c3t0d4s6 c3t2d4s6 c3t4d4s6 c3t0d4s7 c3t2d4s7 c3t4d4s7
> do
> echo $rawdevs; chown oracle:dba /dev/rdsk/$rawdevs; chmod 660 /dev/rdsk/$rawdevs
> ls -l `ls -l /dev/rdsk/$rawdevs | awk -F" " '{ print $11 }'`
> done
c3t0d2s7
crw-rw----   1 oracle   dba      118, 63 Jul 25 10:54 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@1,2:h,raw
c3t1d2s7
crw-rw----   1 oracle   dba      118,127 Jul 21 12:55 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@3,0:h,raw
c3t3d2s7
crw-rw----   1 oracle   dba      118,143 Jul 25 10:55 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@3,2:h,raw
c3t0d4s6
crw-rw----   1 oracle   dba      118, 38 Aug  9 12:07 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@0,4:g,raw
c3t2d4s6
crw-rw----   1 oracle   dba      118,118 Aug 16 10:21 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@2,4:g,raw
c3t4d4s6
crw-rw----   1 oracle   dba      118,198 Jul 25 10:55 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@4,4:g,raw
c3t0d4s7
crw-rw----   1 oracle   dba      118, 39 Aug 16 10:19 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@0,4:h,raw
c3t2d4s7
crw-rw----   1 oracle   dba      118,119 Aug 16 10:19 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@2,4:h,raw
c3t4d4s7
crw-rw----   1 oracle   dba      118,199 Aug 16 10:19 ../../devices/sbus@1f,0/SUNW,soc@0,0/SUNW,pln@a0000000,752ee9/ssd@4,4:h,raw
#

On node1, run CVU to check if all shared disks are available across all nodes.

$ cd /ora10.dvd2/clusterware/cluvfy
$ ./runcluvfy.sh comp ssa -n rac1,rac2 -s \
> /dev/rdsk/c3t0d2s7,/dev/rdsk/c3t1d2s7,/dev/rdsk/c3t3d2s7,\
> /dev/rdsk/c3t0d4s6,/dev/rdsk/c3t2d4s6,/dev/rdsk/c3t4d4s6,\
> /dev/rdsk/c3t0d4s7,/dev/rdsk/c3t2d4s7,/dev/rdsk/c3t4d4s7

Verifying shared storage accessibility

Checking shared storage accessibility...

"/dev/rdsk/c3t0d2s7" is shared.

"/dev/rdsk/c3t1d2s7" is shared.

"/dev/rdsk/c3t3d2s7" is shared.

"/dev/rdsk/c3t0d4s6" is shared.

"/dev/rdsk/c3t2d4s6" is shared.

"/dev/rdsk/c3t4d4s6" is shared.

"/dev/rdsk/c3t0d4s7" is shared.

"/dev/rdsk/c3t2d4s7" is shared.

"/dev/rdsk/c3t4d4s7" is shared.

Shared storage check was successful on nodes "rac2,rac1".

Verification of shared storage accessibility was successful.

$

Pre-installation tasks are done. The next step is to install Oracle clusterware.
