Adding a Node to Grid Infrastructure 12c
To add a node, the machine must be cloned one more time. Perform this the same way as when the second node was created, then configure the network settings and hostname. In addition, there are two changes that must be made on every node: setting up SSH equivalence for the grid account (the CRS owner) and updating the /etc/hosts file.
A. Configure and distribute SSH equivalence on all nodes
This step lets the grid accounts on each node connect to one another over ssh without being prompted for a password. First establish a connection between the node being added and one of the existing nodes, then propagate the existing node's key information so that every node ends up with the same authorized keys.
Create the .ssh directory (on the node being added)
[grid@oel5-12c-rac3 ~]$ mkdir ~/.ssh
[grid@oel5-12c-rac3 ~]$ chmod 700 ~/.ssh
Generate the RSA and DSA keys
[grid@oel5-12c-rac3 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
91:04:9d:1e:8d:7c:51:e9:e2:ea:e4:57:08:cf:92:d8 grid@oel5-12c-rac3.litkhai.com
[grid@oel5-12c-rac3 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
32:ed:88:f6:6d:3c:2e:2d:89:7e:5a:0b:07:2c:e5:70 grid@oel5-12c-rac3.litkhai.com
Create the authorized keys file
[grid@oel5-12c-rac3 ~]$ cd ~/.ssh/
[grid@oel5-12c-rac3 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@oel5-12c-rac3 .ssh]$ cat id_dsa.pub >> authorized_keys
[grid@oel5-12c-rac3 .ssh]$ ls
authorized_keys  id_dsa  id_dsa.pub  id_rsa  id_rsa.pub
Transfer the new node's authorized keys file to an existing node (new node -> existing node)
[grid@oel5-12c-rac3 .ssh]$ scp authorized_keys oel5-12c-rac1:/home/grid/.ssh/authorized_keys_rac3
The authenticity of host 'oel5-12c-rac1 (192.168.56.101)' can't be established.
RSA key fingerprint is 94:04:36:85:5b:f0:02:ea:a9:ea:15:dc:e8:db:dd:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'oel5-12c-rac1,192.168.56.101' (RSA) to the list of known hosts.
grid@oel5-12c-rac1's password:
authorized_keys_rac3                          100% 1036     1.0KB/s   00:00
Append the keys on the existing node
Connect to the existing node and append the new node's key information.
[grid@oel5-12c-rac3 .ssh]$ ssh oel5-12c-rac1
[grid@oel5-12c-rac1 ~]$ cd ~/.ssh
[grid@oel5-12c-rac1 .ssh]$ cat authorized_keys_rac3 >> authorized_keys
Distribute the authorized keys to all nodes (existing node -> all nodes)
[grid@oel5-12c-rac1 .ssh]$ scp authorized_keys oel5-12c-rac2:/home/grid/.ssh/
authorized_keys                               100% 1036     1.0KB/s   00:00
[grid@oel5-12c-rac1 .ssh]$ scp authorized_keys oel5-12c-rac3:/home/grid/.ssh/
authorized_keys                               100% 1036     1.0KB/s   00:00
Verify the configuration
Now confirm that the nodes can connect to one another without a password.
[grid@oel5-12c-rac1 .ssh]$ ssh oel5-12c-rac2 date
Thu Jan 2 07:18:33 KST 2014
[grid@oel5-12c-rac1 .ssh]$ ssh oel5-12c-rac3 date
Thu Jan 2 07:18:35 KST 2014
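Only the connections from node 1 are shown above. As a minimal sketch (assuming the three hostnames of this environment), the loop below checks every source/destination pair from a single terminal; BatchMode makes ssh fail instead of prompting, so any pair without working equivalence is reported.

[grid@oel5-12c-rac1 ~]$ for src in oel5-12c-rac1 oel5-12c-rac2 oel5-12c-rac3; do
>   for dst in oel5-12c-rac1 oel5-12c-rac2 oel5-12c-rac3; do
>     ssh $src "ssh -o BatchMode=yes $dst date" > /dev/null || echo "FAILED: $src -> $dst"
>   done
> done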
B. Update /etc/hosts on all nodes
So that every node knows the network information of all the others, the /etc/hosts entries must be identical on every node.
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.litkhai.com localhost
# Public
192.168.56.101   oel5-12c-rac1.litkhai.com oel5-12c-rac1
192.168.56.102   oel5-12c-rac2.litkhai.com oel5-12c-rac2
192.168.56.108   oel5-12c-rac3.litkhai.com oel5-12c-rac3
# Private
192.168.1.101    oel5-12c-rac1-priv.litkhai.com oel5-12c-rac1-priv
192.168.1.102    oel5-12c-rac2-priv.litkhai.com oel5-12c-rac2-priv
192.168.1.108    oel5-12c-rac3-priv.litkhai.com oel5-12c-rac3-priv
# Virtual
192.168.56.103   oel5-12c-rac1-vip.litkhai.com oel5-12c-rac1-vip
192.168.56.104   oel5-12c-rac2-vip.litkhai.com oel5-12c-rac2-vip
192.168.56.109   oel5-12c-rac3-vip.litkhai.com oel5-12c-rac3-vip
# SCAN
192.168.56.105   oel5-12c-scan.litkhai.com oel5-12c-scan
::1              localhost6.localdomain6 localhost6
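After the same file has been copied to every node, a quick sanity check (a sketch using the aliases above) is to confirm from an existing node that the new node's public and private names resolve through /etc/hosts and respond to ping:

[grid@oel5-12c-rac1 ~]$ for h in oel5-12c-rac3 oel5-12c-rac3-priv; do
>   getent hosts $h && ping -c 1 $h > /dev/null && echo "$h reachable"
> done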
C. Configure oracleasm on the node being added
Likewise, oracleasm must be configured on the third node, and access to the ASM disks must be verified.
1) Configure oracleasm
[root@oel5-12c-rac3 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface [grid]:
Default group to own the driver interface [asmadmin]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
2) Scan and verify the disks
[root@oel5-12c-rac3 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oel5-12c-rac3 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
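All five disks used by the existing cluster are visible. If a disk were missing from the listing, its label could be checked individually; a short sketch (DISK1 taken from the listing above), which reports whether the label is recognized as a valid ASM disk:

[root@oel5-12c-rac3 ~]# oracleasm querydisk DISK1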
D. Run the node addition
When the node addition is run from a node that is already part of the cluster, the required files are installed on the new node remotely over ssh and the node addition is carried out from there.

Pre-installation suitability check (node 1)
[root@oel5-12c-rac1 ~]# su - grid
[grid@oel5-12c-rac1 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@oel5-12c-rac1 ~]$ cluvfy stage -pre crsinst -n oel5-12c-rac3

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "oel5-12c-rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.56.0"
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Task ASM Integrity check started...
Checking if connectivity exists across cluster nodes on the ASM network
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity check passed
Network connectivity check across cluster nodes on the ASM network passed
Task ASM Integrity check passed...
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
        oel5-12c-rac3
Available memory check passed
Swap space check passed
Free disk space check passed for "oel5-12c-rac3:/usr,oel5-12c-rac3:/var,oel5-12c-rac3:/etc,oel5-12c-rac3:/u01/app/12.1.0/grid,oel5-12c-rac3:/sbin,oel5-12c-rac3:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        oel5-12c-rac3
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Package existence check passed for "nfs-utils"
Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Port availability check passed for ports "6200,6100"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check failed for process "avahi-daemon"
Check failed on nodes:
        oel5-12c-rac3
Daemon not running check failed for process "avahi-daemon"
Check failed on nodes:
        oel5-12c-rac3
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
Run the fixup for node 3
1) Re-run the command above with the -fixup option so that any failed check that can be fixed automatically is fixed.
[grid@oel5-12c-rac1 ~]$ cluvfy stage -pre crsinst -n oel5-12c-rac3 -fixup

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "oel5-12c-rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.56.0"
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Task ASM Integrity check started...
Checking if connectivity exists across cluster nodes on the ASM network
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity check passed
Network connectivity check across cluster nodes on the ASM network passed
Task ASM Integrity check passed...
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
        oel5-12c-rac3
Available memory check passed
Swap space check passed
Free disk space check passed for "oel5-12c-rac3:/usr,oel5-12c-rac3:/var,oel5-12c-rac3:/etc,oel5-12c-rac3:/u01/app/12.1.0/grid,oel5-12c-rac3:/sbin,oel5-12c-rac3:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        oel5-12c-rac3
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check failed for process "avahi-daemon"
Check failed on nodes:
        oel5-12c-rac3
Daemon not running check failed for process "avahi-daemon"
Check failed on nodes:
        oel5-12c-rac3
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed

******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************

--------------------------------      ---------------     ----------------
Check failed.                          Failed on nodes     Reboot required?
--------------------------------      ---------------     ----------------
Group Membership: dba                  oel5-12c-rac3       no
Daemon "avahi-daemon" not              oel5-12c-rac3       no
configured and running

Execute "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on nodes "oel5-12c-rac3" to perform the fix up operations manually

Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" has completed on nodes "oel5-12c-rac3"

Fix: Group Membership: dba
"Group Membership: dba" was successfully fixed on all the applicable nodes

Fix: Daemon "avahi-daemon" not configured and running
"Daemon "avahi-daemon" not configured and running" was successfully fixed on all the applicable nodes

Pre-check for cluster services setup was unsuccessful on all the nodes.
2) Run the fixup script on node 3.
[root@oel5-12c-rac3 ~]# /tmp/CVU_12.1.0.1.0_grid/runfixup.sh
All Fix-up operations were completed successfully.
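If desired, the two items that were fixed, plus the remaining total-memory warning, can be spot-checked directly on node 3 before continuing; a small sketch using standard OS commands:

[root@oel5-12c-rac3 ~]# id grid                       # "dba" should now appear in the groups list
[root@oel5-12c-rac3 ~]# chkconfig --list avahi-daemon # should be off after the fixup
[root@oel5-12c-rac3 ~]# grep MemTotal /proc/meminfo   # source of the total-memory warning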
A few checks still require manual work in order to pass. However, nothing here prevents us from continuing, so we will proceed as-is.

Run the pre-check for node addition
[grid@oel5-12c-rac1 ~]$ cluvfy stage -pre nodeadd -n oel5-12c-rac3

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "oel5-12c-rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Checking shared resources...
Checking CRS home location...
"/u01/app/12.1.0/grid" is not shared
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) oel5-12c-rac2,oel5-12c-rac3,oel5-12c-rac1
TCP connectivity check passed for subnet "192.168.56.0"
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac1,oel5-12c-rac3,oel5-12c-rac2
TCP connectivity check passed for subnet "192.168.1.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Task ASM Integrity check started...
Checking if connectivity exists across cluster nodes on the ASM network
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity check passed
Network connectivity check across cluster nodes on the ASM network passed
Task ASM Integrity check passed...
Total memory check failed
Check failed on nodes:
        oel5-12c-rac3,oel5-12c-rac1
Available memory check passed
Swap space check passed
Free disk space check passed for "oel5-12c-rac3:/usr,oel5-12c-rac3:/var,oel5-12c-rac3:/etc,oel5-12c-rac3:/u01/app/12.1.0/grid,oel5-12c-rac3:/sbin,oel5-12c-rac3:/tmp"
Free disk space check failed for "oel5-12c-rac1:/usr,oel5-12c-rac1:/var,oel5-12c-rac1:/etc,oel5-12c-rac1:/u01/app/12.1.0/grid,oel5-12c-rac1:/sbin,oel5-12c-rac1:/tmp"
Check failed on nodes:
        oel5-12c-rac1
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Group existence check passed for "asmadmin"
Group existence check passed for "asmoper"
Group existence check passed for "asmdba"
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was unsuccessful on all the nodes.
Run the node addition
1) The directory used here has changed slightly compared to earlier releases. Run the following command to add the node.
[grid@oel5-12c-rac1 addnode]$ cd /u01/app/12.1.0/grid/addnode/
[grid@oel5-12c-rac1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={oel5-12c-rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={oel5-12c-rac3-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 2093 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 5934 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2013-12-31_06-27-59PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2013-12-31_06-27-59PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   9% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2013-12-31_06-27-59PM.log

Instantiate files in progress.
Instantiate files successful.
..................................................   15% Done.

Copying files to node in progress.
Copying files to node successful.
..................................................   79% Done.

Saving cluster inventory in progress.
..................................................   87% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[oel5-12c-rac3]

The scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
2) Run the scripts as root on node 3.
[root@oel5-12c-rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@oel5-12c-rac3 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oel5-12c-rac3.litkhai.com_2013-12-31_17-50-20.log for the output of root script
The result of root.sh is now reviewed through the log file rather than printed to the terminal. The log contents look like this:
[root@oel5-12c-rac3 ~]# cat /u01/app/12.1.0/grid/install/root_oel5-12c-rac3.litkhai.com_2013-12-31_18-35-13.log
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2013/12/31 18:35:43 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2013/12/31 18:36:13 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oel5-12c-rac3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'oel5-12c-rac3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'oel5-12c-rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oel5-12c-rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'oel5-12c-rac3'
CRS-2672: Attempting to start 'ora.evmd' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.mdnsd' on 'oel5-12c-rac3' succeeded
CRS-2676: Start of 'ora.evmd' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.gpnpd' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.gipcd' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'oel5-12c-rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.diskmon' on 'oel5-12c-rac3' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'oel5-12c-rac3'
CRS-2676: Start of 'ora.cssd' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'oel5-12c-rac3'
CRS-2672: Attempting to start 'ora.ctssd' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.ctssd' on 'oel5-12c-rac3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.asm' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.storage' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.crsd' on 'oel5-12c-rac3' succeeded
CRS-6017: Processing resource auto-start for servers: oel5-12c-rac3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'oel5-12c-rac3'
CRS-2672: Attempting to start 'ora.ons' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.ons' on 'oel5-12c-rac3' succeeded
CRS-2676: Start of 'ora.asm' on 'oel5-12c-rac3' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'oel5-12c-rac3'
CRS-2676: Start of 'ora.proxy_advm' on 'oel5-12c-rac3' succeeded
CRS-6016: Resource auto-start has completed for server oel5-12c-rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/12/31 18:39:56 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
2013/12/31 18:40:20 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
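At this point the clusterware stack should already be running on the new node. As a quick sketch, it can be verified locally on node 3 before running the cluster-wide checks; Cluster Ready Services, Cluster Synchronization Services, and the Event Manager should all be reported online:

[root@oel5-12c-rac3 ~]# /u01/app/12.1.0/grid/bin/crsctl check crs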
Run the post-check for node addition
[grid@oel5-12c-rac1 addnode]$ cluvfy stage -post nodeadd -n oel5-12c-rac3

Performing post-checks for node addition

Checking node reachability...
Node reachability check passed from node "oel5-12c-rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) oel5-12c-rac1,oel5-12c-rac3,oel5-12c-rac2
TCP connectivity check passed for subnet "192.168.56.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.56.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking cluster integrity...
Cluster integrity check passed
Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Checking shared resources...
Checking CRS home location...
"/u01/app/12.1.0/grid" is not shared
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) oel5-12c-rac1,oel5-12c-rac2,oel5-12c-rac3
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) oel5-12c-rac3,oel5-12c-rac1,oel5-12c-rac2
TCP connectivity check passed for subnet "192.168.56.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oel5-12c-scan.litkhai.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

ERROR:
PRVG-1101 : SCAN name "oel5-12c-scan.litkhai.com" failed to resolve

ERROR:
PRVF-4657 : Name resolution setup check for "oel5-12c-scan.litkhai.com" (IP address: 192.168.56.105) failed

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "oel5-12c-scan.litkhai.com"

Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup failed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was unsuccessful on all the nodes.
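The only failure above is the SCAN name resolution check (PRVG-1101, PRVF-4657, PRVF-4664). That is expected in this environment, because the SCAN name is defined only in /etc/hosts with a single address rather than resolving through DNS, so it can be ignored here. A quick sketch of how to see what the SCAN name actually resolves to:

[grid@oel5-12c-rac1 ~]$ getent hosts oel5-12c-scan.litkhai.com
[grid@oel5-12c-rac1 ~]$ nslookup oel5-12c-scan.litkhai.com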
E. Verify the result of the node addition
[grid@oel5-12c-rac1 addnode]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       oel5-12c-rac1            STABLE
               ONLINE  ONLINE       oel5-12c-rac2            STABLE
               ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.DATA.dg
               ONLINE  ONLINE       oel5-12c-rac1            STABLE
               ONLINE  ONLINE       oel5-12c-rac2            STABLE
               ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       oel5-12c-rac1            STABLE
               ONLINE  ONLINE       oel5-12c-rac2            STABLE
               ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.net1.network
               ONLINE  ONLINE       oel5-12c-rac1            STABLE
               ONLINE  ONLINE       oel5-12c-rac2            STABLE
               ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.ons
               ONLINE  ONLINE       oel5-12c-rac1            STABLE
               ONLINE  ONLINE       oel5-12c-rac2            STABLE
               ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.proxy_advm
               ONLINE  ONLINE       oel5-12c-rac1            STABLE
               ONLINE  ONLINE       oel5-12c-rac2            STABLE
               ONLINE  ONLINE       oel5-12c-rac3            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       oel5-12c-rac2            STABLE
ora.asm
      1        ONLINE  ONLINE       oel5-12c-rac1            STABLE
      2        ONLINE  ONLINE       oel5-12c-rac2            STABLE
      3        ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       oel5-12c-rac1            Open,STABLE
      2        ONLINE  ONLINE       oel5-12c-rac2            Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       oel5-12c-rac2            STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.oel5-12c-rac1.vip
      1        ONLINE  ONLINE       oel5-12c-rac1            STABLE
ora.oel5-12c-rac2.vip
      1        ONLINE  ONLINE       oel5-12c-rac2            STABLE
ora.oel5-12c-rac3.vip
      1        ONLINE  ONLINE       oel5-12c-rac3            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       oel5-12c-rac2            STABLE
--------------------------------------------------------------------------------
This confirms that the cluster node addition has completed.
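A compact node-level summary is also available; as a sketch, olsnodes with the options below prints each node's number, status, and pin state, and all three nodes should now be listed as Active:

[grid@oel5-12c-rac1 ~]$ olsnodes -n -s -t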