This post covers changing the IP addresses of an Oracle Exadata Database Machine. The most common use case for this procedure is a system relocation, so this document was written with that case in mind. Note that system hostname and domain name changes are not covered by this procedure.
Assumptions
1- We stop all databases
srvctl stop database -d <db_unique_name>
2- We stop DBFS if a DBFS mount point exists
crsctl stop resource dbfs_mount
3- We shut down the cluster services
crsctl stop cluster -all
4- We check whether CRS autostart is enabled
dcli -g dbs_ib_group -l root /u01/app/12.1.0.2/grid/bin/crsctl config crs
If it is enabled, the cluster will start automatically at boot:
dm01db01-priv: CRS-4622: Oracle High Availability Services autostart is enabled.
dm01db02-priv: CRS-4622: Oracle High Availability Services autostart is enabled.
5- We disable CRS autostart so the cluster does not start automatically after a reboot
dcli -g dbs_ib_group -l root /u01/app/12.1.0.2/grid/bin/crsctl disable crs
6- We check again
dcli -g dbs_ib_group -l root /u01/app/12.1.0.2/grid/bin/crsctl config crs
7- We shut down all nodes
shutdown -h now
8/9- We enter the new IP addresses in DNS
10- We verify DNS: check that the short name, the fully qualified name, and the IP all resolve correctly, forward and reverse
11- We power the Exadata back on and make sure the cluster and the databases have not started
12- We turn off the network service
service network stop
13- With this command, we change the gateway IP in the network file
vi /etc/sysconfig/network
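As a sketch, the edited file might look like the following after the move; the hostname and gateway address below are hypothetical examples:

```
NETWORKING=yes
HOSTNAME=dm01db01.example.com
GATEWAY=192.0.2.1
```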
14- We make any DNS, NTP, or time zone changes at this step.
15- We calculate the subnet values for the assigned IP
ipcalc -bnm 192.0.2.66 255.255.254.0 (based on the assigned IP and netmask)
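If ipcalc is not at hand, the same three values can be derived with plain bash arithmetic. A minimal sketch, using the example IP and /23 netmask above:

```shell
#!/bin/bash
# Sketch of the values `ipcalc -bnm` reports, computed with bash
# arithmetic (example IP and netmask from step 15).

# Dotted-quad -> 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# 32-bit integer -> dotted-quad.
int2ip() {
  local n=$1
  echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}

ip=192.0.2.66
mask=255.255.254.0
i=$(ip2int "$ip")
m=$(ip2int "$mask")

echo "NETMASK=$mask"
echo "BROADCAST=$(int2ip $(( i | (~m & 0xFFFFFFFF) )))"   # 192.0.3.255
echo "NETWORK=$(int2ip $(( i & m )))"                     # 192.0.2.0
```

The NETWORK value is what appears in the route file in the later steps.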
16- We stop eth0 to change the IP
ifdown eth0
17- We back up ifcfg-eth0
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0_bck
18- We update the eth0 IP, gateway, and netmask information
vi /etc/sysconfig/network-scripts/ifcfg-eth0
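For reference, a minimal ifcfg-eth0 after the change could look like this, using the example addresses from step 15; the real file will carry additional vendor-specific entries that should be left intact:

```
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.0.2.66
NETMASK=255.255.254.0
GATEWAY=192.0.2.1
```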
19- We define the routing rules according to the new IP
vi /etc/sysconfig/network-scripts/rule-eth0
from 192.0.2.8 table 220
to 192.0.2.8 table 220
20- We edit route-eth0
vi /etc/sysconfig/network-scripts/route-eth0
192.0.2.0/23 dev eth0 table 220
default via 192.0.2.1 dev eth0 table 220
21- We update the hosts file according to the new IPs
vi /etc/hosts
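A sketch of the kind of entries to update in /etc/hosts; the hostnames and the domain below are hypothetical examples:

```
127.0.0.1       localhost.localdomain localhost
192.0.2.66      dm01db01.example.com dm01db01            # management IP
192.168.10.1    dm01db01-priv.example.com dm01db01-priv  # private IP
```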
22- After the changes, we start the network service
service network start
23- We bring eth0 up
ifup eth0
24- Reboot the server
reboot -n
25- We edit the SSH config file according to the new IPs
vi /etc/ssh/sshd_config
ListenAddress 127.0.0.1
ListenAddress 192.168.7.170 # management
ListenAddress 192.168.10.1 # priv1
ListenAddress 192.168.10.2 # priv2
26- We set the DNS and IP settings by SSHing to the ILOM
set /SP/clients/dns nameserver=203.0.113.52
cd /SP/network
set pendingipaddress=10.196.16.152
set pendingipgateway=10.196.23.254
set pendingipnetmask=255.255.248.0
set pendingipdiscovery=static
set commitpending=true
27- We check the ILOM DNS and IP settings
show /SP/clients/dns
show /SP/network
28- We set the new cell SMTP server information
dcli -g cell_group -l root "cellcli -e alter cell smtpserver=\'new.smtp.server.com\'"
29- We stop the services on all cells
dcli -g cell_ib_group -l root "cellcli -e alter cell shutdown services all"
30- We copy cell.conf under /root for the new IPs
dcli -g cell_ib_group -l root cp /opt/oracle.cellos/cell.conf /root/new.cell.conf
31- We run ipconf to apply the changes (each cell reboots)
/opt/oracle.cellos/ipconf -force -newconf /root/new.cell.conf -reboot
32- We check for changes
/opt/oracle.cellos/ipconf -verify -conf /root/new.cell.conf -verbose
33- We check that the cells work correctly
(root) # dcli -g cell_ib_group -l root cellcli -e list cell detail
34- We change the InfiniBand switch's IP, DNS, and NTP settings.
35- We change the Cisco switch's IP, DNS, and NTP settings.
36- We change the PDU IPs.
37- We extract the IP list for the client network settings
(root) # ipcalc -bnm 198.51.100.66 255.255.254.0
NETMASK=255.255.254.0
BROADCAST=198.51.101.255
NETWORK=198.51.100.0
38- We start the cluster
dcli -g dbs_group -l root /u01/app/12.1.0.2/grid/bin/crsctl start crs
39- We check
crsctl stat res -t
40- We check the current client network configuration
(oracle) $ srvctl config scan
(oracle) $ srvctl config nodeapps
(oracle) $ oifcfg getif
(root) # ifconfig
41- We stop the listeners and the client network resources
(oracle) $ srvctl stop listener -node {node name}
(oracle) $ srvctl stop scan_listener
(oracle) $ srvctl stop vip -n {node name}
(oracle) $ srvctl stop scan
42- We delete the client network from the configuration – run on one node only
(oracle) $ oifcfg delif -global bondeth0
43- We stop bondeth0 for change
(root) # ifdown bondeth0
44- We back up ifcfg-bondeth0
cp /etc/sysconfig/network-scripts/ifcfg-bondeth0 /etc/sysconfig/network-scripts/ifcfg-bondeth0_bck
45- We make the changes according to the new IP
vi /etc/sysconfig/network-scripts/ifcfg-bondeth0
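As a sketch, the client interface file might end up like this, using the example addresses from step 37; the bonding options are illustrative only and should be kept exactly as they were in the backup:

```
DEVICE=bondeth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=198.51.100.66
NETMASK=255.255.254.0
NETWORK=198.51.100.0
BROADCAST=198.51.101.255
GATEWAY=198.51.100.1
BONDING_OPTS="mode=active-backup miimon=100"
```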
46- We set our routes and rules for the new IP
cd /etc/sysconfig/network-scripts/
(root) # cat rule-bondeth0
from 198.51.100.8 table 220
to 198.51.100.8 table 220
(root) # cat route-bondeth0
198.51.100.0/23 dev bondeth0 table 220
default via 198.51.100.1 dev bondeth0 table 220
47- We edit /etc/hosts according to our new client IPs
vi /etc/hosts
48- After the changes, we restart the network service to apply the new IPs
service network restart
49- We add the interface defined on the new subnet to the configuration
(oracle) $ oifcfg setif -global bondeth0/198.51.100.0:public
50- We check
(oracle) $ oifcfg getif
51- We modify the network and SCAN configuration for the new subnet
(root) # srvctl modify network -netnum 1 -subnet 198.51.100.0/255.255.254.0/bondeth0
(root) # srvctl modify scan -netnum 1 -scanname scan.mycluster.example.com
52- We check the SCAN IPs
(oracle) $ srvctl config scan
53- We start the VIPs and listeners
(oracle) $ srvctl start vip -node {node name}
(oracle) $ srvctl start listener
(oracle) $ srvctl start scan
(oracle) $ srvctl start scan_listener
54- We check the listeners
(oracle) $ srvctl status nodeapps
(oracle) $ srvctl status scan_listener
55- We stop the cluster and start it again
(root) # /u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all
(root) # /u01/app/12.1.0.2/grid/bin/crsctl start cluster -all
56- We enable Cluster to be started automatically
(root) # dcli -g dbs_group -l root /u01/app/12.1.0.2/grid/bin/crsctl enable crs
57- We start the databases
srvctl start database -d <db_unique_name>
58- We start the CRS dbfs_mount resource
(oracle) $ crsctl start res dbfs_mount
Also, you can check out my other post: Infiniband Switch Ports Status Showing AutomaticHighErrorRate Message
Have a nice day.