Impact of Changing the SSH Port on an Oracle Cluster


Recently, several customers in a row have asked me what impact changing the SSH port would have on Oracle RAC. I had looked into this question before: it has no impact on a running Oracle RAC. But simply "saying so" carries little weight.

Today I happen to have some free time, so I ran a full test around SSH and recorded the whole process to share here.

Part I. Collecting the State Before the Test

1. The database is a two-node 11.2.0.4 RAC; the nodes are node111g and node211g.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

2. The cluster status is normal.
[grid@node111g ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.DGROUP_01.dg
              ONLINE ONLINE node111g
              OFFLINE OFFLINE node211g
ora.FRA.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.LISTENER.lsnr
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.LISTENER_TEST.lsnr
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.OCR.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.VOTE3D.dg
              ONLINE ONLINE node111g
              OFFLINE OFFLINE node211g
ora.asm
              ONLINE ONLINE node111g Started
              ONLINE ONLINE node211g Started
ora.gsd
              OFFLINE OFFLINE node111g
              OFFLINE OFFLINE node211g
ora.net1.network
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.ons
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.TEST_SCAN1.lsnr
      1 ONLINE ONLINE node111g
ora.cvu
      1 ONLINE ONLINE node211g
ora.node111g.vip
      1 ONLINE ONLINE node111g
ora.node211g.vip
      1 ONLINE ONLINE node211g
ora.oc4j
      1 ONLINE ONLINE node211g
ora.orcl.db
      1 ONLINE ONLINE node111g Open
      2 ONLINE ONLINE node211g Open
ora.orcl.romi.svc
      1 ONLINE ONLINE node111g
ora.orcl.test.svc
      1 ONLINE ONLINE node211g
      2 ONLINE ONLINE node111g
ora.scan1.vip
      1 ONLINE ONLINE node111g

3. The SSH configuration file. The default SSH port is 22, and the file itself suggests leaving the defaults alone. But I want to change it anyway — so how?
[root@node111g ~]# more /etc/ssh/sshd_config
# $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
#Port 22
#Protocol 2,1
Protocol 2
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
......
[root@node111g ~]#
[root@node211g ~]# more /etc/ssh/sshd_config
# $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
#Port 22
#Protocol 2,1
Protocol 2
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 1h

Part II. Making the Change

1. We plan to change the port to 6001. First we need to confirm whether this port is already in use.
[root@node211g ~]# lsof -i:22
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 4543 root 3u IPv4 16317 TCP *:ssh (LISTEN)
sshd 16000 root 3r IPv4 57216374 TCP node211g:ssh->node111g:56437 (ESTABLISHED) <<<<<<< port 22 is in use by sshd
sshd 16004 oracle 3u IPv4 57216374 TCP node211g:ssh->node111g:56437 (ESTABLISHED)
sshd 17907 root 3r IPv4 57462432 TCP node211g:ssh->node111g:59861 (ESTABLISHED)
sshd 17911 oracle 3u IPv4 57462432 TCP node211g:ssh->node111g:59861 (ESTABLISHED)
 
[root@node211g ~]# lsof -i:1521
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
oracle 1437 oracle 14u IPv4 68052787 TCP node211g:20651->node-cluster-scan:ncube-lm (ESTABLISHED) <<<<<< port 1521 is in use by the SCAN listener
oracle 14412 grid 15u IPv4 1313392 TCP node211g:34873->node-cluster-scan:ncube-lm (ESTABLISHED)
tnslsnr 14882 grid 16u IPv4 1332718 TCP node211g:ncube-lm (LISTEN)
tnslsnr 14882 grid 17u IPv4 1332719 TCP node211g-vip:ncube-lm (LISTEN)
[root@node211g ~]# lsof -i:6001
Port 6001 is not in use (no output).
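
Besides lsof, netstat or ss can also confirm that nothing is listening on the candidate port, and /etc/services shows whether the number already has a well-known assignment. A quick sketch (which of these tools is available depends on the OS release); no output means the port is free:

[root@node211g ~]# netstat -tlnp | grep ':6001 '    # listening TCP sockets on 6001
[root@node211g ~]# ss -tlnp | grep ':6001 '         # same check with ss on newer releases
[root@node211g ~]# grep -w 6001 /etc/services       # any conventional assignment for this port number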

2. Add "Port 6001" to the SSH configuration file on both nodes.
[root@node111g ~]# vi /etc/ssh/sshd_config
 
#Port 22
Port 6001
#Protocol 2,1
Protocol 2
3. Restart the sshd service so the new port takes effect.
[root@node111g ~]# service sshd restart
Stopping sshd: [ OK ]
Starting sshd: [ OK ]
[root@node211g ~]# service sshd restart 
Stopping sshd: [ OK ]
Starting sshd: [ OK ]
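
A small precaution worth adding here (not part of the original test): validate the edited configuration before restarting, and keep an existing session open until the new port is confirmed reachable, so a typo in sshd_config cannot lock you out of the server. For example:

[root@node111g ~]# /usr/sbin/sshd -t    # test mode: prints errors and exits non-zero if the config is invalid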
4. Check the port — it is now in use by sshd.
[root@node111g ~]# lsof -i:6001
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 28964 root 3u IPv4 111172702 TCP *:6001 (LISTEN)
[root@node211g ~]# lsof -i:6001
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 18371 root 3u IPv4 70993328 TCP *:6001 (LISTEN)
[root@node211g ~]#
5. Check and test the ports — port 22 can no longer be connected to.
[oracle@node111g archive_log]$ ssh node111g
ssh: connect to host node111g port 22: Connection refused
[oracle@node111g archive_log]$ ssh node211g
ssh: connect to host node211g port 22: Connection refused

Part III. Testing the Impact on RAC

1. Connecting through port 6001 works fine, and RAC SSH user equivalence is not affected either, which shows that SSH user equivalence does not depend on the port.

Put simply, node1 and node2 each trust the other's access key; as long as the key checks out, it does not matter which port the connection comes in on. (A client-side configuration sketch follows the session output below.)
[oracle@node111g archive_log]$ ssh -p 6001 node211g
Last login: Fri Dec 19 10:19:05 2014 from node111g
[oracle@node211g ~]$ hostname
node211g
[oracle@node211g ~]$ exit
logout
Connection to node211g closed.
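
If you want ssh between the nodes to keep working without typing -p 6001 every time (and to help any tool that calls /usr/bin/ssh without a port option), the port can be pinned in the OpenSSH client configuration. This is only a sketch and was not part of the original test; whether a given Oracle utility honors it depends on how it invokes ssh:

# Per-user client config for the oracle and grid users: ~/.ssh/config (chmod 600),
# or system-wide in /etc/ssh/ssh_config
Host node111g node211g node111g-priv node211g-priv
    Port 6001

With this in place, a plain "ssh node211g" reaches the node on port 6001 again.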

2. The port has been changed successfully. Check the CRS status — everything is normal.
[grid@node111g ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.DGROUP_01.dg
              ONLINE ONLINE node111g
              OFFLINE OFFLINE node211g
ora.FRA.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.LISTENER.lsnr
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.LISTENER_TEST.lsnr
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.OCR.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.VOTE3D.dg
              ONLINE ONLINE node111g
              OFFLINE OFFLINE node211g
ora.asm
              ONLINE ONLINE node111g Started
              ONLINE ONLINE node211g Started
ora.gsd
              OFFLINE OFFLINE node111g
              OFFLINE OFFLINE node211g
ora.net1.network
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.ons
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.TEST_SCAN1.lsnr
      1 ONLINE ONLINE node111g
ora.cvu
      1 ONLINE ONLINE node211g
ora.node111g.vip
      1 ONLINE ONLINE node111g
ora.node211g.vip
      1 ONLINE ONLINE node211g
ora.oc4j
      1 ONLINE ONLINE node211g
ora.orcl.db
      1 ONLINE ONLINE node111g Open
      2 ONLINE ONLINE node211g Open
ora.orcl.romi.svc
      1 ONLINE ONLINE node111g
ora.orcl.test.svc
      1 ONLINE ONLINE node211g
      2 ONLINE ONLINE node111g
ora.scan1.vip
      1 ONLINE ONLINE node111g

3. Now restart CRS to see whether there is any impact. First, stop it:
[root@node211g ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node211g' has completed
CRS-4133: Oracle High Availability Services has been stopped.
4. Start CRS again...
[root@node111g ~]# /u01/app/11.2.0/grid/bin/crsctl start crs

5. The status is still normal; everything comes back up without any problem.
[grid@node111g ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.DGROUP_01.dg
              ONLINE ONLINE node111g
              OFFLINE OFFLINE node211g
ora.FRA.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.LISTENER.lsnr
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.LISTENER_TEST.lsnr
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.OCR.dg
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.VOTE3D.dg
              ONLINE ONLINE node111g
              OFFLINE OFFLINE node211g
ora.asm
              ONLINE ONLINE node111g Started
              ONLINE ONLINE node211g Started
ora.gsd
              OFFLINE OFFLINE node111g
              OFFLINE OFFLINE node211g
ora.net1.network
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
ora.ons
              ONLINE ONLINE node111g
              ONLINE ONLINE node211g
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.TEST_SCAN1.lsnr
      1 ONLINE ONLINE node111g
ora.cvu
      1 ONLINE ONLINE node111g
ora.liyou.db
      1 OFFLINE OFFLINE Instance Shutdown
      2 OFFLINE OFFLINE Instance Shutdown
ora.node111g.vip
      1 ONLINE ONLINE node111g
ora.node211g.vip
      1 ONLINE ONLINE node211g
ora.oc4j
      1 ONLINE ONLINE node111g
ora.orcl.db
      1 ONLINE ONLINE node111g Open
      2 ONLINE ONLINE node211g Open
ora.orcl.romi.svc
      1 ONLINE ONLINE node111g
ora.orcl.test.svc
      1 ONLINE ONLINE node211g
      2 ONLINE ONLINE node111g
ora.scan1.vip
      1 ONLINE ONLINE node111g

6. Query some data one more time — now we can relax!
$ sqlplus / as sysdba
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select username from dba_users;
select username from dba_users;
USERNAME
------------------------------------------------------------------------------------------
MGMT_VIEW
SYS
SYSTEM
DBSNMP
SYSMAN
SCOTT
......
38 rows selected.

    Interim summary: once the database and GI have been installed and are running normally, SSH is not used at all. Inter-node communication in Oracle goes through Oracle's own processes over TCP and UDP, so changing the SSH port has no impact whatsoever on a running RAC database.

7. But what if we apply a patch? Let's see!

As it happens, this test database already has the latest PSU 11.2.0.4.4 applied, so let us roll it back first ......
 cd $ORACLE_HOME/OPatch
[oracle@node111g OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
  from : /u01/app/oracle/product/11.2.0/dbhome_1/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2014-12-19_15-59-03午後_1.log
Lsinventory Output file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-12-19_15-59-03午後.txt
--------------------------------------------------------------------------------
 
Oracle Database 11g 11.2.0.4.0
 
Patch 19121551 : applied on Thu Nov 27 17:50:58 CST 2014
Unique Patch ID: 17949166
Patch description: "Database Patch Set Update : 11.2.0.4.4 (19121551)" <<<<<<<<
  Created on 6 Oct 2014, 10:07:57 hrs PST8PDT
Sub-patch 18522509; "Database Patch Set Update : 11.2.0.4.3 (18522509)"
Sub-patch 18031668; "Database Patch Set Update : 11.2.0.4.2 (18031668)"
Sub-patch 17478514; "Database Patch Set Update : 11.2.0.4.1 (17478514)"
  Bugs fixed:
  ......
Patch 18031740 : applied on Thu Nov 27 14:29:51 CST 2014
Unique Patch ID: 17253722
Patch description: "OCW Patch Set Update : 11.2.0.4.2 (18031740)"
  Created on 19 Mar 2014, 09:06:31 hrs PST8PDT
  Bugs fixed:
......
 
  Local node = node111g
  Remote node = node211g
--------------------------------------------------------------------------------
OPatch succeeded.
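
The exact rollback invocation is not shown in the log; a PSU rollback is normally started from the OPatch directory roughly like this (illustrative only):

[oracle@node111g OPatch]$ ./opatch rollback -id 19121551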

8. The rollback failed. This shows that when a rollback has to perform operations on the remote node, it goes through SSH, and because the port was changed, that connection now fails.

The same applies to applying a patch and to adding or deleting nodes: files have to be copied to the remote node, so the same problem appears there as well.

(I borrowed a colleague's test environment, which runs a Japanese locale; setting LANG=en_US never switched the output to English, so this part contains Japanese. With my colleague's help the meaning is clear enough, and the key words are easy to guess anyway — "削除" means "delete", and "失敗" (failure) needs no translation.)
All-node error message = リストされたファイル'/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/NRollback/2014-12-19_16-16-49午後/rac/remove_files.txt.instantiated'に基づいて、ノード'node211g'でファイルを削除中にエラーが発生しました。
<<<<<< OPatch failed to delete the listed files on remote node 'node211g'.
続行しますか。[y|n]
y
User Responded with: Y
Instantiating the file "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/NRollback/2014-12-19_16-16-49午後/rac/remove_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/NRollback/2014-12-19_16-16-49午後/rac/remove_dirs.txt" with actual path.
Removing directories on remote nodes...
OPatchはリモートノード'node211g' でディレクトリの削除に失敗しました。 詳細: '/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/NRollback/2014-12-19_16-16-49午後/rac/remove_dirs.txt.instantiated'に基づいてリストされたディレクトリをノード'node211g'から削除できませんでした。[PRKC-1083 : 指定したノード"node211g "のいずれかに、"/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/NRollback/2014-12-19_16-16-49午後/rac/remove_dirs.txt.instantiated"に示されているディレクトリを削除することに失敗しました。
ノードnode211g:PRKC-1044 : シェル/usr/bin/sshおよび/usr/bin/rshを使用したノードnode211gのリモートコマンド実行設定のチェックに失敗しました <<<<< The check of remote command execution on node node211g using /usr/bin/ssh and /usr/bin/rsh failed.
node211g: Connection refused.
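
Before patching or adding/removing nodes, the Cluster Verification Utility can confirm whether user equivalence, as OPatch and the installer see it via /usr/bin/ssh, is still intact. A sketch, assuming cluvfy from the GI home is on the PATH; with the SSH port moved to 6001 it would be expected to report failures just as OPatch did:

[grid@node111g ~]$ cluvfy comp admprv -n node111g,node211g -o user_equiv -verbose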

Part IV. Testing the Impact of Changing an IP Address on SSH Equivalence

1. Next, let us test whether changing an IP address affects SSH user equivalence. (Note: when changing the private IP, the interconnect network information recorded in GI must be adjusted accordingly, otherwise GI cannot communicate properly; Oracle has documentation on how to change the Public and Private IPs. See the sketch below.)
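
For reference, the interconnect definition recorded in GI is subnet-based and can be listed and changed with oifcfg. In this test only the host part of the address changes (10.0.0.2 to 10.0.0.12) and the subnet 10.0.0.0 stays the same, so the registration itself does not need to be touched; if the subnet were changing, something along these lines would be needed (interface names and subnets here are illustrative):

[grid@node111g ~]$ oifcfg getif                                               # list the networks currently registered with GI
[grid@node111g ~]$ oifcfg setif -global eth2/10.1.0.0:cluster_interconnect    # register the new private subnet
[grid@node111g ~]$ oifcfg delif -global eth2/10.0.0.0                         # remove the old registration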

2. Start with the private IP; changing it on node 2 only is enough.
# ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:0C:29:4D:2D:D0
          inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:8739422 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11094020 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5201547204 (4.8 GiB) TX bytes:8400210712 (7.8 GiB)
 
[root@node211g tmp]# more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node211g localhost
192.168.1.1 node111g
192.168.1.2 node211g
192.168.1.3 node111g-vip
192.168.1.4 node211g-vip
192.168.1.5 node-cluster-scan
10.0.0.1 node111g-priv
10.0.0.2 node211g-priv <<<<<<<<<<<<<<<<<<<

3. Change the private IP on the NIC to 10.0.0.12.
[root@node211g tmp]# ifconfig eth2 10.0.0.12 netmask 255.255.255.0

4. Check — the change has taken effect.
[root@node211g tmp]# ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:0C:29:4D:2D:D0
          inet addr:10.0.0.12 Bcast:10.0.0.255 Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:8744646 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11100718 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5204200949 (4.8 GiB) TX bytes:8404710975 (7.8 GiB)

5. Switch to the grid user and test. The first connection to the new address still prompts — not for a password, but to confirm the new host key — while the second one does not. So changing the IP does affect SSH in the sense that the new address's host key must be recorded in known_hosts, but user equivalence itself does not need to be reconfigured.
[root@node211g tmp]# su - grid
[grid@node211g ~]$ ssh -p 6001 10.0.0.1
Last login: Mon Dec 22 10:34:07 2014 from node211g-priv
[grid@node111g ~]$ ssh -p 6001 10.0.0.12
The authenticity of host '10.0.0.12 (10.0.0.12)' can't be established.
RSA key fingerprint is 9b:11:59:5b:0f:0d:85:17:94:0c:e0:76:be:c4:7e:9e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.12' (RSA) to the list of known hosts.
Last login: Thu Dec 18 14:45:30 2014 from dhcp-tokyo-twvpn-1-vpnpool-10-191-12-97.vpn.oracle.com
[grid@node211g ~]$ exit
logout
Connection to 10.0.0.12 closed.
[grid@node111g ~]$ ssh -p 6001 10.0.0.12
Last login: Mon Dec 22 10:34:55 2014 from node111g-priv
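
The interactive host-key prompt seen above can also be avoided by pre-loading the new address's key into known_hosts, for example with ssh-keyscan (a sketch; run as the user that owns the known_hosts file):

[grid@node111g ~]$ ssh-keyscan -p 6001 10.0.0.12 >> ~/.ssh/known_hosts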

    This concludes the test: we looked at the impact of changing the SSH port on RAC, and at the impact of changing an IP address on SSH equivalence.

      ------------ Testing is what makes an argument convincing! ------------
