RAC help needed

Running the second script (root.sh) on rac2:
[root@rac2 ~]# /u01/crs/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
... (the same message repeated until the 600-second timeout) ...
Timed out waiting for the CRS stack to start.
Before running it, I had made the following changes on rac2:

     a. Edited the $CRS_HOME/bin/vipca file,
     commenting out the following lines, so that after the edit it reads:
     arch=`uname -m`
     #if [ "$arch" = "i686" -o "$arch" = "ia64" ]
     #then
     #      LD_ASSUME_KERNEL=2.4.19
     #      export LD_ASSUME_KERNEL
     #fi
     #End workaround

     b. Edited the $CRS_HOME/bin/srvctl and $ORACLE_HOME/bin/srvctl files,
     commenting out the following lines (a verification sketch follows this list),
     so that after the edit they read:
     #Remove this workaround when the bug 3937317 is fixed
     #LD_ASSUME_KERNEL=2.4.19
     #export LD_ASSUME_KERNEL
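
For verification, a quick way to confirm that no active LD_ASSUME_KERNEL setting
remains, either in the edited scripts or in the shell that runs them (a minimal
sketch, assuming the CRS home shown in the log above):

     # Every remaining occurrence in the scripts should now start with '#'
     grep -n "LD_ASSUME_KERNEL" \
         /u01/crs/oracle/product/10.2.0/crs_1/bin/vipca \
         /u01/crs/oracle/product/10.2.0/crs_1/bin/srvctl \
         $ORACLE_HOME/bin/srvctl

     # root.sh inherits the invoking environment, so check the shell too
     env | grep LD_ASSUME_KERNEL || echo "not set in this shell"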
After the failure I manually ran vipca:

[root@rac2 ~]# /u01/crs/oracle/product/10.2.0/crs_1/bin/vipca

which reported:

   Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

I then did the following cleanup and re-ran root.sh, but it still failed:

[root@rac2 ~]# rm -rf /var/tmp/.oracle
[root@rac2 ~]# cd /etc/oracle/scls_scr/rac2/oracle/
[root@rac2 oracle]# rm -rf cssfatal
[root@rac2 oracle]# /u01/crs/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
Any advice on how to work around this 10.2.0.1 bug? The 11g build succeeded!
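
(For reference when debugging a hang like this: the standard 10.2 stack checks,
a minimal sketch assuming the CRS home shown in the log above.)

CRS_HOME=/u01/crs/oracle/product/10.2.0/crs_1

# Check each layer of the stack
$CRS_HOME/bin/crsctl check crs
$CRS_HOME/bin/crsctl check cssd
$CRS_HOME/bin/crsctl check crsd
$CRS_HOME/bin/crsctl check evmd

# Are the daemons running at all?
ps -ef | grep -E 'crsd|evmd|ocssd' | grep -v grep

# The daemon logs (10.2 layout) usually say why crsd/evmd would not come up
ls $CRS_HOME/log/rac2/crsd $CRS_HOME/log/rac2/evmd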

dongxujian
P4 | Posted 2011-5-5 12:47:56
The environment is Linux 5.3 + Oracle 10.2.0.1, with LUNs created on Openfiler; three VMs: rac1, rac2, and openfiler. The 11g build did not hit this problem.
P4 | Posted 2011-5-5 12:55:15
Is the instructor around? Any idea where the problem might be? Please advise.
P4 | Posted 2011-5-6 08:31:57
Solution:

Remember to re-edit these files on all nodes:

<CRS_HOME>/bin/vipca

<CRS_HOME>/bin/srvctl

<RDBMS_HOME>/bin/srvctl

<ASM_HOME>/bin/srvctl



after applying the 10.2.0.2 or 10.2.0.3 patchsets, as those patchsets still include these settings, which are unnecessary on OEL5, RHEL5, or SLES10. This issue was raised with development and is fixed in the 10.2.0.4 patchset.



Note that we explicitly unset LD_ASSUME_KERNEL rather than merely commenting out its setting, to handle the case where the user has it set in their environment (login shell).
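
A minimal sketch of what the edited block can look like with an explicit unset
(content otherwise as shipped in the 10.2 scripts, matching the excerpt below):

       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            # clear any value inherited from the login shell
            unset LD_ASSUME_KERNEL
       fi
       #End workaround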



$ vi vipca

... ...

Linux) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
       export LD_LIBRARY_PATH
       echo $LD_LIBRARY_PATH
       echo $CLASSPATH
       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
        # LD_ASSUME_KERNEL=2.4.19
        # export LD_ASSUME_KERNEL
        echo    # this must be added here, otherwise the empty then/fi block is a shell syntax error
       fi
       #End workaround



Problem 2: if you run into this error:

# vipca

Error 0(Native: listNetInterfaces:[3])

[Error 0(Native: listNetInterfaces:[3])]

Solution:

Run the oifcfg command from under CRS_HOME:



# ./oifcfg setif -global eth0/10.85.10.0:public

# ./oifcfg setif -global eth1/192.168.1.0:cluster_interconnect

# ./oifcfg getif

eth0 10.85.10.0 global public

eth1 192.168.1.0 global cluster_interconnect



-- Note that the last octet here is 0: it denotes a subnet, not a host address. Once set on one node, the other nodes can see it as well.
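
If you are unsure which subnets to pass to setif, oifcfg can first list what it
sees on the node (a sketch; interface names and subnets will differ per setup):

# Show the interfaces and their subnets as seen by the clusterware layer
# (run from <CRS_HOME>/bin, as above)
./oifcfg iflist

# A wrongly defined entry can be removed and redefined
./oifcfg delif -global eth0
./oifcfg setif -global eth0/10.85.10.0:public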



Then manually run vipca to add the nodeapps resources.
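
Once vipca completes, the nodeapps (VIP, GSD, ONS, listener) can be checked with
the standard 10.2 tools, for example:

# Per-node status of the nodeapps resources
./srvctl status nodeapps -n rac1
./srvctl status nodeapps -n rac2

# Overall target/actual state of all CRS resources
./crs_stat -t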

In my case I only modified vipca and srvctl under CRS_HOME on rac2, and that resolved the problem.