Oraclue

Oracle internals, debugging and undocumented features

Grid 11.2.0.2 install nightmare

Missing cvuqdisk-1.0.9-1.rpm package:

You will find this one under the staged grid software:

/stage/cvu/cv/remenv/cvuqdisk-1.0.9-1.rpm

Install it by running: rpm -iv cvuqdisk-1.0.9-1.rpm
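
The cvuqdisk package reads the CVUQDISK_GRP environment variable to decide which group owns the utility (it defaults to dba), and it has to be installed on every node. A minimal sketch, assuming oinstall is your inventory group and the rpm has been copied to each node:

# as root, on each node
export CVUQDISK_GRP=oinstall
cd /stage/cvu/cv/remenv
rpm -iv cvuqdisk-1.0.9-1.rpm
rpm -qa | grep cvuqdisk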

PRVF-5150: Path ORCL:  is not a valid path on all nodes

Check MOS note: Device Checks for ASM Fails with PRVF-5150: Path xx is not a valid path on all nodes [ID 1210863.1]

Bug 10026970 is not fixed yet. If the ASM device passes manual verification, the warning can be ignored.
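
Manual verification here can be as simple as confirming that ASMLib sees the disks and that the grid owner can read the device. A sketch, using a hypothetical disk label DATA1:

# as root
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm querydisk DATA1
# as the grid installation owner
dd if=/dev/oracleasm/disks/DATA1 of=/dev/null bs=1024k count=1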

I had to re-run

/etc/init.d/oracleasm configure -i

on all nodes and set oracle as the owning user and oinstall as the group.
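
To confirm the reconfiguration took effect on each node, a quick check (the disk labels are whatever you created earlier):

/etc/init.d/oracleasm status
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
ls -l /dev/oracleasm/disks/    # devices should now show oracle:oinstall ownership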

Bug 9974223: Grid Infrastructure needs multicast communication on 230.0.1.0 addresses working.
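
Before letting root.sh run, you can sanity-check multicast on the private interconnect with the mcasttest.pl script from MOS note 1212703.1. A sketch, assuming the script is downloaded locally and that rac1, rac2 and eth1 are placeholders for your node names and private interface:

perl mcasttest.pl -n rac1,rac2 -i eth1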

Start OUI and install Oracle Clusterware (GRID). Follow the screens until you get the additional window asking you to run orainstRoot.sh and root.sh on all nodes. Run only orainstRoot.sh on all nodes, but DO NOT RUN root.sh yet (you will save some extra steps in applying this patch):

IMPORTANT:
Before you run root.sh, do the following:

# make OPatch from the new grid home available in PATH and verify its version
export PATH=$PATH:/u01/app/11.2.0/grid/OPatch
opatch version
# go to the directory where the patch was downloaded and check the grid home inventory
cd /u01/app/oracle/admin/patches
opatch lsinventory -detail -oh /u01/app/11.2.0/grid
# unzip the patch in place; pwd should still show the staging directory
unzip p9974223_112020_Linux-x86-64.zip
pwd
/u01/app/oracle/admin/patches
# apply patch 9974223 to the grid home on this node only, then re-check the inventory
opatch napply -local -oh /u01/app/11.2.0/grid -id 9974223
opatch lsinventory -detail -oh /u01/app/11.2.0/grid

Then run root.sh on all nodes, click OK in OUI and let it finish successfully.
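
Once root.sh has finished on every node, it is worth confirming the stack is up before continuing; for example, from the grid home:

/u01/app/11.2.0/grid/bin/crsctl check cluster -all
/u01/app/11.2.0/grid/bin/crsctl stat res -t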

Remember, you have to do the same steps for the RDBMS home!

# working directory is the unzipped patch (note the full path)
/u01/app/oracle/admin/patches/9974223
# run the pre-patch script against the RDBMS home
custom/server/9974223/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_1
# put OPatch from the RDBMS home in the PATH
cd /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/
export PATH=$PATH:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/

opatch version

Invoking OPatch 11.2.0.1.1

OPatch Version: 11.2.0.1.1

# back in the unzipped patch directory
/u01/app/oracle/admin/patches/9974223

# apply the RDBMS-home portion of patch 9974223 on this node only
opatch napply custom/server/ -local -oh /u01/app/oracle/product/11.2.0/dbhome_1 -id 9974223
# run the post-patch script, then confirm the patch is in the inventory
custom/server/9974223/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_1
opatch lsinventory -detail
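
Because -local applies the patch only to the node you are running on, the same prepatch/napply/postpatch sequence has to be repeated on the remaining nodes. A quick way to confirm a given home picked up the fix:

opatch lsinventory -oh /u01/app/oracle/product/11.2.0/dbhome_1 | grep 9974223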

11 responses to “Grid 11.2.0.2 install nightmare”

  1. Martin Decker November 10, 2010 at 8:20 am

    Hi Miladin,

    I am currently trying to find out at which step during the upgrade process from 11.2.0.1 to 11.2.0.2 the patch has to be applied. I guess before rootupgrade.sh, but I am not sure.

    Regards,
    Martin

    • Martin Decker November 10, 2010 at 8:34 am

      Sorry, I overlooked it in README.txt:

      # 3.Pre Install Instructions:
      #
      # a). After 11.2.0.2 install if you have run root.sh or rootupgrade.sh script
      # and if that may or may not have completed successfully,
      # then follow instructions from 4, 6.1, 8, 9.
      #
      # b). In case if you did not run root.sh or rootupgrade.sh after 11.2.0.2
      # install then follow only instruction 6.1, 9.

  2. oraclue November 10, 2010 at 10:42 am

    Hi Martin,

    I have updated the post with the word IMPORTANT.

    Also, do not forget that once you install the RDBMS you have to install this patch: steps 5, 6.2 and 7. I have included all the steps.

    I was able to successfully install GRID (Clusterware and ASM) and RDBMS.

    Thanks,

    Miladin

    • Martin Decker November 11, 2010 at 10:37 am

      Hi Miladin,

      I have successfully upgraded to 11.2.0.2 as well. There was a problem with rootupgrade.sh on the second node when trying to “crsctl stop crs” and it aborted. I manually stopped CRS, then retried rootupgrade.sh, which was successful.

      I am experiencing a very strange Grid Infrastructure startup issue after the successful installation when trying to reboot. If I reboot the node, the directory “/var/tmp/.oracle”, which resides on a clean tmpfs filesystem, is not created by Grid Infrastructure. I have to manually create the .oracle directory, chgrp it to oinstall and chmod it to 777 with the sticky bit (+t); only then does Grid Infrastructure start up.
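
      In other words, the manual workaround boils down to something like this, run as root before the stack starts (assuming oinstall as the inventory group):

      mkdir /var/tmp/.oracle
      chgrp oinstall /var/tmp/.oracle
      chmod 1777 /var/tmp/.oracle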

      I already have an SR open.

      Did you experience any similar issue?

      Regards,
      Martin

  3. oraclue November 11, 2010 at 2:30 pm

    Hi Martin,

    Let me try to reboot. I did not have a chance to do it yet.

    No, I do not have perl under /usr/local/bin/.

    Regards,

    Miladin

  4. oraclue November 11, 2010 at 3:45 pm

    Martin,

    No, I do not experience any problems on reboot.

    Why do you have /var/tmp/.oracle on a clean tmpfs filesystem?

    Thanks,

    Miladin

  5. Martin Decker November 11, 2010 at 4:01 pm

    Sysadmins decided to have /var/tmp as tmpfs, which means that it's not boot persistent. Although that's not standard, in my opinion GI should definitely be able to cope with it. It was not an issue with 11.2.0.1 as far as I can tell.

    Regards,
    Martin

  6. Chris Ruel January 25, 2011 at 10:14 am

    When applying patch 9974223, you describe the steps to do so after orainstRoot.sh but before root.sh. What I am unclear about is whether this must be done on just one node or on all the nodes. The readme is not clear either… at least not to me. Thanks for the great site.
    Chris..

    • Chris Ruel January 25, 2011 at 10:23 am

      You know, I was re-reading the readme, and I think I found my answer… I didn't see this before:

      # Configuration B: When each node of the cluster has its own CRS Home,
      # the patch should be applied as a rolling upgrade. All of the following
      # steps should be followed for each node. Do not patch two nodes at once.

  7. phillip June 8, 2011 at 1:47 pm

    I have a question… we have already run root.sh on node 1 and tried to run it on node 2. That is when we hit the multicast issue and root.sh failed on node 2.
    The readme says:
    # a). After 11.2.0.2 install if you have run root.sh or rootupgrade.sh script
    # and if that may or may not have completed successfully,
    # then follow instructions from 4, 6.1, 8, 9.

    Is that all we have to do, or do we need to run root.sh again?
