To be honest, Fernando Simon has already documented all the steps needed in ZDLRA patching, so this post is more of a reference post for me that points to the links on his blog. One thing he could change, though, is the post titles. He agrees too 😉
ZDLRA patching is broadly divided into two parts. In the first part you patch the RA library and the Grid & DB homes. The second part covers the compute node & storage cell image patches and the patches for the IB/RoCE switches. The second part is almost identical to Exadata patching, except that it is a bit more restricted in terms of the image versions you can use: only the versions certified for ZDLRA are allowed. Also, the RA library version and the Exadata image version must be compatible with each other, so if you are planning to patch only one of the two (the RA library or the image), make sure both components stay compatible. The MOS note with all these details is 1927416.1, and it should be the first place to go when you are planning to patch a ZDLRA. The steps for upgrade/patching and image patching are given in MOS note 2028931.1. Another note, 2639262.1, discusses some known issues you may face while patching. It is important to review all three notes before you plan to patch.
The RA library patching part comes in two different flavors, and this is an important distinction, so make sure you follow the right set of commands. When you are jumping between major versions, say going from 12.x to 19.x, it is called an upgrade and the commands look like racli upgrade appliance --step=1. Fernando talks about this in detail in this post.
On the other hand, when you stay within the same major version, say going from one 19.x release to a later 19.x release, it is called patching and the commands look like racli patch appliance --step=1. Fernando has discussed this in detail in this post.
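The upgrade-versus-patch decision can be captured in a tiny helper. This is just a sketch to make the distinction concrete: the function name racli_mode is hypothetical, version strings are assumed to look like "major.minor", and on a real appliance you would of course follow the step sequence in the patch README rather than a script like this.

```shell
#!/bin/bash
# Hypothetical helper: decide which racli command family applies,
# based only on whether the major version number changes.
racli_mode() {
  local from_major="${1%%.*}" to_major="${2%%.*}"
  if [ "$from_major" != "$to_major" ]; then
    echo "racli upgrade appliance"   # major-version jump => upgrade
  else
    echo "racli patch appliance"     # same major version => patch
  fi
}

racli_mode 12.2 19.1   # prints: racli upgrade appliance
racli_mode 19.1 19.2   # prints: racli patch appliance
```

Either way, the actual commands are then driven step by step with --step=1, --step=2, and so on, as Fernando's posts show.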
The Exadata bit (image & switch patching) is exactly the same as what we do on a regular Exadata. Fernando talks about this in this post.
The RA library patching bit is pretty much automated and works fine most of the time. If you hit an issue, you may find the solution/workaround documented in one of the MOS notes.
Happy patching!
I was patching an Exadata db node from 126.96.36.199.0.180506 to 188.8.131.52.0.191119. It had been more than an hour and dbnodeupdate.sh appeared to be stuck. Trying to ssh to the node was giving “connection refused” and the console had this output (some output removed for brevity):
[ 458.006444] upgrade: [642/676] (72%) installing exadata-sun-computenode-184.108.40.206.0.191119-1...
[ 459.991449] upgrade: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-iscsi-reconcile.service, pointing to /etc/systemd/system/exadata-iscsi-reconcile.service.
[ 460.011466] upgrade: Looking for unit files in (higher priority first):
[ 460.021436] upgrade: /etc/systemd/system
[ 460.028479] upgrade: /run/systemd/system
[ 460.035431] upgrade: /usr/local/lib/systemd/system
[ 460.042429] upgrade: /usr/lib/systemd/system
[ 460.049457] upgrade: Looking for SysV init scripts in:
[ 460.057474] upgrade: /etc/rc.d/init.d
[ 460.064430] upgrade: Looking for SysV rcN.d links in:
[ 460.071445] upgrade: /etc/rc.d
[ 460.076454] upgrade: Looking for unit files in (higher priority first):
[ 460.086461] upgrade: /etc/systemd/system
[ 460.093435] upgrade: /run/systemd/system
[ 460.100433] upgrade: /usr/local/lib/systemd/system
[ 460.107474] upgrade: /usr/lib/systemd/system
[ 460.114432] upgrade: Looking for SysV init scripts in:
[ 460.122455] upgrade: /etc/rc.d/init.d
[ 460.129458] upgrade: Looking for SysV rcN.d links in:
[ 460.136468] upgrade: /etc/rc.d
[ 460.141451] upgrade: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-multipathmon.service, pointing to /etc/systemd/system/exadata-multipathmon.service.
There was not much I could do, so I just waited. I also created an SR with Oracle Support, and they suggested waiting as well. After some time it started moving again and completed successfully. When the node finally came up, I found an NFS mount entry in /etc/rc.local, and that was what had caused the problem. For the second node we commented it out, and everything went smoothly. It is important to comment out all NFS entries during patching to avoid such issues. I had commented out the ones in /etc/fstab, but the one in rc.local was unexpected.
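A pre-patching sweep for NFS entries can be sketched like this. It is demonstrated against a sample file with made-up mount points so nothing real is touched; on a node you would run the same sed against /etc/fstab itself, and also grep /etc/rc.local for plain mount commands, which is exactly where mine was hiding.

```shell
#!/bin/bash
# Build a sample fstab with one NFS entry (hypothetical paths).
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/VGExaDb-LVDbSys1 / ext4 defaults 1 1
nfshost:/export/backups /backups nfs rw,hard 0 0
EOF

# Comment out any uncommented line whose filesystem type is nfs.
sed -i -E 's/^([^#].*[[:space:]]nfs[[:space:]])/#\1/' /tmp/fstab.sample

# rc.local can hide plain mount commands too -- check it as well.
grep -E 'mount.*nfs' /etc/rc.local 2>/dev/null || true

grep '^#' /tmp/fstab.sample   # prints the now-commented NFS line
```

Remember to uncomment the entries once the patching is done.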
If you have installed a one-off Ksplice fix for the kernel on Exadata, remember to uninstall it before you do a kernel upgrade, e.g. regular Exadata patching. Such fixes are kernel-version specific, so they may not work with the newer version of the kernel.
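As a sketch: with the offline Ksplice client the updates ship in a kernel-release-specific rpm, so the removal command can be composed from uname -r; with the online client you would use the uptrack tools instead. The composed command below is only printed, not executed.

```shell
#!/bin/bash
# Offline Ksplice client: the update rpm is named per kernel release,
# so derive the removal command from the running kernel.
kernel_release="$(uname -r)"
remove_cmd="yum remove -y uptrack-updates-${kernel_release}"
echo "$remove_cmd"

# Online client alternative: list, then back out all applied updates.
#   uptrack-show
#   uptrack-remove --all
```

Once the kernel has been upgraded, a new Ksplice fix built for the new kernel version can be installed if it is still needed.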
Just a quick note about a change in the way the compute nodes are patched starting from version 220.127.116.11.1. For earlier versions Oracle provided the minimal pack for patching the compute nodes. Starting with version 18.104.22.168.1, Oracle discontinued the minimal pack, and updates to the compute nodes are now done via the Unbreakable Linux Network (ULN).
Now there are three ways to update the compute nodes:
1) You have internet access on the compute nodes. In this case you can download patch 13741363, complete the one-time setup, and start the update.
2) If you don't have internet access on the compute nodes, you can use an intermediate system (one that has internet access) to create a local repository, and then point the compute nodes to that system to install the updates.
3) Oracle will also provide all future updates via a downloadable ISO image file (patch 14245540 for 22.214.171.124.1). You can download the ISO image, mount it on a local system, and point the compute nodes to that system to update the RPMs (the readme has all the details on how to do this).
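For option 3, the node-side setup essentially boils down to a yum repository file pointing at wherever the mounted ISO is served from. A minimal sketch follows; the host name and paths are hypothetical, the file is written to /tmp here so nothing real changes (on a node it would go to /etc/yum.repos.d/), and the readme of the patch remains the authoritative source for the exact repo id and update command.

```shell
#!/bin/bash
# Hypothetical setup: the ISO is mounted and exported over HTTP from a
# staging host; the compute node just needs a repo file pointing at it.
cat > /tmp/exadata_local.repo <<'EOF'
[exadata_dbserver_updates]
name=Exadata compute node updates (local ISO repository)
baseurl=http://staging-host/exadata_iso
gpgcheck=0
enabled=1
EOF

grep baseurl /tmp/exadata_local.repo   # prints: baseurl=http://staging-host/exadata_iso
# Then, per the patch readme, run the update against that repo, e.g.:
#   yum --enablerepo=exadata_dbserver_updates update
```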
Some useful links:
Metalink note 1466459.1