An Oracle blog – Amardeep Sidhu

September 9, 2020

Implementing ZDLRA – Part 1

Filed under: ZDLRA — Sidhu @ 6:33 pm

Zero Data Loss Recovery Appliance (ZDLRA) is Oracle’s solution for database backups, and it has many advantages over the other backup solutions available in the market. This post has a brief introduction to ZDLRA and a few links for further reading. Here I want to cover a few of the things that you should keep in mind if you are planning to get a ZDLRA (RA in short). Of course, there is a lot more that goes into executing the whole plan, but these are some of the basics:

  1. The very first thing is capacity planning. Depending upon the number and sizes of the DBs that you plan to back up, you need to choose the required configuration. In most cases someone from Oracle will run this exercise for you, but you should actively participate in it by providing all the necessary information, so that the calculations can be as accurate as possible.
  2. Another thing that plays an important role in deciding the needed capacity is the retention period, i.e. the period for which you would like to keep the backups on the RA. The more days you retain, the more space you will need.
  3. Another important thing to consider is whether you are getting only one RA (for the primary or the standby site) or two of them, i.e. one for each site. The two scenarios need different configurations (including the bandwidth requirements between the primary and standby sites), so plan accordingly.
  4. One more aspect you need to consider is long term retention. It could be Oracle Cloud object storage or some tape solution.
  5. Once you have enabled DB backups to ZDLRA, you will need to stop all other backups, so plan that accordingly. Oracle provides a way to run the legacy and ZDLRA backups together, but only for a short duration, i.e. while you are migrating from the legacy backups to ZDLRA; it is not meant as a way to run two backup strategies in parallel for the long term. A rough sketch of what the switch-over looks like follows this list.
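
To make the retention and switch-over points above a bit more concrete, here is a minimal sketch of the two sides of the setup. All names (the RA scan address, wallet location, policy, user and path names) are placeholders and the exact DBMS_RA parameters should be verified against the Recovery Appliance documentation for your version; ZDLRA follows an incremental forever strategy, so after an initial level 0 only level 1 backups are sent.

# --- On the Recovery Appliance: retention (point 2) is expressed as a protection
# --- policy. RASYS_PWD is assumed to hold the RA admin password.
sqlplus -s rasys/"$RASYS_PWD"@//ra-scan:1521/zdlra <<'EOF'
BEGIN
  DBMS_RA.CREATE_PROTECTION_POLICY(
    protection_policy_name => 'GOLD_14D',
    storage_location_name  => 'DELTA',
    recovery_window_goal   => INTERVAL '14' DAY);
END;
/
EXIT;
EOF

# --- On each protected database (point 5): point RMAN at the RA through the
# --- Recovery Appliance backup module (libra.so) instead of the legacy media manager.
rman target / <<'EOF'
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS "SBT_LIBRARY=/u01/app/oracle/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/ra_wallet credential_alias=ra-scan:1521/zdlra')";
BACKUP DEVICE TYPE SBT CUMULATIVE INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
EXIT;
EOF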

In the next post, I will talk about a few more things that are important at the time of the actual implementation.

July 17, 2020

Using Secure Fabric for network isolation in KVM environments on Exadata

Filed under: Exadata — Sidhu @ 9:21 pm

Exadata storage software version 20.1 introduces a new feature called “Secure Fabric” for KVM based multi-cluster deployments (Exadata X8M). It enables network isolation between multiple tenants (i.e. RAC clusters made up of KVM VMs) and is the equivalent of InfiniBand partitioning on OVM based systems. In such scenarios customers often want the VMs of one RAC cluster to be unable to see the traffic of the other clusters’ VMs, and this feature achieves exactly that. Similar to pkeys on InfiniBand switches, it uses a double VLAN tagging system where the first tag identifies the network partition and the second tag denotes the membership level of the VM. The Exadata documentation has more details.

The minimum Exadata software version needed to enable this feature is 20.1. This release ships with RoCE switch firmware version 7.0(3)I7(8).
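
If you want to check where an existing system stands before planning for this, the relevant versions can be read off the DB nodes, cells and RoCE switches. A small sketch, assuming the usual dbs_group/cell_group files used with dcli and admin access to the switches (the switch hostname is a placeholder):

# Exadata software version on database servers and storage cells (run from one DB node)
dcli -g dbs_group  -l root 'imageinfo -ver'
dcli -g cell_group -l root 'imageinfo -ver'

# NX-OS firmware version on a RoCE switch
ssh admin@roceswitch-1 'show version | include NXOS'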

Starting June 2020, OEDA supports this configuration and the feature can be enabled in OEDA itself. To enable it, click the Advanced button under Cluster Networks and you will see the Enable Secure Fabric option.

Once this option is enabled, you will see VLANs enabled for the private network. While doing the deployment, OneCommand will take care of the configuration needed.

As per the documentation, at present there is no way to enable it on an existing system other than doing a re-deployment.

May 11, 2020

Exadata Virtualized DB node restore

Filed under: Exadata — Sidhu @ 9:31 pm

There are two common scenarios when we may need this:

  • An existing DB node has crashed and is unrecoverable because of some failure, and no backups of it are available (though some of these steps are needed even when backups exist).
  • We have an existing Exadata rack that is virtualized. Now there is a new DB node and the existing clusters need to be extended to include the VMs on this new node.

I recently faced the first scenario, where a virtualized DB node crashed and wasn’t recoverable. A bare metal DB node restore is a relatively simple procedure: we just have to reimage the node, create the needed directories, users etc. and add it back to the RAC cluster. In the virtualized case, the creation of the VMs is an additional step, which makes it slightly more complex.

So the scenario is that we have an Exadata quarter rack where DB node1 has issues and needs to be reimaged and reconfigured. There are multiple VMs (and hence multiple RAC clusters) created on the rack. As one of the DB nodes has gone down, each RAC cluster is running with one instance less. The failed node will need to be cleaned out of the RAC configuration before being added back. Here are the steps we need to follow to restore it:

  1. Reimage the node using an ISO and make it ready for creation of User Domains (aka VMs)
  2. Create the required VMs
  3. Create the required users and set up ssh with the other nodes
  4. Clear the failed node configuration from existing RAC clusters
  5. Add the newly created VMs back to the respective RAC clusters

Now let’s discuss these steps in detail.

  1. Reimage : The simplest way to reimage an Exadata node is to connect the ISO (we can download the ISO for the version we need from MOS note 888828.1) using ILOM, set the next boot device to CD-ROM and reboot/reset the node so that it boots from the CD-ROM. Most of the installation is automated and doesn’t ask any questions. Once the install is done, ipconf starts in interactive mode and asks for all the information like name servers, NTP servers, and IP addresses and hostnames for the various network interfaces. After that the node boots into the Linux partition. Since we need to virtualize the node, we switch it to OVS by running /opt/oracle.SupportTools/switch_to_ovm.sh, which reboots the node into the OVS partition. The next step is to run /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim to reclaim the space used by the bare metal partition (these commands are sketched after this list). At this moment we are done with the reimaging part. To use ILOM in a browser and be able to access the console, we need a Java enabled Windows/Linux system, and if there is a firewall between that system and the server, this link lists the ports that need to be allowed through it.
  2. VMs creation : The next step is the creation of the VMs, for which we use OneCommand. In this case we had the original XML file used for the deployment. We need to edit that configuration and remove the surviving node’s details from it, so that only the node being rebuilt is left: import the XML into OEDA, make the required changes and save the new configuration files. This needs to be done carefully, as a simple mistake like a duplicate IP may cause issues for the ASM/DBs running on the other node. Once this is done, we can download the OneCommand patch (MOS note 888828.1) and run the create VMs step of OneCommand (see the sketch after this list). As we have only one node in the XML file, it is not going to touch the existing configuration.
  3. Create users : Now we need to create the users on the newly created VMs. OneCommand’s create users step can be used here; it will create the users on all the VMs. A few things still need to be done manually. First, remove the binaries from the Grid & DB homes: since we are going to use addnode.sh to add the new nodes to the existing RAC clusters, the binaries will be copied over from an existing node. Then change the ownership of the Grid & DB home directory trees to oracle:oinstall. Also, for each VM, set up passwordless ssh with the corresponding VM (and vice versa) that is going to be part of the same cluster.
  4. Clear failed node config : Next we need to clear the failed node’s configuration from each of the RAC clusters. That is pretty much the standard stuff we do in RAC (a sketch follows this list).
  5. Add the new nodes : This again is the standard addnode procedure we follow in RAC, also sketched below.
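
For step 1, once the freshly imaged node has come up and ipconf has been answered, the virtualization switch and the space reclaim boil down to the two scripts mentioned above; imageinfo can then be used to confirm the result. A short sketch:

# Switch the freshly imaged node from bare metal Linux to OVS (it reboots into the OVS partition)
/opt/oracle.SupportTools/switch_to_ovm.sh

# After the node is back up, reclaim the disk space used by the bare metal partition
/opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim

# Confirm the image version and status after the reboots
imageinfo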
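
For steps 2 and 3, OneCommand is driven through install.sh against the edited XML. The directory and step numbers below are only examples; always list the steps first, as the numbering differs between OEDA releases:

cd /u01/onecommand/linux-x64
# List all the deployment steps for this configuration
./install.sh -cf new-node.xml -l

# Run only the required steps, e.g. Create Virtual Machine and Create Users
# (use the step numbers shown by -l for your OEDA build; 1 and 2 here are placeholders)
./install.sh -cf new-node.xml -s 1
./install.sh -cf new-node.xml -s 2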
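
For steps 4 and 5, the cleanup and addnode work is the standard Grid Infrastructure procedure. A rough sketch run from a surviving VM of the affected cluster, with all host, database and instance names as placeholders (VIP cleanup, root scripts and the DB home addnode run are omitted for brevity):

# --- Step 4: remove the failed node from the cluster
olsnodes -s -t                                      # confirm the failed node shows as Inactive
srvctl remove instance -db proddb -instance proddb2 # drop its database instance definition
crsctl delete node -n dm01vm02                      # as root, remove the node from the cluster
# update the Grid home inventory on the remaining node(s) as the grid owner
$GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={dm01vm01}" CRS=TRUE -silent

# --- Step 5: add the rebuilt VM back (as the grid owner)
$GRID_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={dm01vm02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm01vm02-vip}"
# run the root scripts it asks for on the new VM, repeat addnode.sh for the DB home,
# and then register the instance again
srvctl add instance -db proddb -instance proddb2 -node dm01vm02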

I have used the terms VM and node interchangeably here, but the context should make it clear whether I am referring to the physical node or a VM. There is another method to do all of this using OEDACLI, documented in the Exadata documentation, which automates a lot of these steps. Check this link for the details.

December 21, 2019

dbnodeupdate.sh appears to be stuck

Filed under: Exadata — Sidhu @ 6:37 am

I was patching an Exadata db node from 18.1.5.0.0.180506 to 19.3.2.0.0.191119. It had been more than an hour and dbnodeupdate.sh appeared to be stuck. Trying to ssh to the node was giving “connection refused” and the console had this output (some output removed for brevity):

[  458.006444] upgrade[8876]: [642/676] (72%) installing exadata-sun-computenode-19.3.2.0.0.191119-1...
<>
[  459.991449] upgrade[8876]: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-iscsi-reconcile.service, pointing to /etc/systemd/system/exadata-iscsi-reconcile.service.
[  460.011466] upgrade[8876]: Looking for unit files in (higher priority first):
[  460.021436] upgrade[8876]: /etc/systemd/system
[  460.028479] upgrade[8876]: /run/systemd/system
[  460.035431] upgrade[8876]: /usr/local/lib/systemd/system
[  460.042429] upgrade[8876]: /usr/lib/systemd/system
[  460.049457] upgrade[8876]: Looking for SysV init scripts in:
[  460.057474] upgrade[8876]: /etc/rc.d/init.d
[  460.064430] upgrade[8876]: Looking for SysV rcN.d links in:
[  460.071445] upgrade[8876]: /etc/rc.d
[  460.076454] upgrade[8876]: Looking for unit files in (higher priority first):
[  460.086461] upgrade[8876]: /etc/systemd/system
[  460.093435] upgrade[8876]: /run/systemd/system
[  460.100433] upgrade[8876]: /usr/local/lib/systemd/system
[  460.107474] upgrade[8876]: /usr/lib/systemd/system
[  460.114432] upgrade[8876]: Looking for SysV init scripts in:
[  460.122455] upgrade[8876]: /etc/rc.d/init.d
[  460.129458] upgrade[8876]: Looking for SysV rcN.d links in:
[  460.136468] upgrade[8876]: /etc/rc.d
[  460.141451] upgrade[8876]: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-multipathmon.service, pointing to /etc/systemd/system/exadata-multipathmon.service.

There was not much that I could do, so I just waited. I also created an SR with Oracle Support and they too suggested waiting. It started moving after some time and completed successfully. When the node finally came up, I found an NFS mount entry in /etc/rc.local, and that was what had created the problem. For the second node we commented it out and everything went smoothly. It is important to comment out all NFS entries during patching to avoid such issues; I had commented out the ones in /etc/fstab, but the one in rc.local was unexpected.
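
In shell terms the takeaway is: before starting dbnodeupdate.sh, check every place an NFS mount can hide, not just /etc/fstab. A small sketch (the patch zip name is just an example, and the prerequisite check flags should be confirmed against the current dbnodeupdate.sh documentation):

# Anything NFS related in fstab?
grep -i nfs /etc/fstab

# Anything being mounted from rc.local? (this was the culprit here)
grep -i mount /etc/rc.local /etc/rc.d/rc.local 2>/dev/null

# Comment such lines out before patching, keeping a backup copy, e.g. for fstab:
sed -i.bak '/nfs/ s/^/#/' /etc/fstab

# Optionally run the dbnodeupdate.sh prerequisite check before the real run (-v = verify only)
./dbnodeupdate.sh -u -l /u01/patches/exadata_dbnode_19.3.2.0.0.191119.zip -v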

December 5, 2019

AVDF installation error

Filed under: Database — Sidhu @ 8:18 pm

I was installing Database Firewall version 12.2.0.11.0 on a Dell x86 machine (with 5 * 500 GB local HDDs configured in RAID 10) and it got installed successfully. Later on I came to know that this version doesn’t support the host monitor functionality on Windows hosts; the latest version that supports it is 12.2.0.10.0. So it was time to download and install 12.2.0.10.0. The installation started fine but failed with an error:

Exception occured

anaconda 13.21.263 exception report

File "/usr/lib/anaconda/storage/devices.py",

OSError: [Errno 2] No such file or directory:
'/dev/sr0'

From the script it was calling, i.e. devices.py, I guessed it had something to do with the storage; maybe the installer was not able to handle something that had been created by the newer version’s installation. So I removed the RAID configuration and created it again, and after that the installation went through without any issues.


