Category Archives: Exadata

OEDA–Things to keep an eye on

If you are filling out OEDA for an Exadata deployment, there are a few things you should take care of. Most of the screens are self-explanatory, but there are some where you should focus a little more. I am running the August version of it, and the screenshots below are from that version.

  1. On the Define Customer Networks screen, the client network is the network where your application data is going to flow. It is typically bonded (for high availability), and depending on the network in your data center you have to select either 1/10 Gb copper or 10 Gb optical.

    image

  2. If you are going to use trunked VLANs for your client network, remember to enable them by clicking the Advanced button and then entering the relevant VLAN id.

    image

    Also, if it is going to be an OVM configuration, you may want to place different VMs in different VLAN segments. OEDA allows you to change the VLAN ids for individual VMs on the respective cluster screens, as below.

    image

  3. If not all the cores are licensed, remember to enable Capacity on Demand (COD) on the Identify Compute Node OS screen.

    image

  4. On the Define Clusters screen, make sure that you enter a cluster name that is unique across your environment.

    image

  5. The Cluster details screen captures some of the most important details, such as:
    1. Whether you want the flash cache in WriteBack mode instead of WriteThrough.
    2. Whether you want a role-separated install, or want to install both the GI and database binaries as the oracle user.
    3. The GI and database versions and the homes for the binaries. It is always good to leave these at the Oracle-recommended values, as that makes future maintenance easier and less painful.
    4. Disk group names, redundancy, and space allocation.
    5. The default database name and type (OLTP or DW).

      image

Of course it is important to fill in the information carefully on all the screens, but the ones above should be completed especially carefully, after gathering the required information from other teams where needed.
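On the COD point: if you want to confirm the core settings after deployment, recent Exadata compute node images expose them through DBMCLI (on older images the /opt/oracle.SupportTools/resourcecontrol script plays a similar role). A quick transcript-style sketch, run as root on a compute node:

```
# Show the active and pending core counts for this database server
dbmcli -e list dbserver attributes coreCount, pendingCoreCount

# Capacity on Demand changes go through the same interface, e.g.:
# dbmcli -e alter dbserver pendingCoreCount = <n>
```

Check your image version's documentation first; the exact tooling differs between older and newer Exadata releases.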

ORA-56841: Master Diskmon cannot connect to a CELL

Faced this error while querying v$asm_disk after adding new storage cell IPs to cellip.ora on the DB nodes of an existing cluster on Exadata. The query ends with ORA-03113: end-of-file on communication channel, and ORA-56841 is reported in $ORA_CRS_HOME/log/<hostname>/diskmon/diskmon.log. The reason in my case was that the new cell was using a different subnet for InfiniBand (IB). It was pingable from the DB nodes, but querying v$asm_disk wasn't working. Changing the IB subnet on the new cell to match the existing cells fixed the issue.
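A quick sanity check in this situation is to compare the IB address the DB node itself uses (in cellinit.ora) against the cell addresses listed in cellip.ora; they must be on the same IB subnet. A minimal sketch, using the standard Exadata file locations, with a simplifying assumption that comparing the first three octets approximates the subnet check (use the real netmask or ipcalc for an exact answer):

```shell
#!/bin/sh
# Standard Exadata config locations on a DB node (adjust if different).
CELLINIT=/etc/oracle/cell/network-config/cellinit.ora   # this node's IB IP
CELLIP=/etc/oracle/cell/network-config/cellip.ora       # storage cell IPs

# cellinit.ora holds a line like:  ipaddress1=192.168.10.1/22
node_ip=$(sed -n 's/^ipaddress1=\([0-9.]*\).*/\1/p' "$CELLINIT")

# cellip.ora holds lines like:  cell="192.168.10.3"
sed -n 's/^cell="\([0-9.]*\)".*/\1/p' "$CELLIP" |
while read -r cell_ip; do
    # Naive check: compare the first three octets.
    if [ "${node_ip%.*}" = "${cell_ip%.*}" ]; then
        echo "$cell_ip: same subnet as $node_ip"
    else
        echo "$cell_ip: DIFFERENT subnet from $node_ip  <-- suspect"
    fi
done
```

Remember that a successful ping is not sufficient: diskmon talks to the cells over the IB network, so the addressing has to line up even when basic reachability looks fine.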


Want to learn Exadata ?

Many people have asked me how they can learn Exadata. It sounds even more difficult because a lot of people don't have access to Exadata environments. So I thought I would write a small post about it.

It actually is not as difficult as it sounds. There are a lot of really good resources available from which you can learn about the Exadata architecture and the things that work differently from any non-Exadata platform. You might be able to do a lot more R&D if you have access to an Exadata environment, but don't worry if you don't; there is still plenty you can explore. So here we go:

  1. I think the best reference to start with is the book Expert Oracle Exadata by Tanel Poder, Kerry Osborne, and Randy Johnson. As a traditional book it covers the subject topic by topic from the ground up, so it makes for a fun read and will teach you a lot. The authors are already working on the second edition (see here).
  2. Next you can jump to the whitepapers on the Oracle website's Exadata page, blog posts (keep an eye on OraNA.info), and whitepapers written by other folks. There is a lot of useful material out there; you just need to Google a bit.
  3. The Exadata documentation (not public yet) should be your next stop if you have access to it. It is available as patch 10386736 on MOS.
  4. Try to attend an Oracle User Group conference if one is happening in your area. Most likely someone will be presenting on Exadata, so you can use that opportunity to learn about it, and you will get a chance to ask the presenter questions.
  5. Lastly, if you have an Exadata machine available, do all the R&D you can.

Happy New Year and Happy Learning !

Updating to Exadata 11.2.3.1.1

Just a quick note about a change in the way the compute nodes are patched, starting with version 11.2.3.1.1. For earlier versions, Oracle provided the minimal pack for patching the compute nodes. Starting with 11.2.3.1.1, Oracle has discontinued the minimal pack, and updates to the compute nodes are done via the Unbreakable Linux Network (ULN).

Now there are three ways to update the compute nodes:

1) You have internet access on the compute nodes. In this case you can download patch 13741363, complete the one-time setup, and start the update.

2) If you don't have internet access on the compute nodes, you can use an intermediate system (one that has internet access) to create a local repository, and then point the compute nodes at this system to install the updates.

3) Oracle will also provide all future updates via a downloadable ISO image file (patch 14245540 for 11.2.3.1.1). You can download the ISO image, mount it on a local system, and point the compute nodes at that system to update the RPMs (the readme has all the details on how to do this).
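As a rough sketch of option 3, the ISO route looks something like the following. The mount point, hostname, and repository id below are made-up names for illustration only; the patch readme is the authority on the exact channel names and steps:

```shell
# On the local staging system: loop-mount the downloaded ISO and serve
# it over HTTP (any web server will do). Paths here are illustrative.
mount -o loop /u01/stage/exadata_update.iso /var/www/html/yum/exadata

# On each compute node: point yum at the staging system via a repo
# file, then update. "exadata_dbserver" is an illustrative repo id.
cat > /etc/yum.repos.d/exadata.repo <<'EOF'
[exadata_dbserver]
name=Exadata database server updates (local mirror)
baseurl=http://stagehost/yum/exadata
gpgcheck=0
enabled=1
EOF

yum clean all
yum update
```

The same repo-file approach covers option 2 as well; only the baseurl changes to wherever you mirrored the ULN channel.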

Some useful links:

https://blogs.oracle.com/XPSONHA/entry/updating_exadata_compute_nodes_using

https://blogs.oracle.com/XPSONHA/entry/new_channels_for_exadata_11

Metalink note 1466459.1

DML and HCC – Exadata

Hybrid Columnar Compression (HCC) is an awesome Exadata feature that can save a lot of storage space in your environment. This whitepaper on the Oracle website explains the feature in detail, and Uwe Hesse has an excellent how-to post on his blog where you can see the compression levels you can achieve with HCC. It is a very simple feature to use, but you need to be aware of a few things before using HCC extensively, as otherwise all your storage calculations may go wrong. Here are a few things to keep in mind:

    • HCC works with direct path loads only. That includes CTAS, running impdp with ACCESS_METHOD=DIRECT, and direct path inserts. If you insert data using a normal insert, it will not be HCC compressed.
    • It is best suited for tables that aren't going to be updated once loaded. There are some complications (next point) if DML is going to be run on HCC-compressed data.
    • At the block level, HCC stores data in compression units. A compression unit is a set of blocks. If rows stored with HCC are updated, they need to be decompressed first, and in that case the database has to read the whole compression unit, not a single block. So once you update data stored with HCC, it is moved out of HCC compression. To HCC-compress it again you need to run alter table table_name move compress for ... (see also MOS note 1332853.1). So if the tables you are planning to use HCC on undergo frequent DML, HCC may not be the best fit for that scenario: not only does it add the overhead of running an alter table move statement every time updates happen, it may throw off your storage space calculations as well.
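The points above can be sketched in SQL. The table, column, and index names here are made up for illustration; QUERY HIGH is just one of the standard HCC levels (QUERY LOW, QUERY HIGH, ARCHIVE LOW, ARCHIVE HIGH):

```sql
-- CTAS is a direct path load, so the data lands HCC compressed.
CREATE TABLE sales_hcc
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- A conventional insert into the same table is NOT HCC compressed:
INSERT INTO sales_hcc SELECT * FROM sales_delta;

-- A direct path insert is:
INSERT /*+ APPEND */ INTO sales_hcc SELECT * FROM sales_delta;
COMMIT;

-- After updates have migrated rows out of HCC, re-compress with:
ALTER TABLE sales_hcc MOVE COMPRESS FOR QUERY HIGH;
-- ...and remember to rebuild any indexes invalidated by the MOVE:
ALTER INDEX sales_hcc_ix REBUILD;
```

DBMS_COMPRESSION.GET_COMPRESSION_TYPE can be used to check, row by row, which compression level rows actually ended up in.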