Just a quick note about a change in the way the compute nodes are patched, starting from version 11.2.3.1.1. For earlier versions Oracle provided the minimal pack for patching the compute nodes. Starting with version 11.2.3.1.1, Oracle has discontinued the minimal pack and updates to the compute nodes are done via the Unbreakable Linux Network (ULN).
Now there are three ways to update the compute nodes:
1) You have internet access on the compute nodes. In this case you can download patch 13741363, complete the one-time setup and start the update.
2) If the compute nodes don’t have internet access, you can pick an intermediate system (one that does have internet access) to create a local repository, and then point the compute nodes to that system to install the updates.
3) Oracle will also provide all future updates as a downloadable ISO image file (patch 14245540 for 11.2.3.1.1). You can download the ISO image, mount it on some local system and point the compute nodes to that system for updating the rpms (the readme has all the details on how to do this).
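For options 2 and 3, the rough flow looks something like the sketch below. This is only an illustration of the idea, not the steps from the patch readme: the mount point, ISO path, repository id and repo-host URL are all made-up placeholders, and the readme remains the authoritative procedure.

```shell
# Illustrative sketch only -- follow the patch readme for the real steps.
# On the intermediate/local system, mount the downloaded ISO:
mkdir -p /mnt/exadata_iso                                  # hypothetical mount point
mount -o loop /stage/exadata_updates.iso /mnt/exadata_iso  # hypothetical ISO path

# Expose the mounted content to the compute nodes (e.g. over HTTP),
# then on each compute node define a yum repository pointing at it:
cat > /etc/yum.repos.d/exadata_dbserver.repo <<'EOF'
[exadata_dbserver]
name=Exadata database server updates
baseurl=http://repo-host/exadata_iso/
gpgcheck=1
enabled=1
EOF

# Finally, update the node's rpms from that repository:
yum --enablerepo=exadata_dbserver update
```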
Some useful links:
Metalink note 1466459.1
Hybrid Columnar Compression (HCC) is an awesome new feature in Exadata that can save a lot of storage space in your environment. This whitepaper on the Oracle website explains the feature in detail. Uwe Hesse also has an excellent post on his blog showing how to use all this, including the compression levels one can achieve by making use of HCC. It is a very simple feature to use, but one needs to be aware of a few things before using HCC extensively, as otherwise all your storage calculations may go weird. Here are a few of the things to keep in mind:
- HCC works with direct path loads only. That includes CTAS, running impdp with ACCESS_METHOD=DIRECT, and direct path inserts. If you insert data using a conventional insert, it will not be HCC compressed.
- It is best suited for tables that aren’t going to be updated once loaded. There are some complications (see the next point) if DML is going to be run on HCC compressed data.
- At the block level, HCC stores data in compression units. A compression unit can be defined as a set of blocks. If some rows stored with HCC are updated, they need to be decompressed first, and in that case the database has to read the whole compression unit, not a single block. So once you update data stored with HCC, the updated rows are moved out of HCC compression. To HCC compress them again you will need to do an alter table table_name move compress for (also see Metalink note 1332853.1). So if the tables you are planning to use HCC on undergo frequent DML, HCC may not be best suited for that scenario. Not only will it add the overhead of running an alter table move statement every time updates happen, it may screw up the storage space calculations as well.
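The points above can be sketched in SQL. The table and column names here are made up, and QUERY HIGH is just one of the four HCC levels (QUERY LOW, QUERY HIGH, ARCHIVE LOW, ARCHIVE HIGH):

```sql
-- Direct path load: a CTAS gets the data HCC compressed.
CREATE TABLE sales_hcc
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- A direct path insert keeps new rows HCC compressed...
INSERT /*+ APPEND */ INTO sales_hcc SELECT * FROM sales_stage;
COMMIT;

-- ...whereas a conventional insert does not HCC compress the new rows.
INSERT INTO sales_hcc SELECT * FROM sales_stage;
COMMIT;

-- After updates have migrated rows out of HCC compression,
-- re-compress them with a table move:
ALTER TABLE sales_hcc MOVE COMPRESS FOR QUERY HIGH;
```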