Just a quick note about a change in the way the compute nodes are patched starting from version 18.104.22.168.1. For earlier versions Oracle provided the minimal pack for patching the compute nodes. Starting with version 22.214.171.124.1, Oracle has discontinued the minimal pack and the updates to the compute nodes are done via the Unbreakable Linux Network (ULN).
Now there are three ways to update the compute nodes:
1) You have internet access on the compute nodes. In this case you can download patch 13741363, complete the one-time setup and start the update.
2) In case you don’t have internet access on the compute nodes, you can use some intermediate system (one that has internet access) to create a local repository and then point the compute nodes to that system to install the updates.
3) Oracle will also provide all the future updates via a downloadable ISO image file (patch 14245540 for 126.96.36.199.1). You can download the ISO image, mount it on some local system and point the compute nodes to that system for updating the rpms (the readme has all the details on how to do this; a rough sketch of the repository definition on a compute node follows this list).
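In options 2 and 3 the compute nodes end up pointing at a yum repository served by the local/intermediate system. Just as an illustration (the repository id, name and URL below are made up; the actual values come from the readme of the patch you are using), the repository definition on a compute node would be a file under /etc/yum.repos.d/ along these lines:

[exadata_dbserver_local]
name=Local Exadata database server updates (example only)
baseurl=http://local-repo-host/yum/exadata_dbserver/
gpgcheck=0
enabled=1

After that the usual yum commands (yum repolist, yum update) drive the update, as described in the readme.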
Some useful links:
Metalink note 1466459.1
There was an interesting issue at one of the customer sites. A few tables in the database were altered and the dependent objects became invalid, but attempts to compile the objects, either using utlrp.sql or manually, kept failing. In every case the same error was returned:
SQL> alter function SCOTT.SOME_FUNCTION compile;
alter function SCOTT.SOME_FUNCTION compile
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at line 27

SQL>
At first look it appeared to be some issue with the dictionary, as the error was the same for every object (be it a view, a function or a package).
Everybody was trying to compile the invalid objects and, surprisingly, a few VIEWs that would not compile from SQL*Plus got compiled from Toad! But that didn’t explain anything; in fact it was more confusing.
Finally I enabled errorstack for event 1422 and tried to compile a view. Here is the relevant content from the trace file:
----- Error Stack Dump -----
ORA-01422: exact fetch returns more than requested number of rows
----- Current SQL Statement for this session (sql_id=7kb01v7t6s054) -----
SELECT SQL_TEXT FROM V$OPEN_CURSOR VOC, V$SESSION VS WHERE VOC.SADDR = VS.SADDR AND AUDSID=USERENV('sessionid') AND UPPER(SQL_TEXT) LIKE 'ALTER%'
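For reference, the errorstack dump was produced by setting the event at session level and re-running one of the failing compiles, along these lines (a sketch; the errorstack level used is a matter of choice):

alter session set events '1422 trace name errorstack level 3';
alter function SCOTT.SOME_FUNCTION compile;
alter session set events '1422 trace name errorstack off';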
I took that SELECT on V$OPEN_CURSOR to be some internal system SQL and started searching in that direction, which obviously was of no use.
In the meantime another guy almost shouted: “oh, there is a trigger to capture DDL operations in the database; it must be that”. And indeed it was. Here is the code that was causing the problem:
select sql_text
  into vsql_text
  from v$open_cursor voc, v$session vs
 where voc.saddr = vs.saddr
   and audsid = userenv('sessionid')
   and upper(sql_text) like 'ALTER%';
V$OPEN_CURSOR was returning multiple rows for the session, hence the ORA-01422!
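The fix, of course, belongs in the trigger and not in the dictionary. One defensive rewrite (just a sketch, not the code that was actually deployed) is to make sure the lookup can never return more than one row:

select sql_text
  into vsql_text
  from (select voc.sql_text
          from v$open_cursor voc, v$session vs
         where voc.saddr = vs.saddr
           and vs.audsid = userenv('sessionid')
           and upper(voc.sql_text) like 'ALTER%')
 where rownum = 1;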
The moral is that errorstack traces do tell a lot (if, of course, you listen carefully).
Sometimes you may need to run GoldenGate on a machine different from the one that hosts the database. It is very much possible, but a couple of restrictions apply: first, the endian order of both systems has to be the same, and second, the bit width has to be the same. For example, it is not possible to run GoldenGate on a 32-bit system to read from a database that runs on a 64-bit platform. Assuming the environment satisfies these two conditions, we can use the LOGSOURCE option of TRANLOGOPTIONS to achieve this.
Here we run GG on host goldengate1 (192.168.0.109) and the database from which we want to capture the changes runs on host goldengate3 (192.168.0.111). Both systems run 188.8.131.52 on RHEL 5.5. On goldengate3 the redo logs are in the mount point /home, which has been NFS mounted on goldengate1 as /home_gg3.
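For completeness, the NFS mount on goldengate1 could be created with something along these lines (a sketch only; the mount options should follow your platform recommendations for reading redo over NFS):

mount -t nfs 192.168.0.111:/home /home_gg3

The df output on goldengate1 confirms the mount: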
Filesystem            1K-blocks      Used Available Use% Mounted on
192.168.0.111:/home    12184800   7962496   3593376  69% /home_gg3
The Extract parameters are as follows:
EXTRACT ERMT01
USERID ggadmin@orcl3, PASSWORD ggadmin
EXTTRAIL ./dirdat/er
TRANLOGOPTIONS LOGSOURCE LINUX, PATHMAP /home/oracle/app/oracle/oradata/orcl /home_gg3/oracle/app/oracle/oradata/orcl, PATHMAP /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog /home_gg3/oracle/app/oracle/flash_recovery_area/ORCL/archivelog
TABLE HR.*;
(The text in the line starting with TRANLOGOPTIONS is a single line.)
So using PATHMAP we can make GG aware of the actual location of the redo logs & archive logs on the remote server and the mapped location on the system where GG is running (it is somewhat like the db_file_name_convert parameter in Data Guard).
We fire some DMLs on the source database and then run the stats command for the Extract:
GGSCI (goldengate1) 93> stats ermt01 totalsonly *

Sending STATS request to EXTRACT ERMT01 ...

Start of Statistics at 2012-05-26 05:17:05.

Output to ./dirdat/er:

Cumulative totals for specified table(s):

*** Total statistics since 2012-05-26 04:51:10 ***
        Total inserts                    1.00
        Total updates                    0.00
        Total deletes                    1.00
        Total discards                   0.00
        Total operations                 2.00
.
.
.
End of Statistics.

GGSCI (goldengate1) 94>
For more details have a look at the GG reference guide (Page 402).
Just a quick note/post about the significance of the COMPRESS and TCPBUFSIZE parameters in the performance of a GoldenGate Extract pump process. COMPRESS compresses the outgoing blocks, which helps in better utilization of the bandwidth from source to target; GG uncompresses the blocks before writing them to the remote trail file on the target. Compression ratios of 4:1 or better can be achieved. Of course, use of COMPRESS may result in increased CPU usage on both sides.
TCPBUFSIZE controls the size of the TCP socket buffer that the Extract is going to use. If the bandwidth allows, it is a good idea to send larger packets, so depending upon the available bandwidth one can experiment with the values of TCPBUFSIZE. At one of the client sites I saw a great increase in performance after setting TCPBUFSIZE: a trail file (10 MB in size) that was taking almost a minute to transfer started getting through in a few seconds after setting this parameter. The documentation (http://docs.oracle.com/cd/E35209_01/doc.1121/e29399.pdf, page 313) provides the method to calculate the optimum value of TCPBUFSIZE for your environment.
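Roughly, the idea is to size the buffer around the bandwidth-delay product of the link between source and target. A made-up illustration (do check the exact procedure in the documentation linked above against your own ping times and bandwidth):

round-trip time (from ping)          = 80 ms = 0.08 s
bandwidth                            = 100 Mbit/s = 12,500,000 bytes/s
buffer size = 12,500,000 * 0.08      = 1,000,000 bytes

So something around TCPBUFSIZE 1000000 would be a reasonable starting point for such a link.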
While using TCPBUFSIZE, a value for TCPFLUSHBYTES (at least equal to the value of TCPBUFSIZE) also needs to be set. It is the buffer that collects the data that is going to be transferred to the target.
These parameters can be used like the following:
rmthost, mgrport, compress, tcpbufsize 10000, tcpflushbytes 10000
Also see the metalink note 1071892.1.
Hybrid Columnar Compression (HCC) is an awesome new feature in Exadata that helps in saving a lot of storage space in your environment. This whitepaper on the Oracle website explains the feature in detail. Also, Uwe Hesse has an excellent how-to post on his blog where you can see the compression levels one can achieve by making use of HCC. It is a very simple feature to use, but one needs to be aware of a few things before using HCC extensively, as otherwise all your storage calculations may go weird. Here are a few things to keep in mind:
- HCC works with direct path loads only; that includes CTAS, running impdp with ACCESS_METHOD=DIRECT, and direct path inserts. If you insert data using a conventional insert, it will not be HCC compressed.
- It is best suited for tables that are not going to be updated once loaded. There are complications (see the next point) if DML is going to be run on HCC compressed data.
- At the block level HCC stores data in compression units. A compression unit can be defined as a set of blocks. Now if some rows stored with HCC are updated, they need to be decompressed first; also in that case the database needs to read the whole compression unit, not a single block. So once you update data stored with HCC, the updated rows are moved out of HCC compression. To HCC compress them again you will need to do an alter table table_name move compress for … (also see Metalink note 1332853.1; a sketch follows this list). So if the tables you are planning to use HCC on undergo frequent DML, HCC may not be best suited for that scenario. Not only will it add the overhead of running an alter table move every time some updates happen, it may screw up the storage space calculations as well.
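To illustrate the points above, here is a minimal sketch (the table names sales_hcc and sales_stage are made up) of loading data with HCC via direct path operations and re-compressing after updates:

-- create the table HCC compressed and load it with a direct path operation (CTAS)
create table sales_hcc compress for query high
as select * from sales_stage;

-- later loads must also be direct path (e.g. append hint) to get HCC compressed
insert /*+ append */ into sales_hcc select * from sales_stage;
commit;

-- after updates have pulled rows out of HCC compression, re-compress the table
alter table sales_hcc move compress for query high;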