A customer is using an Exadata X8M-2 machine with multiple VMs (and hence multiple clusters). I was working on adding a new storage cell to the configuration. After creating the griddisks on the new cell and updating cellip.ora on all the VMs, I noticed that none of the clusters could see the new griddisks. I checked the usual suspects: whether asm_diskstring was set properly and whether the private network subnet mask on the new cell matched the existing cells. All looked good. While searching for the issue, I stumbled upon some references to ASM-scoped security, and a check on one of the existing cells confirmed that this was indeed the problem: the existing cells had it enabled while the new one didn't. Running this command on an existing cell shows the configured keys:
cellcli -e list key detail
name:
key: c25a62472a160e28bf15a29c162f1d74
type: CELL
name: cluster1
key: fa292e11b31b210c4b7a24c5f1bb4d32
type: ASMCLUSTER
name: cluster2
key: b67d5587fe728118af47c57ab8da650a
type: ASMCLUSTER
We need to enable ASM-scoped security on the new cell as well. Three things need to be done: copy /etc/oracle/cell/network-config/cellkey.ora from an existing cell to the new cell, assign the key to the cell, and then assign keys to the different ASM clusters. We can use these commands to do it:
cellcli -e ASSIGN KEY FOR CELL 'c25a62472a160e28bf15a29c162f1d74'
cellcli -e ASSIGN KEY FOR ASMCLUSTER 'cluster1'='fa292e11b31b210c4b7a24c5f1bb4d32';
cellcli -e ASSIGN KEY FOR ASMCLUSTER 'cluster2'='b67d5587fe728118af47c57ab8da650a';
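For completeness, the copy itself is just a straight file copy; run on the new cell it would look something like this (the cell hostname cel01 here is made up):
scp cel01:/etc/oracle/cell/network-config/cellkey.ora /etc/oracle/cell/network-config/
Once the three ASSIGN KEY commands above have been run, the new cell should report the same CELL and ASMCLUSTER keys as the existing cells, which can be checked with the same command we used earlier:
cellcli -e list key detail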
Once this is done, we need to tag the griddisks for the appropriate ASM clusters. If the griddisks aren't created yet, we can use this command to do it:
cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX=sales, size=75G, availableTo='cluster1'
If the griddisks are already created, we can use the alter command to make this change:
cellcli -e alter griddisk griddisk0,griddisk1,.....griddisk11 availableTo='cluster1';
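Either way, it is worth confirming on the cell that the assignment took effect; something along these lines should show which ASM cluster each griddisk is available to (availableTo is the same attribute we just set, but verify the exact syntax on your cellcli version):
cellcli -e list griddisk attributes name, availableTo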
Once this is done, we should be able to see the new griddisks as CANDIDATE in v$asm_disk.
I am also about to add 4 X9M cells to an existing X8M rack (2 DB + 9 cells). Oracle has added the cells to the existing rack and done the cabling, but the cells are powered down. They have just enabled ILOM access to the first new cell node. How do I take it from here?
1. Do I need to use OEDA to create xml files for the 4 new cell nodes?
2. How do I update the IP addresses of the cells before adding them to the cluster? Do I need to run OEDA install.sh?
3. If I do need OEDA, should I enter info about only the new cells, or do I need to enter the entire rack (2 DB + 9 cells + 4 new cells)? Won't it cause issues if I run install.sh with all this info, as the cluster is already configured?
If you have any documents or a link that explains this procedure, that would be great. Thanks!
1) Yes, you will need to select the storage expansion rack option in OEDA. You can enter 0 for the number of DB nodes and 4 for the number of storage nodes.
2) You will need to make these changes manually (there is a rough sketch of that step after this list); install.sh will not be used here.
3) You don't need to run install.sh for this. The storage expansion part is mostly handled manually.
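As a rough sketch of the manual network step (from memory, so please verify against the Exadata documentation for your image before relying on it): the management and RDMA network settings of a new storage cell are normally configured from its console by running the ipconf utility, which prompts for the IP addresses, DNS and NTP settings interactively:
# /opt/oracle.cellos/ipconf
After that, the new cell IPs are added to cellip.ora on the database nodes so ASM can discover the griddisks, as described in the first part of this post.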
I am not sure if this information exists in a consolidated form in one place, but I am sure there are many blog posts and MOS documents describing this scenario.