1. Currently I have only one voting disk. Since the DATA diskgroup is configured with External Redundancy, I cannot add any more voting disks to it.
[oragrid@node2 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 463612408dbd4f0cbf65955f3ac3ef9c (/dev/oracleasm/disks/ASMDISK5) [DATA]
Located 1 voting disk(s).
[oragrid@node2 ~]$ exit
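The Disk group column above points at DATA, and its redundancy can be confirmed from ASM as well. A quick check as the grid user (assuming the ASM environment, e.g. ORACLE_SID=+ASM2, is already set in the shell, which is not shown above):

# the Type column should show EXTERN for the DATA group
asmcmd lsdg DATA

With external redundancy ASM keeps no mirror copies, which is why the clusterware stores only a single voting file in that group.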
2. So I need to create a separate diskgroup with Normal redundancy (two-way mirroring, with each disk in its own failure group). With three failure groups, the clusterware can keep three voting disk files in the group.
3. Added a disk /dev/sdd to my VM and created logical partitions on it (a sketch of how the partitions can be created follows the fdisk listing below).
[root@node2 ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x539955a3
Device Boot Start End Blocks Id System
/dev/sdd1 2048 2097151 1047552 5 Extended
/dev/sdd5 4096 686079 340992 83 Linux
/dev/sdd6 688128 1370111 340992 83 Linux
/dev/sdd7 1372160 2054143 340992 83 Linux
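The actual fdisk dialogue was not captured, so here is a minimal non-interactive sketch using parted instead of fdisk that produces a comparable layout (one extended partition carrying three ~334MiB logical partitions); the offsets and sizes are assumptions chosen only to roughly match the listing above:

# label the disk and carve one extended + three logical partitions (destructive!)
parted -s /dev/sdd mklabel msdos
parted -s /dev/sdd mkpart extended 1MiB 100%
parted -s /dev/sdd mkpart logical 2MiB 336MiB
parted -s /dev/sdd mkpart logical 337MiB 671MiB
parted -s /dev/sdd mkpart logical 672MiB 1006MiB
partprobe /dev/sdd        # make the kernel re-read the new partition table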
4. Create the ASM disks on the new partitions.
[root@node2 ~]# oracleasm createdisk OCRVD1 /dev/sdd5
Writing disk header: done
Instantiating disk: done
[root@node2 ~]# oracleasm createdisk OCRVD2 /dev/sdd6
Writing disk header: done
Instantiating disk: done
[root@node2 ~]# oracleasm createdisk OCRVD3 /dev/sdd7
Writing disk header: done
Instantiating disk: done
[root@node2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@node2 ~]# oracleasm listdisks
ASMDISK10
ASMDISK11
ASMDISK12
ASMDISK13
ASMDISK14
ASMDISK5
ASMDISK6
ASMDISK7
ASMDISK8
ASMDISK9
OCRVD1
OCRVD2
OCRVD3
[root@node2 ~]#
[root@node2 ~]#
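The createdisk labels were written on node2 only; in a RAC cluster the other node(s) also have to pick them up. A small sketch, assuming the second node is called node1 (its name is not shown anywhere above); run this there as root:

# rescan so the freshly written ASM labels become visible on this node as well
oracleasm scandisks
oracleasm listdisks     # OCRVD1, OCRVD2 and OCRVD3 should now appear here too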
5. Create the new OCRVD diskgroup with Normal redundancy (each disk ends up in its own failure group):
   CREATE DISKGROUP OCRVD NORMAL REDUNDANCY
     DISK '/dev/oracleasm/disks/OCRVD1' SIZE 333M
     DISK '/dev/oracleasm/disks/OCRVD2' SIZE 333M
     DISK '/dev/oracleasm/disks/OCRVD3' SIZE 333M;
SQL> select NAME,STATE,TYPE,TOTAL_MB from v$asm_diskgroup;
NAME STATE TYPE TOTAL_MB
------------------------------ ----------- ------ ----------
DATA MOUNTED EXTERN 18432
OCRVD MOUNTED NORMAL 996
SQL>
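Before moving the voting files it is worth confirming that each of the three disks ended up in its own failure group, since that is what lets a normal-redundancy group hold three voting files. A sketch as the grid user (the sqlplus call and environment are assumptions, not part of the original session):

# run with the ASM environment set (e.g. ORACLE_SID=+ASM2)
sqlplus -s "/ as sysasm" <<'EOF'
col failgroup format a12
col name      format a10
col path      format a40
select d.failgroup, d.name, d.path
from   v$asm_disk d
join   v$asm_diskgroup g on g.group_number = d.group_number
where  g.name = 'OCRVD';
EOF

If the second node's ASM instance does not show OCRVD as mounted, mount it there first (for example ALTER DISKGROUP OCRVD MOUNT; or srvctl start diskgroup -diskgroup OCRVD), otherwise the voting-disk replacement in the next step can run into problems.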
6. Replace the voting disk location with the new diskgroup.
[oragrid@node2 trace]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 463612408dbd4f0cbf65955f3ac3ef9c (/dev/oracleasm/disks/ASMDISK5) [DATA]
Located 1 voting disk(s).
[oragrid@node2 trace]$ crsctl replace votedisk +OCRVD
Successful addition of voting disk 4219d4f7a49b4fb3bf9601cc89f87794.
Successful addition of voting disk 95cb9d0d75384fc3bf0a5aae2752b422.
Successful addition of voting disk 715d2fbc57e44f62bf959e61915db38a.
Successful deletion of voting disk 463612408dbd4f0cbf65955f3ac3ef9c.
Successfully replaced voting disk group with +OCRVD.
CRS-4266: Voting file(s) successfully replaced
[oragrid@node2 trace]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4219d4f7a49b4fb3bf9601cc89f87794 (/dev/oracleasm/disks/OCRVD1) [OCRVD]
2. ONLINE 95cb9d0d75384fc3bf0a5aae2752b422 (/dev/oracleasm/disks/OCRVD2) [OCRVD]
3. ONLINE 715d2fbc57e44f62bf959e61915db38a (/dev/oracleasm/disks/OCRVD3) [OCRVD]
Located 3 voting disk(s).
[oragrid@node2 trace]$
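A quick cluster-wide health check right after the replacement does not hurt; both commands below are standard clusterware checks, and every node should report CSS (and CRS/EVM) online:

crsctl check cluster -all
crsctl check crs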
7. Move the OCR to the new diskgroup (the -replace option did not work here, so the move ends up being done with -add and -delete):
[oragrid@node2 trace]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 84440
Available space (kbytes) : 407244
ID : 939418504
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
[oragrid@node2 trace]$ ocrconfig -add +OCRVD
PROT-20: Insufficient permission to proceed. Require privileged user
[oragrid@node2 trace]$ which ocrconfig
/opt/app/grid19c/bin/ocrconfig
[oragrid@node2 trace]$ exit
logout
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -add +OCRVD
[root@node2 ~]# /opt/app/grid19c/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 84440
Available space (kbytes) : 407244
ID : 939418504
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +OCRVD
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@node2 ~]#
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -replace +DATA -replacement +OCRVD
PROT-34: The Oracle Cluster Registry location to be deleted is not configured.
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -add +DATA
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -delete +OCRVD
[root@node2 ~]# /opt/app/grid19c/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 84440
Available space (kbytes) : 407244
ID : 939418504
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -replace +DATA -replacement +OCRVD
PROT-28: Cannot delete or replace the only configured Oracle Cluster Registry location.
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -add +DATA
PROT-29: The Oracle Cluster Registry location is already configured
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -add +OCRVD
[root@node2 ~]# /opt/app/grid19c/bin/ocrconfig -delete +DATA
[root@node2 ~]# /opt/app/grid19c/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 84440
Available space (kbytes) : 407244
ID : 939418504
Device/File Name : +OCRVD
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@node2 ~]#
[root@node2 ~]#
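A few closing checks; /opt/app/grid19c is the grid home already used above, and repeating ocrcheck on the remaining node(s) is implied:

# OCR integrity across the cluster (run as the grid owner)
/opt/app/grid19c/bin/cluvfy comp ocr -n all
# confirm the automatic OCR backups are still listed (run as root)
/opt/app/grid19c/bin/ocrconfig -showbackup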