Saturday 10 December 2011

CREATING CLUSTER FILE SYSTEMS IN SUN CLUSTER 3.2

This article explains how to add a new cluster file system (mount point) to a Solaris box running Sun Cluster 3.2.
Cluster versions 3.1 and 3.2 are almost the same, with the exception of a few commands that have changed. The procedure is explained step by step below.

1) Run the format command and note the number of disks currently visible.


root@PEVDB061 # echo |format
Searching for disks...done




AVAILABLE DISK SELECTIONS:
       0. c5t5000C5000FCCB363d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /scsi_vhci/disk@g5000c5000fccb363
       1. c5t5000C5000FCE4E9Fd0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /scsi_vhci/disk@g5000c5000fce4e9f
       2. c5t60060E8006D3E3000000D3E30000014Cd0 <HITACHI-OPEN-V-SUN-7002 cyl 544 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014c
       3. c5t60060E8006D3E3000000D3E30000014Dd0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014d
       4. c5t60060E8006D3E3000000D3E30000014Ed0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014e
       5. c5t60060E8006D3E3000000D3E30000014Fd0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014f
       6. c5t60060E8006D3E3000000D3E300000150d0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000150
       7. c5t60060E8006D3E3000000D3E300000151d0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000151
       8. c5t60060E8006D3E3000000D3E300000152d0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000152
Specify disk (enter its number): Specify disk (enter its number):
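
If you just want a disk count to compare before and after the new LUN is mapped, a one-liner like this does it (my own shortcut, not part of the original session):

echo | format 2>/dev/null | grep -c '^ *[0-9][0-9]*\.'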



**********************************************************************************

2) After mapping the new LDEV from storage (here we are using a Hitachi VSP array) to the server, make the LUN visible on the server with the commands below. Make a note of the LDEV ID allocated to the host, as it helps you find the disk in the format output (here the LDEV ID is 0252). On Solaris 9 the controllers have to be reconfigured with "cfgadm -c configure cX" after running "devfsadm"; on Solaris 10 reconfiguring is normally unnecessary because the LUN is detected automatically. If the LUNs are still not visible, try reconfiguring the controllers anyway.

root@PEVDB061 # devfsadm -Cvv
root@PEVDB061 # devfsadm 

root@PEVDB061 # cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             fc-private   connected    configured   unknown
c1::210000008742b8eb           disk         connected    configured   unknown
c1::210000008742cecb           disk         connected    configured   unknown
c4                             scsi-bus     connected    unconfigured unknown
c5                             scsi-bus     connected    unconfigured unknown
c6                             scsi-bus     connected    unconfigured unknown
c7                             scsi-bus     connected    unconfigured unknown
c8                             fc-fabric    connected    configured   unknown
c8::50060e8006d3e320           disk         connected    configured   unusable
c9                             fc-fabric    connected    configured   unknown
c9::50060e8006d3e330           disk         connected    configured   unusable
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok
usb0/3                         unknown      empty        unconfigured ok
usb0/4                         unknown      empty        unconfigured ok
root@PEVDB061 # cfgadm -c configure c8
root@PEVDB061 # cfgadm -c configure c9
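
If there are several fabric controllers, a small loop saves typing (my own shortcut; the Bourne-shell syntax and the "fc-fabric" type match are assumptions you may need to adjust for your HBAs):

# configure every controller that cfgadm reports as fc-fabric
for ctrl in `cfgadm -al | awk '$2 == "fc-fabric" {print $1}'`
do
    cfgadm -c configure "$ctrl"
done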



root@PEVDB061 # echo |format
Searching for disks...done




AVAILABLE DISK SELECTIONS:
       0. c5t5000C5000FCCB363d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /scsi_vhci/disk@g5000c5000fccb363
       1. c5t5000C5000FCE4E9Fd0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /scsi_vhci/disk@g5000c5000fce4e9f
       2. c5t60060E8006D3E3000000D3E30000014Cd0 <HITACHI-OPEN-V-SUN-7002 cyl 544 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014c
       3. c5t60060E8006D3E3000000D3E30000014Dd0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014d
       4. c5t60060E8006D3E3000000D3E30000014Ed0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014e
       5. c5t60060E8006D3E3000000D3E30000014Fd0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e30000014f
       6. c5t60060E8006D3E3000000D3E300000150d0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000150
       7. c5t60060E8006D3E3000000D3E300000151d0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000151
       8. c5t60060E8006D3E3000000D3E300000152d0 <HITACHI-OPEN-V-SUN-5009 cyl 9064 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000152
       9. c5t60060E8006D3E3000000D3E300000252d0 <HITACHI-OPEN-V-SUN-7002 cyl 27304 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000252
Specify disk (enter its number): Specify disk (enter its number):

**********************************************************************************

3) Check from the format command whether the new LDEV is visible at the server end. If it is, label the disk with a Sun label, leaving slice 2 to represent the whole disk (this will be helpful later), allocate the entire space to slice 0, and create a new file system on the disk with the "newfs" command (which is not mandatory).

root@PEVDB061 # echo |format |grep -i 0252



c5t60060E8006D3E3000000D3E300000252d0: configured with capacity of 97.99GB
      15. c5t60060E8006D3E3000000D3E300000252d0 <HITACHI-OPEN-V-SUN-7002 cyl 27304 alt 2 hd 15 sec 512>
           /scsi_vhci/ssd@g60060e8006d3e3000000d3e300000252







root@PEVDB061 # format /dev/dsk/c5t60060E8006D3E3000000D3E300000252d0
(in the partition menu, allocate the entire space to slice 0)


root@PEVDB061 # newfs /dev/rdsk/c5t60060E8006D3E3000000D3E300000252d0s0
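
Before (or instead of) the optional newfs, it does no harm to confirm that slice 0 really spans the whole LUN; prtvtoc prints the label just written by format (a quick check of my own, not in the original session):

prtvtoc /dev/rdsk/c5t60060E8006D3E3000000D3E300000252d0s2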



**********************************************************************************

4) Now comes the interesting part: we are going to bring this disk under cluster control and mount it as a new mount point. The steps below add the new device to the cluster DID devices and update the global-devices namespace. Run them on both cluster nodes.


root@PEVDB061 # scdidadm -C
root@PEVDB061 # scdidadm -r
root@PEVDB061 # scdidadm -c



root@PEVDB061 # scgdevs
Configuring DID devices
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
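
For reference, Sun Cluster 3.2 also provides newer object-oriented commands that do the same job; to the best of my knowledge the rough equivalents are the following (verify against your release documentation before relying on them):

cldevice clear       # like scdidadm -C: remove DID entries for detached devices
cldevice refresh     # like scdidadm -r: rediscover and add new devices
cldevice check       # like scdidadm -c: consistency check of the DID configuration
cldevice populate    # like scgdevs: populate the global-devices namespace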

**********************************************************************************

5) Now check whether the device is reflected in the cluster DID devices. "scdidadm -l" lists the DID mappings of the local node, while "scdidadm -L" lists them for all nodes (see the extra check after the output below).


root@PEVDB061 # scdidadm -l
1        PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E300000252d0 /dev/did/rdsk/d1
6        PEVDB061:/dev/rdsk/c5t5000C5000FCCB363d0 /dev/did/rdsk/d6
7        PEVDB061:/dev/rdsk/c5t5000C5000FCE4E9Fd0 /dev/did/rdsk/d7
15       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E300000152d0 /dev/did/rdsk/d15
16       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E300000151d0 /dev/did/rdsk/d16
17       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E300000150d0 /dev/did/rdsk/d17
18       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E30000014Fd0 /dev/did/rdsk/d18
19       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E30000014Ed0 /dev/did/rdsk/d19
20       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E30000014Dd0 /dev/did/rdsk/d20
21       PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E30000014Cd0 /dev/did/rdsk/d21


root@PEVDB061 # scdidadm -l |grep -i 0252
1        PEVDB061:/dev/rdsk/c5t60060E8006D3E3000000D3E300000252d0 /dev/did/rdsk/d1
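
The same check from the cluster-wide view confirms that both nodes see the LUN under the same DID number (my addition; "-L" lists the mappings of all nodes):

scdidadm -L | grep -i 0252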


**********************************************************************************


6) Once the disk is visible in the DID devices of both nodes, add it to the metaset.


root@PEVDB061 # metaset


Set name = EVD-DG, Set number = 2

Host                Owner
  PEVDB061           Yes
  PEVDB064

Drive Dbase

d15   Yes
d16   Yes
d17   Yes
d18   Yes
d19   Yes
d20   Yes
root@PEVDB061 # metaset -s EVD-DG -a /dev/did/rdsk/d1

root@PEVDB061 # metaset


Set name = EVD-DG, Set number = 2

Host                Owner
  PEVDB061           Yes
  PEVDB064

Drive Dbase

d15   Yes
d16   Yes
d17   Yes
d18   Yes
d19   Yes
d20   Yes
d1    Yes
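
If the node you are working on does not currently own the diskset (its Owner column is blank), take ownership before adding drives or creating metadevices. A minimal example, using the diskset name from this article:

metaset -s EVD-DG -t     # take ownership of the EVD-DG diskset on this node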

**********************************************************************************

7) Once the disk is added to the metaset, check which metadevices already exist under the metaset that was created earlier:


root@PEVDB061 # metastat -s EVD-DG -p
EVD-DG/d100 1 1 /dev/did/rdsk/d17s0
EVD-DG/d300 1 4 /dev/did/rdsk/d16s0 /dev/did/rdsk/d18s0 /dev/did/rdsk/d19s0 /dev/did/rdsk/d20s0 -i 512b
EVD-DG/d200 2 1 /dev/did/rdsk/d17s1 \
         1 /dev/did/rdsk/d15s0

**********************************************************************************

8) Create a new metadevice named d400 from the disk that was added to the metaset, and create a new file system on it.


root@PEVDB061 # metainit -s  EVD-DG d400 1 1 /dev/did/rdsk/d1s0
EVD-DG/d400: Concat/Stripe is setup
root@PEVDB061 #metastat -s EVD-DG -p
EVD-DG/d400 1 1 /dev/did/rdsk/d1s0
EVD-DG/d100 1 1 /dev/did/rdsk/d17s0
EVD-DG/d300 1 4 /dev/did/rdsk/d16s0 /dev/did/rdsk/d18s0 /dev/did/rdsk/d19s0 /dev/did/rdsk/d20s0 -i 512b
EVD-DG/d200 2 1 /dev/did/rdsk/d17s1 \
         1 /dev/did/rdsk/d15s0



root@PEVDB061 # newfs /dev/did/rdsk/d1s0
newfs: construct a new file system /dev/did/rdsk/d1s0: (y/n)? y
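
A side note of my own, not part of the original session: since the file system is mounted through the metadevice in the following steps, it is more conventional to run newfs on the metadevice's raw device. For a simple one-to-one concat such as d400 the result should be the same on disk, but the metadevice path removes any ambiguity:

newfs /dev/md/EVD-DG/rdsk/d400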


**********************************************************************************

9) Decide the name to be given to the new mount point (here I am going to use /s1/evd/oradata02). Add the entry to the /etc/vfstab file on both the active and the passive node. This must be done without any mistake, otherwise we cannot add the mount point to the cluster resource. The entry should look like the one below (mount-at-boot stays "no" because the cluster, not the boot process, mounts it):

/dev/md/EVD-DG/dsk/d400 /dev/md/EVD-DG/rdsk/d400        /s1/evd/oradata02       ufs     2       no     logging
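
A quick sanity check of my own (assuming passwordless ssh between the nodes; otherwise just inspect the file on each node): the line must be present and identical on both nodes, or the HAStoragePlus validation in step 11 will fail.

grep oradata02 /etc/vfstab
ssh PEVDB064 grep oradata02 /etc/vfstab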


**********************************************************************************

10) Create the directory and then try mounting the new mount point. Change the owner and group of the mount point according to the user who is going to use it.


root@PEVDB061 #mkdir -p /s1/evd/oradata02

root@PEVDB061 # mount /s1/evd/oradata02
root@PEVDB061 # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d30         20G    12G   8.0G    60%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    26G   2.0M    26G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
fd                       0K     0K     0K     0%    /dev/fd
swap                    26G    18M    26G     1%    /tmp
swap                    26G   120K    26G     1%    /var/run
/dev/md/dsk/d33         88G   1.6G    86G     2%    /app
/dev/md/dsk/d32        9.9G   458M   9.3G     5%    /export/home
/dev/md/dsk/d34        477M   3.8M   426M     1%    /global/.devices/node@1
/dev/md/EVD-DG/dsk/d200
                        50G    11G    38G    22%    /s1/evd/oraarch
/dev/md/EVD-DG/dsk/d100
                        16G   6.4G   9.2G    41%    /s1/evd/oracle
/dev/md/EVD-DG/dsk/d300
                       131G   102G    27G    79%    /s1/evd/oradata01
/dev/md/dsk/d35        485M   3.9M   432M     1%    /global/.devices/node@2
10.1.18.49:/export/home/Dcaccess/SWAT/monitor_data
                       5.3G   2.9G   2.4G    55%    /swat
/dev/md/EVD-DG/dsk/d400
                        98G   100M    97G     1%    /s1/evd/oradata02


**********************************************************************************

11) This server runs two databases, and now comes the last and most important part: adding the mount point to the cluster HAStoragePlus (hasp) resource. This is done so that all the database mount points fail over together from one node to the other. It will only work if the vfstab entries are correct on both servers. The commands are shown below.

root@PEVDB061 # scstat -g


-- Resource Groups and Resources --


            Group Name     Resources
            ----------     ---------
 Resources: pevd-rg        PEVD pevd-hasp-rs pevd-db-rs pevd-lsnr-rs bor3-bo-rs




-- Resource Groups --


            Group Name     Node Name                State          Suspended
            ----------     ---------                -----          ---------
     Group: pevd-rg        PEVDB061                 Online         No
     Group: pevd-rg        PEVDB064                 Offline        No




-- Resources --


            Resource Name  Node Name                State          Status Message
            -------------  ---------                -----          --------------
  Resource: PEVD           PEVDB061                 Online         Online - LogicalHostname online.
  Resource: PEVD           PEVDB064                 Offline        Offline


  Resource: pevd-hasp-rs   PEVDB061                 Online        Online
  Resource: pevd-hasp-rs   PEVDB064                 Offline        Offline


  Resource: pevd-db-rs     PEVDB061                 Online        Online
  Resource: pevd-db-rs     PEVDB064                 Offline        Offline


  Resource: pevd-lsnr-rs   PEVDB061                 Online        Online
  Resource: pevd-lsnr-rs   PEVDB064                 Offline        Offline


  Resource: bor3-bo-rs     PEVDB061                 Online        Online
  Resource: bor3-bo-rs     PEVDB064                 Offline        Offline 




root@PEVDB061 #/usr/cluster/bin/clresource set -p FilesystemMountPoints=/s1/evd/oraarch,/s1/evd/oracle,/s1/evd/oradata01,/s1/evd/oradata02 pevd-hasp-rs
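
After setting the property it is worth confirming that it took effect and, when a maintenance window comes, testing a switchover. These are the standard Sun Cluster 3.2 commands for that (a sketch; the 3.1-style equivalent of the switch is "scswitch -z -g pevd-rg -h PEVDB064"):

# confirm the HAStoragePlus resource now lists the new mount point
/usr/cluster/bin/clresource show -p FilesystemMountPoints pevd-hasp-rs

# during the maintenance window, switch the resource group to the passive node and back
/usr/cluster/bin/clresourcegroup switch -n PEVDB064 pevd-rg
/usr/cluster/bin/clresourcegroup switch -n PEVDB061 pevd-rg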




**********************************************************************************

It's done, baby... simple. Now, when you get downtime on this server or when some maintenance activity comes up, try switching the resource group from the active to the passive server. It will work.

The cluster activities can also be done from the GUI, but I am the sort of person who hates the little mouse... and loves to work with the keyboard.