Saturday 12 March 2011

Adding SWAP space and DUMP space to a ZFS-installed Solaris SPARC machine.

It is very easy to add swap space on a ZFS-installed machine, and the concept is similar to UFS.
In UFS we make use of the commands 'mkfile' and 'swap -a'; here a single command resizes the swap or dump volume. Below I have mentioned the step-by-step procedure:
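For comparison, a rough sketch of the UFS-style approach (the swap file path here is just an example of my own):

bash-3.00# mkfile 1g /export/swapfile        (create a 1 GB swap file)
bash-3.00# swap -a /export/swapfile          (add it as swap)
bash-3.00# swap -l                           (verify it shows up)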


First, check the current size of the dump volume using the below command:

1)bash-3.00# zfs get volsize zpool/dump
  NAME        PROPERTY  VALUE    SOURCE
  zpool/dump  volsize   1G       local

------------------------------------------
Now I am going to change the size of the dump volume using the below command:

2)bash-3.00# zfs set volsize=2G zpool/dump

------------------------------------------

We can check the dump volume size again and see that it has changed to 2G:

3)bash-3.00# zfs get volsize zpool/dump
  NAME        PROPERTY  VALUE    SOURCE
  zpool/dump  volsize   2G       local

------------------------------------------
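As far as I know, you can confirm which device the system will actually dump to using 'dumpadm' (with no arguments it just prints the current configuration):

bash-3.00# dumpadm                                    (show the current dump configuration)
bash-3.00# dumpadm -d /dev/zvol/dsk/zpool/dump        (re-point the dump device, if ever needed)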


Similarly for swap:

1)bash-3.00# zfs get volsize zpool/swap
  NAME        PROPERTY  VALUE    SOURCE
  zpool/swap  volsize   14.6G    local

2)bash-3.00# zfs set volsize=20G zpool/swap

3)bash-3.00# zfs get volsize zpool/swap
NAME        PROPERTY  VALUE    SOURCE
zpool/swap  volsize   20G      local
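One caveat from my experience: a swap volume that is currently in use may not pick up the new size, so the safer sequence is to remove it from swap, resize it, and add it back (a sketch, assuming the swap device is not busy at that moment):

bash-3.00# swap -l                                   (list the current swap devices)
bash-3.00# swap -d /dev/zvol/dsk/zpool/swap          (remove the zvol from swap)
bash-3.00# zfs set volsize=20G zpool/swap            (grow the volume)
bash-3.00# swap -a /dev/zvol/dsk/zpool/swap          (add it back)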


----------------------------------------------

Creating an Alternate Boot Environment for ZFS

Today I learned how to create an alternate boot environment for a ZFS-installed Solaris SPARC machine.
I followed the below steps:

Check whether any boot environments have already been created:
1)bash-3.00# lustatus
  ERROR: No boot environments are configured on this system
  ERROR: cannot determine list of all boot environment names
---------------------------------------------------------------------------------------

The errors show that no named boot environment exists yet; there is only the current, unnamed one. Now we can create
one more boot environment with the 'lucreate' command, giving the alternate
boot environment the name 's10s_u9wos_14a2' (a name of my own choice):

2)bash-3.00# lucreate -n s10s_u9wos_14a2
  
  Analyzing system configuration.
  No name for current boot environment.
  INFORMATION: The current boot environment is not named - assigning name <s10s_u9wos_14a>.
  Current boot environment is named <s10s_u9wos_14a>.
  Creating initial configuration for primary boot environment <s10s_u9wos_14a>.
  The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
  PBE configuration successful: PBE name <s10s_u9wos_14a> PBE Boot Device </dev/dsk/c1t0d0s0>.
  Comparing source boot environment <s10s_u9wos_14a> file systems with the
  file system(s) you specified for the new boot environment. Determining
  which file systems should be in the new boot environment.
  Updating boot environment description database on all BEs.
  Updating system configuration files.
  Creating configuration for boot environment <s10s_u9wos_14a2>.
  Source boot environment is <s10s_u9wos_14a>.
  Creating boot environment <s10s_u9wos_14a2>.
  Cloning file systems from boot environment <s10s_u9wos_14a> to create boot environment <s10s_u9wos_14a2>.
  Creating snapshot for <zpool/ROOT/s10s_u9wos_14a> on <zpool/ROOT/s10s_u9wos_14a@s10s_u9wos_14a2>.
  Creating clone for <zpool/ROOT/s10s_u9wos_14a@s10s_u9wos_14a2> on <zpool/ROOT/s10s_u9wos_14a2>.
  Setting canmount=noauto for </> in zone <global> on <zpool/ROOT/s10s_u9wos_14a2>.
  Creating snapshot for <zpool/ROOT/s10s_u9wos_14a/var> on <zpool/ROOT/s10s_u9wos_14a/var@s10s_u9wos_14a2>.
  Creating clone for <zpool/ROOT/s10s_u9wos_14a/var@s10s_u9wos_14a2> on <zpool/ROOT/s10s_u9wos_14a2/var>.
  Setting canmount=noauto for </var> in zone <global> on <zpool/ROOT/s10s_u9wos_14a2/var>.
  Creating dataset <zpool/ROOT/s10s_u9wos_14a2/zoneds/zonesan-s10s_u9wos_14a2> for zone <zonesan>
  Copying root of zone <zonesan>.
  Creating snapshot for <zpool/export/home/zonejos> on <zpool/export/home/zonejos@s10s_u9wos_14a2>.
  Creating clone for <zpool/export/home/zonejos@s10s_u9wos_14a2> on <zpool/export/home/zonejos-s10s_u9wos_14a2>.
  Population of boot environment <s10s_u9wos_14a2> successful.
  Creation of boot environment <s10s_u9wos_14a2> successful.
----------------------------------------------------------------------------------
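A small aside: if I remember right, instead of letting 'lucreate' auto-assign a name to the current unnamed boot environment, you can name both environments yourself with the -c flag:

bash-3.00# lucreate -c s10s_u9wos_14a -n s10s_u9wos_14a2     (name the current BE and the new BE explicitly)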

Check for the newly created boot environment:
3)bash-3.00# lustatus
  Boot Environment           Is       Active Active    Can    Copy
  Name                       Complete Now    On Reboot Delete Status
  -------------------------- -------- ------ --------- ------ ----------
  s10s_u9wos_14a             yes      yes    yes       no     -
  s10s_u9wos_14a2            yes      no     no        yes    -


-----------------------------------------------------------------------------------
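Note the 'Can Delete' column above: if the alternate boot environment is no longer needed later, it can be removed with 'ludelete' (a sketch):

bash-3.00# ludelete s10s_u9wos_14a2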

4)bash-3.00# zfs list
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
zpool                                                      22.9G   111G    97K  /zpool
zpool/ROOT                                                 5.57G   111G    21K  legacy
zpool/ROOT/s10s_u9wos_14a                                  5.47G   111G  5.38G  /
zpool/ROOT/s10s_u9wos_14a@s10s_u9wos_14a2                  78.5K      -  5.38G  -
zpool/ROOT/s10s_u9wos_14a/var                              93.0M   111G  92.9M  /var
zpool/ROOT/s10s_u9wos_14a/var@s10s_u9wos_14a2                59K      -  92.9M  -
zpool/ROOT/s10s_u9wos_14a2                                  104M   111G  5.38G  /
zpool/ROOT/s10s_u9wos_14a2/var                                 0   111G  92.9M  /var
zpool/dump                                                 2.00G   111G  2.00G  -
zpool/export                                                201M   111G    23K  /export
zpool/export/home                                           201M   111G   104M  /export/home
zpool/swap                                                 15.1G   126G    16K  -
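Notice the new BE occupies only about 104M: it is a ZFS clone of a snapshot of the original BE. You should be able to confirm the parentage with the 'origin' property (a sketch):

bash-3.00# zfs get origin zpool/ROOT/s10s_u9wos_14a2       (should point at zpool/ROOT/s10s_u9wos_14a@s10s_u9wos_14a2)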

-----------------------------------------------------------
If you have created zones, the boot environment is reflected in their datasets also.
Here zones were configured (note 'zonesan' and 'zonejos' in the listing):
5)bash-3.00# zfs list
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
zpool                                                      22.9G   111G    97K  /zpool
zpool/ROOT                                                 5.57G   111G    21K  legacy
zpool/ROOT/s10s_u9wos_14a                                  5.47G   111G  5.38G  /
zpool/ROOT/s10s_u9wos_14a@s10s_u9wos_14a2                  78.5K      -  5.38G  -
zpool/ROOT/s10s_u9wos_14a/var                              93.0M   111G  92.9M  /var
zpool/ROOT/s10s_u9wos_14a/var@s10s_u9wos_14a2                59K      -  92.9M  -
zpool/ROOT/s10s_u9wos_14a2                                  104M   111G  5.38G  /
zpool/ROOT/s10s_u9wos_14a2/var                                 0   111G  92.9M  /var
zpool/ROOT/s10s_u9wos_14a2/zoneds                           103M   111G    23K  /zoneds
zpool/ROOT/s10s_u9wos_14a2/zoneds/zonesan-s10s_u9wos_14a2   103M   111G   103M  /zoneds/zonesan-s10s_u9wos_14a2
zpool/dump                                                 2.00G   111G  2.00G  -
zpool/export                                                201M   111G    23K  /export
zpool/export/home                                           201M   111G   104M  /export/home
zpool/export/home/zonejos                                  97.0M   111G  97.0M  /export/home/zonejos
zpool/export/home/zonejos@s10s_u9wos_14a2                      0      -  97.0M  -
zpool/export/home/zonejos-s10s_u9wos_14a2                  39.5K   111G  97.0M  /export/home/zonejos-s10s_u9wos_14a2
zpool/swap                                                 15.1G   126G    16K  -

---------------------------------------------------------------
You can boot from the alternate boot environment using the 'boot -L' command at the ok prompt, or by activating it with the 'luactivate' command. After activating, it is highly recommended to restart with 'init 6' (or 'shutdown -i6') and not with the 'reboot' or 'halt' commands, which skip the steps needed to complete the switch.
6)bash-3.00# luactivate s10s_u9wos_14a2
A Live Upgrade Sync operation will be performed on startup of boot environment <s10s_u9wos_14a2>.
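At this point 'lustatus' should show s10s_u9wos_14a2 as active on reboot, and the clean way to switch is:

bash-3.00# lustatus
bash-3.00# init 6

Alternatively, you can pick the boot environment manually from the ok prompt with 'boot -L', as shown below.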
---------------------------------------------------------------
{2} ok boot -L
Boot device: /pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w500000e0129fa191,0:a  File and args: -L
1 s10s_u9wos_14a
2 s10s_u9wos_14a2
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z zpool/ROOT/s10s_u9wos_14a2
{2} ok boot -Z zpool/ROOT/s10s_u9wos_14a2
Resetting ...


Software Reset

Enabling system bus....... Done
Initializing CPUs......... Done
Initializing boot memory.. Done
Initializing OpenBoot
Probing system devices
Probing I/O buses

Sun Fire V490, No Keyboard
Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.18.8, 8192 MB memory installed, Serial #71211080.
Ethernet address 0:14:4f:3e:98:48, Host ID: 843e9848.





Rebooting with command: boot -Z zpool/ROOT/s10s_u9wos_14a2
Boot device: /pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w500000e0129fa191,0:a  File and args: -Z zpool/ROOT/s10s_u9wos_14a2
SunOS Release 5.10 Version Generic_142909-17 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
Hostname: PTEST231
Reading ZFS config: done.
Mounting ZFS filesystems: (19/19)

PTEST231 console login:
----------------------------------------------------------------------------
Now the boot environment is the newly created one. At times you may get an error that the root file system cannot be mounted. What I did first was boot into single user mode from the cdrom:
{2}ok boot cdrom -s
and then simply restart the system using the init command (please don't use the reboot or shutdown commands):
# init 6
If it still drops back to the ok prompt, we have to try another method:
1) Boot the machine to single user mode from the Solaris DVD:
 {2}ok boot cdrom -s
  
----------------------------------------------------------------------
Now mount the / file system of the earlier boot environment onto a new directory, say /mnt, and then run the 'luactivate' command from it, which activates the earlier boot environment again:
2) #zpool import zpool 
3) #zfs get mountpoint zpool/ROOT/s10s_u9wos_14a
4) #zfs get mounted zpool/ROOT/s10s_u9wos_14a
5) #zfs set mountpoint=/mnt zpool/ROOT/s10s_u9wos_14a
6) #zfs mount zpool/ROOT/s10s_u9wos_14a   (the name of my pool happens to be 'zpool', so don't get confused)
7) #/mnt/sbin/luactivate
8) #init 6
The reboot should then be successful.

Wednesday 9 March 2011

Breaking the ROOT Password on a ZFS Solaris SPARC machine

Today I learned how to reset the root password for a ZFS-installed OS on a SPARC machine. The concept is the same as in UFS except for the commands used. Below I have mentioned it in a systematic way.

I booted the system into single user mode after inserting the Solaris 10 CD:

1) {10} ok boot cdrom -s
Resetting ...


Software Reset

Enabling system bus....... Done
Initializing CPUs......... Done
Initializing boot memory.. Done
Initializing OpenBoot
Probing system devices
Probing I/O buses

Sun Fire V490, No Keyboard
Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.18.8, 32768 MB memory installed, Serial #712224356.
Ethernet address 0:14:4f:3r:9t:42, Host ID: 843e11324.

Rebooting with command: boot cdrom -s
Boot device: /pci@8,700000/ide@6/cdrom@0,0:f  File and args: -s
SunOS Release 5.10 Version Generic_142909-17 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface ce5...
Skipped interface ce5
Attempting to configure interface ce4...
Skipped interface ce4
Attempting to configure interface ce3...
Skipped interface ce3
Attempting to configure interface ce2...
Skipped interface ce2
Attempting to configure interface ce1...
Skipped interface ce1
Attempting to configure interface ce0...
Configured interface ce0
WARNING: ce0 has duplicate address 010.131.048.221 (in use by 0:14:4f:3r:9t:42); disabled
Requesting System Maintenance Mode
SINGLE USER MODE
# df -h
Filesystem             size   used  avail capacity  Mounted on
/ramdisk-root:a        197M   175M   2.3M    99%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.4G   344K   7.4G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
swap                   7.4G   592K   7.4G     1%    /tmp
/tmp/dev               7.4G   592K   7.4G     1%    /dev
fd                       0K     0K     0K     0%    /dev/fd
/devices/pci@8,700000/ide@6/sd@0,0:f
                       2.1G   2.1G     0K   100%    /cdrom
df: cannot statvfs /platform/sun4u-us3/lib/libc_psr.so.1: Operation not applicable
df: cannot statvfs /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1: Operation not applicable
swap                   7.4G     8K   7.4G     1%    /tmp/root/var/run
---------------------------------------------------------------------------------
If you need to work in vi, just export your EDITOR and TERM variables, then check which ZFS file systems are mounted:

2) # export EDITOR=vi
   # export TERM=vt100

# zfs list
no datasets available
--------------------------------------------------------------------------------
As no pools are imported yet, we have to import them; the below command shows
which pools are available for import:

3)# zpool import
  pool: zpool
    id: 12781544736217994069
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zpool       ONLINE
          c1t0d0s0  ONLINE
--------------------------------------------------------------------------------
Import the required pool; it will display the following messages.
These ZFS file systems fail to mount, but they still exist:

4)# zpool import zpool
cannot mount '/export': failed to create mountpoint
cannot mount '/export/home': failed to create mountpoint
cannot mount '/zpool': failed to create mountpoint
--------------------------------------------------------------------------------
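Those 'failed to create mountpoint' errors are harmless here: the ramdisk root is nearly full, so ZFS cannot create the mountpoint directories. If I remember correctly, an alternative is to import the pool under a temporary alternate root instead:

# zpool import -R /a zpool        (everything gets mounted under /a instead)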
Now check the mounted ZFS file systems. Notice that the /etc/shadow file we need to edit
is in zpool/ROOT/s10s_u9wos_14a, whose mountpoint is /, which is already occupied by the miniroot:

5)# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
zpool                          21.6G   112G    96K  /zpool
zpool/ROOT                     5.46G   112G    21K  legacy
zpool/ROOT/s10s_u9wos_14a      5.46G   112G  5.37G  /
zpool/ROOT/s10s_u9wos_14a/var  92.3M   112G  92.3M  /var
zpool/dump                     1.00G   112G  1.00G  -
zpool/export                     44K   112G    23K  /export
zpool/export/home                21K   112G    21K  /export/home
zpool/swap                     15.1G   127G    16K  -
---------------------------------------------------------------------------------
Check the status of that dataset (zpool/ROOT/s10s_u9wos_14a) mounted on root,
and set a new mount point:

6)# zfs get mountpoint zpool/ROOT/s10s_u9wos_14a
NAME                       PROPERTY    VALUE       SOURCE
zpool/ROOT/s10s_u9wos_14a  mountpoint  /           local
  # zfs get mounted zpool/ROOT/s10s_u9wos_14a
NAME                       PROPERTY  VALUE    SOURCE
zpool/ROOT/s10s_u9wos_14a  mounted   no       -
  # zfs set mountpoint=/mnt zpool/ROOT/s10s_u9wos_14a
  # zfs list
  NAME                            USED  AVAIL  REFER  MOUNTPOINT
  zpool                          21.6G   112G    96K  /zpool
  zpool/ROOT                     5.46G   112G    21K  legacy
  zpool/ROOT/s10s_u9wos_14a      5.46G   112G  5.37G  /mnt
  zpool/ROOT/s10s_u9wos_14a/var  92.3M   112G  92.3M  /mnt/var
  zpool/dump                     1.00G   112G  1.00G  -
  zpool/export                     44K   112G    23K  /export
  zpool/export/home                21K   112G    21K  /export/home
  zpool/swap                     15.1G   127G    16K  -
-----------------------------------------------------------------------------------
ZFS root datasets are created with canmount=noauto, so setting the mountpoint alone does not mount them; mount the file system explicitly:

7) # zfs mount zpool/ROOT/s10s_u9wos_14a
------------------------------------------------------------------------------
Go to the shadow file of the mounted boot environment using vi; first create a backup copy of
the shadow file, then clear the value in root's encrypted password field:

8)# cd /mnt
  # ls
   bin       dev       export    lib       opt       sbin      usr       zpool
   boot      devices   home      mnt       platform  system    var
   cdrom     etc       kernel    net       proc      tmp       vol
  # cd /mnt/etc
  # vi shadow
   "shadow" 17 lines, 338 characters
    root::6445::::::
    daemon:NP:6445::::::
    bin:NP:6445::::::
    sys:NP:6445::::::
    adm:NP:6445::::::
    lp:NP:6445::::::
    uucp:NP:6445::::::
    nuucp:NP:6445::::::
    smmsp:NP:6445::::::
    listen:*LK*:::::::
    gdm:*LK*:::::::
    webservd:*LK*:::::::
    postgres:NP:::::::
    svctag:*LK*:6445::::::
    nobody:*LK*:6445::::::
    noaccess:*LK*:6445::::::
    nobody4:*LK*:6445::::::
(On some systems it is difficult to use the vi editor properly over the console; in that case make use of other UNIX commands to change the file, as in the sketch below. For me full-screen editing didn't work either, so I made the change with vi commands like w, cw and dw.)
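A rough alternative without vi (my own sketch; keep a backup copy first):

# cd /mnt/etc
# cp shadow shadow.bak                                 (backup copy)
# sed 's/^root:[^:]*:/root::/' shadow.bak > shadow     (blank out root's encrypted password field)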
-------------------------------------------------------------------------
Unmount the ZFS file system and change the mount point back to /:
9)# cd /
  # zfs umount zpool/ROOT/s10s_u9wos_14a
  # zfs set mountpoint=/ zpool/ROOT/s10s_u9wos_14a
  # zfs list
  NAME                            USED  AVAIL  REFER  MOUNTPOINT
  zpool                          21.6G   112G    96K  /zpool
  zpool/ROOT                     5.46G   112G    21K  legacy
  zpool/ROOT/s10s_u9wos_14a      5.46G   112G  5.37G  /
  zpool/ROOT/s10s_u9wos_14a/var  92.3M   112G  92.3M  /var
  zpool/dump                     1.00G   112G  1.00G  -
  zpool/export                     44K   112G    23K  /export
  zpool/export/home                21K   112G    21K  /export/home
  zpool/swap                     15.1G   127G    16K  -


If you want to double-check, look at the shadow file again before unmounting and confirm that
root's encrypted password field is empty. (What I did was take the encrypted password field from a server for which I knew the root password and paste it in here.)
---------------------------------------------------------------------------------
10) # init 6

Now you can log in to the server without a password if you cleared the field in /etc/shadow, or with the password of the server from which you copied the encrypted field.
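Once you are logged in, set a proper root password straight away:

# passwd root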