Wednesday 9 March 2011

Breaking the ROOT Password on a ZFS Solaris SPARC Machine





Today I learned how to reset the root password on a SPARC machine whose OS is installed on ZFS. The concept is the same as with UFS; only the commands differ. Below I have written it up step by step.
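For comparison, the classic UFS procedure (assuming the root slice is c0t0d0s0; adjust the device name for your system) is simply to mount the root slice from the miniroot, blank out root's encrypted password field in /a/etc/shadow, and reboot:

# mount /dev/dsk/c0t0d0s0 /a
# TERM=vt100; export TERM
# vi /a/etc/shadow
# cd /; umount /a
# init 6

With a ZFS root the idea is the same, except the pool has to be imported and the root dataset mounted by hand first.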

I booted the system into single-user mode after inserting the Solaris 10 CD.





1)# boot cdrom -s
{10} ok boot cdrom -s
Resetting ...


Software Reset

Enabling system bus....... Done
Initializing CPUs......... Done
Initializing boot memory.. Done
Initializing OpenBoot
Probing system devices
Probing I/O buses

Sun Fire V490, No Keyboard
Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.18.8, 32768 MB memory installed, Serial #712224356.
Ethernet address 0:14:4f:3r:9t:42, Host ID: 843e11324.

Rebooting with command: boot cdrom -s
Boot device: /pci@8,700000/ide@6/cdrom@0,0:f  File and args: -s
SunOS Release 5.10 Version Generic_142909-17 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface ce5...
Skipped interface ce5
Attempting to configure interface ce4...
Skipped interface ce4
Attempting to configure interface ce3...
Skipped interface ce3
Attempting to configure interface ce2...
Skipped interface ce2
Attempting to configure interface ce1...
Skipped interface ce1
Attempting to configure interface ce0...
Configured interface ce0
WARNING: ce0 has duplicate address 010.131.048.221 (in use by 0:14:4f:3r:9t:42); disabled
Requesting System Maintenance Mode
SINGLE USER MODE
# df -h
Filesystem             size   used  avail capacity  Mounted on
/ramdisk-root:a        197M   175M   2.3M    99%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.4G   344K   7.4G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
swap                   7.4G   592K   7.4G     1%    /tmp
/tmp/dev               7.4G   592K   7.4G     1%    /dev
fd                       0K     0K     0K     0%    /dev/fd
/devices/pci@8,700000/ide@6/sd@0,0:f
                       2.1G   2.1G     0K   100%    /cdrom
df: cannot statvfs /platform/sun4u-us3/lib/libc_psr.so.1: Operation not applicable
df: cannot statvfs /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1: Operation not applicable
swap                   7.4G     8K   7.4G     1%    /tmp/root/var/run
---------------------------------------------------------------------------------
If you need to work in vi, export your EDITOR and TERM variables, then check whether any ZFS file systems are mounted:

2)# export EDITOR=vi
  # export TERM=vt100

# zfs list
no datasets available
--------------------------------------------------------------------------------
Since no datasets are visible, the pool has to be imported. Running zpool import with no arguments lists the pools that are available for import:

3)# zpool import
  pool: zpool
    id: 12781544736217994069
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zpool       ONLINE
          c1t0d0s0  ONLINE
--------------------------------------------------------------------------------
Import the required pool; it will display the messages below. These ZFS file systems do not get mounted, but they do exist:

4)# zpool import zpool
cannot mount '/export': failed to create mountpoint
cannot mount '/export/home': failed to create mountpoint
cannot mount '/zpool': failed to create mountpoint
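These "failed to create mountpoint" messages are harmless here: /export, /export/home and /zpool cannot be created on the read-only miniroot, and we only need the root dataset anyway. As an alternative approach (similar to what one of the comments below suggests), the pool could be imported under a temporary alternate root, which keeps every mountpoint relative to /mnt for the life of the import and avoids changing the mountpoint property at all:

# zpool import -R /mnt zpool

The root dataset would still have to be mounted by hand afterwards, since it is not mounted automatically.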
--------------------------------------------------------------------------------
Now check which ZFS file systems are mounted. Notice that the /etc/shadow file we need to edit lives in zpool/ROOT/s10s_u9wos_14a, whose mountpoint is /, and / is already in use by the miniroot:

5)# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
zpool                          21.6G   112G    96K  /zpool
zpool/ROOT                     5.46G   112G    21K  legacy
zpool/ROOT/s10s_u9wos_14a      5.46G   112G  5.37G  /
zpool/ROOT/s10s_u9wos_14a/var  92.3M   112G  92.3M  /var
zpool/dump                     1.00G   112G  1.00G  -
zpool/export                     44K   112G    23K  /export
zpool/export/home                21K   112G    21K  /export/home
zpool/swap                     15.1G   127G    16K  -
---------------------------------------------------------------------------------
Check the mountpoint of that dataset (zpool/ROOT/s10s_u9wos_14a), confirm it is not mounted, and point its mountpoint at /mnt instead:

6)# zfs get mountpoint zpool/ROOT/s10s_u9wos_14a
NAME                       PROPERTY    VALUE       SOURCE
zpool/ROOT/s10s_u9wos_14a  mountpoint  /           local
  # zfs get mounted zpool/ROOT/s10s_u9wos_14a
NAME                       PROPERTY  VALUE    SOURCE
zpool/ROOT/s10s_u9wos_14a  mounted   no       -
  # zfs set mountpoint=/mnt zpool/ROOT/s10s_u9wos_14a
  # zfs list
  NAME                            USED  AVAIL  REFER  MOUNTPOINT
  zpool                          21.6G   112G    96K  /zpool
  zpool/ROOT                     5.46G   112G    21K  legacy
  zpool/ROOT/s10s_u9wos_14a      5.46G   112G  5.37G  /mnt
  zpool/ROOT/s10s_u9wos_14a/var  92.3M   112G  92.3M  /mnt/var
  zpool/dump                     1.00G   112G  1.00G  -
  zpool/export                     44K   112G    23K  /export
  zpool/export/home                21K   112G    21K  /export/home
  zpool/swap                     15.1G   127G    16K  -
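-----------------------------------------------------------------------------------
Setting the mountpoint does not mount the dataset by itself (notice that "mounted" was still "no" above). Assuming the same dataset name, the step that belongs here is to mount it explicitly so its contents appear under /mnt:

7)# zfs mount zpool/ROOT/s10s_u9wos_14a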
------------------------------------------------------------------------------
Go to the shadow file under /mnt/etc with vi, make a copy of it first, and clear the value in the encrypted password field for root:

8)# cd /mnt
  # ls
   bin       dev       export    lib       opt       sbin      usr       zpool
   boot      devices   home      mnt       platform  system    var
   cdrom     etc       kernel    net       proc      tmp       vol
  # cd /mnt/etc
  # vi shadow
   "shadow" 17 lines, 338 characters
    root::6445::::::
    daemon:NP:6445::::::
    bin:NP:6445::::::
    sys:NP:6445::::::
    adm:NP:6445::::::
    lp:NP:6445::::::
    uucp:NP:6445::::::
    nuucp:NP:6445::::::
    smmsp:NP:6445::::::
    listen:*LK*:::::::
    gdm:*LK*:::::::
    webservd:*LK*:::::::
    postgres:NP:::::::
    svctag:*LK*:6445::::::
    nobody:*LK*:6445::::::
    noaccess:*LK*:6445::::::
    nobody4:*LK*:6445::::::
(On some systems vi does not work well from this console; in that case use other Unix commands to edit the file. It did not work cleanly for me either, so I made the changes with vi commands such as w, cw and dw.)
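If vi is unusable, a non-interactive way to take a backup and blank out root's password field (assuming the boot environment is mounted on /mnt as above; the shadow.orig name is just an example) would be something like:

# cp /mnt/etc/shadow /mnt/etc/shadow.orig
# sed 's/^root:[^:]*:/root::/' /mnt/etc/shadow.orig > /mnt/etc/shadow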
-------------------------------------------------------------------------
Unmount the ZFS file system and change the mount point back to /:
9)# cd /
  # zfs umount zpool/ROOT/s10s_u9wos_14a
  # zfs set mountpoint=/ zpool/ROOT/s10s_u9wos_14a
  # zfs list
  NAME                            USED  AVAIL  REFER  MOUNTPOINT
  zpool                          21.6G   112G    96K  /zpool
  zpool/ROOT                     5.46G   112G    21K  legacy
  zpool/ROOT/s10s_u9wos_14a      5.46G   112G  5.37G  /
  zpool/ROOT/s10s_u9wos_14a/var  92.3M   112G  92.3M  /var
  zpool/dump                     1.00G   112G  1.00G  -
  zpool/export                     44K   112G    23K  /export
  zpool/export/home                21K   112G    21K  /export/home
  zpool/swap                     15.1G   127G    16K  -


If you want to double-check before unmounting, open the shadow file again and confirm the entry in the encrypted password field really is gone. (What I actually did, rather than leaving the field blank, was to copy the encrypted field from a server whose root password I knew and paste it in here.)
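For example, a quick sanity check while the boot environment is still mounted on /mnt would be:

# grep '^root:' /mnt/etc/shadow
root::6445::::::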
---------------------------------------------------------------------------------
10)# init 6

Now you can log in to the server without a password if you deleted the entry in /etc/shadow, or with the password of the server whose encrypted field you pasted into /etc/shadow.
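Once you are logged in, it is a good idea to set a proper root password straight away:

# passwd root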

Comments:

  1. There is a simple way to do this:
    SPARC: How to Boot to a ZFS Root Environment to Recover From a Lost Password or Similar Problem

    Boot the system in failsafe mode.

    ok boot -F failsafe

    When prompted, mount the ZFS BE on /a.

    .
    .
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    Starting shell.

    Become superuser.

    Change to the /a/etc directory.

    # cd /a/etc

    Correct the passwd or shadow file.

    # vi passwd

    Reboot the system.

    # init 6

  2. Instead of setting and re-setting the mountpoint, use a temporary mountpoint with:

    zfs mount -o mountpoint=/mnt rpool/...

    Otherwise a great article, you just saved my day! :)

  3. Stupid question, but I'm wondering where the CD-ROM is mounted. UFS auto-mounts it under /cdrom; I don't see ZFS mounting it.

  4. When you are booting from a CD-ROM into single-user mode, why would you need to mount the CD-ROM again?
