Wednesday 9 October 2013

Solaris UFS to ZFS

By Kunal Raykar

Requirement:
We need a physical disk that matches the size of the current root disk. If you don't have a spare disk, you can detach the current mirror disk and use it for the ZFS conversion.
Assumptions:
Old disk: c1t0d0
New disk: c1t1d0
The new disk should be formatted with an SMI label, with all sectors allocated to slice s0. An EFI label is not supported for the root pool.

bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number):


NOTE: The old disk c1t0d0 has a UFS file system mounted on it and also hosts a ZFS pool; the migration must be performed with zero data loss.
The disk should be labelled SMI (Sun Microsystems Inc., i.e. VTOC) rather than EFI (Extensible Firmware Interface).
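Before converting, you can inspect how the spare disk is currently labelled (an optional check, not part of the original procedure; it assumes the new disk is c1t1d0):

bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2

On an SMI (VTOC) labelled disk the header of this output reports the geometry in cylinders; if the disk carries an EFI label instead, proceed with the conversion shown below.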



Conversion from EFI to SMI label:

bash-3.00#
bash-3.00# format -e c1t1d0
selecting c1t1d0
[disk formatted]
FORMAT MENU:
disk  - select a disk
type  - select (define) a disk type
partition  - select (define) a partition table
current  - describe the current disk
format  - format and analyze the disk
fdisk  - run the fdisk program
repair  - repair a defective sector
label  - write label to the disk
analyze  - surface analysis
defect  - defect list management
backup  - search for backup labels
verify  - read and display labels
save  - save new disk/partition definitions
inquiry  - show vendor, product and revision
scsi  - independent SCSI mode selects
cache  - enable, disable or query SCSI disk cache
volname  - set 8-character volume name
!<cmd>  - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0  - change `0' partition
1  - change `1' partition
2  - change `2' partition
3  - change `3' partition
4  - change `4' partition
5  - change `5' partition
6  - change `6' partition
7  - change `7' partition
9  - change `9' partition
select - select a predefined table
modify - modify a predefined partition table
name  - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> l
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
partition>
partition>


Copy the VTOC from the old disk to the new disk:
bash-3.00# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
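As an optional check (the file names /tmp/old.vtoc and /tmp/new.vtoc are only illustrative), you can confirm that the slice layout was copied correctly by comparing the two VTOCs:

bash-3.00# prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/old.vtoc
bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 > /tmp/new.vtoc
bash-3.00# diff /tmp/old.vtoc /tmp/new.vtoc

Only the header comments (the device names) should differ; the slice table itself should be identical.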

Creating rpool:
First, create a zpool named rpool on the newly configured disk.

bash-3.00# zpool create rpool c1t1d0s0
bash-3.00#
bash-3.00#
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 73.5K 11.8G 21K /rpool
bash-3.00#
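Before building the boot environment on it, you can also confirm the new pool is healthy (an optional check):

bash-3.00# zpool status rpool

The pool should report state ONLINE with the single device c1t1d0s0 and no known data errors.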

Check whether any boot environments are already configured, so you know whether the current boot environment still needs to be named:

bash-3.00# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
bash-3.00#


Creating the new boot environment using rpool:
Now we can create a new boot environment on the newly configured zpool (i.e. rpool). The lucreate options used below are:
-c -- current boot environment name
-n -- new boot environment name
-p -- Pool name

bash-3.00# lucreate -c sol_ufs -n sol_zfs -p rpool
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <sol_ufs>.
Creating initial configuration for primary boot environment <sol_ufs>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE
ID.
PBE configuration successful: PBE name <sol_ufs> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <sol_ufs> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE
ID.
Creating configuration for boot environment <sol_zfs>.
Source boot environment is <sol_ufs>.
Creating boot environment <sol_zfs>.
Creating file systems on boot environment <sol_zfs>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/sol_zfs>.
Populating file systems on boot environment <sol_zfs>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
cp: cannot access //platform/i86pc/bootlst
Creating shared file system mount points.
Creating compare databases for boot environment <sol_zfs>.
Creating compare database for file system </>.
Updating compare databases on boot environment <sol_zfs>.
Making boot environment <sol_zfs> bootable.
Updating bootenv.rc on ABE <sol_zfs>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <sol_zfs> in GRUB menu
Population of boot environment <sol_zfs> successful.
Creation of boot environment <sol_zfs> successful.
bash-3.00# 


Activating the new boot environment:
Once lucreate has completed, activate the new boot environment so that the system boots from the new BE from the next reboot onwards.

bash-3.00#
bash-3.00# luactivate sol_zfs
Generating boot-sign, partition and slice information for PBE <sol_ufs>
A Live Upgrade Sync operation will be performed on startup of boot environment <sol_zfs>.
Generating boot-sign for ABE <sol_zfs>
NOTE: File </etc/bootsign> not found in top level dataset for BE <sol_zfs>
Generating partition and slice information for ABE <sol_zfs>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c1t0d0s0 /mnt
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <sol_zfs> successful.
bash-3.00#
bash-3.00#
Reboot the server using init 6 to boot from the new boot environment:
bash-3.00#
bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <sol_zfs> as
<mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.00#
bash-3.00#
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/sol_zfs 12G 4.7G 5.8G 46% /
/devices 0K 0K  0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 727M 392K 727M 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
11G 4.7G 5.8G 46% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 727M 44K 727M 1% /tmp
swap 727M 28K 727M 1% /var/run
rpool 12G 34K 5.8G 1% /rpool
/vol/dev/dsk/c0t0d0/sol_10_910_x86
2.0G 2.0G 0K 100% /cdrom/sol_10_910_x86
bash-3.00#
bash-3.00#

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_ufs                    yes      no     no        yes    -
sol_zfs                    yes      yes    yes       no     -
bash-3.00#
bash-3.00#


Now you can see that the server has booted from ZFS.

bash-3.00#
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 5.97G 5.78G 34.5K /rpool
rpool/ROOT 4.75G 5.78G 21K legacy
rpool/ROOT/sol_zfs 4.75G 5.78G 4.75G /
rpool/dump 512M 5.78G 512M -
rpool/swap 745M 6.50G 16K -
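With a ZFS root, Live Upgrade moves dump and swap onto zvols (rpool/dump and rpool/swap above). An optional way to confirm this:

bash-3.00# swap -l
bash-3.00# dumpadm

swap -l should list /dev/zvol/dsk/rpool/swap as the swap device, and dumpadm should show /dev/zvol/dsk/rpool/dump as the dump device.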

bash-3.00#
bash-3.00#
bash-3.00#
bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
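As a further optional check, the pool's bootfs property should point at the root dataset of the active boot environment:

bash-3.00# zpool get bootfs rpool

The VALUE column should show rpool/ROOT/sol_zfs.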


You can remove the old boot environment using the command below:

bash-3.00#
bash-3.00# ludelete -f sol_ufs
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <sol_zfs> as
<mount-point>//boot/grub/menu.lst.prev.
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <sol_ufs> deleted.
bash-3.00#
bash-3.00#
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_zfs                    yes      yes    yes       no     -

bash-3.00#

IMPORTANT NOTE:
Old disk: c1t0d0
New disk: c1t1d0

Assumption: After the reboot, the old disk still carries data: /pool100 is a ZFS pool and d10 is an SVM volume.
These are mounted partitions which contain data.
In such a case it is better to mirror both the ZFS and the SVM data onto the new disk, so that there is zero data loss.
For example (see the sketch after this list):
c1t0d0s6 => slice containing the SVM volume; mirror it with c1t1d0s6 (d10)
c1t0d0s7 => slice containing the ZFS data; mirror it with c1t1d0s7 (/pool100)
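The data mirrors could be built roughly as follows. This is only a sketch, assuming d10 is currently a one-way SVM mirror and the ZFS pool is named pool100; the submirror name d11 is hypothetical, and the actual metadevice layout on your system may differ:

bash-3.00# metainit d11 1 1 c1t1d0s6
bash-3.00# metattach d10 d11
bash-3.00# zpool attach -f pool100 c1t0d0s7 c1t1d0s7

metattach starts an SVM resync of d10 onto the new submirror, and zpool attach turns pool100 into a two-way mirror and starts a resilver.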

Initiating the rpool mirroring:

bash-3.00# zpool attach -f rpool c1t1d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
bash-3.00#
bash-3.00#
bash-3.00# zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 10.90% done, 0h4m to go
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0   0 0
mirror-0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0 583M resilvered
errors: No known data errors
bash-3.00#
bash-3.00#
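As the zpool attach output above advises, install the GRUB boot blocks on the newly attached disk before rebooting so that the system can also boot from it. A sketch assuming the standard stage file locations on Solaris 10 x86:

bash-3.00# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0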


NOTE: Reboot the server only after the resilver between the two legs of the mirror has completed.
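An optional way to confirm the resilver has finished before rebooting:

bash-3.00# zpool status rpool

The scrub line should report that the resilver has completed, and both c1t1d0s0 and c1t0d0s0 should be ONLINE.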

bash-3.00#
bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.00#
