Monday 22 September 2014

Patching ZFS based SPARC system with Live Upgrade




Solaris – Patching with Live Upgrade, ZFS makes it so much easier
Solaris Live Upgrade is a superb tool that lets your operating system create an alternate boot environment. Live Upgrade is a simple way to update or patch systems, minimizing downtime and mitigating the risks often associated with patching efforts. An admin can patch the system quickly without any interruption, because the patches are applied to the alternate boot environment, which the system boots from on the next reboot once it has been activated. Live Upgrade creates a copy of the active boot environment, and that copy is given a name. That copy becomes the alternate BE, or boot environment. Because there are multiple BEs, the true beauty of Live Upgrade shows through: if problems occur with the newly created or patched BE, the original BE can be used as the backout boot image. Reverting back to a previous BE is the backout plan for almost all Live Upgrade installations. Historically, with UFS or even (I dread those days) with SVM, the lucreate command was much more complicated because you had software RAID in the mix. ZFS, with snapshots and pools, makes it so easy it's astounding. At the OBP or boot PROM level, it's mostly the same: at the ok prompt, a boot -L will list the BEs, assuming the correct boot disk is mapped properly.
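If you're not sure which disk the boot aliases actually point at, a quick check at the ok prompt before relying on boot -L doesn't hurt. Just a sketch; the alias and device names will vary from machine to machine:
ok printenv boot-device
ok devalias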

Live Upgrade and patching
Patching a Solaris 10 ZFS based system is done the same way you would patch any basic Solaris system. You should be able to patch the Solaris 10 ZFS based system with Live Upgrade successfully, and with no outages. The patches are downloaded and unzipped in a temporary location, preferably not /tmp  :)  The assumption here is that you have a valid and working rpool with ZFS volumes. Let's look at our existing BEs; the active boot environment is Nov2012.

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -         
Oct2012                    yes      no     no        yes    -      
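Before cutting another BE, a quick sanity check that the root pool is healthy and has room for one more clone is cheap insurance. A sketch; the pool and dataset names match the layout used here:
# zpool status rpool
# zpool list rpool
# zfs list -r rpool/ROOT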

I need a new BE for next month, December. I normally have two BEs and rotate and lurename them, but for this blog article I will create a new one.
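For the record, the rotate-and-rename path is only a couple of commands. Roughly something like this, assuming the BE being recycled is inactive; the names are only examples, and lumake refreshes the renamed BE from the currently active one:
# lurename -e Oct2012 -n Dec2012
# lumake -n Dec2012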

# lucreate -n Dec2012
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <Dec2012>.
Source boot environment is <Nov2012>.
Creating file systems on boot environment <Dec2012>.
Populating file systems on boot environment <Dec2012>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@Dec2012>.
Creating clone for <rpool/ROOT/s10s_u10wos_17b@Dec2012> on <rpool/ROOT/Dec2012>.
Mounting ABE <Dec2012>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <Dec2012>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <Nov2012>.
Making boot environment <Dec2012> bootable.
Population of boot environment <Dec2012> successful.
Creation of boot environment <Dec2012> successful.

The lucreate, in conjunction with ZFS, created the rpool/ROOT/s10s_u10wos_17b@Dec2012 snapshot, which was then cloned to rpool/ROOT/Dec2012. The rpool/ROOT/Dec2012 clone is what you will see at the OBP when you do a boot -L. Let's look at our BE status:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     no        yes    -
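The snapshot and clone that lucreate set up can also be confirmed at the ZFS level. A sketch, with the output omitted here:
# zfs list -r -t all rpool/ROOT
# zfs get origin rpool/ROOT/Dec2012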
Let's patch the new Dec2012 BE. The assumption here is that we have downloaded the latest Recommended patch cluster from the Sun or Oracle site (depends who you have allegiance with). Let's patch the BE while the system is running and doing whatever the system is supposed to do. Let's say it's a DNS/NTP/Jumpstart server? Don't know. Could be anything. I've downloaded the patch cluster and unzipped it in /var/tmp.
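The unpack step itself is nothing fancy; roughly like this (a sketch; the exact zip file name depends on the cluster you downloaded):
# cd /var/tmp
# unzip 10_Recommended.zip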
# uname -a
SunOS tweetybird 5.10 Generic_147440-12 sun4v sparc sun4v
# cd /var/tmp
# luupgrade -n Dec2012 -s /var/tmp/10_Recommended/patches -t `cat patch_order`
Validating the contents of the media </var/tmp/10_Recommended/patches>.
The media contains 364 software patches that can be added.
Mounting the BE <Dec2012>.
Adding patches to the BE <Dec2012>.
Validating patches …
Loading patches installed on the system...
Done!
Loading patches requested to install.
Unmounting the BE <Dec2012>

The patch add to the BE <Dec2012> completed.
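Before activating, you can optionally mount the ABE and spot-check that the kernel patch landed. A sketch; /a as the mount point and 147440 as the patch ID are just examples based on the uname output above:
# lumount Dec2012 /a
# patchadd -R /a -p | grep 147440
# luumount Dec2012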
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     no        yes    -
# luactivate Dec2012
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      no     yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      yes    no        yes    -

Let's reboot and make sure the proper BE comes up. You must use either init or shutdown; do not use halt or fastboot.
# init 6
After the server reboots, Dec2012 should automatically be booted with the newly applied patch bundle, so Dec2012 is the new active BE. Let's check the kernel patch level:
# uname -a
SunOS tweetybird 5.10 Generic_147440-26 sun4v sparc sun4v
Looks good. With ZFS, Live Upgrade is so simple now. Heck, Live Upgrade works wonders when you have a UFS based root volume and you dearly want to migrate over to a ZFS root volume. You will need a ZFS capable kernel to start. Create a pool called rpool using slices, not the whole disk, then lucreate into the rpool, activate it, reboot, and you are booting off of a new ZFS based Solaris system. There are a few tricks about creating the proper type of rpool; maybe another blog entry on that. But Live Upgrade is a great tool for migrating UFS systems to ZFS. Again, with a slick backout option.
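A very rough sketch of that UFS to ZFS migration, assuming c0t1d0s0 is a spare slice with an SMI label and zfsBE is just an example BE name:
# zpool create rpool c0t1d0s0
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6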


Disaster – Plan B. You need an easy backout plan
Thankfully, having multiple BEs means you can back out simply by choosing one of the previously installed BEs. If the system boots up without trouble but applications are failing, simply luactivate the original BE and reboot. If the system fails to boot (yikes, this is rare), then from the boot PROM, list the BEs and choose the BE to boot from.
ok boot -L
.
.
.
Boot device: /pci@1f,0/pci@1/scsi@8/disk@0,0 File and args: -L
zfs-file-system
Loading: /platform/sun4v/bootlst
    1 Nov2012
    2 Oct2012
    3 Dec2012
Select environment to boot: [ 1 - 3 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/Nov2012

and off you go. In special cases, when you have to back out and boot from the original BE and it fails, you will need to boot into failsafe mode, mount the current BE root slice, and import the root pool. The instructions are as follows:

ok boot -F failsafe
Now mount the current BE root slice to /mnt.
# zpool import rpool
# zfs inherit -r mountpoint rpool/ROOT/Dec2012
# zfs set mountpoint=/mnt rpool/ROOT/Dec2012
# zfs mount rpool/ROOT/Dec2012
Here we are activating the previously known good BE:
# /mnt/sbin/luactivate
If this works, now reboot with init 6.
# init 6
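Once the box is back up, it's worth a quick check that the BE you expected is actually the one running. A sketch:
# lustatus
# df -h /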
Please Note: Live Upgrade and LDoms Require an Extra Step
A quick note about Live Upgrade, ZFS and LDoms. Preserving the Logical Domains Constraints database file when using the Oracle Solaris 10 Live Upgrade feature requires some hand holding. This is a special situation. If you are using Live Upgrade on a control domain, you need to add the following line to the bottom of the /etc/lu/synclist file, as in append this line:
# echo "/var/opt/SUNWldm/ldom-db.xml     OVERWRITE" >> /etc/lu/synclist
This line is important, as it forces the database to be copied automatically from the active boot environment to the new boot environment when you switch boot environments. Otherwise, as you may well guess, you lose your LDom configuration.
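It doesn't hurt to confirm the entry took, and to keep a manual copy of the constraints database somewhere outside the BEs as well. A sketch; the backup path is just an example:
# grep ldom-db /etc/lu/synclist
# cp /var/opt/SUNWldm/ldom-db.xml /var/tmp/ldom-db.xml.bak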
