Monday 22 September 2014

Patching ZFS based SPARC system with Live Upgrade




Solaris – Patching with Live Upgrade, ZFS makes it so much easier
Solaris Live Upgrade is a superb tool that lets your operating system create an alternate boot environment. Live Upgrade is a simple way to update or patch systems, minimizing downtime and mitigating the risks often associated with patching efforts. An admin can patch the system quickly without any interruption: you patch the alternate boot environment, which the system boots from on the next reboot after it has been activated. Live Upgrade creates a copy of the active boot environment, and that copy is given a name. That copy becomes the alternate BE, or boot environment. Because there are multiple BEs, the true beauty of Live Upgrade shows through. If problems occur with the newly created or patched BE, the original BE can serve as the backup boot image. Reverting to a previous BE is the backout plan for almost all Live Upgrade installations. Historically with UFS, or even (I dread those days) with SVM, the lucreate command was much more complicated because you had software RAID in the mix. ZFS, with snapshots and pools, makes it so easy it's astounding. At the OBP or boot PROM level, it's mostly the same: at the ok prompt, a boot -L will list the BEs, assuming the correct boot disk is mapped properly.

Live Upgrade and patching
Patching a Solaris 10 ZFS based system is done the same way you would patch any basic Solaris system. You should be able to patch a Solaris 10 ZFS based system with Live Upgrade successfully, and with no outages. The patches are downloaded and unzipped in a temporary location, preferably not in /tmp  :)  The assumption here is that you have a valid and working rpool with ZFS volumes. Let's look at our existing BEs; the active boot environment is Nov2012.

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -         
Oct2012                    yes      no     no        yes    -      

I need a new BE for next month, December. I normally keep two BEs and rotate and lurename them, but for this blog article I will create a new one.
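For the curious, my usual rotation looks roughly like this; it is only a sketch, and the BE names are the ones from this example:
# lurename -e Oct2012 -n Dec2012     (reuse the oldest BE under the new month's name)
# lumake -n Dec2012                  (re-copy the currently active BE into it)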

# lucreate -n Dec2012
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <Dec2012>.
Source boot environment is <Nov2012>.
Creating file systems on boot environment <Dec2012>.
Populating file systems on boot environment <Dec2012>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@Dec2012>.
Creating clone for <rpool/ROOT/s10s_u10wos_17b@Dec2012> on <rpool/ROOT/Dec2012>.
Mounting ABE <Dec2012>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <Dec2012>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <Nov2012>.
Making boot environment <Dec2012> bootable.
Population of boot environment <Dec2012> successful.
Creation of boot environment <Dec2012> successful.

The lucreate, in conjunction with ZFS, created the rpool/ROOT/s10s_u10wos_17b@Dec2012 snapshot, which was then cloned to rpool/ROOT/Dec2012. The rpool/ROOT/Dec2012 clone is what you will see at the OBP when you do a boot -L.
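If you want to see the snapshot and clone for yourself, a quick check is the command below (the dataset names are the ones from this example; yours will differ):
# zfs list -r -t all rpool/ROOT
With that confirmed, let's look at our BE status: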
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     no        yes    -
Let's patch the new Dec2012 BE.  The assumption here is that we have downloaded the latest recommended patch cluster from the Sun or Oracle site (depends on who you have allegiance with). Let's patch the BE while the system is running and doing whatever the system is supposed to do. Let's say it's a DNS/NTP/Jumpstart server? Don't know. Could be anything. I've downloaded the patch cluster and unzipped it in /var/tmp.
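If you are following along, the unpack step looks roughly like this (the zip file name is an assumption; Oracle has named the cluster downloads differently over time):
# cd /var/tmp
# unzip 10_Recommended.zip     (creates the 10_Recommended directory used below)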
# uname -a
SunOS tweetybird 5.10 Generic_147440-12 sun4v sparc sun4v
# cd /var/tmp
# luupgrade -n Dec2012 -s /var/tmp/10_Recommended/patches -t `cat patch_order`
Validating the contents of the media </var/tmp/10_Recommended/patches>.
The media contains 364 software patches that can be added.
Mounting the BE <Dec2012>.
Adding patches to the BE <Dec2012>.
Validating patches …
Loading patches installed on the system…
Done!
Loading patches requested to install.
Unmounting the BE <Dec2012>

The patch add to the BE <Dec2012> completed.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     no        yes    -
# luactivate Dec2012
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      no     yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      yes    no        yes    -

Let's reboot and make sure the proper BE comes up. You must use either init or shutdown; do not use halt, reboot, or uadmin, or the system will not switch to the new BE.
# init 6
After the server reboots, Dec2012 should come up automatically with the newly applied patch bundle, making Dec2012 the new active BE. Let's check the kernel patch level:
# uname -a
SunOS tweetybird 5.10 Generic_147440-26 sun4v sparc sun4v
Looks good. With ZFS, Live Upgrade is so simple now. Heck, Live Upgrade works wonders when you have a UFS based root volume and you dearly want to migrate over to a ZFS root volume. You will need a ZFS capable kernel to start. Create a pool called rpool using slices, not the whole disk, then lucreate onto the rpool, activate it, reboot, and you are booting off a new ZFS based Solaris system. There are a few tricks about creating the proper type of rpool; maybe another blog entry on that. But Live Upgrade is a great tool for migrating UFS systems to ZFS. Again, with a slick backout option.
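A minimal sketch of that UFS-to-ZFS migration, assuming c0t1d0s0 is a spare slice on an SMI-labeled disk and zfsBE is whatever you want to call the new boot environment:
# zpool create rpool c0t1d0s0      (a slice, not the whole disk, so the pool is bootable)
# lucreate -n zfsBE -p rpool       (copy the UFS root into the new ZFS pool)
# luactivate zfsBE
# init 6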


Disaster – Plan B. You need an easy backout plan
Thankfully, having multiple BEs means you can back out simply by choosing one of the previously installed BEs. If the system boots up without trouble but applications are failing, simply luactivate the original BE and reboot. If the system fails to boot (yikes, this is rare), then from the boot PROM, list the BEs and choose the BE to boot from.
ok boot -L
.
.
.
Boot device: /pci@1f,0/pci@1/scsi@8/disk@0,0 File and args: -L
zfs-file-system
Loading: /platform/sun4v/bootlst
    1 Nov2012
    2 Oct2012
    3 Dec2012
Select environment to boot: [ 1 - 3 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/Nov2012

and off you go. In special cases, when you have to back out and boot from the original BE and it fails, you will need to boot in failsafe mode, import the root pool, and mount the BE's root dataset. Instructions are as follows:

ok boot -F failsafe
Now import the root pool and mount the BE's root dataset on /mnt.
# zpool import rpool
# zfs inherit -r mountpoint rpool/ROOT/Dec2012
# zfs set mountpoint=/mnt rpool/ROOT/Dec2012
# zfs mount rpool/ROOT/Dec2012
Here we run luactivate from the mounted BE to activate the previously known-good BE:
# /mnt/sbin/luactivate
If this works, now reboot with init 6.
# init 6
Please Note: Live Upgrade and LDoms Require an Extra Step
A quick note about Live Upgrade, ZFS and LDoms. Preserving the Logical Domains constraints database file when using the Oracle Solaris 10 Live Upgrade feature requires some hand holding. This is a special situation. If you are using Live Upgrade on a control domain, you need to append the following line to the bottom of the /etc/lu/synclist file:
# echo "/var/opt/SUNWldm/ldom-db.xml     OVERWRITE" >> /etc/lu/synclist
This line is important, as it forces the database to be copied automatically from the active boot environment to the new boot environment when you switch boot environments. Otherwise, as you may well guess, you lose your LDom configuration.

Wednesday 4 June 2014

Backup root file system using tar

Preparing for Backup :


In preparation for a complete backup of the system, it is a good idea to empty the trash and remove any unwanted files and programs from your current installation. This includes your home folder, which tends to fill up with files you no longer need. Doing so will reduce the size of the archive in proportion to how much space you free up.
A quick list of examples is below, decide for yourself what applies:
  • Delete all your emails.
  • Wipe your saved browser personal details and search history.
    • If you are not worried about the security concerns, this step is not necessary. Many users explicitly want backups of their email and browser settings.
  • Unmount any external drives and remove any optical media such as CDs or DVDs that you do not want to include in the backup.
    • This will reduce the amount of exclusions you need to type later in the process.
  • Go through the contents of your user folder in /home and delete any unwanted files in the subdirectories; people often download files and then forget about them, for instance. The sketch below can help you spot the biggest offenders.
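A quick way to see which directories in your home folder are eating the most space (this assumes GNU coreutils, so du's --max-depth and sort's -h options are available):
du -xh --max-depth=1 ~ | sort -h | tail -n 15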

Backing Up:

To take a backup of the entire root partition you can use an external hard drive, another partition or disk connected internally, or even a folder in your home directory. In all cases, ensure the location you're saving the archive to has enough space, then simply use the cd command to navigate there.
cd / 

The following is an example command for archiving your system.

tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system / 

To understand what is going on, we will dissect each part of the command.
  • tar - is the command that creates the archive. It is modified by each letter immediately following; each is explained below.
    • c - create a new backup archive.
    • v - verbose mode, tar will print what it's doing to the screen.
    • p - preserves the permissions of the files put in the archive for restoration later.
    • z - compress the backup file with 'gzip' to make it smaller.
    • f <filename> - specifies where to store the backup, backup.tar.gz is the filename used in this example. It will be stored in the current working directory, the one you set when you used the cd command.
  • --exclude=/example/path - Options following this pattern tell tar which directories NOT to back up. We don't want to back up everything, since some directories aren't very useful to include. The first exclusion rule directs tar not to back up the archive itself; this is important to avoid errors during the operation.
  • --one-file-system - Do not include files on a different filesystem. If you want other filesystems, such as a /home partition or external media mounted in /media, backed up, you either need to back them up separately or omit this flag. If you do omit it, you will need to add several more --exclude= arguments to avoid filesystems you do not want; these would be the /proc, /sys, /mnt, /media, /run and /dev directories in root (see the sketch after this list). /proc and /sys are virtual filesystems that provide windows into variables of the running kernel, so you do not want to try to back up or restore them. /dev is a tmpfs whose contents are created and deleted dynamically by udev, so you do not want to back up or restore it either. Likewise, /run is a tmpfs that holds variables about the running system that do not need to be backed up.
  • It is important to note that these exclusions are recursive. This means that all folders located within the one excluded will be ignored as well. In the example, excluding the /media folder excludes all mounted drives and media there.
    • If there are certain partitions you wish to backup located in /media, simply remove the exclusion and write a new one excluding the partitions you don't want backed up stored within it. For example:
      • --exclude=/media/unwanted_partition 
  • / - After all the options comes the directory to back up. Since we want to back up everything on the system we use / for the root directory. Like exclusions, this recursively includes every folder below root not listed in the exclusions or other options.
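For reference, here is roughly what the command looks like if you drop --one-file-system and spell out the excludes discussed above instead (adjust the list to match your own mounts):
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/sys \
    --exclude=/dev --exclude=/run --exclude=/mnt --exclude=/media /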
See the Additional Tips section below for more information.
Once satisfied with the command, execute it and wait until it has completed. The duration of the operation depends on the number of files and the compression chosen. Once completed, check the directory you set to find the archive. In our example, backup.tar.gz would be located in the / directory. This archive can then be moved to any other directory for long-term storage.
Note: At the end of the process you might get a message along the lines of 'tar: Error exit delayed from previous errors'; in most cases you can just ignore it.
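If you want to sanity-check the archive before relying on it, listing its contents is cheap (it only reads the archive; nothing is extracted):
tar -tzvf backup.tar.gz | less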

Additional Tips :

  • To keep good records, include the date and a short description of the backup in the filename (see the sketch after these tips).
  • Another option is to use bzip2 instead of gzip to compress your backup. Bzip2 provides a higher compression ratio at the expense of speed. If compression is important to you, just substitute the z in the command with j, and change the file extension to .tar.bz2. The rest of this guide's examples use gzip, so make the corresponding changes before using them.
  • If you want to exclude all mounts other than the current one - by this I mean partitions mounted to directories - then use the --one-file-system option placed before the exclusion rules. This stops tar from crossing into any other mounts in any directory, including /mnt or /media. For instance, many users create a separate mount for /home to keep user folders separate from root; adding this option to our original example would exclude home's contents entirely.
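Putting the date tip into practice, a dated archive might look like the sketch below. The name is computed once in a shell variable so the exclusion can match the archive itself; swap z and .gz for j and .bz2 if you prefer bzip2:
name="backup-$(date +%Y%m%d).tar.gz"
tar -cvpzf "/$name" --exclude="/$name" --one-file-system /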

Archive Splitting :

To split the backup you can use the split command. You can split the backup in two ways:
1) Split during creation
2) Split after creation
Ensure you keep these archives all together in a directory you label for extraction at a later date. Once the archives are split to a desirable size, they can be burned one at a time to disc.
1) To Split During Creation
tar -cvpz <put options here> / | split -d -b 3900m - /name/of/backup.tar.gz. 

  • The first half, up to the pipe (|), is identical to our earlier example except for the omission of the f option. Without it, tar writes the archive to standard output, which is then piped to the split command.
  • -d - This option makes the archive suffix numerical instead of alphabetical; the split files are numbered sequentially, starting with 00 and increasing with each new split file.
  • -b - This option designates the size to split at; in this example it is 3900 MB so that each piece fits within the 4 GB file size limit of a FAT32 partition.
  • - - The hyphen is a placeholder for the input file (normally an actual file already created) and directs split to use standard input.
  • /name/of/backup.tar.gz. is the prefix applied to all generated split files. It should point to the folder you want the archives to end up in. In our example, the first split archive will be in the directory /name/of/ and be named backup.tar.gz.00 .
2) To Split After Creation
split -d -b 3900m /path/to/backup.tar.gz /name/of/backup.tar.gz. 

  • Here instead of using standard input, we are simply splitting an existing file designated by /path/to/backup.tar.gz .
To Reconstitute the Archive
Reconstructing the complete archive is easy: first cd into the directory holding the split archives, then simply use cat to join all the pieces and send them over standard output to tar, which extracts to the specified directory.
cat *tar.gz* | tar -xvpzf - -C /  

  • The use of * as a wildcard before and after tar.gz tells cat to start with the first matching file and append every other file that matches, a process known as concatenation, which is how the command got its name.
  • Afterwards, it simply passes all that through standard output to tar to be extracted into root in this example.

Backup Over a Network :

The command tar does not include network support within itself, but when used in conjunction with other programs this can be achieved. Two common options are netcat (nc) and ssh.

Netcat

The command nc is designed to be a general-purpose networking tool. It sets up a simple connection between two networked machines and keeps that connection open until it is closed, either manually or (as with the -q 0 option used below) once the end of the input is reached.
Receiving Machine :
On the receiving end you'll set up netcat to write the backup file, as in the following example. This command tells the machine to listen on port 1024 and write whatever arrives there to the file backup.tar.gz. The choice of port is entirely up to the user, as long as it is 1024 or higher (ports below 1024 require root privileges). A simple example:
nc -l 1024 > backup.tar.gz 

Sending Machine :
On the machine to be backed up, the tar command is piped to nc, which then sends the backup over the network to the port in question, where it is written to the file. Take note: where it says <receiving host>, replace it with the name or address of the receiving computer on the network. The f option is omitted since we are not writing to a local file but streaming the archive through standard output. The following is an example:
tar -cvpz <all those other options like above> / | nc -q 0 <receiving host> 1024 

If all goes well the backup will be piped through the network without touching the file system being read.

SSH

You can also use SSH. The command below is one example of what is possible; a pull-style variant is sketched after the notes below.
tar -cvpz <all those other options like above> / | ssh <backuphost> "( cat > ssh_backup.tar.gz )"

In the example:
  • The tar half of the command is the same as above, with the omission of the f option, so the archive is piped via standard output to ssh and on to the networked computer.
  • ssh_backup.tar.gz is the name of the file that will be created on the machine indicated.
  • <backuphost> - Should be replaced with the name of the computer in question on the network.
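It also works the other way around: you can pull the backup from the machine being backed up onto the backup host. A minimal sketch, assuming root SSH access to the target machine; the host and file names are placeholders, and -f - is used so tar's destination (standard output) is explicit:
ssh root@<machine-to-backup> "tar -cvpzf - --one-file-system /" > pulled_backup.tar.gz
The verbose file listing from -v goes to standard error, so it will not end up inside the archive being written locally.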

Restoring


You will want to restore from a live CD. If needed, first partition and format the drive; you can do this with gparted. Then simply mount the partition you are going to restore to somewhere. If you open the drive in Nautilus, it will be auto-mounted somewhere under /media. Take a look to find out where with:
ls /media

Restore Your Backup
sudo tar -xvpzf /path/to/backup.tar.gz -C /media/whatever --numeric-owner
  • x - Tells tar to extract the archive designated by the f option immediately after. In this case, the archive is /path/to/backup.tar.gz.
  • -C <directory> - This option tells tar to change to a specific directory before extracting. In this example, we are restoring to /media/whatever, the mount point of the partition being restored.
  • --numeric-owner - This option tells tar to restore the numeric owners of the files in the archive, rather than matching them to any user names in the environment you are restoring from. This is because the user IDs on the system you want to restore do not necessarily match those of the system you are restoring from (e.g. a live CD).
This will overwrite every single file and directory on the designated mount with the ones in the archive. Any file created after the archive was made has no equivalent stored in it and will thus remain untouched.
Allow the restoration the time it needs to complete. Once extraction is completed, you may need to recreate directories that were not included in the original archive because you excluded them with --exclude. This does not apply to filesystems excluded with --one-file-system. This can be done with the following command:
mkdir /media/whatever/proc /media/whatever/sys /media/whatever/mnt /media/whatever/media
Once finished, reboot and everything should be restored to the state of your system when you made the backup.

Sunday 16 February 2014

Add swap space in UFS Solaris

Sometimes you have to add swap space when the current swap space is not sufficient.
With the help of the following steps you can add swap space on a UFS-based Solaris 10 system.

The following technique is recommended for adding swap space on a UFS file system:
  • Create a swap file with the mkfile command.
  • Activate the swap file with the swap command.
  • Add an entry for the swap file to the /etc/vfstab file so that the swap file is activated automatically when the system boots.

Check the existing swap space using the command below.
# /usr/sbin/swap -l
swapfile             dev  swaplo blocks   free
/dev/md/dsk/d20     85,20     16 20972720 20972720
 
Create a directory for the swap file; here we are using /files for the swap file.
# mkdir /files 

Create the swap file using the mkfile command.
# mkfile 100m /files/swapfile 

Activate the swap file using swap -a
# swap -a /files/swapfile

Make an entry in the /etc/vfstab file so that the swap file stays configured permanently, even after a reboot.
# vi /etc/vfstab
(An entry is added for the swap file):
/files/swapfile   -      -       swap     -     no     -

Check that the swap space was added properly using the swap -l command.
# swap -l
swapfile             dev  swaplo blocks   free
/dev/md/dsk/d20     85,20     16 20972720 20972720
/files/swapfile        -       16 204784  204784
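For a quick overall total of allocated versus available swap, swap -s is also handy; the figures will of course differ on your system:
# swap -s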
 
Note - If a swap file does not get activated, make sure that the following service is running:
# svcs nfs/client 
STATE          STIME    FMRI
enabled        14:14:34 svc:/network/nfs/client:default 
 

Friday 31 January 2014

Add swap space in ZFS Solaris

Sometimes you have to add swap space when the current swap space is not sufficient.
With the help of the following steps you can add swap space on a ZFS-based Solaris 10 or 11 system.

Create a 2 GB ZFS volume from rpool. Here I used rpool; you can use any other pool.
 
# swap -l
swapfile                 dev  swaplo   blocks   free
/dev/zvol/dsk/rpool/swap 256,1      16 1058800 1058800

# zfs create -V 2G rpool/swap2
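As an aside, Oracle's ZFS documentation suggests creating swap volumes with a block size that matches the page size (8 KB on SPARC), so the create step could instead look like this (treat it as a suggestion, not a requirement):
# zfs create -V 2G -b 8k rpool/swap2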
 
Activate the second swap volume.
# swap -a /dev/zvol/dsk/rpool/swap2
# swap -l
swapfile                  dev  swaplo   blocks   free
/dev/zvol/dsk/rpool/swap  256,1      16 1058800 1058800
/dev/zvol/dsk/rpool/swap2 256,3      16 4194288 4194288 

Make an entry in the /etc/vfstab file so that the swap space remains persistent across reboots.

/dev/zvol/dsk/rpool/swap2    -        -       swap    -       no      -
