Saturday 30 December 2017

Firmware Update in T series Servers


I have performed this activity on a T4-1 server.
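The commands below are run from the ILOM command line. This assumes you are already logged in to the service processor over SSH; the address shown here is only a placeholder:

$ ssh root@<SP-IP-address>
Password:
->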


-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y 
Serial console started.  To stop, type #.
 
Firmware Version
 
-> show /HOST sysfw_version

Power off the host

-> stop /SYS
Are you sure you want to stop /SYS (y/n)? y
Stopping /SYS

-> show /SYS
power_state = Off

-> load -source tftp://10.10.10.10/Sun_System_Firmware-7_2_7_d-SPARC_Enterprise_T5220+T5220.pkg

NOTE: A firmware upgrade will cause the server and ILOM to
      be reset. It is recommended that a clean shutdown of
      the server be done prior to the upgrade procedure.
      An upgrade takes about 6 minutes to complete. ILOM
      will enter a special mode to load new firmware. No
      other tasks can be performed in ILOM until the
      firmware upgrade is complete and ILOM is reset.
Are you sure you want to load the specified file (y/n)? y
Do you want to preserve the configuration (y/n)? y
 
After a few minutes the ILOM will come back up automatically; try to connect again.
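Before powering the host back on, it is worth confirming that the new firmware level has been applied. The same command used earlier can be run again and the output compared with the package version that was loaded:

-> show /HOST sysfw_version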

-> start /SYS
Are you sure you want to start /SYS (y/n)? y
Starting /SYS

-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y


NOTE: 10.10.10.10 is the TFTP server on which the firmware image is present.
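If you want to verify that the image is actually reachable over TFTP before starting the upgrade, a quick check from any host with a tftp client could look like this (the download is only a test and the local copy can be deleted afterwards):

# tftp 10.10.10.10
tftp> get Sun_System_Firmware-7_2_7_d-SPARC_Enterprise_T5220+T5220.pkg
tftp> quit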

Rebuilding Boot Archive in Solaris 10.



The boot archive, introduced in Solaris 10 10/08 (Update 6), is similar to the initrd in Linux. It is a collection of core kernel modules and configuration files packed in either UFS or ISOFS format.

bash-3.00#  svcs -a | grep boot-archive
online         10:59:59 svc:/system/boot-archive:default
online         11:00:13 svc:/system/boot-archive-update:default
bash-3.00#

The services shown above manage the boot archive.
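If the archive is found to be out of date at boot time, the boot-archive service may drop into the maintenance state. After rebuilding the archive, one common way to recover the service (shown here only as a sketch) is:

# svcs -x boot-archive
# svcadm clear system/boot-archive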

You may land in situations where you have to rebuild the boot archive, or the boot archive may be corrupt and you have to recreate it. When you face such a problem, boot the server either in failsafe mode, from CD/DVD media, or from a network image in single-user mode:

ok boot -F failsafe
ok boot cdrom -s
ok boot net -s

If your machine is x86-based, you can simply select the failsafe option from the GRUB menu.
NOTE: I performed this boot-archive update on an x86 system.

While booting from failsafe, select yes when prompted to mount the ZFS root under /a.

Remove old archive
# rm /a/platform/i86pc/boot_archive

Update boot-archive
# bootadm update-archive -f -R /a

-R altroot -  Operation is applied to an alternate root path

# reboot

The system will now boot from the updated boot archive.
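On SPARC systems the procedure is the same, only the archive path differs (sun4u or sun4v, depending on the platform). A rough equivalent of the steps above would be:

# rm /a/platform/`uname -m`/boot_archive
# bootadm update-archive -f -R /a
# reboot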

You can check the contents of the boot archive:

bash-3.00# bootadm list-archive
etc/rtc_config
etc/system
etc/name_to_major
etc/driver_aliases
etc/name_to_sysnum
etc/dacf.conf
etc/driver_classes
etc/path_to_inst
etc/mach
etc/cluster/nodeid
etc/devices/devid_cache
etc/devices/mdi_scsi_vhci_cache
etc/devices/mdi_ib_cache
kernel
platform/i86pc/biosint
platform/i86pc/kernel
platform/i86pc/ucode/GenuineIntel
platform/i86pc/ucode/AuthenticAMD
platform/i86hvm
boot/solaris/bootenv.rc
boot/solaris/devicedb/master
boot/acpi/tables
bash-3.00#


You can also view the GRUB menu entries (x86 only):

bash-3.00# bootadm list-menu
The location for the active GRUB menu is: /boot/grub/menu.lst
default 0
timeout 10
0 Solaris 10 9/10 s10x_u9wos_14a X86
1 Solaris failsafe
bash-3.00#

Here I have only two options: one to boot the system normally and the other for failsafe.

Switching the GRUB default (x86 only)

bash-3.00# bootadm set-menu default=1

Here entry 1 (Solaris failsafe) will be selected by default from the menu list.
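You can confirm the change with bootadm list-menu, and switch back to the normal boot entry the same way:

# bootadm list-menu
# bootadm set-menu default=0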




Tuesday 21 March 2017

Increasing ZFS root pool in Solaris



Increase ZFS root pool

In ZFS, you cannot extend the root pool by adding new disks, and there is a reason for this. For example, if the root zpool spans more than one hard disk, the loss of any one disk leaves the system unbootable. To avoid such situations, it is better to keep the rpool on a single disk and mirror it, rather than spreading it over multiple disks.

But sometimes project teams make the mistake of keeping /var, /usr and /home under rpool, and the system may run out of root disk space.
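Before deciding how to fix it, a quick way to see which datasets are consuming the root pool (exact output will vary) is:

# zfs list -r -o name,used,mountpoint rpool
# zpool list rpool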

EFI labels are not supported for the Solaris rpool; the rpool disk must be SMI labeled, with all sectors assigned to slice 0, like the example below.
Part      Tag    Flag     First Sector       Size       Last Sector
  0        usr    wm               256   1015.86MB        2080733
  1 unassigned    wm                 0         0             0
  2 unassigned    wm                 0         0             0
  3 unassigned    wm                 0         0             0
  4 unassigned    wm                 0         0             0
  5 unassigned    wm                 0         0             0
  6 unassigned    wm                 0         0             0
  8   reserved    wm           2080734      8.00MB        2097117
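If the candidate disk already carries an EFI label, it can usually be relabelled to SMI from the expert mode of format. A rough sketch of the interaction (prompts may differ slightly between releases):

# format -e c1t9d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y
format> quit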
As a test, here I am trying to extend the rpool by adding a new disk:
# zpool add rpool c1t8d0s0
cannot add to 'rpool': root pool cannot have multiple vdevs or separate logs

Solution:

1. Add a larger root hard disk (e.g. if the current root disk is 40GB, add an 80GB disk).
2. Mirror the rpool using the new 80GB hard disk.
3. Install the boot block on the new disk.
4. Detach the old disk from the rpool.

# zpool list rpool
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool  11.9G  8.56G  3.38G    71%  ONLINE  -
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0
errors: No known data errors

Here I am going to extend my rpool to 20GB. My new 20GB hard disk is c1t9d0.
# format c1t9d0
selecting c1t9d0
[disk formatted]
FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p
WARNING - This disk may be in use by an application that has
          modified the fdisk table. Ensure that this disk is
          not currently in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:
  a 100% "SOLARIS System" partition
Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table.
y
format> p
PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition>
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]:1
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 2606c
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2607 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       1 - 2605       19.96GB    (2606/0/0) 41865390
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2606       19.97GB    (2607/0/0) 41881455
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0
partition> l
Ready to label disk, continue? y
partition> q
format> q
#


Now we are going to mirror the rpool with the new disk.
# zpool attach rpool c1t0d0s0 c1t9d0s0
Please be sure to invoke installgrub(1M) to make 'c1t9d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.08% done, 7h6m to go
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t9d0s0  ONLINE       0     0     0  6.82M resilvered
errors: No known data errors

Once the resilvering is done, we are good to detach the old disk from the zpool. But make sure you have made the new disk bootable by installing the boot block.

bash-3.00# zpool detach rpool c1t0d0s0
bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h31m with 0 errors on Tue May  7 02:05:56 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t9d0s0  ONLINE       0     0     0  8.56G resilvered

errors: No known data errors
bash-3.00# df -h /
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/rpooldataset
                        12G   5.6G   1.6G    78%    /
bash-3.00# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)

installgrub will work only on x86 servers. For SPARC servers, please use installboot:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0

You may be wondering why, even after all of this, the root disk space has not increased. You have to do one small thing to complete the job: run a scrub on the rpool and set autoexpand to pick up the new size.
# zpool scrub rpool
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: scrub in progress for 0h9m, 70.77% done, 0h4m to go
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t9d0s0  ONLINE       0     0     0

# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h19m with 0 errors on Tue May  7 02:36:21 2013
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t9d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# df -h /
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/rpooldataset
                        12G   5.6G   1.6G    78%    /
bash-3.00# zpool set autoexpand=on rpool
bash-3.00# df -h /
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/rpooldataset
                        20G   5.6G   9.5G    38%    /
Now you can see that the root filesystem has been extended to 20GB.
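On releases where the autoexpand pool property is not available, the same effect can usually be achieved by expanding the device explicitly, along the lines of:

# zpool online -e rpool c1t9d0s0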
