Friday 2 March 2018

Physical P2V migration in Solaris (Solaris 9 to Solaris 10)



Physical P2V migration in Solaris 

P2V migration is an excellent feature of Solaris that lets you migrate a physical server as a zone onto another Solaris server with sufficient resources. In the exercise shown below I am going to migrate my Solaris 9 server as a zone onto my Solaris 10 server. In the same way you can migrate a Solaris 8 or Solaris 10 server onto some other server as a Solaris zone. I did this activity because my Solaris 9 host was running on ancient hardware and needed to be migrated.

Source Server Details:
Hardware : Fujitsu Primepower Server
OS : Solaris 9
Hostname : Sol-9


Destination Server Details:
Hardware : T4-4
OS : Solaris 10
Hostname : Sol-10


Before you proceed with the P2V migration, make sure all the LUNs from your old physical box are mapped to the destination host. Get two new LUNs mapped to the destination host: one will be used for the zone installation and the other will be used to store the flar.

The destination server I have used here already has one Solaris 9 branded zone running on it, so it already has the necessary packages for branded zones.
If you are doing this migration for the first time, make sure you have installed the following packages on the destination server (you can verify them as shown a little further below).

For Solaris 8 zones : SUNWs8brandr, SUNWs8brandu, SUNWs8brandk
For Solaris 9 zones : SUNWs9brandr, SUNWs9brandu, SUNWs9brandk

NOTE: The destination system must be running at least Solaris 10 8/07 and requires the following minimum patch levels:
Kernel patch: 127111-08, which needs
125369-13
125476-02
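You can quickly verify both on the destination host (package and patch IDs as listed above):

# pkginfo SUNWs9brandr SUNWs9brandu SUNWs9brandk
# showrev -p | grep 127111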

 A) On physical server Sol-9


1) Flar Creation
We are going to use the Flash Archive tool (flarcreate) to create an image of the OS and then use that flar image to install the zone.

I have many mount points that come from under VxVM, and I unmounted all of those file systems before creating the flar image.
The flarcreate command considers only those mount points that are present in /etc/vfstab.
I want the flar to contain only the mounts that are necessary for the OS to boot up; keeping that in mind I unmounted all the other file systems.
Also, by default flarcreate ignores items that are located in swap partitions.
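To see what is currently mounted and which entries flarcreate will consider, a quick check before unmounting:

# df -k
# grep -v '^#' /etc/vfstab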

#umount -f -a
#mkdir /flar

#flarcreate -S -n sol9zone -x /flar -R / -c /flar/sol9zone.flar
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Creating the archive...

23962103 blocks
1 error(s)
Archive creation complete.

flarcreate options used above:
-S ---> skip the disk space check
-n ---> name of the flar
-x ---> exclude the given file/directory from the flar (here the /flar directory itself)
-R ---> create the archive from the file system tree rooted at this directory
-c ---> compress the archive (the last argument is the location of the flar)

zoneadm install options used later:
-u ---> sys-unconfigure the installed zone
-a ---> location of the flar archive to install from

#flar info /flar/sol9zone.flar
archive_id=1e1fdee9d71e227217c9559811ef6999
files_archived_method=cpio
creation_date=20180224111205
creation_master=sol9zone
content_name=sol9zone
creation_node=sol9zone
creation_hardware_class=sun4us
creation_platform=FJSV,GPUZC-M
creation_processor=sparc
creation_release=5.9
creation_os_name=SunOS
creation_os_version=Generic_112233-11
files_compressed_method=compress
content_architectures=sun4us
type=FULL


2) Deporting the disk groups
Once the flar is created, deport/export all the disk groups from the physical server, along with the LUN on which you have created the flar image, so they can be mapped to the destination server.
All my file systems are under VxVM, so I deported all the disk groups.

#vxdg deport appdg
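The deported disk group should no longer appear in the list of imported disk groups:

# vxdg list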

B) On destination host Sol-10

 

1) Importing the disk groups
I created the flar on a LUN and asked the storage team to map that LUN to the destination host.
Make sure to import all the disk groups from the source host on the destination host.

#vxdisk scandisks
#vxdg -C import appdg
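To confirm the import, the disk group should now show as enabled:

# vxdg list
# vxdisk -o alldgs list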

2) Zone creation
For the zone installation I am going to use a LUN, so that in the future, if I need to move this zone to another host, it can be done easily.

Sol-10 # zonecfg -z sol9zone
sol9zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sol9zone> create -t SUNWsolaris9            <--- -t=template for Solaris 9 branded zone
zonecfg:sol9zone> set zonepath=/zones/sol9zone      <--- ensure you perform mkdir and chmod 700 first
zonecfg:sol9zone> set autoboot=true
zonecfg:sol9zone> set hostid=eqcmjop                <--- hostid of the original physical server
zonecfg:sol9zone> add net
zonecfg:sol9zone:net> set address=192.168.100.100   <--- enter IP address
zonecfg:sol9zone:net> set physical=rtls0            <--- enter interface name
zonecfg:sol9zone:net> set defrouter=192.168.100.1   <--- enter default router
zonecfg:sol9zone:net> end
zonecfg:sol9zone> verify
zonecfg:sol9zone> commit
zonecfg:sol9zone> exit

I have created a VxFS volume and mounted it on the path /zones/sol9zone; you can also create a ZFS file system for the zone installation.
Make sure you set the zone's hostid to the hostid that was used by the physical server.
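As a rough sketch, preparing a VxFS zonepath looks like this (the disk group name, volume name and size here are examples, not the ones from my setup):

# vxassist -g appdg make sol9zonevol 60g
# mkfs -F vxfs /dev/vx/rdsk/appdg/sol9zonevol
# mkdir -p /zones/sol9zone
# mount -F vxfs /dev/vx/dsk/appdg/sol9zonevol /zones/sol9zone
# chmod 700 /zones/sol9zone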

NOTE: You will need to add a route for the zone's network on the destination server first, so that the zone can communicate with its default router.

3) Zone Installation
I have mounted on /mnt the LUN (from the disk group I got from the physical Sol-9 server) that holds the flar.

#zoneadm -z sol9zone install -a /mnt/sol9zone.flar -u

-u = sys-unconfigure the zone after installation & -a = location of the flar archive

The installation may take several minutes depending on the size of the flar. Once the zone appears in the installed state, power off the physical server from which the flar was created so that you can bring the zone onto the network (the zone reuses the same IP address and hostid as the physical server).
Boot the zone and check the network connectivity.

#zoneadm -z sol9zone boot
#zlogin sol9zone
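You can check the zone state at any point from the global zone; the zone should show as installed before the boot and running after it:

# zoneadm list -cv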

4) Adding File Systems
My original Solaris 9 server, from which I created the flar, had 50 mount points under VxVM. These LUNs are already mapped to my destination box, on which the zone is installed.
I will first mount all of them directly under the zone's root as temporary mounts, and then add these mount points to the zone configuration later.

Below is the command to mount a file system temporarily into the zone (run from the global zone).

# mount -F vxfs /dev/vx/dsk/appdg/vol100 /zones/sol9zone/root/kgapp

Below are the commands to add the file system permanently to the zone configuration, so the mount will persist across reboots.

# zonecfg -z sol9zone
zonecfg:sol9zone> add fs
zonecfg:sol9zone:fs> set type=vxfs
zonecfg:sol9zone:fs> set special=/dev/vx/dsk/appdg/vol100
zonecfg:sol9zone:fs> set raw=/dev/vx/rdsk/appdg/vol100
zonecfg:sol9zone:fs> set dir=/kgapp
zonecfg:sol9zone:fs> end
zonecfg:sol9zone> verify
zonecfg:sol9zone> commit
zonecfg:sol9zone> exit

Make sure these commands are run from the global zone.
Once the mount points are added, you can hand over the zone to the Application/DB team to bring up the application/database.
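To verify that the permanent mounts come up on their own, you can reboot the zone and check the mount from the global zone (assuming the zone can be rebooted at this point):

# zoneadm -z sol9zone reboot
# zlogin sol9zone df -k /kgapp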



Wednesday 24 January 2018

Performing the HP ILO 4 Firmware upgrade via command line



The most popular way to update HP ILO firmware is through the web interface, but in our case we are going to use the command line. The ILO can be updated either directly using the .bin file, or by using the rpm/CPxxxx.scexe file.

The server we are using is an HP ProLiant DL380p Gen8 with ILO 4 at firmware 1.13.
I have already downloaded the firmware for ILO 4 (CP032487.scexe).
Copy the downloaded firmware from your desktop to the server.
Server Name : myhpserver
ILO Name: myhpserver-ilo
Current ILO version : 1.13
Latest ILO version : 2.55

[root@Jumphost]$ssh administrator@myhpserver-ilo

Warning: Permanently added 'myhpserver-ilo,1xx.2xx.1xx.xx1' (RSA) to the list of known hosts.
administrator@myhpserver-ilo's password:
User:administrator logged-in to myhpserver(1xx.2xx.1xx.xx1)

iLO Advanced 1.13 at Aug 2012
Server Name: xxxxxx-369B28G
Server Power: On

</>hpiLO->
</>hpiLO->

As soon as you log in, you will see the version of the ILO; the current version shown is 1.13. Either the bin file or the rpm/CPxxxx.scexe file can be used to update the ILO.

If you are using the bin file, you load the file directly on the ILO and then reset the ILO.

CASE 1: The steps below can be used if you have the bin file.

Place the file on your HTTP/FTP server (here under /images/fw) and load the firmware directly on the ILO:
</>hpiLO->load /map1/firmware1 -source http://192.168.1.1/images/fw/iLO4_100.bin

Once you have loaded the firmware you need to reset the ILO.
</>hpiLO-> cd /map1
</map1>hpiLO-> reset
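The reset will drop your SSH session. After logging back in, the banner will show the new firmware version; depending on the iLO firmware you may also be able to query the firmware target directly (this is the same target used in the load command above, but treat its support as an assumption for your model):

</>hpiLO-> show /map1/firmware1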

CASE 2: The steps below can be used if you have the rpm/CPxxxx.scexe file.
Copy the rpm/CPxxxx.scexe file to myhpserver.
Here I have already copied the file to myhpserver, the server whose ILO needs to be updated.

If you are using the rpm, install it:
root@myhpserver#rpm -ivh hp-firmware-ilo4-2.55-1.1.i386.rpm

After installing the rpm, list the location of the CPxxxx.scexe file using rpm -ql, change to that directory, and run the file:
root@myhpserver#rpm -ql hp-firmware-ilo4-2.55-1.1.i386
root@myhpserver#./CP032487.scexe

If you have downloaded the CPxxxx.scexe file directly, make sure you make it executable (e.g. chmod 755) and then run it.
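For example (the file name is the one used earlier in this post, already copied to the server):

root@myhpserver# chmod 755 CP032487.scexe
root@myhpserver# ./CP032487.scexe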
Once you have run the CPxxxx.scexe file, log in to the ILO and reset the ILO.

</>hpiLO-> cd /map1
</map1>hpiLO-> reset

As soon as you log in to the ILO after the reset, you can see the new version of ILO 4.

[root@Jumphost]$ssh administrator@myhpserver-ilo
Warning: Permanently added 'myhpserver-ilo,1xx.2xx.1xx.xx1' (RSA) to the list of known hosts.
administrator@myhpserver-ilo's password:
User:administrator logged-in to myhpserver(1xx.2xx.1xx.xx1)

iLO Advanced 2.55  at  Aug 16 2017
Server Name: xxxxxx-3xxxx8G
Server Power: On


Here the ILO 4 has been updated from 1.13 to 2.55.

Wednesday 10 January 2018

Removing of LUNs from VxVM in Solaris

When LUNs are removed from the storage side, you also need to clean them up at the OS level in Solaris.
The cleanup steps are as follows. In this example we are going to consider the LUN c2t500604844A375F48d101s2.

c2t500604844A375F48d101s2 => c2 is the controller, t500604844A375F48 is the target (the array port WWN) and d101 is the disk

The format command output shows the drive as not available, since the LUN has already been removed from the storage side, but the VxDMP device for the LUN is still present.

# format
Searching for disks...done
................
     255. c2t500604844A375F48d101s2 <drive not available>
          /pci@3,700000/SUNW,qlc@0/fp@0,0/ssd@w50000974082ccd5c,f9
................

#devfsadm -Cv

#vxdisk rm c2t500604844A375F48d101s2

Removing the LUN's paths from Veritas Dynamic Multi-Pathing (DMP)

# vxdmpadm getsubpaths
# vxdmpadm exclude vxvm dmpnodename=c2t500604844A375F48d101s2
# vxdmpadm exclude vxdmp dmpnodename=c2t500604844A375F48d101s2
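Re-running getsubpaths should no longer list any paths for this LUN:

# vxdmpadm getsubpaths | grep -i 500604844A375F48d101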

Offline the LUN at the OS level.
# luxadm -e offline /dev/rdsk/c2t500604844A375F48d101s2

While performing the offline operation, you might encounter the error shown below:

# luxadm -e offline /dev/rdsk/c4t5006016C0864165Ad12s2
devctl: I/O error

If you encounter this error, check whether VxVM still has a disk access (DA) record for the removed LUN; it will typically show up with a nolabel or error status:

#vxdisk list
emc_clariion0_769 auto            -            -            nolabel
or
emc_clariion0_769 auto            -            -            error

Remove the disk from the VxVM as shown below.
#vxdisk rm emc_clariion0_769

Once you have removed the disk, fire the command # luxadm -e offline /dev/rdsk/c4t5006016C0864165Ad12s2 once again.
Once the LUN is put in the offline state, it will be moved into the unusable state.

# cfgadm -al -o show_SCSI_LUN | grep -i unusable
You will be able to see the LUN in the unusable state.

Using cfgadm, unconfigure the unusable LUN on the given controller.
# cfgadm -c unconfigure -o unusable_SCSI_LUN c2::500604844a375f48

# devfsadm -Cv

Confirm that the LUN is no longer present in the format command output, and also verify the vxdisk -eo alldgs output.
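For example, grepping for the LUN used in this post; neither command should return a matching line once the cleanup is complete:

# echo | format | grep 500604844A375F48d101
# vxdisk -eo alldgs list | grep 500604844A375F48d101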

Veritas Links used for reference.
http://www.veritas.com/community/forums/remove-lun-solaris-10
https://www.veritas.com/support/en_US/article.HOWTO35877
