P2V Migration in Solaris
P2V migration is an excellent feature of Solaris that lets you migrate a physical server as a zone into any other Solaris server that has sufficient resources. In the exercise shown below I am going to migrate my Solaris 9 server as a zone into my Solaris 10 server. In the same way you can migrate a Solaris 8 or Solaris 10 server into some other server as a Solaris zone. I did this activity because my Solaris 9 host was running on ancient hardware and needed to be migrated.
Source Server Details:
Hardware : Fujitsu Primepower Server
OS : Solaris 9
Hostname : Sol-9
Destination Server Details:
Hardware : T4-4
OS : Solaris 10
Hostname : Sol-10
Before you proceed with the P2V migration, make sure all the LUNs from your old physical box are mapped to the destination host. Also get two new LUNs mapped to the destination host: one will be used for the zone installation and the other will be used to store the flar. You can verify that the LUNs are visible as shown below.
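To confirm that the newly mapped LUNs are actually visible on the destination host before you start, you can run something like the following (no specific device names are assumed here):
#devfsadm -c disk <--- rediscover newly mapped disk devices
#echo | format <--- list all disks visible to the OS
#vxdisk scandisks <--- let VxVM rescan for the new LUNs
#vxdisk list <--- confirm the LUNs show up under VxVM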
The destination server I have used here already has one Solaris 9 zone running on it, so it already has the necessary packages for the branded zone. But if you are doing this migration for the first time, please make sure you have installed the following packages on the destination server (an example pkgadd command is shown after the list).
For Solaris 8: packages SUNWs8brandr, SUNWs8brandu, SUNWs8brandk
For Solaris 9: packages SUNWs9brandr, SUNWs9brandu, SUNWs9brandk
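If the packages are missing, they can be added with pkgadd. The directory below is only a placeholder; point -d at wherever you have staged the Solaris 8/9 Containers packages:
#pkginfo SUNWs9brandr SUNWs9brandu SUNWs9brandk <--- check whether the packages are already installed
#pkgadd -d /var/tmp/s9containers SUNWs9brandr SUNWs9brandu SUNWs9brandk <--- install from your staging directory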
NOTE: The destination system must be running at least Solaris 10 8/07 and requires the following minimum patch levels:
Kernel patch 127111-08, which in turn needs 125369-13 and 125476-02.
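You can confirm the release and whether these patches are already present with, for example:
#cat /etc/release <--- confirm the Solaris 10 update level
#showrev -p | egrep '127111|125369|125476' <--- confirm the required patches are installed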
A) On physical server Sol-9
1) Flar Creation
We are going to use the Flash Archive (flar) tools to create an image of the OS and use that flar image to install the zone.
I have many mount points that come from under VxVM, and I unmounted all of these file systems before creating the flar image.
The flarcreate command considers only those mount points that are present in /etc/vfstab.
I want the flar to contain only the mounts that are necessary for the OS to boot up; keeping this in mind, I unmounted all the other file systems.
By default it also ignores items that are located in "swap" partitions.
#umount -f -a
#mkdir /flar
#flarcreate -S -n sol9zone -x /flar -R / -c /flar/sol9zone.flar
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Creating the archive...
23962103 blocks
1 error(s)
Archive creation complete.
-S ---> skip the disk space check
-n ---> name of the flar
-x ---> file system that needs to be excluded during the creation of the flar
-R ---> root of the file system tree to archive
-c ---> compress the archive (the archive location itself is given as the last argument)
The -u (sysunconfig) and -a (attach the flar image) options belong to the zoneadm install command used later in this procedure.
#flar info /flar/sol9zone.flar
archive_id=1e1fdee9d71e227217c9559811ef6999
files_archived_method=cpio
creation_date=20180224111205
creation_master=sol9zone
content_name=sol9zone
creation_node=sol9zone
creation_hardware_class=sun4us
creation_platform=FJSV,GPUZC-M
creation_processor=sparc
creation_release=5.9
creation_os_name=SunOS
creation_os_version=Generic_112233-11
files_compressed_method=compress
content_architectures=sun4us
type=FULL
2) Deporting Diskgroup
Once the flar is created, make sure you deport/export all the file systems from the physical server to the destination server, along with the LUN on which you created the flar image.
I have all file systems under VxVM, so I deported all the disk groups:
#vxdg deport appdg
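If the LUN holding the flar is also under VxVM, deport its disk group as well; the disk group name flardg below is only an example:
#umount /flar <--- release the file system holding the flar
#vxdg deport flardg <--- deport the disk group that contains the flar LUN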
B) On destination host Sol-10
1) Importing disk group
I have taken the flar on a LUN and asked the storage team to map that LUN to the destination host.
Make sure to import all the disk groups from the source host on the destination host.
#vxdisk scandisks
#vxdg -C import appdg
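You can confirm that the disk groups came across cleanly before moving on:
#vxdg list <--- the imported disk groups should now be listed
#vxdisk -o alldgs list <--- shows every disk and the disk group it belongs to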
2) Zone creation
For the zone installation I am going to use a LUN, so that in the future, if I need to move this zone to another server, it can be done easily.
Sol-10 # zonecfg -z sol9zone
sol9zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sol9zone> create -t SUNWsolaris9 <--- -t=template for solaris 9
zonecfg:sol9zone> set zonepath=/zones/sol9zone <--- ensure to perform mkdir and chmod 700
zonecfg:sol9zone> set autoboot=true
zonecfg:sol9zone> set hostid=eqcmjop
zonecfg:sol9zone> add net
zonecfg:sol9zone:net> set address=192.168.100.100 <--- enter IP address
zonecfg:sol9zone:net> set physical=rtls0 <--- enter interface name
zonecfg:sol9zone:net> set defrouter=192.168.100.1 <--- enter default router
zonecfg:sol9zone:net> end
zonecfg:sol9zone> verify
zonecfg:sol9zone> commit
zonecfg:sol9zone> exit
I created a VxFS volume and mounted it on the path /zones/sol9zone. You can also create a ZFS file system for the zone installation; a sketch of both options is shown below.
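Here is a minimal sketch of preparing the zonepath; the disk group, volume name and size (zonedg, zonevol, 40g) and the ZFS pool name are placeholders:
#vxassist -g zonedg make zonevol 40g <--- create a volume for the zonepath
#mkfs -F vxfs /dev/vx/rdsk/zonedg/zonevol <--- put a VxFS file system on it
#mkdir -p /zones/sol9zone
#mount -F vxfs /dev/vx/dsk/zonedg/zonevol /zones/sol9zone
#chmod 700 /zones/sol9zone <--- the zonepath must be mode 700
Or, using ZFS instead:
#zfs create -o mountpoint=/zones/sol9zone rpool/sol9zone
#chmod 700 /zones/sol9zone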
Make sure you set the hostid of the zone to the hostid that was used by the physical server.
NOTE: You will need to add the route for the zone's network on the destination server first, so that the zone can communicate with its default router.
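For example, using the addresses from the zonecfg above (adjust the subnet and router to your own environment), something like this on the global zone should do:
#route -p add -net 192.168.100.0 -netmask 255.255.255.0 192.168.100.1 <--- persistent route for the zone's subnet
#ping 192.168.100.1 <--- confirm the default router is reachable from the global zone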
3) Zone Installation
I have mounted the LUN that holds the flar on /mnt; this is the LUN and disk group I got from the physical Sol-9 server.
#zoneadm -z sol9zone install -a /mnt/sol9zone.flar -u
-u = sysunconfig, -a = archive location
This installation may take several minutes depending on the size of the flar. Once the zone appears in the installed state, power off the physical server from which the flar was created so that you can bring your zone onto the network.
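You can check that the installation has finished and the zone has reached the installed state with:
#zoneadm list -cv <--- the sol9zone entry should show the state installed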
Boot up the zone and check the network connectivity.
#zoneadm -z sol9zone boot
#zlogin sol9zone
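A quick connectivity check from the global zone (the router address is the example value used earlier):
#zlogin sol9zone ifconfig -a <--- the zone's interface should be plumbed with its IP
#zlogin sol9zone ping 192.168.100.1 <--- the zone should be able to reach its default router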
4) Adding File Systems
My original Solaris 9 server, from which I created the flar, had 50 mount points that were under VxVM. These LUNs are already mapped to my destination box, on which my zone is installed.
I will first mount all the volumes directly as temporary mount points, and then add these mount points to the zone configuration later.
Below is the command to add a file system temporarily to the zone:
# mount -F vxfs /dev/vx/dsk/appdg/vol100 /zones/sol9zone/root/kgapp
Below are the commands to add a file system permanently to the zone, so these mounts will remain across reboots:
# zonecfg -z sol9zone
zonecfg:sol9zone> add fs
zonecfg:sol9zone:fs> set type=vxfs
zonecfg:sol9zone:fs> set special=/dev/vx/dsk/appdg/vol100
zonecfg:sol9zone:fs> set raw=/dev/vx/rdsk/appdg/vol100
zonecfg:sol9zone:fs> set dir=/kgapp
zonecfg:sol9zone:fs> end
zonecfg:sol9zone> verify
zonecfg:sol9zone> commit
zonecfg:sol9zone> exit
Make sure these commands are run from the global zone.
Once the mount points are added and verified (see the check below), you can hand over the zone to the application/DB team to bring up the application/DB.
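As a final check before hand-over, confirm the file systems are visible inside the zone (the mount point /kgapp is the example used above):
#zonecfg -z sol9zone info fs <--- the fs resources should appear in the zone configuration
#zlogin sol9zone df -k /kgapp <--- the mount should be visible inside the zone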