Monday 7 October 2013

IPMP in Solaris 10

IPMP configuration using LINK BASED TECHNIQUE in Solaris 10

By Kunal Raykar

IPMP protects against the failure of a single network card and ensures the system remains reachable over the network.
Link-based IPMP detects network failures by monitoring the interface's "IFF_RUNNING" flag, so unlike
probe-based IPMP it does not require test IP addresses.
IPMP is configured through the "/etc/default/mpathd" file; the default failure detection time is 10 seconds.
The file also has a "FAILBACK" option that controls whether the IP address moves back to the primary
interface once it recovers from a fault. "in.mpathd" is the daemon that handles all IPMP (IP
Multipathing) operations.
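For reference, the stock /etc/default/mpathd contains three tunables (shown here with comments stripped; values may differ by patch level):

bash-3.00# grep -v '^#' /etc/default/mpathd
FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes

FAILURE_DETECTION_TIME is in milliseconds, so 10000 corresponds to the 10-second default mentioned above.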

We have a Solaris machine (hostname: solaris, IP: 10.0.4.61) with two NIC cards.
e1000g0 => primary card
e1000g1 => secondary card


bash-3.00# dladm show-dev
e1000g0 link: up speed: 1000 Mbps duplex: full
e1000g1 link: unknown speed: 1000 Mbps duplex: full
bash-3.00#
bash-3.00# cat /etc/hosts
::1 localhost
127.0.0.1 localhost
10.0.4.61 solaris solaris.com loghost
10.0.4.61 solaris loghost
bash-3.00#
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.0.4.61 netmask ffffff00 broadcast 10.0.4.255
ether 0:c:29:fb:38:68
bash-3.00#
Now plumb the secondary NIC card:
bash-3.00# ifconfig e1000g1 plumb
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.0.4.61 netmask ffffff00 broadcast 10.0.4.255
ether 0:c:29:fb:38:68
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
ether 0:c:29:fb:38:72
bash-3.00#


To configure link-based IPMP, create the /etc/hostname.<interface> files for the active NIC and the standby NIC. In this
case these are /etc/hostname.e1000g0 and /etc/hostname.e1000g1.
Edit the hostname.e1000g0 file and enter the following:

bash-3.00# vi /etc/hostname.e1000g0
solaris netmask + broadcast + group sol10-ipmp up


solaris is the hostname, which must have a corresponding entry in the /etc/hosts file.
sol10-ipmp is the name of the IPMP group.
Edit hostname.e1000g1 and add the following:

bash-3.00# cat /etc/hostname.e1000g1
group sol10-ipmp up


Reboot the server for the configuration to take effect.
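If a reboot is not convenient, the same grouping can also be applied on the fly with ifconfig (using this example's interface and group names; the hostname.* files above are still needed so the setup survives the next boot):

bash-3.00# ifconfig e1000g0 group sol10-ipmp
bash-3.00# ifconfig e1000g1 group sol10-ipmp up

Either way, verify the configuration with ifconfig -a: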

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.0.4.61 netmask ffffff00 broadcast 10.0.4.255
groupname sol10-ipmp
ether 0:c:29:fb:38:68
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname sol10-ipmp
ether 0:c:29:fb:38:72
bash-3.00#


The standby NIC carries only the IPMP group configuration. That is all that is needed to set up link-based IPMP;
the configuration takes effect once the server is rebooted.
To test failover, you can do it the hard way by pulling the cables, or use the if_mpadm command as follows.
To fail over:

bash-3.00# if_mpadm -d e1000g0
The -d flag detaches the interface. After the detach you can see that a logical interface has been created
on the surviving NIC to take over the IP address; in this case e1000g1:1 is that logical interface.

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=89000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,OFFLINE> mtu 0 index 2
inet 0.0.0.0 netmask 0
groupname sol10-ipmp
ether 0:c:29:fb:38:68
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname sol10-ipmp
ether 0:c:29:fb:38:72
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.4.61 netmask ffffff00 broadcast 10.0.4.255
bash-3.00#
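While e1000g0 is detached, the address should remain reachable through e1000g1. A quick sanity check from another host on the same subnet (a hypothetical client machine) would look like this:

bash-3.00# ping 10.0.4.61
10.0.4.61 is alive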


To fail back:
bash-3.00# if_mpadm -r e1000g0
The -r flag reattaches the interface.
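With the default FAILBACK=yes in /etc/default/mpathd, the address moves back to the primary interface once it is reattached. On this example host the expected result would look like:

bash-3.00# ifconfig e1000g0
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.0.4.61 netmask ffffff00 broadcast 10.0.4.255
groupname sol10-ipmp
ether 0:c:29:fb:38:68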

