Increase ZFS root pool
In ZFS, you cannot extend the root pool by adding new disks, and there is a reason for this restriction: if the root zpool spanned more than one disk, the loss of any one of those disks would leave the system unbootable. To avoid that situation, it is better to keep the rpool on a single disk and mirror it, instead of spreading it over multiple disks.
But sometimes project teams make the mistake of keeping /var, /usr and /home under rpool, and the system may run out of root disk space.
EFI labels are not supported for the Solaris rpool; the rpool disk must be SMI labeled, with all sectors allocated to slice 0, like the one below.
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                256     1015.86MB          2080733
  1 unassigned    wm                  0         0                    0
  2 unassigned    wm                  0         0                    0
  3 unassigned    wm                  0         0                    0
  4 unassigned    wm                  0         0                    0
  5 unassigned    wm                  0         0                    0
  6 unassigned    wm                  0         0                    0
  8   reserved    wm            2080734      8.00MB            2097117
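Before reusing a disk for rpool, you can inspect its current label with prtvtoc; this is only a sketch, using the root disk name from this article:

```shell
# Sketch: print the VTOC of the candidate root disk (disk name assumed
# from this article). An SMI label reports the slice table in cylinders;
# an EFI-labeled disk can be relabeled as SMI via `format -e` (label -> SMI).
prtvtoc /dev/rdsk/c1t0d0s2
```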
As a test, here I am trying to extend the rpool using a new disk:

# zpool add rpool c1t8d0s0
cannot add to 'rpool': root pool cannot have multiple vdevs or separate logs
Solution:
1. Add a bigger root hard disk (i.e. if the current root disk is 40GB, then add an 80GB hard disk).
2. Mirror the rpool using the new 80GB hard disk.
3. Install the boot block on the new disk.
4. Detach the old disk from the rpool.
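In command form, the steps look roughly like this (a sketch only, using the disk names from the walkthrough that follows; verify each step before running the next):

```shell
# Sketch of the overall procedure (x86; SPARC would use installboot).
# Step 1 is physical: present the bigger disk and slice it with format.
zpool attach rpool c1t0d0s0 c1t9d0s0          # step 2: mirror onto new disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0   # step 3
# Wait for `zpool status rpool` to report "resilver completed", then:
zpool detach rpool c1t0d0s0                   # step 4: drop the old disk
```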
# zpool list rpool
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool  11.9G  8.56G  3.38G    71%  ONLINE  -
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
Here I am going to extend my rpool to 20GB. My new 20GB hard disk is c1t9d0.
# format c1t9d0
selecting c1t9d0
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p
WARNING - This disk may be in use by an application that has
          modified the fdisk table. Ensure that this disk is
          not currently in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 2606c
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2607 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       1 - 2605       19.96GB    (2606/0/0) 41865390
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2606       19.97GB    (2607/0/0) 41881455
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 - 0           7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0
partition> l
Ready to label disk, continue? y
partition> q
format> q
#
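As a sanity check on the slice sizing: the Blocks column is cylinders times blocks-per-cylinder (16065 here, as slice 8's single cylinder shows), and the size is blocks times 512 bytes. A small sketch of the arithmetic:

```shell
# Verify the 19.96GB reported for slice 0: 2606 cylinders of 16065 blocks,
# with 512-byte blocks.
blocks=$((2606 * 16065))
bytes=$((blocks * 512))
echo "$blocks blocks"                                             # 41865390
awk -v b="$bytes" 'BEGIN { printf "%.2f GB\n", b / (1024 ^ 3) }'  # 19.96 GB
```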
Now we are going to mirror the rpool with the new disk.
# zpool attach rpool c1t0d0s0 c1t9d0s0
Please be sure to invoke installgrub(1M) to make 'c1t9d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.08% done, 7h6m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t9d0s0  ONLINE       0     0     0  6.82M resilvered

errors: No known data errors
Once the mirroring is done, we are good to detach the old disk from the zpool. But make sure you have made the new disk bootable by updating the boot block.
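The "wait until resilver is done" rule can be made explicit in a script; the helper below is hypothetical (not part of the original procedure) and only parses the zpool status text:

```shell
# Hypothetical helper: succeed while the given `zpool status` text still
# shows a resilver in progress, i.e. while detaching would be premature.
is_resilvering() {
    printf '%s\n' "$1" | grep -q "resilver in progress"
}

# Usage sketch (assumes zpool(1M) on Solaris):
#   while is_resilvering "$(zpool status rpool)"; do sleep 60; done
#   zpool detach rpool c1t0d0s0
```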
bash-3.00# zpool detach rpool c1t0d0s0
bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h31m with 0 errors on Tue May  7 02:05:56 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t9d0s0  ONLINE       0     0     0  8.56G resilvered

errors: No known data errors
bash-3.00# df -h /
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/rpooldataset
                        12G   5.6G   1.6G    78%    /
bash-3.00# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
installgrub will work only on x86 servers. For SPARC servers, please use installboot:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0
You may be wondering why, after doing all of this, the root disk space has still not increased. You have to do one small thing to activate the trick: perform a scrub on the rpool and set autoexpand to get the new size.
# zpool scrub rpool
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: scrub in progress for 0h9m, 70.77% done, 0h4m to go
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t9d0s0  ONLINE       0     0     0
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h19m with 0 errors on Tue May  7 02:36:21 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t9d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# df -h /
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/rpooldataset
                        12G   5.6G   1.6G    78%    /
bash-3.00# zpool set autoexpand=on rpool
bash-3.00# df -h /
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/rpooldataset
                        20G   5.6G   9.5G    38%    /
Now you can see that the root filesystem has been extended to 20GB.
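As a side note, if the autoexpand property is not available on your release, or you want to expand a single device explicitly, `zpool online -e` is an alternative; this is a hedged sketch with the same device name, not part of the procedure above:

```shell
# Alternative (sketch): ask ZFS to expand the vdev to the device's full size.
zpool online -e rpool c1t9d0s0
zpool list rpool    # SIZE should now show roughly 20G
```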