Working with Linux volumes on vSphere

Modern Linux distributions running on VMware vSphere are capable of dynamic storage management: you can create new file systems or extend existing ones without stopping services or rebooting servers. Here are some examples of how to do that. These methods work at least on Red Hat Enterprise Linux 5, 6 and 7 and on clones such as CentOS, Oracle Linux and Scientific Linux.

Things to consider with Linux volumes

If you intend to use the entire new disk for a single file system, I recommend that you skip partitioning and use the plain disk as an LVM physical volume instead; it simplifies volume management a lot. You could also choose not to use LVM and format the new disk as-is with ext3, but I do not recommend that, since LVM has many benefits which I might cover in a later article. If you do need to use partitioning, be aware of the following caveats:

  1. To avoid a performance penalty from file system misalignment, you need to align the beginning of the new partition with the underlying storage device, for example using fdisk expert commands (a sketch follows after this list)
  2. Using partitions makes extending file systems much more complicated than using plain disks

If the file system is not properly aligned with your storage, you can lose up to 15% of your available IOPS.
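
For illustration, here is a minimal sketch that uses parted (covered below) instead of the fdisk expert route to create a single aligned partition. It assumes the new disk is /dev/sdb, that a 1 MiB starting offset suits your storage, and a parted version recent enough to support the -a alignment option; adjust for your environment.

# parted -s /dev/sdb mklabel msdos
# parted -s -a optimal /dev/sdb mkpart primary 1MiB 100%
# parted /dev/sdb unit s print

The final print command shows the starting sector of the partition, which you can verify against your storage vendor's alignment recommendation.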

Useful tools for storage management

Sg3_utils

sg3_utils includes tools for SCSI device management, such as sg_unmap and rescan-scsi-bus.sh. Example output:

# rescan-scsi-bus.sh -s
Scanning SCSI subsystem for new devices
Searching for resized LUNs
RESIZED: Host: scsi0 Channel: 00 Id: 01 Lun: 00
 Vendor: VMware Model: Virtual disk Rev: 1.0
 Type: Direct-Access ANSI SCSI revision: 02
0 new or changed device(s) found.
1 remapped or resized device(s) found.
 [0:0:1:0]
0 device(s) removed.

Yum install command

# yum install sg3_utils

lsscsi

lsscsi is a simple tool for listing SCSI devices and their properties, such as timeout and queue depth values. Example output:

# lsscsi -l
[1:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0
 state=running queue_depth=1 scsi_level=6 type=5 device_blocked=0 timeout=30
[2:0:0:0] disk VMware Virtual disk 1.0 /dev/sda
 state=running queue_depth=64 scsi_level=3 type=0 device_blocked=0 timeout=180

Yum install command

# yum install lsscsi

Parted

GNU Parted is a replacement for fdisk for Linux partition management.

Yum install command

# yum install parted

Iotop

iotop is a top-like tool for examining per-process or per-thread storage I/O load:

# iotop

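iotop can also be run non-interactively; for example, the batch options below print three samples showing only the processes that are actually generating I/O:

# iotop -o -b -n 3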

Yum install command

# yum install iotop

Attach new disks without reboot

After a new virtual disk has been added to the virtual machine, you need to rescan for new disks in Linux. If you have sg3_utils installed, you can use the rescan-scsi-bus.sh script; the -a option scans all targets:

# rescan-scsi-bus.sh -a

If you do not have the rescan-scsi-bus.sh script available, you can scan for new disks with the command below; replace "host0" with the SCSI host of the controller you added the disks to:

# echo "- - -" > /sys/class/scsi_host/host0/scan

Use the dmesg command to check which new devices have been added; watch for the "Attached scsi disk" message:

# dmesg | tail -n 10 | grep Attached
sd 0:0:1:0: Attached scsi disk sdb

Now create an LVM physical volume, volume group, logical volume and file system on your new disk:

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
# vgcreate VolGroup01 /dev/sdb
  Volume group "VolGroup01" successfully created
# lvcreate -n LogVol00 -l+100%VG VolGroup01
  Logical volume "LogVol00" created
# mkfs.ext3 /dev/VolGroup01/LogVol00

Or, if you know that you won't need LVM, you can simply format the plain disk with:

# mkfs.ext3 /dev/sdb

Your new volume is ready for mounting.
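
For example, assuming the LVM layout created above and a hypothetical /data mount point, you could mount the volume and make it persistent with an /etc/fstab entry like this:

# mkdir /data
# mount /dev/VolGroup01/LogVol00 /data
# echo "/dev/VolGroup01/LogVol00 /data ext3 defaults 1 2" >> /etc/fstab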

Rescan disk size changes

After a virtual disk has been resized, you need to rescan for the change in Linux. This can be done in at least two ways. If you have sg3_utils installed, you can use the rescan-scsi-bus.sh script with the -s option to rescan all connected disks for size changes:

# rescan-scsi-bus.sh -s

If you do not have sg3_utils installed, you can trigger a rescan through the Linux /sys file system. To do this you need to know the SCSI ID of the disk that changed; replace 0:0:1:0 in the example below with the correct SCSI ID:

# echo 1 > /sys/bus/scsi/devices/0\:0\:1\:0/rescan
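
If you are unsure of the SCSI ID, lsscsi (covered above) prints it in the first column next to the matching device name; for the /dev/sdb used in these examples, the relevant line would look roughly like this:

# lsscsi
[0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb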

As Chris pointed out in the comments, you can also rescan the correct device simply by echoing 1 into /sys/block/[drive]/device/rescan:

# echo 1 > /sys/block/sdb/device/rescan

Use dmesg to check that the resize was successful; watch for the "capacity change" message:

# dmesg | tail -n 10 | grep change
sdb: detected capacity change from 8589934592 to 17179869184
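
You can also confirm that the kernel now sees the new size, for example with fdisk (assuming the resized disk is /dev/sdb):

# fdisk -l /dev/sdb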

Resizing an ext2/3/4 file system

If LVM is not used, you can resize the file system online, while it is mounted, with:

# resize2fs /dev/sdb

Resizing a btrfs file system

You can resize a btrfs volume to match the underlying device size with the command btrfs filesystem resize max <mountpoint>. This is an online operation, so there is no need to unmount the file system first. Example of expanding the /data volume:

# btrfs filesystem resize max /data

Resizing an XFS file system

An XFS file system can be resized to match the underlying device size with xfs_growfs -d <mountpoint>. Example of expanding the /data volume:

# xfs_growfs -d /data

Extending an LVM volume and file system, the easy way with no partitions

Rescan the LVM physical volume size change with pvresize; replace sdb with the correct device:

# pvresize /dev/sdb

Extend the LVM logical volume to the new full size of the volume group:

# lvextend -l+100%VG /dev/VolGroup01/LogVol00

Now you can resize the mounted ext3 file system:

# resize2fs /dev/VolGroup01/LogVol00

And you are done
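
As a quick sanity check, you can list the volume group, logical volume and mounted file system sizes; the /data mount point here is just an example, use your own:

# vgs VolGroup01
# lvs VolGroup01
# df -h /data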

Extending an LVM volume and file system, the hard way with partitions in use

You cannot resize an LVM physical volume that sits on a partition without a reboot. When using a single disk, your only option is to create a new partition in the space gained by extending the disk, and then use the partprobe command to reload the partition table without rebooting. You could also add a new disk and extend the volume group onto that. Create a new partition on the existing disk:

# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (3134-3916, default 3134):
Using default value 3134
Last cylinder or +size or +sizeM or +sizeK (3134-3916, default 3916):
Using default value 3916

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

Run partprobe to reload the partition table:

# partprobe /dev/sdb

Create a new LVM physical volume on the new partition:

# pvcreate /dev/sdb2

Add the new physical volume to the existing volume group:

# vgextend VolGroup01 /dev/sdb2

Extend the logical volume up to the maximum size of the volume group:

# lvextend -l+100%VG /dev/VolGroup01/LogVol00

Finally, you can resize the mounted ext3 file system:

# resize2fs /dev/VolGroup01/LogVol00

Done. See how much more work there is when using partitions? There is absolutely no point in creating a partition table for a standard ext3 file system when the intention is to use the whole disk for a single file system.

Comments on "Working with Linux volumes on vSphere"


  1. Juan, Dec 10, 2010 at 20:09

    Great article Tomi. I am using Ubuntu 8.40 kernel 2.6.24-24 on an ESX server VM.

    I have an existing volume group sitting on a virtual disk that I want to grow. I expand the VMDK without any problems, but when it comes to detecting the disk change on the Linux VM I have problems.

    I rescan the SCSI bus as stated in the tutorial but I don't get the capacity change message in my logs. I can see that the bus is rescanned, but fdisk keeps showing the old size (10G).

    Here are the kernel messages:

    Dec 10 10:31:57 vmdk-grow-test kernel: [70197.862014] ata2: EH complete
    Dec 10 10:32:16 vmdk-grow-test kernel: [70217.037848] sd 2:0:1:0: [sdb] 31457280 512-byte hardware sectors (16106 MB)
    Dec 10 10:32:16 vmdk-grow-test kernel: [70217.037899] sd 2:0:1:0: [sdb] Write Protect is off
    Dec 10 10:32:16 vmdk-grow-test kernel: [70217.037902] sd 2:0:1:0: [sdb] Mode Sense: 03 00 00 00
    Dec 10 10:32:16 vmdk-grow-test kernel: [70217.037930] sd 2:0:1:0: [sdb] Cache data unavailable
    Dec 10 10:32:16 vmdk-grow-test kernel: [70217.037932] sd 2:0:1:0: [sdb] Assuming drive cache: write through

    If I reboot the VM the changes are applied, but I don’t want to do that unless it is completely required.

    Any help is appreciated.

  2. Chris, Feb 22, 2011 at 20:10

    Great article!
    In testing I found that you can resize the disk without knowing the SCSI ID:

    echo 1 > /sys/block/sdb/device/rescan

    Makes the process even easier.



  3. John Beckmann, Aug 21, 2013 at 04:26

    You can actually extend a partition by simply deleting it and recreating it using the new size:

    # fdisk /dev/sdb
    Command (m for help): d
    Selected partition 1

    Command (m for help):n
    Command action
    e extended
    p primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-46705, default 1):
    Using default value 3134
    Last cylinder, +cylinders or +size{K,M,G} (1-46705, default 46705):
    Using default value 46705

    Command (m for help): t
    Selected partition 1
    Hex code (type L to list codes): 8e
    Changed system type of partition 1 to 8e (Linux LVM)

    Command (m for help): w
    The partition table has been altered!

    Calling ioctl() to re-read partition table.

    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table.
    The new table will be used at the next reboot.
    Syncing disks.

    Run partprobe to reload partition table
    # partprobe /dev/sdb

    Rescan LVM physical volume size change with pvresize, replace sdb with correct device

    # pvresize /dev/sdb

    Extend LVM logical volume to new full size of volume group

    # lvextend -l+100%FREE /dev/VolGroup01/LogVol00

    Now you can resize mounted ext3 file system

    # resize2fs /dev/VolGroup01/LogVol00

    And you are done


