Chapter 4. Storage

XenServer provides support for a broad range of storage hardware. The term Storage Repository (SR) is used to describe a particular storage target, on which Virtual Disk Images (VDIs) are stored. A VDI is a disk abstraction that contains the contents of a disk as presented to a virtual machine. XenServer defines an interface to storage hardware that allows VDIs to be supported on a large number of SR types, including local disks, NFS filers, Fibre Channel disks and shared iSCSI LUNs. The SR abstraction allows advanced storage features such as thin provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them.

In working with the XenServer CLI, there are four classes of object that are used to describe, configure, and manage storage:

  • Storage Repository (SR): a storage target containing virtual disks. The SR object provides XenServer with a set of mechanisms for managing disks on that particular type of storage, and allows provisioning operations, such as creating a new virtual disk, to be mapped onto a wide variety of storage types. For example, virtual disks may be stored as logical volumes on a local disk, or as VHD files on an NFS server. SR implementations provide operations for creating, destroying, resizing, cloning, connecting, locking, and discovering the individual Virtual Disk Images (VDIs) that they contain.

    A storage repository is a persistent, on-disk data structure, so the act of "creating" a new SR is similar to that of formatting a disk -- for most SR types, creating a new SR involves erasing any existing data on the specified storage target. SRs are long-lived, and may in some cases be shared among XenServer hosts or moved between them. CLI operations to manage storage repositories are described in Section 5.4.12, “SR commands”.

  • Physical Block Device (PBD): A PBD represents the interface between a physical host and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a host. Importantly, PBDs store the device config fields that are used to connect to and interact with a given storage target. In the case of NFS, for instance, this device config includes the IP address of the NFS server and the associated mount path. PBD objects manage the run-time attachment of a given SR to a given host. CLI operations relating to PBDs are described in Section 5.4.8, “PBD commands”.

  • Virtual Disk Image (VDI): A VDI is an on-disk representation of a virtual disk provided to a guest VM. It is the fundamental unit of virtualized storage in XenServer.

    Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer. CLI operations to manage VDIs are presented in Section 5.4.17, “VDI commands”.

  • Virtual Block Device (VBD): A VBD is a connector object (similar to the PBD described above) that allows mappings between VDIs and VMs. In addition to providing a mechanism to attach (or "plug") a VDI into a VM, VBDs allow the fine-tuning of parameters regarding QoS, statistics, and the bootability of a given VDI. CLI operations relating to VBDs are described in Section 5.4.16, “VBD commands”.
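
As an illustration of how these four objects relate, the following sketch walks from an SR down to the VBDs that plug its VDIs into VMs; the UUIDs are placeholders. xe sr-list shows the SRs known to the pool; xe pbd-list filtered by SR shows which hosts attach it and with what device config; xe vdi-list shows the virtual disks it contains; and xe vbd-list filtered by VDI shows the VBDs connecting that disk to VMs:

xe sr-list
xe pbd-list sr-uuid=[SR_UUID]
xe vdi-list sr-uuid=[SR_UUID]
xe vbd-list vdi-uuid=[VDI_UUID]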

The remainder of this section describes the common types of storage that are supported by XenServer, their type-specific device configuration options, and some best practices for managing storage in XenServer environments.

4.1. Storage Repository types

This section provides a brief description of the common physical storage types that XenServer supports.

4.1.1. Local Disks

By default, XenServer uses the local disk on the physical host on which it is installed. The Linux Logical Volume Manager (LVM) is used to manage VM storage. In this case a VDI is implemented as an LVM logical volume of the specified size.

Local LVM-based storage is high-performance and allows virtual disks to be dynamically resized. Each virtual disk is fully allocated as an isolated volume on the underlying physical disk, so there is minimal storage virtualization overhead. As such, this is a good option for high-performance storage, but it lacks the flexibility of the file-based storage options described below.

In addition to storing disks on an LVM-managed volume, local disks may be used to serve VDIs stored in the Microsoft VHD format. This may be configured through the XenServer CLI. VHD support is described in Section 4.1.2, “Shared Network Attached Storage - NFS”.

By definition, local disks are not shared across pools of XenServer hosts. As a consequence, VMs whose VDIs are stored in SRs on local disks are not agile; they may not be moved or migrated between hosts in a pool.

Supported device-config parameters for the local LVM SR type and the local VHD SR type are:

  • device - The path to the device on which the SR should be stored.
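
For example, a minimal local LVM creation sketch using this parameter (the device path is a placeholder; fuller examples, including the required host-uuid, appear in Section 4.1.5):

xe sr-create host-uuid=[VALID_UUID] name-label="Example Local LVM SR" type=lvm device-config-device=/dev/sdb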

4.1.2. Shared Network Attached Storage - NFS

The NFS filer is a ubiquitous form of storage infrastructure that is available in many deployments. XenServer allows existing NFS servers to be immediately used as a storage repository for virtual disks (VDIs). VDIs are stored in the Microsoft VHD format, which is ideally suited to NFS environments. Moreover, as NFS SRs are shared, VDIs stored in them allow VMs to be started on any host in a Resource Pool and be live migrated between them using XenMotion.

To configure an NFS SR, simply provide the hostname or IP address of the NFS server and the path to a directory that will be used to contain the SR. The NFS server must be configured to export the specified path to all hosts in the pool, otherwise the creation of the SR or the plugging of the PBD record will fail.
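
On a Linux-based NFS server, the export might look like the following line in /etc/exports (an illustrative sketch; the path, client specification, and options are placeholders, and every XenServer host in the pool must be covered by the client specification):

/export/xenserver-sr 192.168.0.0/24(rw,no_root_squash)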

VDIs stored on NFS are sparse by default: the image file is allocated as the VM writes data into the disk. This has the considerable benefit that VM image files take up only as much space on the NFS filer as is required: If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will only reflect the size of the OS data that has been written to the disk, typically only a few gigabytes.
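
To see this in practice, compare the virtual size a VDI advertises to its VM with the space the corresponding image file actually consumes on the filer. The UUIDs and on-filer path below are hypothetical placeholders; the exact file layout depends on the SR:

xe vdi-param-get uuid=[VDI_UUID] param-name=virtual-size
du -h /export/xenserver-sr/[SR_UUID]/[VDI_UUID].vhd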

VHD files may also be chained, allowing two VDIs to share common data. In cases where an NFS-based VM is cloned, the resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to make its own changes in an isolated copy-on-write version of the VDI. This feature allows NFS-based VMs to be quickly cloned from templates -- facilitating very fast provisioning and deployment of new VMs.

Because VHD-based images involve extra metadata to provide sparseness and chaining, the format does not perform as well as LVM-based storage. In cases where performance really matters, it is well worth forcibly allocating the sparse regions of an image file. This will improve performance at the cost of consuming additional disk space on the filer.

XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the NFS server. Administrators are advised not to modify the contents of this directory, as doing so risks corrupting the contents of VDIs.

When considering best practices for NFS performance, note that XenServer has been tuned for enterprise-class filers that use non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of protection from failure. For reference, XenServer has been tested extensively against Network Appliance FAS270c and FAS3020c filers, using Data OnTap 7.2.2.

In situations where XenServer is used with lower-end filers, it will err on the side of caution by waiting for all writes to be acknowledged before passing acknowledgments on to guest VMs. This incurs a noticeable performance cost, and may be remedied by setting the filer to present the SR mount point as an asynchronous mode export. Asynchronous exports, however, acknowledge writes that are not actually on disk, so administrators should consider the risks of failure carefully in these situations.
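
On a Linux-based filer this typically means adding the async option to the export line shown earlier (again an illustrative sketch; as noted above, asynchronous exports trade durability for speed):

/export/xenserver-sr 192.168.0.0/24(rw,async,no_root_squash)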

Supported device-config parameters for the NFS SR type are:

  • server - The IP address or DNS name of the NFS server.

  • serverpath - The path on the server in which the SR should reside.
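
A minimal NFS creation sketch using these parameters (the server name and export path are placeholders; the device-config flag form follows the examples in Section 4.1.5):

xe sr-create host-uuid=[VALID_UUID] name-label="Example NFS SR" shared=true type=nfs device-config-server=nfs.example.com device-config-serverpath=/export/xenserver-sr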

4.1.3. Shared iSCSI SANs

In addition to NFS, XenServer provides support for shared SRs on iSCSI LUNs, using the open-iSCSI software iSCSI initiator. Shared iSCSI support is implemented based on the Linux Logical Volume Manager (LVM) and provides the same performance benefits as LVM in the local disk case. iSCSI-based SRs enable VM agility -- VMs may be started on any host in a pool and migrated between them. However, iSCSI-based VDIs do not provide support for sparse provisioning or fast cloning.

Each XenServer host is automatically configured with a single iSCSI software adapter and assigned a random initiator IQN. The initiator IQN is an iSCSI identifier that uniquely identifies the host when connecting to an iSCSI target. Most targets provide access control via IQN lists, so it is important to ensure that every host always retains a unique IQN value. The host IQN value can be adjusted manually using XenCenter or via the CLI:

xe host-param-set uuid=[VALID_HOST_ID] other-config-iscsi_iqn=[NEW_INITIATOR_IQN]

Warning

Do not change the host adapter IQN while there are still iSCSI SRs attached. Doing so may result in failed attempts to connect to new targets or to existing SRs.

XenServer supports only a single configured iSCSI adapter. The adapter can connect to multiple different iSCSI targets and target IQNs in parallel; however, it does not support configuration of more than one host initiator IQN, so all targets must be configured to allow access from the same initiator IQN.

Warning

Some iSCSI targets do not provide per-initiator IQN filtering/ACLs. Ensure that multi-initiator access is enabled if any LUNs are intended to be shared between more than one host in a pool.

iSCSI SRs may be created through both XenCenter and the CLI. XenServer hosts are limited to a single source initiator IQN, but may connect to multiple targets. Each individual iSCSI SR must be contained entirely on a single LUN, and may not span LUNs. CHAP support is provided for client authentication, both during the data path initialization and during the LUN discovery phase.

Supported device-config parameters for the shared LVM over iSCSI SR type are:

  • target - The IP address or DNS name of the iSCSI target.

  • targetIQN - The IQN (iSCSI Qualified Name) offered by the target that should be used.

  • LUNid - The bus ID of the LUN on which the SR is created.

  • chapuser - The CHAP authentication username credential that should be applied when connecting to the target. (optional)

  • chappassword - The CHAP authentication password credential that should be applied when connecting to the target. (optional)

  • usediscoverynumber - In rare cases, for multi-homed hosts, a target record may appear more than once. This option allows an advanced user to specify an alternative record number to attach. This option should be used with the scan facility outlined below to help in discovering the record ID. (optional)
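
A minimal creation sketch using CHAP credentials (all values are placeholders; the device-config flag form follows the examples in Section 4.1.5, with the optional chapuser and chappassword parameters added):

xe sr-create host-uuid=[VALID_UUID] name-label="Example iSCSI SR" shared=true type=lvmoiscsi device-config-target=[TARGET_IP] device-config-targetIQN=[VALID_TARGET_IQN] device-config-LUNid=[VALID_LUNID] device-config-chapuser=[CHAP_USERNAME] device-config-chappassword=[CHAP_PASSWORD]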

To aid in the LUN discovery and identification phase, certain scan facilities are provided by the sr-create command:

If the targetIQN parameter is left blank, the command will fail and return an XML string listing the available targetIQNs on the specified target. Each target will also provide an index value that may be provided as the usediscoverynumber value.

If the LUNid parameter is left blank, the command will fail and return an XML string listing the available LUNids on the specified target and targetIQN.

4.1.4. FC SANs

XenServer hosts can also use Fibre Channel SANs via an Emulex or QLogic host bus adapter (HBA). Logical unit numbers (LUNs) are mapped to the XenServer Host as disk devices (/dev/sdx), just as physical disks would be.

The Command Line Interfaces (CLIs) for configuring and managing Emulex and QLogic Fibre Channel HBAs are included in the XenServer Host in the following locations:

  • Emulex: /usr/sbin/hbanyware

  • QLogic: /opt/QLogic_Corporation/SANsurferCLI

If you are using a Fibre Channel SAN with an HBA that supports boot from LUN, complete all of your boot-from-LUN setup before installing the XenServer Host. During installation, simply select the remote LUNs as if they were local disk drives. Once you complete the installation and reboot, the system will boot from the remote LUN.

Fibre Channel LUNs will appear on the host as SCSI devices. Each SCSI device is symlinked under the directory /dev/disk/by-id using its unique scsi_id. If you are unsure which scsi_id corresponds to which device, you can query a device with the sginfo command followed by the path. For example: sginfo /dev/disk/by-id/{scsi_id}.
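
To list the scsi_id symlinks present on a host and see which core device each currently points to, a simple directory listing suffices (the device names shown will of course differ per host):

ls -l /dev/disk/by-id/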

Fibre Channel disks should always be referenced by this path, since it provides persistent device identification regardless of the core device name assigned by the host, which may change, for example across host reboots.

If you add an Emulex or QLogic HBA to the XenServer Host after installation, you should edit /etc/modprobe.conf and add a line like this:

alias scsi_hostadaptern module_name
		

where n is the next available scsi_hostadapter number, and module_name is the appropriate module name for your HBA. Emulex cards are supported by the lpfcdfc module and QLogic cards by the qla**** modules, where **** corresponds to the version number. For full compatibility details, see the online Hardware Compatibility List for the latest information.
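
For example, assuming the host already defines scsi_hostadapter and an Emulex card is being added, the new line might read (the adapter number is a placeholder for the next free one on your host):

alias scsi_hostadapter1 lpfcdfc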

For more information on Fibre Channel host adaptors, see the Emulex website and the QLogic website.

4.1.5. Storage Configuration Examples

The following section provides guidelines on how to create, attach, detach, and delete SRs on a XenServer Host. The examples provided pertain to storage configuration via the CLI, which provides the greatest flexibility.

Creating a new SR requires the use of the sr-create command. This command creates a new SR record in the database and a corresponding PBD record. Upon successful creation of the SR, the PBD will be automatically plugged. If the shared=true flag is set, a PBD entry will be created for every host in the pool and plugged on each host. The following steps illustrate how to create a local LVM SR and a shared LVM over iSCSI SR. In each case, a correct host-uuid is required:

  1. xe sr-create host-uuid=[VALID_UUID] content-type="Example Content-type" name-label="Example Local LVM SR" shared=false device-config-device=/dev/sdb type=lvm to create a local LVM SR on device /dev/sdb. If successful, the CLI command will return an SR UUID.
  2. xe pbd-list sr-uuid=[SR_UUID] will list the status of the PBD, and if successful will show the 'currently-attached' status as 'true'.

Creating an LVM over iSCSI SR is very similar, except that it is a shareable SR type, so a PBD is created for every host in the pool. Additionally, the device-config-* parameters differ: a target, a targetIQN, and a LUNid parameter must be provided.

  1. xe sr-create host-uuid=[VALID_UUID] content-type="Example Content-type" name-label="My iSCSI SR" shared=true device-config-target=hostname type=lvmoiscsi device-config-targetIQN=NULL to provide a list of iSCSI targets available to this host from the specified target.
  2. xe sr-create host-uuid=[VALID_UUID] content-type="Example Content-type" name-label="My iSCSI SR" shared=true device-config-target=hostname type=lvmoiscsi device-config-targetIQN=[VALID_TARGET_IQN] to provide a list of LUNs available to this host via the targetIQN.
  3. xe sr-create host-uuid=[VALID_UUID] content-type="Example Content-type" name-label="My iSCSI SR" shared=true device-config-target=hostname type=lvmoiscsi device-config-targetIQN=[VALID_TARGET_IQN] device-config-LUNid=[VALID_LUNID] to create an LVM SR on the specified target LUN.
  4. xe pbd-list sr-uuid=[SR_UUID] will list the status of all the PBDs, for every host in the pool. If successful, all PBDs will show the “currently-attached” status as “true”.

Introducing an existing SR to a host requires the manual generation of both an SR record and a PBD record. Furthermore, the PBD must be manually plugged to activate the SR on the host:

  1. xe sr-introduce content-type="Example Content-type" name-label="Example Shared LVM over iSCSI SR" shared=true uuid=[VALID_SR_UUID] type=lvmoiscsi to create an SR record for the specified SR UUID.
  2. xe pbd-create host-uuid=[VALID_UUID] sr-uuid=[VALID_SR_UUID] device-config-target=examplemachinename type=lvmoiscsi device-config-targetIQN=[VALID_TARGET_IQN] device-config-LUNid=[VALID_LUNID] to create a valid PBD record to accompany the SR record above.
  3. xe pbd-plug uuid=[PBD_UUID] to attach the shared LVM over iSCSI SR to the specified host.

Destroying an SR will actually delete the contents of the SR from the physical substrate. Alternatively, an SR record can be forgotten, which allows a user to re-attach the SR (for example, to another host) without removing any of the SR contents. In both cases, the SR's PBD must first be unplugged:

  1. xe pbd-unplug uuid=[PBD_UUID] to detach the SR from the corresponding host.
  2. xe sr-destroy uuid=[SR_UUID] will destroy the contents of the SR and delete both the SR and its corresponding PBD record.
  3. xe sr-forget uuid=[SR_UUID] will remove the SR entry from the host database.

4.1.6. Summary

The following table summarizes the storage repository capabilities described above:

SR type     Description                             Shared?   Sparse?   VDI Resize?   Fast Clone?
lvm         LVM on Local Disk or attached FC LUN    no        no        yes           no
ext         VHD on Local Disk                       no        yes       no            yes
nfs         Network File System (NFS)               yes       yes       no            yes
lvmoiscsi   Logical Volume Management over iSCSI    yes       no        yes           no

All storage repositories in XenServer are implemented as Python scripts and stored within the control domain's file system in /opt/xensource/sm. Advanced users may examine and even modify these scripts to adapt storage operations to their needs.

This is considered an advanced operation, and is not supported. However, visibility and customization of low-level storage management is of considerable value to some power users. Note also that new SR implementations may be placed in this directory and will be automatically detected by XenServer. The available SR types may be listed using the sm-list command (see Section 5.4.11, “Storage Manager commands”).
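
For example, the registered SR backends and their types can be listed directly from the CLI (a minimal sketch; the params filter simply restricts the output to the named fields):

xe sm-list params=name-label,type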