XenServer Administrator's Guide

Release 5.0.0

Table of Contents

1. Overview
1.1. XenServer hosts and resource pools
1.2. Networking
1.3. Storage
1.4. Monitoring and managing XenServer
1.5. Command Line Interface
1.6. How this Guide relates to other documentation
2. XenServer hosts and resource pools
2.1. Requirements for creating resource pools
2.2. Creating a resource pool
2.3. Adding shared storage
2.4. Installing and managing VMs on shared storage
2.5. Removing a XenServer host from a resource pool
2.6. High Availability (HA)
2.6.1. Requirements for configuration
2.6.2. Restart priorities
2.7. Enabling HA on a XenServer pool
2.7.1. Enabling HA using the CLI
2.7.2. Removing HA protection from a VM using the CLI
2.7.3. Host Fencing
2.7.4. Recovering an unreachable host
2.7.5. Shutting down a host when HA is enabled
2.7.6. Shutting down a VM when it is protected by HA
2.8. Backups
2.9. Full metadata backup and disaster recovery
2.9.1. Moving SRs between hosts and Pools
2.10. VM Snapshots
2.10.1. Regular Snapshots
2.10.2. Quiesced Snapshots
2.10.3. Taking a VM snapshot
2.10.4. VM Rollback
2.11. Coping with machine failures
2.11.1. Member failures
2.11.2. Master failures
2.11.3. Pool failures
2.11.4. Coping with Failure due to Configuration Errors
2.11.5. Physical Machine failure
3. Storage
3.1. Overview
3.1.1. Storage repositories (SRs)
3.1.2. Virtual Disk Images (VDI)
3.1.3. Managing Storage
3.2. Storage Repository Types
3.2.1. Local Disks
3.2.2. Local hotplug devices
3.2.3. Shared Network Attached Storage using NFS
3.2.4. Shared iSCSI Storage
3.2.5. Shared LVM storage over FC or iSCSI hardware HBAs
3.2.6. Shared NetApp Storage
3.2.7. Shared EqualLogic Storage
3.3. Storage configuration examples
3.3.1. Creating Storage Repositories
3.3.2. Probing an SR
3.3.3. Creating a local LVM SR (lvm)
3.3.4. Creating a local EXT3 SR (ext)
3.3.5. Creating a shared NFS SR (nfs)
3.3.6. Creating a shared LVM over iSCSI SR using the software iSCSI initiator (lvmoiscsi)
3.3.7. Creating a shared LVM over Fibre Channel or iSCSI HBA SR (lvmohba)
3.3.8. Creating a shared NetApp SR over iSCSI
3.3.9. Creating a shared EqualLogic SR
3.3.10. Storage Multipathing
3.4. Managing Storage Repositories
3.4.1. Destroying or forgetting a SR
3.4.2. Introducing an SR
3.4.3. Converting local Fibre Channel SRs to shared SRs
3.4.4. Moving Virtual Disk Images (VDIs) between SRs
3.4.5. Managing VDIs in a NetApp SR
3.4.6. Taking VDI snapshots with a NetApp SR
3.4.7. Adjusting the disk IO scheduler for an LVM-based SR
3.5. Managing Host Bus Adapters (HBAs)
3.5.1. Sample QLogic iSCSI HBA setup
3.5.2. Removing HBA-based FC or iSCSI device entries
3.5.3. Enabling Virtual HBA support
3.6. Virtual disk QoS settings (Enterprise Edition only)
4. Networking
4.1. XenServer networking concepts
4.1.1. Network objects
4.1.2. Networks
4.1.3. VLANs
4.1.4. NIC bonds
4.2. Initial networking configuration
4.3. Managing networking configuration
4.3.1. Creating networks in a standalone server
4.3.2. Creating networks in resource pools
4.3.3. Creating VLANs
4.3.4. Creating NIC bonds on a standalone host
4.3.5. Creating NIC bonds in resource pools
4.3.6. Configuring a dedicated storage NIC
4.3.7. Controlling Quality of Service (QoS)
4.3.8. Changing networking configuration options
4.3.9. NIC/PIF ordering in resource pools
4.4. Networking Troubleshooting
4.4.1. Diagnosing network corruption
4.4.2. Recovering from a bad network configuration
5. Monitoring and managing XenServer
5.1. Alerts
5.1.1. Customizing Alerts
5.1.2. Configuring Email Alerts
5.2. Custom Fields and Tags
5.3. Custom Searches
6. Command line interface
6.1. Basic xe syntax
6.2. Special characters and syntax
6.3. Command types
6.3.1. Parameter types
6.3.2. Low-level param commands
6.3.3. Low-level list commands
6.4. xe command reference
6.4.1. Bonding commands
6.4.2. CD commands
6.4.3. Console commands
6.4.4. Event commands
6.4.5. Host (XenServer host) commands
6.4.6. Log commands
6.4.7. Network commands
6.4.8. Patch (update) commands
6.4.9. PBD commands
6.4.10. PIF commands
6.4.11. Pool commands
6.4.12. Storage Manager commands
6.4.13. SR commands
6.4.14. Task commands
6.4.15. Template commands
6.4.16. Update commands
6.4.17. User commands
6.4.18. VBD commands
6.4.19. VDI commands
6.4.20. VIF commands
6.4.21. VLAN commands
6.4.22. VM commands
7. Troubleshooting
7.1. XenServer host logs
7.1.1. Sending log messages to a central server
7.2. Troubleshooting connections between XenCenter and the XenServer host
A. CPU Allocation Guidelines
Index

This document is a system administrator's guide to XenServer™, the platform virtualization solution from Citrix™. It describes the tasks involved in configuring a XenServer deployment -- in particular, how to set up storage, networking and resource pools, and how to administer XenServer hosts using the xe command line interface (CLI).

This section summarizes the rest of the guide so that you can find the information you need. The following topics are covered:

  • XenServer hosts and resource pools
  • XenServer network configuration
  • XenServer storage configuration
  • Monitoring and Managing XenServer
  • XenServer command line interface

A resource pool is a connected group of up to 16 XenServer hosts that, combined with shared storage, provides a platform on which VMs run. VMs can be started on different hosts within the resource pool, and can even be live migrated between pool hosts in order to minimize downtime.

The XenServer hosts and resource pools chapter introduces the concept of resource pools and describes how to:

  • add and remove XenServer hosts from pools
  • create shared storage and attach it to a pool
  • start VMs on different XenServer hosts within a pool
  • live migrate running VMs between XenServer hosts within a pool

The Networking chapter introduces physical and virtual network concepts, describing how to:

  • configure physical networking on XenServer hosts and resource pools
  • create virtual network interfaces for VMs, and bridge these to physical networks
  • work with VLANs

The Storage chapter introduces physical and virtual storage concepts, describing how to:

  • create shared and local storage repositories on a variety of different substrates (including iSCSI, NFS and Fibre Channel)
  • create virtual disk images within storage repositories as part of the process of installing VMs
  • manage and administer storage repositories

The Command Line Interface chapter introduces "xe": a powerful CLI that facilitates administration of all aspects of XenServer, including host configuration, storage, networking and VMs. The CLI guide describes:

  • the syntax of xe commands
  • using xe in both on- and off-host modes on both Windows and Linux
  • using xe to query the parameters of physical and virtual objects, including hosts, networks, storage repositories, virtual disks and VMs
  • using xe to configure XenServer deployments (including host, network, storage and VM configuration)

A resource pool comprises multiple XenServer host installations, bound together into a single managed entity which can host Virtual Machines. When combined with shared storage, a resource pool enables VMs to be started on any XenServer host which has sufficient memory and then dynamically moved between XenServer hosts while running with minimal downtime (XenMotion). If an individual XenServer host suffers a hardware failure, then the administrator can restart the failed VMs on another XenServer host in the same resource pool. If high availability (HA) is enabled on the resource pool, VMs will automatically be moved if their host fails. Up to 16 hosts are supported per resource pool, although this restriction is not enforced.

This chapter describes how resource pools can be created through a series of examples using the xe command line interface (CLI). A simple NFS-based shared storage configuration is presented and a number of simple VM management examples are discussed. Procedures for dealing with physical node failures are also described.

A pool always has at least one physical node, known as the master. Other physical nodes join existing pools and are described as members. Only the master node exposes an administration interface (used by XenCenter and the CLI); the master will forward commands to individual members as necessary.

A resource pool is an aggregate of one or more homogeneous XenServer hosts, up to a maximum of 16. The definition of homogeneous is:

In addition to being homogeneous, an individual XenServer host can only join a resource pool if:

XenServer hosts in resource pools may contain different numbers of physical network interfaces. Local storage repositories may also exist, of varying size. In practice, it is often difficult to obtain multiple servers with the exact same CPUs, and so minor variations are permitted. If you are sure that it is acceptable in your environment for hosts with varying CPUs to be part of the same resource pool, then the pool joining operation can be forced.

Although not a strict technical requirement for creating a resource pool, the advantages of pools (for example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a VM between XenServer hosts) are only available if the pool has one or more shared storage repositories. As a rule of thumb, you should postpone creating a pool of XenServer hosts until shared storage is available. Once shared storage has been added, we recommend that you move existing VMs whose disks are in local storage onto shared storage. This can be done using the xe vm-copy command or XenCenter.
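
For example, the following sketch copies a VM whose disks are in local storage onto a shared SR using xe vm-copy (the VM name, new label and SR UUID are placeholders for your own values):

xe vm-copy vm=<vm_name> new-name-label=<vm_name_on_shared_storage> \
  sr-uuid=<shared_sr_uuid>

Note that vm-copy leaves the original VM in place; once the copy has been verified, the original can be removed.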

Resource pools can be created using either the XenCenter management console or the CLI. When you join a new host to a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:

The following example shows how to install a Debian Linux VM using the Debian Etch 4.0 template provided with XenServer.

Installing a Debian Etch (4.0) VM

  1. Open a console on any host in the pool.
  2. Use the sr-list command to find the UUID of your shared storage:
    xe sr-list
  3. Create the Debian VM by issuing the command
    xe vm-install template="Debian Etch 4.0" new-name-label=<etch> \
    sr_uuid=<shared_storage_uuid>
    When the command completes, the Debian VM will be ready to start.
  4. Start the Debian VM with the command
    xe vm-start vm=etch
    The master will choose a XenServer host from the pool to start the VM. If the on parameter is provided, the VM will start on the specified XenServer host; if the requested XenServer host is unable to start the VM, the command will fail. To request that a VM is always started on a particular XenServer host, set the affinity parameter of the VM to the UUID of the desired XenServer host using the xe vm-param-set command (a short example follows this procedure). Once set, the system will start the VM there if it can; if it cannot, it will default to choosing from the set of possible XenServer hosts.
  5. You can use XenMotion to move the Debian VM to another XenServer host with the command
    xe vm-migrate vm=<etch> host=<host_name> --live
    XenMotion keeps the VM running during this process to minimize downtime.

    Note

    When a VM is migrated, the domain on the original hosting server is destroyed and the memory that VM used is zeroed out before Xen makes it available to new VMs. This ensures that there is no information leak from old VMs to new ones. As a consequence, if you send multiple near-simultaneous commands to migrate a number of VMs to a server that is close to its memory limit (for example, a set of VMs consuming 3GB migrated to a server with 4GB of physical memory), the memory of an old domain might not have been scrubbed before the next migration is attempted, causing that migration to fail with a HOST_NOT_ENOUGH_FREE_MEMORY error. Inserting a delay between migrations gives Xen the opportunity to scrub the memory and return it to general use.
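
As noted in step 4, a VM can be given a preferred host by setting its affinity parameter. A minimal sketch, assuming the VM and host UUIDs have been obtained with xe vm-list and xe host-list:

xe vm-param-set uuid=<vm_uuid> affinity=<host_uuid>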

When the HA feature is enabled, XenServer continually monitors the health of the hosts in a pool. The HA mechanism automatically moves protected VMs to a healthy host if the current VM host fails. Additionally, if the host that fails is the master, HA selects another host to take over the master role automatically, meaning that you will be able to continue to manage the XenServer pool.

In order to absolutely guarantee that a host is unreachable, a resource pool configured for high-availability uses several 'heartbeat' mechanisms to regularly check up on hosts. These heartbeats go through both the storage interfaces (via the 'Heartbeat SR') and also the networking interfaces (via the management interfaces). Both of these heartbeat routes can be multi-homed for additional resilience to prevent false positives from component failures.

XenServer dynamically maintains a 'failover plan' for what to do if a set of hosts in a pool fail at any given time. An important concept to understand is the 'host failures to tolerate' value, which is defined as part of the HA configuration. This determines the number of host failures that are allowed without any loss of service. For example, if a resource pool consists of 16 hosts and the number of tolerated failures is set to 3, the pool calculates a failover plan that allows any 3 hosts to fail while VMs can still be restarted on other hosts. If a plan cannot be found, the pool is considered to be 'overcommitted'. The plan is dynamically recalculated based on VM lifecycle operations and movement. Alerts are sent (either via XenCenter or e-mail) if changes (for example, the addition of new VMs to the pool) cause your pool to become overcommitted.

In order to use the HA feature, you will require:

In order for a virtual machine to be protected by the HA feature, it must be agile. This means that it must have its virtual disks on shared storage (any type of shared storage may be used; the iSCSI or Fibre Channel LUN is only required for the storage heartbeat and can be used for virtual disk storage if you prefer, but this is not necessary), must not have a connection to a local DVD drive configured, and should have its virtual network interfaces on pool-wide networks.

We strongly recommend the use of a bonded management interface on the servers in the pool if HA is enabled, and multipathed storage for the Heartbeat SR.

If you create VLANs and bonded interfaces from the CLI, then they may not be plugged in and active despite being created. In this situation, a VM can appear to be not agile, and cannot be protected by HA. If this occurs, use the CLI pif-plug command to bring the VLAN and bond PIFs up so that the VM can become agile. You can also determine precisely why a VM is not agile by using the xe diagnostic-vm-status CLI command to analyze its placement constraints, and take remedial action if required.
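
For example, a sketch of bringing up a bond or VLAN PIF and then analyzing a VM's placement constraints (the UUIDs are placeholders obtained from xe pif-list and xe vm-list; check the command reference for the exact arguments accepted by diagnostic-vm-status in your release):

xe pif-plug uuid=<pif_uuid>
xe diagnostic-vm-status uuid=<vm_uuid>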

Virtual machines are assigned a restart priority and a flag that indicates whether they should be protected by HA or not. When HA is enabled, every effort is made to keep protected virtual machines live. If a restart priority is specified, any protected VM that is halted will be started automatically. If a server fails then the VMs on it will be started on another server.

The possible restart priorities are:

The restart priorities determine the order in which VMs are restarted when a failure occurs. In a given configuration where a number of server failures greater than zero can be tolerated (as indicated in the HA panel in the GUI, or by the ha-plan-exists-for field on the pool object in the CLI), the VMs that have restart priorities 1, 2 or 3 are guaranteed to be restarted given the stated number of server failures. VMs with a best-effort priority setting are not part of the failover plan and are not guaranteed to be kept running, since capacity is not reserved for them. If the pool experiences server failures and enters a state where the number of tolerable failures drops to zero, the protected VMs will no longer be guaranteed to be restarted. If this condition is reached, a system alert will be generated. In this case, should an additional failure occur, all VMs that have a restart priority set will behave according to the best-effort behavior.

If a protected VM cannot be restarted at the time of a server failure (for example, if the pool was overcommitted when the failure occurred), further attempts to start this VM will be made as the state of the pool changes. This means that if extra capacity becomes available in a pool (if you shut down a non-essential VM, or add an additional server, for example), a fresh attempt to restart the protected VMs will be made, which may now succeed.

HA can be enabled on a pool using either XenCenter or the command-line interface. In either case, you will specify a set of priorities that determine which VMs should be given highest restart priority when a pool is overcommitted.
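
A minimal CLI sketch of the overall sequence, assuming an iSCSI or Fibre Channel SR is available for the storage heartbeat and using the ha-restart-priority and ha-always-run VM parameters; see the command reference in Chapter 6 for the exact parameter names supported by your release:

xe pool-ha-enable heartbeat-sr-uuids=<heartbeat_sr_uuid>
xe pool-param-set uuid=<pool_uuid> ha-host-failures-to-tolerate=2
xe vm-param-set uuid=<vm_uuid> ha-restart-priority=1 ha-always-run=true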

XenServer 5.0.0 introduces the concept of Portable Storage Repositories (Portable SRs). Portable SRs contain all of the information necessary to recreate all the Virtual Machines (VMs) with Virtual Disk Images (VDIs) stored on the SR after re-attaching the SR to a different host or pool. Portable SRs can be used when regular maintenance or disaster recovery requires manually moving a SR between pools or standalone hosts.

Using portable SRs has similar constraints to XenMotion as both cases result in VMs being moved between hosts. To use portable SRs:

Portable SRs work by creating a dedicated metadata VDI within the specified SR. The metadata VDI is used to store copies of the pool or host database as well as the metadata describing each VM's configuration. As a result the SR becomes fully self-contained, or portable, allowing it to be detached from one host and re-attached to another as a new SR. Once the SR is re-attached a restore process is used to recreate all of the VMs on the SR from the metadata VDI. For disaster recovery the metadata backup can be scheduled to run regularly to ensure the metadata SR is current.

The metadata backup and restore feature works at the command-line script level and the same functionality is also supported in the menu-driven text console. It is not currently available through XenCenter.

When a metadata backup is first taken, a special backup VDI is created on a SR. This VDI has an ext3 filesystem which stores the following versioned backups:

In the menu-driven text console on the XenServer host, there are some menu items under the Backup, Update and Restore menu which provide more user-friendly interfaces to these scripts. The operations should only be performed on the pool master. You can use these menu items to perform 3 operations:

For automating this scripting, there are some commands in the control domain which provide an interface to metadata backup and restore at a lower level than the menu options:

Full usage information for both scripts can be obtained by running them in the control domain with the -h flag. One particularly useful invocation mode is xe-backup-metadata -d, which mounts the backup VDI into dom0 and drops into a sub-shell in the backup directory so that it can be examined.

The metadata backup and restore options can be run as scripts within the control domain or through the Backup, Restore, and Update menu option in the Local Console. All other actions, such as detaching the SR from the source host and reattaching it to the destination host, can be performed using the Local Console, XenCenter, or the xe CLI. This example uses a combination of XenCenter and the Local Console.

To create and move a portable SR using Local Console and XenCenter

  1. On the source host or pool, within the Local Console select the Backup, Restore, and Update menu option, select the Backup Virtual Machine Metadata option, and then select the desired SR:
  2. Within XenCenter, select the source host or pool and shutdown all running VMs with VDIs on the SR to be moved.
  3. Within the tree view select the SR to be moved and select Storage > Detach Storage Repository. The Detach Storage Repository menu option will not be displayed if there are running VMs with VDIs on the selected SR. After being detached the SR will be displayed in a grayed-out state.

    Warning

    Do not complete this step unless you have created a backup VDI in step 1.

  4. Select Storage > Forget Storage Repository to remove the SR record from the host or pool.
  5. Select the destination host in the tree view and select Storage > New Storage Repository.
  6. Create a new SR with the appropriate parameters required to reconnect the existing SR to the destination host. In the case of moving a SR between pools or hosts within a site the parameters may be identical to the source pool.
  7. Every time a new SR is created the storage is checked to see if it contains an existing SR. If so, an option is presented allowing re-attachment of the existing SR. If this option is not displayed the parameters specified during SR creation are not correct:
  8. Select Reattach.
  9. Select the new SR in the tree view and then select the Storage tab to view the existing VDIs present on the SR.
  10. Within the Local Console on the destination host, select the Backup, Restore, and Update menu option, select the Restore Virtual Machine Metadata option, and select the newly re-attached SR.
  11. The VDIs on the selected SR will be inspected to find the metadata VDI. Once found, select the desired metadata backup to use.
  12. Select the Only VMs on this SR option to restore the VMs.

    Note

    Use the All VM Metadata option when moving multiple SRs between hosts or pools, or when using tiered storage where VMs to be restored have VDIs on multiple SRs. When using this option, ensure all required SRs have been reattached to the destination host prior to running the restore.

  13. The VMs will be restored in the destination pool in a shutdown state and are available for use.

XenServer 5.0.0 provides a convenient snapshotting mechanism that can take a snapshot of a VM's storage and metadata at a given time. Where necessary IO is temporarily halted while the snapshot is being taken to ensure that a self-consistent disk image can be captured.

Snapshot operations result in a snapshot VM that is similar to a template. The VM snapshot contains all the storage information and VM configuration, including attached VIFs, allowing them to be exported and restored for backup purposes.

The snapshotting operation is a 2 step process:

Two types of VM snapshots are supported: regular and quiesced:

Quiesced snapshots are a special case that take advantage of the Windows Volume Shadow Copy Service (VSS) for services that support it, so that a supported application (for example Microsoft Exchange or SQL Server) can flush data to disk and prepare for the snapshot before it is taken.

Quiesced snapshots are therefore safer to restore, but can have a greater performance impact on a system while they are being taken. They may also fail under load so more than one attempt to take the snapshot may be required.

XenServer supports quiesced snapshots on Windows Server 2003 and Windows Server 2008 for both 32-bit and 64-bit variants which have Microsoft VSS installed. Windows 2000, Windows XP and Windows Vista are not supported. Supported storage backends are EqualLogic and NetApp.

In general, a VM can only access VDI snapshots (not VDI clones) of itself via the VSS interface. The XenServer administrator can override this by adding an attribute of snapmanager=true to the VM's other-config map, which allows that VM to import snapshots of VDIs from other VMs. Note that this opens a security vulnerability and should be used with care. This feature allows an administrator to attach VSS snapshots, using an in-guest transportable snapshot ID as generated by the VSS layer, to another VM for the purposes of backup.

VSS quiesce timeout: the Microsoft OS quiesce period is limited to only 10 seconds, so it is quite feasible that a snapshot may not be able to complete in time. If, for example, the XAPI daemon has queued additional blocking tasks such as an SR scan, the VSS snapshot may time out and fail. The operation should be retried if this happens. Note also that the larger the number of VBDs attached to a VM, the more likely it is that this timeout will be reached.

VSS snapshot of all the VM's disks: in order to store all data available at the time of a VSS snapshot, XAPI snapshots every disk associated with the VM that is snapshottable via the Xen storage manager API, together with the VM metadata. If the VSS layer requests only a subset of the disks, a full VM snapshot is still taken. The extra snapshot data can be deleted via XenCenter if it is not required.

Before taking a snapshot, see the section called “Preparing to clone a Windows VM” in XenServer Virtual Machine Installation Guide and the section called “Preparing to clone a Linux VM” in XenServer Virtual Machine Installation Guide for information about any special operating system-specific configuration and considerations to take into account.

Use the vm-snapshot and vm-snapshot-with-quiesce commands to take a snapshot of a VM:

xe vm-snapshot vm=<vm_name> new-name-label=<vm_snapshot_name>	
xe vm-snapshot-with-quiesce vm=<vm_name> new-name-label=<vm_snapshot_name>

This section provides details of how to recover from various failure scenarios. All failure recovery scenarios require the use of one or more of the backup types listed in Section 2.8, “Backups”.

In the absence of HA, the master node detects the failure of members by receiving regular heartbeat messages. If no heartbeat has been received for 200 seconds, the master assumes the member is dead. There are two ways to recover from this problem:

When a member XenServer host fails, there may be VMs still registered in the running state. If you are sure that the member XenServer host is definitely down, and that the VMs have not been brought up on another XenServer host in the pool, use the xe vm-reset-powerstate CLI command to set the power state of the VMs to halted. See Section 6.4.22.20, “vm-reset-powerstate” for more details.

Warning

Incorrect use of this command can lead to data corruption. Only use this command if absolutely necessary.
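
A sketch of the procedure, assuming the failed member's host UUID is known and that the --force flag (typically required for this command) is used to confirm the operation:

xe vm-list resident-on=<failed_host_uuid> params=uuid,name-label,power-state
xe vm-reset-powerstate uuid=<vm_uuid> --force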

Every member of a resource pool contains all the information necessary to take over the role of master if required. When a master node fails, the following sequence of events occurs:

If the master comes back up at this point, it will re-establish communication with its members. The members will leave emergency mode and operation will return to normal.

If the master is really dead, though, you should choose one of the members and run the command xe pool-emergency-transition-to-master on it. Once it has become the master, run the command xe pool-recover-slaves and the members will then point to the new master.

If you repair or replace the server that was the original master, you can simply bring it up, install the XenServer host software, and add it to the pool. Since the XenServer hosts in the pool are enforced to be homogeneous, there is no real need to make the replaced server the master.

When a member XenServer host is transitioned to being a master, you should also check that the default pool storage repository is set to an appropriate value. This can be done using the xe pool-param-list command and verifying that the default-SR parameter is pointing to a valid storage repository.
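
A sketch of the recovery sequence described above, run from the console of the member that will take over as master, followed by a check that the default-SR parameter points to a valid storage repository (the pool UUID is a placeholder from xe pool-list):

xe pool-emergency-transition-to-master
xe pool-recover-slaves
xe pool-param-list uuid=<pool_uuid>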

If the physical host machine has failed, use the appropriate procedure listed below to recover.

This chapter discusses the framework for storage abstractions. It describes the way physical storage hardware of various kinds is mapped to VMs, and the software objects used by the XenServer host API to perform storage-related tasks. Detailed sections on each of the supported storage types include procedures for creating storage for VMs using the CLI, with type-specific device configuration options, and some best practices for managing storage in XenServer host environments. Finally, the virtual disk QoS (quality of service) settings available to XenServer Enterprise Edition are described.

There are four basic VDI types: VHD, Logical Volume Manager (LVM), EqualLogic, and NetApp managed LUNs. Both the VHD and LVM types can exist on local dedicated storage or remote shared storage.

In working with the XenServer host CLI, there are four object classes that are used to describe, configure, and manage storage:

This section provides descriptions of the physical storage types that XenServer supports. Device configuration options and examples of creating SRs are given for each type.

The storage repository types supported in XenServer are provided by plug-ins in the control domain. These plug-ins reside in the /opt/xensource/sm directory, where they can be examined and where plug-ins supplied by third parties may be added. Modification of these files is unsupported, but visibility of them may be valuable to developers and power users. New storage manager plug-ins placed in this directory are automatically detected by XenServer. The available SR types may be listed using the sm-list command (see Section 6.4.12, “Storage Manager commands”).
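
For example, the available storage manager plug-ins and their SR types can be listed with:

xe sm-list params=name-label,type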

New storage repositories are created either through the GUI, using the New Storage wizard that guides you through the various probing and configuration steps, or with the sr-create CLI command. This command creates a new SR on the storage substrate (potentially destroying any existing data) and creates the storage repository API object and a corresponding PBD record, allowing virtual machines to use the storage. Upon successful creation of the SR, the PBD is automatically plugged. If the SR shared=true flag is set, a PBD entry is created and plugged for every XenServer host in the resource pool.

The NFS filer is a ubiquitous form of storage infrastructure that is available in many environments. XenServer allows existing NFS servers that support NFS V3 over TCP/IP to be used immediately as a storage repository for virtual disks (VDIs). VDIs are stored in the Microsoft VHD format only. Moreover, as NFS SRs can be shared, VDIs stored in a shared SR allow VMs to be started on any XenServer host in a resource pool and be migrated between them using XenMotion with no noticeable downtime.

Creating an NFS SR requires the hostname or IP address of the NFS server. The sr-probe command can provide a list of valid destination paths exported by the server on which the SR may be created. The NFS server must be configured to export the specified path to all XenServer hosts in the pool, or the creation of the SR and the plugging of the PBD record will fail.

As mentioned at the beginning of this chapter, VDIs stored on NFS are sparse. The image file is allocated as the VM writes data into the disk. This has the considerable benefit that VM image files take up only as much space on the NFS filer as is required. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will only reflect the size of the OS data that has been written to the disk rather than the entire 100GB.

VHD files may also be chained, allowing two VDIs to share common data. In cases where a NFS-based VM is cloned, the resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to make its own changes in an isolated copy-on-write version of the VDI. This feature allows NFS-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.

As VHD-based images require extra metadata to support sparseness and chaining, the format is not as high-performance as LVM-based storage. In cases where performance really matters, it is well worth forcibly allocating the sparse regions of an image file. This will improve performance at the cost of consuming additional disk space.

XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the NFS server. Administrators should not modify the contents of the SR directory, as doing so risks corrupting the contents of VDIs.

For NFS best practices, consider both performance and access control. Access control can restrict which client IP addresses have access to an NFS export; alternatively, world read/write access can be allowed. The administrator should make this policy decision based on the specific requirements of their installation.

XenServer has been tuned for enterprise-class filers that use non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of data protection from failure. For reference, XenServer has been tested extensively against Network Appliance FAS270c and FAS3020c filers, using Data OnTap 7.2.2.

In situations where XenServer is used with lower-end filers, it will err on the side of caution by waiting for all writes to be acknowledged before passing acknowledgments on to guest VMs. This will incur a noticeable performance cost, and might be remedied by setting the filer to present the SR mount point as an asynchronous mode export. Asynchronous exports acknowledge writes that are not actually on disk, and so administrators should consider the risks of failure carefully in these situations.

The XenServer NFS implementation uses TCP by default. If your situation allows, you can configure the implementation to use UDP in situations where there may be a performance benefit. To do this, specify the device-config parameter useUDP=true at SR creation time.
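
For example, a minimal sketch of creating a shared NFS SR with the UDP option enabled (the server address and export path are placeholders; server and serverpath are the standard device-config keys for the nfs SR type):

xe sr-create host-uuid=<valid_uuid> content-type=user shared=true \
  name-label=<"Example shared NFS SR"> type=nfs \
  device-config:server=<nfs_server_address> device-config:serverpath=<export_path> \
  device-config:useUDP=true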

XenServer provides support for shared SRs on iSCSI LUNs. iSCSI is supported using the open-iSCSI software iSCSI initiator or by using a supported iSCSI Host Bus Adapter (HBA). The steps for using iSCSI HBAs are identical to those for Fibre Channel HBAs, both of which are described in Section 3.2.5, “Shared LVM storage over FC or iSCSI hardware HBAs ”.

Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits as LVM VDIs in the local disk case. Shared iSCSI SRs using the software-based host initiator are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime. The LVM VDIs used in software-based iSCSI SRs do not provide support for sparse provisioning or fast cloning.

iSCSI SRs utilize the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.

All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI Qualified Names, or IQNs.

XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.

iSCSI targets commonly provide access control via iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.

The XenServer host IQN value can be adjusted using XenCenter, or via the CLI with the following command when using the iSCSI software initiator:

xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>

If you have access to a Network Appliance™ (NetApp) filer with sufficient disk space, running a version of Data ONTAP 7G (version 7.0 or greater), you can configure a custom NetApp storage repository for VM storage on your XenServer deployment. The XenServer driver uses the ZAPI interface to the filer to create a group of FlexVols which correspond to an SR. VDIs are created as virtual LUNs on the filer, and attached to XenServer hosts using an iSCSI data path. There is a direct mapping between a VDI and a raw LUN without requiring any additional volume metadata. Thus, at a logical level, the NetApp SR is a managed volume and the VDIs are the LUNs within the volume. VM cloning uses the snapshotting and cloning capabilities of the filer for data efficiency and performance and to ensure compatibility with existing ONTAP filer management tools.

As with the iSCSI-based SR type, the NetApp driver also uses the built-in software initiator and its assigned server IQN, which can be modified by changing the value shown on the General tab when the storage repository is selected in XenCenter.

The simplest way to create NetApp SRs is with XenCenter. See XenCenter Help for details. They can also be created using xe CLI commands. See Section 3.3.8, “Creating a shared NetApp SR over iSCSI” for an example.

FlexVols

NetApp introduces a notion of FlexVol as the basic unit of manageable data. It is important to note that there are limitations that constrain the design of NetApp-based SRs. These are:

Precise system limits vary per filer type; however, as a general guide, a FlexVol may contain up to 200 LUNs and provides up to 255 snapshots. Since there is a one-to-one mapping of LUNs to VDIs, and a VM will often have more than one VDI, the resource limitations of a single FlexVol can easily be reached. Also consider that taking a snapshot includes snapshotting all the LUNs within a FlexVol, and that both the VM clone operation and the CLI-based VDI snapshot operation used for backups rely on FlexVol snapshots in the background.

There are two constraints to consider, therefore, in mapping the virtual storage objects of the XenServer host to the filer: in order to maintain space efficiency it makes sense to limit the number of LUNs per FlexVol, yet at the other extreme, in order to avoid resource limitations a single LUN per FlexVol provides the most flexibility. However, since there is a vendor-imposed limit of 200 or 500 FlexVols per filer (depending on the NetApp model), this would create a limit of 200 or 500 VDIs per filer, and it is therefore important to select a suitable number of FlexVols with these parameters in mind.

Given these resource constraints, the mapping of virtual storage objects to the Ontap storage system has been designed in the following manner: LUNs are distributed evenly across FlexVols, with the expectation of using VM UUIDs to opportunistically group LUNs attached to the same VM into the same FlexVol. This is a reasonable usage model that allows a snapshot of all the VDIs in a VM to be taken at one time, maximizing the efficiency of the snapshot operation.

An optional parameter you can set is the number of FlexVols assigned to the SR. You can use between 1 and 32 FlexVols; the default is 8. The trade-off in the number of FlexVols to the SR is that, for a greater number of FlexVols, the snapshot and clone operations become more efficient, since there are statistically fewer VMs backed off the same FlexVol. The disadvantage is that more FlexVol resources are used for a single SR, where there is a typical system-wide limitation of 200 for some smaller filers.

Aggregates

When creating a NetApp driver-based SR, you select an appropriate aggregate. The driver can be probed for non-traditional type aggregates, that is, newer-style aggregates that support FlexVols, and then lists all aggregates available and the unused disk space on each.

We strongly recommend that you configure an aggregate exclusively for use by XenServer storage, since space guarantees and allocation cannot be correctly managed if other applications are also sharing the resource.

Thick or thin provisioning

When creating NetApp storage, you can also choose the type of space management used. By default, allocated space is "thickly provisioned" to ensure that VMs never run out of disk space and that all virtual allocation guarantees are fully enforced on the filer. Selecting "thick provisioning" ensures that whenever a VDI (LUN) is allocated on the filer, sufficient space is reserved to guarantee that it will never run out of space and consequently experience failed writes to disk. Due to the nature of the Ontap FlexVol space provisioning algorithms the best practice guidelines for the filer require that at least twice the LUN space is reserved to account for background snapshot data collection and to ensure that writes to disk are never blocked. In addition to the double disk space guarantee, Ontap also requires some additional space reservation for management of unique blocks across snapshots. The guideline on this amount is 20% above the reserved space. Therefore, the space guarantees afforded by "thick provisioning" will reserve up to 2.4 times the requested virtual disk space.

The alternative allocation strategy is thin provisioning, which allows the admin to present more storage space to the VMs connecting to the SR than is actually available on the SR. There are no space guarantees, and allocation of a LUN does not claim any data blocks in the FlexVol until the VM writes data. This would be appropriate for development and test environments where you might find it convenient to over-provision virtual disk space on the SR in the anticipation that VMs may be created and destroyed frequently without ever utilizing the full virtual allocated disk. This method should be used with extreme caution and only in non-critical environments.

FAS Deduplication

FAS Deduplication is a NetApp technology for reclaiming redundant disk space. Newly stored data objects are divided into small blocks; each block is given a digital signature, which is compared to all other signatures in the data volume. If an exact block match exists, the duplicate block is discarded and the disk space reclaimed. FAS Deduplication can be enabled on "thin provisioned" NetApp-based SRs and operates according to the default filer FAS Deduplication parameters, typically running every 24 hours. It must be enabled at the point the SR is created, and any custom FAS Deduplication configuration is managed directly on the filer.

Access Control

Since FlexVol operations such as volume creation and volume snapshotting require administrator privileges on the filer itself, it is recommended that the XenServer host be provided with suitable administrator username and password credentials at configuration time. In situations where the XenServer host does not have full administrator rights to the filer, the filer administrator could perform an out-of-band preparation and provisioning of the filer. The SR is then introduced to the XenServer host using XenCenter or the sr-introduce xe CLI command. Note, however, that operations such as VM cloning or snapshot generation will fail in this situation due to insufficient access privileges.

Licenses

You need to have an iSCSI license on the NetApp filer to use this storage repository type; for the generic plug-ins you will require either an iSCSI or NFS license, depending on the SR type being used.

Further information

For more information about NetApp technology, see the following links:

This section covers creating storage repository types and making them available to a XenServer host. The examples provided pertain to storage configuration via the CLI, which provides the greatest flexibility. See the XenCenter Help for details on using the New Storage Repository wizard.

The sr-probe CLI command can be used in two ways:

In both cases sr-probe works by specifying an SR type and one or more device-config parameters for that SR type. When an incomplete set of parameters is supplied sr-probe returns an error message indicating parameters are missing and the possible options for the missing parameters. When a complete set of parameters is supplied a list of existing SRs is returned. All sr-probe output is returned as an XML list.

For example, a known iSCSI target can be probed by specifying its name or IP address, and the set of IQNs available on the target will be returned:

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10

Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
    <TGT>
        <Index>
            0
        </Index>
        <IPAddress>
            192.168.1.10
        </IPAddress>
        <TargetIQN>
            iqn.192.168.1.10:filer1
        </TargetIQN>
    </TGT>
</iscsi-target-iqns>

Probing the same target again and specifying both the name/IP address and desired IQN will return the set of SCSIids (LUNs) available on the target/IQN.

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10  \ 
device-config:targetIQN=iqn.192.168.1.10:filer1

Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
    <LUN>
        <vendor>
            IET
        </vendor>
        <LUNid>
            0
        </LUNid>
        <size>
            42949672960
        </size>
        <SCSIid>
            149455400000000000000000002000000b70200000f000000
        </SCSIid>
    </LUN>
</iscsi-target>

Probing the same target and supplying all three parameters will return a list of SRs that exist on the LUN, if any.

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10  \ 
device-config:targetIQN=iqn.192.168.1.10:filer1 \
device-config:SCSIid=149455400000000000000000002000000b70200000f000000

<?xml version="1.0" ?>
<SRlist>
    <SR>
        <UUID>
            3f6e1ebd-8687-0315-f9d3-b02ab3adc4a6
        </UUID>
        <Devlist>
            /dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000
        </Devlist>
    </SR>
</SRlist>

The following parameters can be probed for each SR type:

SRs of type lvmohba can only be created via the xe Command Line Interface (CLI). Once created lvmohba SRs can be managed using either XenCenter or the xe CLI.

Device-config parameters for lvmohba SRs:

Parameter name    Description        Required?
SCSIid            Device SCSI ID     Yes

To create a shared lvmohba SR, perform the following steps on each host in the pool:

  1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN equipment in use. Please refer to the documentation for your SAN or contact your storage administrator for details.
  2. If necessary, use the HBA Command Line Interfaces (CLIs) included in the XenServer host to configure the HBA:
    • Emulex: /usr/sbin/hbanyware
    • QLogic FC: /opt/QLogic_Corporation/SANsurferCLI
    • QLogic iSCSI: /opt/QLogic_Corporation/SANsurferiCLI
    See
    Section 3.5, “Managing Host Bus Adapters (HBAs) ” for an example of QLogic iSCSI HBA configuration. For more information on Fibre Channel and iSCSI HBAs please refer to the Emulex and QLogic websites.
  3. Use the sr-probe command to determine the global device path of the HBA LUN. sr-probe will force a re-scan of HBAs installed in the system to detect any new LUNs that have been zoned to the host and return a list of properties for each LUN found. Use the host-uuid parameter to ensure the probe occurs on the desired host. The global device path returned as the <path> property will be common across all hosts in the pool and therefore must be used as the value for the device-config:device parameter when creating the SR. If multiple LUNs are present use the vendor, LUN size, LUN serial number, or the SCSI ID as included in the <path> property to identify the desired LUN.
    xe sr-probe type=lvmohba \
    host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31
    Error code: SR_BACKEND_FAILURE_90
    Error parameters: , The request is missing the device parameter, \
    <?xml version="1.0" ?>
    <Devlist>
        <BlockDevice>
            <path>
                /dev/disk/by-id/scsi-360a9800068666949673446387665336f
            </path>
            <vendor>
                HITACHI
            </vendor>
            <serial>
                730157980002
            </serial>
            <size>
                80530636800
            </size>
            <adapter>
                4
            </adapter>
            <channel>
                0
            </channel>
            <id>
                4
            </id>
            <lun>
                2
            </lun>
            <hba>
                qla2xxx
            </hba>
        </BlockDevice>
        <Adapter>
            <host>
                Host4
            </host>
            <name>
                qla2xxx
            </name>
            <manufacturer>
                QLogic HBA Driver
            </manufacturer>
            <id>
                4
            </id>
        </Adapter>
    </Devlist>
  4. On the master host of the pool create the SR, specifying the global device path returned in the <path> property from sr-probe. PBDs will be created and plugged for each host in the pool automatically.
xe sr-create host-uuid=<valid_uuid> \
content-type=user \
name-label=<"Example shared LVM over HBA SR"> shared=true \
device-config:SCSIid=<device_scsi_id> type=lvmohba

Note

The Repair Storage Repository function within XenCenter can be used to retry the PBD creation and plugging portions of the sr-create operation. This can be valuable in cases where the LUN zoning was incorrect for one or more member servers in a pool when the SR was created. Correct the zoning for the affected hosts and use Repair Storage Repository instead of removing and re-creating the SR.

Device-config parameters for netapp SRs:

Setting the SR other-config:multiplier parameter to a valid value adjusts the default multiplier attribute. By default, 2.4 times the requested space is allocated to account for the snapshot and metadata overhead associated with each LUN. Note that a valid value means "a value that is supported by the array": if you set the multiplier to 0.1 and then attempt to create a very small VDI, the operation will likely fail.

Setting the SR other-config:enforce_allocation parameter to true will resize the FlexVols to precisely the amount specified by either the multiplier value above, or the default 2.4 value. Note that this works on new VDI creation in the selected FlexVol, or on all FlexVols during an SR scan. Note also that this will override any manual size adjustments made by the administrator to the SR FlexVols.
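
For example, a sketch of adjusting both parameters on an existing NetApp SR and then triggering a scan so that the new settings are applied to the FlexVols (the SR UUID is a placeholder):

xe sr-param-set uuid=<netapp_sr_uuid> other-config:multiplier=2.0
xe sr-param-set uuid=<netapp_sr_uuid> other-config:enforce_allocation=true
xe sr-scan uuid=<netapp_sr_uuid>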

To create a NetApp SR, use the following command.

xe sr-create host-uuid=<valid_uuid> content-type=user \
  name-label=<"Example shared NetApp SR"> shared=true \
  device-config:target=192.168.1.10 device-config:username=<admin_username> \
  device-config:password=<admin_password> \
  type=netapp

Device-config parameters for EqualLogic SRs:

To create an EqualLogic SR, use the following command.

xe sr-create host-uuid=<valid_uuid> content-type=user \
name-label=<"Example shared Equallogic SR"> \
shared=true device-config:target=<target_ip> \
device-config:username=<admin_username> \
device-config:password=<admin_password> \
device-config:storagepool=<my_storagepool> \
device-config:chapuser=<chapusername> \
device-config:chappassword=<chapuserpassword> \
device-config:allocation=<thick> \
type=equal

Dynamic multipathing support is available for Fibre Channel and iSCSI storage backends. By default, it uses round-robin mode load balancing, so both routes will have active traffic on them during normal operation. Multipathing can be enabled in XenCenter or on the command line.

To disable multipathing, first unplug your PBDs, set the host other-config:multipathing parameter to false, and then replug your PBDs as described above. Do not modify the other-config:multipathhandle parameter, as this is handled automatically.
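
A sketch of the disable sequence, run on each host in the pool (the PBD UUIDs for the host's storage can be found with xe pbd-list):

xe pbd-unplug uuid=<pbd_uuid>
xe host-param-set uuid=<host_uuid> other-config:multipathing=false
xe pbd-plug uuid=<pbd_uuid>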

This section covers various operations required in the ongoing management of Storage Repositories (SRs).

The XenServer 4.0.1 release supported only local (non-shared) Fibre Channel (FC) SRs. In cases where a local FC SR is actually accessible by other hosts in a pool the SR can be converted to shared, allowing VMs with VDIs on the SR to be started on and migrated between hosts within the pool.

Converting a local FC SR to a shared FC SR requires using the xe CLI and the XenCenter Repair Storage Repository feature:

  1. Upgrade all hosts in the resource pool to XenServer 5.0.0.
  2. Ensure all hosts in the pool have the SR's LUN zoned appropriately. See Section 3.3.2, “Probing an SR” for details on using sr-probe to verify the LUN is present on each host.
  3. Convert the SR to shared:
    xe sr-param-set shared=true uuid=<local_fc_sr>
  4. Within XenCenter the SR will move from the host level to the pool level, indicating that it is now shared. The SR will be marked with a red ! to indicate that it is not currently plugged on all hosts in the pool.
  5. Select the SR and then select the Storage ... Repair Storage Repository menu option.
  6. Click Repair to create and plug a PBD for each host in the pool.

The set of VDIs associated with a VM can be copied from one SR to another to accommodate maintenance requirements or tiered storage configurations. XenCenter provides the ability to copy a VM and all of its VDIs to the same or a different SR, and a combination of XenCenter and the xe CLI can be used to copy individual VDIs.

As outlined earlier in Section 3.2.6, “Shared NetApp Storage”, a NetApp SR comprises a collection of FlexVols. Cloning a VDI entails generating a snapshot of the FlexVol and then creating a LUN clone backed off the snapshot. When generating a VM snapshot, an administrator must snapshot each of the VM's disks in sequence. Since all the disks are expected to be located in the same FlexVol, and the FlexVol snapshot operates on all LUNs in the same FlexVol, it makes sense to re-use an existing snapshot for all subsequent LUN clones. By default, if no snapshot hint is passed into the backend driver, it will generate a random ID with which to name the FlexVol snapshot. There is, however, a CLI override for this value, passed in as an epochhint. The first time the epochhint value, or 'cookie', is received, the backend generates a new snapshot based on the cookie name. Any subsequent snapshot requests with the same epochhint value will be backed off the existing snapshot:

xe vdi-snapshot uuid=<valid_vdi_uuid> driver-params:epochhint=<cookie>

During provisioning of a NetApp SR, additional disk space is reserved for snapshots. If you never plan to use the snapshotting functionality, you might want to free up this reserved space. To do so, you can reduce the value of the other-config:multiplier parameter. By default the value of the multiplier is 2.4, so the amount of space reserved is 2.4 times the amount of space that would be needed for the FlexVols themselves.

For general performance, the default disk scheduler noop is applied on all new SR types that implement LVM based storage over a disk, i.e. Local LVM, LVM over iSCSI and LVM over HBA attached LUNs. The noop scheduler provides the fairest performance for competing VMs accessing the same device. In order to apply disk QoS however (Section 3.6, “Virtual disk QoS settings (Enterprise Edition only)”) it is necessary to override the default setting and assign the 'cfq' disk scheduler to any LVM-based SR type. For any LVM-based SR type, the corresponding PBD must be unplugged and re-plugged in order for the scheduler parameter to take effect. The disk scheduler can be adjusted using the following CLI parameter:

xe sr-param-set other-config:scheduler={noop|cfq|anticipatory|deadline} \
uuid=<valid_sr_uuid>
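
Because the new scheduler only takes effect when the PBD is re-plugged, a typical sequence looks like the following sketch; ensure that no running VMs are using the SR on that host before unplugging its PBD:

xe pbd-list sr-uuid=<valid_sr_uuid> params=uuid
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>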

This section covers various operations required to manage Fibre Channel and iSCSI HBAs.

For full details on configuring QLogic Fibre Channel and iSCSI HBAs please refer to the QLogic website.

Once the HBA is physically installed into the XenServer host, use the following steps to configure the HBA:

  1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the appropriate values if using static IP addressing or a multi-port HBA.
    /opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0
  2. Add a persistent iSCSI target to port 0 of the HBA.
    /opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 <iscsi_target_ip_address> \
    -INAME <iscsi_target_iqn>
  3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs. See Section 3.3.2, “Probing an SR” and Section 3.3.7, “Creating a shared LVM over Fibre Channel or iSCSI HBA SR (lvmohba) ” for more details.

Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-id and a standard device path under /dev. To remove the device entries for LUNs no longer in use as SRs use the following steps:

  1. Use sr-forget or sr-destroy as appropriate to remove the SR from the XenServer host database. See Section 3.4.1, “Destroying or forgetting a SR ” for details.
  2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.
  3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to the LUN to be removed. See Section 3.3.2, “Probing an SR” for details.
  4. Remove the device entries with the following command:
    echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete

Warning

Be absolutely certain which LUN you are removing. Accidentally removing a LUN required for host operation, such as the boot or root device, will render the host unusable.

In Enterprise Edition, virtual disks have an optional I/O priority Quality of Service (QoS) setting. This setting can be made to existing virtual disks with the CLI as described in this section.

In order to enable QoS for disk I/O, the SR type underlying the VDI must be LVM-based. QoS will therefore only take effect on SRs of type Local LVM, LVM over iSCSI, and LVM over HBA attached LUNs. Note also that in the shared SR case, that is, where multiple hosts access the same LUN, QoS is applied to the VBDs accessing the LUN from the same host; QoS is not applied across hosts in the pool. Note that QoS settings will not have any effect on VHD-based storage types.

Before configuring any QoS parameters for a VBD, ensure that the disk scheduler for the SR has been set appropriately. See Section 3.4.7, “Adjusting the disk IO scheduler for an LVM-based SR ” for details on how to adjust the scheduler. The scheduler parameter must be set to cfq on the SR for which the QoS is desired.

Note

Don't forget to set the scheduler to 'cfq' on the SR, and make sure that the PBD has been re-plugged in order for the scheduler change to take effect.

The first parameter is qos_algorithm_type. This parameter needs to be set to the value ionice, which is the only type of QoS algorithm supported for virtual disks in this release.

The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_params parameter. For virtual disks, qos_algorithm_params takes a sched key, and depending on its value, also requires a class key.

Possible values of qos_algorithm_params:sched are

  • sched=rt or sched=real-time sets the QoS scheduling parameter to real time priority, which requires a class parameter to set a value
  • sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to set any value
  • sched=<any other value> sets the QoS scheduling parameter to best effort priority, which requires a class parameter to set a value

The possible values for class are

  • One of the following keywords: highest, high, normal, low, lowest
  • an integer between 0 and 7, where 0 is the highest priority and 7 is the lowest

To enable the disk QoS settings, you also need to set the other-config:scheduler to cfq and replug PBDs for the storage in question.

For example, the following CLI commands set the virtual disk's VBD to use real time priority=5:

xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5
xe sr-param-set uuid=<sr_uuid> other-config:scheduler=cfq
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>
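To confirm that the settings have taken hold, the VBD's parameters can be listed; the qos_algorithm_type and qos_algorithm_params fields should reflect the values set above:

xe vbd-param-list uuid=<vbd_uuid>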

This chapter discusses how physical network interface cards (NICs) in XenServer hosts are used to enable networking within Virtual Machines (VMs). XenServer supports up to 6 physical network interfaces (or up to 6 pairs of bonded network interfaces) per XenServer host and up to 7 virtual network interfaces per VM.

XenServer provides automated configuration and management of NICs via the xe command line interface (CLI).

Note

Unlike previous XenServer versions, the host's networking configuration files should not be edited directly in most cases; where a CLI command is available, do not edit the underlying files.

Some networking options have different behaviors when used with standalone XenServer hosts compared to resource pools. This chapter contains sections on general information that applies to both standalone hosts and pools, followed by specific information and procedures for each.

This section describes the general concepts of networking in the XenServer environment.

NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one NIC within the bond fails the host's network traffic will automatically be routed over the second NIC. NIC bonds work in an active/active mode, with traffic balanced between the bonded NICs.

XenServer NIC bonds completely subsume the underlying physical devices (PIFs). In order to activate a bond the underlying PIFs must not be in use, either as the management interface for the host or by running VMs with VIFs attached to the networks associated with the PIFs.

XenServer NIC bonds are represented by additional PIFs. The bond PIF can then be connected to a XenServer network to allow VM traffic and host management functions to occur over the bonded NIC. The exact steps to use to create a NIC bond depend on the number of NICs in your host, and whether the management interface of the host is assigned to a PIF to be used in the bond.

XenServer supports Source Level Balancing (SLB) NIC bonding. SLB bonding behaves as described below.

Any given VIF will only utilize one of the links in the bond at a time. At startup no guarantees are made about the affinity of a given VIF to a link in the bond. However, for VIFs with high throughput, periodic rebalancing ensures that the load on the links is approximately equal.

API Management traffic can be assigned to a XenServer 5.0.0 bond interface and will be automatically load-balanced across the physical NICs.

XenServer 5.0.0 bonded PIFs do not require IP configuration for the bond when used for guest traffic. This is because the bond operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this layer. When used for non-guest traffic (to connect to it with XenCenter for management, or to connect to shared network storage), one IP configuration is required per bond. (Incidentally, this is true of unbonded PIFs as well, and is unchanged from XenServer 4.1.0.)

As in XenServer 4.1.0, gratuitous ARP packets are sent when assignment of traffic changes from one interface to another as a result of fail-over.

Re-balancing is provided by the existing ALB re-balance capabilities: the number of bytes going over each slave (interface) is tracked over a given period. When a packet is to be sent that contains a new source MAC address it is assigned to the slave interface with the lowest utilization. Traffic is re-balanced every 10 seconds.

For procedures on how to create bonds for standalone XenServer hosts, see Section 4.3.4, “Creating NIC bonds on a standalone host”.

For procedures on how to create bonds for XenServer hosts that are configured in a resource pool, see Section 4.3.5, “Creating NIC bonds in resource pools”.

The networking configuration for a XenServer host is specified during initial host installation. Options such as IP address configuration (DHCP/static), the NIC used as the management interface, and hostname are set based on the values provided during installation.

When a XenServer host has a single NIC, the following configuration will be present after installation:

When a host has multiple NICs the configuration present after installation depends on which NIC is selected for management operations during installation:

In both cases the resulting networking configuration allows connection to the XenServer host by XenCenter, the xe CLI, and any other management software running on separate machines via the IP address of the management interface. The configuration also provides external networking for VMs created on the host.

The PIF used for management operations is the only PIF ever configured with an IP address. External networking for VMs is achieved by bridging PIFs to VIFs via the network object which acts as a virtual Ethernet switch.

The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage traffic are covered in the following sections.

Some of the network configuration procedures in this section differ depending on whether you are configuring a stand-alone server or a server that is part of a resource pool.

We recommend using XenCenter to create NIC bonds. For details, refer to the XenCenter help.

This section describes using the xe CLI to create bonded NIC interfaces on a standalone XenServer host. See Section 4.3.5, “Creating NIC bonds in resource pools” for details on using the xe CLI to create NIC bonds on XenServer hosts that comprise a resource pool.

Creating a bond on a dual-NIC host implies that the PIF/NIC currently in use as the management interface for the host will be subsumed by the bond. The additional steps required to move the management interface to the bond PIF are included.

Bonding two NICs together

  1. Use XenCenter or the vm-shutdown command to shut down all VMs on the host, thereby forcing all VIFs to be unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.
    xe vm-shutdown uuid=<vm_uuid>
  2. Use the network-create command to create a new network for use with the bonded NIC. The UUID of the new network is returned:
    xe network-create name-label=<bond0>
  3. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:
    xe pif-list
  4. Use the bond-create command to create the bond by specifying the newly created network UUID and the UUIDs of the PIFs to be bonded separated by commas. The UUID for the bond is returned:
    xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

    Note

    See Section 4.3.4.2, “Controlling the MAC address of the bond” for details on controlling the MAC address used for the bond PIF.

  5. Use the pif-list command to determine the UUID of the new bond PIF:
    xe pif-list device=<bond0>
  6. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 6, Command line interface for more detail on the options available for the pif-reconfigure-ip command.
    xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP
  7. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond:
    xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
  8. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but might help reduce confusion when reviewing the host networking configuration.
    xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None
  9. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can also be completed via XenCenter by editing the VM configuration and connecting the existing VIFs of a VM to the bond network.
  10. Restart the VMs shut down in step 1.
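As an illustration of step 9, the following sketch moves a single VIF to the bond network; the device number and MAC address are placeholders, and re-using the original MAC address is optional but avoids changing the guest's network identity:

xe vif-list vm-uuid=<vm_uuid>
xe vif-destroy uuid=<old_vif_uuid>
xe vif-create vm-uuid=<vm_uuid> network-uuid=<bond_network_uuid> device=0 mac=<original_mac>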

Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional member servers to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to member servers as they are joined to the pool and reduces the number of steps required. Adding a NIC bond to an existing pool requires creating the bond configuration manually on the master and each of the members of the pool. Adding a NIC bond to an existing pool after VMs have been installed is also a disruptive operation, as all VMs in the pool must be shut down.

We recommend using XenCenter to create NIC bonds. For details, refer to the XenCenter help.

This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise a resource pool. See Section 4.3.4.1, “Creating a NIC bond on a dual-NIC host” for details on using the xe CLI to create NIC bonds on a standalone XenServer host.

  1. Select the host you want to be the master. The master host belongs to an unnamed pool by default. To create a resource pool with the CLI, rename the existing nameless pool:
    xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>
    							
  2. Create the NIC bond on the master as follows:
    1. Use the network-create command to create a new pool-wide network for use with the bonded NICs. The UUID of the new network is returned.
      xe network-create name-label=<network_name>
    2. Use the host-list command to find the UUID of the master host:
      xe host-list
    3. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:
      xe pif-list
    4. Use the bond-create command to create the bond, specifying the network UUID created in step a and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:
      xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

      Note

      See Section 4.3.4.2, “Controlling the MAC address of the bond” for details on controlling the MAC address used for the bond PIF.

    5. Use the pif-list command to determine the UUID of the new bond PIF:
      xe pif-list device=<network_name>
    6. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 6, Command line interface, for more detail on the options available for the pif-reconfigure-ip command.
      xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP
    7. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond:
      xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
    8. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but might help reduce confusion when reviewing the host networking configuration.
      xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None
  3. Open a console on a host that you want to join to the pool and type:
    xe pool-join master-address=<host1> master-username=root master-password=<password>
    							
    The network and bond information will be automatically replicated to the member server. However, the management interface is not automatically moved from the member server's NIC to the bonded NIC. Move the management interface on the member server to enable the bond as follows:
    1. Use the host-list command to find the UUID of the member host being configured:
      xe host-list
    2. Use the pif-list command to determine the UUID of bond PIF on the new member host. Include the host-uuid parameter to list only the PIFs on the host being configured:
      xe pif-list device=<network_name> host-uuid=<host_uuid>
    3. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF with the pif-reconfigure-ip command. See Chapter 6, Command line interface, for more detail on the options available for the pif-reconfigure-ip command. This command must be run directly on the member server:
      xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP
    4. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond. This command must be run directly on the member server:
      xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
    5. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but may help reduce confusion when reviewing the host networking configuration. This command must be run directly on the member server:
      xe pif-reconfigure-ip uuid=<old_mgmt_pif_uuid> mode=None
  4. For each additional member server you want to join to the pool, repeat step 3 to join the host and move the management interface on that member server to enable the bond.

When adding a NIC bond to an existing pool, the bond must be manually created on each host in the pool. The steps below can be used to add NIC bonds on both the pool master and member servers with the following requirements:

To add NIC bonds to existing pool master and member hosts

  1. Use the network-create command to create a new pool-wide network for use with the bonded NICs. This step should only be performed once per pool. The UUID of the new network is returned.
    xe network-create name-label=<bond0>
  2. Use XenCenter or the vm-shutdown command to shut down all VMs in the host pool to force all existing VIFs to be unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.
    xe vm-shutdown uuid=<vm_uuid>
  3. Use the host-list command to find the UUID of the host being configured:
    xe host-list
  4. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond. Include the host-uuid parameter to list only the PIFs on the host being configured:
    xe pif-list host-uuid=<host_uuid>
  5. Use the bond-create command to create the bond, specifying the network UUID created in step 1 and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned.
    xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

    Note

    See Section 4.3.4.2, “Controlling the MAC address of the bond” for details on controlling the MAC address used for the bond PIF.

  6. Use the pif-list command to determine the UUID of the new bond PIF. Include the host-uuid parameter to list only the PIFs on the host being configured:
    xe pif-list device=bond0 host-uuid=<host_uuid>
  7. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 6, Command line interface for more detail on the options available for the pif-reconfigure-ip command. This command must be run directly on the member server:
    xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP
  8. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond. This command must be run directly on the member server:
    xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
  9. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary, but might help reduce confusion when reviewing the host networking configuration. This command must be run directly on the member server:
    xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None
  10. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can also be completed via XenCenter by editing the VM configuration and connecting the existing VIFs of the VM to the bond network.
  11. Repeat steps 3 - 10 for member servers.
  12. Restart the VMs previously shut down.

XenServer 4.1 allowed dedicating NICs to storage traffic by configuring the NIC as unmanaged with the xe CLI and requiring manual configuration of the underlying networking settings for that NIC within the control domain. XenServer 5.0.0 allows use of either XenCenter or the xe CLI to configure and dedicate a NIC to specific functions, such as storage traffic.

Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host management, but requires that the appropriate network configuration be in place in order to ensure the NIC is used for the desired traffic. For example, to dedicate a NIC to storage traffic the NIC, storage target, switch, and/or VLAN must be configured such that the target is only accessible over the assigned NIC. This allows use of standard IP routing to control how traffic is routed between multiple NICs within a XenServer host.
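A minimal sketch of dedicating a NIC to storage traffic with the CLI is shown below, assuming the storage network uses static addressing; the addresses and UUID are placeholders, and the disallow-unplug PIF parameter (listed in the PIF parameters table in Chapter 6) marks the PIF as a dedicated storage NIC:

xe pif-reconfigure-ip uuid=<storage_pif_uuid> mode=static IP=<ip_address> netmask=<netmask>
xe pif-param-set uuid=<storage_pif_uuid> disallow-unplug=true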

If you do wish to use a storage interface which can be routed from the management interface also (bearing in mind that this configuration is not recommended), then you have two options:

This section discusses how to change the networking configuration of a XenServer host. This includes:

XenServer hosts in resource pools have a single management IP address used for management and communication to and from other hosts in the pool. The steps required to change the IP address of a host's management interface are different for master and member hosts.

Changing the IP address of a pool member host

  1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 6, Command line interface for details on the parameters of the pif-reconfigure-ip command:
    xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP
  2. Use the host-list CLI command to confirm that the member host has successfully reconnected to the master host by checking that all the other XenServer hosts in the pool are visible:
    xe host-list

Changing the IP address of the master XenServer host requires additional steps because each of the member hosts uses the master's advertised IP address for communication and will not know how to contact the master when its IP address changes.

Whenever possible, use a dedicated IP address for the pool master that is not likely to change for the lifetime of the pool.

To change the IP address of a pool master host

  1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 6, Command line interface for details on the parameters of the pif-reconfigure-ip command:
    xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP
  2. When the IP address of the pool master host is changed, all member hosts will enter emergency mode when they fail to contact the master host.
  3. On the master XenServer host, use the pool-recover-slaves command to force the master to contact each of the member servers and inform them of the master's new IP address:
    xe pool-recover-slaves

Refer to Section 2.11.2, “Master failures” for more information on emergency mode.

It is possible for physical NIC devices to be discovered in different orders on different servers even though the servers contain the same hardware. Verifying NIC ordering is recommended before using the pooling features of XenServer.

It is not possible to directly rename a PIF, although you can use the pif-forget and pif-introduce commands to achieve the same effect with the following restrictions:

For the example configuration shown above use the following steps to change the NIC ordering so that eth0 corresponds to the device with a MAC address of 00:19:bb:2d:7e:7a:
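Since the steps themselves are not reproduced here, the following is only a rough, hedged sketch of the forget-and-reintroduce sequence the text refers to; the UUIDs and the second MAC address are placeholders, and both affected PIFs must be forgotten before being re-introduced with the desired device names:

xe pif-forget uuid=<pif_uuid_currently_named_eth0>
xe pif-forget uuid=<pif_uuid_currently_named_eth1>
xe pif-introduce host-uuid=<host_uuid> device=eth0 mac=00:19:bb:2d:7e:7a
xe pif-introduce host-uuid=<host_uuid> device=eth1 mac=<mac_of_other_nic>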

If you are having problems with configuring networking, first ensure that you have not directly modified any of the control domain ifcfg-* files. These files are managed by the control domain host agent, and any changes will be overwritten.

XenServer and XenCenter provide access to alerts that are generated when noteworthy things happen. XenCenter provides various mechanisms of grouping and maintaining metadata about managed VMs, hosts, storage repositories, and so on.

XenServer generates alerts for the following events.

Configurable Alerts:

Alerts generated by XenCenter:

Alerts generated by XenServer:

The following alerts appear on the performance graphs in XenCenter. See the XenCenter online help for more information:

The performance monitoring process, perfmon, runs once every 5 minutes and requests updates from XenServer which are averages over 1 minute; these defaults can be changed in /etc/sysconfig/perfmon.

Every 5 minutes perfmon reads updates of performance variables exported by the XAPI instance running on the same host. These variables are separated into one group relating to the host itself, and a group for each VM running on that host. For each VM and also for the host, perfmon reads in the other-config:perfmon parameter and uses this string to determine which variables it should monitor, and under which circumstances to generate a message.

vm:other-config:perfmon and host:other-config:perfmon values consist of an XML string like the one below:

<config>
	<variable>
		<name value="cpu_usage"/>
		<alarm_trigger_level value="LEVEL"/>
	</variable>
	<variable>
		<name value="network_usage"/>
		<alarm_trigger_level value="LEVEL"/>
	</variable>
</config>
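For example, a configuration like this could be assigned to a VM from the CLI as shown below; the 0.9 trigger level is purely illustrative, standing in for the LEVEL placeholder above:

xe vm-param-set uuid=<vm_uuid> \
other-config:perfmon='<config><variable><name value="cpu_usage"/><alarm_trigger_level value="0.9"/></variable></config>'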

Valid VM Elements

Valid Host Elements

This chapter describes the XenServer command line interface (CLI). The xe CLI enables the writing of scripts for automating system administration tasks and allows integration of XenServer into an existing IT infrastructure.

The xe command line interface is installed by default on XenServer hosts and is included with XenCenter. A stand-alone remote CLI is also available for Linux.

On Windows, the xe.exe CLI executable is installed along with XenCenter.

To use it, open a Windows Command Prompt and change directories to the directory where the file resides (typically C:\Program Files\XenSource\XenCenter), or add its installation location to your system path.

On Linux, you can install the stand-alone xe CLI executable from the RPM named xe-cli-5.0.0-10918p.i386.rpm on the Linux Pack CD, as follows:

rpm -ivh xe-cli-5.0.0-10918p.i386.rpm

Basic help is available for CLI commands on-host by typing:

xe help command

A list of the most commonly-used xe commands is displayed if you type:

xe help

or a list of all xe commands is displayed if you type:

xe help --all

The basic syntax of all XenServer xe CLI commands is:

xe <command-name> <argument=value> [<argument=value> ...]

Each specific command contains its own set of arguments that are of the form argument=value. Some commands have required arguments, and most have some set of optional arguments. Typically a command will assume default values for some of the optional arguments when invoked without them.

If the xe command is executed remotely, additional connection and authentication arguments are used. These arguments also take the form argument=argument_value.

The server argument is used to specify the hostname or IP address. The username and password arguments are used to specify credentials. A password-file argument can be specified instead of the password directly. In this case an attempt is made to read the password from the specified file (stripping CRs and LFs off the end of the file if necessary), and use that to connect. This is more secure than specifying the password directly at the command line.

The optional port argument can be used to specify the agent port on the remote XenServer host (defaults to 443).

Shorthand syntax is also available for remote connection arguments:

Arguments are also taken from the environment variable XE_EXTRA_ARGS, in the form of comma-separated key/value pairs. For example, in order to enter commands on one XenServer host that are run on a remote XenServer host, you could do the following:

export XE_EXTRA_ARGS="server=jeffbeck,port=443,username=root,password=pass"

and thereafter you would not need to specify the remote XenServer host parameters in each xe command you execute.

Using the XE_EXTRA_ARGS environment variable also enables tab completion of xe commands when issued against a remote XenServer host, which is disabled by default.

Broadly speaking, the CLI commands can be split into two halves: low-level commands concerned with listing and parameter manipulation of API objects, and higher-level commands for interacting with VMs or hosts at a more abstract level. The low-level commands are:

where <class> is one of:

Note that not every value of <class> has the full set of <class>-param- commands; some have just a subset.

The objects that are addressed with the xe commands have sets of parameters that identify them and define their states.

Most parameters take a single value. For example, the name-label parameter of a VM contains a single string value. In the output from parameter list commands such as xe vm-param-list, such parameters have an indication in parentheses that defines whether they can be read and written to, or are read-only. For example, the output of xe vm-param-list on a specified VM might have the lines

user-version ( RW): 1
 is-control-domain ( RO): false

The first parameter, user-version, is writeable and has the value 1. The second, is-control-domain, is read-only and has a value of false.

The two other types of parameters are multi-valued. A set parameter contains a list of values. A map parameter is a set of key/value pairs. As an example, look at the following excerpt of some sample output of the xe vm-param-list on a specified VM:

platform (MRW): acpi: true; apic: true; pae: true; nx: false
allowed-operations (SRO): pause; clean_shutdown; clean_reboot; \
hard_shutdown; hard_reboot; suspend

The platform parameter has a list of items that represent key/value pairs. The key names are followed by a colon character (:). Each key/value pair is separated from the next by a semicolon character (;). The M preceding the RW indicates that this is a map parameter and is readable and writeable. The allowed-operations parameter has a list that makes up a set of items. The S preceding the RO indicates that this is a set parameter and is readable but not writeable.

In xe commands where you want to filter on a map parameter, or set a map parameter, use the separator : (colon) between the map parameter name and the key/value pair. For example, to set the value of the foo key of the other-config parameter of a VM to baa, the command would be

xe vm-param-set uuid=<VM uuid> other-config:foo=baa

The <class>-list command lists the objects of type <class>. By default it will list all objects, printing a subset of the parameters. This behavior can be modified in two ways: it can filter the objects so that it only outputs a subset, and the parameters that are printed can be modified.

To change the parameters that are printed, the argument params should be specified as a comma-separated list of the required parameters, e.g.:

xe vm-list params=name-label,other-config

Alternatively, to list all of the parameters, use the syntax:

xe vm-list params=all

Note that some parameters that are expensive to calculate will not be shown by the list command. These will instead appear as, for example:

allowed-VBD-devices (SRO): <expensive field>

In order to obtain these fields, use either the command <class>-param-list or <class>-param-get.

To filter the list, the CLI will match parameter values with those specified on the command line, only printing objects that match all of the specified constraints. For example:

xe vm-list HVM-boot-policy="BIOS order" power-state=halted

will only list those VMs for which both the field power-state has the value halted, and for which the field HVM-boot-policy has the value BIOS order.

It is also possible to filter the list based on the value of keys in maps, or on the existence of values in a set. The syntax for the first of these is map-name:key=value, and for the second is set-name:contains=value.
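For example, using the parameters shown in the sample output earlier in this section, the following commands filter on a map key and on set membership respectively:

xe vm-list platform:apic=true
xe vm-list allowed-operations:contains=pause params=name-label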

For scripting, a useful technique is passing --minimal on the command line, causing xe to print only the first field in a comma-separated list. For example, the command xe vm-list --minimal on a XenServer host with three VMs installed gives the three UUIDs of the VMs, for example:

a85d6717-7264-d00e-069b-3b1d19d56ad9,aaa3eec5-9499-bcf3-4c03-af10baea96b7, \
42c044de-df69-4b30-89d9-2c199564581d

This section provides a reference to the xe commands. They are grouped by objects that the commands address, and listed alphabetically.

Commands for working with network bonds, for resilience with physical interface failover. See Section 4.3.4, “Creating NIC bonds on a standalone host” for details.

The bond object is a reference object which glues together master and member PIFs. The master PIF is the bonding interface which must be used as the overall PIF to refer to the bond. The member PIFs are a set of 2 or more physical interfaces which have been combined into the high-level bonded interface.
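Like other object classes, bonds can be inspected with the standard listing command; for example, to show all bond parameters:

xe bond-list params=all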

Bond parameters

Bonds have the following parameters:

  • uuid: unique identifier/object reference for the bond (read only)
  • master: UUID for the master bond PIF (read only)
  • members: set of UUIDs for the underlying bonded PIFs (read only set parameter)

Commands for working with physical CD/DVD drives on XenServer hosts.

CD parameters

CDs have the following parameters:

  • uuid: unique identifier/object reference for the CD (read only)
  • name-label: Name for the CD (read/write)
  • name-description: Description text for the CD (read/write)
  • allowed-operations: A list of the operations that can be performed on this CD (read only set parameter)
  • current-operations: A list of the operations that are currently in progress on this CD (read only set parameter)
  • sr-uuid: The unique identifier/object reference for the SR this CD is part of (read only)
  • sr-name-label: The name for the SR this CD is part of (read only)
  • vbd-uuids: A list of the unique identifiers for the VBDs on VMs that connect to this CD (read only set parameter)
  • crashdump-uuids: Not used on CDs, since crashdumps cannot be written to them (read only set parameter)
  • virtual-size: Size of the CD as it appears to VMs (in bytes) (read only)
  • physical-utilisation: amount of physical space that the CD image is currently taking up on the SR (in bytes) (read only)
  • type: Set to User for CDs (read only)
  • sharable: Whether or not the storage is sharable. Always true for CDs. (read only)
  • read-only: Whether the CD is read-only; if false, the device is writeable. Always true for CDs. (read only)
  • storage-lock: true if this disk is locked at the storage level (read only)
  • parent: Reference to the parent disk, if this CD is part of a chain (read only)
  • missing: true if the SR scan operation reported this CD as not present on disk (read only)
  • other-config: A list of key/value pairs that specify additional configuration parameters for the CD (read/write map parameter)
  • location: The path on which the device is mounted (read only)
  • managed: true if the device is managed (read only)
  • xenstore-data: Data to be inserted into the xenstore tree (read only map parameter)
  • sm-config: names and descriptions of storage manager device config keys (read only map parameter)
  • is-a-snapshot: True if this CD is a snapshot (read only)
  • snapshot_of: The UUID of the CD that this is a snapshot of (read only)
  • snapshots: The UUID(s) of any snapshots that have been taken of this CD (read only)
  • snapshot_time: The timestamp of the snapshot operation (read only)

Commands for interacting with XenServer hosts.

XenServer hosts are the physical servers running XenServer software. They have VMs running on them under the control of a special privileged Virtual Machine, known as the control domain or domain 0.

The XenServer host objects can be listed with the standard object listing commands (xe host-list, xe host-cpu-list, and xe host-crashdump-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

Host selectors

Several of the commands listed here have a common mechanism for selecting one or more XenServer hosts on which to perform the operation. The simplest is by supplying the argument host=<uuid_or_name_label>. XenServer hosts can also be specified by filtering the full list of hosts on the values of fields. For example, specifying enabled=true will select all XenServer hosts whose enabled field is equal to true. Where multiple XenServer hosts match, and the operation can be performed on multiple XenServer hosts, the option --multiple must be specified to perform the operation. The full list of parameters that can be matched is described at the beginning of this section, and can be obtained by the command xe host-list params=all. If no parameters to select XenServer hosts are given, the operation will be performed on all XenServer hosts.

Host parameters

XenServer hosts have the following parameters:

  • uuid: The unique identifier/object reference for the XenServer host (read only)
  • name-label: The name of the XenServer host (read/write)
  • name-description: The description string of the XenServer host (read only)
  • enabled: false if the host is disabled, which prevents any new VMs from starting on it and prepares the host to be shut down or rebooted; true if the host is currently enabled (read only)
  • API-version-major: major version number (read only)
  • API-version-minor: minor version number (read only)
  • API-version-vendor: (read only)
  • API-version-vendor-implementation: details of vendor implementation (read only map parameter)
  • logging: logging configuration (read/write map parameter)
  • suspend-image-sr-uuid: the unique identifier/object reference for the SR where suspended images are put (read/write)
  • crash-dump-sr-uuid: the unique identifier/object reference for the SR where crash dumps are put (read/write)
  • software-version: list of versioning parameters and their values (read only map parameter)
  • capabilities: list of Xen versions that the XenServer host can run (read only set parameter)
  • other-config: A list of key/value pairs that specify additional configuration parameters for the XenServer host (read/write map parameter)
  • hostname: XenServer host hostname (read only)
  • address: XenServer host IP address (read only)
  • supported-bootloaders: list of bootloaders that the XenServer host supports, for example, pygrub, eliloader (read only set parameter)
  • memory-total: total amount of physical RAM on the XenServer host, in bytes (read only)
  • memory-free: total amount of physical RAM remaining that can be allocated to VMs, in bytes (read only)
  • host-metrics-live: true if the host is operational (read only)
  • logging: The syslog_destination key can be set to the hostname of a remote listening syslog service. (read/write map parameter)
  • allowed-operations: list of the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client. (read only set parameter)
  • current-operations: links each of the running tasks using this object (by reference) to a current_operation enum which describes the nature of the task. (read only set parameter)
  • patches: Set of host patches (read only set parameter)
  • blobs: Binary data store (read only)
  • memory-free-computed: A conservative estimate of the maximum amount of memory free on a host (read only)
  • ha-statefiles: The UUID(s) of all HA statefiles (read only)
  • ha-network-peers: The UUIDs of all hosts that could host the VMs on this host in case of failure (read only)

XenServer hosts contain some other objects that also have parameter lists.

CPUs on XenServer hosts have the following parameters:

Crash dumps on XenServer hosts have the following parameters:

Download a backup of the control domain of the specified XenServer host to the machine that the command is invoked from, saving it there as a file with the name file-name.

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

Caution

While the xe host-backup command will work if executed on the local host (that is, without a specific hostname specified), do not use it this way. Doing so would fill up the control domain partition with the backup file. The command should only be used from a remote off-host machine where you have space to hold the backup file.

Generate a fresh bug report (via xen-bugtool, with all optional files included) and upload it to the Citrix Support FTP site or another location.

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

Optional parameters are http-proxy: use specified http proxy, and url: upload to this destination url. If optional parameters are not used, no proxy server is identified and the destination will be the default Citrix Support ftp site.

Disables the selected host and live migrates all running VMs to other suitable hosts in the pool.

If the evacuated host is the pool master, then another host must be selected to be the pool master. To change the pool master with HA disabled, you need to use the pool-designate-new-master command. See Section 6.4.11.1, “pool-designate-new-master” for details. With HA enabled, your only option is to shut down the server, which will cause HA to elect a new master at random. See Section 6.4.5.22, “host-shutdown”.

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
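As a hedged sketch (the host name and UUID below are placeholders), if the host being evacuated is the pool master and HA is disabled, a new master would typically be designated first and the old master then evacuated:

xe pool-designate-new-master host-uuid=<new_master_host_uuid>
xe host-evacuate host=<original_master_name>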

Get system status capabilities for the specified host(s). The capabilities are returned as an XML fragment that looks something like this:

<?xml version="1.0" ?>
<system-status-capabilities>
  <capability content-type="text/plain" default-checked="yes" key="xenserver-logs" \
     max-size="150425200" max-time="-1" min-size="150425200" min-time="-1" \
     pii="maybe"/>
  <capability content-type="text/plain" default-checked="yes" \
     key="xenserver-install" max-size="51200" max-time="-1" min-size="10240" \
     min-time="-1" pii="maybe"/>
  ...
</system-status-capabilities>

Each capability entity has a number of attributes.

  • key: A unique identifier for the capability.
  • content-type: Can be either text/plain or application/data. Indicates whether a UI can render the entries for human consumption.
  • default-checked: Can be either yes or no. Indicates whether a UI should select this entry by default.
  • min-size, max-size: Indicates an approximate range for the size, in bytes, of this entry. -1 indicates that the size is unimportant.
  • min-time, max-time: Indicate an approximate range for the time, in seconds, taken to collect this entry. -1 indicates the time is unimportant.
  • pii: Personally identifiable information. Indicates whether the entry would have information that would identify the system owner, or details of their network topology. This is one of:
    • no: no PII will be in these entries
    • yes: PII will likely or certainly be in these entries
    • maybe: you might wish to audit these entries for PII
    • if_customized: if the files are unmodified, they will contain no PII, but since we encourage editing of these files, PII may have been introduced by such customization. This is used in particular for the networking scripts in the control domain.

Passwords are never to be included in any bug report, regardless of any PII declaration.

 

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above).

Download a copy of the logs of the specified XenServer hosts. The copy is saved by default in a timestamped file named hostname-yyyy-mm-ddThh:mm:ssZ.tar.gz. You can specify a different filename using the optional parameter file-name.

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

Caution

While the xe host-logs-download command will work if executed on the local host (that is, without a specific hostname specified), do not use it this way. Doing so would clutter the control domain partition with the copy of the logs. The command should only be used from a remote off-host machine where you have space to hold the copy of the logs.

Reboot the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable command, otherwise a "HOST_IN_USE" error message is displayed.

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool will recover when the XenServer hosts returns. If you shut down a pool member, other members and the master will continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on line, at which point the members will reconnect and synchronize with the master, or until you make one of the members into the master.
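For example, identifying the host by its name-label, a typical sequence is to disable the host and then reboot it:

xe host-disable host=<host_name>
xe host-reboot host=<host_name>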

Shut down the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable command, otherwise a "HOST_IN_USE" error message is displayed.

The host(s) on which this operation should be performed are selected via the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool will recover when the XenServer hosts returns. If you shut down a pool member, other members and the master will continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on line, at which point the members will reconnect and synchronize with the master, or until one of the members is made into the master. If HA is enabled for the pool, one of the members will be made into a master automatically. If HA is disabled, you must manually make the desired server into the master with the pool-designate-new-master command. See Section 6.4.11.1, “pool-designate-new-master”.

Commands for working with XenServer host patches (updates). These are for the standard non-OEM editions of XenServer. For commands relating to updating the OEM edition of XenServer, see Section 6.4.16, “Update commands” for details.

The patch objects can be listed with the standard object listing command (xe patch-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

Patch parameters

Patches have the following parameters:

Commands for working with PBDs (Physical Block Devices). These are the software objects through which the XenServer host accesses storage repositories (SRs).

The PBD objects can be listed with the standard object listing command (xe pbd-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

PBD parameters

PBDs have the following parameters:

Commands for working with PIFs (objects representing the physical network interfaces).

The PIF objects can be listed with the standard object listing command (xe pif-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

PIF parameters

PIFs have the following parameters:

  • uuid: The unique identifier/object reference for the PIF (read only)
  • device: machine-readable name of the interface (for example, eth0) (read only)
  • MAC: the MAC address of the PIF (read only)
  • physical: If true, the PIF points to an actual physical network interface (read only)
  • currently-attached: Is the PIF currently attached on this host? true or false (read only)
  • MTU: Maximum Transmission Unit of the PIF in bytes (read only)
  • VLAN: VLAN tag for all traffic passing through this interface; -1 indicates no VLAN tag is assigned (read only)
  • bond-master-of: The UUID of the bond this PIF is the master of (if any) (read only)
  • bond-slave-of: The UUID of the bond this PIF is the slave of (if any) (read only)
  • management: Is this PIF designated to be a management interface for the control domain (read only)
  • network-uuid: the unique identifier/object reference of the virtual network to which this PIF is connected (read only)
  • network-name-label: the name of the virtual network to which this PIF is connected (read only)
  • host-uuid: the unique identifier/object reference of the XenServer host to which this PIF is connected (read only)
  • host-name-label: the name of the XenServer host to which this PIF is connected (read only)
  • IP-configuration-mode: Type of network address configuration used; DHCP or static (read only)
  • IP: IP address of the PIF, defined here if IP-configuration-mode is static; undefined if DHCP (read only)
  • netmask: Netmask of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
  • gateway: Gateway address of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
  • DNS: DNS address of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
  • io_read_kbs: average read rate in kB/s for the device (read only)
  • io_write_kbs: average write rate in kB/s for the device (read only)
  • carrier: link state for this device (read only)
  • vendor-id: the ID assigned to the NIC's vendor (read only)
  • vendor-name: the NIC vendor's name (read only)
  • device-id: the ID assigned by the vendor to this NIC model (read only)
  • device-name: the name assigned by the vendor to this NIC model (read only)
  • speed: Data transfer rate of the NIC (read only)
  • duplex: Duplexing mode of the NIC; full or half (read only)
  • pci-bus-path: (read only)
  • other-config: additional configuration (read/write map parameter)
  • disallow-unplug: True if this PIF is a dedicated storage NIC, false otherwise (read/write)

Commands for working with pools. A pool is an aggregate of one or more XenServer hosts. A pool uses one or more shared storage repositories so that the VMs running on one XenServer host in the pool can be migrated in near-real time (while still running, without needing to be shut down and brought back up) to another XenServer host in the pool. By default, each XenServer host is a pool consisting of a single member. When a XenServer host is joined to a pool, it is designated as a member, and the master of the pool it has joined remains the pool master.

The singleton pool object can be listed with the standard object listing command (xe pool-list), and its parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

Pool parameters

Pools have the following parameters:

Commands for controlling SRs (storage repositories).

The SR objects can be listed with the standard object listing command (xe sr-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

SR parameters

SRs have the following parameters:

  • uuid: The unique identifier/object reference for the SR (read only)
  • name-label: The name of the SR (read/write)
  • name-description: The description string of the SR (read/write)
  • allowed-operations: list of the operations allowed on the SR in this state (read only set parameter)
  • current-operations: A list of the operations that are currently in progress on this SR (read only set parameter)
  • VDIs: unique identifier/object reference for the virtual disks in this SR (read only set parameter)
  • PBDs: unique identifier/object reference for the PBDs attached to this SR (read only set parameter)
  • physical-utilisation: physical space currently utilized on this SR, in bytes. Note that for sparse disk formats, physical utilization may be less than virtual allocation (read only)
  • physical-size: total physical size of the SR, in bytes (read only)
  • type: type of the SR, used to specify the SR backend driver to use (read only)
  • content-type: the type of the SR's content. Currently used only to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the content-type must be set to iso. In other cases, it is recommended that this be set either to empty, or the string 'user'. (read only)
  • shared: true if this SR is capable of being shared between multiple XenServer hosts; false otherwise (read/write)
  • other-config: A list of key/value pairs that specify additional configuration parameters for the SR (read/write map parameter)
  • host: The storage repository host name (read only)
  • virtual-allocation: sum of virtual_sizes of all VDIs in this storage repository (in bytes) (read only)
  • sm-config: SM dependent data (read only map parameter)
  • blobs: Binary data store (read only)

Commands for working with long-running asynchronous tasks. These are tasks such as starting, stopping, and suspending a Virtual Machine, which are typically made up of a set of other atomic subtasks that together accomplish the requested operation.

The task objects can be listed with the standard object listing command (xe task-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

Task parameters

Tasks have the following parameters:

Commands for working with VM templates.

Templates are essentially VMs with the is-a-template parameter set to true. A template is a "gold image" that contains all the various configuration settings to instantiate a specific VM. XenServer ships with a base set of templates, which range from generic "raw" VMs that can boot an OS vendor installation CD (RHEL, CentOS, SLES, Windows) to complete pre-configured OS instances (Debian Etch and Sarge). With XenServer you can create VMs, configure them in standard forms for your particular needs, and save a copy of them as templates for future use in VM deployment.
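For example, a hedged sketch of instantiating a new VM from a template with the vm-install command (the template and VM names below are placeholders):

xe vm-install template=<template_name> new-name-label=<new_vm_name>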

The template objects can be listed with the standard object listing command (xe template-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

Template parameters

Templates have the following parameters:

Parameter Name (Type): Description

uuid (read only): The unique identifier/object reference for the template
name-label (read/write): The name of the template
name-description (read/write): The description string of the template
user-version (read/write): string for creators of VMs and templates to put version information
is-a-template (read/write): true if this is a template. Template VMs can never be started; they are used only for cloning other VMs. Note that setting is-a-template using the CLI is not supported.
is-control-domain (read only): true if this is a control domain (domain 0 or a driver domain)
power-state (read only): Current power state; always halted for a template
memory-dynamic-max (read/write): dynamic maximum memory in bytes. Currently unused, but if changed the following constraint must be obeyed: memory_static_max >= memory_dynamic_max >= memory_dynamic_min >= memory_static_min.
memory-dynamic-min (read/write): dynamic minimum memory in bytes. Currently unused, but if changed the same constraints as for memory-dynamic-max must be obeyed.
memory-static-max (read/write): statically-set (absolute) maximum memory in bytes. This is the main value used to determine the amount of memory assigned to a VM.
memory-static-min (read/write): statically-set (absolute) minimum memory in bytes. This represents the absolute minimum memory, and memory-static-min must be less than memory-static-max. This value is currently unused in normal operation, but the previous constraint must be obeyed.
suspend-VDI-uuid (read only): The VDI that a suspend image is stored on (has no meaning for a template)
VCPUs-params (read/write map parameter): configuration parameters for the selected VCPU policy.

You can tune a VCPU's pinning with

xe vm-param-set uuid=<vm_uuid> \
VCPUs-params:mask=1,2,3

A VM created from this template will then run on physical CPUs 1, 2, and 3 only.

You can also tune the VCPU priority (Xen scheduling) with the cap and weight parameters; for example:

xe vm-param-set uuid=<vm_uuid> \
VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> \
VCPUs-params:cap=100

A VM based on this template with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535; the default is 256.

The cap optionally fixes the maximum amount of CPU a VM based on this template will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed as a percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, and so on. The default, 0, means there is no upper cap.

VCPUs-max (read/write): Maximum number of VCPUs
VCPUs-at-startup (read/write): Boot number of VCPUs
actions-after-crash (read/write): action to take if a VM based on this template crashes
console-uuids (read only set parameter): virtual console devices
platform (read/write map parameter): platform-specific configuration
allowed-operations (read only set parameter): list of the operations allowed in this state
current-operations (read only set parameter): A list of the operations that are currently in progress on this template
allowed-VBD-devices (read only set parameter): list of VBD identifiers available for use, represented by integers in the range 0-15. This list is informational only, and other devices may be used (but may not work).
allowed-VIF-devices (read only set parameter): list of VIF identifiers available for use, represented by integers in the range 0-15. This list is informational only, and other devices may be used (but may not work).
HVM-boot-policy (read/write)
HVM-boot-params (read/write map parameter): The order key controls the HVM guest boot order, represented as a string where each character is a boot method: "d" for the CD/DVD, "c" for the root disk, and "n" for network PXE boot. The default is "dc".
PV-kernel (read/write): path to the kernel
PV-ramdisk (read/write): path to the initrd
PV-args (read/write): string of kernel command line arguments
PV-legacy-args (read/write): string of arguments to make legacy VMs based on this template boot
PV-bootloader (read/write): name of or path to the bootloader
PV-bootloader-args (read/write): string of miscellaneous arguments for the bootloader
last-boot-CPU-flags (read only): describes the CPU flags on which a VM based on this template was last booted; not populated for a template
resident-on (read only): the XenServer host on which a VM based on this template is currently resident; appears as <not in database> for a template
affinity (read/write): a XenServer host which a VM based on this template has preference for running on; used by the xe vm-start command to decide where to run the VM
other-config (read/write map parameter): A list of key/value pairs that specify additional configuration parameters for the template
start-time (read only): Timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of the Unix/POSIX epoch) for a template
install-time (read only): Timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of the Unix/POSIX epoch) for a template
memory-actual (read only): The actual memory being used by a VM based on this template; 0 for a template
VCPUs-number (read only): The number of virtual CPUs assigned to a VM based on this template; 0 for a template
VCPUs-utilisation (read only map parameter): A list of virtual CPUs and their weight
os-version (read only map parameter): the version of the operating system for a VM based on this template; appears as <not in database> for a template
PV-drivers-version (read only map parameter): the versions of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template
PV-drivers-up-to-date (read only): flag for latest version of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template
memory (read only map parameter): memory metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
disks (read only map parameter): disk metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
networks (read only map parameter): network metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
other (read only map parameter): other metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
guest-metrics-last-updated (read only): timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example Z for UTC (GMT)
actions-after-shutdown (read/write): action to take after the VM has shut down
actions-after-reboot (read/write): action to take after the VM has rebooted
possible-hosts (read only): list of hosts that could potentially host the VM
HVM-shadow-multiplier (read/write): multiplier applied to the amount of shadow memory that will be made available to the guest
dom-id (read only): domain ID (if available, -1 otherwise)
recommendations (read only): An XML specification of recommended values and ranges for properties of this VM
xenstore-data (read/write map parameter): data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created
is-a-snapshot (read only): True if this template is a VM snapshot
snapshot_of (read only): The UUID of the VM that this template is a snapshot of
snapshots (read only): The UUID(s) of any snapshots that have been taken of this template
snapshot_time (read only): The timestamp of the most recent VM snapshot taken
memory-target (read only): The target amount of memory set for this template
blocked-operations (read/write map parameter): Lists the operations that cannot be performed on this template
last-boot-record (read only): Record of the last boot parameters for this template, in XML format
ha-always-run (read/write): True if an instance of this template will always be restarted on another host if the host it is resident on fails
ha-restart-priority (read/write): 1, 2, 3 or best effort. 1 is the highest restart priority
blobs (read only): Binary data store
live (read only): Only relevant to a running VM

Commands for working with VBDs (Virtual Block Devices).

A VBD is a software object that connects a VM to the VDI, which represents the contents of the virtual disk. The VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on), while the VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on).

The VBD objects can be listed with the standard object listing command (xe vbd-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.
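
For example, to list only the VBDs attached to a particular VM, you can filter on vm-uuid (the UUID below is a placeholder):

xe vbd-list vm-uuid=<vm_uuid> params=uuid,vdi-uuid,device,bootable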

VBD parameters

VBDs have the following parameters:

Parameter Name (Type): Description

uuid (read only): The unique identifier/object reference for the VBD
vm-uuid (read only): The unique identifier/object reference for the VM this VBD is attached to
vm-name-label (read only): The name of the VM this VBD is attached to
vdi-uuid (read only): The unique identifier/object reference for the VDI this VBD is mapped to
vdi-name-label (read only): The name of the VDI this VBD is mapped to
empty (read only): if true, this represents an empty drive
device (read only): the device seen by the guest, for example hda1
userdevice (read/write): user-friendly device name
bootable (read/write): true if this VBD is bootable
mode (read/write): the mode the VBD should be mounted with
type (read/write): how the VBD appears to the VM, for example disk or CD
currently-attached (read only): true if the VBD is currently attached on this host, false otherwise
storage-lock (read only): true if a storage-level lock was acquired
status-code (read only): error/success code associated with the last attach operation
status-detail (read only): error/success information associated with the last attach operation status
qos_algorithm_type (read/write): the QoS algorithm to use
qos_algorithm_params (read/write map parameter): parameters for the chosen QoS algorithm
qos_supported_algorithms (read only set parameter): supported QoS algorithms for this VBD
io_read_kbs (read only): average read rate in kB per second for this VBD
io_write_kbs (read only): average write rate in kB per second for this VBD
allowed-operations (read only set parameter): list of the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client.
current-operations (read only set parameter): links each of the running tasks using this object (by reference) to a current_operation enum which describes the nature of the task.
unpluggable (read/write): true if this VBD supports hot-unplug
attachable (read only): true if the device can be attached
other-config (read/write map parameter): additional configuration

Commands for working with VDIs (Virtual Disk Images).

A VDI is a software object that represents the contents of the virtual disk seen by a VM, as opposed to the VBD, which is a connector object that ties a VM to the VDI. The VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on), while the VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on).

The VDI objects can be listed with the standard object listing command (xe vdi-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.
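
For example, to see the VDIs on a particular SR together with their sizes (the SR UUID is a placeholder):

xe vdi-list sr-uuid=<sr_uuid> params=uuid,name-label,virtual-size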

VDI parameters

VDIs have the following parameters:

Parameter Name (Type): Description

uuid (read only): The unique identifier/object reference for the VDI
name-label (read/write): The name of the VDI
name-description (read/write): The description string of the VDI
allowed-operations (read only set parameter): a list of the operations allowed in this state
current-operations (read only set parameter): a list of the operations that are currently in progress on this VDI
sr-uuid (read only): SR in which the VDI resides
vbd-uuids (read only set parameter): a list of VBDs that refer to this VDI
crashdump-uuids (read only set parameter): list of crash dumps that refer to this VDI
virtual-size (read only): size of disk as presented to the VM, in bytes. Note that, depending on the storage backend type, the size may not be respected exactly
physical-utilisation (read only): amount of physical space that the VDI is currently taking up on the SR, in bytes
type (read only): type of VDI, for example System or User
sharable (read only): true if this VDI may be shared
read-only (read only): true if this VDI can only be mounted read-only
storage-lock (read only): true if this VDI is locked at the storage level
parent (read only): references the parent VDI, if this VDI is part of a chain
missing (read only): true if an SR scan operation reported this VDI as not present
other-config (read/write map parameter): additional configuration information for this VDI
sr-name-label (read only): name of the containing storage repository
location (read only): location information
managed (read only): true if the VDI is managed
xenstore-data (read only map parameter): data to be inserted into the xenstore tree (/local/domain/0/backend/vbd/<domid>/<device-id>/sm-data) after the VDI is attached. This is generally set by the SM backends on vdi_attach.
sm-config (read only map parameter): SM dependent data
is-a-snapshot (read only): True if this VDI is a VM storage snapshot
snapshot_of (read only): The UUID of the storage this VDI is a snapshot of
snapshots (read only): The UUID(s) of all snapshots of this VDI
snapshot_time (read only): The timestamp of the snapshot operation that created this VDI

Commands for working with VIFs (Virtual network interfaces).

The VIF objects can be listed with the standard object listing command (xe vif-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.
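
For example, to show each VIF of a given VM along with its MAC address and the network it is connected to (the VM UUID is a placeholder):

xe vif-list vm-uuid=<vm_uuid> params=uuid,device,MAC,network-name-label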

VIF parameters

VIFs have the following parameters:

Parameter Name (Type): Description

uuid (read only): The unique identifier/object reference for the VIF
vm-uuid (read only): The unique identifier/object reference for the VM that this VIF resides on
vm-name-label (read only): The name of the VM that this VIF resides on
allowed-operations (read only set parameter): a list of the operations allowed in this state
current-operations (read only set parameter): a list of the operations that are currently in progress on this VIF
device (read only): integer label of this VIF, indicating the order in which VIF backends were created
MAC (read only): MAC address of the VIF, as exposed to the VM
MTU (read only): Maximum Transmission Unit of the VIF in bytes. This parameter is read-only, but you can override the MTU setting with the mtu key via the other-config map parameter. For example, to reset the MTU on a virtual NIC to use jumbo frames:

xe vif-param-set uuid=<vif_uuid> \
other-config:mtu=9000

currently-attached (read only): true if the device is currently attached
qos_algorithm_type (read/write): QoS algorithm to use
qos_algorithm_params (read/write map parameter): parameters for the chosen QoS algorithm
qos_supported_algorithms (read only set parameter): supported QoS algorithms for this VIF
other-config (read/write map parameter): A list of key/value pairs that specify additional configuration parameters for this VIF
network-uuid (read only): The unique identifier/object reference of the virtual network to which this VIF is connected
network-name-label (read only): The descriptive name of the virtual network to which this VIF is connected
io_read_kbs (read only): average read rate in kB/s for this VIF
io_write_kbs (read only): average write rate in kB/s for this VIF

Commands for controlling VMs and their attributes.

VM selectors

Several of the commands listed here have a common mechanism for selecting one or more VMs on which to perform the operation. The simplest way is to supply the argument vm=<name_or_uuid>. VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted selects all VMs whose power-state parameter is equal to halted. Where multiple VMs match, the option --multiple must be specified to perform the operation. The full list of parameters that can be matched is described at the beginning of this section and can be obtained with the command xe vm-list params=all. If no parameters to select VMs are given, the operation is performed on all VMs.
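
As an illustration of the selector mechanism, the first command below lists the names of all halted VMs, and the second reboots every running VM; --multiple is required because more than one VM may match the filter:

xe vm-list power-state=halted params=name-label
xe vm-reboot power-state=running --multiple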

The VM objects can be listed with the standard object listing command (xe vm-list), and the parameters manipulated with the standard parameter commands. See Section 6.3.2, “Low-level param commands” for details.

VM parameters

VMs have the following parameters:

Parameter Name (Type): Description

uuid (read only): The unique identifier/object reference for the VM
name-label (read/write): The name of the VM
name-description (read/write): The description string of the VM
user-version (read/write): string for creators of VMs and templates to put version information
is-a-template (read/write): false unless this is a template; template VMs can never be started, they are used only for cloning other VMs. Note that setting is-a-template using the CLI is not supported.
is-control-domain (read only): true if this is a control domain (domain 0 or a driver domain)
power-state (read only): Current power state
memory-dynamic-max (read/write): dynamic maximum in bytes
memory-dynamic-min (read/write): dynamic minimum in bytes
memory-static-max (read/write): statically-set (absolute) maximum in bytes. If you want to change this value, the VM must be shut down.
memory-static-min (read/write): statically-set (absolute) minimum in bytes. If you want to change this value, the VM must be shut down.
suspend-VDI-uuid (read only): The VDI that a suspend image is stored on
VCPUs-params (read/write map parameter): configuration parameters for the selected VCPU policy.

You can tune a VCPU's pinning with

xe vm-param-set uuid=<vm_uuid> \
VCPUs-params:mask=1,2,3

The selected VM will then run on physical CPUs 1, 2, and 3 only.

You can also tune the VCPU priority (Xen scheduling) with the cap and weight parameters; for example:

xe vm-param-set uuid=<vm_uuid> \
VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> \
VCPUs-params:cap=100

A VM with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535; the default is 256.

The cap optionally fixes the maximum amount of CPU a VM will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed as a percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, and so on. The default, 0, means there is no upper cap.

VCPUs-max (read/write): Maximum number of virtual CPUs
VCPUs-at-startup (read/write): Boot number of virtual CPUs
actions-after-crash (read/write): action to take if the VM crashes. For PV guests, valid parameters are: preserve (for analysis only), coredump_and_restart (record a coredump and reboot the VM), coredump_and_destroy (record a coredump and leave the VM halted), restart (no coredump and restart the VM), and destroy (no coredump and leave the VM halted).
console-uuids (read only set parameter): virtual console devices
platform (read/write map parameter): platform-specific configuration
allowed-operations (read only set parameter): list of the operations allowed in this state
current-operations (read only set parameter): A list of the operations that are currently in progress on the VM
allowed-VBD-devices (read only set parameter): list of VBD identifiers available for use, represented by integers in the range 0-15. This list is informational only, and other devices may be used (but may not work).
allowed-VIF-devices (read only set parameter): list of VIF identifiers available for use, represented by integers in the range 0-15. This list is informational only, and other devices may be used (but may not work).
HVM-boot-policy (read/write)
HVM-boot-params (read/write map parameter)
HVM-shadow-multiplier (read/write): Floating point value which controls the amount of shadow memory overhead to grant the VM. Defaults to 1.0 (the minimum value), and should only be changed by advanced users.
PV-kernel (read/write): path to the kernel
PV-ramdisk (read/write): path to the initrd
PV-args (read/write): string of kernel command line arguments
PV-legacy-args (read/write): string of arguments to make legacy VMs boot
PV-bootloader (read/write): name of or path to the bootloader
PV-bootloader-args (read/write): string of miscellaneous arguments for the bootloader
last-boot-CPU-flags (read only): describes the CPU flags on which the VM was last booted
resident-on (read only): the XenServer host on which a VM is currently resident
affinity (read/write): a XenServer host which the VM has preference for running on; used by the xe vm-start command to decide where to run the VM
other-config (read/write map parameter): A list of key/value pairs that specify additional configuration parameters for the VM. For example, a VM will be started automatically after host boot if the other-config parameter includes the key/value pair auto_poweron: true
start-time (read only): Timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example Z for UTC (GMT)
install-time (read only): Timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example Z for UTC (GMT)
memory-actual (read only): The actual memory being used by a VM
VCPUs-number (read only): The number of virtual CPUs assigned to the VM.

For a paravirtualized Linux VM, this number can differ from VCPUs-max and can be changed without rebooting the VM via the vm-vcpu-hotplug command. See Section 6.4.22.26, “vm-vcpu-hotplug”. Windows VMs always run with the number of vCPUs set to VCPUs-max and must be rebooted to change this value.

Note that performance will drop sharply if you set VCPUs-number to a value greater than the number of physical CPUs on the XenServer host.

VCPUs-utilisation (read only map parameter): A list of virtual CPUs and their weight
os-version (read only map parameter): the version of the operating system for the VM
PV-drivers-version (read only map parameter): the versions of the paravirtualized drivers for the VM
PV-drivers-up-to-date (read only): flag for latest version of the paravirtualized drivers for the VM
memory (read only map parameter): memory metrics reported by the agent on the VM
disks (read only map parameter): disk metrics reported by the agent on the VM
networks (read only map parameter): network metrics reported by the agent on the VM
other (read only map parameter): other metrics reported by the agent on the VM
guest-metrics-last-updated (read only): timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example Z for UTC (GMT)
actions-after-shutdown (read/write): action to take after the VM has shut down
actions-after-reboot (read/write): action to take after the VM has rebooted
possible-hosts (read only): potential hosts of this VM
dom-id (read only): domain ID (if available, -1 otherwise)
recommendations (read only): An XML specification of recommended values and ranges for properties of this VM
xenstore-data (read/write map parameter): data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created
is-a-snapshot (read only): True if this VM is a snapshot
snapshot_of (read only): The UUID of the VM this is a snapshot of
snapshots (read only): The UUID(s) of all snapshots of this VM
snapshot_time (read only): The timestamp of the snapshot operation that created this VM snapshot
memory-target (read only): The target amount of memory set for this VM
blocked-operations (read/write map parameter): Lists the operations that cannot be performed on this VM
last-boot-record (read only): Record of the last boot parameters for this VM, in XML format
ha-always-run (read/write): True if this VM will always be restarted on another host if the host it is resident on fails
ha-restart-priority (read/write): 1, 2, 3 or best effort. 1 is the highest restart priority
blobs (read only): Binary data store
live (read only): True if the VM is running; false if HA suspects that the VM may not be running.

Import a VM from a previously-exported file. If preserve is set to true, the MAC address of the original VM will be preserved. The sr-uuid parameter determines the destination SR into which to import the VM; if it is not specified, the default SR is used.

The filename parameter can also point to an XVA-format VM, which is the legacy export format from XenServer 3.2 and is used by some third-party vendors to provide virtual appliances. This format uses a directory to store the VM data, so set filename to the root directory of the XVA export and not an actual file. Subsequent exports of the imported legacy guest will automatically be upgraded to the new filename-based format, which stores much more data about the configuration of the VM.

If metadata is true, a previously exported set of metadata can be imported without its associated disk blocks. Metadata-only import will fail if any VDIs cannot be found (named by SR and VDI.location), unless the --force option is specified, in which case the import proceeds regardless. If disks can be mirrored or moved out-of-band, metadata import/export is a fast way of moving VMs between disjoint pools (for example, as part of a disaster recovery plan).
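
For example, a sketch using the parameters described above (the file name and SR UUID are placeholders):

xe vm-import filename=<exported_vm_file> sr-uuid=<destination_sr_uuid> preserve=true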

Reboot the specified VMs.

The VM or VMs on which this operation should be performed are selected via the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

Use the force argument to cause an ungraceful shutdown, akin to pulling the plug on a physical server.
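
For example, to force-reboot a VM selected by name (the name is a placeholder):

xe vm-reboot vm=<vm_name> force=true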

The VM or VMs on which this operation should be performed are selected via the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

This is an advanced command, only to be used when a member host in a pool goes down. You can use it to force the pool master to reset the power-state of the VMs to halted. Essentially this forces the lock on the VMs and their disks so they can subsequently be started on another pool host. This call requires the force flag to be specified, and fails if it is not present on the command line.
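
For example, a sketch naming the affected VM explicitly and supplying the required force flag (shown here in the force=true form used elsewhere in this section):

xe vm-reset-powerstate vm=<vm_name> force=true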

Resume the specified VMs.

The VM or VMs on which this operation should be performed are selected via the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

If the VM is on a shared SR in a pool of hosts, use the on argument to specify the host in the pool on which to start it. By default the system will determine an appropriate host, which might be any of the members of the pool.
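
For example, to resume a VM on a particular pool member (both values are placeholders):

xe vm-resume vm=<vm_name> on=<host_name>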

Shut down the specified VM.

The VM or VMs on which this operation should be performed are selected via the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

Use the force argument to cause an ungraceful shutdown, akin to pulling the plug on a physical server.
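
For example, to force an immediate shutdown of a single VM (the name is a placeholder):

xe vm-shutdown vm=<vm_name> force=true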

Start the specified VMs.

The VM or VMs on which this operation should be performed are selected via the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

If the VMs are on a shared SR in a pool of hosts, use the on argument to specify the host in the pool on which to start the VMs. By default the system will determine an appropriate host, which might be any of the members of the pool.
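
For example, to start a VM on a specific pool member (both values are placeholders):

xe vm-start vm=<vm_name> on=<host_name>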

Lists the VIFs from the specified VMs.

The VM or VMs on which this operation should be performed are selected via the standard selection mechanism (see VM selectors). Note that the selectors operate on the VM records when filtering, and not on the VIF values. Optional arguments can be any number of the VM parameters listed at the beginning of this section.
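
For example, to list the VIFs of a single VM selected by name (the name is a placeholder):

xe vm-vif-list vm=<vm_name>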

If you experience odd behavior, application crashes, or have other issues with a XenServer host, this chapter is meant to help you solve the problem if possible and, failing that, it describes where the application logs are located and other information that can help your Citrix Solution Provider and Citrix track and resolve the issue.

Troubleshooting of installation issues is covered in the XenServer Installation Guide. Troubleshooting of Virtual Machine issues is covered in the XenServer Virtual Machine Installation Guide.

Important

We recommend that you follow the troubleshooting information in this chapter solely under the guidance of your Citrix Solution Provider or Citrix Support.

Citrix provides two forms of support: you can receive free self-help support via the Support site, or you may purchase our Support Services and directly submit requests by filing an online Support Case. Our free web-based resources include product documentation, a Knowledge Base, and discussion forums.

XenCenter can be used to gather XenServer host information. Click Get Server Status Report... in the Tools menu to open the Server Status Report wizard. You can select from a list of different types of information (various logs, crash dumps, and so on). The information is compiled and downloaded to the machine on which XenCenter is running. For details, see the XenCenter Help.

Additionally, the XenServer host has several CLI commands to make it simple to collate the output of logs and various other bits of system information using the utility xen-bugtool. Use the xe command host-bugreport-upload to collect the appropriate log files and system information and upload them to the Citrix Support ftp site. Please refer to Section 6.4.5.2, “host-bugreport-upload” for a full description of this command and its optional parameters. If you are requested to send a crashdump to Citrix Support, use the xe command host-crashdump-upload. Please refer to Section 6.4.5.4, “host-crashdump-upload” for a full description of this command and its optional parameters.

Caution

It is possible that sensitive information might be written into the XenServer host logs.

By default, the server logs report only errors and warnings. If you need to see more detailed information, you can enable more verbose logging. To do so, use the host-loglevel-set command:

host-loglevel-set log-level=level

where level can be 0, 1, 2, 3, or 4; 0 is the most verbose and 4 is the least verbose.
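
For example, to select a moderately verbose log level (the xe prefix is an assumption here, matching the way other host commands are invoked in this guide):

xe host-loglevel-set log-level=2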

Log files greater than 5 MB are rotated, keeping 4 revisions. The logrotate command is run hourly.

Consolidation of virtualized application workloads is most successful when the workloads are not all competing for the same physical resource (CPU, disk, network) at the same time. Workload optimization is best informed by measurements (for example, with a tool such as PlateSpin PowerRecon) of the historical resource consumption of each workload on physical hardware.

This section provides a simple guideline for distributing physical CPU cores among multiple XenServer virtual machines. There are situations where this simple rule would not be appropriate, but for many use cases, two principles apply:

Don't put more work on a server than it can do. Generally server time is less expensive than human time. If you put too much work on a physical server, so that VMs have to wait for CPU time, the performance will suffer, and people will be waiting for the server instead of the server waiting for people.

Provision just enough Virtual CPUs (VCPUs) for each VM's workload. 

As a formal application of these principles for workloads that require multi-VCPU virtual machines, allocate VCPUs to maintain this constraint: (V - N) <= (P - 1), where: