XenServer Software Development Kit Guide

Release 5.0.0

Table of Contents

1. Introduction
2. Getting Started
2.1. System Requirements and Preparation
2.2. Downloading
2.3. Installation
2.4. What's new
2.5. Content Map
2.6. Building Samples for the Linux Platform
2.7. Building Samples for the Windows Platform
2.8. Running the CLI
2.8.1. Tab Completion
2.9. Accessing SDK reference
3. Overview of the XenServer API
3.1. Getting Started with the API
3.1.1. Authentication: acquiring a session reference
3.1.2. Acquiring a list of templates to base a new VM installation on
3.1.3. Installing the VM based on a template
3.1.4. Taking the VM through a start/suspend/resume/stop cycle
3.1.5. Logging out
3.1.6. Install and start example: summary
3.2. Object Model Overview
3.3. Working with VIFs and VBDs
3.3.1. Creating disks and attaching them to VMs
3.3.2. Creating and attaching Network Devices to VMs
3.3.3. Host configuration for networking and storage
3.4. Exporting and Importing VMs
3.4.1. Xen Virtual Appliance (XVA) VM Import Format
3.5. XML-RPC notes
3.5.1. Datetimes
3.6. Where to look next
4. Using the API
4.1. Anatomy of a typical application
4.1.1. Choosing a low-level transport
4.1.2. Authentication and session handling
4.1.3. Finding references to useful objects
4.1.4. Invoking synchronous operations on objects
4.1.5. Using Tasks to manage asynchronous operations
4.1.6. Subscribing to and listening for events
4.2. Language bindings
4.2.1. C
4.2.2. C#
4.2.3. Python
4.2.4. Command Line Interface (CLI)
4.3. Complete application examples
4.3.1. Simultaneously migrating VMs using XenMotion
4.3.2. Cloning a VM via the XE CLI
5. Using HTTP to interact with XenServer
5.1. Persistent XenServer Performance Statistics
6. XenServer API extensions
6.1. VM console forwarding
6.1.1. Retrieving VNC consoles via the API
6.1.2. Disabling VNC forwarding for Linux VM
6.2. Paravirtual Linux installation
6.2.1. Red Hat Enterprise Linux 4.1/4.4
6.2.2. Red Hat Enterprise Linux 4.5/5.0
6.2.3. SUSE Enterprise Linux 10 SP1
6.2.4. CentOS 4.5 / 5.0
6.3. Adding Xenstore entries to VMs
6.4. Security enhancements
6.5. Advanced settings for network interfaces
6.5.1. ethtool settings
6.5.2. Miscellaneous settings
6.6. Internationalization for SR names
6.7. Hiding objects from XenCenter
7. XenCenter API Extensions
7.1. Pool
7.2. Host
7.3. VM
7.4. SR
7.5. VDI
7.6. VBD
7.7. Network
7.8. VM_guest_metrics
7.9. Task

The XenServer SDK is packaged as a Linux VM that must be imported into a XenServer Host. This document refers to the SDK virtual machine interchangeably as an SDK and an SDK VM. The first step towards working with the SDK is to install XenServer. A free version, Express Edition, is available to download at http://xenserver.citrix.vivoconcepts.com/prg/form/download_xenserver_express_4_1.cfm. Please refer to the XenServer Installation Guide for detailed instructions on how to set up your development host. When the installation is complete, please note the host IP address and the host password.

Once you have installed your XenServer Host, install XenCenter on a Windows PC. Launch the application and connect to your new XenServer Host using its IP address and the password.

In this chapter we introduce the XenServer API (hereafter referred to as just "the API") and its associated object model. The API has the following key features:

  • Management of all aspects of the XenServer Host.  Through the API one can manage VMs, storage, networking, host configuration and pools. Performance and status metrics can also be queried via the API.
  • Persistent Object Model.  The results of all side-effecting operations (e.g. object creation, deletion and parameter modifications) are persisted in a server-side database that is managed by the XenServer installation.
  • An event mechanism.  Through the API, clients can register to be notified when persistent (server-side) objects are modified. This enables applications to keep track of datamodel modifications performed by concurrently executing clients.
  • Synchronous and asynchronous invocation.  All API calls can be invoked synchronously (i.e. block until completion); any API call that may be long-running can also be invoked asynchronously. Asynchronous calls return immediately with a reference to a task object. This task object can be queried (through the API) for progress and status information. When an asynchronously invoked operation completes, the result (or error code) is available via the task object.
  • Remotable and Cross-Platform.  The client issuing the API calls does not have to be resident on the host being managed; nor does it have to be connected to the host via ssh in order to execute the API. API calls make use of the XML-RPC protocol to transmit requests and responses over the network.
  • Secure and Authenticated Access.  The XML-RPC API server executing on the host accepts secure socket connections. This allows a client to execute the APIs over the https protocol. Further, all the API calls execute in the context of a login session generated through username and password validation at the server. This provides secure and authenticated access to the XenServer installation.

We will start our tour of the API by describing the calls required to create a new VM on a XenServer installation, and take it through a start/suspend/resume/stop cycle. This is done without reference to code in any specific language; at this stage we just describe the informal sequence of RPC invocations that accomplish our "install and start" task.

The next step is to query the list of "templates" on the host. Templates are specially-marked VM objects that specify suitable default parameters for a variety of supported guest types. (If you want to see a quick enumeration of the templates on a XenServer installation for yourself then you can execute the "xe template-list" CLI command.) To get a list of templates via the API, we need to find the VM objects on the server that have their "is_a_template" field set to true. One way to do this is by calling VM.get_all_records(session) where the session parameter is the reference we acquired from our Session.login_with_password call earlier. This call queries the server, returning a snapshot (taken at the time of the call) containing all the VM object references and their field values.

(Remember that at this stage we are not concerned about the particular mechanisms by which the returned object references and field values can be manipulated in any particular client language: that detail is dealt with by our language-specific API bindings and described concretely in the following chapter. For now it suffices just to assume the existence of an abstract mechanism for reading and manipulating objects and field values returned by API calls.)

Now that we have a snapshot of all the VM objects' field values in the memory of our client application we can simply iterate through them and find the ones that have their "is_a_template" set to true. At this stage let's assume that our example application further iterates through the template objects and remembers the reference corresponding to the one that has its "name_label" set to "Debian Etch 4.0" (one of the default Linux templates supplied with XenServer).
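
As a preview of the python bindings described in the next chapter, the sequence so far might be sketched as follows (the server address and credentials are placeholders):

import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "password")
try:
    # Snapshot of every VM object and its field values, taken at the time of the call
    records = session.xenapi.VM.get_all_records()
    # Keep only the specially-marked template VMs
    templates = [ref for ref, rec in records.items() if rec["is_a_template"]]
    # Remember the reference of the Debian Etch template
    debian = [ref for ref in templates
              if records[ref]["name_label"] == "Debian Etch 4.0"][0]
    print "Template reference:", debian
finally:
    session.xenapi.session.logout()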

This section gives a high-level overview of the object model of the API. A more detailed description of the parameters and methods of each class outlined here can be found in the XenEnterprise Management API document. Python, C and C# sample programs that demonstrate how the API can be used in practice to accomplish a variety of tasks are available in the SDK VM and described in the following Chapter.

We start by giving a brief outline of some of the core classes that make up the API. (Don't worry if these definitions seem somewhat abstract in their initial presentation; the textual description in subsequent sections, and the code-sample walk through in the next Chapter will help make these concepts concrete.)

VM A VM object represents a particular virtual machine instance on a XenServer Host or Resource Pool. Example methods include start, suspend, pool_migrate; example parameters include power_state, memory_static_max, and name_label. (In the previous section we saw how the VM class is used to represent both templates and regular VMs)
Host A host object represents a physical host in a XenServer pool. Example methods include reboot and shutdown. Example parameters include software_version, hostname, and [IP] address.
VDI A VDI object represents a Virtual Disk Image. Virtual Disk Images can be attached to VMs, in which case a block device appears inside the VM through which the bits encapsulated by the Virtual Disk Image can be read and written. Example methods of the VDI class include "resize" and "clone". Example fields include "virtual_size" and "sharable". (When we called VM.provision on the VM template in our previous example, some VDI objects were automatically created to represent the newly created disks, and attached to the VM object.)
SR An SR (Storage Repository) aggregates a collection of VDIs and encapsulates the properties of physical storage on which the VDIs' bits reside. Example parameters include type (which determines the storage-specific driver a XenServer installation uses to read/write the SR's VDIs) and physical_utilisation; example methods include scan (which invokes the storage-specific driver to acquire a list of the VDIs contained within the SR and the properties of these VDIs) and create (which initializes a block of physical storage so it is ready to store VDIs).
Network A network object represents a layer-2 network that exists in the environment in which the XenServer Host instance lives. Since XenServer does not manage networks directly this is a lightweight class that serves merely to model physical and virtual network topology. VM and Host objects that are attached to a particular Network object (by virtue of VIF and PIF instances -- see below) can send network packets to each other.

At this point, readers who are finding this enumeration of classes rather terse may wish to skip to the code walk-throughs of the next chapter: there are plenty of useful applications that can be written using only a subset of the classes already described! For those who wish to continue this description of classes in the abstract, read on.

On top of the classes listed above, there are four more that act as connectors, specifying the relationships between VMs and Hosts on the one hand and Storage and Networks on the other. The first two of these classes, VBD and VIF, determine how VMs are attached to virtual disks and network objects respectively:

VBD A VBD (Virtual Block Device) object represents an attachment between a VM and a VDI. When a VM is booted its VBD objects are queried to determine which disk images (i.e. VDIs) should be attached. Example methods of the VBD class include "plug" (which hot plugs a disk device into a running VM, making the specified VDI accessible therein) and "unplug" (which hot unplugs a disk device from a running guest); example fields include "device" (which determines the device name inside the guest under which the specified VDI will be made accessible).
VIF A VIF (Virtual network InterFace) object represents an attachment between a VM and a Network object. When a VM is booted its VIF objects are queried to determine which network devices should be created. Example methods of the VIF class include "plug" (which hot plugs a network device into a running VM) and "unplug" (which hot unplugs a network device from a running guest).

The second set of "connector classes" that we will consider determine how Hosts are attached to Networks and Storage.

PIF A PIF (Physical InterFace) object represents an attachment between a Host and a Network object. If a host is connected to a Network (via a PIF) then packets on the specified Network can be transmitted and received by the corresponding host. Example fields of the PIF class include "device" (which specifies the device name to which the PIF corresponds -- e.g. eth0) and "MAC" (which specifies the MAC address of the underlying NIC that a PIF represents). Note that PIFs abstract both physical interfaces and VLANs (the latter distinguished by the existence of a positive integer in the "VLAN" field).
PBD A PBD (Physical Block Device) object represents an attachment between a Host and an SR (Storage Repository) object. Fields include "currently_attached" (which specifies whether the chunk of storage represented by the specified SR object is currently available to the host) and "device_config" (which specifies storage-driver specific parameters that determine how the low-level storage devices are configured on the specified host -- e.g. in the case of an SR rendered on an NFS filer, device_config may specify the host name of the filer and the path on the filer in which the SR files live).


Figure 3.1, “Common API Classes” presents a graphical overview of the API classes involved in managing VMs, Hosts, Storage and Networking. From this diagram, the symmetry between storage and network configuration, and also the symmetry between virtual machine and host configuration is plain to see.

In this section we walk through a few more complex scenarios, describing informally how various tasks involving virtual storage and network devices can be accomplished using the API.

Let's start by considering how to make a new blank disk image and attach it to a running VM. We will assume that we already have ourselves a running VM, and we know its corresponding API object reference (e.g. we may have created this VM using the procedure described in the previous section, and had the server return its reference to us.) We will also assume that we have authenticated with the XenServer installation and have a corresponding session reference. Indeed in the rest of this chapter, for the sake of brevity, we will stop mentioning sessions altogether.

The first step is to instantiate the disk image on physical storage. We do this via a call to VDI.create(). The VDI.create call takes a number of parameters describing the new disk image, including the SR in which it should live and its virtual size.

Invoking the VDI.create call causes the XenServer installation to create a blank disk image on physical storage, create an associated VDI object (the datamodel instance that refers to the disk image on physical storage) and return a reference to this newly created VDI object.

The way in which the disk image is represented on physical storage depends on the type of the SR in which the created VDI resides. For example, if the SR is of type "lvm" then the new disk image will be rendered as an LVM volume; if the SR is of type "nfs" then the new disk image will be a sparse VHD file created on an NFS filer. (You can query the SR type through the API using the SR.get_type() call.)
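
Expressed in the python-binding notation used in the next chapter, creating a blank 4 GiB disk might look like the following. This is a sketch only: the exact set of record fields required by VDI.create is listed in the API reference, and the SR name used here is a placeholder.

# Sketch only: the SR name is a placeholder and the full list of required
# record fields is given in the API reference.
sr = session.xenapi.SR.get_by_name_label("Local storage")[0]
vdi = session.xenapi.VDI.create({
    "name_label":       "new blank disk",
    "name_description": "created via the API",
    "SR":               sr,
    "virtual_size":     str(4 * 1024 * 1024 * 1024),   # in bytes
    "type":             "user",
    "sharable":         False,
    "read_only":        False,
    "other_config":     {},
    "xenstore_data":    {},
    "sm_config":        {},
    "tags":             [],
})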

So far we have a running VM (that we assumed the existence of at the start of this example) and a fresh VDI that we just created. Right now, these are both independent objects that exist on the XenServer Host, but there is nothing linking them together. So our next step is to create such a link, associating the VDI with our VM.

The attachment is formed by creating a new "connector" object called a VBD (Virtual Block Device). To create our VBD we invoke the VBD.create() call. The VBD.create() call takes a number of parameters, including the VM and the VDI to connect and the device name under which the disk should appear inside the guest.

Invoking VBD.create makes a VBD object on the XenServer installation and returns its object reference. However, this call in itself does not have any side-effects on the running VM (i.e. if you look inside the running VM you will see that the block device has not yet been created). The fact that the VBD object exists while the corresponding block device in the guest is not active is reflected by the VBD object's currently_attached field being set to false.
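
Continuing the sketch above, where "vm" is the reference of our running VM and "vdi" the reference returned by VDI.create (again, the exact set of record fields required by VBD.create is listed in the API reference):

vbd = session.xenapi.VBD.create({
    "VM":                   vm,
    "VDI":                  vdi,
    "userdevice":           "1",     # position of the disk inside the guest
    "bootable":             False,
    "mode":                 "RW",
    "type":                 "Disk",
    "unpluggable":          True,
    "empty":                False,
    "other_config":         {},
    "qos_algorithm_type":   "",
    "qos_algorithm_params": {},
})
# The block device only appears inside the running guest, and currently_attached
# only becomes true, once the VBD is plugged:
session.xenapi.VBD.plug(vbd)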


For expository purposes, Figure 3.2, “A VM object with 2 associated VDIs” presents a graphical example that shows the relationship between VMs, VBDs, VDIs and SRs. In this instance a VM object has 2 attached VDIs: there are 2 VBD objects that form the connections between the VM object and its VDIs; and the VDIs reside within the same SR.

We have seen that the VBD and VIF classes are used to manage configuration of block devices and network devices (respectively) inside VMs. To manage host configuration of storage and networking there are two analogous classes: PBD (Physical Block Device) and PIF (Physical [network] InterFace).

Let us start by considering the PBD class. A PBD.create() call takes a number of parameters, including the host and the SR to connect and a device_config map containing storage-driver specific configuration.

For example, imagine we have an SR object s of type "nfs" (representing a directory on an NFS filer within which VDIs are stored as VHD files); and let's say that we want a host, h, to be able to access s. In this case we invoke PBD.create() specifying host h, SR s, and a value for the device_config parameter that is the following map:

("server", "my_nfs_server.example.com"), ("serverpath", "/scratch/mysrs/sr1")

This tells the XenServer Host that SR s is accessible on host h, and further that to access SR s, the host needs to mount the directory /scratch/mysrs/sr1 on the NFS server named my_nfs_server.example.com.

Like VBD objects, PBD objects also have a field called currently_attached. Storage repositories can be attached to and detached from a given host by invoking the PBD.plug and PBD.unplug methods respectively.
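
In python-binding notation the NFS example above might be sketched as follows (a sketch only; "host" and "sr" stand for the Host and SR references):

pbd = session.xenapi.PBD.create({
    "host":          host,
    "SR":            sr,
    "device_config": {"server":     "my_nfs_server.example.com",
                      "serverpath": "/scratch/mysrs/sr1"},
    "other_config":  {},
})
# Make the storage actually available on the host:
session.xenapi.PBD.plug(pbd)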

VMs can be exported to a file and later imported to any XenServer Host. The export protocol is a simple HTTP(S) GET, which should be performed on the master. Authorization is either standard HTTP basic authentication, or if a session has already been obtained, this can be used. The VM to export is specified either by UUID or by reference. To keep track of the export, a task can be created and passed in via its reference. The request might result in a redirect if the VM's disks are only accessible on a slave.

The following arguments are passed on the command line:

Argument      Description
session_id    the reference of the session being used to authenticate; required only when not using HTTP basic authentication
task_id       the reference of the task object with which to keep track of the operation
ref           the reference of the VM; required only if not using the UUID
uuid          the UUID of the VM; required only if not using the reference

For example:

curl "http://root:foo@xenserver/export?uuid=<vm_uuid>&task_id=<task_id>" -o export

To export just the metadata, use the URI http://server/export_metadata.

The import protocol is similar, using HTTP(S) PUT. The session_id and task_id arguments are as for the export. The ref and uuid are not used; a new reference and uuid will be generated for the VM. There are some additional parameters:

Argument      Description
restore       if this parameter is true, the import is treated as replacing the original VM; the implication of this currently is that the MAC addresses on the VIFs are exactly as they were in the export, which will lead to conflicts if the original VM is still running
force         if this parameter is true, any checksum failures will be ignored (the default is to destroy the VM if a checksum error is detected)
sr_id         the reference of an SR into which the VM should be imported; the default behavior is to import into the Pool.default_SR

To import just the metadata, use the URI http://server/import_metadata
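
As an illustration, the import PUT can also be driven from python with the standard httplib module; this is a sketch only, and the host name, file name and object references are placeholders:

import httplib, os, urllib

host = "xenserver.example.com"
session_ref = "OpaqueRef:..."   # from Session.login_with_password
sr_ref = "OpaqueRef:..."        # reference of the destination SR
filename = "export"             # file produced by the export above

conn = httplib.HTTPConnection(host)
conn.putrequest("PUT", "/import?session_id=%s&sr_id=%s" %
                (urllib.quote(session_ref), urllib.quote(sr_ref)))
conn.putheader("Content-Length", str(os.path.getsize(filename)))
conn.endheaders()
f = open(filename, "rb")
while True:
    block = f.read(65536)
    if not block:
        break
    conn.send(block)
f.close()
response = conn.getresponse()
print response.status, response.reason
conn.close()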

XenServer supports a human-readable legacy VM input format called XVA. This section describes the syntax and structure of XVA.

An XVA consists of a directory containing XML metadata and a set of disk images. A VM represented by an XVA is not intended to be directly executable. Data within an XVA package is compressed and intended for either archiving on permanent storage or for being transmitted to a VM server - such as a Xen Enterprise host - where it can be decompressed and executed.

XVA is a hypervisor-neutral packaging format; it should be possible to create simple tools to instantiate an XVA VM on any other platform. XVA does not specify any particular runtime format; for example disks may be instantiated as file images, LVM volumes, QCoW images, VMDK or VHD images. An XVA VM may be instantiated any number of times, each instantiation may have a different runtime format.

There are a number of issues that XVA does not address; these are all addressed by the related Open Virtual Appliance specification.

An XVA is a directory containing, at a minimum, a file called ova.xml. This file describes the VM contained within the XVA and is described in Section 3.2. Disks are stored within sub-directories and are referenced from the ova.xml. The format of disk data is described later in Section 3.3.

The following terms will be used in the rest of the chapter:

The "ova.xml" file contains the following elements:

<appliance version="0.1">

The number in the "version" attribute indicates the version of this specification against which the XVA was constructed; in this case version 0.1. Inside the <appliance> there is exactly one <vm> (in the OVA specification, multiple <vm>s are permitted):

<vm name="name">

Each <vm> element describes one VM. The "name" attribute is for future internal use only and must be unique within the ova.xml file. The "name" attribute is permitted to be any valid UTF-8 string. Inside each <vm> tag are the following compulsory elements:

<label>... text ... </label>

A short name for the VM to be displayed in a UI.

<shortdesc> ... description ... </shortdesc>

A description for the VM to be displayed in the UI. Note that for both <label> and <shortdesc> contents, leading and trailing whitespace will be ignored.

<config mem_set="268435456" vcpus="1"/>

The <config> element has attributes which describe the amount of memory in bytes (mem_set) and number of CPUs (vcpus) the VM should have.

Each <vm> has zero or more <vbd> elements representing block devices which look like the following:

<vbd device="sda" function="root" mode="w" vdi="vdi_sda"/>

The attributes have the following meanings:

Each <vm> may have an optional <hacks> section like the following:

<hacks is_hvm="false" kernel_boot_cmdline="root=/dev/sda1 ro"/>

The <hacks> element is present in the XVA files generated by XenEnterprise but will be removed in future. The attribute "is_hvm" is either "true" or "false", depending on whether the VM should be booted in HVM or not. The "kernel_boot_cmdline" contains additional kernel commandline arguments when booting a guest via pygrub.

In addition to a <vm> element, the <appliance> will contain zero or more <vdi> elements like the following:

<vdi name="vdi_sda" size="5368709120" source="file://sda"
type="dir-gzipped-chunks">

Each <vdi> corresponds to a disk image. The attributes have the following meanings:

A single disk image encoding is specified, which has the type "dir-gzipped-chunks": each image is represented by a directory containing a sequence of files as follows:

-rw-r--r-- 1 dscott xendev 458286013    Sep 18 09:51 chunk000000000.gz
-rw-r--r-- 1 dscott xendev 422271283    Sep 18 09:52 chunk000000001.gz
-rw-r--r-- 1 dscott xendev 395914244    Sep 18 09:53 chunk000000002.gz
-rw-r--r-- 1 dscott xendev 9452401      Sep 18 09:53 chunk000000003.gz
-rw-r--r-- 1 dscott xendev 1096066      Sep 18 09:53 chunk000000004.gz
-rw-r--r-- 1 dscott xendev 971976       Sep 18 09:53 chunk000000005.gz
-rw-r--r-- 1 dscott xendev 971976       Sep 18 09:53 chunk000000006.gz
-rw-r--r-- 1 dscott xendev 971976       Sep 18 09:53 chunk000000007.gz
-rw-r--r-- 1 dscott xendev 573930       Sep 18 09:53 chunk000000008.gz

Each file (named "chunk-XXXXXXXXX.gz") is a gzipped file containing exactly 1e9 bytes (1GB, not 1GiB) of raw block data. The small size was chosen to be safely under the maximum file size limits of several filesystems. If the files are gunzipped and then concatenated together, the original image is recovered.
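
As an illustration, an image stored in this encoding could be reassembled into a single raw file with a short python script such as the following (a sketch, not part of the SDK):

import gzip, os, sys

def reassemble(chunk_dir, output_file):
    """Gunzip each chunk in order and append it to the output file."""
    out = open(output_file, "wb")
    for chunk in sorted(f for f in os.listdir(chunk_dir) if f.endswith(".gz")):
        f = gzip.open(os.path.join(chunk_dir, chunk), "rb")
        while True:
            block = f.read(65536)
            if not block:
                break
            out.write(block)
        f.close()
    out.close()

if __name__ == "__main__":
    reassemble(sys.argv[1], sys.argv[2])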

XenEnterprise provides two mechanisms for booting a VM: (i) using a paravirtualized kernel extracted through pygrub; and (ii) using HVM. The current implementation uses the "is_hvm" flag within the <hacks> section to decide which mechanism to use.

The rest of this section describes a very simple Debian VM packaged as an XVA. The VM has two disks: one of size 5120MiB, which holds the root filesystem and is used to boot the guest via pygrub, and another of size 512MiB, which is used for swap. The VM has 512MiB of memory and uses one virtual CPU.

At the topmost level the simple Debian VM is represented by a single directory:

$ ls -l
total 4
drwxr-xr-x 3 dscott xendev 4096 Oct 24 09:42 very simple Debian VM

Inside the main XVA directory are two sub-directories - one per disk - and the single file: ova.xml:

$ ls -l very\ simple\ Debian\ VM/
total 8
-rw-r--r-- 1 dscott xendev 1016 Oct 24 09:42 ova.xml
drwxr-xr-x 2 dscott xendev 4096 Oct 24 09:42 sda
drwxr-xr-x 2 dscott xendev 4096 Oct 24 09:53 sdb

Inside each disk sub-directory are a set of files, each file contains 1GB of raw disk block data compressed via gzip:

$ ls -l very\ simple\ Debian\ VM/sda/
total 2053480
-rw-r--r-- 1 dscott xendev 202121645 Oct 24 09:43 chunk-000000000.gz
-rw-r--r-- 1 dscott xendev 332739042 Oct 24 09:45 chunk-000000001.gz
-rw-r--r-- 1 dscott xendev 401299288 Oct 24 09:48 chunk-000000002.gz
-rw-r--r-- 1 dscott xendev 389585534 Oct 24 09:50 chunk-000000003.gz
-rw-r--r-- 1 dscott xendev 624567877 Oct 24 09:53 chunk-000000004.gz
-rw-r--r-- 1 dscott xendev 150351797 Oct 24 09:54 chunk-000000005.gz
$ ls -l very\ simple\ Debian\ VM/sdb
total 516
-rw-r--r-- 1 dscott xendev 521937 Oct 24 09:54 chunk-000000000.gz

The example simple Debian VM would have an ova.xml file like the following:

<?xml version="1.0" ?>
<appliance version="0.1">
  <vm name="vm">
  <label>
  very simple Debian VM
  </label>
  <shortdesc>
  the description field can contain any valid UTF-8
  </shortdesc>
  <config mem_set="536870912" vcpus="1"/>
  <hacks is_hvm="false" kernel_boot_cmdline="root=/dev/sda1 ro ">
  <!--This section is temporary and will be ignored in future. Attribute
is_hvm ("true" or "false") indicates whether the VM will be booted in HVM mode. In
future this will be autodetected. Attribute kernel_boot_cmdline contains the kernel
commandline for the case where a proper grub menu.lst is not present. In future
booting shall only use pygrub.-->
  </hacks>
  <vbd device="sda" function="root" mode="w" vdi="vdi_sda"/>
  <vbd device="sdb" function="swap" mode="w" vdi="vdi_sdb"/>
  </vm>
  <vdi name="vdi_sda" size="5368709120" source="file://sda" type="dir-gzippedchunks"/>
  <vdi name="vdi_sdb" size="536870912" source="file://sdb" type="dir-gzippedchunks"/>
</appliance>

This chapter describes how to use the XenServer Management API from real programs to manage XenServer Hosts and VMs. The chapter begins with a walk-through of a typical client application and demonstrates how the API can be used to perform common tasks. Example code fragments are given in python syntax but equivalent code in C and C# would look very similar. The language bindings themselves are discussed afterwards and the chapter finishes with walk-throughs of two complete examples included in the SDK.

This section describes the structure of a typical application using the XenServer Management API. Most client applications begin by connecting to a XenServer Host and authenticating (e.g. with a username and password). Assuming the authentication succeeds, the server will create a "session" object and return a reference to the client. This reference will be passed as an argument to all future API calls. Once authenticated, the client may search for references to other useful objects (e.g. XenServer Hosts, VMs, etc.) and invoke operations on them. Operations may be invoked either synchronously or asynchronously; special task objects represent the state and progress of asynchronous operations. These application elements are all described in detail in the following sections.
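
The following fragment sketches this pattern (it is not taken from the SDK; the server address, credentials and VM name are placeholders):

import time
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "password")
try:
    # Find a reference to a useful object
    vm = session.xenapi.VM.get_by_name_label("my vm")[0]

    # Synchronous invocation: blocks until the operation has completed
    session.xenapi.VM.start(vm, False, False)

    # Asynchronous invocation: returns a task reference immediately
    task = session.xenapi.Async.VM.clean_shutdown(vm)
    while session.xenapi.task.get_record(task)["status"] == "pending":
        time.sleep(1)
    session.xenapi.task.destroy(task)
finally:
    session.xenapi.session.logout()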

With the exception of the task and metrics classes, whenever an object is modified the server generates an event. Clients can subscribe to this event stream on a per-class basis and receive updates rather than resorting to frequent polling. Events come in three types: add, del and mod, generated respectively when an object is created, destroyed or modified.

Events also contain a monotonically increasing ID, the name of the class of object and a snapshot of the object state equivalent to the result of a get_record().

Clients register for events by calling event.register() with a list of class names or the special string "*". Clients receive events by executing event.next() which blocks until events are available and returns the new events.

The following python code fragment demonstrates how to print a summary of every event generated by a system: (similar code exists in /SDK/client-examples/python/watch-all-events.py)

fmt = "%8s  %20s  %5s  %s"
session.xenapi.event.register(["*"])
while True:
    try:
        for event in session.xenapi.event.next():
            name = "(unknown)"
            if "snapshot" in event.keys():
                snapshot = event["snapshot"]
                if "name_label" in snapshot.keys():
                    name = snapshot["name_label"]
            print fmt % (event['id'], event['class'], event['operation'], name)           
    except XenAPI.Failure, e:
        if e.details == [ "EVENTS_LOST" ]:
            print "Caught EVENTS_LOST; should reregister"

Although it is possible to write applications which use the XenServer Management API directly through raw XML-RPC calls, the task of developing third-party applications is greatly simplified through the use of a language binding which exposes the individual API calls as first-class functions in the target language. The SDK includes language bindings and example code for the C, C# and python programming languages and for both Linux and Windows clients.

This section describes two complete examples of real programs using the API. The application source code is contained within the SDK.

This python example (contained in /SDK/client-examples/python/permute.py) demonstrates how to use XenMotion to move VMs simultaneously between hosts in a Resource Pool. The example makes use of asynchronous API calls and shows how to wait for a set of tasks to complete.

The program begins with some standard boilerplate and imports the API bindings module:

import sys, time
import XenAPI

Next the commandline arguments containing a server URL, username, password and a number of iterations are parsed. The username and password are used to establish a session which is passed to the function main, which is called multiple times in a loop. Note the use of try: finally: to make sure the program logs out of its session at the end.

if __name__ == "__main__":
    if len(sys.argv) <> 5:
        print "Usage:"
        print sys.argv[0], " <url> <username> <password> <iterations>"
        sys.exit(1)
    url = sys.argv[1]
    username = sys.argv[2]
    password = sys.argv[3]
    iterations = int(sys.argv[4])
    # First acquire a valid session by logging in:
    session = XenAPI.Session(url)
    session.xenapi.login_with_password(username, password)
    try:
        for i in range(iterations):
            main(session, i)
    finally:
        session.xenapi.session.logout()

The main function examines each running VM in the system, taking care to filter out control domains (which are part of the system and not controllable by the user). A list of running VMs and their current hosts is constructed.

def main(session, iteration):
    # Find a non-template VM object
    all = session.xenapi.VM.get_all()
    vms = []
    hosts = []
    for vm in all:
        record = session.xenapi.VM.get_record(vm)
        if not(record["is_a_template"]) and \
           not(record["is_control_domain"]) and \
           record["power_state"] == "Running":
            vms.append(vm)
            hosts.append(record["resident_on"])
    print "%d: Found %d suitable running VMs" % (iteration, len(vms))

Next the list of hosts is rotated:

# use a rotation as a permutation
    hosts = [hosts[-1]] + hosts[:(len(hosts)-1)]

Each VM is then moved via XenMotion to the new host under this rotation (i.e. a VM running on host at position 2 in the list will be moved to the host at position 1 in the list etc.) In order to execute each of the movements in parallel, the asynchronous version of the VM.pool_migrate is used and a list of task references constructed. Note the live flag passed to the VM.pool_migrate; this causes the VMs to be moved while they are still running.

tasks = []
    for i in range(0, len(vms)):
        vm = vms[i]
        host = hosts[i]
        task = session.xenapi.Async.VM.pool_migrate(vm, host, { "live": "true" })
        tasks.append(task)

The list of tasks is then polled for completion:

finished = False
    records = {}
    while not(finished):
        finished = True
        for task in tasks:
            record = session.xenapi.task.get_record(task)
            records[task] = record
            if record["status"] == "pending":
                finished = False
        time.sleep(1)

Once all tasks have left the pending state (i.e. they have successfully completed, failed or been cancelled) the tasks are polled once more to see if they all succeeded:

allok = True
    for task in tasks:
        record = records[task]
        if record["status"] <> "success":
            allok = False

If any one of the tasks failed then details are printed, an exception is raised and the task objects left around for further inspection. If all tasks succeeded then the task objects are destroyed and the function returns.

if not(allok):
        print "One of the tasks didn't succeed at", \
            time.strftime("%F:%HT%M:%SZ", time.gmtime())
        idx = 0
        for task in tasks:
            record = records[task]
            vm_name = session.xenapi.VM.get_name_label(vms[idx])
            host_name = session.xenapi.host.get_name_label(hosts[idx])
            print "%s : %12s %s -> %s [ status: %s; result = %s; error = %s ]" % \
                  (record["uuid"], record["name_label"], vm_name, host_name,      \
                   record["status"], record["result"], repr(record["error_info"]))
            idx = idx + 1
        raise "Task failed"
    else:
        for task in tasks:
            session.xenapi.task.destroy(task)

This example (contained in /SDK/client-examples/bash-cli/clone-vms) is a bash script which uses the XE CLI to clone a VM taking care to shut it down first if it is powered on.

The example begins with some boilerplate which first checks if the environment variable XE has been set: if it has it assumes that it points to the full path of the CLI, else it is assumed that the XE CLI is on the current path. Next the script prompts the user for a server name, username and password:

# Allow the path to the 'xe' binary to be overridden by the XE environment variable
if [ -z "${XE}" ]; then
  XE=xe
fi

if [ ! -e "${HOME}/.xe" ]; then
  read -p "Server name: " SERVER
  read -p "Username: " USERNAME
  read -p "Password: " PASSWORD
  XE="${XE} -s ${SERVER} -u ${USERNAME} -pw ${PASSWORD}"
fi

Next the script checks its commandline arguments. It requires exactly one: the UUID of the VM which is to be cloned:

# Check if there's a VM by the uuid specified
${XE} vm-list params=uuid | grep -q " ${vmuuid}$"
if [ $? -ne 0 ]; then
        echo "error: no vm uuid \"${vmuuid}\" found"
        exit 2
fi

The script then checks the power state of the VM and if it is running, it attempts a clean shutdown. The event system is used to wait for the VM to enter state "Halted".

# Check the power state of the vm
name=$(${XE} vm-list uuid=${vmuuid} params=name-label --minimal)
state=$(${XE} vm-list uuid=${vmuuid} params=power-state --minimal)
wasrunning=0

# If the VM state is running, we shutdown the vm first
if [ "${state}" = "running" ]; then
        ${XE} vm-shutdown uuid=${vmuuid}
        ${XE} event-wait class=vm power-state=halted uuid=${vmuuid}
        wasrunning=1
fi

The VM is then cloned and the new VM has its name_label set to cloned_vm.

# Clone the VM
newuuid=$(${XE} vm-clone uuid=${vmuuid} new-name-label=cloned_vm)

Finally, if the original VM had been running and was shutdown, both it and the new VM are started.

# If the VM state was running before cloning, we start it again
# along with the new VM.
if [ "$wasrunning" -eq 1 ]; then
        ${XE} vm-start uuid=${vmuuid}
        ${XE} vm-start uuid=${newuuid}
fi

XenServer records statistics about the performance of various aspects of your XenServer installation. The metrics are stored persistently for long term access and analysis of historical trends. Where storage is available to a VM, the statistics are written to disk when a VM is shut down. Statistics are stored in RRDs (Round Robin Databases), which are maintained for individual VMs (including the control domain) and the server. RRDs are resident on the server on which the VM is running, or the pool master when the VM is not running. The RRDs are also backed up every day.

Statistics are persisted for a maximum of one year, and are stored at different granularities: the average, minimum, maximum and most recent values are recorded at a series of sampling intervals, with finer-grained samples covering the most recent periods and coarser samples covering the full year.

RRDs are saved to disk as uncompressed XML. The size of each RRD when written to disk ranges from 200KiB to approximately 1.2MiB when the RRD stores the full year of statistics.

Statistics can be downloaded over HTTP in XML format. See http://oss.oetiker.ch/rrdtool/doc/rrddump.en.html and http://oss.oetiker.ch/rrdtool/doc/rrdxport.en.html for information about the XML format. HTTP authentication can take the form of a username and password or a session token.

To obtain an update of all VM statistics on a host:

wget http://<username>:<password>@<host>/rrd_updates?start=<secondssinceepoch>

This request returns data in an rrdtool xport style xml format, for every VM resident on the particular host that is being queried. In order to differentiate which column in the export is associated with which VM, the legend field is prefixed with the VM's UUID. The type of data consolidation mode used is also prefixed, e.g. AVERAGE or MIN, etc.
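
For example, the following python fragment (a sketch, not part of the SDK) fetches the last five minutes of updates and prints the legend entries; the host name and credentials are placeholders, and the embedded credentials are sent as HTTP basic authentication:

import time, urllib
from xml.dom import minidom

url = "http://root:password@xenserver.example.com/rrd_updates?start=%d" % \
      (int(time.time()) - 300)
doc = minidom.parseString(urllib.urlopen(url).read())
for entry in doc.getElementsByTagName("entry"):
    # Each legend entry is prefixed with the consolidation mode and the VM UUID
    print entry.firstChild.data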

To obtain host updates too, add the query parameter host=true:

wget "http://<username>:<password>@<host>/rrd_updates?start=<secondssinceepoch>&host=true"

The step will decrease as the period decreases, which means that if you request statistics for a shorter time period you will get more detailed statistics.

Additional rrd_updates Parameters

?cf=<ave|min|max> - the data consolidation mode.

?interval=<interval> - the interval between values to be reported.

To obtain all statistics for a host:

wget http://<username:password@host>/host_rrd

To obtain all statistics for a VM:

wget http://<username:password@host>/vm_rrd?uuid=<vm_uuid>

The XenAPI is a general and comprehensive interface to managing the life-cycles of Virtual Machines, and offers a lot of flexibility in the way that XenAPI providers may implement specific functionality (e.g. storage provisioning, or console handling). XenServer has several extensions which provide useful functionality used in our own XenCenter interface. The workings of these mechanisms are described in this chapter.

Extensions to the XenAPI are often provided by specifying other_config map keys on various objects. The use of this parameter indicates that the functionality is supported for that particular release of XenServer, but not as a long-term feature. We are constantly evaluating whether to promote such functionality into the core API, but this requires the nature of the interface to be well understood. Developer feedback as to how you are using some of these extensions is always welcome to help us make these decisions.

Most XenAPI graphical interfaces will want to gain access to the VM consoles, in order to render them to the user as if they were physical machines. There are several types of consoles available, depending on the type of guest or if the physical host console is being accessed:

Console access

Operating System   Text                               Graphical            Optimized graphical
Windows            No                                 VNC, via API call    RDP, directly from guest
Linux              Yes, through VNC and an API call   No                   VNC, directly from guest
Physical Host      Yes, through VNC and an API call   No                   No

Hardware-assisted VMs, such as Windows, directly provide a graphical console via VNC. There is no text-based console, and guest networking is not necessary to use the graphical console. Once guest networking has been established, it is more efficient to set up Remote Desktop Access and use an RDP client to connect directly (this must be done outside of the XenAPI).

Paravirtual VMs, such as Linux guests, provide a native text console directly. XenServer provides a utility (called vncterm) to convert this text-based console into a graphical VNC representation. Guest networking is not necessary for this console to function. As with Windows above, Linux distributions often configure VNC within the guest, and directly connect to it via a guest network interface.

The physical host console is only available as a vt100 console, which is exposed through the XenAPI as a VNC console by using vncterm in the control domain.

RFB (Remote Framebuffer) is the protocol which underlies VNC, specified in The RFB Protocol. Third-party developers are expected to provide their own VNC viewers, and many freely available implementations can be adapted for this purpose. RFB 3.3 is the minimum version which viewers must support.

VNC consoles are retrieved via a special URL passed through to the host agent. In outline, the client uses the API to locate the VM's console objects and read back the console's location URL, and then issues an HTTP CONNECT to that URL, authenticating with its session as usual.

The final HTTP CONNECT is slightly non-standard since the HTTP/1.1 RFC specifies that it should only be a host and a port, rather than a URL. Once the HTTP CONNECT is complete, the connection can subsequently be used directly as a VNC connection, without any further HTTP protocol action.
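
The API portion of this sequence might be sketched in python as follows (the VM name is a placeholder); the HTTP CONNECT to the returned location is then performed as described above:

vm = session.xenapi.VM.get_by_name_label("my vm")[0]
for console in session.xenapi.VM.get_consoles(vm):
    record = session.xenapi.console.get_record(console)
    # "location" is the URL to which the HTTP CONNECT should be issued
    print record["protocol"], record["location"]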

This scheme requires direct access from the client to the control domain's IP, and will not work correctly if there are Network Address Translation (NAT) devices blocking such connectivity. You can use the CLI to retrieve the console URI from the client and perform a connectivity check.

Use command-line utilities like ping to test connectivity to the IP address provided in the location field.

The installation of paravirtual Linux guests is complicated by the fact that a Xen-aware kernel must be booted, rather than simply installing the guest via hardware-assistance. This does have the benefit of providing near-native installation speed due to the lack of emulation overhead. XenServer supports the installation of several different Linux distributions, and abstracts this process as much as possible.

To this end, a special bootloader known as eliloader is present in the control domain which reads various other_config keys in the VM record at start time and performs distribution-specific installation behavior.

The control domain in XenServer 5.0.0 and above has various security enhancements in order to harden it against attack from malicious guests. Developers should never notice any loss of correct functionality as a result of these changes, but they are documented here as variations of behavior from other distributions.

The control domain privileged user-space interfaces can now be restricted to only work for certain domains. There are three interfaces affected by this change.

Virtual and physical network interfaces have some advanced settings that can be configured using the other-config map parameter. There is a set of custom ethtool settings and some miscellaneous settings.

Developers might wish to configure custom ethtool settings for physical and virtual network interfaces. This is accomplished with ethtool-<option> keys via the other-config map parameter.

Key               Description                                          Valid settings
ethtool-rx        Specify if RX checksumming is enabled                on or true to enable the setting, off or false to disable it
ethtool-tx        Specify if TX checksumming is enabled                on or true to enable the setting, off or false to disable it
ethtool-sg        Specify if scatter-gather is enabled                 on or true to enable the setting, off or false to disable it
ethtool-tso       Specify if tcp segmentation offload is enabled       on or true to enable the setting, off or false to disable it
ethtool-ufo       Specify if UDP fragmentation offload is enabled      on or true to enable the setting, off or false to disable it
ethtool-gso       Specify if generic segmentation offload is enabled   on or true to enable the setting, off or false to disable it
ethtool-autoneg   Specify if autonegotiation is enabled                on or true to enable the setting, off or false to disable it
ethtool-speed     Set the device speed in Mb/s                         10, 100, or 1000
ethtool-duplex    Set full or half duplex mode                         half or full

For example, to enable TX checksumming on a virtual NIC via the xe CLI:

xe vif-param-set uuid=<VIF UUID> other-config:ethtool-tx="on"

or:

xe vif-param-set uuid=<VIF UUID> other-config:ethtool-tx="true"

To set the duplex setting on a physical NIC to half duplex via the xe CLI:

xe pif-param-set uuid=<PIF UUID> other-config:ethtool-duplex="half"
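
The same keys can also be manipulated directly through the API by editing the other_config map of the VIF or PIF; for example, in python (a sketch, assuming "vif" is a VIF reference obtained earlier):

# Remove any existing value first: add_to_other_config fails if the key is
# already present.
session.xenapi.VIF.remove_from_other_config(vif, "ethtool-tx")
session.xenapi.VIF.add_to_other_config(vif, "ethtool-tx", "on")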

The following section details the assumptions and API extensions that we have made, over and above the documented API. Extensions are encoded as particular key-value pairs in dictionaries such as VM.other_config.
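
For example, a python client might honour some of the VM visibility conventions listed below like this (a sketch; "vm" is a VM reference):

other_config = session.xenapi.VM.get_other_config(vm)
hidden = other_config.get("HideFromXenCenter", "false") == "true" or \
         "xensource_internal" in other_config
if not hidden:
    print session.xenapi.VM.get_name_label(vm)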

Key

Semantics

pool.name_label

An empty name_label indicates that the pool should be hidden on the tree view.

pool.rolling_upgrade_in_progress

Present if the pool is in the middle of a rolling upgrade.

Key

Semantics

host.other_config["iscsi_iqn"]

The host's iSCSI IQN.

host.license_params["expiry"]

The expiry date of the host's license, in ISO 8601, UTC.

host.license_params["sku_type"]

The host license type, i.e. Express, Server or Enterprise.

host.license_params["restrict_pooling"]

Returns true if pooling is restricted by the host.

host.license_params["restrict_connection"]

The number of connections that can be made from XenCenter is restricted.

host.license_params["restrict_qos"]

Returns true if Quality of Service settings are enabled on the host.

host.license_params["restrict_vlan"]

Returns true if creation of virtual networks is restricted on the host.

host.license_params["restrict_pool_attached_storage"]

Returns true if the creation of shared storage is restricted on this host.

host.software_version["product_version"]

Returns the host's product version.

host.software_version["build_number"]

Returns the host's build number.

host.software_version["xapi"]

Returns the host's api revision number.

host.software_version["package-linux"]

Returns "installed" if the Linux pack has been installed.

host.software_version["oem_build_number"]

If the host is the OEM version, return its revision number.

host.logging["syslog_destination"]

Gets or sets the destination for the XenServer system logger (null for local logging).

host.logging["multipathing"]

"true" if storage multipathing is enabled on this host.

host.logging["boot_time"]

A floating point Unix time giving the time that the host booted.

host.logging["agent_start_time"]

A floating point Unix time giving the time that the control domain management daemon started.

Key

Semantics

VM.other_config["default_template"]

This template is one that was installed by Citrix. This is used to selectively hide these in the tree view, to use a different icon for them, and to disallow deletion.

VM.other_config["xensource_internal"]

This template is special, such as the P2V server template. These are completely hidden by the UI.

VM.other_config["install_distro"] == "rhlike"

This template is for RHEL 4.5, RHEL 5, or CentOS equivalents. This is used to prompt for the Install Repository during install, including support for install from ISO / CD on Miami, and to modify NFS URLs to suit these installers.

VM.other_config["install_distro"] in { "rhel41" | "rhel44" }

This template is for RHEL 4.1, RHEL 4.4, or CentOS equivalents. This is used to prompt for the Install Repository during install, and to modify NFS URLs to suit these installers. No ISO support is available for these templates.

VM.other_config["install_distro"] == "sleslike"

This template is for SLES 10 and SLES 9. This is used to prompt for the Install Repository during install, like the EL5 ones, but in this case the NFS URLs are not modified. ISO support is available for SLES 10 on XenServer 5.0.0. Use install-methods to distinguish between SLES 9 and SLES 10 on that platform.

VM.other_config["install-repository"] == "cdrom"

Requests an install from a repository in the VM's attached CD drive, rather than a URL.

VM.other_config["auto_poweron"]

Gets or sets whether the VM starts when the server boots, "true" or "false".

VM.other_config["ignore_excessive_vcpus"]

Gets or sets whether to ignore XenCenter's warning if a VM has more VCPUs than its host has physical CPUs; true to ignore.

VM.other_config["HideFromXenCenter"]

Gets or sets whether XenCenter will show the VM in the treeview, "true" to hide.

VM.other_config["import_task"]

Gets the import task that created this VM.

VM.HVM_boot_params["order"]

Gets or sets the VM's boot order, on HVM VMs only; e.g. "CDN" will boot in the following order: first boot disk, CD drive, network.

VM.VCPU_params["weight"]

Gets or sets the IONice value for the VM's VCPUs, ranges from 1 to 65536, 65536 being the highest.

VM.pool_migrate(..., options['live'])

true indicates live migration. XenCenter always uses this.

VM.other_config["install-methods"]

A comma-separated list of install methods available for this template. May include "cdrom", "nfs", "http" or "ftp".

VM.other_config["last_shutdown_time"]

The time that this VM was last shut down or rebooted, formatted as a UTC ISO8601 datetime.

VM.other_config["p2v_source_machine"]

The source machine, if this VM was imported by a P2V process.

VM.other_config["p2v_import_date"]

The date the VM was imported, if it was imported by a P2V process. Formatted as a UTC ISO8601 datetime.

Key

Semantics

SR.other_config["auto-scan"]

The SR will be automatically scanned for changes. Set on all SRs created by XenCenter.

SR.sm_config["type"]

Set as type cd for SRs which are physical CD drives.

Key

Semantics

VDI.type

user vs system is used to mean "do or do not allow deletion of the VDI through the GUI, if this disk is attached to a VM". The intention here is to prevent you from corrupting a VM (you should uninstall it instead). suspend and crashdump record suspend and core dumps respectively. ephemeral is currently unused.

VDI.managed

All unmanaged VDIs are completely hidden in the UI. These are branch points in VHD chains, or unused LUN-per-VDI disks.

VDI.sm_config["vmhint"]

The UUID of the VM that this VDI supports. This is set when VDIs are created through the user interface, to improve performance for certain storage backends.

Key

Semantics

VBD.other_config["is_owner"]

If set, then this disk may be deleted when the VM is uninstalled.

VBD.other_config["class"]

Set to an integer, corresponding to the Best Effort setting of ionice.

Key

Semantics

network.other_config["automatic"]

The New VM wizard will create a VIF connected to this network by default, if this key has any value other than false.

network.other_config["import_task"]

Gets the import task that created this network.

Key

Semantics

PV_drivers_version["major"]

Gets the major version of the VM's PV drivers' version.

PV_drivers_version["minor"]

Gets the minor version of the VM's PV drivers' version.

PV_drivers_version["micro"]

Gets the micro (build number) of the VM's PV drivers' version.

Key

Semantics

task.other_config["object_creation"] == "complete"

For the task associated with a VM import, this flag will be set when all the objects (VMs, networks) have been created. This is useful in the import VM wizard for us to then go and re-map all the networks that need it.