
XenServer Installation Guide

Release 4.0.1

Xen®, XenSource™, XenEnterprise™, XenServer™, XenExpress™, XenCenter™ and logos are either registered trademarks or trademarks of XenSource, Inc. in the United States and/or other countries. Other company or product names are for informational purposes only and may be trademarks of their respective owners.

This product contains an embodiment of the following patent pending intellectual property of XenSource, Inc.:

  1. United States Non-Provisional Utility Patent Application Serial Number 11/487,945, filed on July 17, 2006, and entitled “Using Writeable Page Tables for Memory Address Translation in a Hypervisor Environment”.
  2. United States Non-Provisional Utility Patent Application Serial Number 11/879,338, filed on July 17, 2007, and entitled “Tracking Current Time on Multiprocessor Hosts and Virtual Machines”.

October 2007


Table of Contents

1. Introducing XenServer
1.1. About this document
1.2. What's new in XenServer 4.0.1
1.3. What is XenServer?
1.3.1. Xen: the engine that powers XenServer
1.3.2. XenServer extends the power of Xen virtualization to server clusters
1.3.3. Powerful VM storage management and clustering
1.3.4. XenMotion™ delivers an agile virtual infrastructure
1.3.5. The XenServer product family
1.3.6. XenServer elements
2. System Requirements
2.1. XenServer Host system requirements
2.2. XenCenter requirements
2.3. VM support
2.3.1. Microsoft Windows
2.3.2. Linux
3. Installing XenServer
3.1. Installing the XenServer Host
3.2. Installing XenCenter
3.2.1. Uninstalling XenCenter
3.3. Installation and deployment scenarios
3.3.1. XenServer Hosts with local storage
3.3.2. XenServer Hosts with shared NFS storage
3.3.3. XenServer Hosts with shared iSCSI storage
A. Troubleshooting
B. Maintenance Procedures
B.1. Reinstalling from version 4.0 to the current version
B.2. Upgrading from version 3.2 to the current version
B.3. Upgrading from version 3.1 to the current version
B.4. Backing up and restoring XenServer Hosts and VMs
B.4.1. Backing up Virtual Machine metadata
B.4.2. Backing up XenServer Hosts
B.4.3. Backing up VMs
C. PXE installation of XenServer Host
C.1. Setting up the PXE boot environment
C.2. Creating an answerfile for unattended PXE installation
C.3. Installation media repository format
C.3.1. Presence of installation media repositories
C.3.2. Installation media repository metadata
C.3.3. Package metadata
C.3.4. Example files
C.3.5. Notes on best practice
D. Xen Memory Usage
D.1. Memory scaling algorithm
D.1.1. Increasing the reserved memory size

Chapter 1. Introducing XenServer

Thank you for choosing XenServer™ from XenSource, Inc., the creators of the Xen® hypervisor and leaders of the open source Xen project.

1.1. About this document

This document is an installation guide for XenServer, the platform virtualization solution from XenSource. The XenServer package contains all you need for creating a network of virtual x86 computers running on Xen®, the open-source paravirtualizing hypervisor with near-native performance.

This document contains procedures to guide you through the installation, configuration, and initial operation of XenServer. This document also contains information about troubleshooting problems that might occur during installation, and where to go for further information.

1.2. What's new in XenServer 4.0.1

This release of XenServer contains the following new features:

  • 64-bit hypervisor: This release incorporates Xen 3.1, which exploits 64-bit servers to tap the large memory capability of these machines, up to a maximum of 128 GB.
  • Resource Pools, shared storage, and XenMotion™ (live and static relocation of Virtual Machines): XenServer now includes shared storage support, with a new administration model. You can now pool XenServer Host machines together to share storage devices, enabling live relocation of VMs between XenServer Hosts. A Resource Pool can contain up to sixteen (16) XenServer Hosts. The XenCenter administrative interface can address a single IP address to manage any XenServer Host within the Pool.

    Enhanced storage support includes:

    • XenServer formerly supported a single Storage Repository (SR) on a XenServer Host. This release supports multiple SRs, which can be attached or detached dynamically.
    • Local and iSCSI (SCSI protocol over TCP/IP) LVM (Logical Volume Management) Storage Repositories
    • File-based virtual disks in Microsoft's VHD (Virtual Hard Drive) image format stored locally and on NFS (Network File System) exported directories
    • Pool-attached shared storage (iSCSI and NFS), accessible from multiple hosts, enabling XenMotion
    • Thin provisioning of virtual disks for appropriate storage types using sparse files, so storage allocated to a Virtual Machine is not instantiated until required
    • Fast Snapshot of virtual disks (via the CLI and API only, and for file-backed virtual disks only)
  • A new API and SDK: XenAPI, implemented in both open source Xen and XenServer, is an extensible, upward-compatible API providing the programmable management interface to the XenServer product family. It is remotable, and is used by XenCenter and the CLI. XenAPI is published for ISVs.

    A software development kit (SDK) and a driver development kit (DDK), both packaged as VMs, are provided with this release to enable ISVs and IHVs to rapidly develop their products. The SDK and DDK run only on XenServer-based systems.

  • A new, renamed administrative interface and new CLI: XenCenter, formerly called the Administrator Console, is now a native Windows application and makes use of the XenAPI. The xe CLI has been extended to provide even more fine-grained control of the system. A backward-compatible mode enables conversion of existing scripts.

    XenCenter and the CLI only support management of XenServer-based systems.

1.3. What is XenServer?

XenServer is a server virtualization platform that offers near bare-metal virtualization performance for virtualized server and client operating systems. XenServer uses the Xen hypervisor to virtualize each server on which it is installed, enabling each to host multiple Virtual Machines simultaneously with guaranteed performance. XenServer allows you to combine multiple Xen-enabled servers into a powerful Resource Pool, using industry-standard shared storage architectures and leveraging resource clustering technology created by XenSource. In doing so, XenServer extends the basic single-server notion of virtualization to enable seamless virtualization of multiple servers as a Resource Pool, whose storage, memory, CPU, and networking resources can be dynamically controlled to deliver optimal performance, increased resiliency and availability, and maximum utilization of data center resources.

XenServer allows IT managers to create multiple clusters of Resource Pools, and to manage them and their resources from a single point of control, reducing complexity and cost and dramatically simplifying the adoption and utility of a virtualized data center environment. With XenServer, a rack of servers can become a highly available compute cluster that protects key application workloads, leverages industry-standard storage architectures, and offers no-downtime maintenance by allowing running Virtual Machines to be moved between machines in the cluster. By extending virtualization, the most powerful abstraction, across servers, storage, and networking, XenServer enables users to realize the full potential of a dynamic, responsive, efficient data center environment for Windows and Linux workloads.

By providing a unified view of the resources of one or more clusters of servers, and through its use of a standardized abstraction for control of storage resources assigned to Virtual Machines, XenServer dramatically simplifies the job of the IT administrator seeking a painless solution for virtualization of demanding production workloads. XenServer is ideally suited to users seeking to maximize the benefits of server consolidation, automate test and development of software, or automate the assignment of resources and protection of performance-sensitive production workloads.

1.3.1. Xen: the engine that powers XenServer

Xen provides fast, secure, open source virtualization that allows multiple operating system instances to run as Xen Virtual Machines or VMs on a single physical x86 computer. Xen supports modified guest operating systems using a technique known as paravirtualization, which requires modifying the operating system to run on Xen, but offers near-native performance. Paravirtualized operating systems "know" that they are virtualized. Xen also supports unmodified operating systems using processor extensions from Intel (VT) and AMD (AMD-V).

Xen supports 32-bit processors with and without Physical Address Extension (PAE), 64-bit processors, and Symmetric Multiprocessing (SMP) guest operating systems.

Xen is exceptionally lean, which leads to extremely low overhead and near-native performance for VMs. Xen re-uses existing Linux device drivers for Linux VMs, and uses special paravirtualized device drivers for network and disk I/O on Windows VMs, making device management easy. Moreover, Xen is robust in the event of device driver failure, protecting both the VMs and the hypervisor from faulty or malicious drivers.

Xen provides superb resource partitioning for CPU, memory, and block and network I/O. This resource protection model leads to improved security because VMs and drivers are not susceptible to denial of service attacks. Xen is fully open to scrutiny by the security community and its security is continuously tested. Xen is also the foundation for a Multi-Level Secure System architecture being developed by XenSource, IBM and Intel.

Xen was originally developed by the Systems Research Group at the University of Cambridge Computer Laboratory as part of the XenoServers project, funded by the Engineering and Physical Sciences Research Council (EPSRC), the main funding agency in the United Kingdom for research in engineering and the physical sciences as well as the managing agent on behalf of the other Research Councils for High Performance Computing.

1.3.2. XenServer extends the power of Xen virtualization to server clusters

XenServer allows IT administrators to flexibly assign up to 16 x86 servers into a single Resource Pool of server resources. Multiple Pools can be managed from a single XenCenter management console. A Resource Pool is a tightly coupled collection of servers whose resources are virtualized to host a set of Virtual Machines. Servers in a Resource Pool monitor the configuration state and availability of their peers. XenServer management state is also replicated across all servers in a Pool, with the benefit that failure of a Pool master can be quickly remedied, since any node in the cluster can replace the failed node. Using the XenServer clustering architecture, the workload of a cluster can be protected from server failures, through a unique combination of shared storage, Xen virtualization, and replicated state management between servers in the cluster.

Virtual Machines assigned to a Resource Pool are automatically mapped onto the physical resources of the Pool, but IT administrators retain full control of resource assignment, and full visibility into each system and each Virtual Machine, including the ability to manually place workload on specific servers, and drill down into each server within the Pool to get a precise view of each server's resources and the Virtual Machines it hosts. At the simplest level, all the administrator needs to do is assign a Virtual Machine or a set of Virtual Machines to a Resource Pool. XenServer manages the rest, including the assignment of physical resources from servers in the Pool to host the VMs, and ensuring that administrator policies for resilient restart of VMs are implemented. XenServer ensures that the overall utilization of the resources of the servers in the Pool is maximized, to deliver lowest possible TCO. Of course, if you want to assume full control, XenServer gives you the ability to manage each resource for each VM, but most users will appreciate the simplicity of the "drag and drop" interface for VM provisioning with guaranteed VM performance, automated VM storage and network management, and the use of policies for automatic restart on failure of physical components of the cluster.

1.3.3. Powerful VM storage management and clustering

In most datacenters, storage is managed as a shared, separately administered resource independent of the different server applications and OS types that make use of it. The rich set of choices for datacenter storage, and the emergence in its own right of storage virtualization as a powerful technology that reduces TCO for storage, leaves IT managers with a bewildering set of choices for storage and storage management. XenServer aims to simplify the management of diverse storage technologies for virtualized infrastructure. It does this by

  • providing a simple plug-in interface for each of the different storage technologies used in the datacenter today, extensible by storage vendors
  • hiding the complexity of storage-related operations on each technology, for example snapshotting for VM backup
  • enabling easy import and export of Virtual Machines in the virtual hard disk formats of all major vendors, as well as offering raw block storage to Virtual Machines
  • leveraging shared storage technologies, where present, as a core building block of XenServer Resource Pools, to facilitate live-relocation of running Virtual Machines and easy relocation of workload to achieve optimal utilization of Pool resources.

XenServer, uniquely amongst all virtualization products on the market, offers an open API to integrate directly with the various kinds of storage infrastructures available. With built-in support for IDE, SATA, SCSI and SAS drives, XenServer can manage all forms of storage local to any server in a Resource Pool. Through NAS, iSCSI and SAN support, XenServer extends the available Virtual Machine storage options to the most common datacenter architectures in use today, while providing plug-in APIs for storage vendors to integrate any storage management technology, from clustered file systems to clustered volume management. Storage Repositories (or SRs) are elemental components of the XenServer architecture. All are managed via the XenServer API, and through this API XenServer can leverage the built-in features of the underlying storage infrastructure, including snapshotting, backup, and automated creation and assignment of LUNs for new Virtual Machines. Not all SRs support all primitives, but XenServer accommodates this by adding software-level features that can be used if the storage infrastructure cannot support a particular primitive, such as snapshotting.

1.3.4. XenMotion™ delivers an agile virtual infrastructure

When a XenServer Host in the Pool needs physical maintenance, VMs can be relocated to other servers in the Pool, while they are running, with only hundreds of milliseconds of observable delay. This live relocation capability is called XenMotion.

1.3.5. The XenServer product family

The three variants available are:

  • XenExpress™ supports a single XenServer Host with dual sockets (or multiple XenServer Hosts, one at a time), up to 4GB physical RAM, hosting up to 4 concurrent VMs.
  • XenServer™ supports multiple simultaneous XenServer Hosts with up to 128GB physical RAM, and no limit on the number of concurrent VMs except the amount of available RAM.
  • XenEnterprise™ supports multiple simultaneous XenServer Hosts with up to 128GB physical RAM, and no limit on the number of concurrent VMs except the amount of available RAM. It also offers the following additional features:
    • clustering of XenServer Hosts into Resource Pools
    • support for NFS and iSCSI shared Storage Repositories
    • live relocation (XenMotion) of VMs within the same Resource Pool
    • support for specifying VLAN trunk ports in virtual bridges on the XenServer Host
    • additional Quality of Service (QoS) control for VMs

Each member of the XenServer product family provides the XenCenter management interface and a full set of product documentation.

1.3.6. XenServer elements

XenServer contains all you need to quickly and easily set up a virtualized Xen environment:

  • the Xen hypervisor
  • installers for both the XenServer Host and for XenCenter
  • a tool for creating Linux VMs by converting existing physical installations of supported Linux distributions (P2V)
  • XenCenter, a Windows client application. From XenCenter you can manage XenServer Hosts, Resource Pools, and shared storage, and deploy, manage, and monitor VMs
  • The xe command line interface (CLI), for both Windows and Linux systems
  • VM Templates for installation of Windows 2000 SP4, Windows XP SP2, Windows Server 2003, Red Hat Enterprise Linux 5, and SUSE Linux Enterprise Server 10 VMs from installation CDs
  • VM Templates for installation of Red Hat Enterprise Linux 4.1-4.4 VMs from vendor media stored on a network repository
  • VM Templates for installation of Debian Sarge or Debian Etch VMs without installation media

Chapter 2. System Requirements

XenServer requires at least two separate physical x86 computers: one to be the XenServer Host, and the other to run the XenCenter application. The XenServer Host machine is dedicated entirely to the task of hosting VMs and is not used for other applications. The computer that runs XenCenter can be any general-purpose Windows computer that satisfies the hardware requirements, and can be used to run other applications simultaneously.

2.1. XenServer Host system requirements

The XenServer Host is a 64-bit x86 server-class machine devoted to hosting multiple VMs. This machine runs a stripped-down Linux operating system with a Xen-enabled kernel which controls the interaction between the virtualized devices seen by VMs and the physical hardware.

The following are the system requirements for the XenServer Host:

CPUs: One or more 64-bit x86 CPU(s), 1.5 GHz minimum; 2 GHz or faster multicore CPU recommended

To support VMs running Windows, an Intel VT or AMD-V x86-based system with one or more (up to 32) CPUs is required.

Note

To run Windows VMs, hardware support for virtualization must be enabled on the XenServer Host. This is an option in the BIOS, and your BIOS might have virtualization support disabled. Consult your BIOS documentation for more details.

To support VMs running supported paravirtualized Linux, a standard x86-based system with one or more (up to 32) CPUs is required.

RAM: 1 GB minimum, 2 GB or more recommended
Disk space: Locally attached storage (PATA, SATA, SCSI) with 16 GB of disk space minimum, 60 GB of disk space recommended

General disk space requirements for VMs:

  • Product installation creates two 4GB partitions for the XenServer Host Control Domain; remaining space is available for VMs.
  • VMs based on the Debian templates are allocated a 1GB root device, and a 512MB swap device.
  • VMs made using the RHEL 4.1 or 4.4 vendor installers are allocated a root device of 8 GB.
Network: 100 Mbit/s or faster network interface card (NIC). A gigabit NIC is recommended for faster P2V and export/import data transfers and for live relocation of VMs.

2.2. XenCenter requirements

The remote XenCenter application for managing the XenServer Host can be installed and run on any Windows 2000/2003/XP/Vista workstation or laptop.

The following are the system requirements for XenCenter:

Operating system: Windows 2000, Windows XP, Windows Server 2003, or Windows Vista
.NET framework: version 2.0 or above
CPU speed: 750 MHz minimum, 1 GHz or faster recommended
RAM: 1 GB minimum, 2 GB or more recommended
Disk space: 100 MB minimum
Network interface card: 100 Mbit/s or faster NIC

2.3. VM support

XenServer supports Virtual Machines that run the following operating systems:

2.3.1. Microsoft Windows

Windows VMs can be created only on XenServer Hosts equipped with Intel VT-enabled or AMD-V CPUs. All Windows VMs are created by installing the operating system, from either the Microsoft installation media in the XenServer Host's physical CD/DVD-ROM drive or a network-accessible ISO image, onto the appropriate Template.

64-bit VMs

  • Windows Server 2003 Standard/Enterprise/Datacenter SP2
  • Windows Small Business Server 2003 SP2

32-bit VMs

  • Windows 2000 SP4
  • Windows XP SP2
  • Windows Server 2003 Web/Standard/Enterprise/Datacenter SP0, SP1, SP2, R2
  • Windows Small Business Server 2003 SP0, SP1, SP2, R2

2.3.2. Linux

Linux VMs are created via one of three basic methods:

  • installing the operating system from a network-accessible vendor media repository to the appropriate Template
    • Red Hat Enterprise Linux 4.1, 4.4, and 4.5
    • Red Hat Enterprise Linux 5.0
    • SUSE Linux Enterprise Server 10 SP1
    • CentOS 4.5
    • CentOS 5.0
  • using a Template that contains a complete pre-installed operating system

    • Debian 3.1 (Sarge)
    • Debian 4.0 (Etch)
  • converting an existing physical instance using the P2V tool

    • Red Hat Enterprise Linux 3.5, 3.6, 3.7
    • Red Hat Enterprise Linux 4.1, 4.2, 4.3, and 4.4
    • SUSE Linux Enterprise Server 9 SP2, SP3

Chapter 3. Installing XenServer

Any XenServer network, from the simplest to the most complex deployment, is made up of one or more XenServer Hosts, each running some number of VMs, and one or more workstations running XenCenter to administer the XenServer Hosts.

In order to create Resource Pools and enable live migration of VMs, a means of shared storage also needs to be deployed on the network. This version of the XenServer product family supports LVM over iSCSI and NFS shared storage.

This chapter describes installing XenServer Host software on physical servers, installing XenCenter on Windows workstations, and connecting them to form the infrastructure for a network of Virtual Machines. It also describes using NFS and iSCSI shared storage, and creating Pools of XenServer Hosts that share this storage.

The first sections describe the installation of XenServer Host and XenCenter, which are common to all deployments. The following sections describe several common installation and deployment scenarios and provide information that is specific to each scenario.

Installers for both the XenServer Host and XenCenter are on the installation media. The installation media also includes:

  • a P2V tool for creating VM templates from an existing installation of supported Linux distributions. See the XenServer Virtual Machine Installation Guide for details.
  • a tool for restoring a backed-up XenServer Host filesystem. See the XenServer Administrator's Guide for details.

3.1. Installing the XenServer Host

The XenServer Host consists of a Xen-enabled Linux operating system, a management agent, VM templates, and a local Storage Repository reserved for VMs. The XenServer Host must be installed on a dedicated x86 server. XenServer is not supported in a dual-boot configuration with any other operating system.

You can install the XenServer Host from the installation CDs or set up a network-accessible TFTP server to boot from via PXE. For details about setting up a TFTP server for PXE-booting the installer, see Appendix C, PXE installation of XenServer Host.

Note

Do not install any other operating system in a dual-boot configuration with the XenServer Host; this is an unsupported configuration.

The main installation CD contains the basic packages to set up the XenServer Host on a computer. The XenServer package also contains a separate CD containing support for creating Linux VMs, and six CDs containing source code for the included open source software.

To install the XenServer Host

  1. Boot the computer from the main installation CD, or PXE-boot from your TFTP server if applicable.

  2. After the initial boot messages, the installer does some hardware detection and initialization, then presents a screen asking you to select which keyboard keymap you want to use for the installation. In this and the screens that follow, use Tab or Alt+Tab to move between elements, Space to select, and F12 to move to the next screen.

    Select the desired keymap and choose OK to proceed.

  3. Next, the "Welcome to XenServer" screen appears. Select Install XenServer Host and choose OK to proceed.

  4. The next screen displays a message telling you that the setup program will install XenServer on the computer, and warns that it will overwrite data on any hard drives that you select to use for the installation. Choose OK to proceed.

  5. The XenServer End User License Agreement (EULA) is displayed. Use the up and down arrow keys to scroll through and read the agreement. Choose Accept EULA to proceed.
  6. At this point, if the computer on which you are installing the XenServer Host does not have a CPU which supports hardware virtualization, or if the support is disabled in the BIOS, a message appears to warn you that you will not be able to run Windows VMs. Choose OK to proceed.

    Note that some systems have bugs in their BIOS software which can result in the setting being incorrect. If you get a spurious warning about a lack of hardware virtualization (or do not see a warning when you expected one), then perform a hard power cycle of the host and restart the installation. You should also check the hardware manufacturer's support site for BIOS upgrades.

  7. If the installer detects a previously-installed version of XenServer Host, you are offered the choice to perform a clean installation, or to re-install over the existing version, which preserves any of the VMs present. Select an installation type and choose OK to proceed.

  8. If you have multiple local hard disks, you are asked to choose the Primary Disk for the installation. Select the desired disk and choose OK to proceed. You are then prompted to choose whether you want any of the other drives to be formatted for use by XenServer for VM storage. Make your selections and choose OK to proceed.

    If the computer has a single hard disk, these two screens do not appear.

  9. The next screen asks you to specify the source of the installation packages. If you are installing from the CD, you will most likely want to select Local media. If you are installing via PXE, you will most likely want to select HTTP, FTP, or NFS, as appropriate.

    If you select HTTP, FTP, or NFS, you are next prompted to set up Networking.

    If the computer has multiple network interfaces, you are prompted to select one of them to be used as the management NIC for the XenServer Host software. Select and choose OK to proceed.

    Note

    Only the management NIC will be configured with an IP address. If you wish to use other NICs in a computer with multiple NICs, see the section titled "Network interface configuration" in the "Networking" chapter of the XenServer Administrator's Guide.

    If the computer has a single network interface, that interface is used as the management NIC and no prompt is displayed.

    You can select to configure the management interface using DHCP, or specify a different network configuration, which prompts you to configure the management interface's properties manually. Following that, you are prompted to provide the URL or NFS server and path where the installation media are, as appropriate.

    Note

    To be part of a Resource Pool, XenServer Hosts need to have static IP addresses.

    If you select Local media, this networking setup appears later in the installation process.

    If you are planning to install VMs that will run Linux operating systems, check the Use additional media checkbox. If you are planning to install only Windows VMs, you can leave this checkbox unchecked. Choose OK to proceed.

    Important

    In a pooled setup, the Linux pack must be installed either on all of the hosts in the pool or on none of them, so that the hosts are matched.

  10. The next screen asks you if you want to verify the integrity of the installation media. If you select Verify installation source, the MD5 checksum of the packages is calculated and checked against the known value. This may take some time. If you select Skip verification, this check is bypassed. Make your selection and choose OK to proceed.

  11. You are next prompted to set a root password. (This will be the password that the XenCenter application will use to connect to the XenServer Host.) Type the desired password and type it again to verify it.

  12. You are prompted to select the general geographical area for the Time Zone. Choose from the displayed list of geographical areas, then choose OK to proceed.

  13. You are prompted to select the specific locale for the Time Zone. (Note that this list is long, but if you type the first letter of the desired locale, the selection will jump to the first entry that begins with this letter.) Choose from the displayed list of locales, then choose OK to proceed.

  14. You are prompted to choose a method of setting the System Time. You can select Using NTP or Manual time entry. Make your selection and choose OK to proceed.

  15. If you selected Using NTP in the preceding step, you are prompted to identify the time server or servers you want to use. You can check NTP is configured by my DHCP server, and the time server will be set by DHCP; otherwise, enter at least one NTP server name or IP address in the fields below. Choose OK to proceed.

    Otherwise, the installation script moves to the next step; you will be prompted for the manually-entered time later, near the end of the installation.

    Warning

    Currently XenServer assumes that the time setting for the server’s BIOS is the current time in UTC, and that the time for the VMs reflects the local time based on the time zone offset specified.

  16. If you selected Local media in the installation source step above, you are next prompted to set up Networking.

    If the computer has multiple network interfaces, you are prompted to select one of them to be used as the management NIC for the XenServer Host software. Select and choose OK to proceed.

    If the computer has a single network interface, that interface is used as the management NIC and no prompt is displayed.

    If you selected HTTP, FTP, or NFS in the installation source step above, this first Networking step has already been performed.

    Note

    There is no provision for changing the network configuration of the XenServer Host via XenCenter after installation. To make changes, you need to edit the scripts in /etc/sysconfig/network-scripts.
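
    For illustration only, a static configuration for the management interface might look like the following sketch; the file name ifcfg-eth0 and the addresses shown are assumptions, and the exact keys depend on your configuration:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (hypothetical sketch)
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.0.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.0.1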

  17. You are next prompted to specify the hostname and the configuration for the name service.

    In the Hostname Configuration section, if you select Automatically set via DHCP, the DHCP server will provide the hostname along with the IP address. If you select Manually specify, enter the desired hostname for the server in the field below.

    In the DNS Configuration section, if you select Manually specify, enter the IP addresses of your primary (required), secondary (optional), and tertiary (optional) Nameservers in the fields below. Otherwise, select Automatically set up via DHCP to get name service configuration via DHCP.

    Note

    To be part of a Resource Pool, XenServer Hosts need to have static IP addresses.

    Select OK to proceed.

  18. A message is displayed that the installation is ready to proceed and that this will format the primary disk and any other disks selected for VM storage, destroying any data that is currently on them. Select Install XenServer Host to proceed.

    A progress bar is displayed as the installation commences. If you chose to set the system date and time manually, a dialog box appears when the progress bar has reached about 90%. Enter the correct numbers in the fields and select OK to proceed.

  19. If you selected to install support for Linux VMs, you will be prompted to put in the Linux Support disk. Eject the main disk, put in the Linux Pack disk, and close the CD drawer. Select OK. A screen appears, identifying that this disk contains the Linux Pack. Select OK to proceed with installing it. Another progress bar is displayed, and when it reaches 100%, a completion message is displayed.

    If you selected not to install support for Linux VMs, a completion message is displayed.

    Note

    If you decide later to add Linux support, mount the Linux Pack installation CD or ISO image on the XenServer Host and run the script Linux/install.sh, located in the root of the CD.
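
    As a minimal sketch, installing from an ISO image might look like the following (the mount point and image path below are hypothetical):

    mkdir -p /mnt/linuxpack
    mount -o loop,ro /path/to/linux-pack.iso /mnt/linuxpack
    /mnt/linuxpack/Linux/install.sh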

  20. Select Reboot. Upon reaching the login prompt, the system should now be ready to manage via XenCenter. To connect to it, you will need the IP address or hostname of the XenServer Host. This is displayed at the login prompt.

3.2. Installing XenCenter

XenCenter is a Windows client application. XenCenter must be installed on a remote machine that can connect to the XenServer Host through the network; it cannot run on the same machine as the XenServer Host. It can be installed and run on Windows 2000/2003/XP/Vista computers. The .NET framework version 2.0 or above must be installed as well.

To install XenCenter

  1. Before installing XenCenter, be sure to uninstall the previous version if one exists.

  2. Put the Base Pack CD in the drive.

  3. If Auto-play is enabled for the CD drive, the application installer launches automatically after a few moments.

    If Auto-play is not enabled for the CD drive, browse to the /client_install directory on the CD and find the file named XenCenterSetup.exe. Then double-click on the file’s icon to launch the application installer.

  4. Follow the instructions displayed in the installer window. When prompted for installation directory, either click Browse to change the default installation location, or click Next to accept the default path C:\Program Files\XenSource\XenCenter.

    When complete, there will be a XenSource XenCenter group on the All Programs list.

3.2.1. Uninstalling XenCenter

Should you need to, you can uninstall XenCenter from a system quite easily.

To uninstall XenCenter

  1. Select Uninstall from the Start menu:

    All Programs > XenSource XenCenter > Uninstall XenCenter

  2. Click Yes on the uninstallation confirmation message.

    This will remove the XenCenter application. At the end, a message is displayed. Click OK to close the message box.

3.3. Installation and deployment scenarios

This section describes several common installation and deployment scenarios:

  • one or more XenServer Hosts with local storage
  • two or more XenServer Hosts with shared NFS storage
  • two or more XenServer Hosts with shared iSCSI storage

and details the steps that differ between scenarios.

3.3.1. XenServer Hosts with local storage

The simplest use of XenServer is to set up a simple network of VMs running on one or more XenServer Hosts without shared storage. This, of course, means that live relocation of VMs from one XenServer Host to another is not possible, as this requires shared storage.

Requirements

  • one or more x86 servers with local storage
  • one or more Windows workstations, on same network as the XenServer Hosts

Basic procedure

  1. Install XenServer Host software on server(s)

  2. Install XenCenter on workstation(s)

  3. Run XenCenter and connect to XenServer Hosts

3.3.2. XenServer Hosts with shared NFS storage

Adding shared storage to the XenServer network enables grouping of XenServer Hosts into Resource Pools, enabling live relocation of VMs and sharing of server resources.

Requirements

  • two or more x86 servers with local storage
  • one or more Windows workstations, on same network as the XenServer Hosts
  • a server exporting a shared directory via NFS

Note

To be part of a Resource Pool, the XenServer Hosts and the server or servers providing the shared NFS storage need to have static IP addresses.

Basic procedure

  1. Install XenServer Host software on server(s)

  2. Install XenCenter on workstation(s)

  3. Set up the NFS server

  4. Run XenCenter and connect to XenServer Hosts

  5. Choose one XenServer Host as a Pool master and join other XenServer Hosts to its Pool.

  6. Create an SR on the NFS share at the Pool level

For this procedure, a server running a typical Linux distribution is assumed as the NFS server. Consult your Linux distribution documentation for further information.

Set up NFS share on NFS server

  1. Check to see if the portmap daemon is installed and running:

    # chkconfig --list portmap
    portmap         0:off   1:off   2:off   3:on    4:on    5:on    6:off
    					

    Note that in the preceding example, runlevels 3, 4, and 5 say "on". That means that at boot, for runlevels 3, 4, and 5, the portmap daemon is started automatically. If any of runlevels 3, 4, or 5 says "off," turn them on with the following command:

    chkconfig portmap on
    					
  2. Check to see if the NFS daemon is installed and running:

    # chkconfig --list nfs
    nfs             0:off   1:off   2:on    3:on    4:on    5:on    6:off
    					

    If any of runlevels 3, 4, or 5 says "off," turn them on with the following command:

    chkconfig nfs on
    					
  3. Make a directory for the shared storage to live in:

    mkdir /<vm_share_dir>
    					
  4. Edit the file /etc/exports and add the line

    /<vm_share_dir>      *(rw,no_root_squash,sync)
    					

    Save and close the file.

  5. Restart the portmap and nfs daemons as follows:

    service portmap restart
    service nfs restart
    					

    The <vm_share_dir> should now be exported on the network and you should be able to use XenCenter to point to it using the Storage wizard. See the XenCenter online help for details.
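
    As a quick check, if the showmount utility is available on a client machine, you can list the server's exports (the server name below is a placeholder):

    showmount -e <nfsserver>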

Create an SR on the NFS share at the Pool level

  1. Open a host text console on any XenServer Host in the Pool.

  2. Create the Storage Repository on server:/path

    xe sr-create content-type=user type=nfs name-label=<SR name> shared=true device-config-server=<server> device-config-serverpath=<path>

    The device-config-server refers to the hostname of the NFS server and device-config-serverpath refers to the path on the server. Since shared is set to true, the shared storage will be automatically connected to every host in the Pool and any hosts that subsequently join will also be connected to the storage. The UUID of the created Storage Repository will be printed on the screen.
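
    For example, with a hypothetical NFS server named nfsserver exporting /vm_share_dir, the command might look like this:

    xe sr-create content-type=user type=nfs name-label="NFS VM storage" shared=true \
       device-config-server=nfsserver device-config-serverpath=/vm_share_dir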

  3. Find the UUID of the Pool

    xe pool-list
  4. Set the shared storage as the Pool-wide default

    xe pool-param-set uuid=<UUID of the Pool> default-SR=<UUID of the Storage Repository>

    Since the shared storage has been set as the Pool-wide default, all future VMs will have their disks created on shared storage by default.

3.3.3. XenServer Hosts with shared iSCSI storage

Adding shared storage to the XenServer network enables grouping of XenServer Hosts into Resource Pools, enabling live relocation of VMs and sharing of server resources.

Requirements

  • two or more x86 servers with local storage
  • one or more Windows workstations, on same network as the XenServer Hosts
  • a server providing shared storage via iSCSI

Note

To be part of a Resource Pool, the XenServer Hosts and the server or servers providing the shared iSCSI storage need to have static IP addresses.

Basic procedure

  1. Install XenServer Host software on server(s)

  2. Install XenCenter on workstation(s)

  3. Prepare the iSCSI storage

  4. If necessary, enable your iSCSI device for multiple initiators

  5. Run XenCenter and connect to XenServer Hosts

  6. Choose one XenServer Host as a Pool master and join other XenServer Hosts to its Pool.

  7. Configure the iSCSI IQN for each XenServer Host

  8. Create an SR on the iSCSI share at the Pool level

The details of how to set up iSCSI storage differ between the various iSCSI solutions on the market. In general, though, you need to provide an iSCSI target on the SAN for the VM storage, and then configure XenServer Hosts to be able to see and connect to it. This is done by providing a valid iSCSI Qualified Name (IQN) to the iSCSI target and to the iSCSI initiator on each XenServer Host.

Prepare the iSCSI storage

  1. Assign a virtual storage volume on the iSCSI SAN for VM storage
  2. Create IQNs on the SAN for each XenServer Host that will use the storage.

You can use either XenCenter or the CLI to configure the IQN for each XenServer Host and to create the SR. The following describes using the CLI; see the XenServer Help for details on using XenCenter.

To configure the iSCSI IQN for each XenServer Host via the CLI

  1. In the Text Console, issue the command:

    xe-set-iscsi-iqn <iSCSI-IQN>
    					

    Alternatively, the CLI can be used directly:

    xe host-param-set uuid=<host-UUID> other-config-iscsi_iqn=<iSCSI-IQN>
    					
  2. Repeat for each XenServer Host in the Pool.
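
For example, a host IQN might look like the following (the name shown is purely illustrative):

    xe-set-iscsi-iqn iqn.2007-10.com.example:host1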

To create an SR on the iSCSI share at the Pool level via the CLI

  1. In the Text Console of any server in the Pool, issue the command:

    xe sr-create name-label=<name for SR> \
       content-type=user device-config-target=<iSCSI server IP address> \
       device-config-targetIQN=<iSCSI target IQN> \
       device-config-localIQN=<iSCSI local IQN> \
       type=lvmoiscsi shared=true device-config-LUNid=<LUN ID>

    The device-config-target argument refers to the hostname or IP address of the iSCSI server. The device-config-LUNid argument can be a comma-separated list of LUN IDs. Since the shared argument is set to true, the shared storage will be automatically connected to every host in the Pool and any hosts that subsequently join will also be connected to the storage.

    The command returns the UUID of the created Storage Repository.

  2. Find the UUID of the Pool by issuing the command

    xe pool-list
    
  3. Set the shared storage as the Pool-wide default as follows:

    xe pool-param-set uuid=<UUID of the Pool> default-SR=<UUID of the iSCSI shared SR>
    

    Now that the shared storage has been set as the Pool-wide default, all future VMs will have their disks created on shared storage by default.

Appendix A. Troubleshooting

If you experience odd behavior, crashes, or other issues during installation, this appendix is meant to help you solve the problem if possible and, failing that, describes where logs are located and other information that can help your XenSource Solution Provider and XenSource track and resolve the issue.

Note

We recommend that you follow the troubleshooting information in this chapter solely under the guidance of your XenSource Solution Provider or XenSource Support.

XenSource provides two forms of support: you can receive free self-help support via the Support site, or you may purchase our Support Services and directly submit requests by filing an online Support Case. Our free web-based resources include product documentation, a Knowledge Base, and discussion forums.

The XenServer Host installation CD runs Linux, so most standard Linux commands can be used to diagnose installation problems. There are three virtual terminals available during installation, which display the installation menu, an interactive console and an event log, respectively. Use the ALT + F1-F3 keys to toggle between the virtual terminals.

You can check some basic things in the interactive terminal:

  • fdisk lists all disks that can be seen as a result of the loaded storage device drivers. If a particular device driver did not load, for example, the driver for a RAID card, then the disks attached to that card will not appear in the output from the fdisk command.
  • ifconfig shows the network configuration of physical NICs, including their IP addresses, netmasks, and gateway.
  • ping can be used to verify network connectivity from the XenServer Host to a remote IP address and vice-versa.
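
For example, from the interactive console (the workstation address is a placeholder):

    # list the disks detected by the loaded drivers
    fdisk -l
    # show the network configuration of the physical NICs
    ifconfig
    # verify connectivity to the XenCenter workstation
    ping <workstation IP address>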

You should use the two additional virtual terminals solely under the guidance of your XenSource Solution Provider.

Installation logs are written to /install/tmp/

Appendix B. Maintenance Procedures

This chapter documents some miscellaneous procedures for maintaining XenServer.

B.1. Reinstalling from version 4.0 to the current version

The following procedure describes how to reinstall the current version of the XenServer Host over an existing installation of XenServer Host 4.0, preserving settings and VMs. These instructions also apply when upgrading from the XenServer 4.0.0b2 beta release.

When reinstalling your host, be aware that any custom RPMs which you might have installed on the XenServer Host Control Domain will not be preserved.

To reinstall XenServer Host from version 4.0.0b2 or 4.0.1

  1. Perform an orderly shutdown on the VMs hosted on the XenServer Host.

    If any of your VMs are in the suspended state, resume them first, and then perform an orderly shutdown on them too.

  2. Reboot the XenServer Host, and boot from the new version's Installation CD.

  3. The installation script will identify the version and prompt you whether you want to reinstall over the existing installation and preserve VMs. Select OK to proceed with the installation.

  4. Follow the rest of the installation procedure as described in Section 3.1, “Installing the XenServer Host”.

  5. Run XenCenter and connect to the upgraded XenServer Host.

  6. To upgrade the drivers for a Windows VM, select the "Install Tools" menu option and open its console. Run the xensetup.exe installation program to upgrade your paravirtualized drivers. When finished, reboot the VM.

    Repeat for all other Windows VMs.

  7. If using iSCSI, then the host IQN will have been reset to a random value. Set this to the desired value if a fixed IQN is required. If the host IQN is changed, reboot the host to ensure that iSCSI Storage Repositories plug in correctly with the correct IQN, before using the reinstalled host.

If you are upgrading a pooled installation from 4.0.0 beta2, then you will need to perform an additional step when upgrading the slave hosts. After the upgrade has completed, copy your XenEnterprise license file into the control domain at /etc/xensource/license. If you are using Windows, you will need to use a Secure Copy program such as PuTTY or WinSCP for this purpose. Then reboot the XenServer host and verify that it is available for use.
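
For example, from a Linux machine you could copy the license file with scp (the local file name and host address shown are placeholders):

    scp XenEnterprise.license root@<XenServer host>:/etc/xensource/license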

B.2. Upgrading from version 3.2 to the current version

The following procedure describes how to install the current version of the XenServer Host over an existing installation of XenServer Host 3.2.

Note

When upgrading from version 3.2.0 to the current version, be aware of the following:

  • any custom RPMs which you might have installed on the XenServer Host Control Domain will not be preserved
  • existing Windows VMs will need to have the paravirtualized device drivers reinstalled

To upgrade XenServer Host from version 3.2

  1. Perform an orderly shutdown on the VMs hosted on the XenServer Host.

    If any of your VMs are in the Suspended state, Resume them first, and then perform an orderly shutdown on them too.

  2. Reboot the XenServer Host, and boot from the new version's Installation CD.

  3. The installation script will identify the older version and prompt you whether you want to install over the existing 3.2 installation and preserve VMs. Select OK to proceed with the installation.

  4. Follow the rest of the installation procedure as described in Section 3.1, “Installing the XenServer Host”.

  5. Run XenCenter and connect to the upgraded XenServer Host.

  6. To upgrade the drivers for a Windows VM, select the "Install Tools" menu option and open its console. Run the xensetup.exe installation program to upgrade your paravirtualized drivers. When finished, reboot the VM.

    To upgrade the kernel and guest utilities for Linux VMs, follow the instructions in the XenServer Virtual Machine Installation Guide.

    Repeat for all other Windows VMs.

B.3. Upgrading from version 3.1 to the current version

There is no direct upgrade path from version 3.1 to the current version. To upgrade, first move from version 3.1 to 3.2, then follow the procedure described in Section B.2, “Upgrading from version 3.2 to the current version”.

B.4. Backing up and restoring XenServer Hosts and VMs

We recommend that, whenever possible, you leave the installed state of XenServer Hosts unaltered. That is, do not install any additional packages or start additional services on XenServer Hosts, and treat them as if they are appliances. The best way to restore, then, is to re-install XenServer Host software from the installation media. If you have multiple XenServer Hosts, the best approach is to configure a PXE boot server and appropriate answerfiles for this purpose (see Appendix C, PXE installation of XenServer Host).

For VMs, the best approach is to install backup agents on them, just as if they were standard physical servers. For Windows VMs, as of this release we have tested CA BrightStor ARCserve Backup, and Symantec NetBackup and Backup Exec.

For more information about backup tools tested, best practices, and backups in general, see the XenSource Knowledge Base.

B.4.1. Backing up Virtual Machine metadata

XenServer Hosts use a per-host database to store metadata about VMs and associated resources such as storage and networking. When combined with storage repositories, this database forms the complete view of all VMs available across the pool. It is therefore important to understand how to back up this database in order to recover from physical hardware failure and other disaster scenarios.

This section first describes how to back up metadata for single-host installations, and then for more complex pool setups.

B.4.1.1. Backing up single host installations

The CLI must be used to back up the pool database. To obtain a consistent pool metadata backup file, run xe pool-dump-database against the host and archive the resulting file. The backup file will contain sensitive authentication information about the pool, so ensure it is securely stored.

To restore the pool metadata, use xe pool-restore-database with a previous dump file. If your XenServer host has died completely, then you must first do a fresh install, and then run the xe pool-restore-database command against the freshly installed host.

After a restoration of metadata, some VMs may still be registered as being “suspended”, but the Storage Repository with the suspended metadata is no longer available since the host has been reinstalled. To reset these VMs back to the halted state so that they can be started up again, use the xe vm-reset-powerstate command.
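
A sketch of the full sequence follows; the backup file name is illustrative, and the exact arguments to vm-reset-powerstate, such as the force flag, may vary by version:

    # on the running host: save the pool metadata
    xe pool-dump-database file-name=pool-backup.db
    # after a fresh install of the host: restore the metadata
    xe pool-restore-database file-name=pool-backup.db
    # reset any stranded "suspended" VMs to the halted state
    xe vm-reset-powerstate vm=<VM UUID> force=true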

Note that hosts restored using this method will have their UUIDs preserved. Thus, if you restore to a different physical machine while the original host is still running, there will be a UUID clash. The main observable effect of this clash will be that XenCenter will refuse to connect to the second host. Pool metadata backup is not the recommended mechanism for cloning physical hosts; you should use the automated installation support for that (see Appendix C, PXE installation of XenServer Host).

B.4.1.2. Backing up pooled installations

In a pool scenario, the master host provides an authoritative database which is synchronously mirrored by all the slave hosts in the pool. This provides a degree of built-in redundancy to a pool; the master can be replaced by any slave, since each of them has an accurate version of the pool database. Please refer to the XenServer Administrator's Guide for more information on how to transition a slave into becoming a master host.

This level of protection may not be sufficient; for example, if your shared storage containing the VM data is backed up in multiple sites, but your local server storage (containing the pool metadata) is not. To fully recreate a pool given just a set of shared storage, you must first run xe pool-dump-database against the master host and archive the resulting file.

To subsequently restore this backup on a brand new set of hosts

  1. Install a fresh set of XenServer hosts from the installation media, or via PXE.

  2. Use the xe pool-restore-database command on the host designated to be the new master.

  3. Run the xe host-forget command on the new master to remove the old slave machines.

  4. Use the xe pool-join command on the slave hosts to connect them to the new cluster.
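
As a sketch, steps 2 through 4 might look like the following on the CLI (the file name, UUIDs, and addresses are placeholders):

    # on the designated new master
    xe pool-restore-database file-name=pool-backup.db
    xe host-forget uuid=<UUID of old slave>
    # on each new slave
    xe pool-join master-address=<new master address> master-username=root master-password=<password>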

B.4.2. Backing up XenServer Hosts

This section describes the XenServer Host control domain backup and restore procedures. These procedures do not back up the Storage Repositories that house the VMs, but only the privileged Control Domain that runs Xen and the XenServer agent.

Note that since the privileged Control Domain is best left as installed, without customizing it with other packages, we recommend you set up a PXE boot environment to cleanly perform a fresh installation from the XenServer media as a recovery strategy. In many cases you will not need to back up the control domain at all, but just save the pool metadata (see Section B.4.1, “Backing up Virtual Machine metadata”).

Another approach is to run the XenServer installation twice, selecting to back up the existing installation when prompted. This will create a pristine copy of the freshly-installed Control Domain that can later be restored if necessary by using the installation CD and choosing the Restore option.

Using the xe commands host-backup and host-restore is another approach that you can take. If using these commands, the VM metadata must also be separately restored by using the xe pool-restore-database command as described above (see Section B.4.1, “Backing up Virtual Machine metadata”).

Note

The CLI host-backup command captures a live stream of the Control Domain, and so is not guaranteed to be completely consistent with a fresh installation. If live Control Domain backups are required, then the CLI commands are appropriate. Otherwise, this is not recommended.

Be careful not to mix up the pool metadata backup file (created using xe pool-dump-database) with the backup files for the control domain (created using xe host-backup). These files are in different formats and contain different data, and so should not be interchanged.

To back up a XenServer Host

  1. Back up the pool metadata using the xe pool-dump-database command, as described above in Section B.4.1, “Backing up Virtual Machine metadata”.

  2. On a remote host with enough disk space, run the command:

    xe host-backup file-name=filename -h hostname -u root -pw password

    This creates a compressed image of the Control Domain file system in the location specified by the file-name argument.

To restore a running XenServer Host

  1. If you want to restore a XenServer Host from a specific backup, run the following command while the XenServer Host is up and reachable:

    xe host-restore file-name=filename -h hostname -u root -pw password

    This restores the compressed image back to the hard disk of the XenServer Host. In this context “restore” is something of a misnomer, as the word usually suggests that the backed-up state has been put fully in place. The restore command here only unpacks the compressed backup file and restores it to its normal form, but it is written to another partition (/dev/sda2) and does not overwrite the current version of the filesystem.

  2. To actually use the restored version of the root filesystem, you need to reboot the XenServer Host using the XenServer installation CD and select the Restore from backup option.

    After the Restore from Backup is completed, reboot the XenServer Host machine and it will start up from the restored image.

    Finally, restore the VM metadata using the xe pool-restore-database command.

To restart a crashed XenServer Host

  1. If your XenServer Host has crashed and is no longer reachable, you need to use the XenServer installation CD to do an upgrade install (see Section B.2, “Upgrading from version 3.2 to the current version”). When that is completed, reboot the machine and make sure your host is reachable with XenCenter or the remote CLI.

  2. Then proceed with the procedure on restoring a running XenServer Host.

B.4.3. Backing up VMs

VMs are best backed up using standard backup tools running on them individually. For Windows VMs, we have tested CA BrightStor ARCserve Backup.

Appendix C. PXE installation of XenServer Host

This appendix describes setting up a TFTP server to enable PXE booting of XenServer Host installations. It also describes the use of an XML answerfile, which allows you to perform unattended installations.

C.1. Setting up the PXE boot environment

To create a PXE environment, you need:

  • a TFTP server to enable PXE booting
  • a DHCP server to provide IP addresses to the systems that are going to PXE-boot
  • an NFS, FTP, or HTTP server to house the installation files

These can all co-exist on the same server, or be distributed on different servers on the network.

Additionally, each system that you want to PXE boot and install XenServer on needs a PXE-boot-enabled Ethernet card.

The following steps assume that the Linux server or servers you will use have RPM support.

To set up a TFTP server for PXE booting

  1. TFTP requires SYSLINUX 3.11 or above. SYSLINUX is a collection of boot loaders for the Linux operating system that operates on Linux EXT2/EXT3 file systems, MS-DOS FAT file systems, network servers using PXE firmware, and CD-ROMs. Make sure you have SYSLINUX version 3.11 or above installed on your system with the command:

    # rpm -q syslinux

    If you have an earlier version, you can download an appropriate later version from ftp://ftp.kernel.org/pub/linux/utils/boot/syslinux/RPMS/i386/, then install it using the command (where version is the version number of the package you downloaded):

    # rpm -Uvh syslinux-version.rpm
  2. Check if the tftp server package is installed:

    # rpm -q tftp-server

    If it is not, install it using system-config-packages.

  3. Edit the file /etc/xinetd.d/tftp to change the line

    disable = yes

    to

    disable = no
  4. Restart the xinetd service, which manages tftp:

    # service xinetd restart
  5. Make a directory inside /tftpboot called xenserver.

  6. Copy the files mboot.c32 and pxelinux.0 from /usr/lib/syslinux to the /tftpboot directory.

  7. Copy the files install.img, vmlinuz, and xen.gz from the Base Pack CD (install.img is in the root of the CD; vmlinuz and xen.gz are in its /boot directory), and place them in /tftpboot/xenserver. (A condensed shell sketch of steps 5 through 8 appears after this procedure.)

  8. Make a directory called pxelinux.cfg inside /tftpboot and create a file named default inside it. The file contents depend on how you want to configure your PXE boot environment. For example, you might have a configuration file like the following:

    Note

    The backslashes at the ends of lines in the example PXE configuration files shown below denote continuation of lines; do not actually include them in your PXE configuration file.

    Also note that the three hyphens in the examples are necessary parts of the mboot.c32 loader syntax, and not including them will cause PXE boot attempts to fail.

    default xenserver
    label xenserver
       kernel mboot.c32
       append path/to/boot/directory/xen.gz watchdog com1=115200,8n1 \
          console=com1,tty --- path/to/boot/directory/vmlinuz \
          root=/dev/ram0 console=tty0 console=ttyS0,115200n8 \
          ramdisk_size=32758 --- path/to/boot/directory/install.img
    				

    (where path/to/boot/directory is the directory to which you copied install.img, vmlinuz, and xen.gz in the previous step). This will start an installation on any machine that boots from this server; someone must then respond manually to the prompts to complete the installation. Alternatively, you might have a configuration file like the following:

    default xenserver-auto
    label xenserver-auto
       kernel mboot.c32
       append path/to/boot/directory/xen.gz watchdog com1=115200,8n1 \
          console=com1,tty --- path/to/boot/directory/vmlinuz \
          root=/dev/ram0 console=tty0 console=ttyS0,115200n8 \
          ramdisk_size=32758 \
          answerfile=http://pxehost.acme.com/XenServer_4.0.1-answerfile \
          install --- path/to/boot/directory/install.img
    				

    This will perform an unattended installation using the answerfile at the URL specified.

    Note

    The install keyword should only be present on the kernel command line when an answerfile is being used.

    Also, if you want to use the serial console to do an installation, be sure to include the argument output=ttyS0 on the kernel command-line in addition to any other appropriate console= values.

    For details on creating an answerfile for unattended installation, see Section C.2, “Creating an answerfile for unattended PXE installation”. For more information on PXE configuration file syntax, see the SYSLINUX website.
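
As a condensed sketch of steps 5 through 8 above (assuming the Base Pack CD is mounted at /mnt/cdrom; you still need to create the pxelinux.cfg/default file by hand as described in step 8):

    # Create the TFTP directories (steps 5 and 8)
    mkdir -p /tftpboot/xenserver /tftpboot/pxelinux.cfg
    # Copy the boot loaders (step 6)
    cp /usr/lib/syslinux/mboot.c32 /usr/lib/syslinux/pxelinux.0 /tftpboot/
    # Copy the installer files from the Base Pack CD (step 7)
    cp /mnt/cdrom/install.img /mnt/cdrom/boot/vmlinuz /mnt/cdrom/boot/xen.gz /tftpboot/xenserver/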

To set up a DHCP server

  1. On the server that you will be using for DHCP, check if you have DHCP installed by issuing the command

    # rpm -qa dhcp

    If not, install it using system-config-packages.

  2. Configure the dhcp server. Refer to article 4221 in the Red Hat Knowledgebase for details.

  3. Add these lines to the end of the existing dhcpd.conf file, where tftp-server-address is the IP address of your TFTP server:

    allow booting;
    allow bootp;
    class "pxeclients" {
       match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
       next-server tftp-server-address;
       filename "pxelinux.0";
    }
  4. Restart the dhcpd service:

    # service dhcpd restart

To set up the installation media host

  1. On the server where you are going to house the installation files, copy the contents of the packages directories from the Base Pack CD to a location where they are exported by HTTP, FTP, or NFS. For example, you might make a directory in the document root of a webserver named XenServer_4.0.1 and then copy the directory packages.main from the Base Pack disk to XenServer_4.0.1/packages.main.

  2. If Linux support is also desired, copy packages.linux from the Linux Pack disk to XenServer_4.0.1/packages.linux. This structure allows you to install both packages by having the answerfile's source element contain the enclosing directory XenServer_4.0.1, or to install just the Base Pack (no support for Linux VMs) by giving the path to XenServer_4.0.1/packages.main. (A shell sketch of these copy steps appears after the source examples below.)

For example, to install both packages from the webserver http://pxehost.acme.com where the packages are in the directories mentioned above relative to the server's document root, the answerfile would contain this source element:

<source type="url">http://pxehost.acme.com/XenServer_4.0.1</source>
		

or, to install just the Base Pack and skip Linux support:

<source type="url">http://pxehost.acme.com/XenServer_4.0.1/packages.main</source>
		

To prepare the destination system

  1. Start the system, enter the Boot Menu (F12 in most BIOSes), and choose to boot from your Ethernet card.

  2. The system should then PXE boot from the installation source you set up, and the installation script will commence. If you have set up an answerfile, the installation can proceed unattended.

C.2. Creating an answerfile for unattended PXE installation

To perform unattended installations, you need to create an XML answerfile.

Here is an example answerfile:

<?xml version="1.0"?>
   <installation>
      <primary-disk>sda</primary-disk>
      <guest-disk>sdb</guest-disk>
      <guest-disk>sdc</guest-disk>
      <keymap>us</keymap>
      <root-password>mypassword</root-password>
      <source type="url">http://pxehost.acme.com</source>
		<post-install-script type="url">http://pxehost.acme.com/myscripts/post-install-script</post-install-script>
      <admin-interface name="eth0" proto="dhcp" />
      <timezone>Europe/London</timezone>
   </installation>
		

All nodes should be within a root node named installation.

The following is a summary of the elements. All values should be PCDATA within the nodes, unless otherwise stated. Required elements are indicated.

<primary-disk>

   The name of the storage device where the Dom0 should be installed, equivalent to the choice made on the Select Primary Disk step of the interactive installation process. Required.

   Attributes: you can specify a gueststorage attribute with possible values yes and no. For example:

   <primary-disk gueststorage="no">sda</primary-disk>

   If this attribute is not specified, the default is yes. If you specify no, it is possible to automate an installation scenario where no storage repository is created, provided that no guest-disk keys are specified either.

<guest-disk>

   The name of a storage device to be used for storing guests. Use one of these elements for each extra disk. Optional.

<keymap>

   The name of the keymap to use during installation. Required. For example:

   <keymap>us</keymap>

<root-password>

   The desired root password for the XenServer Host. Required.

<source>

   Where the packages should be installed from. Required.

   Attributes: type, with value url, nfs, or local.

   If url or nfs, put the URL or NFS path in the PCDATA; if local, leave the PCDATA empty. For example:

   <source type="url">http://server/packages</source>
   <source type="nfs">server:packages</source>
   <source type="local" />

<post-install-script>

   Where the post-install script is located. Required.

   Attributes: type, with value url, nfs, or local.

   If url or nfs, put the URL or NFS path in the PCDATA; if local, leave the PCDATA empty. For example:

   <post-install-script type="url">http://server/scripts</post-install-script>
   <post-install-script type="nfs">server:scripts</post-install-script>
   <post-install-script type="local" />

<admin-interface>

   The single network interface to be used as the host administration interface. Optional.

   Attributes:

   proto: dhcp or static

   name: eth0, for example.

   Children:

     • <ip>: the IP address, if proto="static"
     • <subnet-mask>: the subnet mask, if proto="static"
     • <gateway>: the gateway, if proto="static"

   All three child elements are required if proto="static" (see the example after this element summary).

<timezone>

   In the format used by the TZ variable, e.g. Europe/London or America/Los_Angeles. Required.

<nameserver>

   The name of a nameserver. Use one of these elements for each nameserver you wish to nominate. Optional.

<hostname>

   Specify if you want to set a hostname manually. Optional.
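
For example, a minimal sketch of an <admin-interface> element using static addressing (the addresses are illustrative):

   <admin-interface name="eth0" proto="static">
      <ip>192.168.0.10</ip>
      <subnet-mask>255.255.255.0</subnet-mask>
      <gateway>192.168.0.1</gateway>
   </admin-interface>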

C.3. Installation media repository format

The repository format described here should be used by installation sources and driver disks.

C.3.1. Presence of installation media repositories

Given a path, the presence of a XenSource installation media repository is determined by checking for the existence of valid XS-REPOSITORY and XS-PACKAGES files. From a given base, that base is checked, along with the packages, packages.main, packages.linux, and packages.site subdirectories. Thus, a typical installation point will have the following format:

  xs-installation
  +-- packages.main
  |   +-- XS-REPOSITORY
  |   +-- XS-PACKAGES
  |   +-- ...
  +-- packages.linux
  |   +-- XS-REPOSITORY
  |   +-- XS-PACKAGES
  |   +-- ...
  +-- packages.site
  |   +-- XS-REPOSITORY
  |   +-- XS-PACKAGES
  |   +-- ...
			

A typical driver disk will have the following layout:

  xs-driver-disk
  +-- XS-REPOSITORY
  +-- XS-PACKAGES
			

In the first example, given a path to xs-installation, the XenServer installer will detect the presence of three repositories. In the second example, xs-driver-disk, a single repository will be detected.
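
The detection logic can be sketched in shell form as follows (base is the path being probed; this is an illustration of the rule above, not the installer's actual code):

    base=/path/to/media
    for d in . packages packages.main packages.linux packages.site; do
        if [ -f "$base/$d/XS-REPOSITORY" ] && [ -f "$base/$d/XS-PACKAGES" ]; then
            echo "repository detected: $base/$d"
        fi
    done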

C.3.2. Installation media repository metadata

The XS-REPOSITORY file is used to describe a XenSource-format installation media repository. It has four fields, separated by newlines:

  • repository id
  • repository name
  • intended target product
  • intended target version

Repository IDs should be alphanumeric strings that provide a machine identifier for the repository. They should be unique within a target product and version. Best practice is to use the form

vendor:repository

XenSource repositories start with xs (for example, xs:main), custom repositories should use a form such as custom:my-repo, and third-party add-ons should be identified as such by using an appropriate vendor string. This helps avoid name clashes.

Repository names are presented to the user, and should identify the repository clearly enough that the user can confirm they wish to install from it.

The intended target product will be XenServer; the intended target version is of the form 4.0.1-build, where build is the build number.

C.3.3. Package metadata

The XS-PACKAGES file describes the packages in a repository, one line per package. Fields are separated by spaces.

There are three types of package:

  • tbz2 packages are bzipped tarballs that get extracted onto the root filesystem
  • driver packages are kernel modules that get loaded by the installer at runtime as well as being installed into the filesystem
  • firmware packages are made available during the installation so that they may be loaded by udev in addition to getting installed into the target filesystem.

Firmware loading support is currently limited; this will be addressed in a future release.

The first three fields are mandatory: package name, package size, and package checksum (md5). The fourth field is the package type, either tbz2, driver, or firmware. Which type is used dictates the contents of the subsequent fields.

If the type is tbz2, the subsequent fields are: whether the package is required or optional, the source filename, and the destination (usually just /).

Example:

docs 37750 2ba1783d84d10c71f07469252c555427 tbz2 required docs.tar.bz2 /

If the type is driver, the subsequent fields are the source filename and the destination (${KERNEL_VERSION} will be substituted with the Xen kernel version).

Example:

firmware_example 77001 3452c04dfcc237cde11c63d43e97a303 driver \
firmware_example.ko \
/lib/modules/${KERNEL_VERSION}/extra/firmware_example.ko

If the type is firmware, the subsequent field is the destination filename (no path is necessary; it is automatically prefixed with /lib/firmware/).

Example:

firmware 12 6f5902ac237024bdd0c176cb93063dc4 firmware sample_firmware.bin

Note

The backslashes at the ends of lines in the examples in this section denote continuation of lines; do not actually include them in a XS-PACKAGES file.
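
The mandatory fields of a package line can be generated mechanically. For example, a sketch for the docs package shown above (this is illustrative, not a XenSource-provided tool):

    size=$(stat -c %s docs.tar.bz2)                  # package size in bytes
    md5=$(md5sum docs.tar.bz2 | awk '{ print $1 }')  # md5 checksum field
    echo "docs $size $md5 tbz2 required docs.tar.bz2 /" >> XS-PACKAGES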

C.3.4. Example files

C.3.4.1. XS-REPOSITORY

xs:main
Base Pack and extra driver
XenServer
3.2.0-1934
					

C.3.4.2. XS-PACKAGES

storage-manager 59831 b66672f0aa681bd2b498e3d902f17c04 tbz2 required \
storage-manager.tar.bz2 /
docs 37750 2ba1783d84d10c71f07469252c555427 tbz2 required docs.tar.bz2 /
xgts-main 1133 59dda9c318f4205167350b7ed993b5cd tbz2 required \
xgts-main.tar.bz2 /
pvdrivers-win 524477 37ea0c145f5b0d7a2740ecb69d21ed52 tbz2 required \
pvdrivers-win.tar.bz2 /
dom0fs 169875708 c1a86d705915eda16cca84cccffaca9f tbz2 required \
dom0fs.tar.bz2 /
					

C.3.5. Notes on best practice

If a driver disk is used, any tbz2 packages on it will also be installed to the target. However, a copy of the repository is taken so that the drivers can be loaded at runtime, and this copy is placed into memory. Therefore, if you are constructing a driver disk that also includes user-space tools, and these result in a large repository, it is better to split it into two repositories and require that users install your add-ons through the packages.site mechanism. Alternatively, provide a post-install script to install them after the fact.

Appendix D. Xen Memory Usage

When calculating the memory footprint of a Xen host there are two components that must be taken into consideration. First there is the memory consumed by the Xen hypervisor itself; then there is the memory consumed by the host's control domain. The control domain is a privileged VM that provides low-level services to other VMs, such as providing access to physical devices. It also runs the management tool stack.

D.1. Memory scaling algorithm

On a XenServer host, the Xen hypervisor (and its associated system tools) occupies approximately 128 MB of RAM.

Calculating the memory consumed by the control domain is more complicated, since this value depends on the amount of physical RAM in the particular host. The memory used by the control domain is always at least 200MB, and is never more than 752MB; within that range it is scaled as a linear function of total host RAM. For hosts with up to 3.5GB of physical RAM, the control domain usage remains at 200MB; on a 5GB host the control domain will use 228MB; on a 16GB host the control domain consumes 454MB; and on hosts with 32GB or more the control domain consumes 752MB.

D.1.1. Increasing the reserved memory size

The default memory scaling algorithm is designed to be sufficient for using normal guest operating systems with over 256MB of RAM and 2-4 virtual disks. If you have more specialized requirements (e.g. a large number of VMs with 64MB of RAM each, and 7 virtual disks each), you may need to tweak the amount of memory reserved for the control domain. This is an advanced operation, and you may wish to contact XenSource Support before doing this.

When you have installed your XenServer host, log into the host console and run cat /proc/meminfo. The value of MemTotal tells you how much memory has been reserved for the control domain. If your control domain is under memory pressure, the value of SwapFree will be lower than SwapTotal, and you may improve overall system performance by increasing the reserved memory.
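
For example, to check all three values in a single step:

    grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo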

The memory reservation algorithm is based on the total amount of RAM in your XenServer host, and is calculated as 126 + (G × T), where T is the total RAM in the host (in MB) and G is the memory gradient. The memory gradient is the scaling parameter that increases the control domain memory as the amount of host memory increases; it defaults to 0.0205.
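
As a worked sketch of this formula, clamped to the 200MB-752MB range described in Section D.1 (T=16000 corresponds to the 16GB example given there):

    awk -v T=16000 -v G=0.0205 'BEGIN {
        m = 126 + G * T
        if (m < 200) m = 200
        if (m > 752) m = 752
        printf "control domain memory: %d MB\n", m
    }'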

The value of the memory gradient can be increased by altering the value of XAPI_DOM0_MEM_GRADIENT in the /etc/sysconfig/xapi configuration file and rebooting the system. Do not decrease the memory gradient under any circumstances.