--- /dev/null
+Configuring mclu
+----------------
+
+Introduction
+------------
+
+mclu is based around the idea that you have a small collection of
+fairly similar "nodes". Ideally the nodes would be identical, since
+live migration works most seamlessly between identical machines, but
+they don't have to be.
+
+Here is a picture of the cluster that I run mclu on:
+http://rwmj.wordpress.com/2014/04/28/caseless-virtualization-cluster-part-5/#content
+
+Nodes, control node, NFS server
+-------------------------------
+
+There is also one "control" node, which could be your laptop or could
+be one of the cluster nodes. This is where you run the 'mclu'
+command, and also where the single configuration file is located
+(mclu.conf, usually located in /etc/mclu).
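+A sketch of what the configuration might look like (the option names
+below are assumptions for illustration, not necessarily the real
+syntax -- consult the example mclu.conf shipped with mclu):
+
+    # /etc/mclu/mclu.conf -- hypothetical sketch
+    [globals]
+    nodes = node0 node1 node2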
+
+Each node must be accessible from the control node over ssh. Each
+node must be running the libvirt daemon (libvirtd).
+
+mclu uses a mix of ssh commands and remote libvirt to manage the
+nodes. You should configure ssh so it can access the nodes without
+needing passwords (eg. using ssh-agent). If you use the default
+libvirt URI (see config file) then you also need to set up
+passwordless root ssh access to the nodes; there are other ways to
+configure this, eg. opening the libvirtd port on each node, but they
+are probably not as secure.
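+For example, to set up passwordless root ssh to a node called node0
+(the hostname is just a placeholder) and check that remote libvirt
+works:
+
+    ssh-keygen                     # if you don't already have a key
+    ssh-copy-id root@node0         # repeat for each node
+    eval $(ssh-agent); ssh-add     # type the passphrase only once
+
+    # This should list the node's guests without prompting:
+    virsh -c qemu+ssh://root@node0/system list --all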
+
+Each node, including the control node, must have access to shared
+storage where the guest disk images are stored. The easiest way to do
+this is to export /var/lib/libvirt/images from one machine and
+NFS-mount it on all the nodes (and also to have a nice fast network).
+Cluster filesystems are another possibility. mclu does NOT support
+non-shared storage or storage migration.
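+For example (the server name, network and options are placeholders to
+adapt): on the machine exporting the images, add a line to
+/etc/exports and re-export:
+
+    # /etc/exports
+    /var/lib/libvirt/images 192.168.0.0/24(rw,no_root_squash)
+
+    exportfs -ra
+
+and mount it on every node, eg. via /etc/fstab:
+
+    # /etc/fstab
+    server:/var/lib/libvirt/images /var/lib/libvirt/images nfs defaults 0 0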
+
+Wake-on-LAN
+-----------
+
+The nodes can be up or down. mclu deals transparently with nodes
+being switched off. If you configure wake-on-LAN (usually a BIOS
+setting) then mclu will be able to wake up nodes.
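+On Linux you can usually check and enable wake-on-LAN from the OS as
+well, eg. with ethtool (eth0 is a placeholder for the node's
+interface, and note the setting may not persist across reboots):
+
+    ethtool eth0 | grep Wake-on    # "g" means magic packets work
+    ethtool -s eth0 wol g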
+
+Running guests
+--------------
+
+The guest libvirt XML files are also stored on the control node
+(usually /etc/mclu/xmls). Guests are created as transient, which
+means the libvirt daemon running on each node does not have a
+persistent configuration of any guest.
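+("Transient" here is standard libvirt behaviour: mclu does the
+equivalent of 'virsh create', which starts a guest from an XML file
+without saving a persistent definition, as opposed to 'virsh define'
+followed by 'virsh start'. The command below is for illustration
+only; mclu runs the equivalent for you, and node0 is a placeholder.)
+
+    virsh -c qemu+ssh://root@node0/system create /etc/mclu/xmls/guest.xml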
+
+Guests run on a single node at a time. You can list, start, stop and
+migrate them using mclu. The requirement that a guest runs on only
+one node at a time can be enforced by running libvirt sanlock or
+virtlockd.
+This requires further configuration, see:
+http://libvirt.org/locking.html
+https://rwmj.wordpress.com/2014/05/08/setting-up-virtlockd-on-nfs/#content
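+Very roughly, and assuming a systemd-based distro, the virtlockd
+setup on each node looks like this (the links above cover the
+details, including the extra steps needed for NFS):
+
+    # In /etc/libvirt/qemu.conf, set:
+    #   lock_manager = "lockd"
+    systemctl enable virtlockd
+    systemctl start virtlockd
+    systemctl restart libvirtd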
+
+If sanlock/virtlockd is not running then mclu will try its best not to
+have the guest running in two places at once (if it happens, this will
+cause permanent disk corruption in the guest).
+
+Migration
+---------
+
+For guest live migration to work transparently, you will probably want
+to configure libvirt bridged networking and open firewall ports
+49152-49215 on every node.
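+For example, on nodes using firewalld (adapt for plain iptables):
+
+    firewall-cmd --permanent --add-port=49152-49215/tcp
+    firewall-cmd --reload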
+
+Bridged networking means that each guest appears as a local machine on
+your network, and if it migrates then network connections will not be
+interrupted. See:
+http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
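+With bridged networking, the guest XML contains an interface like
+this (br0 is a placeholder for whatever your host bridge is called):
+
+    <interface type='bridge'>
+      <source bridge='br0'/>
+      <model type='virtio'/>
+    </interface>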
+
+The firewall ports have to be opened because libvirt cannot (yet?) do
+fully managed migration over two SSH connections (even though the
+documentation says it can). Hopefully they will fix this soon.
+
+Editing guest configuration or disk images
+------------------------------------------
+
+Each guest's configuration (libvirt XML) is stored in the xmls
+directory (usually /etc/mclu/xmls). You can edit these files directly
+if you want. Changes will not take effect until the guest is
+restarted.
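+For example (myguest is a placeholder, and the exact subcommand
+syntax may differ from this sketch -- check mclu's usage message):
+
+    mclu stop myguest
+    mclu start myguest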
+
+The guest disk images are located in the images directory (usually
+/var/lib/libvirt/images). You can use libguestfs against these in
+read-only mode (or read-write mode PROVIDED the guest is not running).
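+For example (guest.img is a placeholder disk image name):
+
+    # Read-only inspection is always safe:
+    virt-cat -a /var/lib/libvirt/images/guest.img /etc/fstab
+    guestfish --ro -a /var/lib/libvirt/images/guest.img -i
+
+    # Read-write, ONLY if the guest is shut down:
+    guestfish -a /var/lib/libvirt/images/guest.img -i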
+
+You can also create or import guests by dropping an XML file and an
+image into those directories. (But it might be easier to use the
+'mclu build' and 'mclu import' subcommands).
Configuration notes
----------------------------------------------------------------------
-mclu is based around the idea that you have a small collection of
-fairly similar "nodes". Ideally they would be identical nodes, if you
-want live migration to work seamlessly, but they don't have to be.
-
-Here is a picture of the cluster that I run mclu on:
-http://rwmj.wordpress.com/2014/04/28/caseless-virtualization-cluster-part-5/#content
-
-The nodes can be up or down. mclu deals transparently with nodes
-being switched off. If you configure wake-on-LAN (usually a BIOS
-setting) then mclu will be able to wake up nodes.
-
-There is also one "control" node, which could be your laptop or could
-be one of the cluster nodes. This is where you run the 'mclu'
-command, and also where the single configuration file is located
-(mclu.conf, usually located in /etc/mclu). The guest libvirt XML
-files are also stored on the control node (usually /etc/mclu/xmls).
-
-Each node must be accessible from the control node over ssh. Each
-node must be running the libvirt daemon (libvirtd).
-
-mclu uses a mix of ssh commands and remote libvirt to manage the
-nodes. You should configure ssh so it can access the nodes without
-needing passwords (eg. using ssh-agent). If you use the default
-libvirt URI (see config file) then you also need to set up
-passwordless root ssh access to the nodes; there are other ways to
-configure this, eg. opening the libvirtd port on each node, but they
-are probably not as secure.
-
-Each node, including the control node, must have access to shared
-storage where the guest disk images are stored. The easiest way to do
-this is to export /var/lib/libvirt/images from one machine and
-NFS-mount it on all the nodes (and also to have a nice fast network).
-Cluster filesystems are another possibility. mclu does NOT support
-non-shared storage nor storage migration.
-
-Guests run on a single node at a time. You can list/start/stop/
-migrate them using mclu. The requirement for a guest to be running on
-a single node may be enforced if you run libvirt sanlock or virtlockd.
-This requires further configuration, see:
-http://libvirt.org/locking.html
-https://rwmj.wordpress.com/2014/05/08/setting-up-virtlockd-on-nfs/#content
-
-If sanlock/virtlockd is not running then mclu will try its best not to
-have the guest running in two places at once (if it happens, this will
-cause permanent disk corruption in the guest).
-
-For guest live migration to work transparently, you will probably want
-to configure libvirt bridged networking and open firewall ports
-49152-49215 on every node.
-
-Bridged networking means that each guest appears as a local machine on
-your network, and if it migrates then network connections will not be
-interrupted. See:
-http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
-
-The firewall ports have to be opened because libvirt cannot (yet?) do
-fully managed migration over two SSH connections (even though the
-documentation says it can). Hopefully they will fix this soon.
+See `CONFIGURATION'.
Dependencies
----------------------------------------------------------------------