Cloud management doesn't require a daemon running on each node. We
already have L<sshd(8)> for secure access, and L<libvirtd(8)> to
manage the state of the guests. On most Linux systems, those are
-running out of the box. That is sufficient to manage all te state we
+running out of the box. That is sufficient to manage all the state we
care about. C<mclu> just goes out and queries each node for that
information when it needs it (in parallel of course). Nodes that are
-switched off are handled by ignoring them.
+switched off are simply ignored.
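The daemon-less query described above can be sketched as a dry run in
shell; the node names are hypothetical (mclu takes the real list from
its configuration), and C<echo> is left in so nothing is executed:

```shell
#!/bin/sh
# Sketch: query each node over the stock sshd(8)/libvirtd(8) pair,
# in parallel.  Node names are hypothetical examples; drop the
# "echo" to actually run the queries.
for node in node0 node1 node2; do
    echo virsh -c "qemu+ssh://root@$node/system" list --all &
done
wait   # a node that is switched off simply fails and is ignored
```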
=back
+=item cmdline
+
+The template MAY print a kernel command line. This is used to boot
+the guest, but only when C<needs-external-kernel> is set (see below).
+
=item disk-bus
The template MAY print the disk type supported by this guest. Possible
The template MAY print the minimum disk space required by this guest.
Abbreviations like C<10G> are supported.
+=item needs-external-kernel
+
+The template MAY print C<yes> or C<1>. If it does so then after the
+guest has been built, L<virt-get-kernel(1)> is run to extract the
+kernel and initrd from the guest, and these are used to boot the guest
+with an external kernel and initrd (ie. using the libvirt
+C<E<lt>kernelE<gt>> and C<E<lt>initrdE<gt>> directives).
+
=item network-model
The template MAY print the network type supported by this guest.
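For reference, the libvirt C<E<lt>kernelE<gt>> and C<E<lt>initrdE<gt>>
directives mentioned under C<needs-external-kernel> live inside the
domain's C<E<lt>osE<gt>> element; the file paths here are invented
examples, not paths mclu necessarily uses:

```xml
<os>
  <type>hvm</type>
  <!-- kernel and initrd as extracted by virt-get-kernel(1);
       paths are hypothetical examples -->
  <kernel>/var/lib/mclu/guest-vmlinuz</kernel>
  <initrd>/var/lib/mclu/guest-initrd.img</initrd>
  <!-- the string printed by the template's "cmdline" key -->
  <cmdline>console=ttyS0</cmdline>
</os>
```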
=head1 COPYRIGHT
-(C) Copyright 2014-2015 Red Hat Inc.
+(C) Copyright 2014-2016 Red Hat Inc.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by