# Copyright (C) 2015 Richard W.M. Jones <rjones@redhat.com>
# Copyright (C) 2015 Red Hat Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
use File::Temp qw(tempdir);
use POSIX qw(_exit setgid setuid strftime);
import-to-ovirt.pl - Import virtual machine disk image to RHEV or oVirt

 sudo ./import-to-ovirt.pl disk.img server:/esd

 sudo ./import-to-ovirt.pl disk.img /esd_mountpoint
=head1 IMPORTANT NOTES

In the latest oVirt/RHEV/RHV there is a GUI option to import disks.
You B<do not need to use this script> if you are using a sufficiently
recent version.

This tool should B<only> be used if the guest can already run on KVM.

B<If you need to convert the guest from some foreign hypervisor, like
VMware, Xen or Hyper-V, you should use L<virt-v2v(1)> instead.>

Import a KVM guest to the Export Storage Domain of your RHEV or oVirt
system.  The NFS mount of the Export Storage Domain is C<server:/esd>.

 sudo ./import-to-ovirt.pl disk.img server:/esd

Import a KVM guest to an already-mounted Export Storage Domain:

 sudo ./import-to-ovirt.pl disk.img /esd_mountpoint

If the guest has multiple disks, use:

 sudo ./import-to-ovirt.pl disk1.img [disk2.img [...]] server:/esd

If you are importing multiple guests, you must import each one
separately.  Do not pass multiple disk parameters if the disks come
from different guests.
This is a command line script for importing a KVM virtual machine to
RHEV or oVirt.  The script assumes that the guest can already run on
KVM, i.e. that it was previously running on KVM and has the required
drivers.  If the guest comes from a foreign hypervisor like VMware,
Xen or Hyper-V, use L<virt-v2v(1)> instead.

This script only imports the guest into the oVirt "Export Storage
Domain" (ESD).  After the import is complete, you must go to the
oVirt user interface, open the C<Storage> tab, select the right ESD,
and use C<VM Import> to take the guest from the ESD to the data
domain.  This process is outside the scope of this script, but could
be automated using the oVirt API.
 ./import-to-ovirt.pl [list of disks] server:/esd

If you are unclear about the C<server:/esd> parameter, go to the oVirt
C<Storage> tab, select C<Export> as the C<Domain Type>, and look in
the C<General> tab under C<Path>.

If the ESD is already mounted on your machine (or if you are using a
non-NFS ESD), then you can supply a direct path to the mountpoint
instead:

 ./import-to-ovirt.pl [list of disks] /esd_mountpoint

The list of disks should all belong to a single guest (most guests
will only have a single disk).  If you want to import multiple guests,
you must run the script multiple times.

Importing from OVA etc. is not supported.  Try C<ovirt-image-uploader>
(if the OVA was exported from oVirt), or L<virt-v2v(1)> (if the OVA
was exported from VMware).
You probably need to run this script as root, because it has to create
files on the ESD as a special C<vdsm> user (UID:GID C<36:36>).

It may also be possible to run the script as the vdsm user.  But if
you run it as some non-root, non-vdsm user, then oVirt won't be able
to read the data from the ESD and will give an error.

NFS "root squash" should be turned off on the NFS server, since it
stops us from creating files as the vdsm user.  Also, NFSv4 may not
work unless you have set up idmap correctly (good luck!).
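The exact NFS server configuration is site-specific, but as a rough
illustration (the export path and client wildcard below are invented,
not taken from this script), an F</etc/exports> entry with root squash
disabled might look like this:

```
/exports/esd  *(rw,sync,no_root_squash)
```

The exported directory itself must be readable and writable by UID:GID
C<36:36>, e.g. by running C<chown 36:36 /exports/esd> on the server.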
=head2 Network card and disk model

(See also L</TO DO> below)

Currently this script doesn't add a network card to the guest.  You
will need to add one yourself in the C<VM Import> tab when importing
the guest.

Similarly, the script always adds the disks as virtio-blk devices.  If
the guest is expecting IDE, SCSI or virtio-scsi, you will need to
change the disk type when importing the guest.
Display brief help and exit.

Display the manual page and exit.

my $memory_mb = 1024;

Set the memory size I<in megabytes>.  The default is 1024.

Set the guest name.  If not present, a name is made up based on
the filename of the first disk.

Set the number of virtual CPUs.  The default is 1.

my $vmtype = "Desktop";

=item B<--vmtype> Desktop

=item B<--vmtype> Server

Set the VmType field in the OVF.  It must be C<Desktop> or
C<Server>.  The default is C<Desktop>.
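Putting these options together, a typical invocation (the disk
filename and NFS path are placeholders) might be:

```
sudo ./import-to-ovirt.pl --name myguest --memory 2048 --vcpus 2 \
    --vmtype Server disk.img server:/esd
```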
GetOptions ("help|?" => \$help,
            "memory=i" => \$memory_mb,
            "vcpus=i" => \$vcpus,
            "vmtype=s" => \$vmtype,
    or die "$0: unknown command line option\n";

pod2usage (1) if $help;
pod2usage (-exitval => 0, -verbose => 2) if $man;
# Get the parameters.
die "Use '$0 --man' to display the manual.\n"

my @disks = @ARGV[0 .. $#ARGV-1];
my $output = $ARGV[$#ARGV];

if (!defined $name) {
    # Strip only the trailing extension from the first disk's filename.
    $name =~ s{\.[^.]+$}{};

if ($vmtype =~ /^Desktop$/i) {
} elsif ($vmtype =~ /^Server$/i) {
    die "$0: --vmtype parameter must be 'Desktop' or 'Server'\n"
# Open the guest in libguestfs so we can inspect it.
my $g = Sys::Guestfs->new ();
$g->set_program ("virt-import-to-ovirt");
$g->add_drive_opts ($_, readonly => 1) foreach (@disks);

my @roots = $g->inspect_os ();
die "$0: no operating system was found on the disk\n"

die "$0: either this is a multi-OS disk, or you passed multiple unrelated guest disks on the command line\n"

my $root = $roots[0];

# Save the inspection data.
my $type = $g->inspect_get_type ($root);
my $distro = $g->inspect_get_distro ($root);
my $arch = $g->inspect_get_arch ($root);
my $major_version = $g->inspect_get_major_version ($root);
my $minor_version = $g->inspect_get_minor_version ($root);
my $product_name = $g->inspect_get_product_name ($root);
my $product_variant = $g->inspect_get_product_variant ($root);

# Get the virtual size of each disk.
    push @virtual_sizes, $g->disk_virtual_size ($_);
# Map inspection data to RHEV ostype.
my $ostype;
if ($type eq "linux" && $distro eq "rhel" && $major_version <= 6) {
    if ($arch eq "x86_64") {
        $ostype = "RHEL${major_version}x64"
    } else {
        $ostype = "RHEL$major_version"
    }
}
elsif ($type eq "linux" && $distro eq "rhel") {
    if ($arch eq "x86_64") {
        $ostype = "rhel_${major_version}x64"
    } else {
        $ostype = "rhel_$major_version"
    }
}
elsif ($type eq "linux") {
    $ostype = "OtherLinux"
}
elsif ($type eq "windows" && $major_version == 5 && $minor_version == 1) {
    $ostype = "WindowsXP"
}
elsif ($type eq "windows" && $major_version == 5 && $minor_version == 2) {
    if ($product_name =~ /XP/) {
        $ostype = "WindowsXP"
    } elsif ($arch eq "x86_64") {
        $ostype = "Windows2003x64"
    } else {
        $ostype = "Windows2003"
    }
}
elsif ($type eq "windows" && $major_version == 6 && $minor_version == 0) {
    if ($arch eq "x86_64") {
        $ostype = "Windows2008x64"
    } else {
        $ostype = "Windows2008"
    }
}
elsif ($type eq "windows" && $major_version == 6 && $minor_version == 1) {
    if ($product_variant eq "Client") {
        if ($arch eq "x86_64") {
            $ostype = "Windows7x64"
        } else {
            $ostype = "Windows7"
        }
    } else {
        $ostype = "Windows2008R2x64"
    }
}
elsif ($type eq "windows" && $major_version == 6 && $minor_version == 2) {
    if ($product_variant eq "Client") {
        if ($arch eq "x86_64") {
            $ostype = "windows_8x64"
        } else {
            $ostype = "windows_8"
        }
    } else {
        $ostype = "windows_2012x64"
    }
}
elsif ($type eq "windows" && $major_version == 6 && $minor_version == 3) {
    $ostype = "windows_2012R2x64"
}
else {
    $ostype = "Unassigned"
}
# Mount the ESD if needed (or just check it exists).
if (-d $output) {
    $mountpoint = $output;
} elsif ($output =~ m{^.*:.*$}) {
    $umount = $mountpoint = tempdir (CLEANUP => 1);
    system ("mount", "-t", "nfs", $output, $mountpoint) == 0
        or die "$0: mount $output failed: $?\n";
    END { system ("umount", $umount) if defined $umount }
} else {
    die "$0: ESD $output is not a directory or an NFS mountpoint\n"
}

# Check the ESD looks like an ESD.
my @entries = <$mountpoint/*>;
@entries =
    grep { m{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$}i }
    @entries;
die "$0: does $output really point to an oVirt Export Storage Domain?\n"

die "$0: multiple GUIDs found in oVirt Export Storage Domain\n"

my $esd_uuid_dir = $entries[0];
my $esd_uuid = $esd_uuid_dir;
$esd_uuid =~ s{.*/}{};

if (! -d $esd_uuid_dir ||
    ! -d "$esd_uuid_dir/images" ||
    ! -d "$esd_uuid_dir/master" ||
    ! -d "$esd_uuid_dir/master/vms") {
    die "$0: $output doesn't look like an Export Storage Domain\n"
}
print "Importing $product_name to $output...\n";

# A helper function that forks and runs some code / a command as
# an alternate UID:GID.
    die "fork: $!" unless defined $pid;
    # waitpid returns -1 on error (a plain 'or die' would never fire,
    # since -1 is true in Perl).
    waitpid ($pid, 0) >= 0 or die "waitpid: $!";
        die "$0: run_as_vdsm: child process failed (status $?)\n";

    local $_ = `uuidgen -r`;
    die unless length $_ >= 30; # Sanity check.

# Generate some random UUIDs.
my $vm_uuid = uuidgen ();
    push @image_uuids, uuidgen ();

    push @vol_uuids, uuidgen ();
# Make sure the output is deleted on unsuccessful exit.  We set
# $delete_output_on_exit to false at the end of the script.
my $delete_output_on_exit = 1;
END {
    if ($delete_output_on_exit) {
        # Can't use run_as_vdsm in an END{} block.
        foreach (@image_uuids) {
            system ("rm", "-rf", "$esd_uuid_dir/images/$_");
        }
        system ("rm", "-rf", "$esd_uuid_dir/master/vms/$vm_uuid");
    }
}
# Copy and convert the disk images.

my $iso_time = strftime ("%Y/%m/%d %H:%M:%S", gmtime ());
my $imported_by = "Imported by import-to-ovirt.pl";

for ($i = 0; $i < @disks; ++$i) {
    my $input_file = $disks[$i];
    my $image_uuid = $image_uuids[$i];

    my $path = "$esd_uuid_dir/images/$image_uuid";
    mkdir ($path, 0755) or die "mkdir: $path: $!";

    my $output_file = "$esd_uuid_dir/images/$image_uuid/".$vol_uuids[$i];

    open (my $fh, ">", $output_file) or die "open: $output_file: $!";
    # Well done NFS root_squash, you make the world less secure.
    chmod (0666, $output_file) or die "chmod: $output_file: $!";

    print "Copying $input_file ...\n";
    system ("qemu-img", "convert", "-p",
            "-o", "compat=0.10", # for RHEL 6-based ovirt nodes
        or die "qemu-img: $input_file: failed (status $?)";
    push @real_sizes, -s $output_file;

    my $size_in_sectors = $virtual_sizes[$i] / 512;
    # Create .meta files per disk.
PUUID=00000000-0000-0000-0000-000000000000
SIZE=$size_in_sectors
DESCRIPTION=$imported_by
    my $meta_file = $output_file . ".meta";

    open (my $fh, ">", $meta_file) or die "open: $meta_file: $!";
print "Creating OVF metadata ...\n";

my $rasd_ns = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData";
my $vssd_ns = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData";
my $xsi_ns = "http://www.w3.org/2001/XMLSchema-instance";
my $ovf_ns = "http://schemas.dmtf.org/ovf/envelope/1/";

my @forced_ns_decls = keys %prefix_map;

my $w = XML::Writer->new (
    PREFIX_MAP => \%prefix_map,
    FORCED_NS_DECLS => \@forced_ns_decls,

$w->startTag ([$ovf_ns, "Envelope"],
              [$ovf_ns, "version"] => "0.9");
$w->comment ($imported_by);
$w->startTag ("References");

for ($i = 0; $i < @disks; ++$i)
{
    my $href = $image_uuids[$i] . "/" . $vol_uuids[$i];
    $w->startTag ("File",
                  [$ovf_ns, "href"] => $href,
                  [$ovf_ns, "id"] => $vol_uuids[$i],
                  [$ovf_ns, "size"] => $virtual_sizes[$i],
                  [$ovf_ns, "description"] => $imported_by);
}

$w->startTag ("Section",
              [$xsi_ns, "type"] => "ovf:NetworkSection_Type");
$w->startTag ("Info");
$w->characters ("List of networks");

$w->startTag ("Section",
              [$xsi_ns, "type"] => "ovf:DiskSection_Type");
$w->startTag ("Info");
$w->characters ("List of Virtual Disks");
for ($i = 0; $i < @disks; ++$i)
{
    my $virtual_size_in_gb = $virtual_sizes[$i];
    $virtual_size_in_gb /= 1024;
    $virtual_size_in_gb /= 1024;
    $virtual_size_in_gb /= 1024;
    my $real_size_in_gb = $real_sizes[$i];
    $real_size_in_gb /= 1024;
    $real_size_in_gb /= 1024;
    $real_size_in_gb /= 1024;
    my $href = $image_uuids[$i] . "/" . $vol_uuids[$i];

        $boot_drive = "True";
        $boot_drive = "False";

    $w->startTag ("Disk",
                  [$ovf_ns, "diskId" ] => $vol_uuids[$i],
                  [$ovf_ns, "actual_size"] =>
                      sprintf ("%.0f", $real_size_in_gb),
                      sprintf ("%.0f", $virtual_size_in_gb),
                  [$ovf_ns, "fileRef"] => $href,
                  [$ovf_ns, "parentRef"] => "",
                  [$ovf_ns, "vm_snapshot_id"] => uuidgen (),
                  [$ovf_ns, "volume-format"] => "COW",
                  [$ovf_ns, "volume-type"] => "Sparse",
                  [$ovf_ns, "format"] => "http://en.wikipedia.org/wiki/Byte",
                  [$ovf_ns, "disk-interface"] => "VirtIO",
                  [$ovf_ns, "disk-type"] => "System",
                  [$ovf_ns, "boot"] => $boot_drive);
}
$w->startTag ("Content",
              [$ovf_ns, "id"] => "out",
              [$xsi_ns, "type"] => "ovf:VirtualSystem_Type");
$w->startTag ("Name");
$w->characters ($name);
$w->startTag ("TemplateId");
$w->characters ("00000000-0000-0000-0000-000000000000");
$w->startTag ("TemplateName");
$w->characters ("Blank");
$w->startTag ("Description");
$w->characters ($imported_by);
$w->startTag ("Domain");
$w->startTag ("CreationDate");
$w->characters ($iso_time);
$w->startTag ("IsInitilized"); # sic
$w->characters ("True");
$w->startTag ("IsAutoSuspend");
$w->characters ("False");
$w->startTag ("TimeZone");
$w->startTag ("IsStateless");
$w->characters ("False");
$w->startTag ("Origin");
$w->characters ("0");
$w->startTag ("VmType");
$w->characters ($vmtype);
$w->startTag ("DefaultDisplayType");
$w->characters ("1"); # qxl
$w->startTag ("Section",
              [$ovf_ns, "id"] => $vm_uuid,
              [$ovf_ns, "required"] => "false",
              [$xsi_ns, "type"] => "ovf:OperatingSystemSection_Type");
$w->startTag ("Info");
$w->characters ($product_name);
$w->startTag ("Description");
$w->characters ($ostype);

$w->startTag ("Section",
              [$xsi_ns, "type"] => "ovf:VirtualHardwareSection_Type");
$w->startTag ("Info");
$w->characters (sprintf ("%d CPU, %d Memory", $vcpus, $memory_mb));
$w->startTag ("Item");
$w->startTag ([$rasd_ns, "Caption"]);
$w->characters (sprintf ("%d virtual cpu", $vcpus));
$w->startTag ([$rasd_ns, "Description"]);
$w->characters ("Number of virtual CPU");
$w->startTag ([$rasd_ns, "InstanceId"]);
$w->characters ("1");
$w->startTag ([$rasd_ns, "ResourceType"]);
$w->characters ("3");
$w->startTag ([$rasd_ns, "num_of_sockets"]);
$w->characters ($vcpus);
$w->startTag ([$rasd_ns, "cpu_per_socket"]);

$w->startTag ("Item");
$w->startTag ([$rasd_ns, "Caption"]);
$w->characters (sprintf ("%d MB of memory", $memory_mb));
$w->startTag ([$rasd_ns, "Description"]);
$w->characters ("Memory Size");
$w->startTag ([$rasd_ns, "InstanceId"]);
$w->characters ("2");
$w->startTag ([$rasd_ns, "ResourceType"]);
$w->characters ("4");
$w->startTag ([$rasd_ns, "AllocationUnits"]);
$w->characters ("MegaBytes");
$w->startTag ([$rasd_ns, "VirtualQuantity"]);
$w->characters ($memory_mb);
$w->startTag ("Item");
$w->startTag ([$rasd_ns, "Caption"]);
$w->characters ("USB Controller");
$w->startTag ([$rasd_ns, "InstanceId"]);
$w->characters ("3");
$w->startTag ([$rasd_ns, "ResourceType"]);
$w->characters ("23");
$w->startTag ([$rasd_ns, "UsbPolicy"]);
$w->characters ("Disabled");

$w->startTag ("Item");
$w->startTag ([$rasd_ns, "Caption"]);
$w->characters ("Graphical Controller");
$w->startTag ([$rasd_ns, "InstanceId"]);
$w->characters (uuidgen ());
$w->startTag ([$rasd_ns, "ResourceType"]);
$w->characters ("20");
$w->startTag ("Type");
$w->characters ("video");
$w->startTag ([$rasd_ns, "VirtualQuantity"]);
$w->characters ("1");
$w->startTag ([$rasd_ns, "Device"]);
$w->characters ("qxl");
for ($i = 0; $i < @disks; ++$i)
{
    my $href = $image_uuids[$i] . "/" . $vol_uuids[$i];

    $w->startTag ("Item");

    $w->startTag ([$rasd_ns, "Caption"]);
    $w->characters ("Drive " . ($i+1));
    $w->startTag ([$rasd_ns, "InstanceId"]);
    $w->characters ($vol_uuids[$i]);
    $w->startTag ([$rasd_ns, "ResourceType"]);
    $w->characters ("17");
    $w->startTag ("Type");
    $w->characters ("disk");
    $w->startTag ([$rasd_ns, "HostResource"]);
    $w->characters ($href);
    $w->startTag ([$rasd_ns, "Parent"]);
    $w->characters ("00000000-0000-0000-0000-000000000000");
    $w->startTag ([$rasd_ns, "Template"]);
    $w->characters ("00000000-0000-0000-0000-000000000000");
    $w->startTag ([$rasd_ns, "ApplicationList"]);
    $w->startTag ([$rasd_ns, "StorageId"]);
    $w->characters ($esd_uuid);
    $w->startTag ([$rasd_ns, "StoragePoolId"]);
    $w->characters ("00000000-0000-0000-0000-000000000000");
    $w->startTag ([$rasd_ns, "CreationDate"]);
    $w->characters ($iso_time);
    $w->startTag ([$rasd_ns, "LastModified"]);
    $w->characters ($iso_time);
    $w->startTag ([$rasd_ns, "last_modified_date"]);
    $w->characters ($iso_time);
}

$w->endTag ("Section"); # ovf:VirtualHardwareSection_Type

$w->endTag ("Content");

$w->endTag ([$ovf_ns, "Envelope"]);
my $ovf = $w->to_string;

#print "OVF:\n$ovf\n";

my $ovf_dir = "$esd_uuid_dir/master/vms/$vm_uuid";

mkdir ($ovf_dir, 0755) or die "mkdir: $ovf_dir: $!";

my $ovf_file = "$ovf_dir/$vm_uuid.ovf";

open (my $fh, ">", $ovf_file) or die "open: $ovf_file: $!";

$delete_output_on_exit = 0;

print "OVF written to $ovf_file\n";

print "Import finished without errors.  Now go to the Storage tab ->\n";
print "Export Storage Domain -> VM Import, and import the guest.\n";
Add a network card to the OVF.  The problem is detecting what
network devices the guest can support.

Detect what disk models (e.g. IDE, virtio-blk, virtio-scsi) the
guest can support and add the correct type of disk.
=head1 DEBUGGING IMPORT FAILURES

When you export to the ESD and then import that guest through the
oVirt / RHEV-M UI, you may encounter an import failure.  Diagnosing
these failures is infuriatingly difficult, as the UI generally hides
the true reason for the failure.

There are two log files of interest.  The first is stored on the oVirt
engine / RHEV-M server itself, and is called
F</var/log/ovirt-engine/engine.log>.

The second file, which is the most useful, is found on the SPM host
(SPM stands for "Storage Pool Manager").  This is an oVirt node that
is elected to do all metadata modifications in the data center, such
as image or snapshot creation.  You can find out which host is the
current SPM from the C<Spm Status> column in the C<Hosts> tab.  Once
you have located the SPM, log into it and grab the file
F</var/log/vdsm/vdsm.log>, which will contain detailed error messages
from low-level commands.
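When digging through F<vdsm.log>, filtering for error lines first
usually pays off.  The grep shown in the comment below is only a
suggested starting point (the log path is VDSM's default); the live
command then demonstrates the same filter on two invented sample log
lines:

```shell
# Suggested triage on the SPM host (adjust the path if your
# installation logs elsewhere):
#
#   grep -iE 'error|traceback' /var/log/vdsm/vdsm.log | tail -n 20
#
# The same filter, demonstrated on two sample log lines; only the
# second line matches:
printf '%s\n' \
    'Thread-13::INFO::2015-01-01::task::state init -> preparing' \
    'Thread-13::ERROR::2015-01-01::task::Unexpected error' |
    grep -iE 'error|traceback'
# prints: Thread-13::ERROR::2015-01-01::task::Unexpected error
```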
L<https://bugzilla.redhat.com/show_bug.cgi?id=998279>,
L<https://bugzilla.redhat.com/show_bug.cgi?id=1049604>,
L<engine-image-uploader(8)>.

Richard W.M. Jones <rjones@redhat.com>
Copyright (C) 2015 Richard W.M. Jones <rjones@redhat.com>

Copyright (C) 2015 Red Hat Inc.

This program is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 2 of the License, or (at your
option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.