From: Richard W.M. Jones
Date: Mon, 14 Nov 2022 08:59:36 +0000 (+0000)
Subject: Talk about using virt-v2v in Kubevirt
X-Git-Url: http://git.annexia.org/?a=commitdiff_plain;h=65b3746ea784e2a089b992d6fe234fa232e25b24;p=libguestfs-talks.git

Talk about using virt-v2v in Kubevirt
---

diff --git a/2022-v2v-in-kubevirt/2022-v2v-in-kubevirt.odp b/2022-v2v-in-kubevirt/2022-v2v-in-kubevirt.odp
new file mode 100644
index 0000000..54aee8f
Binary files /dev/null and b/2022-v2v-in-kubevirt/2022-v2v-in-kubevirt.odp differ
diff --git a/2022-v2v-in-kubevirt/notes.txt b/2022-v2v-in-kubevirt/notes.txt
new file mode 100644
index 0000000..0863161
--- /dev/null
+++ b/2022-v2v-in-kubevirt/notes.txt
@@ -0,0 +1,125 @@

Virt-v2v in Kubevirt
====================

(0) What is it

Virt-v2v converts a virtual machine from VMware (usually) so that it
runs on KVM.  This involves installing the correct device drivers
inside the guest and making many other changes so that it boots.

It supports Windows and some Linux distros.

The first version of virt-p2v was written in 2007(!)  We've been
serious about it since about 2008/2009.


(1) Virt-v2v is a process, not a step

Many different types of input are supported:

* VMware over HTTP

* VMware over VDDK

* VMware over SSH

* Xen

* local disk files

* OVA files (GUI export or ovftool)

* local VMX files

* libvirt

* physical machines (virt-p2v; a slightly different process)

The command line is complicated, but stick to the documentation and
you'll be fine.


(2) Correctness, supportability

For a correct conversion, it's vitally important that we query VMware
for the full metadata, and just as vital that the metadata virt-v2v
writes out is what the target hypervisor actually consumes.

Metadata == VMware VMX data on the input side
         == Kubevirt YAML on the output side

Virt-v2v already knows how to do this, and also how to copy the guest
efficiently.  There are many complex corner cases.  Don't duplicate
this work.

We only support virt-v2v when it is used as directed.

I'm the one who does third-line support when things go wrong.

I've been in several "difficult" calls with customers over the years
when we've fucked up, and I'm not keen on being on any more.


(3) What we've tried

We modified Tomas's example volume populator
(https://github.com/nyoxi/vmware-populator) so that it just runs
virt-v2v.

=> It's ugly having to pass raw virt-v2v arguments

   * could be cleaned up with some CRDs

=> Only a single disk is supported

=> There is no place to write out the metadata

=> The size has to be specified up front

=> How do we report errors?

=> Possibly we could use a filesystem PVC and write out:

   * disk1.qcow2
   * disk2.qcow2
   * guest.yaml
   * virt-v2v.log

   but this would involve an extra copying step (or else modifying
   Kubevirt so that it can boot from this).

   * The main point about reducing copies is reducing *remote* copies,
     because VDDK is expensive and finicky.  Local copies of fully
     sparsified data within the target cluster may be acceptable for a
     first pass.


(4) What we also talked about

Could we just run virt-v2v in a pod?  (A rough sketch of what that
might look like follows this section.)

=> It would do the conversion and run kubectl / virtctl commands
   in order to create resources and the final guest as needed.

=> It cannot create PVCs

Can we chain pods together?

=> One pod sets up the PVCs, a second pod does the conversion and
   finishes up.

I feel we are missing some kind of "automation" layer in Kubernetes.
Is this what operators are for?
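
As a very rough sketch of the "virt-v2v in a pod" idea, something like
the Job below would mount a pre-created PVC, run a typical VDDK
conversion (the kind of command line section (1) alludes to), and then
apply the generated guest definition.  Everything here is an assumption
made for illustration: the container image, the guest and PVC names,
the credentials, and the use of the not-yet-upstream "-o kubevirt" mode
from section (5).  The pod would also need a service account that is
allowed to create Kubevirt resources.

# Hypothetical sketch only: image, names, credentials and the
# "-o kubevirt" output mode are assumptions, not a supported flow.
apiVersion: batch/v1
kind: Job
metadata:
  name: convert-guest1
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: virt-v2v
        image: quay.io/example/virt-v2v:latest   # placeholder image
        command: ["/bin/sh", "-c"]
        args:
        - |
          virt-v2v \
            -ic 'vpx://admin@vcenter.example.com/Datacenter/esxi?no_verify=1' \
            -it vddk \
            -io vddk-libdir=/opt/vmware-vix-disklib-distrib \
            -io vddk-thumbprint=xx:xx:xx:... \
            -ip /run/secrets/vcenter-password \
            guest1 \
            -o kubevirt -os /data &&
          kubectl apply -f /data/guest1.yaml
        volumeMounts:
        - name: target
          mountPath: /data
      volumes:
      - name: target
        persistentVolumeClaim:
          claimName: guest1-target   # has to exist already (see section (3))

Whether the YAML is applied from inside the pod like this, handed to a
second pod, or left to some operator is exactly the open question above.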


(5) Miscellaneous

We have an "-o kubevirt" output mode which generates Kubevirt YAML.
It's not upstream yet, but it could be once we've decided what to do.

If we decide that some other process must create the PVC(s) in
advance, then virt-v2v needs a way to estimate the number and size of
the disks needed.  (We had this in the past and removed it, but we
could add something again once it's clear exactly what is needed.)
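
To make the above concrete, this is roughly the shape of thing being
discussed: a PVC created in advance by some other process (sized from
whatever estimate virt-v2v could provide), plus a minimal Kubevirt
VirtualMachine of the kind an "-o kubevirt" mode might emit.  Names,
sizes and fields are illustrative guesses, not the actual output format.

# Illustrative only: not the real "-o kubevirt" output, just the shape.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guest1-disk0
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi         # would come from virt-v2v's size estimate
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: guest1
spec:
  running: false
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
          - name: disk0
            disk:
              bus: virtio   # virtio drivers were installed by virt-v2v
      volumes:
      - name: disk0
        persistentVolumeClaim:
          claimName: guest1-disk0

The VirtualMachine metadata is exactly what section (2) says must not
be lost: it has to be derived from the VMX data on the input side, not
guessed on the output side.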