[kata-dev] virtio-fs + VM Templating

Dr. David Alan Gilbert dgilbert at redhat.com
Thu Jun 13 18:45:38 UTC 2019


* Yury Kotov (yury-kotov at yandex-team.ru) wrote:
> 13.06.2019, 16:46, "Dr. David Alan Gilbert" <dgilbert at redhat.com>:
> > * Yury Kotov (yury-kotov at yandex-team.ru) wrote:
> >>  Hi,
> >>
> >>  13.06.2019, 15:44, "Dr. David Alan Gilbert" <dgilbert at redhat.com>:
> >>  > * Peng Tao (bergwolf at hyper.sh) wrote:
> >>  >>  On Thu, Jun 13, 2019 at 7:09 PM Dr. David Alan Gilbert
> >>  >>  <dgilbert at redhat.com> wrote:
> >>  >>  >
> >>  >>  > * Peng Tao (bergwolf at hyper.sh) wrote:
> >>  >>  > > On Thu, Jun 6, 2019 at 1:33 AM Dr. David Alan Gilbert
> >>  >>  > > <dgilbert at redhat.com> wrote:
> >>  >>  > > >
> >>  >>  > > > * Castelino, Manohar R (manohar.r.castelino at intel.com) wrote:
> >>  >>  > > > > > It's very similar to the trick that NEMU uses for templating.
> >>  >>  > > > > > With the x-ignore-shared-ram migration capability enabled, migration will
> >>  >>  > > > > > not write to the migration stream any RAM block that had the
> >>  >>  > > > > > shared=on flag on the qemu commandline. So you should then be able
> >>  >>  > > > > > to restart from the migration stream and existing RAM image.
> >>  >>  > > > >
> >>  >>  > > > > So does it mean we can drop our vm-templating patches and move to using "x-ignore-shared-ram" on QEMU 4.0?
> >>  >>  > > > >
> >>  >>  > > > > Today we only need two patches; if that comes down to a single patch, it would bring us closer to upstream QEMU 4.0, which would be ideal.
> >>  >>  > > > > https://github.com/kata-containers/packaging/blob/master/qemu/patches/4.0.x/0001-9p-removing-coroutines-of-9p-to-increase-the-I-O-per.patch
> >>  >>  > > > > https://github.com/kata-containers/packaging/blob/master/qemu/patches/4.0.x/0002-migration-add-capability-to-bypass-the-shared-memory.patch
> >>  >>  > > >
> >>  >>  > > > Yes, I'm hoping that with 4.0 you can avoid 0002 - but I've not tried
> >>  >>  > > > it; I'd be interested in your results.
> >>  >>  > > >
> >>  >>  > > Hi Dave,
> >>  >>  > >
> >>  >>  > > I gave it a try and failed to do vm templating with x-ignore-shared.
> >>  >>  > > One key difference between x-ignore-shared and Lai's
> >>  >>  > > migrate-bypass-shared patch is that Lai's patch doesn't verify the
> >>  >>  > > shared memory block upon RAM load. OTOH, x-ignore-shared relies on
> >>  >>  > > the RAM share property to reconstruct the destination guest memory.
> >>  >>  > > In Kata, we make use of the fact that destination RAM can be private
> >>  >>  > > to implement the VM templating feature, so that multiple new guests
> >>  >>  > > can map the same template VM memory privately (memory-backend-file
> >>  >>  > > share=off).
> >>  >>  >
> >>  >>  > (Copying in Yury who wrote those patches).
> >>  >>  >
> >>  >>  > > The way we implement vm templating in kata is:
> >>  >>  > > 1. Start the template VM:
> >>  >>  > > qemu-system-x86 -m 2G \
> >>  >>  > >   -object memory-backend-file,id=mem0,size=2G,share=on,mem-path=/tmpfs/template-memory \
> >>  >>  > >   -numa node,memdev=mem0
> >>  >>  > > 2. Stop the template VM, set the migration bypass-shared-memory
> >>  >>  > > capability, migrate with exec:cat>/tmpfs/template-state, then quit it
> >>  >>  > > (monitor commands sketched below)
> >>  >>  > > 3. Start target VM:
> >>  >>  > > qemu-system-x86 -m 2G \
> >>  >>  > >   -object memory-backend-file,id=mem0,size=2G,share=off,mem-path=/tmpfs/template-memory \
> >>  >>  > >   -numa node,memdev=mem0 \
> >>  >>  > >   -incoming "exec:cat /tmpfs/template-state"
> >>  >>  > > 4. Start more target VMs like 3
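> >>  >>  > >
> >>  >>  > > For step 2, a rough sketch of the monitor commands (assuming the HMP
> >>  >>  > > monitor and the capability name from the out-of-tree 0002 patch):
> >>  >>  > >
> >>  >>  > >   (qemu) stop
> >>  >>  > >   (qemu) migrate_set_capability bypass-shared-memory on
> >>  >>  > >   (qemu) migrate "exec:cat > /tmpfs/template-state"
> >>  >>  > >   (qemu) quit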
> >>  >>  > >
> >>  >>  > > There are some major differences between the above and the example
> >>  >>  > > given by the x-ignore-shared patchset[1]:
> >>  >>  > > 1. on target vm, memory-backend-file is mapped privately (share=off)
> >>  >>  > > 2. no migration capability is set on the target vm
> >>  >>  > > 3. it is possible to create multiple target VMs based on the same template VM
> >>  >>  > >
> >>  >>  > > I have made some hacky changes to make x-ignore-shared work for VM
> >>  >>  > > templating, mostly by reverting the commit "migration: Add
> >>  >>  > > capabilities validation" and reconstructing destination VM RAM
> >>  >>  > > without using the migration share capability or the RAM share
> >>  >>  > > property[2].
> >>  >>  > >
> >>  >>  > > So I want to ask your opinion on how to make VM templating work.
> >>  >>  > > Shall we change QEMU to load RAM without the x-ignore-shared
> >>  >>  > > capability? Or add another capability to implement a similar feature
> >>  >>  > > alongside x-ignore-shared?
> >>  >>  >
> >>  >>  > Can we first try a different hack: if you set the ignore-shared
> >>  >>  > capability on the destination as well as the source, what happens? I
> >>  >>  > guess it'll hit one of the errors in ram_load? Which one?
> >>  >>  >
> >>  >>  With vanilla 4.0? I got:
> >>  >>   qemu-system-x86_64: RAM block mem should be migrated
> >>  >>   qemu-system-x86_64: error while loading state for instance 0x0 of device 'ram'
> >>  >>   qemu-system-x86_64: load of migration failed: Invalid argument
> >>  >>
> >>  >>  which matches:
> >>  >>      if (migrate_ignore_shared()) {
> >>  >>          hwaddr addr = qemu_get_be64(f);
> >>  >>          bool ignored = qemu_get_byte(f);
> >>  >>          if (ignored != ramblock_is_ignored(block)) {
> >>  >>              error_report("RAM block %s should %s be migrated",
> >>  >>                           id, ignored ? "" : "not");
> >>  >>              ret = -EINVAL;
> >>  >>
> >>  >>  If I remove the check it succeeds.
> >>  >
> >>  > Great.
> >>  >
> >>  >>  If we go this way, there is no need
> >>  >>  to pass shared ram states during migration since this is the only
> >>  >>  place they are used.
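> >>  >>
> >>  >>  (The matching save-side hunk in ram_save_setup() is roughly:
> >>  >>
> >>  >>      if (migrate_ignore_shared()) {
> >>  >>          qemu_put_be64(f, block->mr->addr);
> >>  >>          qemu_put_byte(f, ramblock_is_ignored(block));
> >>  >>      }
> >>  >>
> >>  >>  so dropping the load-side check would leave those two fields unused.)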
> >>  >
> >>  > Well, we could remove them (since the flag is still an x- we're allowed
> >>  > to break compatibility).
> >>  >
> >>  > Yury: What do you think?
> >>  >       In this use case they don't run shared on the destination.
> >>  >
> >>
> >>  I'm not sure I understand this use case correctly. If you don't send the
> >>  memory and don't share it, how does it migrate? If the memory is not shared,
> >>  then the target will use obsolete RAM from disk, right?
> >
> > The trick here is that they set it shareable initially when they create
> > a template, then they do the migrate with the capability set; so now
> > you've got a migration image without the RAM, and you've got a template
> > RAM file.
> > Now you start your VM from the template but you DON'T set the shared
> > flag - so you get the old RAM as expected and the VM starts quickly, but
> > it doesn't write it back to the file; so now you can start another
> > VM using the same template quickly as well.
> >
> 
> Oh, I get it, thanks. So, as I said, one of the possible solutions is to add an
> option for the memory-backend-file to mark the area of RAM that should not be
> migrated, instead of checking whether it is shared or not.
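> E.g. (the migratable=off property here is purely hypothetical, nothing
> like it exists today):
>
>   -object memory-backend-file,id=mem0,size=2G,share=off,migratable=off,mem-path=/tmpfs/template-memory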

I think I'd rather just remove the check - the check is there to stop
someone doing something silly; this case shows it's valid.
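
Roughly (an untested sketch against the 4.0 hunk Tao quoted above), the
load side would keep consuming the two fields so the stream stays in
sync, but stop enforcing the match:

    if (migrate_ignore_shared()) {
        /* The source still writes these fields; read them to keep the
         * stream in sync, but don't insist that the destination block
         * is also marked shared/ignored. */
        hwaddr addr = qemu_get_be64(f);
        bool ignored = qemu_get_byte(f);
        (void)addr;
        (void)ignored;
    }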

Dave

> > Dave
> >
> >>  I thought that ignoring all the shared RAM blocks was enough, but in this case
> >>  it might be better to mark the migratable memory backends explicitly.
> >>
> >>  > Dave
> >>  >
> >>  >>  Cheers,
> >>  >>  Tao
> >>  >>  --
> >>  >>  Into something rich and strange.
> >>  > --
> >>  > Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK
> >>
> >>  Regards,
> >>  Yury
> > --
> > Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK
> 
> Regards,
> Yury
--
Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK


