Peng, Stefan,

While working on enabling virtio-fs on Kata there was some confusion about whether virtio-fs can work with VM templating. With the current setup of Kata, VM templating and virtio-fs are mutually exclusive (VM templating is enabled in qemu-lite and virtio-fs currently only with NEMU), but they will converge very soon.

An open question we had was: would they work with each other, given that VM templating turns off memory sharing for all VMs other than the first one, while virtio-fs expects guest memory to be shared and file-backed for all VMs? Can both of these features be enabled in Kata at the same time, and if so, what are we missing to get to that end point? I will open a bug once we know what needs to be worked on to make this happen.

-- Ganesh
On Tue, Jun 4, 2019 at 7:18 AM Mahalingam, Ganesh <ganesh.mahalingam@intel.com> wrote:
> While working on enabling virtio-fs on Kata there was some confusion about whether virtio-fs can work with VM templating. [...]
>
> An open question we had was: would they work with each other, given that VM templating turns off memory sharing for all VMs other than the first one, while virtio-fs expects guest memory to be shared and file-backed for all VMs?
Hi Stefan,

What happens if we specify share=on and share=off for different memory file backends? Can virtio-fs only pick up the shared one?

Thanks,
Tao
--
Into something rich and strange.
* Tao Peng (bergwolf@hyper.sh) wrote:
> [...]
> What happens if we specify share=on and share=off for different memory file backends? Can virtio-fs only pick up the shared one?
All vhost-user devices (including virtio-fs) need share=on, because that's the only way that the vhost-user process sees the writes done by qemu or the guest. I think we need it for all of RAM.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
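To make Dave's point concrete, a guest serving virtio-fs needs all of its RAM file-backed and mapped shared, so that the vhost-user backend (virtiofsd) can mmap the same pages qemu and the guest write to. A minimal sketch follows; the vhost-user-fs-pci device name and the virtiofsd socket path are assumptions based on the virtio-fs development tree, since the device was not yet in upstream qemu at the time of this thread:

    # all of guest RAM as one shared, file-backed mapping
    qemu-system-x86_64 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,share=on,mem-path=/dev/shm/guest-ram \
      -numa node,memdev=mem0 \
      -chardev socket,id=char0,path=/tmp/vhostqemu \
      -device vhost-user-fs-pci,chardev=char0,tag=myfs

With share=off the backend would only ever see a stale private copy of guest memory, which is exactly the conflict with templating discussed below.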
On Tue, Jun 4, 2019 at 5:53 PM Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
> [...]
> All vhost-user devices (including virtio-fs) need share=on, because that's the only way that the vhost-user process sees the writes done by qemu or the guest. I think we need it for all of RAM.
Hmm, vm templating needs the initial ram to be mapped private so that it can be shared safely across multiple VM instances. That does conflict with the vhost-user requirement that guest memory be accessible to a different process.

Ganesh, how about we just mark vm templating and vhost-user as conflicting with each other?

Cheers,
Tao
--
Into something rich and strange.
* Peng Tao (bergwolf@hyper.sh) wrote:
> [...]
> Hmm, vm templating needs the initial ram to be mapped private so that it can be shared safely across multiple VM instances. That does conflict with the vhost-user requirement that guest memory be accessible to a different process.
Yes, it's hard. There's a really nasty idea I had last year, https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg03055.html that *might* work for vhost-user + templating. You'd still need to hotplug the virtio-fs device after restoring from the template.
> Ganesh, how about we just mark vm templating and vhost-user as conflicting with each other?
Slightly related, but we should see if we can use QEMU's x-ignore-shared-ram flag to do the templating in qemu 4.0; I think that should do the same as the older NEMU code.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Castelino, Manohar R (manohar.r.castelino@intel.com) wrote:
> David
>
> > Slightly related, but we should see if we can use QEMU's x-ignore-shared-ram flag to do the templating in qemu 4.0; I think that should do the same as the older NEMU code.
>
> What does this "x-ignore-shared-ram" do? We have not used it in the past.
It's very similar to the trick that NEMU uses for templating. With the x-ignore-shared-ram migration capability enabled, migration will not write to the migration stream any RAM block that had the shared=on flag on the qemu command line. So you should then be able to restart from the migration stream and the existing RAM image.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
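On the source (template) side, that capability is driven from the monitor. A sketch of the sequence, using the x-ignore-shared name the capability actually landed under in qemu 4.0 (the spelling the later messages in this thread use):

    # on the template VM's HMP monitor
    (qemu) migrate_set_capability x-ignore-shared on
    (qemu) migrate "exec:cat>/tmpfs/state"

The resulting stream then contains device state but no pages for the shared RAM blocks; those live on in the backing file.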
> It's very similar to the trick that NEMU uses for templating. With the x-ignore-shared-ram migration capability enabled, migration will not write to the migration stream any RAM block that had the shared=on flag on the qemu command line. So you should then be able to restart from the migration stream and the existing RAM image.
So does it mean we can drop our vm-templating patches and move to using "x-ignore-shared-ram" on QEMU 4.0?

Today we only need two patches, which will come down to a single patch; that would bring us closer to upstream QEMU 4.0, which would be ideal.
https://github.com/kata-containers/packaging/blob/master/qemu/patches/4.0.x/...
https://github.com/kata-containers/packaging/blob/master/qemu/patches/4.0.x/...
* Castelino, Manohar R (manohar.r.castelino@intel.com) wrote:
> > [...]
> So does it mean we can drop our vm-templating patches and move to using "x-ignore-shared-ram" on QEMU 4.0?
>
> Today we only need two patches, which will come down to a single patch; that would bring us closer to upstream QEMU 4.0, which would be ideal.
> https://github.com/kata-containers/packaging/blob/master/qemu/patches/4.0.x/...
> https://github.com/kata-containers/packaging/blob/master/qemu/patches/4.0.x/...
Yes, I'm hoping that with 4.0 you can avoid 0002 - but I've not tried it; I'd be interested in your results.

Dave
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Thu, Jun 6, 2019 at 1:33 AM Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
> [...]
> Yes, I'm hoping that with 4.0 you can avoid 0002 - but I've not tried it; I'd be interested in your results.
Hi Dave,

I gave it a try and failed to do vm templating with x-ignore-shared. One key difference between x-ignore-shared and Lai's migrate-bypass-shared patch is that Lai's patch doesn't verify shared memory blocks upon ram load. OTOH x-ignore-shared relies on the ram share property to reconstruct the destination guest memory. In kata, we make use of the fact that destination ram can be private to implement the vm templating feature, so that multiple new guests can share-map the same template VM memory privately (memory-backend-file share=off).

The way we implement vm templating in kata is:

1. Start the template VM:

    qemu-system-x86 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,share=on,mem-path=/tmpfs/template-memory \
      -numa node,memdev=mem0

2. Stop the template VM, set the migration bypass-shared-memory capability, migrate "exec:cat>/tmpfs/state", quit it (see the monitor sketch after this message)
3. Start the target VM:

    qemu-system-x86 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,share=off,mem-path=/tmpfs/template-memory \
      -numa node,memdev=mem0 \
      -incoming "exec:cat /tmpfs/template-state"

4. Start more target VMs as in 3

There are some major differences between the above and the example given by the x-ignore-shared patchset[1]:
1. on the target vm, memory-backend-file is mapped privately (share=off)
2. no migration capability is set on the target vm
3. it is possible to create multiple target VMs based on the same template VM

I have made some hacky changes to make x-ignore-shared work for vm templating, mostly by reverting the commit "migration: Add capabilities validation" and reconstructing the destination VM ram without using the migration share capability or the ram share property[2].

So I want to ask your opinion on how to make vm templating work. Shall we change qemu to load ram without the x-ignore-shared capability? Or add another capability to implement a similar feature alongside x-ignore-shared?

[1] https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg04332.html
[2] https://github.com/bergwolf/qemu/commits/vm-templating

Cheers,
Tao
--
Into something rich and strange.
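For step 2 above, the monitor sequence would be roughly as follows. This is a sketch only: bypass-shared-memory is the capability name from Lai's out-of-tree patch carried in the kata qemu patches, not an upstream qemu capability, so the exact spelling depends on that patch:

    # on the template VM's HMP monitor
    (qemu) stop
    (qemu) migrate_set_capability bypass-shared-memory on
    (qemu) migrate "exec:cat>/tmpfs/state"
    (qemu) quit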
* Peng Tao (bergwolf@hyper.sh) wrote:
> [...]
> Hi Dave,
>
> I gave it a try and failed to do vm templating with x-ignore-shared. One key difference between x-ignore-shared and Lai's migrate-bypass-shared patch is that Lai's patch doesn't verify shared memory blocks upon ram load. OTOH x-ignore-shared relies on the ram share property to reconstruct the destination guest memory. In kata, we make use of the fact that destination ram can be private to implement the vm templating feature, so that multiple new guests can share-map the same template VM memory privately (memory-backend-file share=off).
(Copying in Yury who wrote those patches).
> [...]
> So I want to ask your opinion on how to make vm templating work. Shall we change qemu to load ram without the x-ignore-shared capability? Or add another capability to implement a similar feature alongside x-ignore-shared?
Can we first try a different hack: if you set the ignore shared on the destination as well as the source, what happens? I guess it'll hit one of the errors in ram_load? Which one?

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Thu, Jun 13, 2019 at 7:09 PM Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
> [...]
> Can we first try a different hack: if you set the ignore shared on the destination as well as the source, what happens? I guess it'll hit one of the errors in ram_load? Which one?
With vanilla 4.0? I got:

    qemu-system-x86_64: RAM block mem should be migrated
    qemu-system-x86_64: error while loading state for instance 0x0 of device 'ram'
    qemu-system-x86_64: load of migration failed: Invalid argument

which matches:

    if (migrate_ignore_shared()) {
        hwaddr addr = qemu_get_be64(f);
        bool ignored = qemu_get_byte(f);
        if (ignored != ramblock_is_ignored(block)) {
            error_report("RAM block %s should %s be migrated",
                         id, ignored ? "" : "not");
            ret = -EINVAL;

If I remove the check it succeeds. If we go this way, there is no need to pass shared ram states during migration, since this is the only place they are used.

Cheers,
Tao
--
Into something rich and strange.
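For reference, the removal Tao describes would look roughly like the following against the ram_load() hunk quoted above. This is a sketch only: the surrounding qemu 4.0 code (including a later GPA check that still uses addr) is elided, and the addr/ignored fields still have to be consumed from the stream even once the check is gone:

    --- a/migration/ram.c
    +++ b/migration/ram.c
                     if (migrate_ignore_shared()) {
                         hwaddr addr = qemu_get_be64(f);
                         bool ignored = qemu_get_byte(f);
    -                    if (ignored != ramblock_is_ignored(block)) {
    -                        error_report("RAM block %s should %s be migrated",
    -                                     id, ignored ? "" : "not");
    -                        ret = -EINVAL;
    -                    }
    +                    /* A templating destination restores an ignored block
    +                     * with share=off, so don't require the destination's
    +                     * share= setting to match the source's. */
    +                    (void)ignored;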
On Thu, Jun 13, 2019 at 8:17 PM Peng Tao <bergwolf@hyper.sh> wrote:
> [...]
> If I remove the check it succeeds.
By success, I meant the following steps:

1. Start the template VM:

    qemu-system-x86 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,share=on,mem-path=/tmpfs/template-memory \
      -numa node,memdev=mem0

2. Stop the template VM, set the migration x-ignore-shared capability, migrate "exec:cat>/tmpfs/state", quit it
3. Start the target VM:

    qemu-system-x86 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,share=off,mem-path=/tmpfs/template-memory \
      -numa node,memdev=mem0 \
      -incoming defer

4. Connect to the target VM qmp, set the migration x-ignore-shared capability, migrate_incoming "exec:cat /tmpfs/state" (see the monitor sketch after this message)

-Tao
--
Into something rich and strange.
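Step 4 corresponds to roughly this monitor sequence on the destination (a sketch; migrate_incoming is the HMP counterpart of the migrate-incoming QMP command, and it requires the VM to have been started with -incoming defer):

    # on the target VM's monitor
    (qemu) migrate_set_capability x-ignore-shared on
    (qemu) migrate_incoming "exec:cat /tmpfs/state"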
* Peng Tao (bergwolf@hyper.sh) wrote:
> [...]
> With vanilla 4.0? I got:
>
> qemu-system-x86_64: RAM block mem should be migrated
> qemu-system-x86_64: error while loading state for instance 0x0 of device 'ram'
> qemu-system-x86_64: load of migration failed: Invalid argument
>
> [...]
>
> If I remove the check it succeeds.
Great.
> If we go this way, there is no need to pass shared ram states during migration, since this is the only place they are used.
Well, we could remove them (since the flag is still an x- we're allowed to break compatibility).

Yury: What do you think? In this use case they don't run shared on the destination.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Hi,

13.06.2019, 15:44, "Dr. David Alan Gilbert" <dgilbert@redhat.com>:
> [...]
> Well, we could remove them (since the flag is still an x- we're allowed to break compatibility).
>
> Yury: What do you think? In this use case they don't run shared on the destination.
I'm not sure I understand this use case correctly. If you don't send memory and don't share it, how does it migrate? If memory is not shared, then the target will use obsolete RAM from disk, right?

I thought that ignoring all the shared RAM blocks was enough, but in this case it might be better to mark the migratable memory backends explicitly.
Regards, Yury
* Yury Kotov (yury-kotov@yandex-team.ru) wrote:
> [...]
> I'm not sure I understand this use case correctly. If you don't send memory and don't share it, how does it migrate? If memory is not shared, then the target will use obsolete RAM from disk, right?
The trick here is that they set it shareable initially when they create a template; then they do the migrate with the capability set, so now you've got a migration image without the RAM, and you've got a template RAM file. Now you start your VM from the template but you DON'T set the shared flag - so you get the old RAM as expected and the VM starts quickly, but it doesn't write it back to the file; so now you can start another VM using the same template quickly as well.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
13.06.2019, 16:46, "Dr. David Alan Gilbert" <dgilbert@redhat.com>:
> [...]
> The trick here is that they set it shareable initially when they create a template; then they do the migrate with the capability set, so now you've got a migration image without the RAM, and you've got a template RAM file. Now you start your VM from the template but you DON'T set the shared flag - so you get the old RAM as expected and the VM starts quickly, but it doesn't write it back to the file; so now you can start another VM using the same template quickly as well.
Oh, I get it, thanks. So, as I said, one of the possible solutions is to add an option for memory-backend-file to mark an area of RAM that should not be migrated, instead of checking whether it is shared or not.
Regards, Yury
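What Yury suggests would presumably look something like a new backend property, decoupling "don't migrate this block" from the share= setting. A sketch of how that might read on the command line; the x-no-migrate name here is purely hypothetical, not an existing qemu option:

    # hypothetical: mark the backend as skipped by migration explicitly,
    # independent of its share= setting
    qemu-system-x86_64 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,share=off,x-no-migrate=on,mem-path=/tmpfs/template-memory \
      -numa node,memdev=mem0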
* Yury Kotov (yury-kotov@yandex-team.ru) wrote:
> [...]
> Oh, I get it, thanks. So, as I said, one of the possible solutions is to add an option for memory-backend-file to mark an area of RAM that should not be migrated, instead of checking whether it is shared or not.
I think I'd rather just remove the check - the check is there to stop someone doing something silly; this case shows it's valid.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Fri, Jun 14, 2019 at 2:46 AM Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
> [...]
> I think I'd rather just remove the check - the check is there to stop someone doing something silly; this case shows it's valid.
Thank you both for the suggestions. Let me cook up a patch and make sure it works for kata.

Cheers,
Tao
--
Into something rich and strange.
-----Original Message-----
From: Peng Tao [mailto:bergwolf@hyper.sh]
Sent: Wednesday, June 5, 2019 12:06 AM
To: Dr. David Alan Gilbert <dgilbert@redhat.com>; Mahalingam, Ganesh <ganesh.mahalingam@intel.com>
Cc: kata-dev@lists.katacontainers.io
Subject: Re: [kata-dev] virtio-fs + VM Templating
> [...]
> Ganesh, how about we just mark vm templating and vhost-user as conflicting with each other?
Currently that is what we have done. If you enable VM templating, virtio-fs cannot be enabled, and scheduling of the pod/container should fail. Will check the code to confirm.
Participants (6): Castelino, Manohar R; Dr. David Alan Gilbert; Mahalingam, Ganesh; Peng Tao; Tao Peng; Yury Kotov