[Announce] virtio-fs released with Kata Containers support
Dear Kata Containers Community,

I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.

Unlike virtio-9p and NFS over AF_VSOCK, virtio-fs aims to take advantage of the co-location between the virtual machine and the hypervisor in order to achieve local file system semantics and improve performance. For example, it can use Linux Direct Access (DAX) to access file contents directly from the host page cache. This reduces communication with the file server and avoids duplicating data into each sandbox VM. It also means that mmap MAP_SHARED on a shared volume is coherent between sandbox VMs.

The Linux kernel code (including performance numbers) has been posted here: https://marc.info/?l=linux-fsdevel&m=154446243324255&w=2

Kata Containers integration is already available so you can benchmark and test virtio-fs. The project is under active development and we still expect to make significant changes based on feedback and collaboration. We hope virtio-fs is interesting as a next step in overcoming virtio-9p's performance and limitations. Let us know how it performs!

You can read more about virtio-fs here: https://virtio-fs.gitlab.io/
The Kata HowTo is here: https://virtio-fs.gitlab.io/howto-kata.html

The Kata runtime and agent changes are fairly straightforward and comparable to virtio-9p. There are several other code changes due to using a Fedora initramfs, systemd, and modular kernel. These are not essential to virtio-fs but are simply how I preferred to develop and test.

The FAQ on the virtio-fs website explains the main technical features. Please let me know if you have any questions or need help getting it running! I'm also on #kata-dev IRC if you need a hand.

Stefan
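As a concrete illustration of the mmap MAP_SHARED point above, here is a minimal guest-side sketch of the usage pattern that benefits: a file on the shared volume is mapped MAP_SHARED and updated in place, and with DAX the mapping is backed by the host page cache, so another sandbox VM mapping the same file observes the update. The file path is hypothetical, the file is assumed to already exist and be at least one page long, and error handling is kept minimal.

/* map_shared.c - minimal sketch: map a file from a shared volume with
 * MAP_SHARED and update it in place. With virtio-fs DAX the mapping is
 * backed by the host page cache, so another sandbox VM mapping the same
 * file sees the same bytes. The path below is purely illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/run/shared-volume/counter";  /* hypothetical shared file */
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    size_t len = 4096;  /* assumes the file is at least one page long */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Store through the shared mapping; other mappers of this file
     * (including ones in other sandbox VMs) observe the update. */
    strcpy(p, "hello from sandbox A");
    msync(p, len, MS_SYNC);

    munmap(p, len);
    close(fd);
    return 0;
}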
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Hi Stefan,
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
Unlike virtio-9p and NFS over AF_VSOCK, virtio-fs aims to take advantage of the co-location between the virtual machine and the hypervisor in order to achieve local file system semantics and improve performance.
For example, it can use Linux Direct Access (DAX) to access file contents directly from the host page cache. This reduces communication with the file server and avoids duplicating data into each sandbox VM. It also means that mmap MAP_SHARED on a shared volume is coherent between sandbox VMs.
The Linux kernel code (including performance numbers) has been posted here: https://marc.info/?l=linux-fsdevel&m=154446243324255&w=2
Excellent! Both performance and POSIX compliance look just as promised. It's a good replacement for 9pfs and indeed very suitable for kata containers.
Kata Containers integration is already available so you can benchmark and test virtio-fs. The project is under active development and we still expect to make significant changes based on feedback and collaboration.
We hope virtio-fs is interesting as a next step in overcoming virtio-9p's performance and limitations. Let us know how it performs!
You can read more about virtio-fs here: https://virtio-fs.gitlab.io/
The Kata HowTo is here: https://virtio-fs.gitlab.io/howto-kata.html
The Kata runtime and agent changes are fairly straightforward and comparable to virtio-9p. There are several other code changes due to using a Fedora initramfs, systemd, and modular kernel. These are not essential to virtio-fs but are simply how I preferred to develop and test.
Could you please send these changes on github so that we can properly review and get them merged? Please separate preparation patches as they can be merged before virtio-fs is mergeable. Thanks!
The FAQ on the virtio-fs website explains the main technical features. Please let me know if you have any questions or need help getting it running! I'm also on #kata-dev IRC if you need a hand.
I tried to follow the instructions on the virtio-fs website and got stuck at cloning https://gitlab.com/virtio-fs/qemu.git. It seems to be private atm? Cheers, Tao -- bergwolf@hyper.sh
* Tao Peng (bergwolf@hyper.sh) wrote:
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Hi Stefan,
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
Unlike virtio-9p and NFS over AF_VSOCK, virtio-fs aims to take advantage of the co-location between the virtual machine and the hypervisor in order to achieve local file system semantics and improve performance.
For example, it can use Linux Direct Access (DAX) to access file contents directly from the host page cache. This reduces communication with the file server and avoids duplicating data into each sandbox VM. It also means that mmap MAP_SHARED on a shared volume is coherent between sandbox VMs.
The Linux kernel code (including performance numbers) has been posted here: https://marc.info/?l=linux-fsdevel&m=154446243324255&w=2
Excellent! Both performance and POSIX compliance look just as promised. It's a good replacement for 9pfs and indeed very suitable for kata containers.
Kata Containers integration is already available so you can benchmark and test virtio-fs. The project is under active development and we still expect to make significant changes based on feedback and collaboration.
We hope virtio-fs is interesting as a next step in overcoming virtio-9p's performance and limitations. Let us know how it performs!
You can read more about virtio-fs here: https://virtio-fs.gitlab.io/
The Kata HowTo is here: https://virtio-fs.gitlab.io/howto-kata.html
The Kata runtime and agent changes are fairly straightforward and comparable to virtio-9p. There are several other code changes due to using a Fedora initramfs, systemd, and modular kernel. These are not essential to virtio-fs but are simply how I preferred to develop and test.
Could you please send these changes on github so that we can properly review and get them merged? Please separate preparation patches as they can be merged before virtio-fs is mergeable. Thanks!
The FAQ on the virtio-fs website explains the main technical features. Please let me know if you have any questions or need help getting it running! I'm also on #kata-dev IRC if you need a hand.
I tried to follow the instructions on the virtio-fs website and got stuck at cloning https://gitlab.com/virtio-fs/qemu.git. It seems to be private atm?
Oops, missed that one; fixed! Dave
Cheers, Tao -- bergwolf@hyper.sh

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Tue, Dec 11, 2018 at 03:23:03PM +0800, Tao Peng wrote:
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Kata Containers integration is already available so you can benchmark and test virtio-fs. The project is under active development and we still expect to make significant changes based on feedback and collaboration.
We hope virtio-fs is interesting as a next step in overcoming virtio-9p's performance and limitations. Let us know how it performs!
You can read more about virtio-fs here: https://virtio-fs.gitlab.io/
The Kata HowTo is here: https://virtio-fs.gitlab.io/howto-kata.html
The Kata runtime and agent changes are fairly straightforward and comparable to virtio-9p. There are several other code changes due to using a Fedora initramfs, systemd, and modular kernel. These are not essential to virtio-fs but are simply how I preferred to develop and test.
Could you please send these changes on github so that we can properly review and get them merged? Please separate preparation patches as they can be merged before virtio-fs is mergeable. Thanks!
Sure, I'll raise them on GitHub. Stefan
Hi Stefan,

Very good news. The data looks much better than 9pfs. I have a couple of questions; it would be great if they could be answered:

1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
2) Whether does it impact live-migration of guest os ?

Anyway, it's a major step. Thank you, Stefan.

Regards,
Qixuan Wu

On 2018/12/11 AM 3:25, Stefan Hajnoczi wrote:
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
Unlike virtio-9p and NFS over AF_VSOCK, virtio-fs aims to take advantage of the co-location between the virtual machine and the hypervisor in order to achieve local file system semantics and improve performance.
For example, it can use Linux Direct Access (DAX) to access file contents directly from the host page cache. This reduces communication with the file server and avoids duplicating data into each sandbox VM. It also means that mmap MAP_SHARED on a shared volume is coherent between sandbox VMs.
The Linux kernel code (including performance numbers) has been posted here: https://marc.info/?l=linux-fsdevel&m=154446243324255&w=2 Kata Containers integration is already available so you can benchmark and test virtio-fs. The project is under active development and we still expect to make significant changes based on feedback and collaboration.
We hope virtio-fs is interesting as a next step in overcoming virtio-9p's performance and limitations. Let us know how it performs!
You can read more about virtio-fs here: https://virtio-fs.gitlab.io/
The Kata HowTo is here: https://virtio-fs.gitlab.io/howto-kata.html
The Kata runtime and agent changes are fairly straightforward and comparable to virtio-9p. There are several other code changes due to using a Fedora initramfs, systemd, and modular kernel. These are not essential to virtio-fs but are simply how I preferred to develop and test.
The FAQ on the virtio-fs website explains the main technical features. Please let me know if you have any questions or need help getting it running! I'm also on #kata-dev IRC if you need a hand.
Stefan
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote:
1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
Not yet. Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw. In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
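For reference, the guest-side access pattern that fio direct=1 exercises is plain O_DIRECT I/O with an aligned buffer, so every request bypasses the guest page cache and has to be serviced by the storage path (or, with virtio-fs, by the host). A minimal sketch, with a purely illustrative file path and block size:

/* direct_read.c - minimal sketch of page-cache-bypassing I/O, the pattern
 * that fio direct=1 (and QEMU -drive cache=none on the host side) exercises.
 * O_DIRECT requires a suitably aligned buffer and transfer size. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/test/datafile";  /* illustrative target file */
    size_t block = 4096;                      /* typical alignment/block size */

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *buf = NULL;
    if (posix_memalign(&buf, block, block) != 0) {
        perror("posix_memalign");
        close(fd);
        return 1;
    }

    /* Every read here misses the guest page cache and is serviced by the
     * underlying storage path, which is what the benchmark wants to measure. */
    ssize_t n = pread(fd, buf, block, 0);
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes with O_DIRECT\n", n);

    free(buf);
    close(fd);
    return 0;
}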
2) Whether does it impact live-migration of guest os ?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases? Stefan
Stefan - No, there isn’t a live migration requirement for Kata (sorry for the top post). Eric
On Dec 17, 2018, at 6:27 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote: 1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
Not yet.
Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw.
In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
2) Whether does it impact live-migration of guest os ?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases?
Stefan
Hi Stefan, On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
One more question, is there plan to support hotplug a virtio-fs device to the guest? Or is it already supported? Thanks, Tao -- bergwolf@hyper.sh
Stefan, Eric,

There isn't live migration requirement so far, but, I will take this as a potential requirement when Kata is growing more powerful. In my mind, this is rational as some of us are running kata on bare metal, in this scene, we don't have an infrastructure software such as OpenStack to guarantee the lifecycle of workload.

Virtio-fs is in RFC state, it could be OK as long as it doesn't have native gap for supporting live migration, and I will be glad to see it being listed in some roadmap.

By the way, really nice work! We finally get a better option against 9pfs :-) Thanks!

-----Original Message-----
From: Ernst, Eric [mailto:eric.ernst@intel.com]
Sent: December 17, 2018 22:30
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: sweil@redhat.com; Qixuan Wu <qixuan.wu@linux.alibaba.com>; Graham Whaley <graham.whaley@gmail.com>; miklos@szeredi.hu; kata-dev@lists.katacontainers.io; swhiteho@redhat.com; vgoyal@redhat.com
Subject: Re: [kata-dev] [Announce] virtio-fs released with Kata Containers support

Stefan -

No, there isn't a live migration requirement for Kata (sorry for the top post).

Eric
On Dec 17, 2018, at 6:27 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote: 1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
Not yet.
Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw.
In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
2) Whether does it impact live-migration of guest os ?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases?
Stefan
Hi Zhangwei, Could you go into more detail on the use case for live migration on bare metal? Thanks, EJ On Mon, Dec 17, 2018 at 6:20 PM zhangwei (CR) <zhangwei555@huawei.com> wrote:
Stefan, Eric,
There isn't live migration requirement so far, but, I will take this as a potential requirement when Kata is growing more powerful. In my mind, this is rational as some of us are running kata on bare metal, in this scene, we don't have an infrastructure software such as OpenStack to guarantee the lifecycle of workload.
Virtio-fs is in RFC state, it could be OK as long as it doesn't have native gap for supporting live migration, and I will be glad to see it being listed in some roadmap.
By the way, really nice work! We finally get a better option against 9pfs :-) Thanks!
-----Original Message-----
From: Ernst, Eric [mailto:eric.ernst@intel.com]
Sent: December 17, 2018 22:30
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: sweil@redhat.com; Qixuan Wu <qixuan.wu@linux.alibaba.com>; Graham Whaley <graham.whaley@gmail.com>; miklos@szeredi.hu; kata-dev@lists.katacontainers.io; swhiteho@redhat.com; vgoyal@redhat.com
Subject: Re: [kata-dev] [Announce] virtio-fs released with Kata Containers support
Stefan -
No, there isn’t a live migration requirement for Kata (sorry for the top post).
Eric
On Dec 17, 2018, at 6:27 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote: 1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
Not yet.
Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw.
In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
2) Whether does it impact live-migration of guest os ?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases?
Stefan
Hi EJ,

Kata Container is one tiny VM per POD, you can choose to run k8s+kata in another VM provisioned by OpenStack, in this case, we run into the nested virtualization scenario. It works well but brings some unnecessary overhead from nested VM usage.

To avoid this kind of overhead, we can also choose to run K8S cluster on bare metal, which means every worker/node of K8S is bare metal machine instead of VM. By doing this, K8S becomes the lowest level of infrastructure and should provide live migration capability similar to OpenStack VM scenarios.

Live migration could be useful in this scene as every POD is a VM running on bare metal now, we need a reliable way to keep workload alive when host machine needs reboot(let's say, fixing some CVEs).

-Wei
Hi Wei, Got it. Though given your workloads need to be able to survive a hardware failure anyway, it seems like focusing all your effort on having your workloads support using K8s primitives to drain nodes and have them safely respawn on another physical host might be a better option. Thanks for the quick response, EJ On Mon, Dec 17, 2018 at 11:28 PM zhangwei (CR) <zhangwei555@huawei.com> wrote:
Hi EJ,
Kata Container is one tiny VM per POD, you can choose to run k8s+kata in another VM provisioned by OpenStack, in this case, we run into the nested virtualization scenario. It works well but brings some unnecessary overhead from nested VM usage.
To avoid this kind of overhead, we can also choose to run K8S cluster on bare metal, which means every worker/node of K8S is bare metal machine instead of VM. By doing this, K8S becomes the lowest level of infrastructure and should provide live migration capability similar to OpenStack VM scenarios.
Live migration could be useful in this scene as every POD is a VM running on bare metal now, we need a reliable way to keep workload alive when host machine needs reboot(let’s say, fixing some CVEs).
-Wei
From: EJ Campbell [mailto:ejc3@oath.com]
Sent: December 18, 2018 14:11
To: zhangwei (CR) <zhangwei555@huawei.com>
Cc: Ernst, Eric <eric.ernst@intel.com>; Stefan Hajnoczi <stefanha@redhat.com>; sweil@redhat.com; Qixuan Wu <qixuan.wu@linux.alibaba.com>; Graham Whaley <graham.whaley@gmail.com>; miklos@szeredi.hu; kata-dev@lists.katacontainers.io; swhiteho@redhat.com; vgoyal@redhat.com
Subject: Re: [kata-dev] Re: [Announce] virtio-fs released with Kata Containers support
Hi Zhangwei,
Could you go into more detail on the use case for live migration on bare metal?
Thanks,
EJ
On Mon, Dec 17, 2018 at 6:20 PM zhangwei (CR) <zhangwei555@huawei.com> wrote:
Stefan, Eric,
There isn't live migration requirement so far, but, I will take this as a potential requirement when Kata is growing more powerful. In my mind, this is rational as some of us are running kata on bare metal, in this scene, we don't have an infrastructure software such as OpenStack to guarantee the lifecycle of workload.
Virtio-fs is in RFC state, it could be OK as long as it doesn't have native gap for supporting live migration, and I will be glad to see it being listed in some roadmap.
By the way, really nice work! We finally get a better option against 9pfs :-) Thanks!
-----Original Message-----
From: Ernst, Eric [mailto:eric.ernst@intel.com]
Sent: December 17, 2018 22:30
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: sweil@redhat.com; Qixuan Wu <qixuan.wu@linux.alibaba.com>; Graham Whaley <graham.whaley@gmail.com>; miklos@szeredi.hu; kata-dev@lists.katacontainers.io; swhiteho@redhat.com; vgoyal@redhat.com
Subject: Re: [kata-dev] [Announce] virtio-fs released with Kata Containers support
Stefan -
No, there isn’t a live migration requirement for Kata (sorry for the top post).
Eric
On Dec 17, 2018, at 6:27 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote: 1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
Not yet.
Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw.
In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
2) Whether does it impact live-migration of guest os ?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases?
Stefan
EJ,
it seems like focusing all your effort on having your workloads support using K8s primitives to drain nodes and have them safely respawn on another physical host might be a better option
I can't agree more. But our users/customers always want more when you give them an option. Who can blame their customers? ☺

-Wei
Hi EJ and Wei,

First of all, I don't think there is a strong relation between bare metal and live migration.

In Kubernetes, once a Pod is assigned to a node, it won't go anywhere else in its entire life. And the pod itself should be stateless and even stateful workload should rely on permanent volumes or other global resources. Under the above assumption, any migration-like requirements should be defined by deployment or other high-level tasks definition instead of sandbox-level migration. This is the philosophy of Kubernetes.

However, I understand Wei, who is in the role of an operator just like me; he may be very happy if he were able to do some tweaking to avoid unnecessary re-schedule operations, which would at least reduce service interruption. The only question here is how much the live migration costs.

Personally, I don't think I need live migration at the current stage, but I won't reject the live-migration feature if it does not have significant side effects.

-Xu

On Tue, Dec 18, 2018 at 3:30 PM zhangwei (CR) <zhangwei555@huawei.com> wrote:
Hi EJ,
Kata Container is one tiny VM per POD, you can choose to run k8s+kata in another VM provisioned by OpenStack, in this case, we run into the nested virtualization scenario. It works well but brings some unnecessary overhead from nested VM usage.
To avoid this kind of overhead, we can also choose to run K8S cluster on bare metal, which means every worker/node of K8S is bare metal machine instead of VM. By doing this, K8S becomes the lowest level of infrastructure and should provide live migration capability similar to OpenStack VM scenarios.
Live migration could be useful in this scene as every POD is a VM running on bare metal now, we need a reliable way to keep workload alive when host machine needs reboot(let’s say, fixing some CVEs).
-Wei
From: EJ Campbell [mailto:ejc3@oath.com]
Sent: December 18, 2018 14:11
To: zhangwei (CR) <zhangwei555@huawei.com>
Cc: Ernst, Eric <eric.ernst@intel.com>; Stefan Hajnoczi <stefanha@redhat.com>; sweil@redhat.com; Qixuan Wu <qixuan.wu@linux.alibaba.com>; Graham Whaley <graham.whaley@gmail.com>; miklos@szeredi.hu; kata-dev@lists.katacontainers.io; swhiteho@redhat.com; vgoyal@redhat.com
Subject: Re: [kata-dev] Re: [Announce] virtio-fs released with Kata Containers support
Hi Zhangwei,
Could you go into more detail on the use case for live migration on bare metal?
Thanks,
EJ
On Mon, Dec 17, 2018 at 6:20 PM zhangwei (CR) <zhangwei555@huawei.com> wrote:
Stefan, Eric,
There isn't live migration requirement so far, but, I will take this as a potential requirement when Kata is growing more powerful. In my mind, this is rational as some of us are running kata on bare metal, in this scene, we don't have an infrastructure software such as OpenStack to guarantee the lifecycle of workload.
Virtio-fs is in RFC state, it could be OK as long as it doesn't have native gap for supporting live migration, and I will be glad to see it being listed in some roadmap.
By the way, really nice work! We finally get a better option against 9pfs :-) Thanks!
-----Original Message-----
From: Ernst, Eric [mailto:eric.ernst@intel.com]
Sent: December 17, 2018 22:30
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: sweil@redhat.com; Qixuan Wu <qixuan.wu@linux.alibaba.com>; Graham Whaley <graham.whaley@gmail.com>; miklos@szeredi.hu; kata-dev@lists.katacontainers.io; swhiteho@redhat.com; vgoyal@redhat.com
Subject: Re: [kata-dev] [Announce] virtio-fs released with Kata Containers support
Stefan -
No, there isn’t a live migration requirement for Kata (sorry for the top post).
Eric
On Dec 17, 2018, at 6:27 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote: 1) Did you compare the performance to the virtio-blk raw or qcow2 solution of normal virtual machine?
Not yet.
Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw.
In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
2) Whether does it impact live-migration of guest os ?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases?
Stefan
--
Xu Wang
CTO & Cofounder, Hyper
github/twitter/wechat: @gnawux
http://hyper.sh
Hyper_: Make VM run like container
Hi Xu,

I mentioned bare metal because if you run kata-containers in another VM provided by OpenStack, you can live-migrate the whole VM (the host for Kata) and bypass the Kata live-migration requirement.
In Kubernetes, once a Pod is assigned to a node, it won't go anywhere else in its entire life. And the pod itself should be stateless and even stateful workload should rely on permanent volumes or other global resources. Under the above assumption, any migration-like requirements should be defined by deployment or other high-level tasks definition instead of sandbox-level migration. This is the philosophy of Kubernetes.
I agree with you as a technical guy; this philosophy simplifies things a lot and I love simplicity. But life is not always easy: when users/customers pay enough money, the marketing guy will force me to do anything customers want. In general, I will choose to argue --> fail --> then add the new feature, or simply find a new job ☺

Permanent volumes sound good for stateful workloads, but not all workloads are containerized perfectly. When you drain the node or respawn the POD on another node, you still lose some calculation/data (how much depends on how frequently the workload writes to the persistent volume). Another use case is a workload that starts very slowly and wants to avoid restarting. I'll suggest users only run stateless workloads in the K8s cluster, but this isn't achievable. Some people are even running systemd+sshd in a container, which is worse.

-Wei
Yeah. Kata is designed to function like regular containers and thus be driven through normal Kubernetes orchestration. It does this by adding a new container somewhere else and stopping the one on the old host, a.k.a. cattle. If you want live migration of stuff in a VM, then KubeVirt is probably a better fit for your use case, as it's optimized for pets.

Thanks,
Kevin
On Tue, Dec 18, 2018 at 12:07:32AM +0800, Tao Peng wrote:
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
One more question, is there plan to support hotplug a virtio-fs device to the guest? Or is it already supported?
Yes, hotplug is planned. It's not tested yet but may already work at the QEMU level. How would you like to use hotplug at the Kata level? Stefan
On Tue, Dec 18, 2018 at 02:19:26AM +0000, zhangwei (CR) wrote:
Virtio-fs is in RFC state, it could be OK as long as it doesn't have native gap for supporting live migration, and I will be glad to see it being listed in some roadmap.
Migration is always good to have at the QEMU level, so I'm sure we'll look into it more deeply. It's tricky when sharing local file systems because the destination host must have access to the same files in order for migration to be possible. One approach is to carefully copy the files during live migration. The file system daemon must also migrate its state (e.g. open files and handles). It's non-trivial but can be done with enough development effort. Stefan
On Tue, 18 Dec 2018, Stefan Hajnoczi wrote:
On Tue, Dec 18, 2018 at 02:19:26AM +0000, zhangwei (CR) wrote:
Virtio-fs is in RFC state, it could be OK as long as it doesn't have native gap for supporting live migration, and I will be glad to see it being listed in some roadmap.
Migration is always good to have at the QEMU level, so I'm sure we'll look into it more deeply.
It's tricky when sharing local file systems because the destination host must have access to the same files in order for migration to be possible. One approach is to carefully copy the files during live migration. The file system daemon must also migrate its state (e.g. open files and handles). It's non-trivial but can be done with enough development effort.
I think the live migration case is only going to make sense when the files in question are on a network file system and the same path is available at the destination. (I think it also makes sense that this is something KubeVirt will be more interested in than Kata!) sage
On Wed, Dec 19, 2018, 00:27 Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Tue, Dec 18, 2018 at 12:07:32AM +0800, Tao Peng wrote:
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
One more question, is there plan to support hotplug a virtio-fs device to the guest? Or is it already supported?
Yes, hotplug is planned. It's not tested yet but may already work at the QEMU level.
How would you like to use hotplug at the Kata level?
With 9pfs, we configure a single shared dir for virtfs and bind mount the files/dirs we want to share into it. We cannot turn it off even when a pod has nothing to share, which may imply some security risk. The reason for this arrangement is that 9pfs doesn't support hotplug; block devices, on the other hand, are hotplugged. Then, if virtio-fs supports hotplug, we could hotplug any dirs we want to share with the guest instead of bind mounting them into such an indirect place. However, a bind mount should be much faster than a hotplug attempt, so we still need to weigh the trade-off between hotplug and the current 9p-like sharing configuration.
Stefan
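For readers unfamiliar with the sharing model Xu Wang describes, here is a rough sketch; the directory layout and paths are hypothetical examples, not Kata's actual defaults:

---
# One directory is exported to the guest over 9p for the whole sandbox lifetime, roughly:
#   -fsdev local,id=fs0,path=/run/kata/shared/pod1,security_model=none
#   -device virtio-9p-pci,fsdev=fs0,mount_tag=kataShared
# Each volume is then bind mounted into that export as needed:
mount --bind /var/lib/kubelet/pods/<pod-uid>/volumes/vol1 /run/kata/shared/pod1/vol1
# With hotpluggable virtio-fs, a volume could instead be attached to the guest directly,
# without routing it through this indirect shared directory.
---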
And one more question: are socket and named pipe files supported?
If they are, how do we prevent unexpected host-guest communication?
If not, what happens if
- the guest tries to create a socket in the shared dir, or
- the host has a socket in the dir?
Maybe I should read the code and test it 😀
On Wed, Dec 19, 2018, 00:27 Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Tue, Dec 18, 2018 at 12:07:32AM +0800, Tao Peng wrote:
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
One more question, is there plan to support hotplug a virtio-fs device to the guest? Or is it already supported?
Yes, hotplug is planned. It's not tested yet but may already work at the QEMU level.
How would you like to use hotplug at the Kata level?
Stefan
On Wed, Dec 19, 2018 at 12:42 AM Sage Weil <sage@newdream.net> wrote:
On Tue, 18 Dec 2018, Stefan Hajnoczi wrote:
On Tue, Dec 18, 2018 at 02:19:26AM +0000, zhangwei (CR) wrote:
Virtio-fs is in the RFC state; that is fine as long as there is no inherent gap that prevents live migration support, and I will be glad to see it listed on a roadmap.
Migration is always good to have at the QEMU level, so I'm sure we'll look into it more deeply.
It's tricky when sharing local file systems because the destination host must have access to the same files in order for migration to be possible. One approach is to carefully copy the files during live migration. The file system daemon must also migrate its state (e.g. open files and handles). It's non-trivial but can be done with enough development effort.
I think the live migration case is only going to make sense when the files in question are on a network file system and the same path is available at the destination.
I agree. Live migration should be considered from a total-solution point of view, not just within the container runtime itself. Other important parts include network migration and storage migration. For storage, it is more common to use shared or distributed storage rather than trying to move data between hosts. OTOH, virtio-fs is a special case here: even if the files are backed by a remote file system, we still need to migrate the user-space FUSE daemon, otherwise all the file handles are invalidated, or we have to design the FUSE daemon so that it can re-instantiate itself.
(I think it also makes sense that this is something KubeVirt will be more interested in than Kata!)
Sure, it mostly rests in the realm of KubeVirt, but Kata is also interested in preserving container state, especially for cloud providers that want to offer non-disruptive service. Cheers, Tao -- bergwolf@hyper.sh
On 2018/12/17 10:27 PM, Stefan Hajnoczi wrote:
On Sun, Dec 16, 2018 at 09:09:48AM +0800, Qixuan Wu wrote:
1) Did you compare the performance to the virtio-blk raw or qcow2 solution of a normal virtual machine?
Not yet.
Guest and host page cache performance can dominate benchmarks, so we typically use fio direct=1 with QEMU -drive cache=none (O_DIRECT) to focus purely on disk I/O performance and not page cache. The same thing can be done with virtio-fs so that every I/O operation requires communication with the host. In theory virtio-fs should be comparable to virtio-blk on raw.
In real-world scenarios the page cache will be enabled, especially for the virtio-fs DAX feature. So I need to think carefully about what to benchmark, but it will probably include both configurations.
Thank you for your reply. I hope to see the virtio-blk comparison data later.
2) Does it impact live migration of the guest OS?
Virtio-fs currently does not support live migration. Is there a requirement for live migration with Kata Containers use cases?
Yes, currently it's not a common case, maybe because Kata Containers is not yet used much in production. I think Kata Containers is not only for serverless scenarios; it can also be used by applications with a long lifecycle. If the physical host needs to reboot because of a kernel bug or a hardware problem while a Kata container is running, live migration is needed. So for long-lifecycle cloud applications, the live migration case is always there: cloud applications, whether runc containers, Kata containers, or full virtual machines, do not care which host they run on, so the host must be able to reboot without disturbing them. Regards, Qixuan Wu.
Stefan
On 2018/12/11 3:25, Stefan Hajnoczi wrote:
Dear Kata Containers Community,
<snip>
Hi Stefan,

It is amazing to see such a great job. I set up a virtio-fs environment using QEMU, with virtio-fs in "cache=none, dax" mode, and I found some problems as follows:

1. Writing a file with direct_io is not supported yet. When I filter out the O_DIRECT flag in the libfuse daemon when opening the file, it succeeds, so I think it may be related to direct I/O alignment and mmap(). 9p also filters the O_DIRECT flag in QEMU.

2. The cache size cannot be increased at runtime. The memory layout is "Data cache -> Metadata cache -> Journal cache", so once the data cache size changes, the metadata and journal cache offsets change and that memory has to be moved. I don't know if there are plans to support dynamic growth; I expect users will want to grow the cache on the fly.

3. Why is the real mmap operation executed in the QEMU process instead of the libfuse daemon? I hope the data plane does not need to pass through QEMU again.

4. When I use fio with psync, write/read is only about 20% better than 9p. I think psync will be used more often than mmap, so this use case may need to be improved.

Thanks, Yiwen.
On Wed, Dec 26, 2018 at 10:32:06AM +0800, jiangyiwen wrote:
On 2018/12/11 3:25, Stefan Hajnoczi wrote:
Dear Kata Containers Community,
<snip>
Hi Stefan,
It is amazing to see such a great job. I set up a virtio-fs environment using QEMU, with virtio-fs in "cache=none, dax" mode, and I found some problems as follows:
1. Writing a file with direct_io is not supported yet. When I filter out the O_DIRECT flag in the libfuse daemon when opening the file, it succeeds, so I think it may be related to direct I/O alignment and mmap(). 9p also filters the O_DIRECT flag in QEMU.
Can you give some more details? What error do you see, and what do you mean by "I filter the O_DIRECT flag in the libfuse daemon"?
2. The cache size cannot be increased at runtime. The memory layout is "Data cache -> Metadata cache -> Journal cache", so once the data cache size changes, the metadata and journal cache offsets change and that memory has to be moved. I don't know if there are plans to support dynamic growth; I expect users will want to grow the cache on the fly.
I think one should be able to increase the cache size. That feature is not there right now, but I can't think of a reason it couldn't be added. For example, for the data cache we just need some kind of notification to the virtio-fs driver so that it allocates more free ranges internally and starts using them for subsequent allocations.
3. Why is the real mmap operation executed in the QEMU process instead of the libfuse daemon? I hope the data plane does not need to pass through QEMU again.
If the libfuse daemon does the mmap() in its own address space, how would the guest kernel see those mappings? So, IIUC, QEMU needs to call mmap().
4. When I use fio with psync, write/read is only about 20% better than 9p. I think psync will be used more often than mmap, so this use case may need to be improved.
Can you send me your fio job. I want to try it out. Thanks Vivek
On 2018/12/27 3:05, Vivek Goyal wrote:
On Wed, Dec 26, 2018 at 10:32:06AM +0800, jiangyiwen wrote:
On 2018/12/11 3:25, Stefan Hajnoczi wrote:
Dear Kata Containers Community,
<snip>
Hi Stefan,
It is amazing to see such a great job. I set up a virtio-fs environment using QEMU, with virtio-fs in "cache=none, dax" mode, and I found some problems as follows:
1. Writing a file with direct_io is not supported yet. When I filter out the O_DIRECT flag in the libfuse daemon when opening the file, it succeeds, so I think it may be related to direct I/O alignment and mmap(). 9p also filters the O_DIRECT flag in QEMU.
Can you give some more details? What error do you see, and what do you mean by "I filter the O_DIRECT flag in the libfuse daemon"?
What error do I face? I ran the following fio test:

fio -filename=/mnt/virtio_fs/file -direct=1 -rw=write -bs=1M -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120

and it returned the following messages:
---
4K0R0RD1: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=1
fio-2.13
Starting 1 process
fio: io_u error on file /mnt/virtio_fs/file: Invalid argument: write offset=0, buflen=1048576
fio: first direct IO errored. File system may not support direct IO, or iomem_align= is bad. Try setting direct=0.
fio: pid=7073, err=22/file:io_u.c:1707, func=io_u error, error=Invalid argument
4K0R0RD1: (groupid=0, jobs=1): err=22 (file:io_u.c:1707, func=io_u error, error=Invalid argument): pid=7073: Thu Dec 27 10:44:47 2018
  cpu        : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=14
  IO depths  : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete: 0=50.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued  : total=r=0/w=1/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
---

I also wrote a small tool that opens a file with O_DIRECT and writes to it; the write fails with errno = 22. The libfuse daemon prints the following errors:
---
unique: 30, opcode: WRITE (16), nodeid: 140319467243856, insize: 131152, pid: 3157
lo_write(ino=140319467243856, size=131072, off=0)
   unique: 30, error: -22 (Invalid argument), outsize: 16
virtio_send_msg: elem 0: with 2 in desc of length 24
fv_queue_thread: Waiting for Queue 2 event
fv_queue_thread: Got queue event on Queue 2
fv_queue_thread: Queue 2 gave evalue: 1 available: in: 16 out: 64
fv_queue_thread: elem 0: with 2 out desc of length 64
---

What do I mean by "I filter the O_DIRECT flag in the libfuse daemon"? I modified passthrough_ll.c to ignore the O_DIRECT flag when opening the file, and then it succeeds:

diff --git a/example/passthrough_ll.c b/example/passthrough_ll.c
index 3e37dbd..9edb5bb 100644
--- a/example/passthrough_ll.c
+++ b/example/passthrough_ll.c
@@ -1234,7 +1234,7 @@ static void lo_open(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
 		fi->flags &= ~O_APPEND;
 
 	sprintf(buf, "/proc/self/fd/%i", lo_fd(req, ino));
-	fd = open(buf, fi->flags & ~O_NOFOLLOW);
+	fd = open(buf, fi->flags & ~(O_NOFOLLOW | O_DIRECT));
 	if (fd == -1)
 		return (void) fuse_reply_err(req, errno);
2. The cache size cannot be increased at runtime. The memory layout is "Data cache -> Metadata cache -> Journal cache", so once the data cache size changes, the metadata and journal cache offsets change and that memory has to be moved. I don't know if there are plans to support dynamic growth; I expect users will want to grow the cache on the fly.
I think one should be able to increase the cache size. That feature is not there right now, but I can't think of a reason it couldn't be added. For example, for the data cache we just need some kind of notification to the virtio-fs driver so that it allocates more free ranges internally and starts using them for subsequent allocations.
Great, thanks.
3. Why is the real mmap operation executed in the QEMU process instead of the libfuse daemon? I hope the data plane does not need to pass through QEMU again.
If the libfuse daemon does the mmap() in its own address space, how would the guest kernel see those mappings? So, IIUC, QEMU needs to call mmap().
Oh, I see. Can we use huge pages or /dev/shm?
4. When I use fio with psync, write/read is only about 20% better than 9p. I think psync will be used more often than mmap, so this use case may need to be improved.
Can you send me your fio job. I want to try it out.
fio -filename=/mnt/virtio_fs/file -rw=read -bs=4k -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120 Thanks, Yiwen.
Thanks Vivek
* jiangyiwen (jiangyiwen@huawei.com) wrote:
On 2018/12/27 3:05, Vivek Goyal wrote:
On Wed, Dec 26, 2018 at 10:32:06AM +0800, jiangyiwen wrote:
On 2018/12/11 3:25, Stefan Hajnoczi wrote:
Dear Kata Containers Community,
<snip>
3. Why is the real mmap operation executed in the QEMU process instead of the libfuse daemon? I hope the data plane does not need to pass through QEMU again.
If the libfuse daemon does the mmap() in its own address space, how would the guest kernel see those mappings? So, IIUC, QEMU needs to call mmap().
Oh, I see. Can we use huge pages or /dev/shm?
Those don't help for this problem; the issue here is that we need to cause the mapping of the file to appear in the guest's physical address space; if we mapped /dev/shm or hugepages instead then there's no way to map the file through that - it's just a different mapping. Dave
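To illustrate Dave's point, a simplified C sketch (not the actual virtio-fs code): the DAX cache window is a region of QEMU's address space that is exposed to the guest as device memory, so placing a file range into it means mmap()ing the file at a fixed offset inside that window, and only the process that owns the window can do that.

---
#include <sys/types.h>
#include <sys/mman.h>

/* Hypothetical helper, for illustration only: map a file range into the DAX
 * window at window_off. window_base is the start of the region QEMU exposes
 * to the guest as the virtio-fs cache. A mapping created by a separate daemon
 * in its own address space would never show up in this window, which is why
 * QEMU itself has to perform the mmap(). */
static void *map_into_dax_window(void *window_base, size_t window_off,
                                 int fd, off_t file_off, size_t len)
{
    return mmap((char *)window_base + window_off, len,
                PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_FIXED, fd, file_off);
}
---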
4. When I use fio with psync, write/read is only about 20% better than 9p. I think psync will be used more often than mmap, so this use case may need to be improved.
Can you send me your fio job. I want to try it out.
fio -filename=/mnt/virtio_fs/file -rw=read -bs=4k -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120
Thanks, Yiwen.
Thanks Vivek
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Wed, Dec 19, 2018 at 01:21:25AM +0800, Xu Wang wrote:
On Wed, Dec 19, 2018, 00:27 Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Tue, Dec 18, 2018 at 12:07:32AM +0800, Tao Peng wrote:
On Tue, Dec 11, 2018 at 3:25 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
Dear Kata Containers Community, I'm delighted to announce the first release of virtio-fs, a new shared file system for virtual machines that is designed for container use cases, including shared volumes.
One more question, is there plan to support hotplug a virtio-fs device to the guest? Or is it already supported?
Yes, hotplug is planned. It's not tested yet but may already work at the QEMU level.
How would you like to use hotplug at the Kata level?
With 9pfs, we configure a single shared dir for virtfs and bind mount the files/dirs we want to share into it. We cannot turn it off even when a pod has nothing to share, which may imply some security risk. The reason for this arrangement is that 9pfs doesn't support hotplug; block devices, on the other hand, are hotplugged.
Then, if virtio-fs supports hotplug, we could hotplug any dirs we want to share with the guest instead of bind mounting them into such an indirect place.
However, a bind mount should be much faster than a hotplug attempt, so we still need to weigh the trade-off between hotplug and the current 9p-like sharing configuration.
I see. The current patches follow the 9pfs model but hotplug should be doable in the future, too. Stefan
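As a rough idea of what per-volume hotplug might eventually look like at the QEMU level; the device and option names below come from the virtio-fs development tree and are still subject to change, so treat them as assumptions rather than a finished interface:

---
# Start a file system daemon for the directory to be shared, then on the QEMU monitor:
(qemu) chardev-add socket,id=char-vol1,path=/tmp/vhostfs-vol1.sock
(qemu) device_add vhost-user-fs-pci,chardev=char-vol1,tag=vol1
# The guest then mounts the new share by its tag (vol1) at whatever path the agent chooses.
---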
On Wed, Dec 19, 2018 at 01:30:29AM +0800, Xu Wang wrote:
And one more question: are socket and named pipe files supported?
If they are, how do we prevent unexpected host-guest communication?
If not, what happens if
- the guest tries to create a socket in the shared dir, or
- the host has a socket in the dir?
Maybe I should read the code and test it 😀
No, there is no inter-VM communication through UNIX domain sockets or named pipes. Opening such files just results in local IPC within the VM. Stefan
On Thu, Dec 27, 2018 at 10:57:21AM +0800, jiangyiwen wrote: [..]
What do I mean by "I filter the O_DIRECT flag in the libfuse daemon"? I modified passthrough_ll.c to ignore the O_DIRECT flag when opening the file, and then it succeeds:

diff --git a/example/passthrough_ll.c b/example/passthrough_ll.c
index 3e37dbd..9edb5bb 100644
--- a/example/passthrough_ll.c
+++ b/example/passthrough_ll.c
@@ -1234,7 +1234,7 @@ static void lo_open(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
 		fi->flags &= ~O_APPEND;
 
 	sprintf(buf, "/proc/self/fd/%i", lo_fd(req, ino));
-	fd = open(buf, fi->flags & ~O_NOFOLLOW);
+	fd = open(buf, fi->flags & ~(O_NOFOLLOW | O_DIRECT));
 	if (fd == -1)
 		return (void) fuse_reply_err(req, errno);
Hi Yiwen, Ok, this issue is now fixed. We still do not ignore O_DIRECT flag on host. I think if file is opened with O_DIRECT in guest, then it makes sense to open with O_DIRECT on host as well. We were getting -EINVAL (-22) due to misaligned buffers. We were creating extra data copy in libfuse daemon and that copy created misalignment. David Gilbert has now fixed it. He has got rid of that extra copy and that means if guest hands over aligned buffers, I/O should succeed. This should also result in some speed up on write path as one extra copy has been avoided. These fixes are still internal. David might push these externally soon. Thanks Vivek
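For context, a small generic illustration of the alignment rule involved (this is not the daemon code): O_DIRECT requires the user buffer, and typically the offset and length as well, to be aligned to the device's logical block size, so an intermediate copy into an unaligned buffer is enough to make the kernel return EINVAL.

---
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write len bytes with O_DIRECT using a buffer aligned to 4096 bytes
 * (a common logical block size; the real requirement depends on the
 * underlying device and file system). */
static ssize_t write_odirect(const char *path, const void *data, size_t len)
{
    void *buf = NULL;
    int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0)
        return -1;
    if (posix_memalign(&buf, 4096, len)) {   /* an unaligned buffer gives EINVAL */
        close(fd);
        return -1;
    }
    memcpy(buf, data, len);                  /* len should also be a multiple of 4096 */
    ssize_t ret = write(fd, buf, len);
    free(buf);
    close(fd);
    return ret;
}
---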
On 2019/1/10 23:57, Vivek Goyal wrote:
On Thu, Dec 27, 2018 at 10:57:21AM +0800, jiangyiwen wrote:
[..]
What do I mean by "I filter the O_DIRECT flag in the libfuse daemon"? I modified passthrough_ll.c to ignore the O_DIRECT flag when opening the file, and then it succeeds:

diff --git a/example/passthrough_ll.c b/example/passthrough_ll.c
index 3e37dbd..9edb5bb 100644
--- a/example/passthrough_ll.c
+++ b/example/passthrough_ll.c
@@ -1234,7 +1234,7 @@ static void lo_open(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
 		fi->flags &= ~O_APPEND;
 
 	sprintf(buf, "/proc/self/fd/%i", lo_fd(req, ino));
-	fd = open(buf, fi->flags & ~O_NOFOLLOW);
+	fd = open(buf, fi->flags & ~(O_NOFOLLOW | O_DIRECT));
 	if (fd == -1)
 		return (void) fuse_reply_err(req, errno);
Hi Yiwen,
Ok, this issue is now fixed. We still do not ignore O_DIRECT flag on host. I think if file is opened with O_DIRECT in guest, then it makes sense to open with O_DIRECT on host as well.
We were getting -EINVAL (-22) due to misaligned buffers. We were creating extra data copy in libfuse daemon and that copy created misalignment.
David Gilbert has now fixed it. He has got rid of that extra copy and that means if guest hands over aligned buffers, I/O should succeed. This should also result in some speed up on write path as one extra copy has been avoided.
These fixes are still internal. David might push these externally soon.
Thanks Vivek
Hi Vivek, Great, I will be glad to test it once the patch is pushed. Thanks, Yiwen.
On Thu, Dec 27, 2018 at 10:57:21AM +0800, jiangyiwen wrote: [..]
4. When I use fio with psync, write/read is only about 20% better than 9p. I think psync will be used more often than mmap, so this use case may need to be improved.
Can you send me your fio job. I want to try it out.
fio -filename=/mnt/virtio_fs/file -rw=read -bs=4k -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120
Ok, finally I tried this in a bunch of configurations. Without dax and cache=none, I see roughly 100% improvement.

cache=none
==========
virtio-9p: 28MB/s
virtio-fs: 59MB/s

Above is without dax enabled and libfuse daemon filters O_DIRECT flag on host so that file will be cached in host (despite the fact guest opened it with O_DIRECT).

I then tried "cache=always" and did direct I/O from guest. This will avoid page cache in guest but will use page cache on host.

cache=always
===========
virtio-fs: 52MB/s
virtio-fs (dax): 175MB/s

Notice that with dax, performance is almost 5x better compared to virtio-9p.

Following is my fio job for testing.

=======================================
[global]
name=fio-psync
rw=read
direct=1
numjobs=1
runtime=60
bs=4k

[file1]
size=6G
ioengine=psync
iodepth=1
filename=fio-psync-file
========================================

So I think even without dax, performance improvement is significant and enabling dax speeds it up very significantly.

Thanks Vivek
This is great - thank you Vivek.

On 1/15/19, 11:38 AM, "Vivek Goyal" <vgoyal@redhat.com> wrote:
<snip>
On 2019/1/16 3:16, Vivek Goyal wrote:
On Thu, Dec 27, 2018 at 10:57:21AM +0800, jiangyiwen wrote:
[..]
4. When I use fio with psync, write/read is only about 20% better than 9p. I think psync will be used more often than mmap, so this use case may need to be improved.
Can you send me your fio job. I want to try it out.
fio -filename=/mnt/virtio_fs/file -rw=read -bs=4k -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120
Ok, finally I tried this in a bunch of configurations. Without dax and cache=none, I see roughly 100% improvement.
cache=none
==========
virtio-9p: 28MB/s
virtio-fs: 59MB/s
Above is without dax enabled and libfuse daemon filters O_DIRECT flag on host so that file will be cached in host (despite the fact guest opened it with O_DIRECT).
I then tried "cache=always" and did direct I/O from guest. This will avoid page cache in guest but will use page cache on host.
cache=always
===========
virtio-fs: 52MB/s
virtio-fs (dax): 175MB/s
Notice that with dax, performance is almost 5x better compared to virtio-9p.
Following is my fio job for testing.
=======================================
[global]
name=fio-psync
rw=read
direct=1
numjobs=1
runtime=60
bs=4k

[file1]
size=6G
ioengine=psync
iodepth=1
filename=fio-psync-file
========================================
So I think even without dax, performance improvement is significant and enabling dax speeds it up very significantly.
Thanks Vivek
Hi Vivek, Thank you very much, this is great. Yiwen.
* jiangyiwen (jiangyiwen@huawei.com) wrote:
On 2019/1/10 23:57, Vivek Goyal wrote:
On Thu, Dec 27, 2018 at 10:57:21AM +0800, jiangyiwen wrote:
[..]
What do I mean by "I filter the O_DIRECT flag in the libfuse daemon"? I modified passthrough_ll.c to ignore the O_DIRECT flag when opening the file, and then it succeeds:

diff --git a/example/passthrough_ll.c b/example/passthrough_ll.c
index 3e37dbd..9edb5bb 100644
--- a/example/passthrough_ll.c
+++ b/example/passthrough_ll.c
@@ -1234,7 +1234,7 @@ static void lo_open(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
 		fi->flags &= ~O_APPEND;
 
 	sprintf(buf, "/proc/self/fd/%i", lo_fd(req, ino));
-	fd = open(buf, fi->flags & ~O_NOFOLLOW);
+	fd = open(buf, fi->flags & ~(O_NOFOLLOW | O_DIRECT));
 	if (fd == -1)
 		return (void) fuse_reply_err(req, errno);
Hi Yiwen,
Ok, this issue is now fixed. We still do not ignore O_DIRECT flag on host. I think if file is opened with O_DIRECT in guest, then it makes sense to open with O_DIRECT on host as well.
We were getting -EINVAL (-22) due to misaligned buffers. We were creating extra data copy in libfuse daemon and that copy created misalignment.
David Gilbert has now fixed it. He has got rid of that extra copy and that means if guest hands over aligned buffers, I/O should succeed. This should also result in some speed up on write path as one extra copy has been avoided.
These fixes are still internal. David might push these externally soon.
Thanks Vivek
Hi Vivek,
Great, I will be glad to test it once the patch is pushed.
Hi Yiwen, That's now pushed to the 'dev' branch, here: https://gitlab.com/virtio-fs/libfuse/commits/dev it's only lightly tested and has a few rough edges, but seems to work! Dave
Thanks, Yiwen.
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2019/1/17 1:25, Dr. David Alan Gilbert wrote:
* jiangyiwen (jiangyiwen@huawei.com) wrote:
On 2019/1/10 23:57, Vivek Goyal wrote:
On Thu, Dec 27, 2018 at 10:57:21AM +0800, jiangyiwen wrote:
[..]
What do I mean by "I filter the O_DIRECT flag in the libfuse daemon"? I modified passthrough_ll.c to ignore the O_DIRECT flag when opening the file, and then it succeeds:

diff --git a/example/passthrough_ll.c b/example/passthrough_ll.c
index 3e37dbd..9edb5bb 100644
--- a/example/passthrough_ll.c
+++ b/example/passthrough_ll.c
@@ -1234,7 +1234,7 @@ static void lo_open(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
 		fi->flags &= ~O_APPEND;
 
 	sprintf(buf, "/proc/self/fd/%i", lo_fd(req, ino));
-	fd = open(buf, fi->flags & ~O_NOFOLLOW);
+	fd = open(buf, fi->flags & ~(O_NOFOLLOW | O_DIRECT));
 	if (fd == -1)
 		return (void) fuse_reply_err(req, errno);
Hi Yiwen,
Ok, this issue is now fixed. We still do not ignore O_DIRECT flag on host. I think if file is opened with O_DIRECT in guest, then it makes sense to open with O_DIRECT on host as well.
We were getting -EINVAL (-22) due to misaligned buffers. We were creating extra data copy in libfuse daemon and that copy created misalignment.
David Gilbert has now fixed it. He has got rid of that extra copy and that means if guest hands over aligned buffers, I/O should succeed. This should also result in some speed up on write path as one extra copy has been avoided.
These fixes are still internal. David might push these externally soon.
Thanks Vivek
Hi Vivek,
Great, I will be glad to test it once the patch is pushed.
Hi Yiwen, That's now pushed to the 'dev' branch, here:
https://gitlab.com/virtio-fs/libfuse/commits/dev
it's only lightly tested and has a few rough edges, but seems to work!
Dave
Thanks, Yiwen.
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Hi Dave, Thanks. Based on the dev branch I tested basic direct I/O and it works. That's great. Thanks, Yiwen.
participants (12)
- Dr. David Alan Gilbert
- EJ Campbell
- Ernst, Eric
- Fox, Kevin M
- jiangyiwen
- Qixuan Wu
- Sage Weil
- Stefan Hajnoczi
- Tao Peng
- Vivek Goyal
- Xu Wang
- zhangwei (CR)