About the future kata rootfs, qcow2 or nfs/vsock
Hi Stefan,
Now that we have found 9pfs performance to be poor, we are thinking about other solutions for the rootfs of Kata Containers. As per this link, https://github.com/kata-containers/runtime/issues/279, it seems you are working on nfs/vsock optimization, and nfs/vsock looks like it could become the future default rootfs of Kata Containers. I have also heard about qcow2+snapshot+virtio_scsi. What do you think about that as the rootfs of Kata Containers? And do you know how the Kata community views that solution and how it compares with nfs/vsock?
Anything is welcome and helpful.
Thanks & Regards, Qixuan Wu.
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
Hi Stefan,
Hi, Stefan is out at the moment, so I thought I'd reply.
Now that we have found 9pfs performance to be poor, we are thinking about other solutions for the rootfs of Kata Containers.
As per this link, https://github.com/kata-containers/runtime/issues/279, it seems you are working on nfs/vsock optimization, and nfs/vsock looks like it could become the future default rootfs of Kata Containers.
I have also heard about qcow2+snapshot+virtio_scsi. What do you think about that as the rootfs of Kata Containers? And do you know how the Kata community views that solution and how it compares with nfs/vsock?
We're currently experimenting with something a bit different; we've got a setup that uses a modified version of the FUSE protocol running over vhost-user; it's:
a) Got the filesystem access split out of qemu into a separate daemon - that's just a modified version of a normal FUSE filesystem daemon, with the nice bit being that since it's a separate process you can do whatever isolation on it you want.
b) But the latency is low because vhost-user means the daemon can read the request queue straight out of the guest memory.
c) We've got a setup with DAX so that the files are mapped straight into guest address space, so the overhead is very low for large files.
d) We've got a caching scheme for metadata, which again removes a lot of latency.
e) We've got some patches to use it in KATA; I can start a basic KATA guest with it.
This is the first public mention of it because I didn't want you waiting for a reply; but our code is still rather messy and experimental; give us a few weeks and as soon as it survives some smoke tests we'll make the code public.
Because we're reusing both FUSE and vhost-user the kernel changes are quite small, as are the qemu changes.
I realise that's not much detail yet; we're starting to write some of it up; feel free to ask any specifics.
Dave
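To make the shape of (a) and (b) concrete, here is a rough, purely illustrative C sketch of what such a daemon's request loop looks like. The queue is a plain in-memory stand-in for the real vhost-user virtqueue, and the request layout, opcode names, and example path are invented for this sketch - they are not the actual implementation described above.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Invented, simplified request format -- NOT the real FUSE/virtio layout. */
enum demo_op { DEMO_GETATTR = 1, DEMO_UNLINK = 2 };

struct demo_request {
    uint32_t opcode;
    char     path[256];   /* path relative to the shared directory */
};

/* Stand-in for "read the request queue straight out of guest memory":
 * a real daemon would pop descriptors from a vhost-user virtqueue here. */
static int queue_pop(struct demo_request *out)
{
    static const struct demo_request fake = { DEMO_GETATTR, "/etc/hostname" };
    static int served;
    if (served++) return 0;          /* serve one fake request, then stop */
    *out = fake;
    return 1;
}

/* Unpack, check, and execute one request against the host filesystem. */
static int handle_request(const struct demo_request *req)
{
    struct stat st;

    switch (req->opcode) {
    case DEMO_GETATTR:
        if (stat(req->path, &st) < 0)
            return -errno;
        printf("getattr %s: size=%lld\n", req->path, (long long)st.st_size);
        return 0;
    case DEMO_UNLINK:
        return unlink(req->path) < 0 ? -errno : 0;
    default:
        return -ENOSYS;              /* unknown opcode */
    }
}

int main(void)
{
    struct demo_request req;

    while (queue_pop(&req))          /* real code: block on the virtqueue */
        handle_request(&req);
    return 0;
}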
Anything is welcome and helpful.
Thanks & Regards Qixuan Wu.
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
We're currently experimenting with something a bit different; we've got a setup that uses a modified version of the FUSE protocol running over vhost-user; it's:
a) Got the filesystem access split out of qemu into a separate daemon - that's just a modified version of a normal FUSE filesystem daemon, with the nice bit being that since it's a separate process you can do whatever isolation on it you want.
b) But the latency is low because vhost-user means the daemon can read the request queue straight out of the guest memory.
c) We've got a setup with DAX so that the files are mapped straight into guest address space, so the overhead is very low for large files.
That's so cool. I guess it will not use virtio. And this may be a new para-virtualization method, specific to filesystems, for the data shared between guest and host.
d) We've got a caching scheme for metadata, which again removes a lot of latency. e) We've got some patches to use it in KATA; I can start a basic KATA guest with it.
This is the first public mention of it because I didn't want you waiting for a reply; but our code is still rather messy and experimental; give us a few weeks and as soon as it survives some smoke tests we'll make the code public.
Because we're reusing both FUSE and vhost-user the kernel changes are quite small, as are the qemu changes.
I realise that's not much detail yet; we're starting to write some of it up; feel free to ask any specifics.
Thanks for the reply. It seems the file data are mmapped directly, but the control plane, like metadata, still uses some other simple protocol, maybe a new protocol? Because 9p and nfs are very complex; they were not developed for file sharing between guest and host. I have always hoped for a simple file sharing protocol. I am really looking forward to the code. :-)
Thanks & Regards, Qixuan Wu.
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
We're currently experimenting with something a bit different; we've got a setup that uses a modified version of the FUSE protocol running over vhost-user; it's:
a) Got the filesystem access split out of qemu into a separate daemon - that's just a modified version of a normal FUSE filesystem daemon, with the nice bit being that since it's a separate process you can do whatever isolation on it you want.
b) But the latency is low because vhost-user means the daemon can read the request queue straight out of the guest memory.
c) We've got a setup with DAX so that the files are mapped straight into guest address space, so the overhead is very low for large files.
That's so cool. I guess it will not use virtio. And this may be a new para-virtualization method, specific to filesystems, for the data shared between guest and host.
It does use virtio! It's basically just the existing FUSE protocol carried over virtio; it's got some tweaks to allow the direct mappings and to deal with some differences in the setup. It uses the existing vhost-user implementation of virtio (just like vhost-user for network does virtio for dpdk).
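As a point of reference for "the existing FUSE protocol carried over virtio": every FUSE request and reply is framed by small fixed headers, roughly as below. This is reproduced from memory of include/uapi/linux/fuse.h (check the header itself for the authoritative layout); the point is that only the transport underneath changes, not the messages.

#include <stdint.h>

/* Approximate FUSE wire framing (see include/uapi/linux/fuse.h). Each
 * request starts with fuse_in_header, each reply with fuse_out_header;
 * opcode-specific structs and payload bytes follow the header. */
struct fuse_in_header {
    uint32_t len;      /* total length of this request, header included */
    uint32_t opcode;   /* FUSE_LOOKUP, FUSE_GETATTR, FUSE_READ, ... */
    uint64_t unique;   /* request id, echoed back in the reply */
    uint64_t nodeid;   /* inode the operation applies to */
    uint32_t uid;
    uint32_t gid;
    uint32_t pid;
    uint32_t padding;
};

struct fuse_out_header {
    uint32_t len;      /* total length of this reply */
    int32_t  error;    /* 0 or a negative errno */
    uint64_t unique;   /* matches the request's unique id */
};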
d) We've got a caching scheme for metadata, which again removes a lot of latency. e) We've got some patches to use it in KATA; I can start a basic KATA guest with it.
This is the first public mention of it because I didn't want you waiting for a reply; but our code is still rather messy and experimental; give us a few weeks and as soon as it survives some smoke tests we'll make the code public.
Because we're reusing both FUSE and vhost-user the kernel changes are quite small, as are the qemu changes.
I realise that's not much detail yet; we're starting to write some of it up; feel free to ask any specifics.
Thanks for the reply. It seems the file data are mmapped directly, but the control plane, like metadata, still uses some other simple protocol, maybe a new protocol?
The control plane again is basically just the existing FUSE protocol; but we've got a shared mmap'd region for a fast lookup for some of the metadata.
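The exact layout of that shared region isn't described in this thread. Purely as a hypothetical illustration of the idea (the structure, field names, and file name below are invented for this sketch, not the actual design), a metadata fast path could be little more than an array of attribute entries in a memory-mapped file that both sides read:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical layout: one slot per cached inode, written by the daemon,
 * read by the other side; a version counter could guard torn reads. */
struct meta_entry {
    uint64_t nodeid;
    uint64_t version;   /* bumped on every update */
    uint64_t size;
    uint64_t mtime_sec;
    uint32_t mode;
    uint32_t pad;
};

int main(void)
{
    size_t len = 1024 * sizeof(struct meta_entry);
    int fd = open("meta.cache", O_RDWR | O_CREAT, 0600);   /* demo file name */

    if (fd < 0 || ftruncate(fd, (off_t)len) < 0)
        return 1;

    struct meta_entry *tbl =
        mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (tbl == MAP_FAILED)
        return 1;

    /* Daemon side publishes an entry; the reader side can look it up
     * without a round trip over the request queue. */
    tbl[0] = (struct meta_entry){ .nodeid = 1, .version = 2,
                                  .size = 4096, .mode = 0755 };
    printf("nodeid %llu cached, size %llu\n",
           (unsigned long long)tbl[0].nodeid,
           (unsigned long long)tbl[0].size);
    munmap(tbl, len);
    close(fd);
    return 0;
}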
Because 9p and nfs are very complex; they were not developed for file sharing between guest and host. I have always hoped for a simple file sharing protocol. I am really looking forward to the code. :-)
Glad you like the sound of it; we'll try and get it out ASAP. Dave
Thanks & Regards Qixuan Wu.
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2018/9/24 11:53 PM, Dr. David Alan Gilbert wrote:
Glad you like the sound of it; we'll try and get it out ASAP.
Though I have some doubts about it, it seems faster and simpler than 9pfs and nfs+vsock anyway. It's good news for Kata users.
Thanks & Regards, Qixuan.
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
Though I have some doubts about it, it seems faster and simpler than 9pfs and nfs+vsock anyway. It's good news for Kata users.
Please ask about your doubts; I'd like to make sure we have good answers to them. Dave
Thanks & Regards Qixuan. -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2018/9/25 12:14 AM, Dr. David Alan Gilbert wrote:
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
Though I have some doubts about it, it seems faster and simpler than 9pfs and nfs+vsock anyway. It's good news for Kata users.
Please ask about your doubts; I'd like to make sure we have good answers to them.
My doubts are:
1. Is the Filesystem in Userspace (FUSE) used in the guest OS or the host OS?
2. As I understand it, FUSE is a mechanism used between user space and kernel space, not a protocol. So I cannot understand how a create or unlink command is transferred from guest to host over virtio; I did not understand "FUSE protocol over virtio".
3. Did you test the performance compared to 9P?
Thanks very much for the reply and clarification.
Regards, Qixuan.
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
Though I have some doubts about it, it seems faster and simpler than 9pfs and nfs+vsock anyway. It's good news for Kata users.
Please ask about your doubts; I'd like to make sure we have good answers to them.
My doubts are:
1. Is the Filesystem in Userspace (FUSE) used in the guest OS or the host OS?
It's between the guest OS and the host qemu+daemon. The host OS doesn't see it.
2. As I understand it, FUSE is a mechanism used between user space and kernel space, not a protocol. So I cannot understand how a create or unlink command is transferred from guest to host over virtio; I did not understand "FUSE protocol over virtio".
Ignoring this work; the way FUSE works is that:
1) application -> syscalls to kernel
2) kernel translates those to a message stream over an fd
3) A daemon running as a normal process under the same kernel reads commands from that fd and passes data back to the kernel
now we swivel this around a bit:
a) Guest application -> syscalls to guest kernel
b) guest kernel translates those to a message stream - this time over a virtio command stream.
c) A daemon connected to qemu via vhost-user reads that command stream.
so it's actually pretty much the same; but we've replaced the fd used between the kernel and the daemon by a virtio transport.
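To make steps 1)-3) concrete, this is what the daemon side of ordinary FUSE looks like with libfuse: a minimal read-only filesystem using the standard libfuse 3 high-level API. It is unrelated to the vhost-user transport being discussed; it just shows the normal kernel-to-daemon message flow that the work above reroutes over virtio.

/* Build (assuming libfuse 3 is installed):
 *   gcc hello_fs.c $(pkg-config fuse3 --cflags --libs) -o hello_fs
 * Run:  ./hello_fs -f /tmp/mnt       (then: cat /tmp/mnt/hello) */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static const char *msg = "hello from a FUSE daemon\n";

static int fs_getattr(const char *path, struct stat *st,
                      struct fuse_file_info *fi)
{
    (void)fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(msg);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t off, struct fuse_file_info *fi,
                      enum fuse_readdir_flags flags)
{
    (void)off; (void)fi; (void)flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0, 0);
    fill(buf, "..", NULL, 0, 0);
    fill(buf, "hello", NULL, 0, 0);
    return 0;
}

static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi)
{
    (void)fi;
    size_t len = strlen(msg);
    if (strcmp(path, "/hello") != 0)
        return -ENOENT;
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, msg + off, size);
    return (int)size;
}

static const struct fuse_operations fs_ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[])
{
    /* fuse_main mounts the filesystem and enters the request loop: it
     * reads FUSE messages from the kernel and dispatches to fs_ops. */
    return fuse_main(argc, argv, &fs_ops, NULL);
}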
3. Did you test the performance compared to 9P?
Only a little; at the moment our code is full of debug and we're just trying to get it to hang together to run benchmarks solidly. It's looking promising though; there's a couple of things we need to fix but it's getting there.
Thanks very much for the reply and clarification.
No problem. Dave
Regards Qixuan. -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2018/9/25 10:25 PM, Dr. David Alan Gilbert wrote:
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
My doubts are:
1. Is the Filesystem in Userspace (FUSE) used in the guest OS or the host OS?
It's between the guest OS and the host qemu+daemon. The host OS doesn't see it.
2. As I understand it, FUSE is a mechanism used between user space and kernel space, not a protocol. So I cannot understand how a create or unlink command is transferred from guest to host over virtio; I did not understand "FUSE protocol over virtio".
Ignoring this work; the way FUSE works is that:
1) application -> syscalls to kernel
2) kernel translates those to a message stream over an fd
3) A daemon running as a normal process under the same kernel reads commands from that fd and passes data back to the kernel
now we swivel this around a bit:
a) Guest application -> syscalls to guest kernel
b) guest kernel translates those to a message stream - this time over a virtio command stream.
c) A daemon connected to qemu via vhost-user reads that command stream.
so it's actually pretty much the same; but we've replaced the fd used between the kernel and the daemon by a virtio transport.
I totally got it, thank you very much. So the daemon issues normal file syscalls to the host kernel, right? That seems similar to passing guest syscalls through to the host kernel. Is there any security problem?
3. Did you test the performance compared to 9P?
Only a little; at the moment our code is full of debug and we're just trying to get it to hang together to run benchmarks solidly. It's looking promising though; there's a couple of things we need to fix but it's getting there.
Got it. Hope to see the data. But it seems the procedure is similar to 9pfs: in the guest, you still need to implement a new filesystem. And the reason the new one is faster than 9p is that metadata commands involve no copy, am I right?
Thanks & Regards, Qixuan.
* Qixuan Wu (qixuan.wu@linux.alibaba.com) wrote:
On 2018/9/25 10:25 PM, Dr. David Alan Gilbert wrote:
I totally got it, thank you very much. So the daemon issues normal file syscalls to the host kernel, right?
That seems similar to passing guest syscalls through to the host kernel. Is there any security problem?
Note it's not a raw passthrough; just like 9p it's an abstraction so the guest kernel builds the protocol packets and the daemon unpacks them, checks them and executes the appropriate host call. Because the host fs calls are done in a separate daemon you can apply whatever security rules you like to that daemon to lock it down.
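Purely as an illustration of "unpacks them, checks them and executes the appropriate host call" (a generic sketch, not the actual daemon's code; the helper names are invented), a daemon can pin an fd on the shared directory at startup and resolve every client-supplied name relative to that fd, rejecting anything that tries to escape:

#define _GNU_SOURCE            /* O_PATH */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Resolve 'name' only relative to the pinned shared-directory fd.
 * Crudely reject absolute paths and ".." (real code would canonicalize
 * per component); O_NOFOLLOW stops symlink tricks on the final name. */
static int shared_open(int root_fd, const char *name, int flags)
{
    if (name[0] == '/' || strstr(name, ".."))
        return -EPERM;
    int fd = openat(root_fd, name, flags | O_NOFOLLOW);
    return fd < 0 ? -errno : fd;
}

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <shared-dir> <relative-file>\n", argv[0]);
        return 1;
    }

    /* Pin the shared directory once; everything else is *at() relative to it,
     * and the whole process can additionally be sandboxed (seccomp, namespaces). */
    int root_fd = open(argv[1], O_PATH | O_DIRECTORY);
    if (root_fd < 0)
        return 1;

    int fd = shared_open(root_fd, argv[2], O_RDONLY);
    printf("shared_open(%s) -> %d\n", argv[2], fd);
    if (fd >= 0)
        close(fd);
    close(root_fd);
    return 0;
}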
3. Did you test the performance compared to 9P?
Only a little; at the moment our code is full of debug and we're just trying to get it to hang together to run benchmarks solidly. It's looking promising though; there's a couple of things we need to fix but it's getting there.
Got it. Hope to see the data. But it seems the procedure is similar to 9pfs: in the guest, you still need to implement a new filesystem. And the reason the new one is faster than 9p is that metadata commands involve no copy, am I right?
We don't need to implement the new FS; we just take the existing FUSE filesystem code, and the existing virtio code and force them together. Then we add some optimisations for metadata and caching. Dave
Thanks & Regards Qixuan. -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2018/9/25 11:41 PM, Dr. David Alan Gilbert wrote:
I totally got it, thank you very much. So the daemon issues normal file syscalls to the host kernel, right?
That seems similar to passing guest syscalls through to the host kernel. Is there any security problem?
Note it's not a raw passthrough; just like 9p it's an abstraction so the guest kernel builds the protocol packets and the daemon unpacks them, checks them and executes the appropriate host call. Because the host fs calls are done in a separate daemon you can apply whatever security rules you like to that daemon to lock it down.
Got it, so it's another simple 9p-like abstraction, at least simpler than nfs+vsock. I think it's similar to the FS protocol over a socket pair between the Gofer and the Sentry in the gVisor project.
3. Did you test the performance compared to 9P?
Only a little; at the moment our code is full of debug and we're just trying to get it to hang together to run benchmarks solidly. It's looking promising though; there's a couple of things we need to fix but it's getting there.
Got it. Hope to see the data. But it seems the procedure is similar to 9pfs: in the guest, you still need to implement a new filesystem. And the reason the new one is faster than 9p is that metadata commands involve no copy, am I right?
We don't need to implement the new FS; we just take the existing FUSE filesystem code, and the existing virtio code and force them together. Then we add some optimisations for metadata and caching.
Yes, fantastic. :-) You are correct, there is no need for a new FS; FUSE just calls virtio functions to send requests to the host daemon through vhost-user.
Thanks & Regards, Qixuan.
On Mon, Sep 24, 2018 at 10:32 PM, Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
Hi Dave,
Thanks for sharing, and sorry to chime in late. IIUC this is pretty much like a vhost-user-fuse design. On the guest side, it uses a virtio-fuse frontend that takes any fs I/O, encodes it in the FUSE wire protocol and sends it through virtio. And the host daemon is a vhost-user-fuse process that just needs to talk the FUSE wire protocol over a vhost-user fd rather than /dev/fuse. Am I understanding correctly?
Does it require any modification to the FUSE wire protocol (e.g., include/uapi/linux/fuse.h)?
Cheers, Tao
-- bergwolf@hyper.sh
* Tao Peng (bergwolf@hyper.sh) wrote:
Hi Dave,
Thanks for sharing, and sorry to chime in late. IIUC this is pretty much like a vhost-user-fuse design. On the guest side, it uses a virtio-fuse frontend that takes any fs I/O, encodes it in the FUSE wire protocol and sends it through virtio. And the host daemon is a vhost-user-fuse process that just needs to talk the FUSE wire protocol over a vhost-user fd rather than /dev/fuse. Am I understanding correctly?
Yes! That's exactly what it is.
Does it require any modification to the fuse wire protocol (e.g., include/uapi/linux/fuse.h)?
We've got a couple of extra opcodes for a performance trick, but otherwise it's just the same. Dave
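Purely to illustrate what "a couple of extra opcodes" means in practice (the opcode name, number, and argument struct below are invented placeholders for this sketch; the actual additions are not public yet): new operations slot into the existing opcode space and carry their own argument struct, and the rest of the protocol is untouched.

#include <stdint.h>

/* HYPOTHETICAL example only -- not the real opcodes or numbers.
 * A new FUSE operation is just an additional opcode value plus an
 * argument struct, alongside the existing FUSE_LOOKUP, FUSE_READ, ... */
#define FUSE_EXAMPLE_MAPFILE  200   /* invented number for illustration */

struct fuse_example_mapfile_in {
    uint64_t fh;        /* file handle to map */
    uint64_t offset;    /* offset within the file */
    uint64_t len;       /* length of the mapping */
    uint64_t flags;
};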
Cheers, Tao
-- bergwolf@hyper.sh
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Wed, Sep 26, 2018 at 4:35 PM, Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
Yes! That's exactly what it is.
Thanks for confirming! It's a brilliant idea IMO and a truly native solution to the host fs sharing problem. Since it's based on FUSE, I suppose we can get better POSIX compliance than 9pfs. And I don't think it requires any host kernel change, right?
Does it require any modification to the fuse wire protocol (e.g., include/uapi/linux/fuse.h)?
We've got a couple of extra opcodes for a performance trick, but otherwise it's just the same.
Great, I'm really looking forward to it! Thanks, Tao -- bergwolf@hyper.sh
Yes! That's exactly what it is. Thanks for confirming! It's a brilliant idea IMO and a truly native solution to the host fs sharing problem. Since it's based on FUSE, I suppose we can get better POSIX compliance than 9pfs. And I don't think it requires any host kernel change, right?
Do we really get better POSIX compliance? From what I read I think we will still have some POSIX issues.
* Castelino, Manohar R (manohar.r.castelino@intel.com) wrote:
Yes! That's exactly what it is. Thanks for confirming! It's a brilliant idea IMO and a truly native solution to the host fs sharing problem. Since it's based on FUSE, I suppose we can get better POSIX compliance than 9pfs. And I don't think it requires any host kernel change, right?
Do we really get better POSIX compliance? From what I read I think we will still have some POSIX issues.
Which ones are you worried about? Dave -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Castelino, Manohar R (manohar.r.castelino@intel.com) wrote:
Do we really get better POSIX compliance? From what I read I think we will still have some POSIX issues.
Which ones are you worried about?
With 9p we ran into issues with unlink, fallocate and fstat, which caused some workloads to fail with Kata.
If you've got test cases or can remember the details we'd be interested to see them. Dave -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
We have run the pjdfstest test suite in the past for POSIX compliance and seen quite a few failures with 9p. An old issue documenting this: https://github.com/clearcontainers/runtime/issues/828
I think the pjdfstest suite will be a good place to start. Graham has documented this process here: https://github.com/kata-containers/runtime/issues/279#issuecomment-394371299
Basically one needs to run Kata Containers with the following Dockerfile:
FROM ubuntu
RUN apt-get update && \
    apt-get -y install autoconf git bc libacl1-dev libacl1 acl gcc make perl-modules && \
    git clone https://github.com/pjd/pjdfstest.git && \
    cd pjdfstest && \
    autoreconf -ifs && \
    ./configure && \
    make
# and run using
# prove -r .
That test suite does not include the fallocate tests. But simply running fallocate(1) in a Kata container today fails with "Operation not supported" with 9p.
-Archana
On Thu, Sep 27, 2018 at 3:32 AM, Shinde, Archana M <archana.m.shinde@intel.com> wrote:
We have run the pjdfstest test suite in the past for POSIX compliance and seen quite a few failures with 9p. An old issue documenting this: https://github.com/clearcontainers/runtime/issues/828
I think the pjdfstest suite will be a good place to start. Graham has documented this process here: https://github.com/kata-containers/runtime/issues/279#issuecomment-394371299
Basically one needs to run Kata Containers with the following Dockerfile:
FROM ubuntu
RUN apt-get update && \
    apt-get -y install autoconf git bc libacl1-dev libacl1 acl gcc make perl-modules && \
    git clone https://github.com/pjd/pjdfstest.git && \
    cd pjdfstest && \
    autoreconf -ifs && \
    ./configure && \
    make
# and run using
# prove -r .
That test suite does not include the fallocate tests. But simply running fallocate(1) in a Kata container today fails with "Operation not supported" with 9p.
For one thing, fallocate(2) is Linux specific and not part of the POSIX semantics. For another thing, FUSE file systems can support fallocate(2) by implementing the FUSE_FALLOCATE opcode. Cheers, Tao -- bergwolf@hyper.sh
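For example, with the libfuse 3 high-level API a filesystem opts in to fallocate(2) with a handler roughly like the fragment below. This is a generic sketch of the standard libfuse interface, not code from the work discussed in this thread; it assumes the open handler stored a host file descriptor in fi->fh.

/* Fragment: add .fallocate to an existing fuse_operations table. */
#define _GNU_SOURCE            /* for fallocate(2) */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <fcntl.h>

/* Corresponds to the FUSE_FALLOCATE opcode: forward the request to the
 * host file referred to by the handle stored in fi->fh at open time. */
static int fs_fallocate(const char *path, int mode, off_t offset,
                        off_t length, struct fuse_file_info *fi)
{
    (void)path;
    if (fallocate((int)fi->fh, mode, offset, length) < 0)
        return -errno;
    return 0;
}

/* Wired up alongside the other handlers: */
static const struct fuse_operations ops_fragment = {
    .fallocate = fs_fallocate,
};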
I thought posix_fallocate() is implemented via fallocate(), so we may still need to support it.
* Shinde, Archana M (archana.m.shinde@intel.com) wrote:
We have run the pjdfstest test suite in the past for POSIX compliance and seen quite a few failures with 9p. An old issue documenting this: https://github.com/clearcontainers/runtime/issues/828
I think the pjdfstest suite will be a good place to start. Graham has documented this process here: https://github.com/kata-containers/runtime/issues/279#issuecomment-394371299
OK, thanks; we'll try and keep an eye on those. Dave
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Shinde, Archana M (archana.m.shinde@intel.com) wrote:
We have run the pjdfstest test suite in the past for POSIX compliance and seen quite a few failures with 9p. An old issue documenting this: https://github.com/clearcontainers/runtime/issues/828
I think the pjdfstest suite will be a good place to start. Graham has documented this process here: https://github.com/kata-containers/runtime/issues/279#issuecomment-394371299
OK, thanks; we'll try and keep an eye on those.
Hi David. If you want real/actual examples as well, then (and I'm sure I wrote/summarized this list somewhere recently, but I cannot find it), here are the ones off the top of my head:
- Running an iperf3 server in a container does an unlink/ref in /tmp, and fails on 9p. We work around that in our tests by setting TMPDIR to a ramfs (https://github.com/kata-containers/tests/blob/master/metrics/network/network...).
- Running apt or dnf in Ubuntu or Fedora images fails on 9p - iirc, those are both due to similar unlink/ref actions.
- We know 9p does not support fallocate - it doesn't implement the VFS callback (https://github.com/kata-containers/runtime/issues/687#issuecomment-418073598).
- In some modes, 9p does not support mmap. We just changed our 9p cache mode to 'mmap' though, which I suspect now fixes that?
- We fail some pjdfstest POSIX tests. I've not dug into the details.
If you do need me to dig more into any previous examples of any of those, then let me know and I'll see how my foo fares on github. We'll have to sieve out some of the other container failures we get reported - some of them are due to needing to run in privileged mode and/or share the docker socket from the host etc., but quite often it takes a little detective work to figure out why a specific container is failing to run.
Graham
On Wed, Sep 26, 2018 at 07:32:52PM +0000, Shinde, Archana M wrote:
We have run the pjdfstest test suite in the past for POSIX compliance and seen quite a few failures with 9p. An old issue documenting this: https://github.com/clearcontainers/runtime/issues/828
I think the pjdfstest suite will be a good place to start. Graham has documented this process here: https://github.com/kata-containers/runtime/issues/279#issuecomment-394371299
Hi Archana,
I have run pjdfstest with a working version and it passed. I do see some failures again as the code is changing. I am hopeful that by the time we release the code, pjdfstest should pass.
Thanks
Vivek
Thanks Vivek. Great to hear that the tests are largely passing.
participants (7)
- Castelino, Manohar R
- Dr. David Alan Gilbert
- Qixuan Wu
- Shinde, Archana M
- Tao Peng
- Vivek Goyal
- Whaley, Graham