[kata-dev] About the future kata rootfs, qcow2 or nfs/vsock

Qixuan Wu qixuan.wu at linux.alibaba.com
Tue Sep 25 14:53:35 UTC 2018


On 2018/9/25, 10:25 PM, Dr. David Alan Gilbert wrote:
> * Qixuan Wu (qixuan.wu at linux.alibaba.com) wrote:
>>
>>
>> On 2018/9/25, 12:14 AM, Dr. David Alan Gilbert wrote:
>>> * Qixuan Wu (qixuan.wu at linux.alibaba.com) wrote:
>>>>
>>>>
>>>> On 2018/9/24, 11:53 PM, Dr. David Alan Gilbert wrote:
>>>>> * Qixuan Wu (qixuan.wu at linux.alibaba.com) wrote:
>>>>>>> We're currently experimenting with something a bit different;
>>>>>>> we've got a setup that uses a modified version of the FUSE protocol
>>>>>>> running over vhost-user;  it's:
>>>>>>>       a) Got the filesystem access split out of qemu into a separate daemon
>>>>>>>           - that's just a modified version of a normal FUSE filesystem daemon
>>>>>>>           with the nice bit being that since it's a separate process you
>>>>>>>           can do whatever isolation on it you want.
>>>>>>>       b) But the latency is low because vhost-user means the daemon can read
>>>>>>>          the request queue straight out of the guest memory
>>>>>>>       c) We've got a setup with DAX so that the files are mapped straight
>>>>>>>          into guest address space, so the overhead is very low for large
>>>>>>>          files.
>>>>>> That's so cool. I guess it will not use virtio. Maybe this is a new
>>>>>> para-virtualization method, specific to sharing file system data
>>>>>> between guest and host.
>>>>>
>>>>> It does use virtio!  It's basically just the existing FUSE protocol
>>>>> carried over virtio; it's got some tweaks to allow the direct mappings
>>>>> and to deal with some differences in the setup.
>>>>> It uses the existing vhost-user implementation of virtio (just like
>>>>> vhost-user for network does virtio for dpdk).
>>>>>
>>>>>>>       d) We've got a caching scheme for metadata, which again removes a lot
>>>>>>>          of latency.
>>>>>>>       e) We've got some patches to use it in KATA; I can start a basic KATA
>>>>>>>          guest with it.
>>>>>>>
>>>>>>> This is the first public mention of it because I didn't want you waiting
>>>>>>> for a reply; but our code is still rather messy and experimental; give
>>>>>>> us a few weeks and as soon as it survives some smoke tests we'll make
>>>>>>> the code public.
>>>>>>>
>>>>>>> Because we're reusing both FUSE and vhost-user the kernel changes are
>>>>>>> quite small, as are the qemu changes.
>>>>>>>
>>>>>>> I realise that's not much detail yet; we're starting to write some of it
>>>>>>> up; feel free to ask any specifics.
>>>>>>>
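
(A side note on point (c) above: with a DAX-style mapping, the pages a
guest application mmaps are backed directly by host memory, so
large-file access involves no copy through a guest page cache. A
minimal guest-side sketch in C; the mount point and file name are
hypothetical:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* A file on the shared filesystem (path is hypothetical). */
        int fd = open("/mnt/shared/big.dat", O_RDONLY);
        if (fd == -1) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

        /* With DAX these pages are host pages mapped straight into
         * the guest, so touching them needs no extra copy. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        volatile char first = p[0];   /* fault the first page in */
        (void)first;

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }
)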
>>>>>>
>>>>>> Thanks for the reply. It seems the file data are mmap'd directly, but
>>>>>> the control plane, like metadata, still uses some other simple
>>>>>> protocol, maybe a new protocol?
>>>>>
>>>>> The control plane again is basically just the existing FUSE protocol;
>>>>> but we've got a shared mmap'd region for a fast lookup for some of the
>>>>> metadata.
>>>>>
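
(The shared-region scheme wasn't public yet; purely as a generic
illustration, a metadata table mapped by both the daemon and the guest
could let the guest answer some lookups without a round trip, using a
seqlock-style version check. Every name below is hypothetical:

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/stat.h>

    /* Hypothetical entry in a region mapped by daemon and guest. */
    struct meta_entry {
        uint64_t nodeid;    /* FUSE node the entry describes       */
        uint64_t version;   /* odd while the daemon is updating it */
        struct stat attr;   /* cached attributes                   */
    };

    /* Guest-side lookup: take a consistent snapshot or report a miss. */
    static int lookup_cached(struct meta_entry *tbl, size_t n,
                             uint64_t nodeid, struct stat *out)
    {
        for (size_t i = 0; i < n; i++) {
            uint64_t v = __atomic_load_n(&tbl[i].version,
                                         __ATOMIC_ACQUIRE);
            if ((v & 1) || tbl[i].nodeid != nodeid)
                continue;              /* mid-update, or not ours */
            *out = tbl[i].attr;
            if (__atomic_load_n(&tbl[i].version,
                                __ATOMIC_ACQUIRE) == v)
                return 0;              /* snapshot was stable */
        }
        return -1;                     /* miss: issue a FUSE request */
    }
)
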
>>>>>> Because 9p and nfs are very complex, and they were not designed for
>>>>>> file sharing between guest and host, I have always hoped for a simple
>>>>>> file sharing protocol. I am very much looking forward to the code.
>>>>>> :-).
>>>>>
>>>>> Glad you like the sound of it;  we'll try and get it out ASAP.
>>>>>
>>>>
>>>> Though I have some doubts about it, it seems faster and simpler
>>>> than 9pfs and nfs+vsock. That's good news for Kata users.
>>>
>>> Please ask about your doubts; I'd like to make sure we have good
>>> answers to them.
>>
>> My doubts are:
>>
>> 1. Is the Filesystem in Userspace (FUSE) used in the guest OS or the host OS?
>
> It's between the guest OS and the host qemu+daemon. The host OS doesn't
> see it.
>
>> 2. As I understand it, FUSE is a mechanism used between user space and
>> kernel space, not a protocol. So I cannot understand how a create or
>> unlink command is transferred from guest to host over virtio, and I
>> did not understand "FUSE protocol over virtio".
>
> Ignoring this work for a moment, the way normal FUSE works is:
>
>      1) application -> syscalls to kernel
>
>      2) kernel translates those to a message stream over an fd
>
>      3) A daemon running as a normal process under the same kernel
>      reads commands from that fd and passes data back to the kernel
>
> now we swivel this around a bit:
>
>      a) Guest application -> syscalls to guest kernel
>
>      b) guest kernel translates those to a message stream - this time
>         over a virtio command stream.
>
>      c) A daemon connected to qemu via vhost-user reads that
>         command stream.
>
> so it's actually pretty much the same; but we've replaced
> the fd used between the kernel and the daemon by a virtio transport.
>
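
(For concreteness on "message stream": the commands on that fd have a
fixed wire format, so a create or unlink from the guest is just an
opcode such as FUSE_CREATE or FUSE_UNLINK in the request header. The
layout below follows the kernel's uapi <linux/fuse.h>:

    #include <stdint.h>

    /* Every FUSE request starts with this header; a body whose shape
     * depends on the opcode follows it. */
    struct fuse_in_header {
        uint32_t len;      /* total request length in bytes            */
        uint32_t opcode;   /* FUSE_LOOKUP, FUSE_CREATE, FUSE_UNLINK... */
        uint64_t unique;   /* id used to match the reply               */
        uint64_t nodeid;   /* inode the operation applies to           */
        uint32_t uid;      /* credentials of the caller                */
        uint32_t gid;
        uint32_t pid;
        uint32_t padding;
    };
)
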
I totally got it, thank you very much. So the daemon performs normal
file syscalls against the host kernel, right?
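
(A rough sketch of what one such handler could look like, modeled on
libfuse's low-level passthrough examples; lo_fd() is a hypothetical
helper returning a host-side fd for the FUSE inode, and this is not
the actual daemon's code:

    #define FUSE_USE_VERSION 31

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <fuse_lowlevel.h>

    /* Hypothetical helper: host-side fd tracking this FUSE inode. */
    extern int lo_fd(fuse_ino_t ino);

    /* FUSE_OPEN arrives over the virtio transport; the daemon serves
     * it with an ordinary open() against the host kernel. */
    static void lo_open(fuse_req_t req, fuse_ino_t ino,
                        struct fuse_file_info *fi)
    {
        char path[64];
        snprintf(path, sizeof(path), "/proc/self/fd/%d", lo_fd(ino));

        int fd = open(path, fi->flags & ~O_NOFOLLOW);
        if (fd == -1) {
            fuse_reply_err(req, errno);
            return;
        }
        fi->fh = fd;               /* later reads/writes use this fd */
        fuse_reply_open(req, fi);
    }
)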

It seems similar to passing guest syscalls straight through to the host
kernel. Is there any security problem?

>> 3. Did you test the performance compared to 9P?
>
> Only a little; at the moment our code is full of debug and we're
> just trying to get it to hang together to run benchmarks solidly.
> It's looking promising though; there's a couple of things we need
> to fix but it's getting there.
>
Got it. I hope to see the data. But it seems the procedure is similar to
9pfs: in the guest, you still need to implement a new filesystem. So is
the reason the new one is faster than 9p that the metadata commands are
zero-copy, am I right?
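
(For reference, part of the zero-copy story is visible in the ring
itself: a vhost-user daemon maps the guest's memory, and each virtqueue
descriptor, per <linux/virtio_ring.h>, is just a guest address plus a
length, so the daemon can read a FUSE request in place rather than
having it copied through a socket:

    #include <stdint.h>

    /* Virtqueue descriptor layout (after <linux/virtio_ring.h>). The
     * daemon resolves addr inside the guest memory it has mapped and
     * reads the request buffer directly. */
    struct vring_desc {
        uint64_t addr;    /* guest-physical address of the buffer    */
        uint32_t len;     /* buffer length in bytes                  */
        uint16_t flags;   /* VRING_DESC_F_NEXT, VRING_DESC_F_WRITE...*/
        uint16_t next;    /* next descriptor index when chained      */
    };
)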

Thanks & Regards
Qixuan.


