[kata-dev] [Announce] virtio-fs released with Kata Containers support

jiangyiwen jiangyiwen at huawei.com
Thu Dec 27 02:57:21 UTC 2018


On 2018/12/27 3:05, Vivek Goyal wrote:
> On Wed, Dec 26, 2018 at 10:32:06AM +0800, jiangyiwen wrote:
>> On 2018/12/11 3:25, Stefan Hajnoczi wrote:
>>> Dear Kata Containers Community,
>>> I'm delighted to announce the first release of virtio-fs, a new shared
>>> file system for virtual machines that is designed for container use
>>> cases, including shared volumes.
>>>
>>> Unlike virtio-9p and NFS over AF_VSOCK, virtio-fs aims to take advantage
>>> of the co-location between the virtual machine and the hypervisor in
>>> order to achieve local file system semantics and improve performance.
>>>
>>> For example, it can use Linux Direct Access (DAX) to access file
>>> contents directly from the host page cache.  This reduces communication
>>> with the file server and avoids duplicating data into each sandbox VM.
>>> It also means that mmap MAP_SHARED on a shared volume is coherent
>>> between sandbox VMs.
>>>
>>> The Linux kernel code (including performance numbers) has been posted
>>> here:
>>> https://marc.info/?l=linux-fsdevel&m=154446243324255&w=2
>>>
>>> Kata Containers integration is already available so you can benchmark
>>> and test virtio-fs.  The project is under active development and we
>>> still expect to make significant changes based on feedback and
>>> collaboration.
>>>
>>> We hope virtio-fs is interesting as a next step in overcoming
>>> virtio-9p's performance and limitations.  Let us know how it performs!
>>>
>>> You can read more about virtio-fs here:
>>> https://virtio-fs.gitlab.io/
>>>
>>> The Kata HowTo is here:
>>> https://virtio-fs.gitlab.io/howto-kata.html
>>>
>>> The Kata runtime and agent changes are fairly straightforward and
>>> comparable to virtio-9p.  There are several other code changes due to
>>> using a Fedora initramfs, systemd, and modular kernel.  These are not
>>> essential to virtio-fs but are simply how I preferred to develop and
>>> test.
>>>
>>> The FAQ on the virtio-fs website explains the main technical features.
>>> Please let me know if you have any questions or need help getting it
>>> running!  I'm also on #kata-dev IRC if you need a hand.
>>>
>>> Stefan
>>>
>>>
>>
>> Hi Stefan,
>>
>> It is amazing to see such great work. I set up a virtio-fs environment
>> with QEMU, running virtio-fs in "cache=none,dax" mode, and I found some
>> problems, as follows:
>>
>> 1. Writing a file with direct I/O is not supported at the moment. When I
>> filter out the O_DIRECT flag in the libfuse daemon while opening the file,
>> it succeeds. So I think it may be related to direct I/O alignment and
>> mmap(); 9p likewise filters out the O_DIRECT flag in QEMU.
> 
> Can you give some more details? What's the error you face, and what
> do you mean by "I filter O_DIRECT flag in libfuse-daemon"?
> 

What's the error I face?

I ran an fio test with the following command:
fio -filename=/mnt/virtio_fs/file -direct=1 -rw=write -bs=1M -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120
and it returned the following messages:
---
4K0R0RD1: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=1
fio-2.13
Starting 1 process
fio: io_u error on file /mnt/virtio_fs/file: Invalid argument: write offset=0, buflen=1048576
fio: first direct IO errored. File system may not support direct IO, or iomem_align= is bad. Try setting direct=0.
fio: pid=7073, err=22/file:io_u.c:1707, func=io_u error, error=Invalid argument

4K0R0RD1: (groupid=0, jobs=1): err=22 (file:io_u.c:1707, func=io_u error, error=Invalid argument): pid=7073: Thu Dec 27 10:44:47 2018
  cpu          : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=14
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=50.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=1/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
---
Then I wrote a small tool that opens a file with O_DIRECT and writes to it;
the write fails with errno = 22 (EINVAL). The libfuse daemon printed the
following errors (a sketch of the tool follows the log below):
---
unique: 30, opcode: WRITE (16), nodeid: 140319467243856, insize: 131152, pid: 3157
lo_write(ino=140319467243856, size=131072, off=0)
   unique: 30, error: -22 (Invalid argument), outsize: 16
virtio_send_msg: elem 0: with 2 in desc of length 24
fv_queue_thread: Waiting for Queue 2 event
fv_queue_thread: Got queue event on Queue 2
fv_queue_thread: Queue 2 gave evalue: 1 available: in: 16 out: 64
fv_queue_thread: elem 0: with 2 out desc of length 64
---
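
For reference, here is a minimal sketch of such a test tool (an illustrative
reconstruction, not my exact code; the path and the 128 KiB size match the
log above, and the aligned buffer is an assumption, since O_DIRECT imposes
alignment requirements):
---
/* Minimal O_DIRECT write test (illustrative reconstruction).
 * O_DIRECT requires the user buffer, file offset, and length to be
 * suitably aligned, so the buffer comes from posix_memalign(). */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd;

	/* 4096-byte alignment covers common page and block sizes. */
	if (posix_memalign(&buf, 4096, 131072)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0xab, 131072);

	fd = open("/mnt/virtio_fs/file", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd == -1) {
		perror("open");
		return 1;
	}

	if (write(fd, buf, 131072) == -1)
		printf("write failed, errno = %d (%s)\n",
		       errno, strerror(errno));

	close(fd);
	free(buf);
	return 0;
}
---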

What do I mean by "I filter O_DIRECT flag in libfuse-daemon"?

I modified the passthrough_ll.c code to ignore the O_DIRECT flag when
opening the file, and then it succeeds. The change is as follows:
diff --git a/example/passthrough_ll.c b/example/passthrough_ll.c
index 3e37dbd..9edb5bb 100644
--- a/example/passthrough_ll.c
+++ b/example/passthrough_ll.c
@@ -1234,7 +1234,7 @@ static void lo_open(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
                fi->flags &= ~O_APPEND;

        sprintf(buf, "/proc/self/fd/%i", lo_fd(req, ino));
-       fd = open(buf, fi->flags & ~O_NOFOLLOW);
+       fd = open(buf, fi->flags & ~(O_NOFOLLOW | O_DIRECT));
        if (fd == -1)
                return (void) fuse_reply_err(req, errno);

>>
>> 2. The cache size can't be increased at runtime. What I mean is that the
>> memory layout is "Data Cache -> Metadata Cache -> Journal Cache", so once
>> the data cache size changes, the metadata cache and journal cache offsets
>> change as well, and that memory data would need to be moved. I don't know
>> if there are any plans to support dynamic growth; I expect users will want
>> to grow the cache while the VM is running.
> 
> I think one should be able to increase the cache size. Right now that
> feature is not there, but I can't think why it can't be added. For example,
> for the data cache, we just need some kind of notification to the virtio-fs
> driver, and it will allocate more free ranges internally and start using
> them from the next allocation onwards.
> 

Great, thanks.
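
To make sure I understand the idea, here is a rough sketch of a driver-side
free-range pool that could simply absorb a newly announced region (purely
illustrative; none of these names come from the actual virtio-fs code):
---
#include <stdio.h>
#include <stdlib.h>

struct free_range {
	size_t start;             /* offset into the DAX cache window */
	size_t len;
	struct free_range *next;
};

static struct free_range *pool;   /* head of the free list */

/* The device notified us that the window grew by 'len' bytes at
 * 'start': just push the new region onto the free list; existing
 * mappings and offsets are untouched, so no data has to be moved. */
static void pool_grow(size_t start, size_t len)
{
	struct free_range *r = malloc(sizeof(*r));

	r->start = start;
	r->len = len;
	r->next = pool;
	pool = r;
}

int main(void)
{
	pool_grow(0, 1 << 20);            /* initial 1 MiB window */
	pool_grow(1 << 20, 1 << 20);      /* hot-added second 1 MiB */

	for (struct free_range *r = pool; r; r = r->next)
		printf("free range: start=%zu len=%zu\n", r->start, r->len);
	return 0;
}
---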

>>
>> 3. Why is the real mmap operation executed in the QEMU process instead of
>> the libfuse daemon? I hope the data plane does not need to pass through
>> QEMU again.
> 
> If the libfuse daemon does mmap() in its own address space, then how will
> the guest kernel see those mappings? So, IIUC, qemu needs to call mmap().
> 

Oh, I see. Can we use hugepages or /dev/shm?
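
As a toy illustration of why I ask (not virtio-fs code; it assumes
memfd_create() is available, and a /dev/shm or hugetlbfs file would behave
the same way, error handling omitted): memory mapped from a shared fd is
visible to any process that maps the same fd.
---
/* Toy demonstration: a mapping backed by a shared fd is visible to
 * another process that maps the same fd. A private mmap() in one
 * process's own address space would not be. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = memfd_create("cache", 0);   /* anonymous shared file */

	ftruncate(fd, 4096);

	if (fork() == 0) {                   /* child: the "writer" */
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		strcpy(p, "written by the other process");
		_exit(0);
	}

	sleep(1);                            /* crude synchronization */
	char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	printf("%s\n", p);                   /* sees the child's write */
	return 0;
}
---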

>>
>> 4. When I use fio with psync, write/read is only about 20% better than 9p.
>> I think psync is used more often than mmap, so this use case may need to be
>> improved.
> 
> Can you send me your fio job? I want to try it out.
> 

fio -filename=/mnt/virtio_fs/file -rw=read -bs=4k -size=6G -iodepth=1 -ioengine=psync -numjobs=1 -group_reporting -name=xxx -time_based -runtime=120

Thanks,
Yiwen.

> Thanks
> Vivek




