[kata-dev] Some question about virtiofs cache and dax

Dr. David Alan Gilbert dgilbert at redhat.com
Tue May 25 08:48:25 UTC 2021


* 李瑞友 (liruiyou at huayun.com) wrote:
> Hi, guys
> 
> I have some doubts about virtiofs cache and DAX. In the past two days, I have asked a few questions on WeChat and Slack: https://katacontainers.slack.com/archives/C86U7NZND/p1621580960033100
> and some experts have answered some of them, which was very useful. But I still hope to get more detailed knowledge and information. @fidencio suggested that I post the questions to the mailing list, so here they are.
> 
> Question 1
> On the official website of virtio-fs, in this article https://virtio-fs.gitlab.io/howto-qemu.html, it seems that $TESTDIR can be set to an existing directory on the host. But if I set the source parameter in the /usr/share/defaults/kata-containers/configuration.toml configuration file and specify a directory like /opt/kata-instance on the host:
> # Format example:
> # ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
> #
> # see `virtiofsd -h` for possible options.
> virtio_fs_extra_args = ["-o","--thread-pool-size=1","-o","/opt/kata-instance"]

That doesn't look right; not everything is a -o option so I think you
want something more like:
  ["--thread-pool-size=1", "-o", "source=/opt/kata-instance"]

(Although I thought kata wired up the source path for you; I don't do
much kata; more virtiofsd)
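
If you do still want to pass extra options, the stanza would look
something along these lines (just a sketch; the pool size is an example
value, and since kata normally fills in the source path itself you can
usually leave that part out entirely):

  virtio_fs_extra_args = ["--thread-pool-size=1"]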

> kubelet will report an error:
>    Warning FailedCreatePodSandBox 1s (x14 over 15s) kubelet, k8s05 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to launch qemu: exit status 1, error messages from qemu log: qemu-system-x86_64: cannot create PID file: Cannot open pid file: No such file or directory
> 
> Question 2
> I saw a diagram saying that DAX uses an nvdimm device emulated by qemu. It seems to be one device per VM, mapped to the host filesystem buffer cache. For example, if there are 100 kata VMs on a single host and I set virtio_fs_cache_size=1024M, will the host use up to 1024M of memory, or 1024M*100? Also, is this shmem writable or read-only? Is it only for kata pod images, or for all data reads and writes, including Kubernetes persistent volumes?
> From the official qemu documentation, I found that nvdimm is used like this:
> -machine pc,nvdimm
> -m $RAM_SIZE,slots=$N,maxmem=$MAX_SIZE
> -object memory-backend-file,id=mem1,share=on,mem-path=$PATH,size=$NVDIMM_SIZE
> -device nvdimm,id=nvdimm1,memdev=mem1
> 
> When a kata instance is created, I can see that the qemu process includes a memory-backend-file parameter:
> -object memory-backend-file,id=dimm1,size=3072M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1
> Is this cache dedicated to virtiofs? Where does dimm1 come from? It has nothing to do with nvdimm, right?

The DAX cache isn't like that; it's not really an nvdimm device, it's
just an area of virtual memory that files get mapped into and exposed to
the guest via a PCI BAR.  That doesn't really use up host memory, so
even with 100 guests, each with a 1G DAX cache, you shouldn't find the
host needing 100G of RAM.

You only need to make sure the 'cache-size=' is set on the -device
vhost-user-fs-pci in qemu; you don't need any extra DIMMs for that.
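
On the qemu command line that ends up looking roughly like this (the
socket path and tag here are just placeholders, and it assumes a qemu
built with the virtio-fs DAX support):

  -chardev socket,id=char0,path=/tmp/vhostqemu
  -device vhost-user-fs-pci,chardev=char0,tag=myfs,cache-size=1G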

> Question 3
> 
> Now my understanding is that kata has two caches:
> 3.1 Data transmission between the guest and virtiofsd requires a cache. The corresponding qemu parameter is -object memory-backend-file,id=dimm1,size=XXX. I guess this cache may be the memory space of the emulated PCI device? It seems unlikely that it is the vring queue used by the vhost-user protocol. Sorry, I'm not very proficient in qemu.

The memory-backend-file, dimm stuff is something separate; that is just
a trick to get qemu to allocate memory using a file that can be shared
between qemu and virtiofsd.  (Actually on newer qemus there are
easier ways; -object memory-backend-memfd in particular.)
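
For example, something like this (the size is a placeholder; it needs
to match the guest RAM size):

  -object memory-backend-memfd,id=mem,size=4G,share=on
  -numa node,memdev=mem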

> There is another question: is the virtiofs cache allocated separately for each VM? If a single host starts many VMs, won't the memory consumption be very large?
> 
> 
> 3.2 DAX, whose corresponding parameter is virtio_fs_cache_size; there is no cache policy control for it in the kata configuration file.
> # Cache mode: virtio_fs_cache
> That setting controls the virtiofs cache, not the DAX window, right?

That I'll leave to a kata person.

Dave

> Sorry, I asked a lot of questions, it looks a bit scary :)
> 
> 
> 
> ryo from Shanghai, 2021-05-21
> 
> 
> 
> 
> 

> _______________________________________________
> kata-dev mailing list
> kata-dev at lists.katacontainers.io
> http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev

-- 
Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK



