Some questions about virtiofs cache and DAX
Hi guys,

I have some doubts about virtiofs cache and DAX. In the past two days I have asked a few questions on WeChat and Slack (https://katacontainers.slack.com/archives/C86U7NZND/p1621580960033100), and some experts have answered some of them, which was very useful. But I still hope to get more detailed knowledge and information. @fidencio suggested that I post the question to the mailing list, so here it is.

Question 1
On the official website of virtio-fs, in this article https://virtio-fs.gitlab.io/howto-qemu.html, it seems that $TESTDIR can be set to an existing directory on the host. But if I set the source parameter in the /usr/share/defaults/kata-containers/configuration.toml configuration file and specify a directory like /opt/kata-instance on the host:

    # Format example:
    #   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
    #
    # see `virtiofsd -h` for possible options.
    virtio_fs_extra_args = ["-o","--thread-pool-size=1","-o","/opt/kata-instance"]

kubelet will report this error:

    Warning FailedCreatePodSandBox 1s (x14 over 15s) kubelet, k8s05 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to launch qemu: exit status 1, error messages from qemu log: qemu-system-x86_64: cannot create PID file: Cannot open pid file: No such file or directory

Question 2
I saw a picture that said DAX uses the nvdimm device emulated by qemu. It seems to be one device per VM, mapped to the host filesystem buffer cache. For example, suppose there are 100 kata VMs on a single host and I set virtio_fs_cache_size=1024M. Will the host then use up to 1024M of memory, or 1024M*100? Also, is this shmem writable or read-only? Is it only for kata pod images, or for all data reads and writes, including kubernetes persistent volumes?

From the official website of qemu, I found that nvdimm is used like this:

    -machine pc,nvdimm
    -m $RAM_SIZE,slots=$N,maxmem=$MAX_SIZE
    -object memory-backend-file,id=mem1,share=on,mem-path=$PATH,size=$NVDIMM_SIZE
    -device nvdimm,id=nvdimm1,memdev=mem1

When the kata instance is created, the qemu process can be seen to contain a memory-backend-file parameter:

    -object memory-backend-file,id=dimm1,size=3072M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1

Is this cache dedicated to virtiofs? Where does dimm1 come from? It has nothing to do with nvdimm, right?

Question 3
Now my understanding is that kata has two caches.

3.1 Data transmission between the guest and virtiofsd requires a cache. The corresponding qemu parameter is -object memory-backend-file,id=dimm1,size=XXX. I guess the function of this cache may be the memory space of the emulated PCI device? It seems unlikely that it is the vring queue size used by the vhost-user protocol. Sorry, I'm not very proficient in qemu.

There is another question: is the virtiofs cache allocated separately for each VM? If a single host starts many VMs, won't the memory consumption be very large?

3.2 DAX, whose corresponding parameter is virtio_fs_cache_size; there is no cache policy control in the kata configuration file (# Cache mode: virtio_fs_cache). It controls the virtiofs cache, not the DAX windows, right?

Sorry for asking a lot of questions, it looks a bit scary :)

ryo from shanghai
2021-05-21
* 李瑞友 (liruiyou@huayun.com) wrote:
Question 1
On the official website of virtio-fs, in this article https://virtio-fs.gitlab.io/howto-qemu.html, it seems that $TESTDIR can be set to an existing directory on the host. But if I set the source parameter in the /usr/share/defaults/kata-containers/configuration.toml configuration file and specify a directory like /opt/kata-instance on the host:

    # Format example:
    #   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
    #
    # see `virtiofsd -h` for possible options.
    virtio_fs_extra_args = ["-o","--thread-pool-size=1","-o","/opt/kata-instance"]
That doesn't look right; not everything is a -o option, so I think you want something more like: ["--thread-pool-size=1", "-o", "source=/opt/kata-instance"]. (Although I thought kata wired up the source path for you; I don't do much kata, more virtiofsd.)
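For illustration only, here is a hedged sketch of how that correction might look in configuration.toml (typically under the [hypervisor.qemu] section; exact options depend on your virtiofsd version, and Kata normally supplies the source path itself):

    # sketch, not authoritative: each CLI token is its own array element,
    # and only key=value pairs belong after "-o"
    virtio_fs_extra_args = ["--thread-pool-size=1"]
    # Kata passes the shared directory (source=...) to virtiofsd on its own,
    # so adding "-o", "source=/opt/kata-instance" here is normally unnecessary
    # and can conflict with what the runtime sets up.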
kubelet will report this error:

    Warning FailedCreatePodSandBox 1s (x14 over 15s) kubelet, k8s05 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to launch qemu: exit status 1, error messages from qemu log: qemu-system-x86_64: cannot create PID file: Cannot open pid file: No such file or directory
Question 2
I saw a picture that said DAX uses the nvdimm device emulated by qemu. It seems to be one device per VM, mapped to the host filesystem buffer cache. For example, if there are 100 kata VMs on a single host and I set virtio_fs_cache_size=1024M, will the host use up to 1024M of memory, or 1024M*100? Also, can this shmem be writable or read-only? Is it only for kata pod images or all data reads and writes, including kubernetes persistent volumes? From the official website of qemu, I found that nvdimm is used like this:

    -machine pc,nvdimm
    -m $RAM_SIZE,slots=$N,maxmem=$MAX_SIZE
    -object memory-backend-file,id=mem1,share=on,mem-path=$PATH,size=$NVDIMM_SIZE
    -device nvdimm,id=nvdimm1,memdev=mem1
When the kata instance is created, the qemu process can be seen to contain a memory-backend-file parameter:

    -object memory-backend-file,id=dimm1,size=3072M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1

Is this cache dedicated to virtiofs? Where does dimm1 come from? It has nothing to do with nvdimm, right?
The DAX cache isn't like that; it's not really an nvdimm device, it's just an area of virtual memory that files get mapped into and exposed to the guest via a PCI BAR. That doesn't really use up host memory, so even with 100 guests, each with a 1G DAX cache, you shouldn't find the host needing 100G of RAM. You only need to make sure 'cache-size=' is set on the -device vhost-user-fs-pci in qemu; you don't need any extra DIMMs for that.
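For reference, the relevant qemu arguments look roughly like this (adapted from the virtio-fs howto linked above; treat it as a sketch, since property names can vary between qemu versions):

    # vhost-user-fs device: the DAX window is the cache-size= property,
    # not a separate DIMM
    -chardev socket,id=char0,path=/tmp/vhostqemu
    -device vhost-user-fs-pci,chardev=char0,tag=myfs,cache-size=1G
    # guest RAM must be shareable so virtiofsd can access it (see below)
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on
    -numa node,memdev=mem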
Question 3
Now my understanding is that kata has two caches.

3.1 Data transmission between guest and virtiofsd requires a cache. The corresponding qemu parameter is -object memory-backend-file,id=dimm1,size=XXX. I guess the function of this cache may be the memory space of the emulated PCI device? It seems unlikely that it is the vring queue size used by the vhost-user protocol. Sorry, I'm not very proficient in qemu.
The memory-backend-file / dimm stuff is something separate; that is just a trick to get qemu to allocate guest memory using a file that can be shared between qemu and virtiofsd. (Actually, on newer qemu versions there are easier ways; -object memory-backend-memfd in particular.)
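A sketch of the memfd variant Dave mentions, assuming a qemu new enough to provide memory-backend-memfd:

    # shareable guest RAM backed by a memfd instead of a file under /dev/shm
    -object memory-backend-memfd,id=mem,size=4G,share=on
    -numa node,memdev=mem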
There is another question. Is the virtiofs cache specially allocated for each VM? If a single host starts many VMs, won't the memory consumption be very large?
3.2 dax, the corresponding parameter is virtio_fs_cache_size; there is no cache policy control in the kata configuration file (# Cache mode: virtio_fs_cache). It controls the virtiofs cache, not DAX windows, right?
That I'll leave to a kata person.

Dave
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2021/5/25 16:18, 李瑞友 wrote:

Question 1
On the official website of virtio-fs, in this article https://virtio-fs.gitlab.io/howto-qemu.html, it seems that $TESTDIR can be set to an existing directory on the host. But if I set the source parameter in the /usr/share/defaults/kata-containers/configuration.toml configuration file and specify a directory like /opt/kata-instance on the host:

    # Format example:
    #   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
    #
    # see `virtiofsd -h` for possible options.
    virtio_fs_extra_args = ["-o","--thread-pool-size=1","-o","/opt/kata-instance"]

kubelet will report this error:

    Warning FailedCreatePodSandBox 1s (x14 over 15s) kubelet, k8s05 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to launch qemu: exit status 1, error messages from qemu log: qemu-system-x86_64: cannot create PID file: Cannot open pid file: No such file or directory

The $TESTDIR in the virtio-fs example is there to show users how to test virtio-fs directly. With Kata Containers, users do not have to set it directly; in fact the Kata runtime supplies the `source=xxx` parameter itself, and if users specify it directly in the config file, virtiofsd might fail to start, which results in the error you see above.

Question 2
I saw a picture that said DAX uses the nvdimm device emulated by qemu. It seems to be one device per VM, mapped to the host filesystem buffer cache. For example, if there are 100 kata VMs on a single host and I set virtio_fs_cache_size=1024M, will the host use up to 1024M of memory, or 1024M*100?

1024*100. There is no sharing between DAX windows unless they map the same file on the host.

Also, can this shmem be writable or read-only?

It is writable for the guest.

Is it only for kata pod images or all data reads and writes, including kubernetes persistent volumes?

Anything that goes through virtio-fs. So it is for images and volumes.
From the official website of qemu, I found that nvdimm is used like this
    -machine pc,nvdimm
    -m $RAM_SIZE,slots=$N,maxmem=$MAX_SIZE
    -object memory-backend-file,id=mem1,share=on,mem-path=$PATH,size=$NVDIMM_SIZE
    -device nvdimm,id=nvdimm1,memdev=mem1
When the kata instance is created, the qemu process can be seen to contain a memory-backend-file parameter:

    -object memory-backend-file,id=dimm1,size=3072M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1

Is this cache dedicated to virtiofs? Where does dimm1 come from? It has nothing to do with nvdimm, right?
Yup. It is *NOT* an nvdimm device, just a normal file-backed memory DIMM for the guest to use. And the DIMM is not dedicated to virtiofs either; it is the guest's memory DIMM.
Question 3
Now my understanding is that kata has two caches
3.1 Data transmission between the guest and virtiofsd requires a cache. The corresponding qemu parameter is -object memory-backend-file,id=dimm1,size=XXX. I guess the function of this cache may be the memory space of the emulated PCI device? It seems unlikely that it is the vring queue size used by the vhost-user protocol. Sorry, I'm not very proficient in qemu.
Nope, it is the guest's memory. Not *just* vrings.
There is another question. Is the virtiofs cache specially allocated for each VM? If a single host starts many VMs, won't the memory consumption be very large?

It could be.
3.2 DAX, whose corresponding parameter is virtio_fs_cache_size; there is no cache policy control in the kata configuration file.
# Cache mode: virtio_fs_cache
It controls the virtiofs cache, not DAX windows, right?
DAX is just a page-mapping mechanism; it doesn't have any cache policy. From the virtiofs point of view it is the guest's page cache, and the file system cache policy is controlled by the guest virtiofs file system.
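To make the two knobs concrete, a hedged configuration.toml sketch (the option names are the ones quoted above; accepted cache-mode values and units may differ between Kata releases, so check the comments in your own configuration.toml):

    # virtio_fs_cache selects the virtiofsd cache mode (e.g. "none", "auto", "always"),
    # i.e. how aggressively the guest caches file data/metadata
    virtio_fs_cache = "auto"
    # virtio_fs_cache_size sizes the per-VM DAX window (MiB in recent Kata configs);
    # it is a mapping area, not a separate allocation of host RAM
    virtio_fs_cache_size = 1024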
Hope I answered your questions ;)

Cheers, Tao
On 25 May 2021, at 11:05, Peng Tao via kata-dev <kata-dev@lists.katacontainers.io> wrote:
On 2021/5/25 16:18, 李瑞友 wrote:
Question 2
I saw a picture that said DAX uses the nvdimm device emulated by qemu. It seems to be one device per VM, mapped to the host filesystem buffer cache. For example, if there are 100 kata VMs on a single host and I set virtio_fs_cache_size=1024M, will the host use up to 1024M of memory, or 1024M*100?

1024*100. There is no sharing between DAX windows unless they map the same file on the host.
Actually, on this one, I think you would only use 1024*100 of _virtual_ memory. But that really does not matter, virtual memory is relatively cheap. Since the DAX window is used to map _host_ pages (specifically the buffer cache), you will use approximately 0 bytes of additional physical memory. The actual physical memory being consumed is what is used by the host buffer cache, i.e. what shows under the buf/cache column when you run `free`. This is documented here: https://virtio-fs.gitlab.io/howto-qemu.html, I quote:
Note that the size of the 'cache' used doesn't increase the host RAM used directly, since it's just a mapping area for files.
(Memory usage is not strictly zero, because virtual memory mappings do require a bit of memory for memory management, but that's relatively small.)

The whole idea behind DAX is to avoid having a _second_ buffer cache in the guest. Normally, for regular local files, the guest maintains its own file buffers so it has a local file cache for just the files in that guest. That's what you see if you run `free` in the guest.

Now, for files shared with virtiofs, since they reside on the host, you would have one cache copy on the host, then virtiofs would send the data to the guest, giving you a second cache copy in the guest. That is somewhat inefficient, since you would have to copy every single cache buffer used by the guest. This is the scenario where you would use N*100 memory, where N is whatever buffer cache usage exists in the guest.

That is what DAX solves. What it does is simply _map_ the host buffer into the guest, so the guest uses the _same_ physical memory as the host, even if it uses a different virtual address. To do that, you need some reserved guest physical address space, which is what the DAX window gives you. The PCI BAR that Dave referred to stands for "Base Address Register", and it's a way to specify that a device has memory somewhere that does not belong to the guest's own RAM.
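One way to convince yourself of this on a host (a sketch; the process name and sizes are illustrative) is to compare the qemu process's virtual size, which includes the DAX window mapping, with its resident set size:

    # VSZ counts the mapped (virtual) DAX window; RSS is what is actually resident
    ps -o pid,vsz,rss,comm -p "$(pgrep -f qemu-system-x86_64 | head -n1)"
    # host-wide, the pages backing DAX-mapped files show up under buff/cache
    free -m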
Also, can this shmem be writable or read-only?

It is writable for the guest.
Think of that cache as the same kind of cache the host uses when you write a file or map it with mmap(); it's really not very different. So whether you can write to it is more a matter of whether you have permission to write the file, and whether you opened it read-write rather than read-only.
I wonder if the original question does not arise from the existence of additional caching mechanisms in virtiofs for _metadata_. That may be what the original poster was alluding to?

In short: without DAX, memory usage could be very large because each guest will have its own file buffers. For example, if you start a fedora container and do "dnf install something", you will see the memory usage climb by ~300M just because of the many files that were read/written during that operation. Without DAX, that's ~300M that needs to be allocated by the guest as well. With DAX, it will leverage the ~300M already allocated in the host for these file buffers.

Now, this obviously raises problems for shared files. In general, the filesystems of your containers are isolated from one another, so each container owns its own files and can scribble all over the place at will. In case this condition is not true, i.e. you share files with virtiofs across containers, then you need to change the "caching mode" of virtiofs to account for that. This may be the "second level of caching" that ryo was referring to… In any case, that second level is only intended to allow consistency of metadata across multiple virtual machines.

Hoping this clarifies?
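The caching mode Christophe mentions maps to virtiofsd's cache option; a hedged example of launching the daemon with it, following the virtio-fs howto linked earlier (the shared directory is illustrative, and flag spellings differ between virtiofsd versions):

    # cache=none|auto|always controls how much the guest may cache;
    # "auto" or "none" are the more conservative choices when files are
    # shared across multiple VMs
    ./virtiofsd --socket-path=/tmp/vhostqemu -o source=/srv/shared -o cache=auto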
On 2021/5/25 17:58, Christophe de Dinechin wrote:
I wonder if the original question does not arise from the existence of additional caching mechanisms in virtiofs for _metadata_. That may be what the original poster was alluding to?
DAX pages are treated as page cache pages in the guest. Their flushing policy is the same as for file system *data*.
In short: without DAX, memory usage could be very large because each guest will have its own file buffers. For example, if you start a fedora container and do "dnf install something", you will see the memory usage climb by ~300M just because of the many files that were read/written during that operation. Without DAX, that's ~300M that needs to be allocated by the guest as well. With DAX, it will leverage the ~300M already allocated in the host for these file buffers.
I agree DAX can help save guest memory, with the host page cache backing it, especially if the files are shared read-only among different guests.
Now, this obviously raises problems for shared files. In general, the filesystems of your containers are isolated from one another, so each container owns its own files and can scribble all over the place at will. In case this condition is not true, i.e. you share files with virtiofs across containers, then you need to change the "caching mode" of virtiofs to account for that. This may be the "second level of caching" that ryo was referring to… In any case, that second level is only intended to allow consistency of metadata across multiple virtual machines.
It seems ryo is mixing up DAX with data transfer between the guest and virtiofsd. When DAX is used, the DAX window is the channel for guest/virtiofsd data transfer, so there is no second level of cache involved.
Cheers, Tao
Participants (4): Christophe de Dinechin, Dr. David Alan Gilbert, Peng Tao, 李瑞友